# One-factorial Analysis of Variance (One-way ANOVA)

The one-factor analysis of variance tests whether the means of more than two groups differ. One-way ANOVA is thus the extension of the independent t-test to more than two groups or samples.

## One Factor ANOVA Example

A classic use case for analysis of variance is in therapy research. For example, you might be interested in whether different therapies result in different therapeutic successes after a herniated disc. For this you could test three different therapies.

For example, with one group you could simply discuss which movements are good and which are bad for the disc, a second group you could treat with medication, and with the last group you could do stretching and strength training.

At the end of the therapy, you could then measure the success and use an analysis of variance to calculate whether there is a significant difference between the three types of therapy. Of course, the assumptions have to be fulfilled in order to calculate an ANOVA, more about this later.

### Medical example dataset

To perform an analysis of variance (ANOVA) in a medical context, you would typically have a dataset with multiple groups or treatments, and you would want to determine if there are significant differences between these groups. Here's a sample dataset of fictitious data that could be used for a medical ANOVA analysis.

Suppose you are studying the effectiveness of three different drugs (Drug A, Drug B, and Drug C) in reducing blood pressure. You randomly assign 90 patients to one of the three drug groups and measure their blood pressure after one month of treatment. The blood pressure measurements (in mmHg) for each patient are as follows:

*Load blood pressure data*

In this dataset, each drug group represents a separate treatment or condition, and the blood pressure measurements for each patient in that group are recorded.

To analyze this dataset using ANOVA, you would compare the means of the blood pressure measurements among the three drug groups to determine if there is a statistically significant difference.
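As a minimal sketch of such a comparison, the following Python snippet simulates three drug groups (the article's actual dataset is not reproduced here, so the group means and sample sizes are assumptions) and runs a one-way ANOVA with SciPy:

```python
# Hypothetical blood pressure data for three drug groups (30 patients each).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
drug_a = rng.normal(loc=130, scale=10, size=30)  # assumed mean 130 mmHg
drug_b = rng.normal(loc=125, scale=10, size=30)  # assumed mean 125 mmHg
drug_c = rng.normal(loc=120, scale=10, size=30)  # assumed mean 120 mmHg

# One-way ANOVA: tests H0 that all three group means are equal
f_stat, p_value = stats.f_oneway(drug_a, drug_b, drug_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

If the resulting p-value is below the chosen significance level (usually 0.05), at least two group means differ.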

## Hypotheses of the one-factor analysis of variance

We want to know if the groups of the independent variable have an influence on the dependent variable.

The question that can be answered with a one-factor analysis of variance is: Is there a difference in the population between the different groups of the independent variable with respect to the dependent variable?

In the example above, the groups of the independent variable are the different types of therapy and the dependent variable is the perception of pain after the respective therapy.

Why do we want to test whether there is a difference in the population? We actually want to make a statement about the population, but in most cases it is not possible to survey the whole population, so we can only draw a random sample.

The aim is to make a statement about the population based on this sample with the help of the analysis of variance.

Of course, we did not conduct the therapy experiment with all persons who have a herniated disc, but only with a random sample; nevertheless, we would like to generalize the result to the population.

The null hypothesis and the alternative hypothesis are:

Null hypothesis H_{0} | Alternative hypothesis H_{1}
---|---
There are no significant differences between the means of the individual groups. | At least two group means are significantly different from each other.

Therefore, the null hypothesis states that there is no difference, and the alternative hypothesis states that there is a difference.

## Assumptions of the one-factor analysis of variance

For a one-factor ANOVA to be calculated, the following conditions must be met:

##### 1. Level of scale

The dependent variable should be metrically scaled; the independent variable should be nominally scaled.

##### 2. Independence

The measurements should be independent, i.e. the measured value of one group should not be influenced by the measured value of another group.

##### 3. Homogeneity

The variances in each group should be approximately equal. This can be checked with the Levene test.

##### 4. Normal distribution

The data within the groups should be normally distributed.

What if the prerequisites are not met? If the dependent variable is not metrically scaled or the data are not normally distributed, the Kruskal-Wallis test can be used. If the samples are dependent, an analysis of variance with repeated measures must be used.
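The assumption checks described above can be sketched in Python with SciPy; the data here are simulated and the 0.05 cutoffs are the conventional choice, not a fixed rule:

```python
# Sketch: check normality and variance homogeneity, then pick the test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(50, 5, 30) for _ in range(3)]  # hypothetical groups

# 4. Normal distribution: Shapiro-Wilk test within each group
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)

# 3. Homogeneity of variances: Levene test across groups
homogeneous = stats.levene(*groups).pvalue > 0.05

if normal and homogeneous:
    result = stats.f_oneway(*groups)   # parametric one-way ANOVA
else:
    result = stats.kruskal(*groups)    # non-parametric Kruskal-Wallis test
print(result)
```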

## Calculate single-factor analysis of variance

To calculate an analysis of variance, the means of the individual groups and the overall mean must first be calculated. Then the different sums of squares (SS) can be calculated.

The mean squares can then be calculated from the sums of squares, and from these the F-value. The p-value can then be obtained from the F-value and the degrees of freedom using the F-distribution.
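The calculation steps just described can be carried out by hand; the following sketch mirrors them on a small hypothetical dataset and cross-checks the result against SciPy:

```python
# Manual one-way ANOVA: sums of squares -> mean squares -> F -> p.
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([8.0, 9.0, 10.0])]   # hypothetical data

all_values = np.concatenate(groups)
grand_mean = all_values.mean()
k = len(groups)                  # number of groups
n = all_values.size              # total sample size

# Sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Mean squares and F-value
ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
f_value = ms_between / ms_within

# p-value from the F-distribution with (k-1, n-k) degrees of freedom
p_value = stats.f.sf(f_value, k - 1, n - k)
print(f"F = {f_value:.2f}, p = {p_value:.4f}")
```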

Usually, however, the p-value is simply calculated using statistical software such as DATAtab, see below.

## Effect size for single factor ANOVA

In single-factor analysis of variance, the effect size can be calculated in different ways. The most common are *eta squared*, *partial eta squared*, and *Cohen's effect size f*.

#### Eta squared and partial Eta squared

Eta-squared η^{2} indicates the proportion of the total variance in the
dependent variable that can be explained by the independent variable.
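In formula form, this is the standard definition in terms of the sums of squares from the calculation above:

```latex
\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}
       = \frac{SS_{\text{between}}}{SS_{\text{between}} + SS_{\text{within}}}
```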

In the case of single-factor analysis of variance without repeated measures, Eta squared corresponds to partial Eta squared.

#### Effect size *f* according to Cohen

Once partial eta squared has been calculated, the effect size *f* according to Cohen is given by:
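Written out, this is the standard relation between Cohen's *f* and partial eta squared:

```latex
f = \sqrt{\frac{\eta^2_p}{1 - \eta^2_p}}
```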

Here, the classification of Cohen (1988) can be used for orientation:

f | Classification according to Cohen (1988)
---|---
0.1 | weak effect
0.25 | moderate effect
0.4 | strong effect

## Calculate single-factor analysis of variance with DATAtab

Calculate the example directly with DATAtab for free:

*Load data set*

If you want to calculate a one-factor analysis of variance with DATAtab, just click on the Statistics Calculator and then on the Hypothesis Tests tab.

If you now select a metric variable and a nominal variable with more than 2 values, an analysis of variance will be calculated automatically.

First you get the hypotheses and the descriptive statistics. Then you can graphically read the dispersion of the individual groups in a boxplot.

Finally, you get the Levene test of equality of variances. In this example, the Levene test yields a p-value of 0.184, which is greater than the significance level of 0.05. Thus, the null hypothesis of the Levene test is not rejected, indicating that the variances of the different groups are equal and that variance homogeneity can be assumed.

In the table "ANOVA" you can read the calculated p-value of the analysis of variance. If this is greater than the significance level, which is usually chosen to be 0.05, the null hypothesis is not rejected and it is assumed that there is no significant difference between the groups. In this example, the p-value is 0.072, which is above the significance level of 0.05, so the null hypothesis is not rejected and it is assumed that there is no difference in reaction time between the three groups.

If you don't know exactly how to interpret the results of your own data, you can also
just click on *Summary in Words*.

## Bonferroni post-hoc tests

Finally, you are given post-hoc tests, such as the Bonferroni post-hoc test.

If the p-value of the analysis of variance is less than 0.05, it can be assumed that at least two groups differ in their means. With the help of the Bonferroni post-hoc test, it can then be checked which of the groups differ.

Therefore, in this example, it makes no sense to calculate a post-hoc test because the p-value of the analysis of variance is greater than 0.05 and thus there is no significant difference between the groups.

If the p-value of the ANOVA were smaller than 0.05, you could simply look in the individual rows to see which p-value is smaller than 0.05. If one or more p-values is smaller than 0.05, it can be assumed that these groups differ significantly.
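A Bonferroni post-hoc test amounts to pairwise comparisons with adjusted p-values. The following sketch runs pairwise t-tests and applies the Bonferroni correction by hand; the group data and names are hypothetical:

```python
# Pairwise t-tests with Bonferroni correction (hypothetical therapy data).
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {"Therapy A": rng.normal(40, 8, 30),
          "Therapy B": rng.normal(35, 8, 30),
          "Therapy C": rng.normal(28, 8, 30)}

pairs = list(combinations(groups, 2))
for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    # Bonferroni: multiply each p-value by the number of comparisons
    p_adj = min(p * len(pairs), 1.0)
    print(f"{name1} vs {name2}: adjusted p = {p_adj:.4f}")
```

Pairs whose adjusted p-value is below 0.05 are the groups that differ significantly.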
