What test is used to compare two or more means?
Published on January 28, 2020 by Rebecca Bevans. Revised on July 6, 2022.

Statistical tests are used in hypothesis testing. They can be used to determine whether a predictor variable has a statistically significant relationship with an outcome variable, and to estimate the difference between two or more groups.

Statistical tests assume a null hypothesis of no relationship or no difference between groups. They then determine whether the observed data fall outside of the range of values predicted by the null hypothesis.

If you already know what types of variables you're dealing with, you can use the flowchart to choose the right statistical test for your data.

Statistical tests work by calculating a test statistic: a number that describes how much the relationship between variables in your test differs from the null hypothesis of no relationship. The test then calculates a p-value (probability value), which estimates how likely it is that you would see the difference described by the test statistic if the null hypothesis of no relationship were true.

If the value of the test statistic is more extreme than the statistic calculated from the null hypothesis, you can infer a statistically significant relationship between the predictor and outcome variables. If it is less extreme, you can infer no statistically significant relationship between them.

You can perform statistical tests on data that have been collected in a statistically valid manner: either through an experiment, or through observations made using probability sampling methods. For a statistical test to be valid, your sample size needs to be large enough to approximate the true distribution of the population being studied.

To determine which statistical test to use, you need to know whether your data meet certain assumptions and what types of variables you are dealing with.

Statistical tests make some common assumptions about the data they are testing: independence of observations (the measurements are not related to or influenced by one another), homogeneity of variance (the spread within each group is similar), and normality (the data follow a roughly normal distribution).
If your data do not meet the assumptions of normality or homogeneity of variance, you may be able to perform a nonparametric statistical test, which allows you to make comparisons without any assumptions about the data distribution. If your data do not meet the assumption of independence of observations, you may be able to use a test that accounts for structure in your data (repeated-measures tests or tests that include blocking variables).

Types of variables

The types of variables you have usually determine what type of statistical test you can use. Quantitative variables represent amounts of things (e.g. the number of trees in a forest). Types of quantitative variables include continuous variables, which can take any value, and discrete variables, which can only take whole-number values (e.g. counts).

Categorical variables represent groupings of things (e.g. the different tree species in a forest). Types of categorical variables include ordinal variables (data that can be ranked), nominal variables (data that name groups with no rank order), and binary variables (data with only two possible outcomes, e.g. yes/no).
Choose the test that fits the types of predictor and outcome variables you have collected (if you are doing an experiment, these are the independent and dependent variables). Consult the tables below to see which test best matches your variables.

Choosing a parametric test: regression, comparison, or correlation

Parametric tests usually have stricter requirements than nonparametric tests, and are able to make stronger inferences from the data. They can only be conducted with data that adhere to the common assumptions of statistical tests. The most common types of parametric test include regression tests, comparison tests, and correlation tests.

Regression tests

Regression tests look for cause-and-effect relationships. They can be used to estimate the effect of one or more continuous variables on another variable.
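As a minimal sketch of a regression test (with illustrative made-up data, not from this article), SciPy's `linregress` fits a simple linear regression and tests whether the slope differs from zero:

```python
# Minimal regression-test sketch with illustrative (made-up) data:
# estimate the effect of one continuous variable on another.
from scipy import stats

# Hypothetical predictor (hours studied) and outcome (exam score).
hours = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 64, 70, 74, 79, 83]

result = stats.linregress(hours, scores)

# The slope estimates the change in score per extra hour studied;
# the p-value tests the null hypothesis that the slope is zero.
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.5f}")
```

A small p-value here would indicate a statistically significant linear relationship between the predictor and the outcome.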
Comparison tests

Comparison tests look for differences among group means. They can be used to test the effect of a categorical variable on the mean value of some other characteristic. T-tests are used when comparing the means of precisely two groups (e.g. the average heights of men and women). ANOVA and MANOVA tests are used when comparing the means of more than two groups (e.g. the average heights of children, teenagers, and adults).
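Both comparison tests are available in SciPy. A hedged sketch, using made-up height data (the values are illustrative, not real measurements):

```python
# Hedged sketch of the two comparison tests above, on illustrative
# (made-up) height data in centimetres.
from scipy import stats

men = [175.0, 180.2, 178.5, 182.1, 176.4]
women = [162.3, 165.8, 160.9, 167.2, 163.5]

# t-test: compares the means of exactly two groups.
t_stat, p_two_groups = stats.ttest_ind(men, women)

children = [120.5, 118.2, 125.1, 122.8]
teenagers = [160.4, 165.2, 158.9, 163.7]
adults = [172.3, 169.8, 175.6, 171.2]

# One-way ANOVA: compares the means of more than two groups at once.
f_stat, p_three_groups = stats.f_oneway(children, teenagers, adults)

# Compare each p-value to the chosen alpha level (commonly 0.05).
print(p_two_groups < 0.05, p_three_groups < 0.05)
```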
Correlation tests

Correlation tests check whether variables are related without hypothesizing a cause-and-effect relationship. These can be used to test whether two variables you want to use in (for example) a multiple regression test are correlated with each other.
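A hedged sketch of a correlation test (with illustrative data): Pearson's r between two candidate predictors, as you might check before including both in a multiple regression.

```python
# Hedged sketch: Pearson correlation between two variables you might
# want to use together in a multiple regression (illustrative data).
from scipy import stats

height = [160, 165, 170, 175, 180, 185]
weight = [55, 60, 66, 70, 77, 82]

# r measures the strength of the linear relationship (-1 to 1);
# the p-value tests the null hypothesis of no correlation.
r, p_value = stats.pearsonr(height, weight)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```

A large |r| with a small p-value suggests the two predictors carry overlapping information.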
Choosing a nonparametric test

Nonparametric tests don't make as many assumptions about the data, and are useful when one or more of the common statistical assumptions are violated. However, the inferences they make aren't as strong as those of parametric tests.
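A hedged sketch of this workflow (illustrative data): check the normality assumption first, and if it looks violated, use a rank-based nonparametric test, here the Mann-Whitney U test in place of an independent-samples t-test.

```python
# Hedged sketch: check normality, then fall back to a nonparametric test.
from scipy import stats

# Hypothetical reaction times (seconds); the outlier in the treatment
# group makes the normality assumption doubtful.
control = [0.8, 0.9, 1.1, 1.0, 0.9, 1.2]
treatment = [1.6, 1.9, 2.1, 1.8, 2.0, 6.2]

# Shapiro-Wilk tests the null hypothesis that a sample is normal;
# a small p-value is evidence against normality.
_, p_normal = stats.shapiro(treatment)

# Mann-Whitney U compares ranks rather than means, so it does not
# assume normality (a common alternative to the two-sample t-test).
u_stat, p_value = stats.mannwhitneyu(control, treatment)
print(f"U = {u_stat}, p = {p_value:.4f}")
```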
Flowchart: choosing a statistical test

This flowchart helps you choose among parametric tests. For nonparametric alternatives, check the table above.

Frequently asked questions about statistical tests

What is statistical significance?
Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test. Significance is usually denoted by a p-value, or probability value. Statistical significance is arbitrary: it depends on the threshold, or alpha value, chosen by the researcher. The most common threshold is p < 0.05, which means that data as extreme as the observed data would occur less than 5% of the time under the null hypothesis. When the p-value falls below the chosen alpha value, we say the result of the test is statistically significant.

What is the difference between quantitative and categorical variables?
Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age). Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips). You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results.
What test is used to compare more than two means?
For a comparison of more than two group means, the one-way analysis of variance (ANOVA) is the appropriate method instead of the t-test. Since ANOVA is based on the same assumptions as the t-test, its focus is likewise on the locations of the distributions, as represented by their means.
What test is used to compare means?
The t-test is a common method for comparing the mean of one group to a fixed value, or the mean of one group to the mean of another. T-tests are very useful because they usually perform well in the face of minor to moderate departures from normality of the underlying group distributions.
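For the first case, comparing one group's mean to a fixed value, a hedged sketch with illustrative data (the lifetimes and the claimed mean are made up):

```python
# Hedged sketch: one-sample t-test comparing a group mean to a
# hypothesized value (illustrative data, not from the article).
from scipy import stats

# Hypothetical battery lifetimes (hours); the manufacturer claims 10.0.
lifetimes = [9.1, 8.8, 9.5, 9.0, 8.7, 9.3, 9.2, 8.9]

t_stat, p_value = stats.ttest_1samp(lifetimes, popmean=10.0)

# A small p-value suggests the true mean differs from 10.0 hours.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```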
Can I use ANOVA to compare two means?
Yes. A one-way ANOVA can be used to compare two means from two independent (unrelated) groups using the F-distribution. The null hypothesis for the test is that the two means are equal, so a significant result means that the two means are unequal.
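With exactly two groups, the one-way ANOVA and the (equal-variance) independent-samples t-test are in fact equivalent: the F statistic equals the square of the t statistic, and the p-values match. A hedged sketch with illustrative data:

```python
# Hedged sketch: with two groups, one-way ANOVA and the equal-variance
# t-test agree (F = t**2, identical p-value). Data is illustrative.
from scipy import stats

group_1 = [23.1, 25.4, 24.8, 22.9, 26.0]
group_2 = [27.5, 28.1, 26.9, 29.2, 27.8]

t_stat, p_t = stats.ttest_ind(group_1, group_2)  # equal_var=True default
f_stat, p_f = stats.f_oneway(group_1, group_2)

print(abs(f_stat - t_stat**2) < 1e-6, abs(p_t - p_f) < 1e-6)
```

This is why, for two groups, the t-test is usually preferred: it gives the same answer and additionally indicates the direction of the difference through the sign of t.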
How do you compare two different means?
The four major ways of comparing means from data that are assumed to be normally distributed are:
- Independent samples t-test
- One-sample t-test
- Paired samples t-test
- One-way analysis of variance (ANOVA)