Analysis of variance

Analysis of variance, or ANOVA, is a statistical method for comparing different models. Models are built with explanatory parameters that attempt to describe the variance of a given data set. The amounts of variance explained by two models are compared, and a value is calculated that measures how much better one model explains the data than the other. This value is called the F-value, named for R. A. Fisher, who developed the first ANOVA model. The larger the F-value, the better one model explains the variance relative to the other. To determine statistical significance, the F-value is compared to a specific Fisher (F) distribution chosen according to the degrees of freedom of the models. The F distribution then gives the probability of obtaining an F-value at least as large as the one observed if the simpler model were in fact adequate.
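
In the common one-way setting, where k groups contain N observations in total, the F-value is the ratio of the between-group mean square to the within-group mean square; this is the standard textbook formulation rather than anything specific to this article:

 F = \frac{SS_{\mathrm{between}} / (k - 1)}{SS_{\mathrm{within}} / (N - k)}

The resulting value is compared against an F distribution with (k − 1, N − k) degrees of freedom.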

The ANOVA test makes several assumptions about the data: that the errors are independent, that the data are normally distributed, and that the error variance is the same across groups (homoscedasticity, also called homogeneity of variance).
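
As a rough illustration, these assumptions can be checked before running the test using SciPy in Python; the three groups below are made-up numbers used only as a sketch, not data from this article:

 # Minimal sketch of checking ANOVA assumptions with SciPy.
 from scipy import stats
 
 # Hypothetical measurements for three groups (illustrative only).
 group_a = [4.1, 5.0, 4.7, 5.3, 4.8]
 group_b = [5.9, 6.2, 5.4, 6.0, 6.5]
 group_c = [4.9, 5.1, 5.6, 5.0, 5.3]
 
 # Normality of each group (Shapiro-Wilk test).
 for name, g in (("A", group_a), ("B", group_b), ("C", group_c)):
     w, p = stats.shapiro(g)
     print(f"group {name}: Shapiro-Wilk p = {p:.3f}")
 
 # Homogeneity of variance across groups (Levene's test).
 stat, p = stats.levene(group_a, group_b, group_c)
 print(f"Levene's test p = {p:.3f}")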

The one-way ANOVA is probably one of the most widely used statistical methods for comparing data sets. Today most scientists calculate it using a statistical software package, of which several are available.
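
A minimal sketch of a one-way ANOVA in Python using SciPy's f_oneway function, again with made-up illustrative groups rather than data from this article:

 # Minimal sketch of a one-way ANOVA with SciPy.
 from scipy import stats
 
 # Hypothetical measurements for three groups (illustrative only).
 group_a = [4.1, 5.0, 4.7, 5.3, 4.8]
 group_b = [5.9, 6.2, 5.4, 6.0, 6.5]
 group_c = [4.9, 5.1, 5.6, 5.0, 5.3]
 
 # f_oneway returns the F-value and the p-value from the corresponding
 # F distribution with (k - 1, N - k) degrees of freedom.
 f_value, p_value = stats.f_oneway(group_a, group_b, group_c)
 print(f"F = {f_value:.2f}, p = {p_value:.4f}")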