The *F*-test compares the Variances from 2 different Populations or Processes. It divides one Variance by the other and uses the appropriate *F* Distribution to determine whether there is a Statistically Significant difference.

If you're familiar with *t*-tests, the *F*-test is analogous to the 2-Sample *t*-test. The *F*-test is a __Parametric__ test. It requires that the data in each of the 2 Samples be roughly Normal.
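A minimal sketch of the calculation, using made-up data and assuming SciPy is available (SciPy has no single built-in 2-sample Variance *F*-test function, so the ratio is computed directly):

```python
# Hypothetical data: divide one Sample Variance by the other, then look
# the ratio up in the F Distribution with the matching degrees of freedom.
import statistics
from scipy import stats

sample_1 = [21.0, 23.5, 19.8, 22.1, 24.3, 20.7, 22.9, 21.6]
sample_2 = [20.2, 21.1, 20.8, 21.5, 20.4, 21.0, 20.6, 21.3]

var_1 = statistics.variance(sample_1)  # Sample Variance (n - 1 denominator)
var_2 = statistics.variance(sample_2)

f_statistic = var_1 / var_2            # the Test Statistic
df_1, df_2 = len(sample_1) - 1, len(sample_2) - 1

# Two-tailed p-value from the F Distribution
p = 2 * min(stats.f.sf(f_statistic, df_1, df_2),
            stats.f.cdf(f_statistic, df_1, df_2))
print(f"F = {f_statistic:.3f}, p = {p:.4f}")
```

A small *p* here would indicate a Statistically Significant difference between the two Variances.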

The following compare-and-contrast table may help clarify these concepts:

Chi-Square (like *z*, *t*, and *F*) is a Test Statistic. That is, it has an associated family of Probability Distributions.

The Chi-Square Test for the Variance compares the Variance from a Single Population or Process to a Variance that we specify. That specified Variance could be a target value, a historical value, or anything else.

Since there is only 1 Sample of data from the single Population or Process, the Chi-Square test is analogous to the 1-Sample *t*-test.

In contrast to the *F*-test, the Chi-Square test is __Nonparametric__. It has no restrictions on the data.
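A minimal sketch of the calculation with made-up numbers, assuming SciPy is available; the specified Variance of 4.0 here is an arbitrary stand-in for a target or historical value:

```python
# Compare one Sample's Variance to a specified Variance using the
# Chi-Square Test Statistic: (n - 1) * s^2 / specified Variance.
import statistics
from scipy import stats

sample = [10.1, 12.4, 9.8, 11.6, 13.0, 10.9, 12.2, 11.3, 10.5, 12.8]
specified_variance = 4.0          # target or historical Variance (assumed)
n = len(sample)
s2 = statistics.variance(sample)  # Sample Variance, n - 1 denominator

chi_square = (n - 1) * s2 / specified_variance  # the Test Statistic
df = n - 1

# Two-tailed p-value from the Chi-Square Distribution
p = 2 * min(stats.chi2.sf(chi_square, df), stats.chi2.cdf(chi_square, df))
print(f"Chi-Square = {chi_square:.3f}, p = {p:.4f}")
```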

__Videos__: I have published the following relevant videos on my YouTube channel, "__Statistics from A to Z__":

- *F* Distribution: __https://youtu.be/w1TvaQgoNCY__
- Chi-Square -- the Test Statistic and Its Distributions: __https://youtu.be/RJMNkzuxOA4__
- *t*: the Test Statistic and Its Distributions: __youtu.be/3GCJU_RCgoM__

- Center: e.g. Mean
- Variation: e.g. Standard Deviation
- Shape: e.g. Skewness

Skewness is a case in which common usage of a term is the opposite of statistical usage. If the average person saw the Distribution on the left, they would say that it's skewed to the right, because that is where the bulk of the curve is. However, in statistics, it's the opposite. The Skew is in the direction of the long tail.

If you can remember these drawings, think of **"the tail wagging the dog."**
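A quick numerical illustration of that rule, with two made-up data sets and assuming SciPy is available: the Skew is named for the direction of the long tail, not for where the bulk of the curve sits.

```python
# The sign of the Skewness statistic follows the long tail.
from scipy import stats

# Most values bunch high; a long tail stretches left toward lower values
left_tail = [1, 7, 8, 8, 9, 9, 9, 10, 10, 10]
# Most values bunch low; a long tail stretches right toward higher values
right_tail = [1, 1, 1, 2, 2, 2, 3, 3, 4, 10]

print(stats.skew(left_tail))   # negative: left-skewed (left tail)
print(stats.skew(right_tail))  # positive: right-skewed (right tail)
```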

See the Videos pages of this website for a listing of available and planned videos.

First of all, notice that the 2-Sample test, on the left, __does__ have 2 Samples. We see that there are two different groups of test subjects involved (note the names are different) -- the Trained and the Not Trained. The 2-Sample t-test will compare the Mean score of the people who were __not__ trained with the Mean score of different people who __were__ trained.

The story with the **Paired Samples t-test** is very different. We only have **one set of test subjects**, but 2 different conditions under which their scores were collected. For each person (test subject), a pair of scores -- Before and After -- was collected. (Before-and-After comparisons appear to be the most common use for the Paired test.)

Then, for each individual, the __difference__ between the two scores is calculated. **The values of the differences are the Sample** (in this case: 4, 7, 8, 3, 8). And **the Mean of those differences is compared by the test to a Mean of zero.**
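Using the five differences from the example above, this idea can be sketched in a few lines (assuming SciPy is available):

```python
# A Paired Samples t-test is just a 1-Sample t-test on the differences,
# compared against a Mean of zero.
from scipy import stats

differences = [4, 7, 8, 3, 8]  # After minus Before, one per test subject

t, p = stats.ttest_1samp(differences, popmean=0)
print(f"t = {t:.3f}, p = {p:.4f}")
```

The same result comes from `stats.ttest_rel` applied to the Before and After scores themselves, since it computes these differences internally.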

For more on the subject, you can view my video, __t, the Test Statistic and its Distributions__.

But, it seems that the confusion sowed by statistics knows no bounds. A PhD in chemical engineering recently told me,

"I never did get the hang of statistics."

The horizontal axis shows values of the Test Statistic, *z*. So, *z* is a point value on this horizontal *z*-axis. *z* = 0 is to the left of these close-ups of the right tail. The value of *z* is calculated from the Sample data.

For more on how these four concepts work together, there is an article in the book, "Alpha, *p*, Critical Value and Test Statistic -- How They Work Together". I think this is the best article in the book. You can also see that article's content on my YouTube video. There are also individual articles and videos on each of the 4 concepts. My YouTube Channel is "__Statistics from A to Z -- Confusing Concepts Clarified__".

- Note that the calculated *z* defines the boundary of a hatched area. The hatched areas under the curve represent the value of the Cumulative Probability, *p*.
- And *z*-critical (the Critical Value of *z*) defines the boundary of the shaded area representing α.
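A small sketch of how the four concepts line up for a right-tailed *z*-test, using a made-up calculated *z* of 1.8 and Alpha = 0.05 (and assuming SciPy is available):

```python
# z beyond z-critical, and p below Alpha, are two views of the same
# comparison: both mean "Statistically Significant."
from scipy import stats

alpha = 0.05
z = 1.8                                 # calculated from the Sample data

z_critical = stats.norm.ppf(1 - alpha)  # boundary of the Alpha region
p = stats.norm.sf(z)                    # tail Probability beyond z

print(z > z_critical, p < alpha)
```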

The following illustrations are not numerically precise. But, conceptually, they portray the concept of Sum of Squares Within as the width of the “meaty” part of a Distribution curve – the part without the skinny tails on either side.

Here, SSW = SS1 + SS2 + SS3
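That sum can be sketched numerically with three made-up groups: each group's Sum of Squares is its squared deviations about its own group Mean, and SSW simply adds them up.

```python
# Sum of Squares Within (SSW): add up each group's own Sum of Squares.
def sum_of_squares(group):
    mean = sum(group) / len(group)
    return sum((x - mean) ** 2 for x in group)

group_1 = [4, 5, 6]   # SS1 = 2
group_2 = [7, 9, 11]  # SS2 = 8
group_3 = [2, 2, 5]   # SS3 = 6

ssw = (sum_of_squares(group_1) + sum_of_squares(group_2)
       + sum_of_squares(group_3))
print(ssw)  # SSW = SS1 + SS2 + SS3 = 16
```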

For more on Sums of Squares, see my video of that name: __https://bit.ly/2JWMpoo__ .

For more on Sums of Squares within ANOVA, see my video, "ANOVA Part 2 (of 4): How It Does It": __http://bit.ly/2nI7ScR__ .
