I'm not sure at this point what will come next. But I'll announce it on this blog.


The rightmost chart combines two line charts into one.

In a similar fashion, a Line Chart can help differentiate between Observed and Expected Frequencies.

Statistics from A to Z -- Confusing Concepts Clarified

This video explains the general concepts of Control Charts, which are used in process statistics (as in the Six Sigma discipline).

The most commonly used statistical tests are "Parametric", that is, they require that one or more Parameters meet certain conditions or "assumptions". Most frequently, the assumption is that the Distribution of the Population or Process is roughly Normal. Roughly equal Variance is also a common assumption.

If these conditions are not met, the Parametric test cannot be used, and a Nonparametric test must be used instead. This table shows the Nonparametric test that can be used in place of several common Parametric tests.
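One common pairing of this kind is the 2-Sample t-test (Parametric) and the Mann-Whitney U test (Nonparametric). As an illustrative sketch, not an example from the book, the U Statistic can be computed from ranks in plain Python; because it uses ranks rather than the raw values, it does not assume a Normal Distribution. The sample values below are made up for the illustration.

```python
# Sketch of the Mann-Whitney U Statistic, a Nonparametric counterpart
# of the 2-Sample t-test. It ranks the pooled data instead of assuming
# a Normal Distribution.

def mann_whitney_u(sample1, sample2):
    pooled = sorted(sample1 + sample2)
    # Midranks: tied values share the average of the ranks they occupy
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[x] for x in sample1)      # rank sum of sample 1
    n1 = len(sample1)
    return r1 - n1 * (n1 + 1) / 2

a = [12, 15, 14, 10]   # assumed example data
b = [22, 25, 19, 18]
print(mann_whitney_u(a, b))  # 0.0 -- every value in a ranks below every value in b
```

A U of 0 (or of n1 × n2) is the most extreme separation possible; values near n1 × n2 / 2 suggest the two Samples overlap heavily.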



**Another way that Degrees of Freedom is described is "The number of independent pieces of information that go into the calculation of a Statistic."** To illustrate, let's say we have a Sample of n = 5 data values: 2, 4, 6, 8, and 10.

When we calculate the Sample Mean, we have 5 independent pieces of information – the five values of the data. They are independent because no value depends on the value of another. So, for the Mean, df = 5.

Sample Mean = (2 + 4 + 6 + 8 + 10) / 5 = 30 / 5 = 6

But, when we calculate the Sample Variance, we use the Mean as well as the 5 data values. The Mean is not an independent piece of information, because it is dependent on the 5 data values.

Also, when we include the Mean, we only have 4 independent pieces of information left. If we know that the Mean is 6 (so the 5 values must sum to 30), and we have the data values 2, 4, 6, and 8, then we can calculate that the last data value has to be 10. So, 10 no longer brings independent information to the table.
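The "last value is forced" point can be checked with a quick calculation, using the example's numbers:

```python
# If the Mean (6) of n = 5 values is known, only 4 values are free to vary;
# the 5th is forced. Values taken from the example above.
known = [2, 4, 6, 8]
mean = 6
n = 5
last = mean * n - sum(known)   # 30 - 20
print(last)  # 10
```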

**If we then use that Statistic to calculate another Statistic, it brings its own estimation error into the calculation of the second Statistic.** This error is in addition to the second Statistic's estimation error. This happens in the case of the Sample Variance.

__Example: Sample Variance__

**Numerator for Sample Variance:** Σ(x – x̄)²


The numerator of the formula for Sample Variance includes the Sample Mean. It subtracts the Sample Mean from each data value (the x's) in the Sample, squares each of those differences, and then sums the squared differences.

So, **the Sample Variance has two sources of error:**

- it is an estimate from Sample data, and
- it includes the estimation error from the Sample Mean.

**The Degrees of Freedom is intended to adjust for the additional error introduced when one Statistic is used to calculate another.**

We don't need to make this adjustment for the Sample Mean, but we do need to do so for the Sample Variance. We divide by n – 1, instead of n.
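Putting the pieces together for the example data, a short check using only Python's standard library shows the division by n – 1, and confirms the result against the built-in `statistics.variance` (which uses the same n – 1 formula):

```python
import statistics

data = [2, 4, 6, 8, 10]
mean = sum(data) / len(data)              # 6.0
ss = sum((x - mean) ** 2 for x in data)   # numerator: sum of squared deviations = 40.0
sample_variance = ss / (len(data) - 1)    # divide by n - 1 = 4, not n = 5
print(sample_variance)                    # 10.0
print(statistics.variance(data))          # same n - 1 ("Sample") formula
```

Dividing by n = 5 instead would give 8.0, an underestimate; the n – 1 divisor is the Degrees of Freedom adjustment described above.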


The *p*-value contains the same information as the value of the Test Statistic:

- Sample data is used to calculate a value for a Test Statistic, say, *z*.
- This value of *z* forms the boundary for the area under the curve which represents the Cumulative Probability, *p*.
- From this, tables or calculations give us the value of *p*.
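As a sketch of those steps, Python's standard-library `statistics.NormalDist` can play the role of the tables; the value z = 1.96 is just an assumed example:

```python
from statistics import NormalDist

# From a Test Statistic value z, the Standard Normal CDF gives the
# Cumulative Probability; the right-tail p-value is 1 minus that area.
z = 1.96
cumulative = NormalDist().cdf(z)
p = 1 - cumulative
print(round(p, 4))  # 0.025
```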

Similarly, *α* contains the same information as the Critical Value.

So comparing *p* with *α* is the same as comparing the Test Statistic value with the Critical Value. But the comparison symbols (">" and "<") point in opposite directions. That's because *p* and the Test Statistic value have an inverse relationship: a smaller value for *p* corresponds to a larger Test Statistic value.
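A quick check of that equivalence, for a right-tailed test with an assumed Test Statistic value of z = 1.8 and α = 0.05:

```python
from statistics import NormalDist

alpha = 0.05
z = 1.8                                       # assumed Test Statistic value
critical = NormalDist().inv_cdf(1 - alpha)    # Critical Value, about 1.6449
p = 1 - NormalDist().cdf(z)                   # p-value, about 0.0359

# The two comparisons always reach the same conclusion,
# with the comparison symbols flipped:
print(p < alpha)     # True
print(z > critical)  # True
```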
