Correlation analysis mainly uses the Correlation Coefficient.

Correlation analysis and Linear Regression both attempt to discern whether two Variables vary in sync.

In Correlation, we ask to what degree the plotted data forms a shape that seems to follow an imaginary line that would go through it. But we don't try to specify that line.

Correlation Analysis does not attempt to identify a Cause-Effect relationship.
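As a minimal sketch of what the Correlation Coefficient measures, here is a short Python example computing Pearson's *r* for two Variables that vary in sync. The data values are made up for illustration; they are not from the article.

```python
import statistics

# Two Variables measured on the same subjects (illustrative data).
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]

# Pearson Correlation Coefficient r: the co-variation of x and y,
# scaled so that r always falls between -1 and +1.
mx, my = statistics.mean(x), statistics.mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / (statistics.stdev(x) * statistics.stdev(y) * (len(x) - 1))

print(round(r, 3))  # near +1: the points closely follow an imaginary line
```

Note that a high *r* tells us only that the points follow an imaginary line closely; it does not specify that line, and it says nothing about cause and effect.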

The data can be distributed in any way. For example -- as shown above -- it can be double-peaked and asymmetrical, or it can have the same number of points for every value of *x*. If we take many sufficiently large Samples of data with any Distribution, the Distribution of the Means (x-bars) of these Samples will approximate the Normal Distribution.
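This claim can be checked with a short simulation. The following is a minimal sketch, using an illustrative right-skewed (exponential) population; the population and the sample counts are assumptions for demonstration only.

```python
import random
import statistics

random.seed(0)

# A population with a decidedly non-Normal, right-skewed shape.
population = [random.expovariate(1.0) for _ in range(100_000)]

# Take many sufficiently large Samples; record each Sample's Mean (x-bar).
sample_means = []
for _ in range(2_000):
    sample = random.sample(population, 50)  # n = 50 > 30
    sample_means.append(statistics.mean(sample))

# The Means cluster tightly around the population Mean, with far less
# spread than the skewed population itself -- the CLT in action.
print(round(statistics.mean(population), 2))
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))  # much smaller than the population's
```

Plotting `sample_means` as a histogram would show the familiar bell shape, even though the population is skewed.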

There is something intuitive about the CLT. The Mean of a Sample taken from any Distribution is very unlikely to be at the far left or far right of the range of the Distribution. Means (averages), by their very definition, tend to average-out extremes. So, their Probabilities would be highest in the center of a Distribution and lowest at the extreme left or right.

Less intuitively obvious is that the CLT applies to Proportions as well as to Means.

Let's say that *p* is the Proportion of the count of a category of items in a Sample -- say, the Proportion of green jelly beans in a candy bin. We take many Samples, with replacement, of the same size *n*, and we calculate the Proportion for each Sample. When we graph these Proportions, they will approximate a Normal Distribution.

How large a Sample Size, *n*, is "sufficiently large"? It depends on the use and the statistic. For Means and most uses, *n* > 30 is considered large enough. But for Proportions, it's a little more complicated -- it depends on the value of *p*. *n* is large enough if *np* > 5 __and__ *n*(1 - *p*) > 5.

The practical effect of this is:

- If the value of *p* is near 0.5, then we can get by with a smaller Sample Size.
- If the value of *p* is close to 0 or 1, then we need a larger *n*.

This table gives us the specifics; the minimum Sample Size, *n*, is shown in the middle row.
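The rule of thumb above can also be turned into a small helper function. This is a minimal sketch; the *p* values chosen below are just examples.

```python
def large_enough(n: int, p: float) -> bool:
    """CLT rule of thumb for Proportions: n*p > 5 and n*(1 - p) > 5."""
    return n * p > 5 and n * (1 - p) > 5

def minimum_n(p: float) -> int:
    """Smallest Sample Size n satisfying the rule for a given p."""
    n = 1
    while not large_enough(n, p):
        n += 1
    return n

# Near p = 0.5 a small Sample suffices; near 0 or 1 we need far more data.
print(minimum_n(0.5))  # 11  (n * 0.5 must exceed 5)
print(minimum_n(0.1))  # 51  (n * 0.1 must exceed 5)
print(minimum_n(0.9))  # 51  (n * (1 - 0.9) must exceed 5)
```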

Let's say we are inspecting shirts at the end of the manufacturing line. We may be interested in the number of defective Units (shirts), because any defective shirt is likely to be rejected by our customer. However, one defective shirt can contain more than one defect. So, we are also interested in the Count of individual defects -- the Occurrences -- because that tells us how much of a quality problem we have in our manufacturing process.

For example, if 1 shirt has 3 defects, that would be 3 Occurrences of a defect, but only 1 Unit counted as defective.

We would use the Poisson Distribution in analyzing Probabilities of Occurrences of defects. To analyze the Probability of Units, we could use the Binomial or the Hypergeometric Distribution.
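Here is a minimal sketch of how the two Distributions divide the work. The average defect rate per shirt and the batch size are illustrative assumptions, not data from the article.

```python
import math

# Occurrences: Poisson with rate lam = assumed average defects per shirt.
lam = 0.5

def poisson_pmf(k: int, lam: float) -> float:
    """P(exactly k Occurrences) under the Poisson Distribution."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Probability a shirt has exactly 0, 1, 2, or 3 defect Occurrences.
for k in range(4):
    print(k, round(poisson_pmf(k, lam), 4))

# Units: a shirt is a defective Unit if it has at least 1 Occurrence.
p_defective = 1 - poisson_pmf(0, lam)

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k defective Units in n) under the Binomial Distribution."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability that exactly 2 shirts in a batch of 10 are defective Units.
print(round(binomial_pmf(2, 10, p_defective), 4))
```

(The Binomial assumes sampling with replacement or a large lot; for a small lot sampled without replacement, the Hypergeometric would be used instead.)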

There is an article in the book focusing on the Poisson Distribution. There is also a __video__ on my YouTube channel, Statistics from A to Z.


We test a number of subjects with dosages from 0 to 3 pills. And we find a straight-line relationship, y = 3x, between the number of pills (x) and a measure of health of the subjects. So, we can make statements about the relationship for dosages within that tested range.

But we __cannot__ make a similar statement for dosages outside that range.

This is called extrapolating the conclusions of your Regression Model beyond the range of the data used to create it. There is no mathematical basis for doing that, and it can have negative consequences, as this little cartoon from my book illustrates.

In the graphs below, the dots are data points. In the graph on the left, it is clear that there is a linear correlation between the drug dosage (x) and the health outcome (y) for the range we tested, 0 to 3 pills. And we can interpolate between the measured points. For example, we might reasonably expect that 1.5 pills would yield a health outcome halfway between that of 1 pill and 2 pills.
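The interpolation-versus-extrapolation point can be sketched in a few lines of Python, using the y = 3x Model from the example above. The explicit range check is just one illustrative way to guard against extrapolating, not a prescribed method.

```python
# Model fitted from the tested range of dosages (0 to 3 pills): y = 3x.
TESTED_MIN, TESTED_MAX = 0.0, 3.0

def predict_health(pills: float) -> float:
    """Predict the health outcome, but only inside the tested range."""
    if not TESTED_MIN <= pills <= TESTED_MAX:
        raise ValueError(
            f"{pills} pills is outside the tested range "
            f"[{TESTED_MIN}, {TESTED_MAX}]; extrapolating has no basis."
        )
    return 3 * pills

# Interpolating within the tested range is reasonable:
print(predict_health(1.5))  # 4.5 -- halfway between y(1) = 3 and y(2) = 6

# Extrapolating beyond it is refused:
try:
    predict_health(10)
except ValueError as e:
    print(e)
```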

For more on this and other aspects of Regression, you can see the YouTube videos in my playlist on Regression. (See my channel: Statistics from A to Z - Confusing Concepts Clarified.)

This is the 9th and final __video__ on Regression in my channel. Residuals represent the error in a Regression Model. That is, Residuals represent the Variation in the outcome Variable y which is not explained by the Regression Model. Residuals must be analyzed in several ways to ensure that they are random, and that they do not represent Variation caused by some unidentified x-factor.
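The idea that Residuals are the unexplained Variation can be sketched in a few lines. The data values below are made up for illustration: an outcome y that is roughly 3x plus random noise.

```python
import statistics

# Illustrative data (an assumption, not from the article).
x = [0, 1, 2, 3, 4, 5]
y = [0.2, 2.8, 6.3, 8.9, 12.1, 15.2]

# Fit a simple linear Regression Model y-hat = b0 + b1*x by least squares.
mx, my = statistics.mean(x), statistics.mean(y)
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
b0 = my - b1 * mx

# Residual = observed y minus the Model's prediction: the Variation
# the Model does not explain.
residuals = [b - (b0 + b1 * a) for a, b in zip(x, y)]

# For a least-squares fit, the Residuals always sum to (essentially) zero;
# a real analysis would also plot them to check they look random.
print([round(r, 2) for r in residuals])
print(round(sum(residuals), 6))
```

If the Residuals showed a pattern (a curve, a trend, growing spread), that would suggest an unidentified x-factor or a wrong Model form.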

See the videos page in this website for a listing of available and planned videos.
