I just uploaded a new video: Alpha and Beta Errors. Previously, I had uploaded a video on Statistical Errors -- Types, Uses, and Interrelationships. See the Videos page on this site for a list of my previously uploaded videos.
All other things being equal, an increase in Sample Size (n) reduces all types of Sampling Errors, including Alpha and Beta Errors and the Margin of Error.

A Sampling "Error" is not a mistake. It is simply the reduction in accuracy to be expected when one makes an estimate based on a portion – a Sample – of the data in a Population or Process. There are several types of Sampling Error. Two types of Sampling Errors are described in terms of their Probabilities:

- *p* **is the Probability of an Alpha Error**, the Probability of a False Positive.
- *β* **is the Probability of a Beta Error**, the Probability of a False Negative.
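A small simulation can illustrate the sample-size effect. This is a sketch, not from the book, assuming Python with numpy and scipy: for a fixed significance level, both the estimated Probability of a Beta Error and the average Margin of Error (the half-width of a Confidence Interval) shrink as n grows. The true mean, standard deviation, and trial counts below are all hypothetical choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def beta_and_moe(n, true_mean=0.5, sd=1.0, alpha=0.05, trials=1000):
    """Estimate the Beta Error probability and average Margin of Error
    for a 1-Sample t-test of H0: mu = 0, when the true mean is 0.5."""
    misses = 0
    moes = []
    for _ in range(trials):
        sample = rng.normal(true_mean, sd, n)
        t_stat, p = stats.ttest_1samp(sample, 0.0)
        if p >= alpha:   # failed to reject a false H0 -> a Beta Error
            misses += 1
        # MOE = half-width of the 2-sided 95% Confidence Interval
        moes.append(stats.t.ppf(0.975, n - 1) * sample.std(ddof=1) / np.sqrt(n))
    return misses / trials, float(np.mean(moes))

for n in (10, 40, 160):
    beta, moe = beta_and_moe(n)
    print(f"n={n:4d}  estimated beta={beta:.3f}  average MOE={moe:.3f}")
```

Running this shows both columns falling as n increases. (The Alpha Error probability is not shown shrinking here, because in a hypothesis test it is fixed in advance by the chosen significance level.)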
Margin of Error (MOE) is the width of an interval in the units of the data. It is half the width of a 2-sided Confidence Interval.

All three types of Sampling Error are reduced when the Sample Size is increased. This makes intuitive sense: a very small Sample is less likely to be a good representative of the properties of the larger Population or Process, while the values of Statistics calculated from a much larger Sample are likely to be much closer to the values of the corresponding Population or Process Parameters.

For more on the statistical concepts mentioned here (p, β, MOE, Confidence Intervals, Statistical Errors, Samples and Sampling), please see my book or my YouTube channel -- both are titled Statistics from A to Z -- Confusing Concepts Clarified.

All processes have variation. A process can be said to be "under control", "stable", or "predictable" if the variation

- is **confined within a defined range (Control Charts can tell us that)**, and
- is **random / shows no pattern (Run Rules determine this)**.
Such Variation is called Common Cause Variation; it is like random "noise" within an under-control process. Variation which is not Common Cause is called Special Cause Variation. It is a signal that factors outside the process are affecting it.

Any Special Cause Variation must be eliminated before one can attempt to narrow the range of Common Cause Variation. Until we eliminate Special Cause Variation, we don't have a process that we can improve: there are factors outside the process which affect it, and that changes the actual process that is happening in ways that we don't know. Once we know that we have Special Cause Variation, we can use various Root Cause Analysis methods to identify the Special Cause, so that we can eliminate it. Only then can we use process/quality improvement methods like Lean Six Sigma to try to reduce the Common Cause Variation. Here are some examples of Special Causes of Variation:

- an equipment malfunction causes occasional spikes in the size of holes drilled
- an out-of-stock condition causes a customer order to be delayed
- vibration from a passing train causes a chemical reaction to speed up
- a temporarily opened window causes the temperature to drop
- an untrained employee temporarily fills in
Here is an example of a Control Chart. Each point is the Mean of a small Sample of data. The Upper Control Limit (UCL) and the Lower Control Limit (LCL) are usually set at 3 Standard Deviations from the Center Line. We see that there is one anomalous Sample Mean outside the Control Limits. This is due to Special Cause Variation. So, we need to do some root cause analysis to determine what caused that. And we need to make changes to eliminate it, before we can try to narrow the range of the Control Limits.
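The Control Limit logic can be sketched in a few lines of Python. This is an illustrative sketch with made-up data, not code from the book; it estimates Sigma from the average moving range (the XmR-chart convention, with constant d2 = 1.128), which is less distorted by a single anomalous point than the overall Standard Deviation would be.

```python
import numpy as np

# Hypothetical Sample Means plotted on the chart;
# point index 6 simulates Special Cause Variation
means = np.array([10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 15.0, 10.0, 9.9, 10.1])

center_line = means.mean()
# Estimate Sigma from the average moving range (d2 = 1.128 for n = 2)
sigma = np.abs(np.diff(means)).mean() / 1.128
ucl = center_line + 3 * sigma   # Upper Control Limit
lcl = center_line - 3 * sigma   # Lower Control Limit

outside = [i for i, m in enumerate(means) if m > ucl or m < lcl]
print(f"CL={center_line:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}  outside: {outside}")
```

Here the one anomalous point falls above the UCL and is flagged; in practice it would then trigger the root cause analysis described above.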
In addition to being within Control Chart limits, the data must be random. There are a number of Run Rules which describe patterns which are not random. Some patterns are not always easy to spot by eyeballing charts. Fortunately, the same software which produces Control Charts will usually also identify patterns described by the Run Rules. Here are some common patterns which indicate non-random (Special Cause) Variation. A Sigma is a Standard Deviation.

- __Trend__: 6 consecutively increasing or 6 consecutively decreasing points
- __Shift__ in the Mean: 8 consecutive points on the same side of the Center Line
- __Cycle__: 14 consecutive points alternating up and down
- 2 out of 3 points beyond 2 Sigma and on the same side of the Center Line
- 4 out of 5 points beyond 1 Sigma and on the same side of the Center Line
- 15 consecutive points within 1 Sigma of the Center line
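The first two Run Rules above are straightforward to express in code. Here is a sketch in Python (not from the book; the run lengths are parameters, since different references use slightly different counts):

```python
def trend(points, run=6):
    """Run Rule: `run` consecutively increasing or decreasing points."""
    for i in range(len(points) - run + 1):
        window = points[i:i + run]
        pairs = list(zip(window, window[1:]))
        if all(a < b for a, b in pairs) or all(a > b for a, b in pairs):
            return True
    return False

def shift(points, center, run=8):
    """Run Rule: `run` consecutive points on the same side of the Center Line.
    A point exactly on the Center Line breaks the streak."""
    streak, prev = 0, None
    for p in points:
        side = "above" if p > center else "below" if p < center else None
        streak = streak + 1 if (side is not None and side == prev) else (1 if side else 0)
        prev = side
        if streak >= run:
            return True
    return False

print(trend([1, 2, 3, 4, 5, 6]))          # steadily rising -> non-random
print(shift([10.2] * 8, center=10.0))     # 8 points above the Center Line
```

Control-charting software applies all of the rules at once; this sketch just shows that each rule is a simple scan over the plotted points.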
Reproduced by permission of John Wiley and Sons, Inc. from the book, Statistics from A to Z – Confusing Concepts Clarified.

I just uploaded a new video:
- **Confidence Intervals – Part 2 of 2:** https://youtu.be/J00BvbXuudU
And, oops, it looks like I missed announcing on this blog the two videos before that:
- Confidence Intervals -- Part 1 of 2: https://youtu.be/bS2Tmxpc0mw
- Inferential Statistics: https://youtu.be/qOG0mXLximo
A larger Test Statistic value (such as that for z, t, F, or Chi-Square) results in a smaller p-value. The p-value is the Probability of an Alpha (False Positive) Error. And conversely, a smaller Test Statistic value results in a larger value for p. Here's how it works:

- A value of the Test Statistic, say *t*, is calculated from the Sample data.
- That value is plotted on the horizontal axis of the Distribution of the Test Statistic.
- *p* is then calculated as the area under the curve bounded by the Test Statistic value. It is shown as the hatched area in the diagrams below.
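With a statistics library, that tail area can be computed directly, and the inverse relationship is easy to verify. A sketch assuming Python with scipy (the degrees of freedom are a hypothetical choice): a larger t gives a smaller right-tail p.

```python
from scipy import stats

df = 24  # degrees of freedom for the t Distribution (hypothetical)

# p is the area under the curve to the right of the Test Statistic value
for t_value in (1.0, 2.0, 3.0):
    p = stats.t.sf(t_value, df)   # sf (survival function) = 1 - CDF
    print(f"t = {t_value:.1f}  ->  1-tailed p = {p:.4f}")
```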
In the close-ups of the right tail, zero is not visible. It is the center of the bell-shaped t curve, and it is out of the picture to the left. So, a larger value of the Test Statistic, t, would be farther to the right, and the hatched area under the curve representing the p-value would be smaller. This is illustrated in the middle column of the table above. Conversely, if the Test Statistic is smaller, then its value is plotted more to the left, closer to zero, and so the hatched area under the curve representing p would be larger. This is shown in the rightmost column of the table.

## Statistics Tip: In a 1-tailed test, the Alternative Hypothesis points in the direction of the tail

1/2/2020

In the
previous Tip, we showed how to state the Null Hypothesis as an equation (e.g. H0: μA = μB). And the Alternative Hypothesis would be the opposite of that (HA: μA ≠ μB). These would work for a 2-tailed (2-sided) test, when we only want to know whether there is a (Statistically Significant) difference between the two Means, not which one may be bigger than the other.

But what about when we do care about the direction of the difference? This would be a 1-tailed (1-sided) test. And the Alternative Hypothesis will tell us whether it's right-tailed or left-tailed. (We need to specify the tail for our statistical calculator or software.) How does this work? First of all, it's helpful to know that the Alternative Hypothesis is also known as the "Maintained Hypothesis". The Alternative Hypothesis is the Hypothesis which we are maintaining and would like to prove.

For example, we maintain that the Mean lifetime of the lightbulbs we manufacture is more than 1,300 hours. That is, we maintain that µ > 1,300. This then becomes our Alternative Hypothesis: HA: µ > 1,300. Note that the comparison symbol of HA points to the right. So, this test is right-tailed. If, on the other hand, we maintained that the Mean defect rate of a new process is less than the Mean defect rate of the old process, our Maintained/Alternative Hypothesis would be HA: µNew < µOld, and the test would be left-tailed.

The concept of Null Hypotheses can be confusing, because it is about nothingness. And the human mind is wired to understand things that exist, not things that don't exist. I've found that it helps to state the Null Hypothesis as either

- No difference
- No change, or
- No effect
So, if we are comparing the Means of Populations A and B, we could say: Null Hypothesis (H0): "There is no difference between the Mean of Population A and the Mean of Population B." But it can be stated even more clearly and succinctly as an equation: H0: μA = μB. And the Alternative Hypothesis would be the opposite of this: HA: μA ≠ μB. This is for a 2-tailed test. It gets more complicated when we get into 1-tailed tests. In those tests, we'd have either HA: μA > μB or HA: μA < μB. But that would be the subject of another Statistics Tip.
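In software, the direction of the Alternative Hypothesis is typically passed as a parameter. Here is a sketch, not from the book, using scipy (version 1.6 or later, which added the `alternative` keyword) and a made-up sample of lightbulb lifetimes, testing the right-tailed claim HA: µ > 1,300 from the lightbulb example above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical sample of 50 lightbulb lifetimes (hours)
lifetimes = rng.normal(1350, 100, 50)

# HA: mu > 1300  ->  right-tailed test, so alternative="greater"
t_stat, p = stats.ttest_1samp(lifetimes, 1300, alternative="greater")
print(f"t = {t_stat:.2f}, right-tailed p = {p:.4f}")

# The same data with the default 2-tailed Alternative (HA: mu != 1300)
t2, p2 = stats.ttest_1samp(lifetimes, 1300)
print(f"2-tailed p = {p2:.4f}")
```

For a left-tailed test such as HA: µNew < µOld, the same parameter would be `alternative="less"`.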
I just uploaded a new video to the book's channel on YouTube. It's the 3rd of 3 in a playlist on Samples and Sampling. It's called Sample Size Part 2 (of 2) – for Measurements/Continuous Data: https://youtu.be/mxR-Lsc3ikc

For a complete list of videos based on this book that are completed and next in line, please see the
videos page of this website.

I just uploaded a new video, Sample Size Part 1 – Proportions of Count Data. This was uploaded to my YouTube channel Statistics From A to Z, Confusing Concepts Clarified. It will be part of a playlist on Samples and Sampling. See the
Videos page of this website for the latest status of my statistics videos completed and planned.

Confusing language and terminology is a big part of what makes statistics confusing. Each Binomial Trial -- also known as a Bernoulli Trial -- is a random experiment with only 2 possible outcomes, called "success" and "failure". The Probability of success is the same every time the experiment is conducted. A coin flip illustrates this perfectly: you either get "heads" or "tails", and the Probability of each coin flip (Binomial Trial) is always 50% heads (or 50% tails). In a Binomial experiment, each Trial is counted as either a success or a failure. And a success is defined as what we want to count. Let's say we are performing quality control in a manufacturing process. We are counting defects. Every time we find a defect, we add 1 to the count of "successes".
I always found that confusing, so in the book, instead of saying "success" or "failure", I suggest saying "yes" or "no".
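Sticking with the defect-counting example, here is a sketch (not from the book; the probability and batch sizes are hypothetical) of Bernoulli Trials in Python, where a "success" means "yes, this unit is defective":

```python
import numpy as np
from scipy import stats

p_defect = 0.1   # Probability of "success" (a defect) on each Binomial Trial
n = 20           # number of units inspected (Trials per batch)

# Exact Probability of seeing exactly 3 defects in 20 units
print(f"P(3 defects in 20) = {stats.binom.pmf(3, n, p_defect):.4f}")

# Simulate: each row is one batch of 20 yes/no Trials
rng = np.random.default_rng(0)
batches = rng.random((10_000, n)) < p_defect   # True = "yes" (a defect)
defect_counts = batches.sum(axis=1)
print(f"Simulated mean defects per batch: {defect_counts.mean():.2f} "
      f"(expected {n * p_defect})")
```

The simulated average converges on n × p, which is the Mean of the Binomial Distribution.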
## Author

Andrew A. (Andy) Jawlik is the author of the book, Statistics from A to Z -- Confusing Concepts Clarified, published by Wiley.