Alpha is the Significance Level of a statistical test. We select a value for Alpha based on the level of Confidence we want that the test will avoid a False Positive (aka Alpha, aka Type I) Error. In the diagrams below, Alpha is split in half and shown as shaded areas under the right and left tails of the Distribution curve. This is for a 2-tailed, aka 2-sided, test. In the left diagram, we have selected the common value of 5% for Alpha. A Critical Value is the point on the horizontal axis where a shaded area ends. The Margin of Error (MOE) is half the distance between the two Critical Values.
If we want to make Alpha even smaller, the distance between the Critical Values gets larger, resulting in a larger Margin of Error. The right diagram shows that if we want to make the MOE smaller, the price is a larger Alpha. This illustrates the Alpha - MOE see-saw effect. But what if we wanted a smaller MOE without making Alpha larger? Is that possible? It is -- by increasing n, the Sample Size. (It should be noted that, after a certain point, continuing to increase n yields diminishing returns. So, it's not a universal cure for these errors.) If you'd like to learn more about Alpha, I have 2 YouTube videos which may be of interest.
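The see-saw can also be seen numerically. Below is a minimal sketch in Python (assuming SciPy is available) that computes the z-based Margin of Error for a 2-tailed test on a mean; the Standard Deviation sigma and the Sample Sizes are made-up values for illustration.

```python
from scipy.stats import norm

def margin_of_error(alpha, n, sigma=1.0):
    """Half-width of a 2-sided z-based interval for a mean,
    assuming the Standard Deviation sigma is known."""
    z_critical = norm.ppf(1 - alpha / 2)  # Critical Value for a 2-tailed test
    return z_critical * sigma / n ** 0.5

# Smaller Alpha -> Critical Values move outward -> larger MOE (the see-saw)
print(round(margin_of_error(alpha=0.05, n=100), 4))  # 0.196
print(round(margin_of_error(alpha=0.01, n=100), 4))  # 0.2576
# Increasing the Sample Size n shrinks the MOE without enlarging Alpha
print(round(margin_of_error(alpha=0.05, n=400), 4))  # 0.098
```

Quadrupling n here halves the MOE, since the MOE shrinks with the square root of n -- which is also why ever-larger Samples yield diminishing returns.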
Continuing the playlist on Regression, I have uploaded a new video to YouTube:
Regression -- Part 4: Multiple Linear. There are 5 Keys to Understanding; here is the 3rd. See the Videos page of this website for more info on available and planned videos.

Categorical Variables are used in ANOM, ANOVA, with Proportions, and in the Chi-Square Tests for Independence and Goodness of Fit. Categorical Variables are also known as "Nominal" (named) Variables and "Attributes" Variables. The concept can be confusing, because the values of a Categorical Variable are not numbers, but names of categories. The numbers associated with Categorical Variables come from counts of the data values within a named category. Here's how it works:
- In this example there are two __Categorical Variables__, "Gender" and "Ice Cream (flavor)".
- The __values__ of the two Categorical Variables are the __names of the categories__ for the Variable. For example, the Categorical Variable "Gender" has 2 possible values: "female" and "male".
- If we're going to use these Variables in a Chi-Square Test for Independence, for example, we need to have some numbers. The __numbers are the counts__ of the data values in each category. For example, the count of persons whose gender is "female" and whose favorite ice cream flavor is "vanilla" is 25.
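To see how such counts feed a Chi-Square Test for Independence, here is a minimal sketch in Python (assuming SciPy is available). The 25 in the "female / vanilla" cell matches the example above; the flavor categories and all other counts are made up for illustration.

```python
from scipy.stats import chi2_contingency

# Contingency table of counts: rows = Gender, columns = Ice Cream flavor
#           vanilla  chocolate  strawberry
counts = [[25,      15,        10],    # female
          [20,      25,         5]]    # male   (made-up counts)

chi2, p_value, dof, expected = chi2_contingency(counts)
print(dof)             # 2  -- (rows - 1) x (columns - 1)
print(round(chi2, 2))  # 4.72
```

The test compares the observed counts to the counts expected if Gender and flavor preference were independent; the p-value is then compared to Alpha to decide.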
Continuing the playlist on Regression, I have uploaded a new video to YouTube:
Regression -- Part 3: Analysis Basics. It talks about things that are required for all 3 types of Regression covered in the book -- Simple Linear, Multiple Linear, and Simple Nonlinear Regression. Topics include clip levels for R-squared, Residuals, establishing Cause and Effect, and the dangers of Extrapolation. See the Videos page of this website for the status of completed and planned videos.

## Statistics Tip of the Week: How the selection of a value for Alpha specifies the Critical Value
11/29/2018

In Hypothesis Testing, before the data is collected, a value for Alpha, the Level of Significance, is selected. The person performing the test selects the value. Most commonly, 5% is selected. Alpha is a Cumulative Probability -- the Probability of a range of values. It is shown as a shaded area under the curve of the Distribution of a Test Statistic, such as z.

If we have the Distribution of a Test Statistic and a Cumulative Probability at one or both tails of the curve of the Distribution, software or tables will tell us the value of the Test Statistic which forms the boundary of the Cumulative Probability. In the above concept flow diagram, we show how selecting Alpha = 5% for a one-tailed (right-tailed) test results in the Critical Value being 1.645. I earlier uploaded videos on the statistical concepts mentioned above to my YouTube channel: "Statistics from A to Z -- Confusing Concepts Clarified". Continuing the playlist on Regression, I have uploaded a new video to YouTube:
Regression -- Part 2: Simple Linear. See the Videos page of this website for the status of completed and planned videos.

A Boxplot, also known as a Box and Whiskers Plot, is a good way to visually depict Variation in a dataset (e.g., a Sample or Population). And showing several Boxplots vertically is useful for comparing Variation among several datasets. The boxes depict the range within which 50% of the data falls for each dataset.
- The bottom of the box identifies the 25th percentile (25% of the data is below it)
- The line in the middle is the Median (50th percentile)
- The top of the box is the 75th percentile
- The line segments (the "whiskers") at the top and bottom extend to the highest and lowest values of the dataset that lie within 1.5 box lengths of the box. (If there are no data points that far out, the whisker ends at the farthest point.) Points beyond 1.5 box lengths are termed "Outliers". Points beyond 3 box lengths are called "Extremes" or "Extreme Outliers".
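The percentiles and whisker fences described above are easy to compute directly. Here is a minimal sketch in Python using NumPy, with a made-up set of scores; note that different software packages interpolate percentiles slightly differently.

```python
import numpy as np

data = np.array([2, 5, 6, 7, 7, 8, 9, 9, 10, 25])  # made-up scores

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1                   # the box length (Interquartile Range)
lower_fence = q1 - 1.5 * iqr    # whiskers stop at the farthest data
upper_fence = q3 + 1.5 * iqr    # point inside these two fences
outliers = data[(data < lower_fence) | (data > upper_fence)]
print(outliers)                 # the points plotted beyond the whiskers
```

For this dataset the values 2 and 25 fall outside the fences, so they would be plotted as individual Outlier points beyond the whiskers.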
In this illustration, a higher score is better. Treatment A has the highest individual score, but it has considerably more Variation in results than Treatments B and C. The Medians for Treatments A, B, and C are fairly close. So, we can see at a glance that Treatment D can be eliminated from consideration. Treatment B has the highest Median and gives very consistent results (small Variation). So, this plot may be all we need to select B as the best treatment.
## Statistics Tip: Sampling with Replacement is required when using the Binomial Distribution
10/7/2018

One of the requirements for using the Binomial Distribution is that
each trial must be independent. One consequence of this is that the Sampling must be With Replacement. To illustrate this, let's say we are doing a study in a small lake to determine the Proportion of lake trout. Each trial consists of catching and identifying 1 fish. If it's a lake trout, we count 1. The population of the fish is finite. We don't know this, but let's say it's 100 total fish: 70 lake trout and 30 other fish. Each time we catch a fish, we throw it back before catching another fish. This is called Sampling With Replacement. Then, the Proportion of lake trout remains at 70%. And the Probability for any one trial is 70% for lake trout. If, on the other hand, we keep each fish we catch, then we are Sampling Without Replacement. Let's say that the first 5 fish which we catch (and keep) are lake trout. Then, there are now 95 fish in the lake, of which 65 are lake trout. The percentage of lake trout is now 65/95 = 68.4%. This is a change from the original 70%. So, we don't have the same Probability each time of catching a lake trout. Sampling Without Replacement has caused the trials to not be independent. So, we can't use the Binomial Distribution. We must use the Hypergeometric Distribution instead. For more on the Binomial Distribution, see my YouTube video.
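The lake-trout example can be checked numerically. Here is a minimal sketch in Python (assuming SciPy is available), computing the Probability that all 5 fish caught are lake trout under each sampling scheme; the numbers come straight from the example above.

```python
from scipy.stats import binom, hypergeom

N, K, n = 100, 70, 5     # 100 fish total, 70 lake trout, 5 fish caught

# With Replacement: independent trials, p = 0.70 each time -> Binomial
p_with = binom.pmf(5, n, K / N)
# Without Replacement: trials not independent -> Hypergeometric
p_without = hypergeom.pmf(5, N, K, n)

print(round(p_with, 4))     # 0.1681  (= 0.70 ** 5)
print(round(p_without, 4))  # 0.1608
```

The two answers differ because, Without Replacement, each lake trout we keep lowers the Probability of catching another one -- exactly the dependence between trials described above.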
## Author
Andrew A. (Andy) Jawlik is the author of the book, Statistics from A to Z -- Confusing Concepts Clarified, published by Wiley.