Now that we have a language for probability distributions, and a few probability distributions we can use for modeling, we can address probability calculations that emerge in the process of hypothesis testing.
Recall that significance, power, and $p$-values are all probabilities.
In Chapter 1, we learned how to make statistical decisions using the framework of critical value hypothesis tests, but we never discussed how to pick a “good” rejection region. There are several ways to do this.
A 2$\sigma$-test is a quick “back of the envelope” technique for evaluating whether the outcome of an experiment is sufficient evidence to reject a null hypothesis.
<aside> <img src="/icons/fleur-de-lis_purple.svg" alt="/icons/fleur-de-lis_purple.svg" width="40px" />
**Assumptions:** the test statistic $X$ has an approximately Gaussian null distribution with known mean $\mu$ and standard deviation $\sigma$.

Then the $2\sigma$-rule for constructing a rejection region is
$$ \mathcal{R} = \{X < \mu - 2 \sigma\} \, \cup \, \{X > \mu + 2\sigma\}. $$
</aside>
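Because a Gaussian variable lands more than two standard deviations from its mean only about 4.6% of the time, the $2\sigma$-rule corresponds to a significance level of roughly $0.046$. The sketch below verifies this with a standard-library normal CDF; the null-model parameters $\mu = 100$, $\sigma = 15$ are hypothetical values chosen for illustration.

```python
from math import erf, sqrt

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian CDF computed via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 100.0, 15.0              # hypothetical null-model parameters
lo, hi = mu - 2 * sigma, mu + 2 * sigma

# Significance = P(X in R | H0) = P(X < mu - 2*sigma) + P(X > mu + 2*sigma)
alpha = gaussian_cdf(lo, mu, sigma) + (1.0 - gaussian_cdf(hi, mu, sigma))
print(f"Rejection region: X < {lo} or X > {hi}")
print(f"Significance of the 2-sigma test: {alpha:.4f}")  # about 0.0455
```

Note that the significance does not depend on the particular $\mu$ and $\sigma$: standardizing $X$ always gives $P(|Z| > 2) \approx 0.0455$ for a Gaussian null.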
Template:
26Spring_Template_HT_2sigma.pdf
The $2\sigma$-rule will cause errors in your statistical decisions most often when your probability model is skewed, or when the right and left tails of the distribution look substantially different from those of a Gaussian distribution. Skewness can happen when a binomial distribution has a very low or very high success probability. The tails of a hypergeometric distribution can look non-Gaussian if the total population size $N$ is not large compared to the size of the target population $K$.
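To see the skewness problem concretely, consider a hypothetical binomial null model with $n = 30$ trials and a low success probability $p = 0.05$, so $\mu = np = 1.5$ and $\sigma = \sqrt{np(1-p)} \approx 1.19$. The sketch below computes the exact probability that $X$ lands in the $2\sigma$ rejection region:

```python
from math import comb, sqrt

def binom_pmf(k, n, p):
    """Exact binomial probability P(X = k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 30, 0.05                      # hypothetical skewed null model
mu = n * p                           # 1.5
sigma = sqrt(n * p * (1 - p))        # about 1.19

lo, hi = mu - 2 * sigma, mu + 2 * sigma

# Exact probability that X falls in the 2-sigma rejection region
alpha = sum(binom_pmf(k, n, p) for k in range(n + 1) if k < lo or k > hi)
print(f"2-sigma region: X < {lo:.2f} or X > {hi:.2f}")
print(f"Actual significance: {alpha:.4f}")  # about 0.061, not the Gaussian 0.0455
```

Here the left cutoff $\mu - 2\sigma \approx -0.89$ is negative, so the left tail of the rejection region is empty and all of the error probability (about $0.061$, versus the nominal $0.0455$) sits in the right tail. The $2\sigma$-rule both overshoots its nominal significance and distributes the error asymmetrically.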