Getting Started

library(tidyverse)

Coin Flips

Suppose we flipped a coin 20 times and observed 17 tails and 3 heads.

coin_flips <- c(0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0)

But this looks suspicious. We would naturally ask: “what’s the probability of 17 tails and 3 heads if the coin is fair?”

Part 1: Manual testing with a known distribution

Step 1: Define the hypotheses.

Without realizing it, we are setting up a null hypothesis:

  • \(H_0\): The coin is fair.

Mathematically we state this null hypothesis in terms of \(p\) (the probability of the coin landing heads):

Q - What should \(p\) equal if the coin is fair?

  • \(H_0\): \(p = ?\)

This is a hypothesis about “truth”: the “true” parameter \(p\) that belongs to this particular coin when flipped infinitely many times. We don’t know the true \(p\); we make an assumption about it.

Step 2: Summarize data.

All we have is a sample of coin flips.

Q - What is our statistic for the proportion of heads?

\(\hat{p} = ?\)

In words, \(\hat{p}\) is read “p-hat” and denotes the sample proportion. See a brief table of mathematical notation for common quantities of interest below:

Quantity             Parameter       Statistic
Mean                 \(\mu\)         \(\bar{x}\)
Variance             \(\sigma^2\)    \(s^2\)
Standard deviation   \(\sigma\)      \(s\)
Median               \(M\)           \(\tilde{x}\)
Proportion           \(p\)           \(\hat{p}\)

Step 1 (revisited): Define the hypotheses.

Now we ask ourselves: “Is \(\hat{p}\) so different from our assumed value that we must reject that assumption?”

To answer this, the alternative hypothesis matters. Our options are:

    1. \(H_1: p < 0.5\), the coin is biased to land tails
    2. \(H_1: p > 0.5\), the coin is biased to land heads
    3. \(H_1: p \neq 0.5\), the coin is biased

The choice of alternative hypothesis affects the calculation of the p-value in the next step, and thus our conclusions. For this reason, you should choose the alternative hypothesis before looking at the data.

Step 3: Assess the evidence.

In this step, we compute a p-value. By definition, the p-value is the probability of seeing what we observed (in this case, \(\hat{p} = 3/20\)), or something more extreme, if \(H_0\) is true. The direction of the alternative hypothesis determines what “something more extreme” means.

The following probabilities are the p-values for the three alternative hypotheses given above:

    1. The probability of observing \(\hat{p} \leq 3/20\) if the null is true.
    2. The probability of observing \(\hat{p} \geq 3/20\) if the null is true.
    3. The probability of observing \(\hat{p} \leq 3/20\) or \(\hat{p} \geq 17/20\) if the null is true.

We will find the three p-values using simulation. First, we need to know what 20 coin flips would look like “under the null”, in other words, if the null were actually true.

Q - Create 200,000 random numbers from Binomial(20, 0.5). This generates a distribution for the number of heads out of 20 coin flips if the coin is fair; dividing by 20 gives the corresponding distribution for \(\hat{p}\), the sample proportion of heads. This is called the null distribution.

set.seed(123)
null_heads <- rbinom(____, size = __, prob = __)
null <- data.frame(null_heads)
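
One way to fill in the blanks, since the question asks for 200,000 draws, each counting the heads in 20 flips of a fair coin:

set.seed(123)
# number of heads in each of 200,000 simulated sets of 20 fair coin flips
null_heads <- rbinom(200000, size = 20, prob = 0.5)
null <- data.frame(null_heads)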

Q - Create a new column p that is the proportion of heads in each row and create a histogram with bins = 20.
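
A sketch of one way to do this, using the null data frame from above:

# proportion of heads in each simulated set of 20 flips
null <- null %>%
  mutate(p = null_heads / 20)

ggplot(null, aes(x = p)) +
  geom_histogram(bins = 20) +
  labs(x = "Simulated proportion of heads", y = "Count")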

Q - Find the three p-values using null.

## 1)

## 2)

## 3)
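
A possible set of answers, using the p column created above; taking the mean() of a logical condition gives the proportion of simulated samples where the condition holds:

# 1) H1: p < 0.5
null %>% summarize(p_value = mean(p <= 3/20))

# 2) H1: p > 0.5
null %>% summarize(p_value = mean(p >= 3/20))

# 3) H1: p != 0.5
null %>% summarize(p_value = mean(p <= 3/20 | p >= 17/20))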

Step 4: Make a conclusion.

Q - Make a testing decision at the significance level 0.05 (\(\alpha = 0.05\)) for each alternative:

    1. \(H_1: p < 0.5\), the coin is biased to land tails
    2. \(H_1: p > 0.5\), the coin is biased to land heads
    3. \(H_1: p \neq 0.5\), the coin is biased

Rent in Manhattan

We will use the following data one more time to conduct a hypothesis test about the typical rent of one-bedroom apartments in Manhattan:

manhattan <- read_csv("data/manhattan.csv")

Part 2: Testing using functions in infer

We were lucky in Part 1 because we knew the theoretical distribution of the number of heads in \(n\) coin flips with heads probability \(p\). The distribution was Binomial(\(n\), \(p\)), and the rbinom() function generated random numbers from that distribution.

Often, however, you do not know the exact distribution of the statistic of interest. In that case, we can use bootstrapping to capture the variability of the sample statistic from the original sample.

In Part 2, we will conduct hypothesis testing in a tidy way. We load the tidymodels package that contains the infer package.

library(tidymodels)

Step 1: Define the hypotheses.

Q - Suppose you are interested in whether the mean rent of one-bedroom apartments in Manhattan is less than $3000. State the null and alternative hypotheses.
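
For reference, one natural way to state them, where \(\mu\) denotes the true mean rent of one-bedroom apartments in Manhattan (the later code assumes this setup):

  • \(H_0: \mu = 3000\), the mean rent is $3000.
  • \(H_1: \mu < 3000\), the mean rent is less than $3000.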

Step 2: Summarize data.

Q - What is an appropriate statistic for \(\mu\)? Find its value.

xbar <- manhattan %>% 
  summarize(xbar = ____) %>% 
  pull()

The function pull() extracts a single column from a data frame. In the code chunk above, without pull() the resulting output is a tibble, while with pull() the output becomes a scalar.
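
A possible completion, assuming the one-bedroom rents in manhattan are stored in a column named rent (an assumption about the data file):

# observed sample mean rent, extracted as a single number with pull()
xbar <- manhattan %>%
  summarize(xbar = mean(rent)) %>%
  pull()

xbar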

Step 3: Assess the evidence.

Q - Let’s first approximate the null distribution by completing the code chunk below. Use 1,000 reps for the in-class activity. You will use about 15,000 reps for assignments outside of class.

set.seed(602)

# save resulting bootstrap distribution
null_dist <- manhattan %>%
  # specify the variable of interest
  specify(response = ____) %>%
  # declare the null hypothesis 
  hypothesize(null = ____, mu = ____) %>% 
  # generate reps bootstrap samples
  generate(reps = 1000, type = ____) %>%
  # calculate the statistic of each bootstrap sample
  calculate(stat = ____)

Compared to bootstrapping for estimation, we need one additional function for testing:

  • hypothesize(): declare a null hypothesis about a point estimate, e.g., a proportion, mean, median, or variance (today), or about independence (tomorrow), by setting null = "point" or null = "independence".
      • If making a hypothesis about a point estimate, also set the null value, e.g., mu = 10.
      • This shifts the bootstrap distribution so that it is centered at the null value.
      • Unlike when constructing a confidence interval, this step is necessary for hypothesis testing because we assume “the null hypothesis is actually true”.
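
One possible way to fill in the blanks in the chunk above, again assuming the rent values in manhattan are stored in a column named rent:

set.seed(602)

null_dist <- manhattan %>%
  specify(response = rent) %>%                   # variable of interest (assumed column name)
  hypothesize(null = "point", mu = 3000) %>%     # H0: the true mean rent is $3000
  generate(reps = 1000, type = "bootstrap") %>%  # 1,000 bootstrap samples, shifted to the null
  calculate(stat = "mean")                       # mean of each bootstrap sample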

Q - Visualize the null distribution. Fill in the blanks in the code chunk below.

visualize(null_dist) +
  labs(x = "______", y = "______", 
       title = "Simulated null distribution of sample means",
       subtitle = "[about the assumption]") 

After simulating under the null, next we want to find where our observed statistic lands in the null distribution.

Q - Use the code below to visualize the p-value on the graph above.

visualize(null_dist) +
  labs(x = "Sample mean", y = "Count", 
       title = "Simulated null distribution of sample means",
       subtitle = "if the true mean rent for one-bedroom apartments was $3000 in Manhattan") +
  shade_pvalue(obs_stat = ____, direction = ____)

  • shade_pvalue(): plots a p-value region on top of visualize(). We plug the observed statistic into obs_stat; direction = "left", direction = "right", and direction = "two-sided" correspond to the directionality of your alternative hypothesis.
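
For instance, with the one-sided alternative \(H_1: \mu < 3000\), the blanks could be filled in as follows (assuming xbar from Step 2):

# shade the left-tail p-value region at the observed sample mean
visualize(null_dist) +
  shade_pvalue(obs_stat = xbar, direction = "left")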

Q - Manually compute the p-value and compare to the easy-p-value code chunk below.

Manual p-value:
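
One possible manual calculation, assuming the alternative \(H_1: \mu < 3000\) and the xbar and null_dist objects created above:

# proportion of simulated sample means at or below the observed sample mean
null_dist %>%
  summarize(p_value = mean(stat <= xbar))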

Easy p-value:
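
And a sketch using infer's get_p_value() helper, which computes the same proportion:

# same p-value via infer
null_dist %>%
  get_p_value(obs_stat = xbar, direction = "left")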

Step 4: Make a conclusion.

Q - At the significance level 0.05, make your conclusion with the following 3 components:

  • How the p-value compares to the significance level.
  • The decision you make with respect to the hypotheses (reject \(H_0\) / fail to reject \(H_0\)).
  • The conclusion in the context of the analysis question.

Practice

  1. Suppose instead you wanted to test the claim that the mean price of rent is not equal to $3000. Which of the following would change? Select all that apply.
      1. Null hypothesis
      2. Alternative hypothesis
      3. Null distribution
      4. p-value
  2. Let’s test the claim in (1). Conduct the hypothesis test, then state your conclusions in the context of the data at significance levels 0.05 and 0.01. (A minimal code sketch follows below.)
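
As a starting point for (2), a minimal sketch assuming xbar and null_dist from Part 2 are still available; the null hypothesis and null distribution are unchanged, only the direction of the p-value changes:

# two-sided p-value for H1: mu != 3000, reusing the existing null distribution
null_dist %>%
  get_p_value(obs_stat = xbar, direction = "two-sided")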

Submitting Application Exercises

  • Once you have completed the activity, push your final changes to your GitHub repo.
  • Make sure you committed at least three times.
  • Check that your repo is updated on GitHub, and that’s all you need to do to submit application exercises for participation.