I've wanted, and struggled, to incorporate reading assignments into my stats teaching.
Here are two readings I incorporated into a 3rd-year stats course geared towards the life and health sciences.
The paper in the first assignment was "For Objective Causal Inference, Design Trumps Analysis", by Donald Rubin.
By skimming the first ten pages of the paper, the learner can find the answers to the following questions. To speed things up, they can also use the terms in quotations to jump to the appropriate section. These are basic knowledge level questions, but there is a lot of material in the paper, and 3rd year undergrads typically have little experience reading academic papers, especially outside of their chosen field.
R1) Randomized experiments make causal inference valid because we know the 'scores' "are known from the design of the experiment". What are these 'scores'? (Name only)
R2) Rubin is stating that randomized experiments are 'the gold standard' for causal inference. Does that mean causality can be inferred from all randomized experiments, or are some "poorly suited"? If so, give an example.
R3) What are the two design steps that are "absolutely essential" for objective inferences?
R4) In Section 2.1, how is a treatment defined?
R5) What are covariates, "in contrast to" outcome/response variables?
R6) What "self optimizing" behaviour happens when treatments are not assigned randomly by the experimenter?
Some of these questions were also addressed in the lecture covering causality. My answers can be found in the assignment key.
The paper in the second assignment was "The Insignificance of Significance Testing", a commentary by Neville Nicholls.
The paper is shorter (5 pages) and less jargon-intensive, but the answers to some of the questions involve combining knowledge from the lectures with material in the paper.
R1, 2 pts) Give an example of a data set that could have physical significance, but not statistical significance.
R2, 2 pts) Cohen mentions some probability when talking about an n=50 sample from a population with population correlation of ρ = 0.30. What is the name for this probability? (The name isn’t in the paper, it IS in our notes on hypothesis testing)
R3, 2 pts) The 1979-95 data used to calculate climate trends has no uncertainty from sampling. How is this possible?
R4, 3 pts) What deviations from a well-behaved distribution could affect the correlation between SOI and snowfall?
R5, 3 pts) What are three alternatives to null hypothesis testing?
R6, 2 pts) What is another term for the “repeated investigations” issue? That is, when a so-called insignificant result is left unpublished until someone repeats it and finds significance simply by chance? (Again, the term is in our notes, think Tukey)
R7, 3 pts) A permutation test produces a value that works very similarly to a p-value. However, a permutation test has one major advantage over a classic hypothesis test (and confidence intervals). What is it? (Hint: Spearman correlation and the median have this same advantage).
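For readers unfamiliar with the mechanics behind R7, here is a minimal sketch of a permutation test for a correlation, written in Python with NumPy. The function name, the toy data, and the number of shuffles are my own illustration, not anything from the paper or the assignment; the sketch only shows how shuffling one variable builds a null distribution and yields a p-value-like proportion.

```python
import numpy as np

def permutation_test_corr(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for the correlation between x and y.

    Shuffling y breaks any real association with x, so the correlations
    of the shuffled pairs form an empirical null distribution. The
    returned value is the proportion of shuffled correlations at least
    as extreme as the observed one.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    observed = np.corrcoef(x, y)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = np.corrcoef(x, rng.permutation(y))[0, 1]
    return np.mean(np.abs(null) >= abs(observed))

# Toy example: x and y are strongly related, so the proportion is small.
rng = np.random.default_rng(1)
x = np.arange(20.0)
y = x + rng.normal(scale=2.0, size=20)
p_like = permutation_test_corr(x, y)
```

The same recipe works with any statistic (Spearman correlation, difference of medians, and so on) by swapping out the `np.corrcoef` line.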
Interpretation of the null hypothesis is covered throughout the lectures, so I feel this paper really was a supplement and not just a distraction from the curriculum. This assignment isn't due yet, so I can't release the key here yet.
Part of the struggle of integrating readings into classes is making them relevant to the lecture material and the grading scheme. I was asked several times if the papers' material would be on the exams. Doing so would have been too much of a distraction from the core material; the plan was to have students casually read papers in data science, not study them in detail. Only the lecture on causality was made examinable.
Another issue was making my expectations clear. I intended the answer to each question to be two sentences or shorter. Some people wrote 1-2 paragraphs. For the second paper, I specified that each answer should be 25 words or shorter.
My biggest regret is not being able to find open-access papers to use. On the other hand, finding an approachable paper by Rubin himself seemed worth it.