Wednesday, 28 December 2016

Reflections / Postmortem on teaching Stat 305 1609

Stat 305, Introduction to Statistics for the Life Sciences, is an intermediate-level service course, mainly for the health sciences. It is a close relative of Stat 302, which I had taught previously, in its requirements, audience, and level of difficulty. Compared to Stat 302, Stat 305 spends less time on regression and analysis of variance, and more time on contingency tables and survival analysis.


Changes that worked from last time: 
Using the microphone. Even though the microphone was set to almost zero (I have a theatre voice), using it saved my voice enough to keep going through the semester. Drawing from other sources also worked: not everything has to be written originally and specifically for a given lecture. Between 40 and 50 percent of my notes were reused from Stat 302, and many of the assignment questions were textbook questions with additional parts rather than questions made from scratch.

Changes that didn't work from last time: 
De-emphasizing assignments. Only 4% of the course grade was on assignments, and even that was graded only on completion. I did this because copying had gotten out of control when assignments were worth 20% of the grade. It didn't have the desired effect of giving people a reason to actually do the assignments and learn, rather than copying to protect their grades.

Changes I should have done but didn't:

Keeping ahead of the course. I did it for a few weeks, but it got away from me, and I spent much of the semester doing things at the last feasible minute. This includes giving out practice materials. On multiple occasions I watched f.lux turn my screen from red to blue, which it does to match the colour profile of my screen to the rising sun.

What surprised me:

The amount of per-student effort this course took. There were fewer typos than in previous classes, and therefore fewer student questions about inconsistencies in the notes. However, there was an unusually large number of grade-change requests. Maybe there was a demographic difference I hadn't noticed before, like more pre-med students, or maybe the questions I gave on midterms were more open to interpretation, or both.

What I need to change:

My assignment structure. There should have been more, smaller assignments, ideally including practice questions not to be handed in. Having more questions available in total is good, because finding relevant practice material is hard for me, let alone for students. Having more, smaller assignments also mitigates spikes in student workload, and means that the tutors at the stats workshop need to be aware of less of my material at any one time.


Tophat is a platform that lets instructors present slides and pose questions to an audience using the laptops and mobile devices that students already have. My original plan was to use iClickers to poll the audience, but Tophat's platform turned out to be a better alternative for almost the same cost. It also syncs the slides and other material I was presenting to these devices. My concerns about spectrum crunch (data issues from slides being sent to 200-400 devices) didn't seem to be a problem.

Scaling was my biggest concern for this course, given that there were more students in the class than in my last two elementary schools combined. I turned to Tophat as a means of gathering student responses from the masses and not just from the vocal few. It also provided a lot of the microbreaks that I like to put in every 10-15 minutes to reset the attention span clock.

However, Tophat isn't just a polling system that uses people's devices. It's also a store of lecture notes and grades, and a forum for students. This is problematic because the students already have a learning management system, Canvas, that is used across all their classes. That means two sets of grades, two forums (fora? forae?), and two places to look for notes, on top of email and my webpage.

To compound this, I was also trying to introduce a digital marking system called Crowdmark. That failed, partly because I wasn't prepared and partly because students' data would be stored in the United States, which introduces a whole new layer of opt-in consent. Next term, Crowdmark will have Canadian storage and this won't be a problem.

I intend to use Tophat for my next two classes in the spring, and hopefully I can use it better in the future.

The sheep in the room:

During the first two midterms, there was blatant, out-of-control cheating. Invigilators (and even some students) reported seeing students copying from each other, writing past the allotted time, and consulting entire notebooks. There was no space to move people to the front for anything suspicious, and there was too much of it to identify and punish people with any sort of consistency. Students are protected, as they should be, from accusations of academic dishonesty by a process similar to the one that handles criminal charges, so an argument of 'you punished me but not xxxx' for the same offence is a reasonable defense.

The final exam was less bad, in part because of the space between students and attempts to preemptively separate groups of friends. I also included two bonus questions about practices that constitute cheating and their possible consequences. For all I know, these questions did nothing, but some students told me they appreciated them nonetheless. Others were still determined to copy off each other, and were moved to the front.

What else can be done? Even if I take the dozens of hours to meet with these students and go through the paperwork and arguing and tears to hand out zeros on exams, will it dissuade future cheaters? Will it improve the integrity of my courses? Will I be confronted with some retaliatory accusation?

Perhaps it's possible to create an environment with fewer obvious incentives to cheat. Excessive time pressure, for example, could push people to write past their time limit. Poor conditions are not an excuse for cheating, but if better conditions can reduce cheating, then my goal is met. But why a notebook? The students were allowed a double-sided aid sheet; that should have been enough for everything.

This is something I don't yet have an answer for.


The midterm exams were very difficult for people, and I anticipated a lot of exam anxiety on the final, so I put two other bonus questions on its first page.

One of them asked the student to copy every word that was HIGHLIGHTED LIKE THIS: five key words that had been overlooked on many students' midterms in similar questions.

The other question employed priming, which is a method of evoking a certain mindset by having someone process information that covertly requires that mindset. The question was

“What would you like to be the world's leading expert in?”

and was worth a bonus of 1% on the final for any non-blank answer. The point of the question was to have the students imagine themselves as being highly competent at something, anything, before taking a test that required acting competently. Most of them wrote 'statistics'. In the literature on test taking, a similar question involving winning a Nobel Prize was found to have a positive effect on test scores in a randomized trial. It's impossible to tell if my question had any effect, because it was given to everyone. However, several students told me after the exam that they enjoyed the bonus questions.

Priming is one of the exam conditions I want to test in a formal, randomized experiment in the near future. It will need to pass the university's ethics board first, which it surely will, but approval is still required. It's funny that one can include something like this on an exam for everyone without any ethical review, yet need approval to test its effect, because that would be an experiment on human subjects.

Facebook wound up in a similar situation: they ran into ethical trouble for manipulating people's emotions by adjusting post order, but the trouble came from doing it for published research rather than for something strictly commercial like advertising.

Reading assignments:

In the past, I have included reading assignments of relevant snippets of research papers that use the methods being taught. Worrying about overwhelming the students, I had dropped this. However, I think I'll return to it, perhaps as a bonus assignment. A couple of students even told me after the class that the material was too simple, and hopefully some well-selected articles will satisfy them without scaring everyone else.


Using R in the classroom:

In the past, I also had students use R, often for the first time. I had mentioned in a previous reflection the need to test my own code more carefully before putting it in an assignment. Doing so was easily worth the effort.

Another improvement was to include the data as part of the code, rather than in separate CSV files that had to be loaded in. Every assignment with R included code that defined each variable of each dataset as a vector and then combined the variables with the data.frame() function. The largest dataset I used had 6 columns and 100 rows; anything much larger would have to be pseudo-randomly generated. I received almost no questions about missing data or R errors; the ones I did receive involved installing a package or using one dataset in two separate questions.
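The pattern was something like the following minimal sketch (the variable names and values here are hypothetical, not from an actual assignment): each column is defined as a vector, then combined into a data frame, so students can run the whole block without touching any files.

```r
# Define each variable of the dataset as a vector.
# (Hypothetical example data, not from a real assignment.)
dose   <- c(0, 0, 5, 5, 10, 10)            # treatment dose
weight <- c(21.3, 20.8, 19.5, 19.9, 18.1, 18.4)  # response

# Combine the vectors into a data frame, instead of read.csv().
mice <- data.frame(dose, weight)

nrow(mice)   # 6 rows
ncol(mice)   # 2 columns
```

Since the data travels inside the script, there is nothing to download, no working directory to set, and no file path to mistype.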
