
The Importance of Feedback

May 22, 2014

My semester has ended, and now is the time to write some post-mortem entries in this weblog. The first idea is something that is probably obvious, but I over-thought it. I have been putting more of the course’s assessment at the end of the semester lately, thinking that this is when students are most prepared to do well.

And I think I am right about that, but I took it too far: I did not give my students enough regular feedback during the first part of the semester this spring. My education students actually pointed this out to me, and I realized they were correct as soon as they said it (it also reinforced that they are pretty on top of education issues). Fortunately, I get to teach that course for education majors again this fall; I will make things right this time.

Additionally, I am working on ways of getting students immediate feedback. Clickers are one way of doing this, but I also might have students start grading their own quizzes (I would provide a couple of solution keys and a marker for them) and doing more computer-graded stuff.

Semester Reflection, Part I

December 12, 2011

I am back to blogging after a semester of figuring out how to be the parent of two kids.  We are slowly figuring it out.


Anyway, below is a summary of what I did this semester, followed by what I would change in future semesters.  Recall that I am teaching real analysis.


  1. Students read a section of the text and watched some screencasts before class.  They also had to answer some questions online before class; if they did not answer the questions, they got a nagging email asking why.  This led to a very high completion rate.

  2. Students could request screencasts, thereby giving them a customized lecture (of sorts).

  3. For the first 60% of the semester, students spent about 75% of the time answering clicker questions (individually and in teams of three).  The remaining 25% of the time was spent starting homework problems.

  4. For the last 40% of the semester, we reviewed.  Students had to re-read a chapter before class.  In class, I gave the students four proofs to do in teams of two on a whiteboard.  Two proofs were very basic, and two were more complex.  I went around and gave feedback to each of the teams individually.  The idea was to run through the proofs of these four problems by the end of class (I put the proofs on slides), but we rarely got to all four questions.  I would also present the proof of a major theorem from the chapter about halfway through class.

  5. Students were graded according to a midterm, a final, a portfolio, and a “practice portfolio.”  The exams were fairly standard.  The portfolio is a collection of each student’s best proofs from throughout the semester, and the student has to provide evidence that he/she understands each of the course topics.  The portfolios are yet to be graded.  The practice portfolio was the same idea midway through the semester; it was graded on completion only, since its purpose was to get the students used to this different way of grading.

  6. Students who wanted to get an A for the semester had to do a project.  This means that they had to create screencasts on a section of the textbook that we had not covered during the semester (I used Abbott’s textbook, and he designates certain sections as “project sections”).


What went well:


  1. The clickers/peer instruction.  Analysis is full of ideas that are difficult to understand; if you do not understand them, it is even more difficult to prove anything about them.  The clickers really gave everyone—with virtually no exception—a solid idea of what was going on.

  2. The last 40% of the class was terrific.  We essentially went through the textbook twice, and the students made huuuuuuuge improvements the second time.


What I would improve next time:


  1. During the “clicker” portion of the semester, the class time spent starting the homework was not effective (in part because I did not give it enough time, but I don’t think it would have been great with ample time, either).  Instead, I would give the students the kind of “basic” proofs from the review portion of the semester each class period.  Perhaps do 50% clickers and 50% “basic proofs” each class (two proofs would probably suffice, and most teams would probably only get to one).

  2. Do the practice portfolio much earlier.  I did it right after the midterm, and that did not give students enough time to digest it.  Also, I used it to recommend whether a student should do a project, and students would have had more time for their projects if the practice portfolio had come earlier.

  3. I also did two Calibrated Peer Review assignments. These failed due to errors on my part. First, I had students put their proofs on Moodle and link to them on the CPR site. This was a problem because students do not actually have access to those files on Moodle (it worked when I tested it because I have more permissions). Second, I told the students the wrong deadline for the second assignment. I think that this tool has a ton of potential, but I need to eliminate the user error first.

  4. I screwed up the standards a bit. For example, I was missing “Cauchy sequences” and “Limits” (Limits!). I was able to come up with fair workarounds for the students, but in later semesters I think I will release the standards to the students only as we reach them. This should force me to think through the standards an nth time, and I likely won’t miss anything major by doing this.


The jury is still out on portfolios.  We will see.