This post is a summary of my experiment with portfolios in my real analysis class. The basic idea is this: students would submit proofs for me to comment on each week, then improve and resubmit them as needed. At the end of the semester, students linked their proofs to learning goals and submitted the portfolio for grading. Students were responsible for including only correct, well-written proofs. The portfolio was a major portion of their grade: roughly 60%.
Here are my thoughts, in no particular order:
- Students submitted an average of 37.44 proofs. The lowest number of proofs was 9, and the highest number was 69.
- Students got pretty stressed out about the portfolio at the end of the semester. In part, this was due to procrastination—some had a semester’s worth of material to organize in just a couple of days. But students were also stressed that they had to evaluate whether their own work was correct (even though I gave them feedback on many—but not all—of the proofs). I sympathize with this, since correctness is difficult for beginners to judge. I feel that I graded the portfolios fairly (I was somewhat lenient, but not too lenient), but this did not undo their earlier stress.
- The students did self-assessments. On average, they assigned themselves a grade 0.18 GPA points higher than the one I gave them. For instance, if I gave a student a B (3.0), the student typically gave themselves a high B (3.18). I thought this was pretty good.
- Nine of the 25 students’ self-assessments agreed exactly with my assessment. Seventeen of the 25 were within half a grade (in either direction). The three least accurate guesses were high by 2.5 grade points, high by 2.0 grade points, and low by 2.0 grade points.
- In spite of the high number of proofs to grade per student, grading went fairly quickly: I got through 25 portfolios in two full work days. I credit this mostly to the fact that I kept good records on the students’ proofs throughout the semester, so there were many proofs I did not have to read again.
- I allowed students to use problems from in-class midterms in their portfolios. I would not do this again, as these problems were much simpler than the others, owing to the time constraints of an in-class exam.
- One student submitted only nine proofs. He wrote that he had simply procrastinated too much and was unable to catch up by the time he realized he needed to. This was the only severe case of procrastination, which is pretty good considering that students received no points for any of the work until the end of the semester (save for a practice portfolio—graded on completion only—halfway through the semester).
My current assessment is that this practice has a lot of potential—particularly in proof-based courses. I really feel that I was able to get a sense of how each student wrote, and, better yet, to see each student’s progression. I am likely to do something similar in my elementary education course this semester (although I will be using a Rundquistesque Voice version of it).
Can you think of any ways that I could improve this process?
[Update: I just found out that many of my students skipped their other classes—but not mine—at the end of the semester so that they could finish the portfolio. I did not anticipate that. I might have to build in some anti-procrastination measures next time.]