Posts Tagged ‘Assessment for learning’

Peter Elbow is awesome

January 12, 2013

I am busy preparing for classes, but I want to post something here so that I can find it later: Peter Elbow writes about “minimal grading,” which is essentially the wheel that I am reinventing. Enjoy the article.

(hat tip to Angela Vierling-Claassen, who tweeted the article)

Students demand: “More clicker questions!”

May 3, 2012

In my IBL-flipped hybrid complex analysis class, we finished the “flipped” portion (and the entire textbook) in half of the semester. We spent the next quarter of the semester finishing IBL presentations.

I asked what they wanted to do for the remainder of the semester. They said “more clicker questions” (they also requested time to work on problems in class, including application problems). This is what we have been doing.

After starting class by working through some of their homework problems, I asked how the class wanted to spend the last 30 minutes—I had both clicker questions and some new problems to work on. We found that 79% of the students wanted clicker questions (clickers are nice for a variety of reasons). We did a couple, but the remaining questions started a new theme, so I decided to stop the clicker questions for the day so that we could work on some of the problems.

That was a mistake. They were mad. Okay, not mad, but they really wanted more questions. So we did them for the remainder of class.

When I asked why they had such a preference for clicker questions, they gave two answers:

  1. It is too easy to get stuck on the problems that I have been giving (I have been giving them trickier proofs, since this is their third time through the material).
  2. The clicker questions really help them learn.

I was really happy to hear that.

Jigsawing

April 19, 2012

My elementary education students are creating vlogs that explain why different algorithms work for different operations. They have been creating roughly one video per week, posting them, and then getting feedback from the course grader. The only graded part of this is at the end of the semester after many drafts.

This week, we did a jigsawing-type activity to improve the videos (like almost everything else, this idea was inspired by Andy Rundquist). On Tuesday, I split the students into four groups: one for addition, one for subtraction, one for multiplication, and one for division. The students came to class having watched all of the videos on their particular operation, and the class period was spent deciding what makes for a good explanation of that operation. At the end of the class, we split into new groups in which one member had just studied addition, one subtraction, one multiplication, and one division.

Today, we spent the entire class period reviewing videos in these teams. Each team had an “expert” on each operation from Tuesday, and the experts made suggestions on how to improve the explanations.

I asked everyone if this was useful enough to repeat on our fractions algorithms, and every student said that it was (most were emphatic). This appears to be a success.

My one reservation: although I am not sure, it appears that some students are trying to memorize a good explanation rather than understand. I know that I will be able to tell which students really understand from the oral exams, but I am wondering if it will be clear from the videos. Does anyone have any experience with memorizers?

Assessing with Student-Generated Videos

January 17, 2012

I regularly teach a content course for pre-service elementary education majors. The point of the class is for the students to be able to do things like explain why you “invert and multiply” when you want to divide fractions. This involves defining division (which, itself, requires two definitions—measurement division and partitive division are conceptually different), determining the answer using the definition, and justifying why the “invert and multiply” algorithm is guaranteed to give the same answer. At this stage, I simply tweak the course from semester to semester. This semester, though, I am making a major change in how I will assess the students.
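As an aside, here is roughly the kind of justification I mean by “explain why you invert and multiply,” sketched in LaTeX using the measurement meaning of division and generic fractions a/b and c/d. This is only an illustrative sketch, not the exact write-up I expect from students:

\[
\frac{a}{b} \div \frac{c}{d}
  = \frac{ad}{bd} \div \frac{bc}{bd}   % rewrite both fractions with the common denominator bd
  = ad \div bc                         % how many groups of bc pieces fit into ad pieces, where each piece has size 1/(bd)?
  = \frac{ad}{bc}
  = \frac{a}{b} \cdot \frac{d}{c}.     % which is exactly “invert and multiply”
\]

The middle step is the measurement-division step: once both fractions are written in same-sized pieces, dividing the fractions is just dividing whole-number counts of those pieces.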

Since this class is for future teachers, it makes sense to assess them on how well they can teach these ideas. So there are three main ways of assessing the students this semester:

  1. The students will have two examinations. Part of each examination will be standard (a take-home portion and an in-class portion), but there will also be an oral part of the examination. The oral portion will require students to explain why portions of the standard arithmetic algorithms work the way they do.

    I only have 31 students in this class (across two sections), so hopefully this will be doable. Moreover, I am going to distribute the in-class portion of the exams over a period of weeks: many classes will have a five-minute quiz that will actually be a portion of the midterm.

  2. The students will regularly be presenting on the standard algorithms in class. This is only for feedback, not for a grade. I am hoping that the audience will listen more skeptically to another student than they do to me.
  3. The students will be creating short screencasts explaining each of the standard algorithms (thanks to Andy Rundquist for this idea). Students will be given feedback throughout the semester on how to improve their screencasts, but they will create a final portfolio blog that contains all of their (hopefully improved) screencasts for the semester. This portfolio blog will be graded.

I will keep you posted. I welcome any ideas on how to improve this.

CPR

August 22, 2011

“One idea people had was to check out Calibrated Peer Review. I have only scratched the surface at that site but I’m grateful for being pointed to it.”

That was a sentence from Andy Rundquist’s blog. As much as Andy ever has a throwaway line in a blog entry, this was it—this was his only mention of Calibrated Peer Review (CPR). I imagine that Andy simply put it in his weblog so he could find it later, on the off-chance that he ever thought about it again. But it changed my semester.

I have decided to use CPR in my real analysis courses this semester. Here is what CPR is in a nutshell:

  1. Students log on to CPR to get a writing assignment.
  2. Students complete the assignment and upload it to the CPR website.
  3. Students view three versions of the same assignment, all written by the instructor. These three versions are examples of differing quality.
  4. Students need to make judgments about the quality of each of the three instructor-written examples. The students answer specific questions about each article. If a student’s assessment of each of the three pieces agrees with the instructor’s, the student moves to the next step. Otherwise, she must start the evaluation process again. This repeats until the student agrees with the instructor’s assessment.

    The purpose of this step is to “train” students to critically examine these assignments; this is the “calibrated” part of “Calibrated Peer Review.”

  5. The student reads an anonymous article from a peer and rates it on the same criteria as the previous step. This happens a total of three times.
  6. The student evaluates his/her own article.
  7. The student sees the results from other people’s evaluation of his/her article.

By the end of this process, the student will have evaluated a total of seven different versions of the writing assignment, and will have thought about what makes a good piece of writing seven times.

I was planning on doing peer review, and I was planning on having students evaluate three different versions of the same proof. This combines the two in a nice way.

[Edit: A member of the CPR team emailed me to tell me that there is a pay version of CPR that supports direct upload of PDF files (among other things). I don’t think that I can make it work this school year, but it would render the workaround in the rest of this post irrelevant.]

[Edit: Also, here is a link to a screencast on the perhaps-unnecessary process below.]

The one catch: the CPR website only accepts text and HTML, which does not work well with mathematics. My workaround is this:

  1. The student writes up the solution offline in LaTeX.
  2. The student uploads the resulting PDF to our Moodle site.
  3. The student copies the URL from the Moodle site and creates a link to that PDF within the CPR website.

This is not the most elegant workaround, but it should work. If you have a better idea, I would love to hear it.

SBF Grading Policy (Draft)

July 1, 2011

I previously wrote about transitioning from Standards Based Grading (SBG) to Standards Based Feedback (SBF). Here is a first pass at the policies. These will hopefully address Andy Rundquist’s question about grading.

In a nutshell, a student’s grade is determined by (roughly) the number of standards met. Slightly more detail is given in the list below, and an excerpt from the first draft of my syllabus provides even more detail below it.

I would appreciate feedback, ideas, and critiques. This is a first draft, and there must be many improvements that can be made.

  1. Homework (in the form of proofs) will be assigned regularly, but it will only be submitted for written feedback—no grades.
  2. Students will also have frequent (weekly?) opportunities for peer feedback on their proofs.
  3. Since I need to assign a grade at the end of the semester, the students will need to reflect on how their homework has demonstrated understanding of the course standards. They will assemble well-written, correct homework in a portfolio that summarizes how they met the standards for the semester.
  4. The portfolio will also contain the student’s favorite three proofs for the semester. These should be correct and well-written. Ideally, the students will also have other reasons for including them—perhaps they worked really hard on the particular proofs, found them surprising, or found them particularly interesting.
  5. Also in the portfolio will be a cover sheet cataloging the homework assignments that correspond to each standard.
  6. The portfolio will contain a self-evaluation. We will take class time in the beginning of the semester to discuss what constitutes a good proof, and the syllabus (see below) details how the portfolio will be graded. The student will have to do an honest self-evaluation of the portfolio.
  7. At midsemester, students will need to submit a trial portfolio (thanks, Joss). This will be done for credit—students either get 100% on this assignment or 0%. The purpose for this is to give students a practice run at this unusual form of grading—I don’t want their first experience with it to be high-stakes.
  8. There will also be at least one traditional graded midterm (the students will decide how many) and a final.
  9. There will be at least one ungraded, feedback-only midterm.

Syllabus Excerpt

Homework

You will be given a selection of homework problems to do each night. You are encouraged to work with other people, but you must write up your own solutions.

There are three levels to handing in homework.

  1. Once per cycle, you can hand in three proofs for me to look at; these proofs should be considered drafts, not final papers. I will give you comments on what you did well and what you need to improve upon in your next draft. I will give you only feedback on how to improve; I will not give you a grade.
  2. There will frequently be an opportunity for peer feedback of the proofs in class. Your classmates will give you feedback on the quality of your proof, and you will do the same to their proofs.
  3. At two points in the semester, you will hand in proofs to be graded. See the grading section below.

Basically, I want you to have very good proofs by the time they are assigned a grade, and I am going to help you improve your homework (without any penalty) until then.

This homework should be mostly done in LaTeX, if only for the very practical reason that you will be re-submitting drafts; instead of re-writing each draft by hand, you will be able to simply edit a computer file. You will put more time into creating the file at the beginning, but you will save time with each draft after that.
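If you have never used LaTeX before, here is a minimal, hypothetical skeleton for a single homework proof; the file name, packages, and the claim itself are placeholders rather than a required format:

% hw-draft.tex  (a minimal skeleton for one homework proof; hypothetical example)
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}

\newtheorem{claim}{Claim}

\begin{document}

\begin{claim}
If $n$ is an even integer, then $n^2$ is even.
\end{claim}

\begin{proof}
Since $n$ is even, we can write $n = 2k$ for some integer $k$.
Then $n^2 = (2k)^2 = 2(2k^2)$, which is twice an integer, so $n^2$ is even.
\end{proof}

\end{document}

Compile it (with pdflatex, for example), revise the file in response to feedback, and recompile; that is the whole point of keeping your drafts in a single editable file.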

Portfolio

At the end of the semester, you should have a collection of completed homework problems. You will then reflect on the problems you have done, organize your homework, and submit a selection of your completed homework assignments (called your “portfolio”) for a grade. You will literally create a physical portfolio of your best work.

Here is how you will select your portfolio:

  1. You will select all bits of homework that show evidence of the Course Topics (see the section above) and place them in the portfolio. You should have multiple proofs for those labeled “Core Topics”; you only need one proof to demonstrate evidence for the “Supporting Topics.”
  2. You will select your three Favorite Proofs and put them in the portfolio. These will be well-written according to the criteria discussed in class. Also, these may be proofs that you are particularly proud of.

There is a balancing act when deciding whether a proof goes into your portfolio. On one hand, you want to provide as much evidence for the Core Topics as possible (and some evidence for the Supporting Topics). On the other hand, an incorrect or poorly-written proof does not count as evidence and will weaken your portfolio. Part of your goal for the semester is to learn to determine what is a good proof and what is not, and to use your judgment accordingly.

Here is how your portfolio will be graded.

A: All of your Favorite Proofs are well-written, complete, and concise. Well-written, complete, concise proofs are provided for all topics, and many such proofs demonstrate understanding of each Core Topic. There are no wrong or poorly-written proofs in the portfolio.

B: All of your Favorite Proofs are well-written, complete, and concise. Many well-written, complete, concise proofs are provided for all Core Topics. Most of the Supporting Topics are supported by well-written, complete, concise proofs. There is at most one wrong proof in the portfolio.

C: All of your Favorite Proofs are well-written, complete, and concise. At least a couple of well-written, complete, concise proofs are provided for all Core Topics. Many of the Supporting Topics are supported by well-written, complete, concise proofs. There are at most two wrong proofs in the portfolio.

I will use my judgment to assign the grades AB, BC, CD, D, and F.

Finally, you will evaluate your portfolio and determine what grade you think you deserve according to the criteria above. Be honest and be specific in your justification.

Here is how you will organize your portfolio. The first page(s) will be a cover sheet with your name, your self-assigned grade (but no discussion of it), and a list of the topics for the course. As described below, you will number your proofs; on the cover sheet, write the number of each proof that provides evidence for each topic (a single proof might provide evidence for more than one topic).
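For instance, the cover sheet entries might look something like this (the topic names and proof numbers here are invented just to illustrate the layout; use the actual Course Topics from the syllabus):

    Core Topic: Convergence of sequences (Proofs 2, 5, 7)
    Core Topic: Continuity (Proofs 1, 4, 7)
    Supporting Topic: Suprema and infima (Proof 6)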

After the cover page, include your three Favorite Proofs. Start numbering these with “1.”

Next, include proofs that demonstrate the first Core Topic in the list in the syllabus, continuing the numbering of the proofs as needed. If one of your Favorite Proofs provides evidence for the first Core Topic, you do not need to include a second copy of it—your cover page will indicate that the proof is evidence for both. Then do the same with the second Core Topic; again, if a proof already included for the first Core Topic also demonstrates the second Core Topic, you do not need to include a second copy of it—your cover page will indicate that the proof is evidence for both.

Continue with the other Core Topics in the same manner. Then do the same for the Supporting Topics (in the order they are listed).

Finally, include your detailed self-assessment of the portfolio; be sure to include your self-grade on this sheet, too.

SBF

June 6, 2011

I stated in my previous post that Standards Based Grading has been an improvement over the old system, but also that there is a lot of room to do better. In this post, I will suggest how SBG could be greatly improved (and provide evidence that these changes will lead to improved learning).

Basically, I like two-thirds of SBG—specifically, the “standards based” part. The part where there is an opportunity for great improvement is the “grading” part of SBG. In particular, we should get rid of it whenever possible.

I think that Shawn Cornally has the right idea: the goal should be to move toward Standards Based Feedback. I am not sure exactly what Shawn means by this, but I interpret it to mean that we should keep the communication of the expectations for the course (the “topics/standards”), but replace the “grading” with detailed, ungraded feedback. A large body of research suggests that feedback is how students learn, and that giving a score or grade negates the value of any feedback students are given.

In particular, one study split students into three groups. One group received only comments, one group received only grades, and one group received both. The group that received only comments improved the most; the other two groups did equally well. In other words, there is evidence that the “B+”/“3.5”/“Acceptable” we write at the top of students’ papers negates all of the brilliant comments we write to help them learn (in the literature, grades are “ego-involving” feedback and comments are “task-involving” feedback). So let’s keep the good part of SBG and get rid of the bad part.

In fact, once we do this, we are awfully close to Assessment FOR Learning stuff (I needed a fourth word for my fourth link; the proper term is Assessment FOR Learning).

But we live in a world where we are expected to give final grades at the end of the semester/quarter/year. How could Standards Based Feedback (SBF?) work in determining final grades? I suggest having students create portfolios at the end of the semester that show off their best work for each standard. This means that students would have to reflect on what they have done in the class, re-evaluate it, and choose the work that best demonstrates their understanding (it is like studying for a final exam, but better). This would do multiple things:

  1. Students would be forced to reflect on what it means to demonstrate understanding. The class would have to decide early in the semester what it means to demonstrate understanding, and students would select items for their portfolios based on this discussion.
  2. This would give you a means of creating a final grade for the student—simply evaluate their portfolio according to the class’s idea of what makes for a good demonstration of understanding.
  3. In private communication, Joss Ives has correctly wondered about how “synthesis questions” fit into SBG. With portfolio grading, this would not be an issue. Students might be able to use a synthesis question as evidence for multiple standards.

There are certainly other ways of doing this, but the most important thing is to minimize the grading as much as possible. I am planning on implementing this in my real analysis courses in the fall. I will keep you posted.

What other ideas do you have? Please share in the comments.

[Update: My post is very similar to a post by Dan Anderson. If you like the ideas for SBF, please read his post.]

Peer Assessment

May 2, 2011

In my ongoing attempt to help my students understand the difference between arithmetic and mathematics, I had them do a peer assessment exercise on the papers they are writing to explain why certain arithmetic algorithms give correct answers (e.g. why long division gives the correct answer to a division question).

I had all of the students bring in drafts of their papers, and they had self-assessed their papers by “traffic-lighting”: a mark of green at the top of the paper means that the student thinks the paper is close to being the final draft, a red means that they think they have a long way to go, and a yellow is somewhere in between. I then grouped the students by traffic light, planning on having the green group and the yellow group read each other’s papers and offer feedback, while the red group would work with me directly to get back on track. The reality is that pretty much everyone gave themselves a yellow, so there was not much differentiation. (I stole this whole idea from Assessment for Learning: Putting it into Practice.)

Here is what I learned:

  1. This seemed to be extremely helpful to some students. I asked some students what they learned, and they told me exactly what I had hoped they had gotten out of it.
  2. I am not very good at organizing peer assessment sessions yet. I got the sense that many students did not know what they were supposed to be doing, and consequently they were off-task and/or left a couple minutes early. I also think that this might not warrant an entire class period.

I am hoping that I look back on this in five years and laugh at how hard this was for me in 2011. In the meantime, I would love any advice that people have on peer assessment—I really do need to improve on this.

Physics Blogs

April 19, 2011

I am delighted to have started reading a lot of physics blogs recently. In fact, they are beginning to make up the bulk of my PLN (personal learning network), productivity-wise.

One quick note: Mark Hammond recently wrote about intentionally showing (and having students create) mistakes. Some ideas are his, and others he attributes to other people (particularly Jim Doherty), but I am going to give him sole credit for the purposes of this post. He talked about showing two solutions to the same problem side-by-side—one with an error, and one without. The students must figure out which one is correct and where the error is.

This reminds me of the exercise I got from Assessment FOR Learning, described here. I recently repeated this exercise (with the question: “Why is the area of a right triangle (1/2)bh?”). Again, I had one example that actually answered this question (by combining two right triangles into a rectangle), and two examples that just explained what to do with the formula. The response from the students was intriguing: the first class was evenly split among the three as to which actually answered the question, whereas the second class picked the correct one (by a vote of 18 to 2 to 2).
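For reference, here is the idea behind the explanation that actually answers the question, sketched in LaTeX (this is only a sketch of the idea, not the full written example the students saw): two copies of the right triangle, joined along the hypotenuse, form a rectangle with side lengths b and h, so

\[
2 \cdot (\text{area of the right triangle}) = \text{area of the } b \times h \text{ rectangle} = bh,
\qquad\text{and therefore}\qquad
\text{area of the right triangle} = \tfrac{1}{2}\,bh.
\]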

Still, this is a difficult question for the students. I am having the students write short papers explaining why different algorithms give the correct answer (these algorithms are: standard multiplication algorithm, long division, “multiplying across” for fraction multiplication, “common denominators” for fraction addition and subtraction, “invert and multiply” for measurement fraction division, and “invert and multiply” for partitive fraction division). The students can submit drafts, and I will comment but not grade them. The final draft will be due at the end of the semester.

So far, students are still mostly struggling with answering the question that was given, although some progress is being made.

Okay, if it seems like the whole “physics blog” topic was just a cover for me to talk more about Assessment FOR Learning, that’s because it was. Sorry about that, physicists of the world.

Assessment FOR Learning, Take 1

April 1, 2011

I love working with pre-service elementary education majors. I frequently teach their content courses, and I am teaching them this semester. I usually spend a decent amount of time in my elementary education courses having the students explain why the standard algorithms for the operations on integers and fractions give correct answers. That is, I work to get the students to understand how the algorithm relates to the definition of the operation. This is something that they always have trouble with (I have taught the course 5-6 times).

But I think that this semester may be significantly better. The reason is that I am applying techniques I learned from Black’s Assessment for Learning: Putting it into Practice. Here is what we did in class yesterday:

  1. I had the students determine qualities that make an explanation “good.” I prodded them on a couple of these, but we came up with:
    • The explanation is relevant; that is, the explanation answers the question at hand.
    • The explanation is appropriate for the audience (i.e. the explanation uses knowledge common to both the explainer and the explainee).
    • The answer is correct.
    • The answer is complete; there are no gaps that the audience would need filled in to understand the explanation.
    • The answer is concise; it is long enough, but no longer.
  2. I gave them three explanations of why the standard addition algorithm is really the same as the definition of addition (roughly, “combining and counting”); a worked illustration of what I mean is sketched at the end of this post. Here are the explanations: one was decent, another was solely an explanation of how (not why) the algorithm works, and a third was somewhere in between.
  3. I asked them how well each of the explanations did in each of our categories from 1.
  4. Initially, the students all loved the “how but not why” explanation (the second one). But when we delved into relevance, several students started saying that it did not answer the question. I could almost literally see light bulbs going off over several of the students’ heads. I think that this will greatly help their justification of several multiplication and division algorithms; I will keep you posted.

    In some sense, I am kicking myself for not doing this before. I have (in theory, at least) been a proponent of helping students develop their metacognitive skills. It seems like that is what I was doing yesterday: giving them tools to think about how they are thinking about explanations.
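Here is the sort of sketch I have in mind when I say the standard addition algorithm is “combining and counting”; the numbers are my own illustration, not taken from the three handouts:

\[
47 + 38
  = (4\ \text{tens} + 7\ \text{ones}) + (3\ \text{tens} + 8\ \text{ones})  % combine the two collections
  = 7\ \text{tens} + 15\ \text{ones}                                       % count each kind of piece
  = 8\ \text{tens} + 5\ \text{ones}                                        % regroup ten of the ones into one more ten
  = 85.
\]

The “carry the 1” step in the standard algorithm is exactly that regrouping of fifteen ones into one ten and five ones.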