Posts Tagged ‘Formative Assessment’

So you suddenly have 68 students enrolled?

August 11, 2013

Suppose you are an instructor who uses Inquiry-Based Learning. You are used to running a particular course with 30–35 students, and you are about to start teaching that course in two weeks. But then you find out that you have 68 students registered for your class. What do you do?

Peer Instruction, hands down. Here is how you do it.

Since I am assuming that you only have two weeks to prepare, this is the most basic way of implementing Peer Instruction. It would be even better to include Robert Talbert’s Guided Practice idea if you are able.

If it is too late to get “clickers,” use Poll Everywhere, Socrative, or Learning Catalytics. I would tend toward Poll Everywhere, since it is pretty cheap ($65 per month for 68 students—get someone else to pay for it), and students only need a texting plan to use it. But Learning Catalytics seems pretty awesome; I just don’t trust all of the students to have a tablet or smart phone.

Do you have a textbook for the course? If so, here is the recipe:

  1. On the first day of class, assign students to fixed teams of 2 or 3. This will help every student feel like they are part of a community in your class. Students should sit together with their team. You may want to change up teams later in the semester.
  2. Students read a section of the text the night before class. You prepare 5–10 multiple choice questions based on the section. These questions should cover the main points of the section. Some questions will simply help students understand a definition or concept; others will force students to confront misconceptions. Peer Instruction is awesome for confronting misconceptions. Just make sure that you have good distractors for each question.
  3. Everyone comes to class.
  4. If you need to pass back papers, make administrative announcements, etc., you can do that at the beginning of class. But do not, under any circumstance, give an overview of the section; this will teach them that they do not need to read the section, and the result will be that your class will eventually morph into a standard lecture. Instead, simply start with the first clicker question.
  5. Display the question on the screen. Have students silently think about the question themselves and “click” their favorite answer when they are ready. You may want to give them a fixed time limit here, although I usually do not; I can usually tell how much time students need by the number of students who have already responded. But I usually do not have 68 students.
  6. Look at the results, but do not let the students see the results (mute the projector if you need to). If the students overwhelmingly get the correct answer, display the results and give a very brief explanation about why the correct answer is correct AND why the other answers are incorrect. (Note: there is a high bar for “overwhelmingly correct.” For instance, on a True/False question, if half of the students know that the correct answer is True, say, and the other half guess blindly, then 75% of the students will answer correctly. This is bad, since half of the class does not understand. So you might want 90% correct answers on a True/False question, slightly lower for a question with three options, etc. This is an art and not a science, though; a rough way to account for blind guessing is sketched just after this list.)
  7. On the other hand, if the students do not overwhelmingly answer correctly, tell the students to discuss their answers with their team. The students should try to convince the other team members of their answer, but the students should be open to changing their mind. Once the team agrees on a single answer, have them re-vote. You should wander around the class as much as you can here, eavesdropping. Once most students have responded (or your time limit is up), display the results to the class.
  8. Now, explain why the correct answer(s) is (are) correct AND why the incorrect answers are incorrect. You can tell how long you should spend talking about this by how the teams did in the most recent round of voting. If they did well, do not talk for long. If they did not do well, give them a more thorough lecture (although you probably will not need to talk for more than 10 minutes).
  9. Repeat with the remainder of your questions until class ends.
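
To make the “overwhelmingly correct” judgment in step 6 a little more concrete, here is a minimal sketch of the arithmetic behind it, under the simplifying assumption that students who do not know the answer guess uniformly at random. The function name and the 80% target are illustrative choices of mine, not part of the recipe.

```python
def expected_correct(knowers, n_options):
    """Expected fraction of correct responses if a fraction `knowers`
    of the class truly knows the answer and everyone else guesses
    uniformly at random among `n_options` choices (a simplifying
    assumption)."""
    return knowers + (1 - knowers) / n_options

# The True/False example from step 6: half know it, half guess.
print(expected_correct(0.5, 2))   # 0.75 -- looks fine, but half the
                                  # class does not understand

# Rough display thresholds if you want evidence that ~80% of the
# class actually knows the answer (illustrative target, not a rule):
for n_options in (2, 3, 4, 5):
    print(n_options, expected_correct(0.8, n_options))
# 2 -> 0.90, 3 -> ~0.87, 4 -> 0.85, 5 -> 0.84
```

Under that (admittedly crude) model, the 90% bar for True/False questions mentioned above falls out naturally, and the bar drops slightly as the number of answer choices grows.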

This will get every single student involved, and my students have overwhelmingly loved the experience. There is also evidence that Peer Instruction helps students learn enough to raise their grades by about half a letter grade.

If you do NOT have a textbook, you should do your best to find some sort of free online text for them, write your own notes, create your own lecture videos for students to view before class, and/or use existing videos (e.g. Khan Academy) to “transfer” knowledge to the students before class. Then you can use class time to have the students make sense of the new knowledge.

Failing this, lecture. But build some number of clicker questions into your lecture. The process is the same as outlined above, but you will just have fewer questions.


Grading for Probability and Statistics

January 23, 2013

Here is what I came up with for grading my probability and statistics course. First, I came up with standards my students should know:

“Interpreting” standards (these correspond to expectations for a student who will earn a “C” for the course):

  1. Means, Medians, and Such
  2. Standard Deviation
  3. z-scores
  4. Correlation vs. Causation and Study Types
  5. Linear Regression and Correlation
  6. Simple Probability
  7. Confidence Intervals
  8. p-values
  9. Statistical Significance

“Creating” standards (these correspond to a “B” grade):

  1. Means, Medians, and Standard Deviations
  2. Probability
  3. Probability
  4. Probability
  5. Confidence Intervals
  6. z-scores, t-scores, and p-values
  7. z-scores, t-scores, and p-values

(I repeat some standards to give them higher weight).

Finally, I have “Advanced” standards (these correspond to an “A” grade):

  1. Sign Test
  2. Chi-Square Test

Here is how the grading works: students take quizzes. Each quiz question is tied to a standard. Here are examples of some quiz questions:

(Interpreting: Means, Medians, and Such) Suppose the mean salary at a company is $50,000 with a standard deviation of $8,000, and the median salary is $42,000. Suppose everyone gets a raise of $3,000. What is the best answer to the following question: what is the new mean salary at the company?

(Interpreting: Standard Deviation) Pick four whole numbers from 1, …, 9 such that the standard deviation is as large as possible (you are allowed to repeat numbers).

(Creating: Means, Medians, and Standard Deviations) Find the mean, median, and standard deviation of the data set below. It must be clear how you arrived at the answer (i.e. reading the answer off of the calculator is not sufficient). Here are the numbers: 48, 51, 37, 23, 49.

Advanced standard questions will look similar to Creating questions.
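
If you want to sanity-check the arithmetic behind these sample questions, here is a quick sketch using Python’s standard statistics module. Whether the course wants the sample or the population standard deviation is a convention I am not assuming, so both are printed; the brute-force search and the “raise” check are just illustrations.

```python
from itertools import combinations_with_replacement
from statistics import mean, median, pstdev, stdev

# The Creating question: 48, 51, 37, 23, 49.
data = [48, 51, 37, 23, 49]
print(mean(data))    # 41.6
print(median(data))  # 48
print(stdev(data))   # ~11.74 (sample standard deviation)
print(pstdev(data))  # ~10.50 (population standard deviation)

# The second Interpreting question: four whole numbers from 1 to 9
# with the largest possible standard deviation (repeats allowed).
best = max(combinations_with_replacement(range(1, 10), 4), key=pstdev)
print(best)          # (1, 1, 9, 9) -- two values at each extreme

# The first Interpreting question: adding a constant (the $3,000
# raise) shifts the mean and median by that constant and leaves the
# standard deviation alone, so the new mean salary is $53,000.
shifted = [x + 3000 for x in data]
print(mean(shifted))    # 3041.6 (= 41.6 + 3000)
print(pstdev(shifted))  # ~10.50, unchanged
```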

At the end of the semester, for each standard, I count how many questions the student got completely correct. If that count is at least 3 (for Creating and Advanced) or at least 4 (for Interpreting), the student is said to have “completed” that standard (the student may opt to stop doing those quiz questions once the student has “completed” the standard).

If a student has “completed” every standard within the Interpreting standards, we say the student has “completed” the Interpreting standards. Similarly with Creating and Advanced.

Here are the grading guidelines (an “AB” is our grade that is between an A and a B):

- A student gets at least a C for the semester grade if and only if the student “completes” the Interpreting standards and gets at least a CD on the final exam.
- A student gets at least a B for the semester grade if and only if the student “completes” the Interpreting and Creating standards and gets at least a BC on the final exam.
- A student gets an A for the semester grade if and only if the student “completes” all of the standards, gets at least an AB on the final exam, and completes a project.
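
In case the bookkeeping is easier to read as code, here is a rough sketch of how the “completed” rule and the guidelines above could be tallied. The data structures, the grade ordering, and the function names are mine, not part of the official policy, and anything below a C is left to instructor judgment.

```python
# Thresholds from the rules above: 4 correct questions per
# Interpreting standard, 3 per Creating or Advanced standard.
THRESHOLDS = {"Interpreting": 4, "Creating": 3, "Advanced": 3}

# Assumed ordering of final-exam grades, low to high (AB/BC/CD are
# the in-between grades mentioned in the post).
GRADES = ["F", "D", "CD", "C", "BC", "B", "AB", "A"]

def completed_group(correct_counts, group):
    """True if every standard in `group` has enough fully correct
    quiz questions. `correct_counts[group]` maps each standard name
    to the number of questions answered completely correctly."""
    need = THRESHOLDS[group]
    return all(n >= need for n in correct_counts[group].values())

def semester_grade(correct_counts, final_exam, did_project):
    """A crude reading of the three guidelines above."""
    at_least = lambda g: GRADES.index(final_exam) >= GRADES.index(g)
    done = lambda group: completed_group(correct_counts, group)
    if (done("Interpreting") and done("Creating") and done("Advanced")
            and at_least("AB") and did_project):
        return "A"
    if done("Interpreting") and done("Creating") and at_least("BC"):
        return "B"  # at least a B
    if done("Interpreting") and at_least("CD"):
        return "C"  # at least a C
    return "below C (instructor judgment)"
```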

The project will be to do some experiment or observational study that uses a z-test, t-test, chi-square test, or sign test. It can be on any topic they want, and they can choose to collect data or use existing data. The students will have a poster presentation at my school’s Scholarship and Creativity Day.

I would appreciate any feedback that you have, although we are 1.5 weeks into the semester, so I am unlikely to incorporate it.

Peter Elbow is awesome

January 12, 2013

I am busy preparing for classes, but I want to post something here so that I can find it later: Peter Elbow writes about “minimal grading,” which is essentially the wheel that I am reinventing. Enjoy the article.

(hat tip to Angela Vierling-Claassen, who tweeted the article)

Assessing with Student-Generated Videos

January 17, 2012

I regularly teach a course for future elementary education majors. The point of the class is for the students to be able to do things like explain why you “invert and multiply” when you want to divide fractions. This involves defining division (which, itself, requires two definitions—measurement division and partitive division are conceptually different), determining the answer using the definition, and justifying why the “invert and multiply” algorithm is guaranteed to give the same answer. At this stage, I simply tweak the course from semester to semester. This semester, though, I am making a major change in how I will assess the students.
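
For readers who have not seen it, one common way to sketch the justification the course aims for uses the missing-factor view of division; this is a generic argument, not necessarily the exact development used in the course.

```latex
% "a/b divided by c/d" is the number x that solves the missing-factor
% equation x * (c/d) = a/b (the quotient is whatever you multiply the
% divisor by to get the dividend).
\[
  \frac{a}{b} \div \frac{c}{d} = x
  \quad\text{means}\quad
  x \cdot \frac{c}{d} = \frac{a}{b}.
\]
% Multiplying both sides by d/c, the reciprocal of the divisor, and
% using (c/d)(d/c) = 1 isolates x, which is exactly "invert and multiply":
\[
  x = x \cdot \frac{c}{d} \cdot \frac{d}{c}
    = \frac{a}{b} \cdot \frac{d}{c}.
\]
```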

Since this class is for future teachers, it makes sense to assess them on how they teach ideas. So there are three main ways of assessing the students this semester:

  1. The students will have two examinations. Part of each examination will be standard (a take-home portion and an in-class portion), but there will also be an oral part of the examination. The oral portion will require students to explain why portions of the standard arithmetic algorithms work the way they do.

    I only have 31 students in this class (I have two sections), so hopefully this will be doable. Moreover, I am going to distribute the in-class portion of the exams over a period of weeks: many classes will have a 5 minute quiz that will actually be a portion of the midterm.

  2. The students will regularly be presenting on the standard algorithms in class. This is only for feedback, and not for a grade. I am hoping that the audience will listen more skeptically to another student than they listen to me.
  3. The students will be creating short screencasts explaining each of the standard algorithms (Thanks to Andy Rundquist for this idea). Students will be given feedback throughout the semester on how to improve their screencasts, but they will create a final portfolio blog that contains all of their (hopefully improved) screencasts for the semester. This portfolio blog will be graded.

I will keep you posted. I welcome any ideas on how to improve this.

SBF Grading Policy (Draft)

July 1, 2011

I previously wrote about transitioning from Standards Based Grading (SBG) to Standards Based Feedback (SBF). Here is a first pass at the policies. These will hopefully address Andy Rundquist’s question about grading.

In a nutshell, a student’s grade is determined by (roughly) the number of standards met. Slightly more detail is given in the list below, and an excerpt from the first draft of my syllabus provides even more detail below it.

I would appreciate feedback, ideas, and critiques. This is a first draft, and there must be many improvements that can be made.

  1. Homework (in the form of proofs) will be assigned regularly, but it will only be submitted for written feedback—no grades.
  2. Students will also have frequent (weekly?) opportunities for peer feedback on their proofs.
  3. Since I need to assign a grade at the end of the semester, the students will need to reflect on how their homework has demonstrated understanding of the course standards. They will assemble well-written, correct homework in a portfolio that summarizes how they met the standards for the semester.
  4. The portfolio will also contain the student’s favorite three proofs for the semester. These should be correct and well-written. Ideally, the students will also have other reasons for including them—perhaps they worked really hard on the particular proofs, found them surprising, or found them particularly interesting.
  5. Also in the portfolio will be a cover sheet cataloging the homework assignments that correspond to each standard.
  6. The portfolio will contain a self-evaluation. We will take class time in the beginning of the semester to discuss what constitutes a good proof, and the syllabus (see below) details how the portfolio will be graded. The student will have to do an honest self-evaluation of the portfolio.
  7. At midsemester, students will need to submit a trial portfolio (thanks, Joss). This will be done for credit—students either get 100% on this assignment or 0%. The purpose for this is to give students a practice run at this unusual form of grading—I don’t want their first experience with it to be high-stakes.
  8. There will also be at least one traditional graded midterm (the students will decide how many) and a final.
  9. There will be at least one ungraded, feedback-only midterm.

Syllabus Excerpt

Homework

You will be given a selection of homework problems to do each night. You are encouraged to work with other people, but you must write up your own solutions.

There are three levels to handing in homework.

  1. Once per cycle, you can hand in three proofs for me to look at; these proofs should be considered drafts, not final papers. I will give you comments on what you did well and what you need to improve upon in your next draft. I will give you only feedback on how to improve; I will not give you a grade.
  2. There will frequently be an opportunity for peer feedback of the proofs in class. Your classmates will give you feedback on the quality of your proof, and you will do the same to their proofs.
  3. At two points in the semester, you will hand in proofs to be graded. See the grading section below.

Basically, I want you to have very good proofs by the time they are assigned a grade, and I am going to help you improve your homework (without any penalty) until then.

This homework should be mostly done in LaTeX, if only for the very practical reason that you will be re-submitting drafts; instead of re-writing each draft by hand, you will be able to simply edit a computer file. You will put more time into creating the file at the beginning, but you will save time with each draft after that.

Portfolio

At the end of the semester, you should have a collection of completed homework problems. You will reflect on the problems you have done, organize your homework, and submit a selection of your completed homework assignments (called your “portfolio”) for a grade; this will literally be a physical portfolio of your best work.

Here is how you will select your portfolio:

  1. You will select all bits of homework that show evidence of the Course Topics (see the section above) and place them in the portfolio. You should have multiple proofs for those labeled “Core Topics”; you only need one proof to demonstrate evidence for the “Supporting Topics.”
  2. You will select your three Favorite Proofs and put them in the portfolio. These will be well-written according to the criteria discussed in class. Also, these may be proofs that you are particularly proud of.

There is a balancing act when deciding whether a proof goes into your portfolio. On one hand, you want to provide as much evidence for the Core Topics as possible (and some evidence for the Supporting Topics). On the other hand, an incorrect or poorly-written proof is not counted as evidence and will weaken your portfolio. Part of your goal for the semester is to learn to determine what is a good proof and what is not, and to use your judgment accordingly.

Here is how your portfolio will be graded.

A: All of your Favorite Proofs are well-written, complete, and concise. Well-written, complete, concise proofs are provided for all topics; many proofs demonstrate understanding of each Core Topic. There are no wrong or poorly-written proofs in the portfolio.

B: All of your Favorite Proofs are well-written, complete, and concise. Many well-written, complete, concise proofs are provided for all Core Topics. Most of the Supporting Topics are supported by well-written, complete, concise proofs. There is at most one wrong proof in the portfolio.

C: All of your Favorite Proofs are well-written, complete, and concise. At least a couple of well-written, complete, concise proofs are provided for all Core Topics. Many of the Supporting Topics are supported by well-written, complete, concise proofs. There are at most two wrong proofs in the portfolio.

I will use my judgment to decide the grades AB, BC, CD, D, and F.

Finally, you will evaluate your portfolio and determine what grade you think you deserve according to the criteria above. Be honest and be specific in your justification.

Here is how you will organize your portfolio. The first page(s) will be a cover sheet with your name, your self-assigned grade (but no discussion of it), and a list of the topics for the course. Since the proofs will be numbered (see below), next to each topic you should write the numbers of the proofs that provide evidence for it (a single proof might provide evidence for more than one topic).

After the cover page, include your three Favorite Proofs. Start numbering these with “1.”

Next, include proofs that demonstrate the first Core Topic in the list in the syllabus. Continue numbering these proofs as needed. If one of your Favorite Proofs provides evidence for the first Core Topic, you do not need to include a second copy of it; your cover page will indicate that the proof is evidence for both. Then do the same with the second Core Topic. Note that if a proof from the first Core Topic also demonstrates evidence for the second Core Topic, you do not need to include a second copy of it; your cover page will indicate that the proof is evidence for both.

Continue with the other Core Topics in the same manner. Then do the same for the Supporting Topics (in the order they are listed).

Finally, include your detailed self-assessment of the portfolio; be sure to include your self-grade on this sheet, too.

Peer Assessment

May 2, 2011

In my ongoing attempt at helping my students to understand the difference between arithmetic and mathematics, I had my students do a peer assessment exercise on the papers they are writing to explain why certain arithmetic algorithms give correct answers (e.g. why long division gives the correct answer to a division question).

I had all of the students bring drafts of their papers in, and the students had self-assessed their papers by “traffic-lighting”: a mark of green at the top of the paper means that the student thinks the paper is close to being the final draft, a “red” means that they think they have a long way to go, and a “yellow” is somewhere in between. I then grouped the students by “traffic-light,” planning on having the “green” group and “yellow” group read each other’s papers and offer feedback, while the “red” group would work with me directly to get them on track. The reality is that pretty much everyone gave themselves a “yellow,” so this was not much of a differentiation. (I stole this whole idea from Assessment for Learning: Putting it into Practice.)

Here is what I learned:

  1. This seemed to be extremely helpful to some students. I asked some students what they learned, and they told me exactly what I had hoped they had gotten out of it.
  2. I am not very good at organizing peer assessment sessions yet. I got the sense that many students did not know what they were supposed to be doing, and consequently they were off-task and/or left a couple minutes early. I also think that this might not warrant an entire class period.

I am hoping that I look back on this in five years and laugh at how hard this was for me in 2011. In the meantime, I would love any advice that people have on peer assessment—I really do need to improve on this.

Assessment FOR Learning, Take 1

April 1, 2011

I love working with pre-service elementary education majors. I frequently teach their content courses, and I am teaching them this semester. I usually spend a decent amount of time in my elementary education courses having the students explain why the standard algorithms for the operations on integers and fractions give correct answers. That is, I work to get the students to understand how the algorithm relates to the definition of the operation. This is something that they always have trouble with (I have taught the course 5-6 times).

But I think that this semester may be significantly better. The reason why is that I am applying techniques I learned in Black’s Assessment for Learning: Putting it into Practice. Here is what we did in class yesterday:

  1. I had the students determine qualities that make an explanation “good.” I prodded them on a couple of these, but we came up with:
    • The explanation is relevant; that is, the explanation answers the question at hand.
    • The explanation is appropriate for the audience (i.e. the explanation uses knowledge common to both the explainer and the explainee).
    • The answer is correct.
    • The answer is complete; there are no gaps that the audience would need filled in to understand the explanation.
    • The answer is concise; it is long enough, but no longer.
  2. I gave them three explanations for why the standard addition algorithm is really the same as the definition of addition (roughly, “combining and counting”). Here are the explanations: one was decent, another was solely an explanation of how (not why) the algorithm works, and a third was somewhere in between.
  3. I asked them how well each of the explanations did in each of our categories from 1.
  4. Initially, the students all loved the “how but not why” explanation (the second one). But when we delved into relevance, several students started saying that it did not answer the question. I could almost literally see light bulbs going off over several of the students’ heads. I think that this will greatly help their justification of several multiplication and division algorithms; I will keep you posted.

    In some sense, I am kicking myself for not doing this before. I have (in theory, at least) been a proponent of helping students develop their metacognitive skills. It seems like that is what I was doing yesterday: giving them tools to think about how they are thinking about explanations.