Posts Tagged ‘exam formats’

Assessment Idea for Calculus I: Feedback desperately wanted!

June 25, 2014

I am planning an overhaul of Calculus I for the fall. I used a combination of Peer Instruction and student presentations in Fall 2012, and I was not completely happy with it.

So I am starting from scratch. I am following the backwards design approach, and I feel like I am close to being done with my list of goals for the students. Here is my draft of learning goals, sorted by the letter grades they are associated with:

[Embedded document: draft list of learning goals]

I previously had lists of “topics” (essentially “Problem Types”). These lists had 10–20 items, and tended to be broad (e.g. “Limits,” “Symbolic derivatives,” “Finding and classifying extrema”). This list will give me (and, I hope, the students) more detailed feedback on what they know.

This differs from my past practice, in which I listed “learning goals” that were really just broad topics (so they weren’t learning goals at all, but rather “topics” or “types of problem”). Students then had to demonstrate these goals on unlabeled quizzes.

The process would be this:

  1. A student does a homework problem or quiz problem.
  2. The student then “tags” every instance of where she provided evidence of a learning goal.
  3. The student hands in the problem.
  4. The grader grades it in the following way: the grader scans for the tags. If a tag corresponds to correct, relevant work AND points to the specific relevant part of the solution, the student gets credit for demonstrating that she understands that learning goal. Otherwise, no credit.
  5. Repeat for each tag.
  6. Students need to demonstrate understanding/mastery/whatever for every learning goal n times throughout the semester.
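The bookkeeping in steps 4–6 is easy to sketch in code. This is a minimal sketch under assumed conventions: the function names are hypothetical, and each graded tag is recorded as a (goal, correct-and-relevant, specific) triple.

```python
from collections import Counter

# Hypothetical sketch: each graded tag is a (goal, correct, specific) triple,
# where `correct` means the tagged work is correct and relevant, and
# `specific` means the tag points at the exact part of the solution.
def credit_earned(tags):
    """Goals that earn credit on one piece of work (steps 4-5)."""
    return [goal for goal, correct, specific in tags if correct and specific]

def completed_goals(all_tagged_work, n=6):
    """Goals demonstrated at least n times during the semester (step 6)."""
    counts = Counter()
    for tags in all_tagged_work:   # one tag list per quiz or homework problem
        counts.update(credit_earned(tags))
    return {goal for goal, times in counts.items() if times >= n}
```

Note that under this rule a correctly worked but vaguely tagged quiz (like the second example below) contributes nothing, since its tags are not specific.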

Below are three examples of how this might be done on a quiz. The first example is work by an exemplary student: the student would get credit for every tag here. (In all three examples, the blue ink represents the student work and the red ink indicates the tags.)

[Embedded document: first tagged quiz example]

The second example has the same work and the same tags, but the student would not get credit due to lack of specificity; the student should have pointed out exactly where each learning goal was demonstrated.

[Embedded document: second tagged quiz example]

The third example (like the first) was tagged correctly. However, there are mistakes and omissions. In the third example, the student failed to claim credit for the “FToCI” and the “Sum/Difference Rule for Integrals.” Because of this, the student would not get credit for these two goals (even though the student did them; the point is to get students reflecting on what they did).

Additionally, the student incorrectly took the “antiderivative of the polynomial,” which caused the entire solution to the “problem of motion” to be wrong. Again, the student would not get credit for these two goals.

However, the student does correctly indicate that they know “when to use an integral,” could apply the “Constant Multiple Rule for integrals,” and “wrote in complete sentences.” The student would get credit for these three.

[Embedded document: third tagged quiz example]

I like this method over my previous method because (1) I can have finer-grained standards and (2) students will not only “do,” but also reflect on what they did. I do not like it because it is more cumbersome than other grading schemes.

My current idea (after talking a lot to my wife and Robert Campbell, and then stealing an idea from David Clark) is to require that each student show that he/she can do each learning goal six times, but up to three of them can be done on homework (so at least three have to be done on quizzes). I usually have not assigned any homework, save for the practice that students need to do to do well on the quizzes. This is a change in policy that (1) frees up some class time, (2) helps train the students on how to think about what the learning goals mean, (3) forces some extra review of the material, (4) provides an additional opportunity to collaborate with other students, and (5) provides an opportunity for students to practice quiz-type problems.

My basic idea is that I will ask harder questions on the homework but grade them more leniently (which implies that I will ask easier questions on the quizzes but grade them more strictly).

I have been relying solely on quizzes for the past several years, so grading homework is something that I haven’t done for a while. I initially planned on only allowing quizzes for this system, too, but it seemed like things would be overwhelming for everyone: we would likely have daily quizzes (rather than maybe twice per week); I would likely not give class time to “tag” quizzes, so students would do this at home (creating a logistical nightmare); and I would probably have to spend a lot more time coaching students on how to tag (whereas they now get to practice it on the homework with other people).

Let’s end this post, Rundquist-style, with some starters for you.

  1. This is an awesome idea because …
  2. This is a terrible idea because …
  3. This is a good idea, but not worth the effort because …
  4. This is not workable as it is, but it would be if you changed …
  5. Homework is a terrible idea because …
  6. You are missing this learning goal …
  7. My name is TJ, and you are missing this process goal …

Grading for Probability and Statistics

January 23, 2013

Here is what I came up with for grading my probability and statistics course. First, I came up with standards my students should know:

“Interpreting” standards (these correspond to expectations for a student who will earn a C for the course):

  1. Means, Medians, and Such
  2. Standard Deviation
  3. z-scores
  4. Correlation vs. Causation and Study Types
  5. Linear Regression and Correlation
  6. Simple Probability
  7. Confidence Intervals
  8. p-values
  9. Statistical Significance

“Creating” standards (these correspond to a “B” grade):

  1. Means, Medians, and Standard Deviations
  2. Probability
  3. Probability
  4. Probability
  5. Confidence Intervals
  6. z-scores, t-scores, and p-values
  7. z-scores, t-scores, and p-values

(I repeat some standards to give them higher weight).

Finally, I have “Advanced” standards (these correspond to an “A” grade):

  1. Sign Test
  2. Chi-Square Test

Here is how the grading works: students take quizzes. Each quiz question is tied to a standard. Here are examples of some quiz questions:

(Interpreting: Means, Medians, and Such) Suppose the mean salary at a company is $50,000 with a standard deviation of $8,000, and the median salary is $42,000. Suppose everyone gets a raise of $3,000. What is the new mean salary at the company?

(Interpreting: Standard Deviation) Pick four whole numbers from 1, . . . , 9 such that the standard deviation is as large as possible (you are allowed to repeat numbers).

(Creating: Means, Medians, and Standard Deviations) Find the mean, median, and standard deviation of the data set below. It must be clear how you arrived at the answer (i.e. reading the answer off of the calculator is not sufficient). Here are the numbers: 48, 51, 37, 23, 49.

Advanced standard questions will look similar to Creating questions.
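As a sanity check, the example questions above can be verified numerically. A quick sketch (the variable names are mine; `statistics.stdev` computes the sample standard deviation, which may or may not match the convention used in class):

```python
import statistics
from itertools import combinations_with_replacement

# Mean, median, and standard deviation for the "Creating" example data set.
data = [48, 51, 37, 23, 49]
mean = statistics.mean(data)       # 208 / 5 = 41.6
median = statistics.median(data)   # sorted: 23, 37, 48, 49, 51 -> 48
stdev = statistics.stdev(data)     # sample standard deviation

# Salary question: adding a constant raise shifts the mean by that constant,
# and leaves the standard deviation unchanged.
new_mean_salary = 50_000 + 3_000   # 53,000

# Standard-deviation question: brute-force the four numbers from 1..9
# (repeats allowed) with the largest standard deviation.
best = max(combinations_with_replacement(range(1, 10), 4), key=statistics.stdev)
```

The brute force finds that two 1s and two 9s spread the data as far from the mean as possible.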

At the end of the semester, I count how many questions the student gets completely correct in each standard. If the number is at least 3 (for Creating and Advanced) or at least 4 (for Interpreting), the student is said to have “completed” that standard (the student may opt to stop doing quiz questions for a standard once it is “completed”).

If a student has “completed” every standard within the Interpreting standards, we say the student has “completed” the Interpreting standards. Similarly with Creating and Advanced.
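Stated as code, the completion rule is simple. A hypothetical sketch (names are mine; quiz results are assumed to be tallied as counts of completely correct questions per standard):

```python
# Hypothetical sketch of the "completed" rule: a standard is completed once
# the student has enough completely correct quiz questions for it
# (4 for Interpreting, 3 for Creating and Advanced).
THRESHOLDS = {"Interpreting": 4, "Creating": 3, "Advanced": 3}

def standard_completed(tier, correct_count):
    return correct_count >= THRESHOLDS[tier]

def tier_completed(tier, counts_by_standard):
    """A tier is completed when every one of its standards is completed."""
    return all(standard_completed(tier, c) for c in counts_by_standard.values())
```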

Here are the grading guidelines (an “AB” is our grade that is between an A and a B):

- A student gets at least a C for a semester grade if and only if the student “completes” the Interpreting standards and gets at least a CD on the final exam.
- A student gets at least a B for the semester grade if and only if the student “completes” the Interpreting and Creating standards and gets at least a BC on the final exam.
- A student gets an A for the semester grade if and only if the student “completes” all of the standards, gets at least an AB on the final exam, and completes a project.

The project will be to do some experiment or observational study that uses a z-test, t-test, chi-square test, or sign test. It can be on any topic they want, and they can choose to collect data or use existing data. The students will have a poster presentation at my school’s Scholarship and Creativity Day.

I would appreciate any feedback that you have, although we are 1.5 weeks into the semester, so I am unlikely to incorporate it.

What oral exams taught me

June 8, 2012

In my course for elementary education students, I once again gave oral exams—this time for the final exam. Here are two take-aways from the oral exams.

First, I need to do some peer instruction next time. In particular, students had a difficult time understanding the difference between the “whole” of a fraction and the “denominator” of a fraction. (Consider “\frac{1}{2} of a mouse” and “\frac{1}{2} of an elephant.” Both have a denominator of “2,” but the whole of the first is “mouse” and the whole of the second is “elephant.” This leads to different meanings.) I think that three clicker questions would eliminate this confusion.

Second, I was shocked at how ineffective my lectures were. The oral exam questions (which students also had to create screencasts for) were ones that had previously been done in class (for example: why does inverting and multiplying give the correct answer to a division problem?). The process was this: students would figure out why the algorithm works, and then present at the end of a class period. I would begin the next class period by giving the same argument. Other class periods began with students presenting on similar questions, the class evaluating the presentations, and—if needed—me presenting the correct explanation.
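For readers unfamiliar with the invert-and-multiply question, one standard version of the argument (a sketch using the missing-factor meaning of division; not necessarily the exact argument given in class) goes like this:

```latex
% Division as a missing-factor question: "a/b divided by c/d" asks
% which number x satisfies x * (c/d) = a/b.
x \cdot \frac{c}{d} = \frac{a}{b}
% Multiplying both sides by the reciprocal d/c isolates x:
x = \frac{a}{b} \cdot \frac{d}{c}
% So dividing by c/d gives the same answer as multiplying by d/c.
```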

Furthermore, I gave the answers to each of the oral exam questions on the last day of class. So students saw the answer to each oral exam question at least three times, and probably more (especially since I had students view other students’ video solutions).

I was concerned that students would simply memorize these explanations. That did not happen. Either students understood the algorithm (I could tell from the oral exams—these students could answer any question that I had about the algorithm), or they did not understand any portion of it.

Most puzzling is that, in my student evaluations, some of my students complained that they were never shown how to do the algorithms correctly. This is in spite of seeing a completely correct solution to every problem between 3 and 10 times. I can only explain this in two ways:

  1. Somehow students did not understand that the solutions they saw were solutions to the problems from the oral exams and screencasts. This would mean that I did not clearly communicate the intent of presenting the solutions.
  2. Lecture was monumentally ineffective in helping them learn—so much so that students did not even remember that the solutions had been presented.

Do you have any other ideas?

Timed Midterms

November 9, 2009

I just finished administering and grading all of my midterms—I had one class on Thursday and two classes on Friday. I attempt to write short midterms, and I have written about this topic before.

I have two things to add. The first is that I think this gives students a more accurate assessment of their ability. Short exams eliminate the “I knew the material, but I got a bad grade because I ran out of time”-factor. You eliminate an excuse for the students.

The second comment is one of goals: I am generally happy with the length of the exam if the exam has an LD-50 of “10 minutes.” By this, I mean that 50% of the students finish with at least ten minutes left in the class period. There is no scientific basis to this, only a sense that I am providing a thorough enough exam without creating much of a time pressure.

I was able to meet my goal in two of my three classes. Interestingly enough, I teach two sections of multivariable calculus and only one class met the LD-50 goal. I am not sure why this is.

On Midterms

October 22, 2009

I am in the middle of midterms. I tend to write three different types of exams: two types of in-class exams and one type of take-home exam. I will mix the take-home with either type of in-class midterm.

The first type of in-class midterm is a check that students are able to do the basic things from the course. This includes recalling definitions and answering straightforward questions. In a calculus class, I might include a question like “What is the derivative of f(x)=x^2?” The purpose of this in-class exam is to act as an incentive for the students to take time to learn the course material.

The take-home exam has a different purpose. Here, I’ll ask questions that require students to think about concepts in novel ways. I often make these open book, open notes, group exams. In a calculus class, I might include a problem like: “Find the equation of a tangent line to f(x)=x^2+1 that goes through the point (4,8).” The purpose of this type of midterm is less to assess the student’s knowledge than to help her acquire more. I hope that thinking about these questions leads to a greater understanding of the material.
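For reference, here is a sketch of how that tangent-line problem works out. The point (4,8) is not on the curve, which is what makes the question novel: a tangent to f(x)=x^2+1 at x=a is y=(a^2+1)+2a(x-a), and requiring it to pass through (4,8) gives a quadratic in a:

```latex
8 = (a^2 + 1) + 2a(4 - a) = -a^2 + 8a + 1
\quad\Longrightarrow\quad
a^2 - 8a + 7 = 0
\quad\Longrightarrow\quad
a = 1 \;\text{ or }\; a = 7
```

This yields two tangent lines, y = 2x (tangent at (1,2)) and y = 14x - 48 (tangent at (7,50)); both pass through (4,8).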

The second type of in-class midterm is like the take-home, only it is in-class and not a group test (with a couple of exceptions). The main lesson I have learned here is to only give a small number of questions, since each of the questions is fairly involved.

I have given all three types of midterms so far this semester. I tend to always include a component of “learning exam” (rather than “assessing exam”), as my main goal is to help students learn. However, I also need to assign grades, and this is the reason for the purely assessing exams.

I don’t feel great about giving the assessing exams. I do not like the idea of making the students demonstrate that they learned the material, largely because I have read psychology results that say this type of “incentive” (a bad grade is a “stick,” or a good grade is a “carrot”) decreases student learning. I would love to hear of creative ways of having students learn mathematics, assessing what they know, and having the two complement—rather than work against—each other.

Please leave comments, although please offer evidence if you say “students would never learn if I don’t give them exams/homework/etc.”

Educational Goals

September 25, 2009

My goal for today is to discuss large-scale educational goals. I expect this to be a running theme in this weblog, since it is an essential, yet under-recognized, part of education.

This theme will start with one example: standardized testing. This is a polarizing issue in education. One side, which is currently “winning,” claims that standardized testing is essential. We cannot know if students learned what they should unless students are given an unbiased exam. Moreover, standardized exams give us information about the teachers and schools; if too many students fail a standardized exam, it is evidence that a teacher and/or school is failing. Largely, standardized tests are the only true way to establish accountability.

The other side claims that standardized testing hurts education. Among other reasons, it is easiest to write a standardized exam about memorized facts; testing higher learning skills is considerably more difficult–and therefore much rarer. This gives us a skewed view of how the students are doing; “no information” would be better than “wrong information.” This problem compounds itself when teachers “teach to the test,” favoring bite-sized facts over complex problem solving. Furthermore, some standardized exams are better predictors of family income than of future grades. This could lead to promising students from poorer backgrounds being denied access to education. Finally, standardized tests are expensive, and school districts could spend the money better elsewhere.

I did my best to be fair to both sides (I have my own opinion), although my arguments for each side are by no means exhaustive.

There is debate about standardized testing in some circles, and arguments like these are thrown back and forth at each other. However, I think a more constructive step would be to delve deeper to determine the education goals and attitudes of both sides. What follows is my attempt to determine what kind of attitudes both sides might have about education.

Pro-standardized testing attitude: Students need to learn what we teach them, and we teach them things that are easy to measure–either a student knows how to add, or she doesn’t. Because of this, we need to provide incentive to the students to put in the work to learn. One way of doing this is testing–the student will learn what we teach in order to do well on the exam.

Anti-standardized testing attitude: While facts are important, it is more important for students to develop habits and thought patterns that will make them successful citizens. Knowing the fifty states is nice, but it is more important that students develop a habit of providing evidence when making assertions (and requiring evidence when hearing assertions).

If I were to have the pro-standardized testing attitude, it would be obvious to me that standardized testing is essential. With the other attitude, it would be clear that standardized testing would be difficult to administer, at best. Because of this, I believe it would be better for the sides to attempt to reach agreement on the educational goals, rather than standardized testing. Even if both sides were to agree on standardized testing, we would have only solved one symptom; the underlying cause of the dispute–different attitudes toward education–would linger and create new disagreements.

I propose that we all identify our educational goals and attitudes before we decide what tools (such as standardized testing) would best meet these goals.

Time pressure and exams

September 17, 2009

My wife teaches math at the public university in the area. I highly recommend marrying an academic, as dinner-time conversations can quickly turn into professional development opportunities.

Our dinner conversation on Tuesday centered around timed exams; that is, exams where you have to do many questions in a relatively short period of time. We debated their merit, and we came up with the following:

  1. Timed exams should only be used if they fit your goals and values. A discussion on goals and values will be the topic of an upcoming post; for now, I will just say that they are woefully neglected in education.
  2. Timed exams really only work if students are expected either to recall facts or to repeat a very basic computation.
  3. Timed exams are not appropriate if students are engaged in complex problem solving.

We reached these conclusions mainly by acknowledging that brilliant people can sometimes take a long time to figure things out – professors are never expected to start and finish a paper within a week. Deep thinking takes time. Therefore, adding time pressure to an exam can give a faulty assessment of one’s understanding (assuming this is the reason why the exam is being given).

On the other hand, there are other times when we do not want our students to think much, and here timed exams could give useful information. Examples include an elementary student demonstrating that they know their multiplication tables, or a calculus student demonstrating that they can quickly compute easy derivatives. In both cases, we want students not to have to think deeply about these questions (it is really difficult to get common denominators when both the concept of adding fractions AND multiplying integers require concentration; the cognitive load is just too much).

As I posted (years) before, I have started to give short midterms. This is because I mainly want to test problem solving and conceptual understanding, and I don’t include as much recall and computation (although students have to do computation in order to do other problems). I tend to test recall and computation in different parts of the course.

Exam Lengths

November 15, 2007

I decided this semester to make a concerted effort to shorten my examinations. We typically have two-hour exams, and this normally translates into 11–12 questions.

However, this creates a great time pressure on some students. I was concerned that exams designed to take 2 hours might not evaluate some students’ knowledge of material, as some students think a little more slowly or feel test anxiety. Since I am not concerned with the speed with which a student finishes the exam (we tend to test concepts much more than mechanical skills), I decided to shorten the exam to only 7 questions. After all – there is no rule that says we need to keep students working for the entire duration of the exam.

I was quite pleased with the results: students did slightly better than usual, but still within a normal range. I felt like I got a good assessment of my students’ knowledge, and the students were happier; on our midterm evaluations, they were generally happy with the exam.

There are two other advantages to shorter assessments besides (what I believe to be) a more accurate evaluation of student knowledge. The first is that I was able to ask one or two harder exam questions: students have more time to think on any one question, so I can make a couple of questions harder and still leave students enough time.

The second advantage is not a pedagogical one, but it is pleasant nonetheless: there is less grading.

Ultimately, this stems from my philosophy that it is not important to do math quickly (there are a handful of exceptions to this). Mathematicians are never expected to start and finish a paper within the span of a day, and I don’t think that our students should feel such time pressures, either.