Posts Tagged ‘SBG’

Marzano’s Classroom Assessment & Grading That Work

June 9, 2017

I thought that I would do a couple of book reports this summer. I have been hearing about Marzano for years, and I thought that I should finally read some of what he says about Standards-Based Grading. The book I read is Classroom Assessment & Grading that Work.

I read the book about a month ago, so I do not remember everything. However, below are the ideas that stuck with me.

First, you should use “topics” for your class, and there should be about 15–20 of them. These are akin to standards in SBG. Whenever you test a standard, you should give the student a question in three parts. The first part should be basic details and/or facts that you would expect every student to know, the second part measures whether students understand what was covered in class, and the third part asks the students to go beyond what was done in class. I am teaching real analysis in the fall, so I am going to give an example for real analysis on the topic of compactness:

  1. Is the interval [2,3.5) compact?
  2. Show that if S \subseteq \mathbb{R} is a compact set, then the supremum of S exists and is in S.
  3. Give an example of a metric space M with a set S \subseteq M such that S is closed and bounded but not compact.

I don’t love my example, but I hope it gives you an idea. You then grade the student’s answer according to the following rubric:

  • A student receives a score of 4.0 if she is able to answer all three questions (“I can make connections that weren’t explicitly taught.”).
  • A student receives a score of 3.0 if she can answer the first two questions (but not the third) without mistakes (“I can do everything that is taught without mistakes.”).
  • A student receives a score of 2.0 if she can answer the first question (but not the second or third) without mistakes (“I can do the basics without mistakes.”).
  • A student receives a score of 1.0 if she can answer some portion of the questions with help.
  • A student receives a score of 0.0 if she cannot do any of the questions, even with help.

Half scores of 0.5, 1.5, 2.5, and 3.5 can be defined in a reasonable manner (Marzano does this in the book). Marzano claims that this scoring system leads to a roughly normal distribution.
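Marzano’s whole-point rubric is essentially a decision procedure. Here is a minimal Python sketch of it (the function and parameter names are mine, and the half scores are omitted for simplicity):

```python
def marzano_score(part1_correct, part2_correct, part3_correct, solved_with_help=False):
    """Score a three-part question: part 1 is basic facts, part 2 is what
    was covered in class, part 3 goes beyond what was done in class."""
    if part1_correct and part2_correct and part3_correct:
        return 4.0  # "I can make connections that weren't explicitly taught."
    if part1_correct and part2_correct:
        return 3.0  # "I can do everything that is taught without mistakes."
    if part1_correct:
        return 2.0  # "I can do the basics without mistakes."
    if solved_with_help:
        return 1.0  # some portion answered, but only with help
    return 0.0      # nothing, even with help

marzano_score(True, True, False)  # 3.0
```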

Marzano then suggests that each topic be graded in one of two ways. The first is to find a function of the form a*x^b “of best fit” for each topic (using software) to predict where the student will be at the end of the semester; I will not be using this method. The second is the “Method of Mounting Evidence,” which basically means that you keep track of all of the student’s scores within a topic (e.g. 1.5, 2.0, 1.5, 2.5, 2.5, 3.0, 2.5). Once you are convinced that a student’s “true score” is at a certain level, you mark it down and then look for evidence that she surpasses it on future assessments. For instance, in the example list of numbers above, the second 2.5 is in italics, which might indicate that our hypothetical student has convinced our hypothetical teacher that she is definitely at the 2.5 level for this topic. On assessments after the one corresponding to the italicized score, the teacher will mainly be looking to see if the student has jumped to a 3.0, 3.5, or 4.0 as her true score. And if she gets, say, a 1.5 on a future assessment? The teacher just returns the assessment and asks her to correct the missed “easy” work, the assumption being that the student had a bad day rather than that she no longer knows the material.
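I will not be using the curve-of-best-fit method, but for the curious: fitting a function of the form a*x^b is just a least-squares line fit after taking logarithms. Here is a rough sketch (the function name is mine, and Marzano’s software may well do something more sophisticated):

```python
import math

def power_law_fit(scores):
    """Fit y = a * x^b, where x is the assessment number (1, 2, 3, ...),
    by least squares on log(y) = log(a) + b * log(x)."""
    xs = [math.log(i + 1) for i in range(len(scores))]
    ys = [math.log(s) for s in scores]
    n = len(scores)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - b * mean_x)
    return a, b

a, b = power_law_fit([1.5, 2.0, 1.5, 2.5, 2.5, 3.0, 2.5])
predicted_tenth = a * 10 ** b  # an upward trend in the scores gives b > 0
```

The predicted end-of-semester score is then just the fitted curve evaluated at a future assessment number.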

You can assess students as many times as you like, although Marzano recommends assessing students you are unsure of more. This seems entirely reasonable.

It seems possible that a student could get the hardest question correct but not the easiest question. Marzano mentions this possibility, but basically says that he assumes that a student who can answer the hardest question should be able to answer the easiest. So, ideally, the assessment writer would write questions in such a way that this is true.

At the end of the semester, the student’s score for each topic is just wherever she ended up under the Method of Mounting Evidence. Marzano then talks about ways of averaging together the topic scores, although this is not particularly of interest to me. His other method for determining a final grade is something akin to what many of us do already, which is creating rules like, “A student gets a B for the semester if no topic score is below 2.0 and the majority are 2.5 or above.”
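Rules like that are easy to state precisely. Here is a sketch of that one example rule (the function name is mine):

```python
def earns_at_least_b(topic_scores):
    """Cutoff rule: no topic score below 2.0, and a majority of the
    topic scores at 2.5 or above."""
    no_low = min(topic_scores) >= 2.0
    majority_high = sum(s >= 2.5 for s in topic_scores) > len(topic_scores) / 2
    return no_low and majority_high

earns_at_least_b([2.5, 3.0, 2.0, 2.5, 3.5])  # True: nothing below 2.0, 4 of 5 at 2.5+
```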

The two ideas that I am thinking a lot about are:

  1. Topics should be assessed at different levels, as with my real analysis example. I have been heading this way for a while now, and maybe this is the year to try it.
  2. You can give grades based on whether a student can solve a problem with help. I think that this is brilliant. However, I still need to figure out how to assess this in a reasonable way with 75 students. But I like it.

Reporting Grades in SBG

January 25, 2017

I have mostly liked the course management software I have used (Moodle and Canvas), but both are pretty terrible when it comes to keeping track of grades in a Standards-Based Grading system. I have mostly kept my grades in a spreadsheet, which does all of the calculations that I want it to, but the students then do not have access to their grades. I tried using Canvas to report grades in Spring 2016 and Fall 2016, but Canvas will not do the calculations I need it to (I just posted the raw scores to Canvas, and I gave the students the logic to figure it out); I had to keep a separate spreadsheet to do everything I needed.

Neither of these made me happy, because I want my students to have access to their grades (if only to check for mistakes I have made), but I also want a single place to put my grades. My solution was inspired by Drew Lewis, who created a Perl script to send his students email updates of their grades directly off his spreadsheet. If I were more computer-savvy, this probably should have been an obvious solution, but I am very grateful that Drew pointed out what I could not recognize on my own.

I am more familiar with Python, so I wrote my own script (included below). Once I have the code written, I go to a command line (I use Linux), type “crontab -e” to edit my crontab, and type (without the quotation marks) “14 3 * * 2 /usr/bin/python 118-S17/Grades/118EmailGrades.py” to send an email at 3:14 am (the 14 3) every Tuesday morning (the “2” in “14 3 * * 2”). The “/usr/bin/python” says to run the program “python” and input the file “118-S17/Grades/118EmailGrades.py.”

Below is the code. It seems to work, but there is one issue that I am ironing out: when I tested it, I was only allowed to send five emails at a time. I am pretty sure that this is a limitation on the server’s end, since I was sending the messages to only a couple of email addresses (all mine for the test runs). My (ugly) hack, which worked on Tuesday, was to break up my code so that each program only emails five students. I welcome troubleshooting ideas from those who know about this stuff, although I suspect that I could just try the single program and it would work, since I am not actually going to email the same address more than once for my class.

Here is the code. Note that indentation matters A LOT in Python, so be careful if you cut-and-paste.

import openpyxl
import smtplib
import email
import time

#This gets the spreadsheet the grades are in.
wb=openpyxl.load_workbook('Grades118S17.xlsx',data_only=True)

#Here I am getting each 'sheet' of the spreadsheet.
rosterSheet=wb.get_sheet_by_name('Roster')
summarySheet=wb.get_sheet_by_name('Summary')
quizSummarySheet=wb.get_sheet_by_name('QuizSummary')
quizLogicSheet=wb.get_sheet_by_name('QuizLogic')

#I put my password in my spreadsheet, since that is supposed to be more secret than this code is.  I put it in Cell AA1 of the "Roster" sheet, and this gets it out.
pw=rosterSheet['AA1'].value

#I am logging into my email server here.
smtpObj=smtplib.SMTP('exchange.csbsju.edu',587)
smtpObj.ehlo()
smtpObj.starttls()
smtpObj.login('bbenesh@csbsju.edu',pw)

#I want to put the date in the email, so I am getting it here.
todaysDate=str(time.strftime("%m/%d/%y"))

#The From email address and Subject of the email will be the same for every student; I put today's date in the Subject for the students' convenience.
fromVar="bbenesh@MYSCHOOL.edu"
subject="Math 118: Grade Update for "+todaysDate

#Put the last row you want to check prior to the +1  
NUMBEROFROWS=54+1

#Put the rows you do not want to check (because they are blank or because the student dropped) in the list below.
EXCEPTIONS=[28]


#I have to hardcode the range, and I skip the rows listed in EXCEPTIONS because they do not contain student data.
#(To test without being flooded with emails, temporarily shrink the range.)
for rowVar in range(2,NUMBEROFROWS):
	#This just skips the blank rows that I hard-coded into the exceptions.
	if rowVar in EXCEPTIONS:
		continue
	#This gets the student's first name and email, and I print the first name so that I can see who received an email (I get an email update once this program runs).
	firstName=rosterSheet.cell(row=rowVar, column=3).value
	print(firstName)
	toEmail=rosterSheet.cell(row=rowVar, column=4).value
	todaysGrade=summarySheet.cell(row=rowVar,column=2).value
	#I am going to put together the body of the message in several steps, storing it in the 'text' variable each time.  This is just the salutation of the email.
	text="Dear %s,\n\nBelow is your weekly grade update for %s.  If the semester ended today, you would receive a grade of %s.  Of course, I fully expect your grade to go up, since the semester is not yet over."   % (firstName,todaysDate,todaysGrade)
	
	#Here I am getting the summaries of their grade components and putting it in the text.		
	quizGrade=summarySheet.cell(row=rowVar,column=19).value
	gatewaysGrade=summarySheet.cell(row=rowVar,column=20).value
	teamProjectGrade=summarySheet.cell(row=rowVar,column=21).value
	individualProjectGrade=summarySheet.cell(row=rowVar,column=22).value
	SRLGrade=summarySheet.cell(row=rowVar,column=23).value
	text+="\n\nBelow are your current letter grades for each of the components of your semester grade.  Your grade is determined by the lowest of these, so you should focus on the component with the lowest grade; see the syllabus for more details.\n\nQuiz Grade: %s \nGateways Grade: %s\nTeam Project Grade: %s\nIndividual Project Grade: %s\nSelf-Regulated Learning Reflections Grade: %s\n\n" % (quizGrade,gatewaysGrade,teamProjectGrade,individualProjectGrade,SRLGrade)

	#Here I am giving them the next two things they should be studying to improve their grade; the logic in the spreadsheet figures this out.	
	firstMissingQuiz=quizLogicSheet.cell(row=rowVar,column=2).value	
	secondMissingQuiz=quizLogicSheet.cell(row=rowVar,column=3).value	
	text+="The two Learning Outcomes you should focus on next are %s and %s.  At the bottom of this email is a list of the number of times you have demonstrated each of the Learning Outcomes.  Please check this over to see that it is correct, and be sure to email me if you find a mistake.\n\nHave a great day!\nBret\n\n\n" % (firstMissingQuiz,secondMissingQuiz)

 	#Next, I am just going to loop over the raw data for each Standard and print it out at the end of the email.  This is so they can check to make sure that their records agree with mine.	
	#Put the number of the column corresponding to your last learning outcome prior to the +1
	NUMBEROFCOLUMNS=23+1
	#Again, I am hardcoding the column range for my spreadsheet.	
	for columnVar in range(2,NUMBEROFCOLUMNS):
		labelCode=quizSummarySheet.cell(row=1,column=columnVar).value	
		numberOfMarks=quizSummarySheet.cell(row=rowVar,column=columnVar).value	
		text+="%s:  %s\n" % (labelCode,numberOfMarks)	

	#Here I just format the final message, adding a subject header to my 'text' variable.  Then I send the email.
	message='Subject: %s\n\n%s' % (subject,text)
	smtpObj.sendmail(fromVar,toEmail,message)

#I log out of the email server.
smtpObj.quit()
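One possible way around the five-emails-at-a-time limit, instead of splitting the program into several files, would be to build all of the messages first and then send them in small batches with a pause between batches. This is an untested sketch; the batch size and pause length are guesses about what the server allows:

```python
import time

def send_in_batches(smtp_obj, messages, batch_size=5, pause_seconds=60):
    """Send (from_addr, to_addr, body) triples in batches of batch_size,
    sleeping between batches to stay under the server's rate limit."""
    for start in range(0, len(messages), batch_size):
        for from_addr, to_addr, body in messages[start:start + batch_size]:
            smtp_obj.sendmail(from_addr, to_addr, body)
        if start + batch_size < len(messages):
            time.sleep(pause_seconds)
```

The main loop above would append (fromVar, toEmail, message) to a list instead of calling sendmail directly, and then a single call to send_in_batches would do all of the sending.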

How Specs Grading Is Influencing Me

December 17, 2014

I hope I have not come off too negatively about specs grading. Reflecting on what I have written, it could seem like I am trying to discourage people from using it. I hope that is not the case. I am engaging in this conversation so much because I am very hopeful about it.

So when I say that the examples of specs given in the book are “shallow,” I do not intend this to say that specs grading is bad. Rather, what I mean (but say poorly) is that the examples of specs do not capture what I would want in a mathematics class. To put a word count requirement on a proof would be a very shallow way to grade, but I do not necessarily think that word counts are bad for other subjects (at the very least, I don’t know enough how to teach other subjects to make a judgment).

So this whole process is mainly to help me figure out how to make specifications grading work in my courses. I apologize if it sounds complainy.

So I am going to switch gears to describe the positive things I learned from the book.

  1. I should include specifications. I see no reason not to explicitly tell students what my expectations are; I just need to stop being lazy and do it.

    For instance, I collected Daily Homework in my linear algebra class last spring. It was graded only on completion, but some students did not know what to do when they got stuck or didn’t understand the question. If I had explicitly given them a set of specifications for Daily Homework that included something like, “If you cannot solve the problem, you should show me how the problem relates to \mathbb{R}^2” (we often worked in abstract vector spaces), I think that I would have been much happier with the results.

    Similarly, I gave my students templates (as Lawrence Leff does) for optimization and \delta-\epsilon proofs in calculus, but I could be doing more of that.

    The one catch is that I do not know how to specify for “quality” (thanks, Andy!). I think I have been annoying people on Google Plus trying to figure out how to solve this—sorry. But this is essential for my proofs-based courses. If I can’t figure out how to specify for quality in those courses, I will likely have to modify specs grading beyond recognition if I am going to use it in those courses.

  2. To get a higher grade in my course, I have been requiring students to master more learning goals. This is fine, but the book suggested that I could also consider having students meet the same learning goals, but have students try harder problems if they want a higher grade. Nilson’s metaphor is that the former is “more hurdles,” whereas the latter is “higher hurdles.”

    I really like this idea, and I can sort of imagine how that could work. In my non-tagging system, I could give three versions of the same problem: C-level, B-level, and A-level. For optimization in calculus, I could imagine that a C-level problem would give the function to be optimized, a B-level question wouldn’t, and an A-level question would just be a trickier version of a B-level question.

    This would require me to write more questions AND it would require me to be able to accurately judge the relative difficulty of problems. But I think that both are doable, and I like the idea.

  3. Specs grading requires that students spend tokens before being allowed to reassess. The thinking is that if reassessments are scarce, students will put forth more effort the first time. The drawback is that each assessment has higher stakes.

    I definitely want to keep things low-stakes, but I am also finding that students aren’t working as hard as they should until the end of the semester. Using a token-like system could be a partial solution to that.

  4. The book reminds me that I should be assigning things that are not directly related to course content; the book calls them meta-assignments. Here is a relevant quotation:

    Other fruitful activities to attach to standard assignments and tests are wrappers, also called meta-assignments, that help students develop metacognition and become self-regulated learners…Or to accompany a standard problem set, he might assign students some reflective writing on their confidence before and after solving each problem or have them do an error analysis of incorrect solutions after returning their homework (Zimmerman, Moylan, Hudesman, White, & Flugman, 2011).

    One such idea that I had to help the students start working earlier in the semester (see my previous item) is to have students develop a plan of action for the semester: determine a study schedule, set goals for when to demonstrate learning goals, and (if they want to) determine penalties for missing those goals.

  5. I should consider including some “performance specs” (which simply measure the amount of work, not the quality of the work) in my grading. I don’t like this philosophically, but I think that it might help my students to practice more.

So even if I don’t convert to specifications grading, I have already learned a lot from it.

Specification Grading vs Accumulation Grading

December 8, 2014

Thursday, Robert Talbert and Theron Hitchman discussed the book Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time by Linda Nilson on Google Plus (go watch the video of the discussion right now!).

First, I would like to say that using Google Hangouts like this is not done enough. Robert and Theron wanted to discuss the book, but live in different states. Using Skype or Google Hangouts is the obvious solution, but not enough people make the conversation public, as Robert and Theron did. I learned a lot from it, and I hope that people start doing it more (including me). Additionally, I think that two people having a conversation is about the right number. I found it more compelling than when I have watched panel-type discussions of 4–6 people on Google Hangouts.

As some of you know, I have pompously started referring to my grading system as Accumulation Grading. When Robert first introduced me to Nilson’s book, I ordered it through Interlibrary Loan immediately. It has not arrived yet, so I probably should wait until I read it before I start comparing Specification Grading to Accumulation Grading.

But I am not going to wait. The people are interested in Specification Grading now, and so I am going to compare the two now. Just know that my knowledge of Specification Grading is based on 30 minutes of Googling and 52 minutes and 31 seconds of listening to two guys talk about it on the internet. I will read the book as soon as it arrives, but feel free to correct any misconceptions about Specification Grading that I have (there WILL be misconceptions).

Here is how to implement Specification Grading in a small, likely misconceived nutshell:

  1. Create learning goals for the course.
  2. Design assignments that give the students opportunities to demonstrate they have met the learning goals.
  3. Create detailed “specifications” on what it means to adequately do an assignment. These specifications will be given to the students to help them create the assignment.
  4. “Bundle” the assignments according to grade. That is, determine which assignments a B-level student should do, label them as such, and then communicate this to the students. This has the result that a student aiming for a B might entirely skip the A-level assignments.
  5. Grade all assignments according to the specifications. If all of the specifications are met, then the student “passes” that particular assignment. If the student fails to meet at least one of the specifications, the student fails the assignment. There is no partial credit.
  6. Give each student a number of “tokens” at the beginning of the semester that can be traded for second tries on any assignment. So if a student fails a particular assignment, the student can re-submit it for potentially full credit. You may give out extra tokens throughout the semester for students who “earn” them (according to your definition of “earn”).
  7. Give the student the highest grade such that the student passed all of the assignments for that particular grade “bundle.”
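The bundling and pass/fail logic of steps 4–7 amounts to a small amount of bookkeeping. Here is a sketch; the bundles, assignment names, and grade levels are invented for illustration:

```python
# Invented bundles: each grade requires passing everything in its bundle.
bundles = {
    "A": ["hw1", "hw2", "paper", "hard project"],
    "B": ["hw1", "hw2", "paper"],
    "C": ["hw1", "hw2"],
}

def specs_grade(passed):
    """Return the highest grade whose entire bundle was passed.  Each
    assignment is pass/fail: every spec met, or no credit at all."""
    for grade in ["A", "B", "C"]:
        if all(assignment in passed for assignment in bundles[grade]):
            return grade
    return "F"

specs_grade({"hw1", "hw2", "paper"})  # "B": the whole B bundle passed, but not the A bundle
```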

Recall that Accumulation Grading essentially counts the number of times a student has successfully demonstrated that she has achieved a learning goal (students accumulate evidence that they are proficient at the learning goals). My sense is that Accumulation Grading is a type of Specifications Grading, only with two major differences: in Accumulation Grading, the specifications are at the learning goal level, rather than the assignment level, and also the token system is replaced with a policy of giving students a lot of chances to reassess.
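The Accumulation Grading bookkeeping can be sketched the same way, except the unit is the learning goal and the student accumulates demonstrations (the goal names and the threshold of four demonstrations are illustrative):

```python
# Invented goal bundles: each grade requires demonstrating every goal in its
# bundle at least `required` times over the semester.
goals_for_grade = {
    "A": ["limits", "derivatives", "delta-epsilon proofs"],
    "B": ["limits", "derivatives"],
    "C": ["limits"],
}

def accumulation_grade(demonstrations, required=4):
    """demonstrations maps each learning goal to how many times the student
    has successfully demonstrated it on quizzes."""
    for grade in ["A", "B", "C"]:
        if all(demonstrations.get(goal, 0) >= required
               for goal in goals_for_grade[grade]):
            return grade
    return "F"

accumulation_grade({"limits": 5, "derivatives": 4, "delta-epsilon proofs": 2})  # "B"
```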

Let’s compare the two point-by-point (each numbered item repeats a Specification Grading idea from the list above, followed by my commentary):

  1. Create learning goals for the course.
    This is exactly the same as in Accumulation Grading.

  2. Design assignments that give the students opportunities to demonstrate they have met the learning goals.
    This is exactly the same as in Accumulation Grading. In Accumulation Grading, this mostly takes the form of regular quizzes.

  3. Create detailed “specifications” on what it means to adequately do an assignment. These specifications will be given to the students to help them create the assignment.
    This is slightly different. In Accumulation Grading, the assignment does not matter except to give the student an opportunity to demonstrate a learning goal. So whereas Specifications Grading is focused on the assignments, Accumulation Grading is focused on the learning goals.

    To compare: in Specifications Grading, students might be assigned to write a paper on the history of calculus. One specification might be that the paper has to be at least six pages long.

    In Accumulation Grading, this would not matter—a four-page paper that legitimately meets some of the learning goals would get credit for those learning goals. If you wanted students to write a six-page paper, you would create a learning goal that says, “I can write a paper that is at least six pages long.”

  4. “Bundle” the assignments according to grade. That is, determine which assignments a B-level student should do, label them as such, and then communicate this to the students. This has the result that a student aiming for a B might entirely skip the A-level assignments.

    This technically happens in Accumulation Grading, as you can see at the end of my syllabus.

    However, something else is going on, too. The learning goals are really the things that are “bundled,” as you can see in my list of learning goals.

    I love this flexibility. Every student (at least those who wish to pass, anyway) needs to know that a derivative tells you the slopes of tangent lines and/or instantaneous rates of change, but only students who wish to get an A need to figure out how to do \delta-\epsilon proofs on quadratic functions.

  5. Grade all assignments according to the specifications. If all of the specifications are met, then the student “passes” that particular assignment. If the student fails to meet at least one of the specifications, the student fails the assignment. There is no partial credit.

    This is similar to Accumulation Grading, but not exactly the same. In both, there is no partial credit. The difference is that—since the main unit of Accumulation Grading is the learning goal, not the assignment—students will have multiple ‘assignments’ (really, quiz questions) that get at the same learning goal. Students can fail many of these ‘assignments’ as long as they demonstrate mastery of the learning goals eventually.

  6. Give each student a number of “tokens” at the beginning of the semester that can be traded for second tries on any assignment. So if a student fails a particular assignment, the student can re-submit it for potentially full credit. You may give out extra tokens throughout the semester for students who “earn” them (according to your definition of “earn”).

    There are no tokens in Accumulation Grading. Rather, students get many chances at demonstrating a particular learning goal.

  7. Give the student the highest grade such that the student passed all of the assignments for that particular grade “bundle.”

    This is exactly the same in both grading systems.

So the fundamental difference seems to be that Accumulation Grading focuses on how well students do at the learning goals, while Specifications Grading focuses on how well students do on the assignments. As long as the assignments are very carefully constructed and specified, I don’t really see one as being “better” than the other. However, it seems more natural to focus on learning goals rather than assignments, as the assignments are really just proxies for the learning goals; I would rather focus on the real thing than the proxy.

Another major difference is that Specification Grading uses a token system while Accumulation Grading automatically gives students many, many chances at demonstrating proficiency. One system’s advantage is the other’s disadvantage here:

  • Accumulation Grading requires creating a lot of assignments (which have mostly been quiz questions for me), whereas Specification Grading requires fewer assignments. Moreover, Accumulation Grading requires that a lot of time be spent on reassessment—either in class or out (this is probably a positive in terms of learning, but definitely a negative with respect to me having a lot of class time available for non-reassessment activities and getting home for dinner on time).
  • Accumulation Grading ideally requires some time for students to learn each learning goal between when it is introduced and when the semester ends. This is because the student needs to demonstrate proficiency multiple times (usually four times) during the semester. So either the last learning goal must be taught well before the end of the semester, or the Accumulation Grading format must be tweaked for some subset of the learning goals (you could use a traditional grading system just for the learning goals at the end of the semester). I do not think that this is an issue for Specifications Grading. On the other hand, I do not think that Specifications Grading would give the same level of confidence in a student’s grade, as it does not necessarily require multiple demonstrations of each learning goal.
  • I am concerned that the token system could hurt the professor-student relationship, whereas freely giving reassessments helps it. Specifically, I am concerned that it might seem overly arbitrary and harsh to deny a tokenless student a chance to reassess—I could see a student being frustrated with the professor toward the end of the term for not being allowed a reassessment. On the other hand, the professor in Accumulation Grading is the hero, since she allows students as many chances as possible to reassess.

That last sentence is a half-truth, since there are limitations. For instance, I only allow reassessments in class now, so that immediately limits the number of possible reassessments (my life got really crazy when I allowed out-of-class reassessments). But that seems to me to be more reasonable than the token system, since class days are not arbitrarily set by the professor, but the tokens are.

The main thing working against Accumulation Grading is that one must figure out how to reassess in a reasonable way. I have been compressing my semester to fit more quizzes in at the end of the semester, and that has worked well for me. Other people may be fine doing reassessments outside of class.

Please correct me on where I am wrong on any detail of Specifications Grading. Right now, I am still leaning toward Accumulation Grading, although I hope that Specifications Grading blows me away—I am always looking for a better system, and I will gladly switch if I find it better.

Quiz-Video Combination Instead of Lecture

November 14, 2014

Please help me understand how much I am rationalizing here.

Here is a reminder of how I have been organizing my classes: I create learning goals for the course and spend roughly two-thirds of the semester teaching the content. The grading system is set up so that students have to demonstrate proficiency in each learning goal n times, where n \approx 4. The last third of the semester is spent 50/50 on quizzes and review.

I have felt a tiny bit guilty about this format for two reasons. First, I was concerned that I was depriving the students of 1/3 of the traditional instruction time. Second, I felt like a slacker because I don’t usually have to prep much for classes in the last third of the semester (also during the quizzes: I am writing this post during one of their quizzes, and I am slightly uncomfortable that they are working so hard on the class and I am not).

But I don’t feel all that bad about things now, because I realized a couple of things.

First, taking quizzes is about as active as learning gets (and maybe there are Testing Effect-type effects, especially since I purposefully spread out the learning goals on the quizzes). So students are very actively thinking about the material during the quizzes. So I am definitely giving them learning experiences, which goes a long way to alleviate my first source of guilt.

Also, I spent a lot of time creating solutions for every quiz problem. These are posted right after the quizzes so that students can get immediate feedback. This makes me feel better about my current lack of prep time—especially since I am still spending a decent amount of time writing the quizzes.

This also makes me feel a bit better about my students’ learning experience in the last third of the semester. One of the ways I compress the material down to two-thirds of the semester is that I go lighter on the number of examples I give in the first part of the semester. However, my students probably have at least as many examples from the videos by this point in the semester as they would have gotten under a more usual course structure, and they have the added benefit of having had to attempt each problem first before viewing the solution (I am thinking about trying to make this the norm as much as possible. Ideally, things would go: try a problem on your own, try the problem with your team, see me do the problem, then try a similar problem on your own. This is a different blog post, though).

Finally, my overall impression is that the course is going well. I think that students are learning, and they are probably learning more than previous times I have taught the course.

So how much am I simply rationalizing here, and how much of my reasoning is sound?

Three Benefits of “Accumulation Grading with Tagging”

October 15, 2014

So I decided to give my grading system a name: Accumulation Grading (or Accumulation Grading with Tagging). I just got sick of writing “this grading system” or “how I am grading” all of the time.

Here are three benefits that I am seeing from this system. One has been mentioned before here (at least in the comments), one I anticipated, and one I only realized this week.

First, I suspect that there may be some sort of a metacognitive boost with this grading system. Students are forced to reflect on what they have done, and this may be helpful.

Second, grading is much easier when students use different approaches. In a very real way, I am just grading whether their “tags” are legitimate (they are correct, relevant to the problem, and point to a specific part of the solution where they are relevant). This means that students can have wildly different solutions with completely different tags, and both will get appropriate credit. This hasn’t happened a lot yet, although I imagine it could.

Finally, my new realization is that this grading system may do away with a lot of fighting over grades. For example, a colleague recently complained that when students are asked to “graph functions” in Calculus I, many students were doing so simply by plotting points. My colleague did not want to give them credit, since he intended for them to find intervals of increasing/decreasing/concavity/etc. The students were not happy that they did not receive credit.

This is not an issue in Accumulation Grading with Tagging. Students are welcome to simply plot points to graph a function, but they run into an issue when they start to tag their work with the relevant learning goals (there are none). Yet nothing is marked wrong (because it isn't wrong), so there is no real disagreement to be had between student and teacher.

Update on Student-Claimed Learning Goals

October 8, 2014

I am halfway through the semester where I am using a new grading scheme for Calculus I. Here is a rough summary of the scheme:

  1. I give the students a list of learning goals. These are much finer than I have done in the past, which means that there are many more of them.
  2. I give students quizzes in class.
  3. For each quiz question, the student solves the problem as best as she can.
  4. Here is the important part: after solving the problem, the student reviews her work and determines which learning goals she has met.
  5. She indicates exactly where she met each learning goal. If she does not claim a learning goal, she does not get credit for the learning goal.

Basically, the students are forced to reflect on what they did in order to get credit for their work.

I just completed my midterm grades, and I would like to report on them. But I will first summarize where we are and describe my assessment of the course prior to seeing the grades.

We just finished off differential calculus. We will cover all of integral calculus in the next 2.5 weeks (I accelerate the schedule), and then we will move on to the review-and-quiz portion of the semester (we have quizzes for the entire class on Tuesdays and Fridays, and we review for the quizzes on Mondays and Wednesdays).

I have been simply thrilled with both sections of Calculus I. They discuss ideas, ask questions, and generally are willing to try whatever I throw at them. This has been a really fun semester. In contrast, I have heard that the other Calculus I classes have been struggling.

The good news is that my midterm grades reflect this. Only three students are presently in danger of getting below a C, assuming students continue at their current paces. (One drawback to this grading system is that literally every student technically has an F right now, since none of them have yet had a chance to demonstrate any ability to work with integrals. But the original point of this parenthetical is that any student who starts slacking off is in danger of failing.)
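The pace-based reading of midterm grades above could be sketched as follows. This is a hypothetical illustration only: the goal names, the six-demonstrations-per-goal requirement, and the halfway-point threshold are assumptions for the example, not my actual gradebook.

```python
# Hypothetical sketch of the "on pace" check implied above: a student is on
# track if, partway through the semester, she has accumulated a matching
# fraction of the required demonstrations for the goals covered so far.
# All names and numbers here are illustrative assumptions.

REQUIRED_PER_GOAL = 6  # assumed demonstrations needed per learning goal

def on_pace(demonstrations, goals_covered_so_far, fraction_of_course_elapsed):
    """Return True if the student's accumulated credit matches the course pace.

    demonstrations: dict mapping goal name -> times credited so far
    goals_covered_so_far: goals that have actually been taught and quizzed
    """
    target = REQUIRED_PER_GOAL * fraction_of_course_elapsed
    # Goals not yet taught (e.g. integrals at midterm) are excluded, which is
    # why "every student technically has an F" is not a meaningful signal.
    return all(demonstrations.get(g, 0) >= target for g in goals_covered_so_far)

midterm = {"limits": 4, "chain rule": 3, "related rates": 3}
print(on_pace(midterm, ["limits", "chain rule", "related rates"], 0.5))  # True
```

The point of excluding untaught goals is exactly the parenthetical above: a raw total would flag every student as failing at midterm.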

I am pleased and relieved about this. I had certainly considered that having the students claim credit for relevant learning goals could have been a disaster, but this is not the case; the students have had minimal trouble with it.

One reason why they may not have had trouble is that I have been specifically referencing learning goals when they come up in class and then posting the slides to the CMS so that students can find where each learning goal is introduced. I also have been highlighting the relevant learning goals in the daily assignments (Example: “For Wednesday: We will discuss Learning Goals C4 and B9. Read 2.4 and 2.8. You should be able to do Preliminary Exercises 1 and 2 of 2.4, Exercises 63 and 65 of 2.4, and Preliminary Exercise 1 of 2.8.”).

So I am very happy and relieved at how the first half of the semester has gone. I really think that the focus on the learning goals has helped students learn how to talk about calculus. I will keep you all posted.

Assessment Idea for Calculus I: Feedback desperately wanted!

June 25, 2014

I am planning an overhaul of Calculus I for the fall. I used a combination of Peer Instruction and student presentations in Fall 2012, and I was not completely happy with it.

So I am starting from scratch. I am following the backwards design approach, and I feel like I am close to being done with my list of goals for the students. Here is my draft of learning goals, sorted by the letter grades they are associated with:

View this document on Scribd

I previously had lists of “topics” (essentially “Problem Types”). These lists had 10–20 items, and tended to be broad (e.g. “Limits,” “Symbolic derivatives,” “Finding and classifying extrema”). This list will give me (and, I hope, the students) more detailed feedback on what they know.

This differs from how I did things in the past, in that I used to list "learning goals" as very broad topics (so they weren't learning goals at all, but rather "topics" or "types of problems"). Students would then need to demonstrate their ability to meet these goals on label-less quizzes.

The process would be this:

  1. A student does a homework problem or quiz problem.
  2. The student then “tags” every instance of where she provided evidence of a learning goal.
  3. The student hands in the problem.
  4. The grader grades it in the following way: the grader scans for the tags. If a tag corresponds to correct, relevant work AND points to the specific relevant part of the solution, the student gets credit for demonstrating that she understands that learning goal. Otherwise, no credit.
  5. Repeat for each tag.
  6. Students need to demonstrate understanding/mastery/whatever for every learning goal n times throughout the semester.
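The grader's scan in step 4 might be sketched like this. It is a minimal illustration under my own assumptions; the tag fields and function names are invented, not an actual implementation.

```python
# Minimal sketch of the tag-scanning grader described in step 4. A tag earns
# credit only if the tagged work is correct, relevant to the problem, AND the
# tag points at the specific part of the solution. All field names here are
# hypothetical.

from dataclasses import dataclass

@dataclass
class Tag:
    goal: str                     # e.g. "FToCI"
    work_is_correct: bool
    is_relevant: bool
    points_to_specific_step: bool

def grade_quiz(tags, credit_so_far):
    """Add one demonstration for each legitimately tagged goal.

    credit_so_far: dict mapping goal -> demonstrations accumulated.
    Untagged goals earn nothing, even if the work is present: the burden
    is on the student to claim them.
    """
    for t in tags:
        if t.work_is_correct and t.is_relevant and t.points_to_specific_step:
            credit_so_far[t.goal] = credit_so_far.get(t.goal, 0) + 1
    return credit_so_far

credit = grade_quiz(
    [Tag("when to use an integral", True, True, True),
     Tag("FToCI", False, True, True)],   # incorrect work: no credit
    {})
print(credit)  # {'when to use an integral': 1}
```

Note that the three conditions mirror the second quiz example below: correct work with a vague tag (no specific location) earns nothing.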

Below are three examples of how this might be done on a quiz. The first example is work by an exemplary student: the student would get credit for every tag here (In all three of the examples, the blue ink represents the student work and the red ink indicates the tag).

View this document on Scribd

The second example has the same work and the same tags, but the student would not get credit due to lack of specificity; the student should have pointed out exactly where each learning goal was demonstrated.

View this document on Scribd

The third example (like the first) was tagged correctly. However, there are mistakes and omissions. In the third example, the student failed to claim credit for the “FToCI” and the “Sum/Difference Rule for Integrals.” Because of this, the student would not get credit for these two goals (even though the student did them; the point is to get students reflecting on what they did).

Additionally, the student incorrectly took the “antiderivative of the polynomial,” which caused the entire solution to the “problem of motion” to be wrong. Again, the student would not get credit for these two goals.

However, the student does correctly indicate that they know “when to use an integral,” could apply the “Constant Multiple Rule for integrals,” and “wrote in complete sentences.” The student would get credit for these three.

View this document on Scribd

I like this method over my previous method because (1) I can have finer grained standards and (2) students will not only “do,” but also reflect on what they did. I do not like this method because it is more cumbersome than other grading schemes.

My current idea (after talking a lot to my wife and Robert Campbell, and then stealing an idea from David Clark) is to require that each student show that he/she can do each learning goal six times, but up to three of them can be done on homework (so at least three have to be done on quizzes). I usually have not assigned any homework, save for the practice that students need to do well on the quizzes. This is a change in policy that (1) frees up some class time, (2) helps train the students in how to think about what the learning goals mean, (3) forces some extra review of the material, (4) provides an additional opportunity to collaborate with other students, and (5) provides an opportunity for students to practice quiz-type problems.

My basic idea is that I will ask harder questions on the homework but grade them more leniently (which implies that I will ask easier questions on the quizzes but grade them more strictly).
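The completion rule above (six demonstrations per goal, at most three from homework) could be checked like this. The counts come from the post; the function and variable names are illustrative assumptions.

```python
# Sketch of the completion rule: a learning goal is satisfied by six
# demonstrations, of which at most three may come from homework (so at
# least three must come from quizzes). Names are hypothetical.

HW_CAP = 3      # homework demonstrations that may count toward a goal
REQUIRED = 6    # total demonstrations needed per goal

def goal_met(quiz_count, homework_count):
    """True once a goal has six demonstrations, counting at most three from homework."""
    return quiz_count + min(homework_count, HW_CAP) >= REQUIRED

print(goal_met(quiz_count=3, homework_count=3))  # True: 3 quiz + 3 homework
print(goal_met(quiz_count=2, homework_count=5))  # False: homework capped at 3
```

Capping with `min` rather than rejecting extra homework keeps the rule simple: surplus homework demonstrations are harmless, they just stop counting.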

I have been relying solely on quizzes for the past several years, so grading homework will be something that I haven't done for a while. I initially planned on only allowing quizzes for this system, too, but it seemed like things would be overwhelming for everyone: we would likely have daily quizzes (rather than maybe twice per week); I would likely not give class time to "tag" quizzes, so students would do this at home (creating a logistical nightmare); and I would probably have to spend a lot more time coaching students on how to tag (whereas they now get to practice it on the homework with other people).

Let’s end this post, Rundquist-style, with some starters for you.

  1. This is an awesome idea because …
  2. This is a terrible idea because …
  3. This is a good idea, but not worth the effort because …
  4. This is not workable as it is, but it would be if you changed …
  5. Homework is a terrible idea because …
  6. You are missing this learning goal …
  7. My name is TJ, and you are missing this process goal …

Students Figure Out Which Standards They Meet

April 16, 2014

I am starting to think about planning for Calculus I for next year, and there is an idea I would like to try: I want to stop labelling problems according to the corresponding standard, and put the burden on the student to determine which standards they met. I have tried this before (as have other people), but I would implement it differently from how I did it last time.

So each quiz would go like this: I give them several (unlabelled) quiz problems. The students do what they can. When they are done, they submit their work. However, when they submit, we make some sort of a copy (perhaps a paper copy, perhaps just a picture taken with a smartphone), and the student takes one copy home.

At home, the student tries to figure out which standards she met on the quiz. For each standard, she writes up an argument as to why she met that standard. Specificity is key—the student would need to explicitly say where and how she met the standard. She submits this at the next class period, and this is graded as I usually do.

Here are the things I like about this idea:

  1. Students have to reflect on their work in order to get credit. This could lead to higher quality writing.
  2. Students would have to take ownership of their learning. They need to be aware of the standards they are missing and make a concerted attempt to learn each one well enough to apply it on a quiz (including recognizing where it makes sense to apply it).
  3. Students can solve problems any way they like. As long as they can solve the problem using a standard, it counts. For instance, a linear algebra student might get “eigenvalue” and “determinant” credit for finding the eigenvalues of a matrix.
  4. Students are forced to really think about what the standards are and mean. There could be metacognitive benefits.
  5. I can ask more synthesis questions on quizzes; I do not need to isolate ideas for each question.
  6. Students no longer get the hint that the label provides (if the quiz question is labelled as corresponding to the “Tangent line” standard, then the student has a pretty good idea that he should find a tangent line at some point).
  7. It might give me room to have more standards (and more specific standards of the “I can do this” variety, rather than standards that are really topics, as in “Tangent lines.” David Clark encouraged me to make this transition last weekend).

Here are some potential problems:

  1. If the problems are too synthesis-y, then students won’t be able to do very many on each quiz. This might be fine, but it would be bad for a student who gets stuck and does not know where to start (on the other hand, maybe it would help teach students to start with something?).
  2. Students may try to shoehorn standards where they do not belong. This is what I would do if I were missing a small subset of standards.
  3. I am not certain I can write quiz problems that will give everyone the opportunities they need at the end of the semester. Students need different things, so I would have to have a lot of questions (note: this actually doesn’t need to be any different than how it is now; I can just provide straightforward, say, “Tangent lines” problems to quizzes if I need to. So this actually isn’t much of a problem).
  4. It forces students to be aware of what they have not yet demonstrated; this might be asking too much of some first-years.

I am on the fence about this, although I would really like to try it. Perhaps I could do both: keep the old way (with the labels) and do the new way. I could make that work.

What am I missing? What other advantages, disadvantages, and difficulties would this have?

Office Hours Again

April 3, 2014

I wrote about office hours three years ago, and I have noticed that my office hours are less attended than my colleagues' (some of them, anyway). I used to have packed office hours, but attendance slowed to a trickle a couple of years ago.

This concerns me a bit. While I am happy that students might be learning on their own, I have somehow internalized the message that “being a good professor means having a lot of students at your office hours.”

But then I learned something that might make me feel better. I had the pleasure of meeting Andy Rundquist (and Matt Wiebold) for lunch last week, and he commented that he has not had many students in office hours recently, either. We talked briefly about why this might be. Here are some possibilities:

  1. I am somehow intimidating, and students do not want to come to my office hours. Or, even if I am not intimidating, I am sending some message that students are not welcome.
  2. Neither Andy nor I collect homework that is graded for accuracy.
  3. Both Andy and I use something akin to Standards-Based Grading.

I never realized it before, but my conversation with Andy makes me wonder if SBG and/or a No Homework Policy might naturally lead to a decrease in students coming to office hours.

For instance, I have found that while I have a smaller quantity of students in my office hours, I typically have a much higher quality interaction during the office hours. Students tend to come with specific questions about why they are stuck on a problem, or (better yet) specific questions about something they are just curious about. I remember this happening a lot less previously. Before, it seemed like there were mainly requests that I do homework problems (or problems similar to homework problems). So it seems like the No Homework policy got rid of students coming to office hours for the sole purpose of finishing busy work (I think this is a good thing).

[Edit 10:38 pm CDT: This is not just a matter of "the course is easier because there is no homework," which was my first thought of how to explain this. The students have closed-notes quizzes on the SBG topics, so students still need to understand the material; they just demonstrate it on quizzes rather than on homework, which is harder to do.]

A plausible explanation for why SBG might lead to fewer students attending office hours is that students are being supported just enough to learn independently. When I used a Traditional Grading scheme, it likely was not clear what the most important ideas of the course were. I could see a student wanting more guidance if every detail in the course seems as important as every other detail (it probably did not help that I would typically respond with “Everything” when students asked what they should be studying for an exam). My hypothesis is that SBG gives students just enough guidance that they can determine what to study on their own.

This is a balancing act, of course: I do think that most everything that I do in class is important, and that students should know it. However, I would be willing to sacrifice students learning some of the course topics if it resulted in students learning the most important topics more deeply and becoming more independent learners. So I hope that this is what is happening.

Have other people noticed that office hour attendance is correlated with how you structure class? Can anyone think of any other explanation for the change in office hour attendance?