If you teach reflective, discussion-heavy courses, you know the pattern: students chase points, not progress. First drafts are treated like finals. Office hours drift into “why did I get an 87?” instead of “how do I make this argument stronger?” That’s exactly where RAIK 185 (Foundations of Leadership, ~40–42 students, once a week) started.
The instructors wanted students to learn leadership by actually iterating: hearing feedback, adjusting, and trying again, without the panic of "one-and-done."
We supported the implementation of Mastery Grading with TeachFront to help shift students toward a growth mindset, easing the stress of "point-getting" in favor of developing subject proficiency.
What changed (and what didn’t)
The course switched to Mastery Grading with targeted reassessment on a handful of high-value assignments. Students could come back to one specific objective (think: clarity of claim, integration of evidence, reflection depth), request a reassessment, and submit a better version. The key is “targeted.” No blanket redos. No “maybe this time I’ll get luckier.” Just: key in on the area of the assignment that needs refinement.
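To make "targeted" concrete, here is a minimal sketch of what a reassessment request could look like as data. The objective names and class structure are illustrative assumptions, not TeachFront's actual model; the point is that each request carries exactly one objective, so a blanket redo is impossible by construction.

```python
from dataclasses import dataclass

# Hypothetical rubric objectives; a real course would define its own.
OBJECTIVES = {"clarity_of_claim", "integration_of_evidence", "reflection_depth"}

@dataclass
class ReassessmentRequest:
    student: str
    assignment: str
    objective: str  # exactly one objective per request: targeted, not a blanket redo

    def __post_init__(self):
        # Reject requests that don't name a single known objective.
        if self.objective not in OBJECTIVES:
            raise ValueError(f"unknown objective: {self.objective}")
```

Because the request names the objective, the instructor regrades only that slice of the rubric rather than the whole assignment.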
How students responded to Mastery Grading
When students resubmitted, they focused on improving an objective. In Spring ’23, 75% of resubmissions targeted exactly one objective. In Spring ’24, it was still the majority at 52.2%. That means they were reading feedback, identifying the gap, and correcting precisely where the learning lived. It’s the opposite of gaming the system; it’s learning how to improve on purpose.
Did it move the needle?
Yes, meaningfully, and without inflating grades. In Spring ’24, the larger multi-objective assignments saw mastery ratings jump from 1.28 → 3.97 on a 4-point E-S-N-U scale (+2.69 levels).
On "Effortful Engagement," students rose from 1.03 → 2.00. Nearly every single objective improved on reassessment: 100% on E-S-N-U, 96.6% on Effort. And students started strong: their average initial (reversed) mastery was 3.73, and it still climbed to 3.96 with revision.
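The arithmetic behind a "mastery delta" is simple once letter ratings are mapped to numbers. A minimal sketch, assuming the conventional mapping E=4, S=3, N=2, U=1 (the article doesn't specify the numeric coding, so treat it as an assumption):

```python
# Assumed mapping from E-S-N-U rubric levels to the 4-point numeric scale.
ESNU = {"E": 4, "S": 3, "N": 2, "U": 1}

def mean_mastery(ratings):
    """Average numeric mastery for a list of E-S-N-U letter ratings."""
    return sum(ESNU[r] for r in ratings) / len(ratings)

def mastery_delta(before, after):
    """Gain in mastery levels between initial and revised submissions."""
    return round(mean_mastery(after) - mean_mastery(before), 2)

# The article's headline gain checks out the same way: 3.97 - 1.28 = +2.69.
```

For example, a student moving from mostly U/N ratings to mostly E/S ratings gains a bit over two full mastery levels.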
Did more students actually engage?
When reassessment was available, participation nearly doubled. In Spring ’23, 34.1% of students tried at least one reassessment. In Spring ’24, 65% did. With that came 46 resubmissions and 120 targeted requests. This wasn’t busywork; it was focused practice on the skills that separate an okay paper from a credible one.
What this feels like by Week 3
- In class: You hear better questions: “Is my counter-argument actually addressing the strongest objection?” instead of “How many points is the conclusion worth?”
- In your LMS: Feedback gets used. We saw 2,000+ ratings across 920 graded instances, averaging ~2.18 ratings per student-assignment overall and ~4 ratings on resubmittable work. That’s a feedback map students can act on.
- In office hours: Conversations shift from grade defense to craft improvement. It’s less adversarial, more collaborative.
- For you: You’re not grading 12 versions of everything. Because students request targeted reassessment, your workload stays sane. You respond to the exact objective they flagged, not the entire assignment.
Isn’t this grade inflation in disguise?
The data argues the opposite. If this were fluff, you’d see wild jumps everywhere. Instead, you see precise gains on the exact skill students worked on, while their already-strong initial ratings still nudged higher. That’s not inflation; that’s refinement. And because reassessment is limited to select, high-value assignments, rigor stays intact.
Will weaker students just take unlimited shots?
They didn’t. The majority of resubmissions targeted one objective. That’s a behavior you get when the system is clear, the rubric is specific, and revision is purposeful. Students learn to diagnose. That’s the leadership skill we actually want: identify the constraint, improve the constraint.
“What about my time?”
Two design choices kept the lift reasonable:
- Targeted requests (students specify the objective).
- Selective availability (only the most important, multi-objective assignments are reassessable).
Those two levers concentrate everyone’s effort where it has the highest instructional ROI. You give feedback once, and it keeps paying dividends as students apply it across the rest of the course.
Why it resonates with students (and your course evals)
Students don’t feel like one grade defines them. Anxiety drops, agency rises, and effort shifts earlier, because a first submission becomes a starting point, not a verdict. When the path to improvement is visible, you don’t need threats to get effort; you need clarity. That’s what mastery grading supplies.
If you want to pilot this next term
- Pick 3–5 high-value assignments with multiple objectives.
- Use an objective-level rubric (E-S-N-U works well) with short, actionable descriptors.
- Require one targeted objective per reassessment request (two max).
- Cap the number of reassessments per student or per assignment to protect your time.
- Track four simple KPIs: participation rate, requests per resubmission, mastery delta, and ratings per submission.
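If you want to track those four KPIs without buying anything, a small script over per-student records is enough. This is a sketch under assumed field names (`requests`, `resubmissions`, `initial`, `revised`, `ratings`, `submissions`); it is not TeachFront's reporting, just the arithmetic the KPIs imply.

```python
def reassessment_kpis(students, records):
    """Compute the four pilot KPIs from per-student reassessment records.

    `records` maps student id -> dict with assumed keys:
      "requests": targeted objective requests filed,
      "resubmissions": resubmitted artifacts,
      "initial" / "revised": mean mastery before/after (4-point scale),
      "ratings": total rubric ratings received,
      "submissions": total graded submissions.
    """
    active = [s for s in students
              if records.get(s, {}).get("resubmissions", 0) > 0]
    total_resub = sum(r["resubmissions"] for r in records.values())
    total_req = sum(r["requests"] for r in records.values())
    total_ratings = sum(r["ratings"] for r in records.values())
    total_subs = sum(r["submissions"] for r in records.values())
    deltas = [r["revised"] - r["initial"]
              for r in records.values() if r["resubmissions"] > 0]
    return {
        "participation_rate": len(active) / len(students),
        "requests_per_resubmission": total_req / total_resub if total_resub else 0.0,
        "mastery_delta": sum(deltas) / len(deltas) if deltas else 0.0,
        "ratings_per_submission": total_ratings / total_subs if total_subs else 0.0,
    }
```

Run it once per term and you can see at a glance whether participation and precision are moving the way the RAIK 185 numbers did.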
When students are invited to fix the right thing, they do. And when instructors make feedback usable, learning becomes visible quickly: participation up (34% → 65%), precision intact (majority one-objective requests), and mastery gains that are both large and legitimate (+2.69 levels on key tasks). That's how you shift a course culture from points to progress, without giving up rigor, and without giving up your weekends.
