If you teach discussion-heavy courses, you’ve probably wondered: what if revision weren’t the exception but the norm? In RAIK 186H (Leadership II, 39 students, in-person, once a week), nearly every assignment was open to targeted reassessment, and the results look like what we wish office hours produced: students fixing the right thing, fast, and moving work from “good” to “truly strong” without lowering the bar.
What changed (and what didn’t)
Rigour stayed. The path to mastery changed. Spring ’24 ran a tight sequence of 10 assignments, and 9 of 10 allowed resubmission. These weren’t lightweight; every single one carried 5–6 objectives, and resubmittable tasks averaged 6.0 objectives, so revisions targeted real complexity (analysis, application, reflection) rather than busywork.
How students used the option
Engagement was consistent and focused. 89.7% of students (35/39) submitted at least one reassessment, generating 104 resubmissions and 182 objective-level requests. Crucially, 51% of those resubmissions targeted a single objective: the pattern you see when learners read feedback, isolate the constraint (e.g., clarity of claim, evidence integration, reflection depth), and fix it rather than redoing everything. Average load stayed sane at ~1.75 objective requests per resubmission (182 ÷ 104): surgical, not scattershot.
Did it move the needle?
Yes, substantively and predictably. Across 173 reassessed objectives on the reversed E-S-N-U scale (4 = Exemplary), average ratings rose from 2.38 → 3.92 (+1.54 levels). 93.1% of reassessments improved, and none declined. That’s not randomness or leniency; that’s targeted lift on the exact skill students chose to remediate.
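For concreteness, here is a minimal sketch of that lift computation, assuming ratings are recorded as letters on the reversed E-S-N-U scale. Only “4 = Exemplary” comes from the course; the other letter expansions, the record layout, and the toy data are illustrative assumptions, not the actual gradebook.

```python
# Minimal sketch of the objective-level lift computation (illustrative only).
# Assumed expansion of the reversed E-S-N-U scale; only E=4 is confirmed above.
SCALE = {"E": 4, "S": 3, "N": 2, "U": 1}

# One (before, after) rating pair per reassessed objective -- toy data.
reassessed = [("N", "E"), ("S", "E"), ("U", "S"), ("S", "S")]

before = [SCALE[b] for b, _ in reassessed]
after = [SCALE[a] for _, a in reassessed]

n = len(reassessed)
avg_before = sum(before) / n
avg_after = sum(after) / n
improved = sum(a > b for b, a in zip(before, after))
declined = sum(a < b for b, a in zip(before, after))

print(f"avg lift: {avg_before:.2f} -> {avg_after:.2f} (+{avg_after - avg_before:.2f})")
print(f"improved: {improved / n:.1%}, declined: {declined / n:.1%}")
```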
Context matters: students already started strong
Initial performance across the course was high, averaging 3.84/4.0 on first ratings, yet still climbed to 3.97 by the final rating. Reassessment didn’t rescue weak work; it helped solid work become excellent, especially on multi-objective tasks where small gaps (logic, evidence choice, reflective specificity) separate “good” from “great.”
What this feels like by Week 3
- In class: Questions shift from “What’s this worth?” to “Does my analysis actually answer the case’s core tension?” and “Where exactly is my reasoning thin?”
- In your LMS: You see objective-level requests; students name what they’re fixing and why. Your comments get used, not just viewed.
- In office hours: Conversations turn collaborative. “If I tighten this claim and add one counterexample, does it meet the objective?”
- For you: Because almost all tasks are revisable but requests are targeted, you respond to the precise gap rather than regrading the entire artifact. Workload stays focused.
Isn’t that grade inflation?
The pattern says refinement, not inflation. Gains are localized to the objectives students address; there’s no across-the-board spike you’d expect from softer grading. And with zero declines on reassessed objectives, students weren’t throwing darts; they were applying feedback to close identifiable gaps (clarity, evidence, depth), just like they’ll need to do in capstones and the workplace.
Will students take unlimited shots?
They didn’t. When revision is framed as objective-level and tied to clear descriptors, students behave like responsible collaborators: they fix the smallest thing blocking quality. The median move looked like tightening a claim, aligning evidence with the stated rationale, or making reflection more specific and actionable (minor edits with outsized impact).
Why this lands with students (and your evals)
When the first submission is a checkpoint, not a verdict, anxiety drops and agency rises. Students learn to diagnose which part of their work needs attention and how to improve it. That habit of naming ambiguity, tightening logic, and showing evidence is itself a leadership practice. It also makes feedback loops feel fair: transparent standards, visible progress, real second chances.
How to pilot this
- Make revision usual, not endless: Allow reassessment widely but cap requests to 1–2 objectives per resubmission.
- Keep descriptors observable: Tie each objective to the evidence you expect (e.g., “claims are specific and falsifiable,” “reflection identifies a behaviour change”).
- Use a predictable loop: Student states the objective + change; you respond only to that slice.
- Track four simple KPIs: % of students engaging (participation), objective requests per resubmission (precision), mastery delta (lift), and ratings per submission (feedback richness). Then review the dashboard mid-term to see where to coach in class; a minimal sketch of these computations follows this list.
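Here is one way those four KPIs could be computed. The record shape, field names, and values below are hypothetical stand-ins for however your LMS exports resubmission data, not an actual export format.

```python
# Minimal sketch of the four pilot KPIs (hypothetical record shape and data).
resubmissions = [
    # One record per resubmission: who, which objectives, ratings before/after (1-4).
    {"student": "s01", "objectives": ["claim-clarity"], "before": [2], "after": [4]},
    {"student": "s02", "objectives": ["evidence", "reflection"], "before": [3, 2], "after": [4, 3]},
    {"student": "s01", "objectives": ["reflection"], "before": [3], "after": [4]},
]
roster_size = 39  # students enrolled in the course

n = len(resubmissions)
participation = len({r["student"] for r in resubmissions}) / roster_size
precision = sum(len(r["objectives"]) for r in resubmissions) / n
befores = [x for r in resubmissions for x in r["before"]]
afters = [x for r in resubmissions for x in r["after"]]
lift = sum(afters) / len(afters) - sum(befores) / len(befores)
richness = len(afters) / n  # objective-level ratings issued per resubmission

print(f"participation: {participation:.1%}")   # % of students engaging
print(f"precision:     {precision:.2f} objectives per resubmission")
print(f"mastery delta: +{lift:.2f} levels")
print(f"richness:      {richness:.2f} ratings per submission")
```

Reviewing these numbers mid-term tells you where to coach: low precision suggests students are shotgunning revisions; a flat mastery delta suggests feedback isn’t landing.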
When revision becomes standard and requests stay targeted, you get the best of both worlds: near-universal engagement (89.7%), focused behaviour (51% single-objective edits, ~1.75 requests each), and meaningful mastery gains (+1.54 levels; 93.1% up, 0% down), all while keeping instructor effort concentrated where it moves quality. That’s how you turn “rework” into real growth in a leadership course.
