If you teach an upper-level software course, you’ve lived this: students ship a “final” SRS, user stories, or UML, then hope for the best. But in the real world, requirements are negotiated, iterated, and clarified under feedback. 

Requirements Elicitation & Modeling, an in-person course of 88 students across combined undergraduate and graduate sections taught by Dr. Bonita Sharif, leaned into that reality. The course introduced targeted reassessment: limited opportunities to revise specific objectives on the most significant, most integrative assignments, so students practiced the loop they’ll use on cross-functional teams for the rest of their careers.

What changed (and what didn’t)

Standards didn’t drop; the path to mastery changed. Out of 55 total assignments in Spring ’24, only 5 allowed resubmission. By design, these were the heavy hitters (an average of 3.8 objectives each vs. 1.84 course-wide), where authentic feedback and a second swing deliver the most learning ROI. That choice concentrated both student and instructor effort precisely where modeling quality, traceability, and stakeholder clarity matter most.

How students used the option

Participation was solid and purposeful: 44.3% of students submitted at least one reassessment, and when they did, they typically targeted the exact weak spot rather than redoing everything. On average, students made 2.16 objective-level requests per resubmission, and in 50% of cases they flagged a single objective, e.g., “Use-case scenario completeness” or “Glossary and domain vocabulary alignment.” That behavior tells you students were reading feedback, isolating the constraint, and fixing it, which is exactly how healthy engineering teams operate.

Did it move the needle?

Yes, and in ways that look like real learning, not grade inflation. Across 120 reassessed objectives on the E-S-N-U mastery scale (coded so that 4 = Exemplary), average ratings rose from 2.13 to 3.75 (+1.62 levels). 88.3% of objectives improved and none declined. If students were just “resubmitting to gamble,” you’d expect randomness. What we saw instead was consistent, targeted lift aligned with the exact objective students worked on (e.g., improving the measurability of acceptance criteria, tightening actor-goal alignment, clarifying non-functionals).

Context matters: they already started strong

This wasn’t a rescue net for weak work. Even the earliest ratings across the course hovered between 3.68 and 3.82 out of 4.0; students were submitting credible artifacts on the first pass and still had the chance to refine. That’s the difference between “chasing points” and “iterative improvement.” The reassessment window let students close specific gaps: ambiguity, missing pre/postconditions, or inconsistent glossary terms.

What this feels like by Week 3

  • In studio/lab: Questions shift from “Is this an A?” to “Does this use case actually cover exception paths?” and “Do our NFRs map to testable criteria?”

  • In your LMS: You see objective-level requests; students name the thing they intend to fix. Your feedback gets used, not just viewed.

  • In teams: Peer discussions mirror stand-ups: “We’re solid on the domain model, but our stakeholder interview notes aren’t reflected in the glossary; let’s fix terminology drift before it spreads.”

  • For you: Because reassessment is limited to 5 of 55 assignments and requests are targeted, your time is spent where it moves product quality, not on blanket re-grading.

“Isn’t this just grade inflation?”

The pattern argues otherwise. Gains are localized to the objectives students worked on; there’s no across-the-board spike that would suggest leniency. We also observed no declines on reassessed objectives. Students weren’t throwing darts; they were applying feedback to improve specific engineering behaviors (clarity, completeness, consistency, and testability). That’s refinement, not inflation.

“Will some students take unlimited shots?”

They didn’t. With resubmission restricted to high-value work and framed as objective-level requests, students behaved like responsible contributors: fix the smallest thing blocking correctness. The median resubmission looked like tightening an actor’s goal, aligning a use-case step with system responsibilities, or replacing vague NFRs (“fast,” “secure”) with measurable criteria: surgical edits that lift quality without bloating instructor workload.
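
To make “measurable” concrete, here is a minimal sketch of what that rewrite can look like when the criterion is wired to an actual check. The identifier, threshold, and latency numbers below are invented for illustration; they are not the course’s rubric or any team’s real requirement.

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class MeasurableNFR:
    id: str
    statement: str      # the criterion as it would read in the spec
    threshold_s: float  # the boundary that makes it testable

# "The system shall be fast" becomes something a test can pass or fail:
search_latency = MeasurableNFR(
    id="NFR-PERF-01",
    statement="95% of catalog searches complete within 2.0 s under 100 concurrent users",
    threshold_s=2.0,
)

def p95(samples: list[float]) -> float:
    """95th percentile of observed latencies (last of 20 quantile cut points)."""
    return quantiles(samples, n=20)[-1]

def validate(nfr: MeasurableNFR, observed_s: list[float]) -> bool:
    """Maps the written criterion to a concrete validation check."""
    return p95(observed_s) <= nfr.threshold_s

# Hypothetical load-test samples, in seconds
print(validate(search_latency, [0.4, 0.7, 1.1, 0.9, 1.8, 1.3, 0.6, 1.5]))
```

Even when students never automate the check, forcing a criterion into this shape exposes the numbers that are missing from “fast” or “secure.”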

Why it resonates with software students (and hiring managers)

Requirements work is where engineering maturity shows up. When students can iterate under feedback, they practice the habits that make them employable: negotiating scope, naming ambiguity, and converging on testable acceptance criteria. Anxiety drops because a first submission is a checkpoint, not a verdict; agency rises because the path to “better” is explicit and tied to named objectives. That translates directly to internship and junior-dev readiness.

How to pilot this next term (and protect your time)

  • Pick the right targets: Limit reassessment to your most integrative deliverables (e.g., Vision & Scope, Domain Model + Glossary, Use Case or User Story set with acceptance tests).

  • Use objective-level rubrics: Keep descriptors brief and observable (e.g., “Each NFR is measurable and mapped to validation”).

  • Require focused requests: 1–2 objectives per resubmission maximum; students must state “what” and “how” they improved.

  • Track four simple KPIs: participation rate (who engages), requests per resubmission (precision), mastery delta (lift), and ratings per submission (feedback richness); a starter sketch follows this list.

  • Close the loop in class: Do 5-minute “before/after” shares; make iteration socially normal and technically specific.
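
If you want a starting point for the KPI tracking above, a minimal sketch follows. It assumes a simple export of resubmission records; the field names, the E/S/N/U-to-numeric mapping, and the sample data are hypothetical placeholders for whatever your LMS actually stores.

```python
from statistics import mean

SCALE = {"E": 4, "S": 3, "N": 2, "U": 1}  # assumed mastery coding, 4 = Exemplary

# Hypothetical export: one record per resubmission
resubmissions = [
    {"student": "s01", "objectives": ["NFR measurability"],
     "before_after": [("N", "E")], "ratings_given": 3},
    {"student": "s02", "objectives": ["Actor-goal alignment", "Glossary consistency"],
     "before_after": [("S", "E"), ("N", "S")], "ratings_given": 5},
]
enrolled = 88  # course enrollment

participation_rate = len({r["student"] for r in resubmissions}) / enrolled
requests_per_resub = mean(len(r["objectives"]) for r in resubmissions)
mastery_delta = mean(SCALE[after] - SCALE[before]
                     for r in resubmissions
                     for before, after in r["before_after"])
ratings_per_submission = mean(r["ratings_given"] for r in resubmissions)

print(f"Participation: {participation_rate:.1%} | "
      f"Requests/resubmission: {requests_per_resub:.2f} | "
      f"Mastery delta: {mastery_delta:+.2f} | "
      f"Ratings/submission: {ratings_per_submission:.1f}")
```

Each number maps directly to the figures reported in this post: participation (44.3%), precision (2.16 requests per resubmission), and lift (+1.62 levels).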

A low-frequency, high-impact loop works: limited reassessment on objective-dense assignments led to targeted student behavior (half of resubmissions fixed a single objective), broad improvement (88.3% of reassessed objectives rose, none fell), and meaningful mastery gains (+1.62 levels on average). That’s not softer grading; it’s better engineering practice embedded into assessment. In CSCE 468/868, students didn’t just resubmit; they learned to diagnose, negotiate, and ship clearer specs: the habits hiring managers actually pay for.