Tracking a score is easy.
Tracking learning state correctly is not.
SCORM 1.2 had a relatively flat and limited data model.
Most developers remember:
- cmi.core.lesson_status
- cmi.core.score.raw
Simple. Limited. Often overloaded.
One field tried to describe too many states at once.
SCORM 2004 changed that fundamentally.
It introduced a more structured and expressive data model that separates concerns instead of blending them.
🔍 The Critical Separation
Completion, Success, Progress
The most important shift in SCORM 2004 is the separation of three concepts:
- completion_status
- success_status
- progress_measure
These are not interchangeable.
✅ completion_status
Describes whether the learner finished the activity.
Common values:
- completed
- incomplete
- not attempted
This answers:
Did the learner finish what was required?
✅ success_status
Describes whether the learner passed or failed.
Common values:
- passed
- failed
- unknown
This answers:
Did the learner meet the success criteria?
Completion and success are independent dimensions.
✅ progress_measure
A numeric value between 0 and 1.
This indicates how far the learner has progressed through the activity.
For example:
- 0.0 means no measurable progress
- 0.5 means halfway
- 1.0 means fully progressed
This allows sequencing rules to calculate rollup logic more precisely.
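The independence of these three fields is visible at the API level. A minimal sketch of reporting all of them, with a small stub standing in for the LMS-provided API_1484_11 object (real content discovers that object on a parent or opener window rather than defining it):

```javascript
// Stub standing in for the SCORM 2004 run-time API object.
// Real content locates API_1484_11 in the browser window hierarchy.
const API_1484_11 = {
  data: {},
  SetValue(element, value) { this.data[element] = String(value); return "true"; },
  GetValue(element) { return this.data[element] ?? ""; },
  Commit() { return "true"; }
};

// A learner halfway through: not finished, no pass/fail decision yet.
// Each dimension is reported independently.
API_1484_11.SetValue("cmi.completion_status", "incomplete");
API_1484_11.SetValue("cmi.success_status", "unknown");
API_1484_11.SetValue("cmi.progress_measure", 0.5);
API_1484_11.Commit("");
```

Nothing here is inferred: halfway progress does not imply incompleteness, and incompleteness does not imply failure.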
🧠 Why This Separation Matters
A learner can:
- Complete a course and fail it
- Pass an assessment but not complete all required activities
- Progress partially without finishing
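Each of those scenarios maps to an independent pair of values, and none of the combinations is contradictory in SCORM 2004's vocabularies. A small sketch (with illustrative values) makes the independence concrete:

```javascript
// SCORM 2004 vocabularies for the two status fields.
const COMPLETION = ["completed", "incomplete", "not attempted", "unknown"];
const SUCCESS = ["passed", "failed", "unknown"];

// The three scenarios above, expressed as data model values.
const scenarios = [
  { completion_status: "completed", success_status: "failed" },  // finished, did not pass
  { completion_status: "incomplete", success_status: "passed" }, // passed, activities remain
  { completion_status: "incomplete", success_status: "unknown", progress_measure: 0.4 } // partial
];

// Every combination is legal: both fields stay within vocabulary.
scenarios.every(s =>
  COMPLETION.includes(s.completion_status) && SUCCESS.includes(s.success_status)
); // true
```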
SCORM 1.2 struggled to represent these nuances cleanly.
SCORM 2004 models them explicitly.
This matters because sequencing rules consume these values.
They are not passive reporting fields.
They influence:
- Whether the next activity unlocks
- Whether rollup marks a parent activity complete
- Whether navigation options become available
The data model drives behavior.
📈 Beyond Status Fields
Expanded Capabilities
SCORM 2004 also expanded support for:
- Objectives tracking
- Interaction reporting
- Scaled scores using cmi.score.scaled
- Learner preferences
- Detailed attempt and session tracking
These elements are not decorative.
They integrate directly with Sequencing and Navigation rules defined in the manifest.
For example:
- Objectives can be mapped and rolled up
- Scaled scores can determine success_status
- Progress measures can trigger completion thresholds
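As a sketch of the scaled-score point: success can be derived by comparing a normalized score against a mastery threshold. The 0.8 threshold below is a hypothetical stand-in for the value a real package declares in its manifest's sequencing rules:

```javascript
// Normalize a raw score into the cmi.score.scaled range.
// The spec requires scaled scores in [-1, 1]; this simple
// normalization yields [0, 1].
function scaledScore(raw, min, max) {
  return (raw - min) / (max - min);
}

// Derive success_status from the scaled score. The 0.8 threshold
// is illustrative; a real package declares its own mastery value.
function deriveSuccess(scaled, threshold = 0.8) {
  return scaled >= threshold ? "passed" : "failed";
}

const scaled = scaledScore(17, 0, 20); // 0.85
deriveSuccess(scaled); // "passed"
```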
The Run-Time Environment provides the data.
The Sequencing engine interprets it.
⚠️ Common Implementation Mistakes
In SCORM 1.2, many LMS platforms inferred logic from minimal data.
If a raw score was set and lesson_status changed, the system attempted to make sense of it.
SCORM 2004 expects explicit state management.
If your content:

- Only sets a raw score
- Ignores completion_status
- Never updates success_status
- Omits progress_measure

then the LMS may behave unpredictably.
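Explicit state management means every dimension is set deliberately at the end of an attempt, leaving nothing for the LMS to infer. A sketch, again with the API object stubbed so it runs outside an LMS:

```javascript
// Stub for the LMS-provided SCORM 2004 API object.
const api = {
  data: {},
  SetValue(element, value) { this.data[element] = String(value); return "true"; },
  Commit() { return "true"; }
};

// Hypothetical helper: report all four dimensions in one place.
function reportAttempt(api, { completed, passed, progress, scaled }) {
  api.SetValue("cmi.completion_status", completed ? "completed" : "incomplete");
  api.SetValue("cmi.success_status",
    passed === undefined ? "unknown" : passed ? "passed" : "failed");
  api.SetValue("cmi.progress_measure", progress);
  api.SetValue("cmi.score.scaled", scaled);
  return api.Commit("");
}

// "Completed but failed" is a legitimate, fully specified state:
reportAttempt(api, { completed: true, passed: false, progress: 1.0, scaled: 0.4 });
```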
Many real-world reporting inconsistencies trace back to misunderstanding this separation.
A course marked “completed but failed” is not necessarily broken.
It may be correctly reflecting distinct states.
🧩 Reporting vs Behavior
It is tempting to think of the data model as a reporting tool.
It is not.
It is a behavioral input layer.
Sequencing rules rely on it.
Rollup logic depends on it.
Navigation availability is influenced by it.
If the data model is incomplete or inconsistent, the entire architectural stack becomes unstable.
Understanding SCORM 2004 means respecting the precision of its data model.
🔢 4 of 12 | SCORM 2004: The Sequencing Era of Learning Standard