How did early learning systems know when a learner finished a course, whether they passed a test, or how long they spent in a module?
➡️ Through the AICC Data Model, one of the earliest and most influential blueprints for tracking learning progress.
Before modern learning analytics, before SCORM's cmi.* fields or xAPI's JSON statements, AICC defined a clear, structured way for content and LMSs to speak the same language.
🧩 The Core of the Model
At its heart, the AICC Data Model was a list of standardized variables, or data elements, describing everything important about a learner's interaction with a course.
Each piece of information was exchanged as a simple name/value pair, such as:
core.lesson_status=completed
core.score.raw=95
core.session_time=00:45:32
The prefix core. grouped the data under a single logical namespace, making it easy for both the LMS and the course to interpret.
These values were passed back and forth via the HACP protocol, usually through plain HTTP POST requests.
The LMS stored these records in its database, while the content used them to decide what came next — for instance, whether to unlock a follow-up lesson or display a completion message.
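The exchange above can be sketched in a few lines. This is an illustrative simplification, not the literal HACP wire format (the actual specification groups values into INI-style sections such as [Core]); the session id and the flat dotted-pair payload here mirror the article's examples rather than the spec:

```python
from urllib.parse import urlencode

def build_putparam_body(session_id, values):
    """Flatten core.* name/value pairs into a HACP-style PutParam POST body."""
    # Join the pairs line by line, as a course might report them back.
    aicc_data = "\r\n".join(f"{name}={value}" for name, value in values.items())
    return urlencode({
        "command": "PutParam",     # HACP command verb
        "version": "2.2",          # protocol version claimed by the content
        "session_id": session_id,  # handed to the course at launch
        "aicc_data": aicc_data,    # the actual data-model payload
    })

# Example: the course reports the values shown above to the LMS endpoint.
body = build_putparam_body("ABC123", {
    "core.lesson_status": "completed",
    "core.score.raw": "95",
    "core.session_time": "00:45:32",
})
```

An actual course would POST this body to the AICC_URL the LMS supplied at launch; the point here is just how little machinery a plain name/value exchange over HTTP needs.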
🧠 What It Captured
The AICC Data Model was surprisingly complete for its time. It tracked elements like:
core.lesson_status
Learner’s progress state (incomplete, completed, passed, failed).
core.score.raw
The numeric score, often between 0 and 100.
core.total_time and core.session_time
Time spent overall and per session.
core.lesson_location
Bookmarking data, allowing learners to resume where they left off.
core.student_name and core.student_id
Identifying information from the LMS.
core.entry and core.exit
Indicators for course entry and exit behavior.
While minimal compared to today’s learning records, this structure enabled consistent progress tracking across vendors and platforms. For the 1990s, that was a breakthrough.
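To make the bookmarking idea concrete, here is a minimal sketch of how content might interpret a handful of these values after fetching them from the LMS. The field names follow the list above; the status vocabulary and the helper names (parse_core, resume_point) are illustrative assumptions, not part of the specification:

```python
def parse_core(pairs):
    """Split newline-separated name=value pairs into a dict."""
    data = {}
    for line in pairs.strip().splitlines():
        name, _, value = line.partition("=")
        data[name.strip()] = value.strip()
    return data

def resume_point(data):
    """Return the saved bookmark if the lesson is unfinished, else None."""
    if data.get("core.lesson_status") in ("completed", "passed"):
        return None  # nothing to resume; the course is done
    return data.get("core.lesson_location") or None

# A record as the LMS might return it at course launch.
record = parse_core("""
core.lesson_status=incomplete
core.lesson_location=module3/page7
core.score.raw=40
""")
```

With that record, resume_point sends the learner back to module3/page7; once the status flips to completed or passed, it returns None and the course can show a completion message instead.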
🧮 From Data Fields to Learning Insights
AICC’s data model introduced an early form of learning analytics, not in dashboards or charts, but in standardization.
It allowed organizations to measure training effectiveness, track completion rates, and ensure compliance training was verifiably done.
This was particularly crucial in aviation, where regulatory oversight demanded clear proof of completed training.
The data model made that possible, not through complexity but through clarity.
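Even the time bookkeeping behind those completion reports was simple arithmetic. A sketch of how an LMS might roll per-session times into a running total (the HH:MM:SS format matches the examples earlier; the rollover logic here is an assumption about one plausible implementation, not the spec's wording):

```python
def to_seconds(hms):
    """Convert an HH:MM:SS string to a count of seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def to_hms(seconds):
    """Format a count of seconds back into HH:MM:SS."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# Two sessions' worth of core.session_time folded into a total.
total = to_hms(sum(to_seconds(t) for t in ["00:45:32", "01:10:05"]))
```

That total is the kind of value an LMS would persist as core.total_time, ready to be handed back to the course on the next launch.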
Every subsequent standard, SCORM's cmi.*, xAPI's result and context fields, and even cmi5's statements, can trace their lineage back to AICC's simple idea:
“Define a shared vocabulary for learning.”
🧭 Why It Still Matters
AICC’s data model was more than a specification. It was a philosophy of interoperability.
It showed that data models are not just technical documents; they are contracts between systems that define not only what is exchanged, but also what is understood.
Even decades later, when developers design APIs, schemas, or event payloads, the same principles apply:
Keep it human-readable.
Define your vocabulary clearly.
Think about longevity, not just today’s implementation.
💡 Developer Reflection
The AICC Data Model reminds us that simplicity can scale when it is built on shared understanding.
It was the first true language of learning, proving that interoperability starts with empathy between systems.
In your own work, how often do you define data models with that kind of long-term clarity in mind?
🔭 What’s Next
Next week, we’ll explore how AICC content was actually hosted, launched, and tested, from packaging and launch URLs to the LMS endpoints that made everything connect.
You’ll see how developers deployed these early courses and how surprisingly modern the flow still feels today.
🔢 4 of 8 | AICC – The Origins of E-Learning Standards