Tech Tuesday: Translating mLevel Events to xAPI Statements

As I mentioned in last week's post, mLevel is beginning to push the learning industry toward using all of the data available to L&D to make better decisions based on how employees interact with their learning programs. One way to do that is to translate the events that occur during an interaction into actionable items that tell L&D leaders how to evolve their learning programs to be as efficient as possible. mLevel has been capturing events about user behavior on its platform since the platform's inception, but only recently have we been able to provide data back to learners and managers about what happened and when. Specifically, as you can see in the JSON snippet below, mLevel captures the user's score data as well as simple events such as GameStart, QuestionDisplayed and CorrectAnswer, each represented by its gameEventType. The event stream is ordered and, in some cases, contains additional data to assist with parsing the events.

[Image: sample JSON event stream from the mLevel platform]
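
To make that structure concrete, here is a rough, hypothetical sketch of such a payload as a Python dictionary. Aside from score, stars, duration, knowledgeAccuracy and gameEventType, which are described above, the field names and values are illustrative assumptions rather than mLevel's actual schema:

    # Hypothetical reconstruction of an mLevel event payload; only score, stars,
    # duration, knowledgeAccuracy and gameEventType come from the post itself.
    event_payload = {
        "userId": "john.doe@example.com",  # assumed identifier field
        "score": 850,                      # assumed example values
        "stars": 2,
        "duration": 94,                    # assumed to be seconds
        "knowledgeAccuracy": 0.125,
        "events": [                        # the ordered event stream
            {"gameEventType": "GameStart"},
            {"gameEventType": "QuestionDisplayed", "questionId": "q1"},
            {"gameEventType": "CorrectAnswer", "questionId": "q1"},
        ],
    }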

Upon completion of any activity on the mLevel platform, this JSON payload is sent to the back end, where it is prepared for processing by the various event-state engines that provide analytics. One step of that preparation allows the system to validate that the user's score information (score, stars, duration and knowledgeAccuracy) is correct, in order to detect any fraudulent attempts at pushing data into the system. Once verification completes, mLevel pushes the event payload out to a Microsoft Azure Event Hub to be consumed by anything that is subscribing to it. From there, any subscribing platform can take the data and massage it into a format that its own system can consume.
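
To illustrate the publish step, here is a minimal sketch using the azure-eventhub Python SDK; the connection string, hub name and payload are placeholders, and this is not mLevel's internal implementation:

    import json
    from azure.eventhub import EventHubProducerClient, EventData

    payload = {"score": 850, "knowledgeAccuracy": 0.125}  # e.g., the sketch above

    producer = EventHubProducerClient.from_connection_string(
        conn_str="<event-hub-connection-string>",  # placeholder
        eventhub_name="<event-hub-name>",          # placeholder
    )
    with producer:
        batch = producer.create_batch()
        batch.add(EventData(json.dumps(payload)))
        producer.send_batch(batch)

Anything downstream, including the xAPI consumer described later in this post, can then subscribe to the hub independently.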

Quick Overview of xAPI

Before going into how mLevel has built consumers for its event hub that parse events into xAPI statements, it is important to understand how xAPI works and what its goal is. xAPI is a standard released in 2013 out of the Tin Can project; it was later renamed the Experience API, or xAPI for short. The goal of the standard is to provide a common way to capture learning interaction and experience data and turn it into value for users. xAPI uses structured statements that read much like the old use cases from software development: there is an Actor, an Action (Verb), and an Object on which the action is performed. The key principle behind xAPI is that it can describe any interaction with anything in a learning program using one common structure, enabling reporting across all objects with little manipulation. Additionally, the level of granularity at which statements are reported is determined by the system generating them. When a statement generation engine turns raw data into statements, various kinds of additional metadata can be attached to help the reporting engine surface as much detail as possible and enhance the viewer's analytical experience. A sample structure, provided by the xAPI-Spec (link below), is shown below:

[Image: sample xAPI statement structure from the xAPI-Spec]

Note how clearly the Actor, Verb (or action) and Object are outlined in this case, and that even attachments can be added to the statement. If you were to read the statement itself, rather than the raw JSON shown above, you might see something like:

Sample Agent answered Multi Part Activity with the attachment [link].
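
To give a minimal sense of the shape, a bare-bones statement built from the actor / verb / object triple might look like the following Python dictionary (the names and IRIs are illustrative):

    # A minimal xAPI statement: actor, verb, object. Names and IRIs are
    # illustrative, not taken from the spec's sample.
    statement = {
        "actor": {
            "objectType": "Agent",
            "name": "Sample Agent",
            "mbox": "mailto:sample.agent@example.com",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/answered",
            "display": {"en-US": "answered"},
        },
        "object": {
            "id": "https://example.com/activities/multi-part-activity",
            "definition": {"name": {"en-US": "Multi Part Activity"}},
        },
    }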

After the statements are created, they can be sent on to a Learning Record Store (LRS), a persistence layer that handles these JSON structures in a way that allows for complex reporting. mLevel has integrated with two LRS systems internally to test the different types of reports that can be generated from mLevel event data. The first is SCORM Cloud, a system managed by Rustici Software, which can be set up quickly and requires no hardware since it is cloud-based. The other is Learning Locker, an open-source LRS that does require a server to host it. It comes with a simple web interface and a quick-and-dirty ad hoc report creation tool, which I will go through in my next entry.
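
Under the hood, submitting a statement to either LRS amounts to an HTTP POST to the LRS's statements resource, with the xAPI version header and Basic auth credentials. A minimal sketch, with placeholder endpoint and credentials:

    import requests

    statement = {
        "actor": {"mbox": "mailto:sample.agent@example.com"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
                 "display": {"en-US": "answered"}},
        "object": {"id": "https://example.com/activities/multi-part-activity"},
    }

    LRS_ENDPOINT = "https://lrs.example.com/data/xAPI"  # placeholder endpoint
    response = requests.post(
        f"{LRS_ENDPOINT}/statements",
        json=statement,
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=("<key>", "<secret>"),  # placeholder Basic auth credentials
    )
    response.raise_for_status()  # the LRS responds with the stored statement ID(s)

Because both SCORM Cloud and Learning Locker expose this same statements resource, statements can be routed to either one with only a configuration change.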

For more information about the LRS systems that mLevel has used, please see the references at the end of this post.

mLevel Events to xAPI Statements

On the mLevel platform, events are picked up by an internal Azure Web Job and translated into statements at several different levels (Mission, Activity or Question), which are configured at the Customer or Tenant level in the mLevel admin panel. In addition to the level assignment, the routing of where statements are submitted is controlled at the same level, so the mLevel job can send messages to any configured LRS simply by specifying the RESTful HTTP endpoint where the messages should be sent. One thing to note is that each level of messaging provides a different depth of data and events, so when setting this up it is important to ensure that the LRS can handle the frequency at which messages will arrive. The platform has been designed so that any combination of these levels can be enabled, sending all of the available data to the LRS for use at a later date. The "Mission" level is probably the most common and the most closely aligned with the Learning Management Systems (LMS) of the past, as it simply builds out a statement that says "John Doe has completed Mission One". Historically, this was the only type of reporting a learning admin looked for, as it provided the "check the box" answer for each employee rather than the depth that would let them grow and mature their learning programs to the point of real effectiveness.
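
As a rough illustration only, a tenant-level configuration along these lines might boil down to something like the sketch below; the shape and field names are my own stand-ins, not the mLevel admin panel's actual schema:

    # Hypothetical shape of a tenant-level configuration: which statement
    # levels to emit and which LRS endpoint(s) to route them to.
    tenant_xapi_config = {
        "tenant": "example-tenant",                  # stand-in tenant name
        "statementLevels": ["Mission", "Question"],  # Mission, Activity and/or Question
        "lrsTargets": [
            {
                "endpoint": "https://lrs.example.com/data/xAPI",  # RESTful endpoint
                "key": "<key>",
                "secret": "<secret>",
            }
        ],
    }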

In mLevel's implementation of xAPI statement generation, the two areas to focus on are the Mission- and Question-level statements, as the Activity level is just a middle tier between the two. Question statements are the most granular, with Mission statements at the other end of the spectrum. Since the event stream shown earlier is all that the xAPI processor is aware of, additional data is needed. mLevel has therefore built out a suite of micro-services that supply the simple data sets needed to populate statements with all of the metadata required to make them robust. For example, in the case of a Mission statement, the processor needs to know which mission is being attempted, and it needs the user's aggregate score details in order to determine whether or not the user has truly completed the Mission.

[Image: Mission-level xAPI statement as stored in Learning Locker]
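
The snapshot uses mLevel's own schema, but based on the description below, the statement's shape is roughly the following; the deep-link URL and the extension IRI used for knowledge accuracy are stand-ins:

    # Rough reconstruction of a Mission-level statement; the deep link and
    # extension IRI are stand-ins, and 0.125 mirrors the 12.5% knowledge
    # accuracy discussed below.
    mission_statement = {
        "actor": {"objectType": "Agent", "mbox": "mailto:sample.agent@example.com"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/attempted",
            "display": {"en-US": "attempted"},
        },
        "object": {
            "id": "https://platform.example.com/missions/mission-one",  # deep-link stand-in
            "definition": {"name": {"en-US": "Mission One"}},
        },
        "result": {
            "completion": False,
            "extensions": {
                # Hypothetical extension IRI for mLevel's knowledgeAccuracy metric.
                "https://platform.example.com/xapi/extensions/knowledge-accuracy": 0.125
            },
        },
    }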

Digging into the statement above, you can see that I attempted this mission and that my knowledge accuracy across it is 12.5%. You can also see the mission details and a deep link to the mission, in case you want to launch it from the LRS directly. All of this data is populated by a request to the Leaderboard micro-service, which hydrates the statement with the metadata required for detailed reporting. The snapshot above represents a single JSON message submitted to the Learning Locker LRS hosted in the mLevel development environment. For a more detailed example, the snapshot below shows the set of statements generated for a single activity interaction within the mLevel platform: ten (10) question statements and a single mission statement for one play of an activity. The statement generation job gathers what basic information it can from the event stream, then calls the Leaderboard and GameContent services to retrieve mission performance aggregates and question / answer details, respectively. All of the statements are generated and sent to the LRS so reports can be built to truly understand what a learner knows when interacting with the mLevel platform.

[Image: question and mission statements generated from a single activity play]
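
Pulling the pieces together, the generation job's flow can be sketched as follows, with the two micro-service calls stubbed out; all function names and return shapes here are assumptions for illustration:

    from typing import Dict, List

    def fetch_mission_aggregate(user_id: str) -> Dict:
        """Stand-in for the Leaderboard micro-service (aggregate performance)."""
        return {"knowledgeAccuracy": 0.125, "completed": False}

    def fetch_question_details(payload: Dict) -> List[Dict]:
        """Stand-in for the GameContent micro-service (question/answer details)."""
        return [{"id": f"https://example.com/questions/q{i}"} for i in range(1, 11)]

    def generate_statements(payload: Dict) -> List[Dict]:
        actor = {"mbox": f"mailto:{payload['userId']}"}
        # One "answered" statement per question in the activity play.
        statements = [
            {
                "actor": actor,
                "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
                         "display": {"en-US": "answered"}},
                "object": {"id": question["id"]},
            }
            for question in fetch_question_details(payload)
        ]
        # Plus a single Mission-level statement carrying the aggregate result.
        aggregate = fetch_mission_aggregate(payload["userId"])
        statements.append(
            {
                "actor": actor,
                "verb": {"id": "http://adlnet.gov/expapi/verbs/attempted",
                         "display": {"en-US": "attempted"}},
                "object": {"id": "https://platform.example.com/missions/mission-one"},
                "result": {"completion": aggregate["completed"]},
            }
        )
        return statements  # each of these is then POSTed to the configured LRS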

In conclusion, this post has shown how the mLevel system has begun to integrate with the future of learning technologies: the LRS and xAPI. The path toward these newer technologies will give learners and learning administrators a real feel for what each employee is struggling with, on a much shorter timeline. They will be able to get into the LRS and run reports within minutes of an activity's completion, then devise action items for each learning program and for the learners interacting with it. Next week, in the conclusion to this series, I will dig deeper into the reports Learning Locker can generate out of the box and show the kinds of action items that can keep a learning program evolving, in much the same way that product development teams have employed agile methodologies to improve the way software is built.

Related Links:

Learning Locker – https://learninglocker.net/

SCORM Cloud – http://scorm.com/scorm-cloud/

xAPI Specification – https://github.com/adlnet/xAPI-Spec