In the past, various waves of attention and hype around technologies and solutions have swept through Technology Enhanced Learning. Today’s buzzword is “Big Data”. The term refers to the huge amounts of data, coming from different sources, that become too large, complex and dynamic for conventional data tools to capture, store, manage and analyse. Big data approaches and technologies are of interest to many application fields as an answer to the management of what has been called the “data deluge”. Big Data therefore raises two issues: how to store such large amounts of data, and how to build analytics tools for these huge datasets, seeking faster and more scalable solutions to store and process all the data collected. This paper presents our experience in designing and implementing a mechanism to refactor the persistence layer of our virtual learning platform, named “Online Communities”. At the moment, volumes are still those of a mid-range, non-critical database application, with tens of thousands of users accessing the platform every day for their e-learning tasks. When we added SCORM logging, the problem became considerably more complex. Furthermore, SCORM was not sufficient for some educational paths, especially outside university courses, and, aware of the limitations of the SCORM standard, we implemented a meta-SCORM service that we call “educational paths”. Here, the logs grew even larger, ranging from clicks and users’ actions to logs of the SCORM player and of the educational path service itself.
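To illustrate the kind of SCORM-player log record discussed above, the sketch below models one entry as a small Python dataclass serialised to JSON. This is not the platform’s actual schema: the class name and all field names (user_id, sco_id, event, value, timestamp) are our assumptions; only the cmi.core.lesson_status element comes from the SCORM run-time data model.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScormLogEntry:
    """Hypothetical shape of one SCORM-player log record."""
    user_id: str    # platform account identifier (assumed name)
    sco_id: str     # the SCORM content object being tracked (assumed name)
    event: str      # a SCORM run-time data-model element, e.g. cmi.core.lesson_status
    value: str      # the value reported by the player for that element
    timestamp: str  # ISO-8601 time of the event

entry = ScormLogEntry(
    user_id="u12345",
    sco_id="course42/sco-03",
    event="cmi.core.lesson_status",
    value="completed",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Each such record becomes one JSON line in the action log.
print(json.dumps(asdict(entry)))
```

Even at roughly a hundred bytes per record, logging every player event for every learner is what pushes volumes out of the comfort zone of a conventional relational persistence layer.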
In general, several data-gathering and manipulation elements push our virtual learning environment towards Big Data and increase the need for a structural change of LMS architectures towards big data approaches and technologies:
• traditional web logs;
• internal logs of platform usage, the so-called “digital breadcrumbs”, which track the learner’s journey throughout the entire learning experience;
• service logs, i.e. users’ actions on the different elements of the platform such as documents, forums, blogs, FAQs, etc.;
• logs from the SCORM player;
• mobile logs, where data about mobile learning actions are collected;
• Tin Can API (xAPI) calls, where the platform is connected to, or acts as, a Learning Record Store;
• Massive Open Online Courses (MOOCs), by definition a generator of high volumes of data;
• life-long learning, an old buzzword of the e-learning jargon that is still valid and interesting and, above all, is another generator of big data, specifically over time;
• serious games, which use materials inside the platform to turn games into an educational setting, thus generating a relevant dataset of users’ performance.
From the analysis we made in a test use case, user interaction generates 2 GB of data per month for the users’ actions log alone, with just 2,000 users/day. Clearly, the overall picture could become much more challenging for any LMS serving hundreds of thousands of participants following intensive MOOCs. The paper presents the architectural and software solutions planned and experimented with inside our virtual communities’ platform to start moving towards the big data field, with an explanation of the technologies chosen, the changes in the architecture of the application, and the (big data) analytics tool for the real-time analysis of users’ performance.
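The figures above (2 GB/month of action logs at 2,000 users/day) can be extrapolated to MOOC scale with a back-of-the-envelope calculation. The linear-growth assumption is ours, not a measurement from the paper:

```python
# Observed in our test use case: ~2 GB/month of user-action logs
# at 2,000 active users/day.
GB_PER_MONTH = 2.0
USERS_PER_DAY = 2_000

# Per-user monthly footprint of the action log, in MB (~1 MB here).
per_user_month_mb = GB_PER_MONTH * 1024 / USERS_PER_DAY

def projected_gb_per_month(daily_users: int) -> float:
    """Linear extrapolation (an assumption) of action-log volume in GB/month."""
    return per_user_month_mb * daily_users / 1024

# A hypothetical MOOC-scale platform with 200,000 daily participants:
print(round(projected_gb_per_month(200_000)))  # ~200 GB/month for this one log
```

Even under this simple linear model, and counting only the users’ actions log, a MOOC-scale deployment moves well past what a single conventional relational database comfortably handles, which motivates the persistence-layer refactoring described in the paper.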
Big data and the impact on e-learning platform: a case study / Molinari, A.; Bouquet, P. - Electronic. - (2016). (Paper presented at the EDULEARN conference held in Barcelona, Spain, 04-06/07/2016) [10.21125/edulearn.2016].
Big data and the impact on e-learning platform: a case study
Molinari, A.; Bouquet, P.
2016-01-01