Introduction
Recent technological advances have reshaped education through the emergence of e-learning, an approach that uses computers and the internet to make learning materials widely accessible. However, this shift has revealed limitations in traditional e-learning systems, which often follow a one-size-fits-all approach that neglects students' diverse learning styles and abilities. The issue is compounded by variation in the rate at which students grasp new material and by difficulties with specific subtopics.
To meet these diverse expectations and needs, adaptive e-learning systems have emerged, focusing on personalisation by selecting appropriate learning materials based on individual student characteristics.
Research Aims
This research aims to create an adaptive e-learning system that uses an ontology as its knowledge foundation. The system evaluates students' proficiency in various concepts by assessing their responses to questions supplied by an automatic question generator (AQG). In addition, the adaptive system should produce output compatible with both the AQG and a natural language generation algorithm (NLGA).
Methodology
The adaptive learning system is grounded in Item Response Theory (IRT), a psychometric framework that models the probability of a correct response to a test item as a function of the learner's ability and the item's parameters. IRT is used to estimate learner ability for each tested concept, and the estimates are stored in a learner ability bank. The ability for a concept can be updated in two ways: directly, by estimating it from the learner's responses to questions on that concept, and indirectly, through the learner's abilities on related concepts. The indirect update rests on the assumption that concepts are correlated, so a learner's knowledge of one concept can affect their estimated knowledge of a related one.
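To make the estimation step concrete, the sketch below shows maximum-likelihood ability estimation under a two-parameter logistic (2PL) model, a standard dichotomous IRT model. The specific IRT model used by the system is not stated here, so the model choice, function names, and item parameters are illustrative assumptions.

```python
import math

def p_correct(theta, a, b):
    # 2PL item characteristic curve: probability that a learner with
    # ability theta answers an item with discrimination a and
    # difficulty b correctly.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, iters=50):
    # Maximum-likelihood estimate of theta via Newton-Raphson.
    # responses: list of 0/1 answers; items: list of (a, b) pairs,
    # one per question on the concept being assessed.
    theta = 0.0
    for _ in range(iters):
        grad, hess = 0.0, 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            grad += a * (r - p)            # dL/dtheta
            hess -= a * a * p * (1.0 - p)  # d2L/dtheta2 (always <= 0)
        if abs(hess) < 1e-12:
            break
        theta -= grad / hess
        theta = max(-4.0, min(4.0, theta))  # keep the estimate bounded
    return theta

# Example: three responses to items of increasing difficulty.
print(estimate_ability([1, 1, 0], [(1.0, -1.0), (1.2, 0.0), (0.9, 1.0)]))
```

The estimate is clamped to a bounded range because the maximum-likelihood estimate diverges when a learner answers every item correctly (or incorrectly).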
The adaptive learning system comprises two core components: a response processor, which processes learner responses and calculates the learner's ability for each concept, and an ontology handler. The ontology handler manages all ontology-related operations, makes the knowledge stored in the ontology available to the other system components, and constructs the output of the adaptive system.
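As one illustration of the ontology handler's role, the sketch below queries an RDF ontology for the concepts directly related to a given concept using rdflib. The namespace and the relatedTo property are hypothetical placeholders, since the actual ontology vocabulary is not given here.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/elearning#")  # hypothetical vocabulary

def related_concepts(ontology_path, concept_uri):
    # Load the course ontology and return the concepts directly
    # linked to the given concept in either direction.
    g = Graph()
    g.parse(ontology_path)
    query = """
        SELECT ?related WHERE {
            { <%s> ex:relatedTo ?related . }
            UNION
            { ?related ex:relatedTo <%s> . }
        }
    """ % (concept_uri, concept_uri)
    return [str(row.related) for row in g.query(query, initNs={"ex": EX})]
```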
The system's output is a learner knowledge model: a set of triples representing connections between subjects in the ontology. The triples cover every subject in the test, its estimated learner ability, and its related subjects.
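A minimal sketch of how such a knowledge model might be assembled and serialised with rdflib follows; the learnerAbility and relatedTo properties and the namespace are again assumptions for illustration.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/elearning#")  # hypothetical vocabulary

def build_knowledge_model(abilities, related):
    # abilities: {concept URI: estimated ability (theta)}
    # related:   {concept URI: [URIs of related concepts]}
    g = Graph()
    g.bind("ex", EX)
    for concept, theta in abilities.items():
        c = URIRef(concept)
        g.add((c, EX.learnerAbility, Literal(theta, datatype=XSD.float)))
        for rel in related.get(concept, []):
            g.add((c, EX.relatedTo, URIRef(rel)))
    return g.serialize(format="turtle")
```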
Evaluation
Three evaluations were conducted on the adaptive learning system. The first was a qualitative analysis in which real participants assessed how accurately the system estimated their proficiency in the test concepts. The second was a quantitative analysis that checked whether the system responded correctly to simulated tests and compared the concept scores it assigned with those obtained through traditional test scoring. The third focused on performance and efficiency during large-scale tests, including a comparison of the adaptive system's performance with that of other systems that use IRT.
Results
In the first evaluation, participants rated the system as only moderately accurate overall: 33% believed it accurately estimated their true abilities, 45% found it somewhat accurate, and 22% found it inaccurate, underscoring the need for further improvement.
The second evaluation revealed discrepancies between the adaptive system's concept scores and those produced by traditional test scoring. These discrepancies stem from factors such as question difficulty and the non-linear relationship between learner ability and expected score in IRT assessments.
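The non-linearity can be illustrated with a small computation: under a 2PL model (assumed here, as in the earlier sketch), the expected proportion correct is a sum of logistic curves in the ability theta, so equal increments in ability do not translate into equal increments in score. The item parameters below are invented for illustration.

```python
import math

def expected_score(theta, items):
    # Expected proportion correct under the 2PL model: a sum of
    # logistic curves, hence non-linear in theta.
    return sum(1.0 / (1.0 + math.exp(-a * (theta - b)))
               for a, b in items) / len(items)

items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.5)]  # invented (a, b) pairs
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(expected_score(theta, items), 2))
```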
In the third evaluation, performance tests showed that the system's execution time grew linearly with the number of concepts tested. A comparison with conventional IRT assessments showed that the conventional approach was significantly faster, although the adaptive system offers advantages the conventional approach lacks. Potential enhancements include multi-threading and more efficient algorithms for the IRT calculations.
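One way the parallelisation enhancement could look is sketched below. Because per-concept estimation is CPU-bound, the sketch uses a process pool rather than threads (which CPython's interpreter lock would serialise); estimate_ability refers to the single-concept routine sketched in the Methodology section, and all names are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def _estimate_one(args):
    # Thin wrapper so the pool can map over (responses, items) pairs;
    # estimate_ability is the single-concept routine sketched earlier.
    responses, items = args
    return estimate_ability(responses, items)

def estimate_all(concept_data, workers=4):
    # Estimate abilities for many concepts in parallel, one task
    # per concept, collecting results in input order.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(_estimate_one, concept_data))
```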
Conclusions
Through the evaluations, the adaptive system demonstrated its ability to estimate learners' proficiency levels and pinpoint their areas of weakness during testing. While it showed potential to offer deeper insights than traditional test scoring, the longer execution times and some variability in accuracy indicate the need for further algorithmic improvement. Combining the system with the AQG and the NLGA provides comprehensive support for dynamic, personalised education and guidance, enabling a more effective and personalised educational experience.
Future Work
This initial version of the adaptive system presents opportunities for several enhancements, including improved performance, expanded functionality, more efficient algorithms for estimating learner abilities, and additional features.