Operationalizing a weighted performance scoring model for sustainable e-learning in medical education: insights from expert judgement

dc.contributor.author: Oluwadele, Deborah
dc.contributor.author: Singh, Yashik
dc.contributor.author: Adeliyi, Timothy
dc.contributor.email: deborah.oluwadele@up.ac.za
dc.date.accessioned: 2025-01-22T07:56:02Z
dc.date.available: 2025-01-22T07:56:02Z
dc.date.issued: 2024-07
dc.description.abstract: Any newly developed model or framework requires validation through repeated real-life application. The investment made in e-learning in medical education is substantial, as is the expectation of a positive return on that investment. The medical education domain requires data-informed implementation of e-learning, as the debate continues about the fitness of e-learning for medical education. The domain seldom employs frameworks or models to evaluate students' performance in e-learning contexts; when one is used, the Kirkpatrick evaluation model is a common choice. That model has faced significant criticism for failing to incorporate constructs that assess technology and its influence on learning. This paper assesses the efficiency of a model developed to determine the effectiveness of e-learning in medical education, specifically targeting student performance. The model was validated through the Delphi-based Expert Judgement Technique (EJT); Cronbach's alpha was used to determine the reliability of the proposed model, and Simple Correspondence Analysis (SCA) was used to measure whether stability had been reached among experts. Fourteen experts (professors, senior lecturers, and researchers with an average of 12 years of experience in designing and evaluating students' performance in e-learning in medical education) evaluated the model through two rounds of questionnaires developed to operationalize its constructs. In the first round, the model obtained 64% agreement from all experts; 100% agreement was achieved after the second round, with all statements averaging 52% strong agreement and 48% agreement across the 14 experts. The evaluation dimension had the strongest agreement, followed by the design dimension. The results suggest that the model is valid and may be applied as key performance metrics when designing and evaluating e-learning courses in medical education.
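For readers unfamiliar with the reliability and agreement measures named in the abstract, the following is a minimal illustrative sketch (not taken from the article) of how Cronbach's alpha and a simple percentage-agreement figure could be computed from an experts-by-items rating matrix; the matrix used here is randomly generated placeholder data, not the study's responses.

```python
# Minimal sketch (assumption: 5-point Likert ratings, experts in rows, items in columns).
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance of total scores)."""
    k = ratings.shape[1]                          # number of questionnaire items
    item_vars = ratings.var(axis=0, ddof=1)       # sample variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # sample variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def percent_agreement(ratings: np.ndarray, threshold: int = 4) -> float:
    """Share of ratings at or above the 'agree' point (4 on a 5-point scale)."""
    return (ratings >= threshold).mean() * 100

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(3, 6, size=(14, 10))      # 14 experts, 10 items, scores 3-5 (placeholder)
    print(f"Cronbach's alpha: {cronbach_alpha(demo):.2f}")
    print(f"Agreement: {percent_agreement(demo):.1f}%")
```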
dc.description.department: Informatics
dc.description.sdg: SDG-03: Good health and well-being
dc.description.sdg: SDG-04: Quality Education
dc.description.uri: https://academic-publishing.org/index.php/ejel
dc.identifier.citation: Oluwadele, D., Singh, Y. and Adeliyi, T. 2024. "Operationalizing a Weighted Performance Scoring Model for Sustainable e-Learning in Medical Education: Insights from Expert Judgement", Electronic Journal of e-Learning, 22(8), pp. 24-40. https://doi.org/10.34190/ejel.22.8.3427
dc.identifier.issn: 1479-4403 (online)
dc.identifier.other: 10.34190/ejel.22.8.3427
dc.identifier.uri: http://hdl.handle.net/2263/100235
dc.language.iso: en
dc.publisher: Academic Publishing International Limited
dc.rights: © 2024 Deborah Oluwadele, Yashik Singh, Timothy Adeliyi. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: E-learning evaluation model
dc.subject: Medical education
dc.subject: Content validation
dc.subject: Performance optimization
dc.subject: Expert judgment technique
dc.subject: SDG-03: Good health and well-being
dc.subject: SDG-04: Quality education
dc.title: Operationalizing a weighted performance scoring model for sustainable e-learning in medical education: insights from expert judgement
dc.type: Article

Files

Original bundle

Name: Oluwadele_Operationalizing_2024.pdf
Size: 1.09 MB
Format: Adobe Portable Document Format
Description: Article
