The mEducation Alliance Evidence Showcase
The 2016 mEducation Alliance Annual Symposium ended with a call for more rigorous evidence in the ICT4E sector. As a step toward answering this call, the Alliance will compile and showcase rigorous, innovative, and relevant evaluations for practitioners, policy makers, and researchers to use. The studies showcased on this page were sent to the mEducation Alliance directly, or were found in the course of members’ ICT4E work. If you would like to suggest an evaluation or resource to include on the Showcase page, please contact us by clicking here.
Drawing on interviews, experience, and a review of recent literature, this report looks to the future of ICT in education with a focus on primary schooling. The authors draw broad conclusions about what the future of ICT4E will hold, but note that in the near term there should be better use of existing technologies and a stronger emphasis on online protection for children. This is an excellent document for understanding where ICT4E currently is and where it may be going.
This evaluation studies the effect of computer-assisted learning (CAL) and computer-assisted instruction (CAI) on English scores in China. Overall, CAL had a stronger effect on scores than CAI, but results varied based on who implemented the program. Government implementation was less effective than implementation managed by the researchers, which raises important questions about scale-up, generalizability, and program design. This is an excellent study for gaining a better understanding of how technology can be integrated into teaching and learning.
This paper provides an excellent summary of the challenges in evaluating education technology interventions, discusses the impact of a platform-neutral education technology program, and includes a discussion of the cost constraints in educating students in low- and middle-income countries. The authors conduct a randomized evaluation of Mindspark, a technology-led instructional program in India, with a focus on secondary school students. The paper reports five main findings, including that students who participated in the Mindspark program more than doubled their test scores in Hindi and math.
Print or digital? This RCT evaluation of a program in high-poverty Honduran schools asks the question that has vexed education researchers and newspaper publishers alike. The researchers find that low-income students in Honduras are similar to people everywhere: easily distracted when in front of a computer. Replacing print textbooks with digital lessons did not improve scores, but it was slightly more cost-effective. Given that technology may provide resources at a lower cost but does not automatically raise scores, this paper is excellent for framing a discussion of how best to use and deliver technology in the classroom.
“The impact of education programmes on learning and school participation in low- and middle-income countries” by Birte Snilstveit, Jennifer Stevenson, Radhika Menon, Daniel Phillips, Emma Gallagher, Maisie Geleen, Hannah Jobse, Tanja Schmidt, and Emmanuel Jimenez.
This systematic review covers evidence from 216 education programs, based on impact evaluations as well as mixed-methods research. The authors cover what works in education programming, what doesn’t work, what is promising, and what remains unknown. On page 31, the authors address 16 programs involving computer-assisted learning. The results for computer- and technology-assisted learning are mixed, with some improved scores in math and some improvement in student attendance. The authors find that in some cases ICT4E interventions actually decrease learning, particularly when technology is introduced to replace a traditional educator. They also find that education technology is often not properly integrated into schools, and that teachers are often not adequately trained in ICT4E use. It is worth noting that many of the studies included in the meta-analysis focus on providing technology inputs, such as One Laptop per Child, and that a majority of the standardized results are not statistically significant.