Impact Study Progress


For the past two months, we have been working closely with a group of 32 students from Chandaria Primary, located in Baba Dogo, an industrial area north of the city. This group of students is known as our “Impact Study Group,” and Chris, our Operations Director, has been busy leading them with the MPrep study tool. Our goal with this group is to increase their achievement scores over a period of time. At the beginning of the term (mid-January), students were given baseline assessments to see where they started. The term ends next week, and we are excited to see how their end-of-term results compare to their baselines.

Because MPrep aggregates data from formative assessments (quizzes), we have been able to record data from this group all along. While analyzing our data yesterday, I noticed several interesting trends in usage and achievement data. Take a look at this data of student usage from week to week:

While I’m analyzing more specifics of the data and thinking of ways we want to train teachers on it, I’ve noticed a couple of interesting things. In the beginning, students took a bit of time to learn how to use MPrep. Their usage went up in the second week as they became more familiar with it. Over the next couple of weeks, usage started to plateau. This is when the ‘honeymoon’ period of the product ended and the novelty effect wore off. (Sound a lot like your classroom, teachers? You bet.)
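For the data-curious, the usage chart above is essentially a weekly roll-up of quiz attempts. Here’s a minimal sketch in Python of what that kind of aggregation looks like; the field names, dates, and scores are purely illustrative, not MPrep’s actual data or pipeline:

```python
# A minimal sketch (not MPrep's actual pipeline) of rolling up
# formative-assessment records into weekly usage and average scores.
# All records below are hypothetical.
from collections import defaultdict
from datetime import date

# Each record: (student_id, quiz_date, score_percent) -- illustrative values
attempts = [
    ("s01", date(2012, 3, 12), 64),
    ("s01", date(2012, 3, 20), 58),
    ("s02", date(2012, 3, 21), 71),
    ("s02", date(2012, 3, 27), 55),
]

weekly_usage = defaultdict(int)    # ISO week -> number of quiz attempts
weekly_scores = defaultdict(list)  # ISO week -> list of scores

for student_id, quiz_date, score in attempts:
    week = quiz_date.isocalendar()[1]  # ISO week number
    weekly_usage[week] += 1
    weekly_scores[week].append(score)

for week in sorted(weekly_usage):
    avg = sum(weekly_scores[week]) / len(weekly_scores[week])
    print(f"Week {week}: {weekly_usage[week]} quizzes, average score {avg:.1f}%")
```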

During the week of 20 March, we set numerical goals for our students and introduced incentives. That’s when usage shot up. We simulated what we would use in our market: incentives in the form of airtime to reward students for usage and scores. While we saw this drop off a little bit in the week of 27 March, we’ve started getting a better idea of what will continually motivate students to use MPrep. So far, airtime and praise have reigned supreme, two things that can also be mimicked in our larger market.
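To make the incentive idea concrete, here’s a hedged sketch of the kind of rule we mean: a student who hits a weekly usage goal and a score target earns a small airtime top-up. The thresholds and reward amount below are illustrative, not the actual values we used:

```python
# A hypothetical incentive rule: airtime is earned only when both a usage goal
# and a score target are met for the week. Numbers are illustrative.

def airtime_reward(quizzes_this_week: int, avg_score: float,
                   usage_goal: int = 10, score_target: float = 60.0) -> int:
    """Return airtime (in KSh) earned for the week, or 0 if the goals were missed."""
    if quizzes_this_week >= usage_goal and avg_score >= score_target:
        return 20  # illustrative reward amount
    return 0

print(airtime_reward(quizzes_this_week=12, avg_score=68.0))  # -> 20
print(airtime_reward(quizzes_this_week=5, avg_score=75.0))   # -> 0
```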

Now, here is our data on achievement scores so far:

This data is a little more complicated to analyze, but I’ll take it from a teacher’s perspective and point out some of the nuances in what we’re seeing. At first glance, one will notice that the scores decreased after the students’ initial introduction to MPrep. This is due to a few things: 1) Students started taking more quizzes. They began to take more risks with the product once they figured out how it worked. I noticed this from personal observations as well; 2) The quizzes get harder as you proceed through the syllabus. The easiest quizzes are actually in the beginning; 3) The novelty effect started to wear off. Students did not feel the need to ‘try their best’ with a system they had gotten used to.

And then, ta da! We saw a huge spike when we introduced incentives into the system. No surprise there. We offered two types of incentives to students: 1) Airtime and 2) Praise. We gave real-time praise to students as they were finishing study sessions, and you could tell by the smiles on their faces that they were proud. We’re hoping this week we will continue to see their achievement scores rise.

Now, I know a lot of RCT (randomized controlled trial) experts and research pros are saying to themselves right now, “Well, what’s the impact of the technology tool then? How is this any better than pen and paper if incentives are causing the kids to achieve?” We don’t want to prove that the technology itself is creating an impact with the students. As a teacher for the past 7 years, I have been a skeptic of any tool that claims it can significantly increase student achievement without the need for teachers. In edtech, that’s not our goal.

What we do want to prove is that MPrep, in parallel with incentives that can be mimicked in the market, works. It creates an impact: it gets students to study more and helps them improve their scores. So far, it looks like we’re going in that direction.

I was worried a few weeks ago as I watched students start to ‘plateau’ with MPrep, but as a teacher and educator, I couldn’t just sit by and worry about how ‘scientifically correct’ our study was. These are kids. And we need to do whatever it takes to help them learn.

What I did make sure of, however, is that we simulated what we can do in our real market. We don’t want our impact study to be an anomaly; we want to be able to replicate what works and scale it. What we’ve learned is how important it will be for teachers to be deeply involved in quick data analysis and praise. We’ll keep you updated on our findings over the next few weeks and how we’re adjusting course. I love data and what we can do with it.

— Toni Maraviglia, CEO
