Sina haraka

Things didn’t always go quite as planned. While I was in Tanzania, the team struggled to find an NGO that we could partner with to implement Phase 2 of our research. Things took longer than expected, but in the end, most of our problems worked themselves out. Each member of the team took responsibility for different aspects of the research, and I was asked to lead the effort to follow up with some of the women from Phase 1 of the experiment.

Last year, 57 women took part in a randomized controlled trial in which some received phones and others did not. A baseline survey was conducted at the start of the summer, and the team ran a follow-up survey after a two-month treatment period. To be fair, all the women received phones at the end. Our task was to see what the effects were one year out.

To assess the impact of the phones, we collected qualitative and quantitative data. The purpose of this research is mainly to provide analysis for the NGOs we worked with in the form of an impact assessment, not to produce scientific findings. (Since all the women received phones, there’s no longer a control group we can use for comparison.)

It took quite a bit of time for us to finalize a short phone survey and have it translated into Swahili. We didn’t want to ask highly sensitive questions (e.g. about women’s domestic situations) over the phone, so we focused our questionnaire on phone status, phone access, business operations, and overall wellbeing.

We had a list of the women from last summer’s experiment with the phone numbers we gave them and contact information for friends and family. We had a phone to make the calls. And we had Juliana, our excellent enumerator, to place the phone calls. It should take a couple of days to make the calls, right?


As it turned out, many of the women no longer had the SIM cards we gave them. Many had switched to a different mobile carrier, which meant they also switched phone lines. Several women reported that the phones we gave them had been lost, broken, or stolen, but most (about 7 in 10) still had them one year later. All of this made it very difficult to reach the people we wanted. After two days, we had completed only 17 interviews.

In survey studies, it is very important to keep people from “dropping out” of the experiment. If many people from particular test groups drop out, the data can be compromised. In our case, this Phase 1 follow-up wasn’t a scientific study, per se, but it was still important to get a high response rate so that the data we collected accurately reflected the results of the group. Obviously it was bound to be harder to contact people who no longer had their phones, which meant that our data would inevitably over-represent those who still had their phones to some degree.

We did our very best to contact everyone, calling most of our available contacts four or five times when necessary. Data quality (in this case directly tied to quantity) was more important than speed. We were ultimately able to collect 34 surveys, a response rate of 60%. Persistence paid off. Sina haraka (“no hurry”) was a good maxim to live by.
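As a quick arithmetic check on the numbers above (the counts of 34 completed surveys out of 57 participants come from the text; the function name is our own):

```python
def response_rate(completed, sample_size):
    """Return the survey response rate as a percentage."""
    return 100 * completed / sample_size

# 34 completed surveys out of 57 participants
rate = response_rate(34, 57)
print(f"{rate:.1f}%")  # 59.6%, consistent with the 60% quoted above
```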

Qualitative interviews are up next.
