A P-Value Party for the Mentoring Study: The Final Results!

You’ve been with me since this project began at the start of the summer, and now here’s what we’ve all been waiting for! Let’s dig into some results, shall we?

For starters, I want to discuss a few caveats to the data. Because of the nature of the study’s design, it is impossible to draw cause-and-effect conclusions. The results simply imply that there is a relationship between one variable of interest and another. I’ll let you know when a relationship is statistically strong enough to call significant; in my study, that means it has a p-value less than .05.

Furthermore, we did not have a control group against which to compare the outcomes of mentored kids. That limits the results because we cannot determine which changes are due to mentoring and which are due to regular childhood development, classroom learning, etc. The comparison I’m relying on is the change in scores from baseline (prior to the start of the mentoring program) to the end.

One more thing to mention: the program lasted for an academic year, with chapters of the program beginning around late September into October and operating through April of the next year. This intervention is relatively short. For those kids who stuck with their original mentors for the entire program, matches lasted about 26 weeks on average. For children who changed mentors during the program, the original matches lasted about 15 weeks. Program effects could be larger if the program lasted longer. In fact, research has shown that time has a compounding effect on program benefits: the longer kids remain in a mentoring program, the better off they are on a variety of outcomes (DeWit, 2016; Grossman & Rhodes, 2002).

With those items of business out of the way, we can look at the specifics. The preliminary analyses focused on finding out whether there were any significant changes in variables of interest from start to end using dependent-samples t-tests. I categorized those variables into three themes: academic performance and aspirations, emotional well-being, and interpersonal relationships. All of the data on those variables were captured using inventories from prior research, some with adaptations for the specific population of study (i.e., adjusting questions for young children). The sample comprised 215 little buddies. They ranged in grade level from third to sixth grade, with a mean age of 9.63 years. Little buddies were included if they met the following criteria: they were paired at the start of the program with a randomly assigned mentor, they handed in a follow-up survey, and their parents gave consent for us to use their data.
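For readers curious what a dependent-samples (paired) t-test actually computes, here’s a minimal sketch using made-up pre/post scores — these numbers are purely illustrative, not data from the study, and a real analysis would use a statistics package. The test statistic is the mean of the paired differences divided by the standard error of those differences:

```python
import math

def paired_t(pre, post):
    """Dependent-samples t statistic: mean of paired differences
    divided by the standard error of the differences."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)
    return mean_d / se

# Hypothetical pre- and post-program scores for eight children
pre  = [2.1, 2.5, 3.0, 2.8, 2.2, 2.9, 3.1, 2.4]
post = [2.6, 2.7, 3.2, 3.1, 2.3, 3.3, 3.4, 2.6]

t = paired_t(pre, post)
# With df = n - 1 = 7, the two-tailed critical value at alpha = .05
# is about 2.365 (from a t-table); |t| beyond that means p < .05.
print(round(t, 2))
```

With the study’s sample of 215 children (df = 214), the two-tailed critical value is closer to 1.97, so the same logic applies with a lower bar for the statistic.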

Looking at the area of academics, significant positive results emerged for two variables: grades and desire to graduate from college. Little buddies had significantly higher grades at program end than at the beginning. By the end of the study, they also reported that they would be more disappointed if they did not graduate from college than they had at the start. This makes sense because the mentoring program itself heavily emphasizes college aspirations and tries to familiarize children with college by pairing them with college students as mentors and holding mentoring sessions on their mentor’s campus. However, an unexpected result was a significant decline in children’s attitudes about school: they indicated less excitement for school at the end of the program. This could be explained by the anticipation of summer break that tends to build in children as the year comes to an end. Again, if these results could be compared against a control group of kids without mentoring, we could see whether this is a typical trend in attitude for the general student population.

Emotional well-being had several nonsignificant items: scores on anxiety and on two emotional regulation tactics (cognitive reappraisal and emotional suppression) did not change from start to finish. However, depression, measured by the frequency with which children reported experiencing several symptoms, increased significantly by program end. This trend could work similarly to the school attitude outcome explained above; it might be a natural consequence of a school year coming to a close rather than of the mentoring.

Lastly, no measures from the interpersonal relationships (social) theme reached significance. Measures of children’s attachment to their parents and of peer acceptance were predicted to increase by the end, but no such pattern was found. Mentoring theory suggests that having a mentor model positive interpersonal communication and secure relationships could transfer over to a child’s relationships with their friends and parents (Rhodes et al., 2006). A complete set of results can be found in the table below.

[Table: complete set of results for all outcome measures]

Although the preliminary analyses showed positive changes in only two outcome measures, I’m going to undertake a round of secondary analyses on the data. Those will look specifically at the impact of match changes on a child’s outcomes. While organizing data this summer, I noticed that little buddies ended up changing their matches pretty frequently throughout the year. Because mentors are college students, their schedules are constantly in flux: new jobs, new classes, and new activities could prevent college students from staying in the mentoring program from fall into spring. Prior research has shown that short match duration (less than three months) is actually associated with declines in several outcomes (Grossman & Rhodes, 2002). Mentoring theorists explain the possible detrimental effects of shorter matches and early program departure using ideas about attachment. If little buddies have experienced inconsistencies in other relationships in their lives and their mentor abruptly leaves early, those insecurities and sensitivities could resurface, causing program involvement to have negative impacts on those children (Grossman & Rhodes, 2012).

I plan on including those analyses in my poster for the summer showcase! I’m going to work on interpreting those calculations once classes begin and have those results ready for you all soon. This is my last blog post, so if you have any questions about the project, feel free to get in touch with me!

Thanks so much for following my progress this summer! I’m incredibly thankful to the Charles Center for making this possible, the staff at CMFK, my advisor, Elizabeth Raposa, the other research assistants who helped in the lab this summer, and the mentors, parents, and children who offered their time and effort to participate in this study. I can’t wait to see what kind of insights this sample will produce in the future.

Phoebe Flint

 
