Achievement Mentoring Program

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.*


In praise of… praise

In reading about the design and delivery of the Achievement Mentoring Program (AMP), one is struck by the emphasis on praise as a defining feature of mentor-youth interactions. In some ways, that’s not surprising, given that the program builds on social learning theory and the idea that personal, behavioral, and environmental factors all interact and reinforce each other in determining how we learn and how well we engage with the act of learning. Thus, praising a youth’s positive behavior should also influence his or her concept of self and positive feelings about school, ideally creating a positive feedback loop that nurtures longer-term academic success. And while most mentors certainly praise their mentees when they do something positive or achieve a goal, the structure that AMP puts around doing this is notable.

Mentors are tasked with meeting with the student’s teachers before every weekly meeting and identifying positive things from the previous week to praise the student for. Meetings with mentees begin with this praise before moving on to challenges, which gets each session off to a positive start and demonstrates to the student that the “environment” of the school recognizes their positive behaviors. The praise is then passed along to other teachers and school staff, who can repeat it, as well as to parents (the home “environment”), so they can further reinforce the positive behavior and encourage more of it. This explicit effort to recognize and reinforce all the things students are doing right is notable, and something mentoring programs could benefit from building into mentors’ work more systematically. Gathering the needed information and then passing it on to other staff and parents is likely a lot of extra work for mentors. But that reinforcement of what’s going right is likely important, if not essential, for getting youth to buy into the support given and feel like they are being seen honestly and accurately by the adults supporting them (at least in contrast to conversations that might overemphasize the negative). It’s likely a lot easier for a high school freshman to hear a message about improving study habits if they’ve previously been praised for working hard in other aspects of their schoolwork. AMP should be commended for emphasizing this subtle, yet likely important, wrinkle.

Making short-term mentoring a longer-term intervention

One of the other interesting aspects of AMP is that the model is able to bring about significant changes from a rather punctuated, short-term burst of mentoring. One of the evaluations of AMP describes it as an 18-month mentoring program, and while that’s technically true, most of that mentoring happens during the second semester of students’ first year in the program. That semester features weekly meetings with mentors, but only for about 15-20 minutes at a time. In the second evaluation of AMP (Clarke, 2009), mentored students met with their mentors during that core semester an average of only about five times, for just over 20 minutes per meeting. That’s not a lot of interpersonal time; there are plenty of models for struggling students in which a youth would exceed that volume of mentoring in a matter of weeks.

After this core semester of mentoring, the mentor continues to meet once a month with the student in the following academic year (once again, for about 20 minutes on average). So while the mentoring relationship lasts a long time, it doesn’t have nearly as much mentoring “dosage” as one might expect given the results obtained.

But the secret to making this work, in using these brief interactions to bring about longer-term academic improvement, may lie in how that time was spent. As noted above, the emphasis is not only on praising specific positive behaviors or achievements, but also on directly addressing struggles and identifying and implementing solutions. Mentors are tasked with spending considerable time during these sessions building skills such as good study habits, organizational and time management skills, interpersonal communication and emotional regulation, test-taking skills, and so forth. By combining praise with new skills the mentees can use to address immediate concerns, the program is well positioned both to reinforce positive efforts and to reduce future academic or behavioral struggles. The praise shows youth that they have agency over their success or failure (and that adults care), while the problem-solving gives practical help with current struggles (and builds long-term competencies). And the ongoing check-ins the next year can simply reinforce and strengthen the progress made.

These considerations illustrate that the caring and love a mentor provides may only go so far. There may be times when a mentor will have to make a mentee confront a struggle and then do some teaching and skill building so that the youth can overcome their challenges. AMP offers an excellent illustration of how both of those things can be provided in a short, punctuated period of time.

How much can we count on mentors to mentor the way we want them to mentor?

One interesting bit of information in the second evaluation of AMP is the emphasis that was placed on fidelity to the model of mentor-mentee interactions: Did they meet as often as intended, and for how long? Did they follow the protocol of praise, review of academic progress, discussion of problems and solutions, and so forth? This is a highly structured program and the theory of change of the program is premised on some fairly set activities and conversations.

The mentors participating in evaluations of the program to date have all been teachers and other school staff who could reasonably be expected to understand the importance of following the program guidelines as intended and who all can be expected to have had significant prior experience interacting with students and discussing schoolwork. So these were not random volunteers who were just getting used to delivering information to, or talking with, students. They were also compensated for their mentoring time: $90 for completing the training, $500 for full program participation. The developers of the program also spent considerable time checking in with the mentors, both to provide support and to track the fidelity to the idealized implementation of the program.

What they found was that these mentors did not always follow the script. 

According to Clarke:

… mentors demonstrated less compliance on items such as “asked about mentee’s circumstances or perceptions around problem or goal (45%),” “checked how previous plans worked (48%),” and “made plans with mentee to implement a solution (57%).” The mentors had greater compliance for items such as “identified a problem or goal (81%),” “mentor verbalized next step (80%),” and “talk to or left message for mentee (77%).”

The overall fidelity of implementation of the model was 72.5%. This illustrates a few things:

  1. Even programs using paid professionals in a school setting with a clear-cut protocol of interactions and meeting frequency can still have trouble getting mentors to implement the plan as intended. This is not a criticism of AMP so much as a reminder to all mentoring programs that the point of service delivery to youth is not at the program staff level but at the level of the mentor. Even the best-planned interventions depend on mentors adhering to the planned activities (or, when adaptations are made, on those adaptations aligning well with the broader principles and goals of the program). And that adherence is not always as reliable as we’d like to think. Kudos to AMP for getting positive impacts in spite of any gaps in implementation, but we should note that when a program is less effective than expected, it may often be an implementation issue rather than an issue with the program’s theory of change or general premise.

  2. All mentoring programs should be making some effort to track fidelity of implementation (as well as adaptations made “on the ground” by mentors and staff, some of which may be excellent fodder for program improvements). The most obvious ways of doing this are the ones most programs already use: Is the match meeting as often, and for as long, as it should? But programs should consider digging deeper to see whether mentors are saying the right things, having the right conversations, and helping youth grow, change, or transform in the ways needed to align with the theory, goals, and vision of the program. The details of mentoring undoubtedly matter, and programs can forget that as they train mentors and send them on their way, hoping that they do the right things. Programs may want to borrow an idea from AMP: be a bit more directive about what good implementation looks like, then meticulously track that information to see whether mentoring is simply being offered, or offered with fidelity. That distinction may make all the difference.

References

Clarke, Lolalyn. “Effects of a School-Based Adult Mentoring Intervention on Low Income, Urban High School Freshmen Judged to be at Risk for Dropout: A Replication and Extension.” PhD diss., Rutgers, The State University of New Jersey, 2009.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.
