SOURCE Program

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the CrimeSolutions.gov website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has not shown definitive evidence of effectiveness)…

1. A good theory of change can be undone by one limitation.

In reading the evaluation of SOURCE by Bos and colleagues, one is struck by the purposefulness in the design of the program. The authors do a nice job of noting the barriers that keep lower-income high school students from applying to and attending higher education, taking care to distinguish between perceived barriers (families overestimating tuition costs) and actual barriers (lack of information about the application and financial aid processes), as well as prior attempts to address both in other programs. This grounds the theory of change of the SOURCE program in some concrete approaches (providing a wealth of pertinent information to youth and families, topped with mentoring support to complete each step of the process) that have proven to work in previous efforts and that should get at the actual barriers limiting low-income enrollment in higher education.

But as the developers of the program note, those prior successful efforts were fairly resource intensive. The SOURCE program was intended “to investigate whether a less intensive and less expensive intervention that begins late in students’ high school careers could achieve impacts similar to those of longer term, more comprehensive, and significantly more expensive college access interventions, especially if targeted to high school students whose academic records suggested that they were academically on track to meet the college admissions requirements…” So on paper, this program was targeting the right youth with the right service. But the decision to try to offer this support using fewer resources would come at a cost.

The SOURCE program spent approximately $1,000 per student served, an amount that certainly adds up when applied to large groups of students, but that might not “buy” as much mentoring support as the intervention needed to be successful. The program relied on college students in a paid mentoring role to work with the students and families. But the average mentor worked with 15 students over the course of the year. Given the busy schedules of college students, that seems like a large number of students and families to keep track of and connect with. The evaluation notes that connecting with students and families was a major challenge for the mentors and that there was limited in-person time as a result. Students received only an average of 11 hours of mentoring interaction over the course of the year, with much of that happening via email, text messages, phone calls, and other remote interactions. All students got at least one in-person meeting, but for many, that was it. Ultimately, the program did not achieve many strong outcomes related to application and enrollment, and the lack of intensity of the mentoring may be one of the main culprits behind these diminished returns. While mentoring programs should always seek to deliver their services with efficiency and cost-effectiveness in mind, this may be an instance where the emphasis on “cost” wound up sacrificing the “effectiveness.”

2. Be careful not to let evaluation design backfire on a good effort.

A few things really stand out when looking at the outcomes of the SOURCE evaluation that may have made the program look less effective than it actually was. First is the sample of students who participated in the evaluation. It turns out that the vast majority of the young people selected to participate in the program were already planning on attending some form of higher education prior to the intervention. The program sought out participants who were already on track to graduate with grades that would qualify them for college if they applied. Most of these students and families had thus already been preparing for academic pursuits after high school.

In fact, a whopping 93.7% of the control group wound up applying for college anyway, and many of them availed themselves of other support services designed to help with the college exploration and application process. This left the evaluators in a very difficult spot when it came time to look for differences between the treatment and control groups. Those control families found other ways to overcome the barriers that the program was hoping to address, which arguably made it almost impossible for the evaluators to detect the program’s impact.

In some ways, this is a good development, as it lets the school district know that other students who are less likely to apply for college might benefit more from this type of service and that resources should be allocated accordingly. But it also serves as a reminder when designing an evaluation (or an intervention) that services should be directed to those who stand to benefit the most from what’s offered, those who have the most potential to “move the needle,” so to speak. The developers of SOURCE may not have known it before the study, but they certainly know now.

The other challenge presented in this evaluation design, and one that appears to have been primarily responsible for the “No effects” designation on CrimeSolutions.gov, is that the evaluation looked at persistence through college as one of the primary indicators of program success. While all college access programs want students not only to initially enroll as freshmen but also to complete college (otherwise, what’s the point?), we know that the supports needed for lower-income, first-generation, and minority college students to persist once they get on campus are critically important. Such supports, however, are far beyond those that SOURCE was intended to provide. The evaluation design was essentially holding the program responsible for outcomes well beyond what would be reasonable to anticipate given the scope of the services provided.

Here is how the main goals of SOURCE are described in the evaluation by Bos: “The intent of the program was to help students understand their college options, the actual costs and benefits of attending and completing college, and the requirements of college admission and financial aid. The program also actively helped participants manage and complete specific activities and milestones associated with the process of applying for college and financial aid.”

Well, the program did achieve statistically significant (albeit small) positive outcomes for some of the steps and milestones noted above: taking the SAT, submitting FAFSA paperwork, and applying to colleges in the California university system. As the lowest-cost option, that last result seems like a good outcome for these low-income families, even if the program didn’t impact applications to other, more expensive institutions. So the program did provide families with information about the college application process and did help students take some of those most important steps in a meaningful way. But things unravel a bit the further out from those initial outcomes one goes.

SOURCE did not explicitly help these students stick with it once they arrived on campus. In the end, SOURCE mentees were no more likely to complete a 2- or 4-year college than the control group. But one shouldn’t be surprised that an intervention designed to get young people onto campuses didn’t do a great job of influencing their success once they got there. Those supports and services were beyond the scope of SOURCE and primarily the responsibility of the higher education institutions. Perhaps one key change to SOURCE that might result from these findings is providing students and families with information about the types of support services they can engage with once they get to campus. But in some ways, this evaluation design reached pretty far into the future. One might say that there is no harm in looking far out for program impacts, but programs then run the risk of looking like they failed when perhaps they did not. This is one of the many ways in which the nuances of evaluation design can have a big impact on how we perceive programs.

3. It’s always worth the time to look at subgroup differences!

The one area where SOURCE had the most impact was in the outcomes for the types of students who were perhaps most important to the designers of the program: Spanish-speaking students and students whose parents had not attended college. These two groups accounted for almost all of the positive impact of the program. This might suggest to SOURCE that targeting these types of students in the future could be the best allocation of resources.

Most program evaluations emphasize looking at the outcomes for ALL of the youth in a program. After all, the whole point of a program is to equally serve and support every youth and family that walks through the door. But time and time again in mentoring program evaluations we see that programs are more effective, sometimes dramatically so, for subgroups of mentees that, for whatever reason, are well-positioned to take advantage of or be responsive to the service being offered. In this case, the two groups that have historically struggled to apply for college compared to their peers benefitted the most from the mentoring and achieved statistically significant gains in enrollment, both initially after the program and at the long-term follow-up! These students got to college and persisted in college. And while the overall rating may say “No effects,” for these students and families there is certainly evidence to suggest that the effects of SOURCE were important and meaningful.

This serves as a reminder to programs that, when evaluating their efforts, they should always examine differences among subgroups of participants. You might well find that the program works best for some youth, uncovering critical information for improving the intervention over time or, at the very least, putting you in a better position to target services to the youth who are going to get the most out of them.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site. 
