Measurement Guidance Toolkit


Overview

The National Mentoring Resource Center’s Measurement Guidance Toolkit provides recommended instruments for measuring key youth outcomes in mentoring programs as well as several risk and protective factors that may be relevant to program outcomes.

The instruments recommended here are grouped into different domains in which mentoring has well-established potential for impact. All recommended instruments have been carefully reviewed and selected by the Research Board of the National Mentoring Resource Center. Please use the links to the left to navigate the domains and recommended measures for outcomes within each domain.

Learn more about the development and contents of the Toolkit and how this resource can help your program’s evaluation efforts.

 

Using These Instruments in Your Program

While this Toolkit can help your mentoring program measure many outcomes more effectively, the recommended instruments will be most useful when applied within a well-designed evaluation plan. Keep in mind, furthermore, that the instruments are likely to lose their value if they are changed (beyond the modifications discussed in the Toolkit for selected measures), administered incorrectly, or used in contexts where the desired outcomes are unlikely to emerge due to poor program implementation or an insufficient theory of change driving program activities. Please see the Key Evaluation Considerations and Advice for Designing and Administering Evaluation Tools pages for more information about how to maximize the value of these instruments within a strong overall evaluation.

 

About This Toolkit

The Measurement Guidance Toolkit was developed by the Office of Juvenile Justice and Delinquency Prevention’s National Mentoring Resource Center (NMRC), through the work of the NMRC Research Board. A full list of contributors to the Toolkit appears below.

 

The Need

Youth mentoring programs are increasingly looking to make informed decisions about strengthening their programs and to demonstrate their impact convincingly to stakeholders. These needs place a premium on practical guidance for evaluating the impact that mentoring relationships are having on youth within each program's unique context. Most programs recognize that valid and reliable measurement tools are an essential component of any high-quality evaluation.

But program staff are often challenged to find accessible surveys, scales, and other data collection instruments they can use in their work. It can be daunting to wade through the many instruments available for measuring even a single outcome and select the one that is “best” (i.e., provides a brief, yet accurate assessment of the outcome).

The goal of this Toolkit is to provide well-vetted recommendations for instruments that are suitable for use by mentoring programs and their evaluators. This should allow programs to capture their impacts more accurately, setting the stage both for improvements in program quality over time and, for programs that show promise of effectiveness, for a stronger and more targeted case for support. A further goal of the Toolkit is to create greater consistency in how youth outcomes are measured across programs, so that more meaningful comparisons can be made among them. For example, does one program really improve school connectedness to a greater extent than others, or is the difference due simply to how this outcome is being measured? Likewise, increased uniformity in how outcomes are assessed will allow researchers to aggregate evidence on impact across programs using common outcome metrics, offering an unprecedented opportunity to track and document trends in effectiveness for the youth mentoring field as a whole.

 

Toolkit Structure and Instrument Profiles

The Toolkit is built around six domains of youth outcomes that the NMRC Research Board identified as the most common areas in which mentoring programs could expect to have an impact: mental and emotional health, social emotional skills, healthy and prosocial behavior, problem behavior, interpersonal relationships, and academics. The Toolkit also includes recommendations for assessing different types of risk and protective factors, which can be used to capture the challenges and needs that youth bring to mentoring programs, providing valuable context for understanding other outcomes. In 2018, we added measures of relationship quality and characteristics, as assessing these factors can help ensure that youth are receiving appropriate support and that their mentoring relationships are progressing as intended and are likely to result in positive outcomes in relevant domains. Additional domains, and outcomes within domains, will be added to the Toolkit over time.

Within each domain, the Research Board identified several specific outcomes of likely interest to mentoring programs. For example, Aggression, Delinquent Behavior, and School Misbehavior are addressed as outcomes within the domain of Problem Behavior. For each outcome, the Research Board selected a recommended measure. A brief profile of each measure explains what it is designed to measure and why it is recommended for use by mentoring programs, offers practical guidance for using the tool in real-life contexts, and notes how to access the measure. Alternative measures are also described that may be more suitable for a given mentoring program depending on its goals, the age range of the youth it serves, or other considerations.

For the Mentoring Relationship Quality and Characteristics measures, the Research Board engaged in a similar process of identifying measures that had been used previously in mentoring evaluations and had validity evidence in that context. The Board conducted a thorough review of 17 multi-faceted measures (several of which were represented by both youth- and mentor-report versions), 13 unidimensional measures, and 9 more tailored measures that had been used in youth mentoring studies, reviewing information on each measure's characteristics, strength (e.g., how well the scale(s) holds together, measures what it intends to measure, and relates to other youth outcomes), and usage. The search yielded mostly measures that capture internal match quality, particularly the relational aspect of this component. Most reviewed measures were also from the youth's perspective rather than the mentor's. Thus, as part of the selection process, the Board also considered which aspects of relationship quality were covered and strived to include at least a handful of mentor-reported measures. You can learn more about the selection criteria for these measures in the introduction to that section of the Toolkit.

You can use the navigation to the left to toggle through the various domains and measures within each.

The Toolkit also includes a section that features tips for administering instruments as well as advice for incorporating the recommended measures effectively into broader evaluation designs.

 

Toolkit Contributors

Research Board Members:

  • Ed Bowers, Ph.D. – Clemson University
  • David DuBois, Ph.D. – University of Illinois at Chicago (Chair)
  • Chris Elledge, Ph.D. – University of Tennessee at Knoxville
  • Stephanie Hawkins, Ph.D. – RTI International
  • Carla Herrera, Ph.D. – Independent Researcher (Associate Chair)
  • Enrique Neblett, Ph.D. – University of North Carolina at Chapel Hill
  • Naida Silverthorn, Ph.D. – University of Illinois at Chicago (Senior Research Specialist)
  • Fasika Alem – University of Illinois at Chicago (Post-Doctoral Research Associate)

 

Additional Toolkit Content and Technical Support By:

  • Michael Garringer – MENTOR: The National Mentoring Partnership
  • Mandy Howard – FirstPic, Inc.

 

Measurement Domains

The Measurement Guidance Toolkit is built around several domains, each representing an area in which mentoring programs have demonstrated potential to benefit youth (or in the case of risk and protective factors, increase understanding of youth challenges and assets that may have important implications for effectiveness). Users can find specific recommended scales for measuring outcomes (or risk/protective factors) in each domain. These domains currently are:

  • Mentoring Relationship Quality and Characteristics
  • Measures of Program Quality
  • Mental and Emotional Health
  • Social Emotional Skills
  • Healthy and Prosocial Behavior
  • Problem Behavior
  • Interpersonal Relationships
  • Academics
  • Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship
  • Risk and Protective Factors

 

Mentoring Relationship Quality and Characteristics

The relationships that develop between youth and their mentors are thought to be the central route through which mentoring can benefit (or, inadvertently, harm) youth (Rhodes, 2005; Karcher & Nakkula, 2010a). Thus, it is important to be able to assess the salient characteristics and processes that indicate the quality of youth’s mentoring relationships.

Mentoring programs generally understand the importance of relationship quality and its potential role in fostering program benefits for youth. In fact, many programs have made relationship quality a central component of their internal evaluation activities. But programs often struggle to determine which components of relationship quality are most essential to measure and often use “homegrown” tools or limited measures of relationship satisfaction, rather than digging deeper into what happens within the relationship and how it is experienced across many dimensions or from multiple viewpoints. And when programs do try to use research-backed measures, they face a dizzying array of options and can find it difficult to gauge the measures’ relative merits and potential fit with their programs.

This section of the Measurement Guidance Toolkit is intended to help programs with this process of selecting reliable and valid tools for assessing the quality of the mentoring relationships that they are cultivating through their efforts.

A Framework for Understanding Relationship Dimensions

To guide our selection of measures to include in the toolkit, we followed a framework developed by Nakkula and Harris (2014). This framework highlights the following aspects of relationship quality: internal match quality (consisting of relational and instrumental components), match structure, and external match quality.

Internal Match Quality encompasses how the mentor and youth feel about their relationship and each other as well as more objective indicators of quality:

  • Relational aspects of this dimension reflect how the youth and mentor feel about each other and the way they relate to each other, including their perceptions of compatibility as well as feelings of mutual closeness, trust, and overall satisfaction with the relationship. Objective indicators in this category include the frequency and duration of meetings and the longevity of the mentoring relationship;
  • Instrumental aspects of internal match quality reflect the degree of growth orientation in the relationship. Specific indicators include the extent to which the mentor and youth focus on achieving goals together and the youth’s satisfaction with the support received. More objective indicators include the types and frequency of support received.

Match Structure includes what the mentee/mentor want to do together, how they decide what to do, and objective measures of the types of activities in which they ultimately engage.

Finally, External Match Quality includes elements outside of the mentoring relationship that can affect its development, such as perceived program support and the degree of parent engagement in the match. We have not yet focused on this last component, limiting our review to the measurement of relationship components that occur within the relationship. Exploring measures that assess these external influences on mentoring relationships may be a priority in future additions to the Toolkit.

We selected the Nakkula and Harris (2014) framework from several strong and influential theoretical frameworks in the field (see Karcher & Nakkula, 2010a), mainly because it is exceptionally comprehensive in the facets of relationship quality it includes and thus supports our goal of considering measures of a broad range of aspects of mentoring relationships. The framework also reflects, or accounts for, elements emphasized in other important frameworks in the field.

For example, early seminal work by Morrow and Styles (1995) emphasized the importance of mentor approach, with findings suggesting that a developmental approach (focusing on youth’s voice in the relationship) is more conducive to relationship success than a prescriptive approach (letting the adult’s goals for the youth guide the relationship). Hamilton and Hamilton’s (1992) conceptualization of mentoring focuses on the instrumental roles that mentors take on when helping youth achieve different goals (see also Hamilton, Hamilton, DuBois, & Sellers, 2016). The TEAM framework (described in Karcher & Nakkula, 2010b) emphasizes the focus, purpose, and authorship of the mentoring relationship and how these factors can interact in shaping its tenor. Keller and Pryce’s (2010) framework conceptualizes all relationships in terms of power (i.e., whether the relationship is vertical, as in a parent-child relationship, or horizontal, as in friendships) and permanence (the degree to which the relationship is obligated or voluntary). Mentoring relationships are a unique combination of these elements (i.e., both unequal in power and voluntary), and a mentor’s ability to maintain this hybrid role in her or his approach is posited to be key to relationship success (Keller & Pryce, 2012).

All of these conceptually rich frameworks provide guidance on which aspects of the relationship may be important to measure and are highly recommended for review as programs consider which aspects of relationship quality may be particularly telling for their specific program.

Selected Measures

A total of ten measures of mentoring relationship quality are included in this section of the Toolkit (an overview of how the measures were selected is provided above). The recommended instruments are organized into three groups.

The first group, multi-dimensional measures, consists of four measures. These measures are designed to provide insight into multiple aspects of relationship quality or, more specifically, at least two of the aspects of relationship quality outlined in the Nakkula and Harris framework (2014). These measures consist of multiple scales and thus are relatively lengthy but compensate for that length in richness and scope. The Mentoring Processes Scale, in particular, captures two of the quality dimensions suggested by the framework, but focuses on the active expression of these qualities in activities and behaviors that mentors and youth engage in (Tolan et al., 2020). Three of the multi-dimensional measures include both a mentor and a youth version.

The second category consists of unidimensional measures. These instruments assess a single dimension of relationship quality, in most cases using only one scale. The two selected measures may be particularly attractive options for programs that can ask their participants only a limited number of questions. Nakkula and Harris (2014) propose that if only one aspect of relationship quality can be assessed, the relational aspect (as described above) is most central. In fact, almost all of the instruments we reviewed focused, at least in part, on relational aspects of relationship quality—and both of the unidimensional measures cover this feature of relationships—suggesting general agreement in the field that this is one of the more telling aspects of relationship quality.

The final category consists of measures of specific facets of relationships. These measures are oriented toward assessing aspects of mentoring relationships that, although likely to be relevant to many programs, are not routinely captured by more general-purpose measures of mentoring relationship quality. Illustratively, recent survey data indicate that as many as 1 in 3 youth served by mentoring programs meet with their mentor in a group context (Garringer, McQuillin, & McDaniel, 2017). Yet, none of the recommended measures in the other two categories described above are geared toward capturing the nature and quality of the various types of interactions and group dynamics that may take place within these types of programs. Thus, one of our recommended measures in this category covers this important area.

The four facets of mentoring relationships assessed by the measures in this last category are:

  1. Youth-centeredness: Youth’s “voice” in the relationship–that is, the extent to which youth feel that the activities and direction of the relationship reflect their own interests and needs. Morrow and Styles (1995), in their qualitative study of Big Brothers Big Sisters community-based mentoring relationships, provided important early evidence of the importance of youth voice in contributing to successful relationships. More recent work (Herrera, DuBois, & Grossman, 2013) has linked youth reports of youth-centeredness to youth reports of a stronger growth/goal focus in the relationship (see “Growth focus” below) and to program supports (i.e., mentors who are trained and better supported have mentees who report higher levels of youth-centeredness in their relationship).
  2. Mentor cultural sensitivity: The mentor’s attention to supporting her or his mentee’s cultural identity. Studies suggest that improving mentors’ attention to this important component of youth identity can foster higher-quality relationships (Sánchez, Pryce, Silverthorn, Deane, & DuBois, 2018; Spencer, 2007). In fact, mentors’ reports of multicultural competence have been found to be correlated with their reported levels of satisfaction with their relationships with both the mentee and the mentoring organization, as well as with the quality of their relationship with the mentee’s family (Suffrin, Todd, & Sánchez, 2016). Reports by youth of color of receiving mentor support in this area were also found to be positively associated with the youth’s own reports of satisfaction with relational and instrumental aspects of the mentoring relationship (Sánchez et al., in press).
  3. Growth focus: The extent to which the relationship includes a focus on growth or goal achievement (Karcher & Nakkula, 2010b). Youth reports of growth focus in their mentoring relationship have been linked positively with the mentor’s receipt of training (both early on in the match and ongoing training) and receipt of higher quality support from program staff (Herrera et al., 2013). In the same study, youth who rated their mentors as higher in youth centeredness also tended to report a stronger growth/goal focus in their relationships.
  4. Group mentoring processes: The various interactional processes that occur when mentors meet with youth in a group context (Kuperminc & Thomason, 2014). Relatively few studies have been conducted that explore this important area. One recent study, however, reported that youth reports of their experiences within their mentoring group predicted several key youth outcomes including self-efficacy, school belonging, and school participation (Kuperminc, Sanford, & Chan, 2017).

We hope this section of the toolkit helps programs strategize more thoughtfully about the aspects of mentoring relationship quality that help both the relationships and youth they support to thrive and how they might go about measuring these qualities at various points in the mentoring relationship. Please also remember that any mentoring program can get free technical assistance to help think through how best to assess mentoring relationship quality in their program by requesting assistance through this website.

  • Cited Literature

     

    1. Bayer, A., Grossman, J. B., & DuBois, D. L. (2015). Using volunteer mentors to improve the academic outcomes of underserved students: The role of relationships. Journal of Community Psychology, 43(4), 408-429. doi:10.1002/jcop.21693
    2. DuBois, D. L., & Neville, H. A. (1997). Youth mentoring: Investigation of relationship characteristics and perceived benefits. Journal of Community Psychology, 25, 227-234. doi:10.1002/(SICI)1520-6629(199705)25:3<227::AID-JCOP1>3.0.CO;2-T
    3. Garringer, M., McQuillin, S., & McDaniel, H. (2017). Examining youth mentoring services across America: Findings from the 2016 National Mentoring Program Survey. Boston, MA: MENTOR: The National Mentoring Partnership.
    4. Hamilton, S. F., & Hamilton, M. A. (1992). Mentoring programs: Promise and paradox. Phi Delta Kappan, 73(7), 546.
    5. Hamilton, M. A., Hamilton, S. F., DuBois, D. L., & Sellers, D. E. (2016). Functional roles of important nonfamily adults for youth. Journal of Community Psychology, 44(6), 799-806.
    6. Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. New York, NY: A Public/Private Ventures project distributed by MDRC.
    7. Karcher, M. J., & Nakkula, M. J. (2010a). New Directions for Youth Development, 2010.
    8. Karcher, M. J., & Nakkula, M. J. (2010b). Youth mentoring with a balanced focus, shared purpose, and collaborative interactions. New Directions for Youth Development, 2010(126), 13-32.
    9. Keller, T. E., & Pryce, J. M. (2010). Mutual but unequal: Mentoring as a hybrid of familiar relationship roles. New Directions for Youth Development, 2010(126), 33-50.
    10. Keller, T. E., & Pryce, J. M. (2012). Different roles and different results: How activity orientations correspond to relationship quality and student outcomes in school-based mentoring. The Journal of Primary Prevention, 33(1), 47-64.
    11. Kuperminc, G., Sanford, V., & Chan, W. Y. (2017, February). Building effective group mentoring programs: Lessons from research and practice on Project Arrive. Workshop presented at the National Mentoring Summit, Washington, DC.
    12. Kuperminc, G. P., & Thomason, J. D. (2014). Group mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (2nd ed., pp. 273-289). Thousand Oaks, CA: Sage Publications.
    13. Morrow, K. V., & Styles, M. B. (1995). Building relationships with youth in program settings: A study of Big Brothers/Big Sisters. Philadelphia, PA: Public/Private Ventures.
    14. Nakkula, M., & Harris, J. (2014). Assessing mentoring relationships. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (2nd ed., pp. 45-62). Thousand Oaks, CA: Sage Publications.
    15. Parra, G. R., DuBois, D. L., Neville, H. A., Pugh-Lilly, A. O., & Povinelli, N. (2002). Mentoring relationships for youth: Investigation of a process-oriented model. Journal of Community Psychology, 30, 367-388. doi:10.1002/jcop.10016
    16. Rhodes, J. E. (2005). A model of youth mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 30-43). Thousand Oaks, CA: Sage Publications.
    17. Rhodes, J. E., Schwartz, S. E., Willis, M. M., & Wu, M. B. (2017). Validating a mentoring relationship quality scale: Does match strength predict match length? Youth & Society, 49(4), 415-437.
    18. Sánchez, B., Pryce, J., Silverthorn, N., Deane, K., & DuBois, D. L. (in press). Do mentor support for racial/ethnic identity and cultural mistrust matter for girls of color? A preliminary investigation. Cultural Diversity and Ethnic Minority Psychology.
    19. Spencer, R. (2007). “It’s not what I expected”: A qualitative study of youth mentoring relationship failures. Journal of Adolescent Research, 22(4), 331-354.
    20. Suffrin, R. L., Todd, N. R., & Sánchez, B. (2016). An ecological perspective of mentor satisfaction with their youth mentoring relationships. Journal of Community Psychology, 44(5), 553-568.
    21. Thomson, N., & Zand, D. (2010). Mentees’ perceptions of their interpersonal relationships: The role of the mentor-youth bond. Youth & Society, 41, 434-447.
    22. Tolan, P. H., McDaniel, H. L., Richardson, M., Arkin, N., Augenstern, J., & DuBois, D. L. (2020). Improving understanding of how mentoring works: Measuring multiple intervention processes. Journal of Community Psychology. http://dx.doi.org/10.1002/jcop.22408
    23. Zand, D. H., Thomson, N., Cerventes, R., Espiritu, R., Klagholz, D., LaBlanc, L., & Taylor, A. (2009). The mentor–youth alliance: The role of mentoring relationships in promoting youth competence. Journal of Adolescence, 1-17. doi:10.1016/j.adolescence.2007.12.006

Multi-dimensional

Youth Mentoring Survey (YMS) and Match Characteristics Questionnaire (MCQ)

The 51-item YMS for 4th to 12th graders measures the mentee’s perspective on several aspects of relationship quality and match structure (i.e., the focus of match activities).

What It Measures:

Quality of the mentoring relationship and dynamics thought to influence quality (e.g., structure and external support), as rated by the youth or mentor.

  • 25 items assess internal relationship quality using 3 subscales: Relational Quality (e.g., feeling happy with the relationship), Instrumental Quality (instrumental benefits from the relationship), and Prescription (the extent to which mentees feel that their mentors focus too much on changing them). Items are rated on a 4-point scale: Not at all true, A little true, Pretty true, or Very true.
  • 22 items assess match structure using 3 subscales: Fun Focus, Sharing Focus, and Growth Focus. Items are rated on a 5-point scale: Never, Less than half the time, Half the time, More than half the time, or Every time.
  • 4 items ask about the location and frequency of mentor-youth meetings.

The versions for K to 1st graders and 2nd to 3rd graders include 11 and 27 items, respectively, and measure similar aspects of mentoring dynamics.

The 71-item MCQ measures the mentor’s positive and negative perceptions of the relationship, their perceived competence, their prioritization of different match activities, and the effects of external influences on the match.

  • The first section’s 22 items assess internal mentoring relationship quality using 5 subscales: closeness, not distant, academic support seeking, non-academic support seeking, and satisfaction. These subscales, along with 2 subscales from the third section (general and risk-related compatibility), can be combined to create an overall score or three broadscale scores: closeness (closeness, not distant, and satisfaction), compatibility (general and risk-related compatibility), and availability to support (academic and non-academic support seeking).
  • The second section’s 20 items assess match structure or purpose (how much mentors value different types of activities) using 5 subscales: fun, sharing/relating, character development, outlook, and academic growth. These subscales can be combined to create 2 broadscale scores: relating (sharing/relating and fun) and growth (character development, outlook, and academic growth).
  • The third section’s 29 items assess internal quality (general and risk-related compatibility), competence (a 5-item scale), and external match quality, which includes 4 subscales: programmatic support, parental support, peer support, and interference.
  • Finally, the measure includes a question asking mentors to list and rank their top 3 priorities for the match.

All scale items are rated on a 6-point scale that varies across these sets of items.
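The rolling up of subscales into broadscale scores can be sketched in a few lines of code. The subscale keys and the use of a simple average are assumptions for illustration only; the developer's scoring materials define the actual aggregation procedure.

```python
# Illustrative sketch of combining MCQ subscale scores into the three
# broadscale scores described above. Averaging subscale scores is an
# assumption for illustration; consult the developer's scoring materials
# for the actual procedure.

BROADSCALES = {
    "closeness": ["closeness", "not_distant", "satisfaction"],
    "compatibility": ["general_compatibility", "risk_related_compatibility"],
    "availability_to_support": [
        "academic_support_seeking",
        "non_academic_support_seeking",
    ],
}

def broadscale_scores(subscale_scores):
    """Average each broadscale's constituent subscale scores (items rated 1-6)."""
    return {
        name: sum(subscale_scores[s] for s in parts) / len(parts)
        for name, parts in BROADSCALES.items()
    }
```

Under these assumptions, a mentor with subscale scores of 5.0 (closeness), 4.0 (not distant), and 6.0 (satisfaction) would receive a closeness broadscale score of 5.0.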

Intended Age Range

Youth from 4th to 12th grade; mentors of all ages. Versions are also available for students in grades K to 1 and 2 to 3.

Rationale

These scales were selected due to their broad, unparalleled coverage of multiple aspects of mentoring relationship quality, evidence of reliability and validity, scales for use by both mentors and youth, and appropriateness for use with youth from a wide age range.

Cautions

None.

Special Administration Information

An administration guide for the YMS can be found here.

How to Score

Scoring of each scale requires simple calculations using the responses on each youth survey. Full scoring details are provided by the developer upon request for use of the scales.

How to Interpret Findings

Higher scores on the YMS and MCQ internal relationship quality and match structure subscales (and on the MCQ external match quality subscale) reflect more positive perceptions of the mentoring relationship.

Access and Permissions

The YMS and MCQ scales are available for non-commercial use with no charge and can be requested here.

Alternatives

The Natural Mentoring Experiences (NME) Survey is a brief (8-item) alternative for assessing mentor perceptions of relationship quality. The scale is strongly correlated with the Internal Relationship Quality subscale of the MCQ and with mentor intentions to continue mentoring in the program. Information about the measure can be found here.

  • Cited Literature
    1. Applied Research Consulting (n.d.). Retrieved from http://mentoringevaluation.com/Tools.htm.
    2. Harris, J. T., & Nakkula, M. J. (2018). Match Characteristics Questionnaire (MCQ) (Unpublished measure). Applied Research Consulting, Fairfax, VA.
    3. Harris, J. T., & Nakkula, M. J. (2018). Youth Mentoring Survey (YMS for 4th Graders and Up) (Unpublished measure). Applied Research Consulting, Fairfax, VA.

Multi-dimensional

Social Support and Rejection Scale

The scale consists of 22 items assessing four dimensions of social support and social rejection that youth may experience in relationships with important non-parental adults.

These dimensions are: feels valued (6 items, e.g., “This person cares about me even when I make mistakes.”); trust (5 items, e.g., “I talk to this person about problems with my friends.”); mentoring (6 items, e.g., “I learn how to do things by watching and listening to this person.”); and negativity (6 items, e.g., “I feel that this person will let me down.”). Response options are Never, Rarely, Sometimes, Often, or Always.

What It Measures:

A youth’s reported positive and negative interactions with significant non-parental adults.

Intended Age Range

10- to 18-year-olds.

Rationale

This measure was chosen based on its comprehensive assessment of mentoring relationship quality, evidence of reliability and validity, and support for use with diverse populations of youth and types of relationships.

Cautions

None.

Special Administration Information

References to “this person” can be replaced by a specific individual (e.g., “my mentor”).

How to Score

Each item is scored from 1 (Never) to 5 (Always). Each subscale score is the average of the items that make up the subscale.
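As a minimal sketch of this scoring rule: code each response 1 (Never) through 5 (Always), then average the items within each subscale. The item-to-subscale groupings below are illustrative placeholders; the actual mapping is provided with the scale itself.

```python
# Sketch of subscale scoring: each response is coded 1 (Never) to 5 (Always),
# and each subscale score is the mean of its items. The item groupings below
# are placeholders -- the real item-to-subscale mapping comes with the scale.

RESPONSE_CODES = {"Never": 1, "Rarely": 2, "Sometimes": 3, "Often": 4, "Always": 5}

# Hypothetical mapping of subscale -> item numbers (1-based).
SUBSCALE_ITEMS = {
    "feels_valued": [1, 2, 3, 4, 5, 6],
    "trust": [7, 8, 9, 10, 11],
    "mentoring": [12, 13, 14, 15, 16, 17],
    "negativity": [18, 19, 20, 21, 22, 23],
}

def score_subscales(responses):
    """responses: dict mapping item number -> response label."""
    scores = {}
    for subscale, items in SUBSCALE_ITEMS.items():
        values = [RESPONSE_CODES[responses[i]] for i in items]
        scores[subscale] = sum(values) / len(values)
    return scores
```

For example, a youth answering “Often” to every item in a subscale would receive a subscale score of 4.0.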

How to Interpret Findings

Higher scores on the 3 positive scales reflect higher levels of support within the relationship. Higher scores on the negativity scale reflect higher levels of stress and negativity within the relationship.

Access and Permissions

The scale is available for non-commercial use with no charge and is made available here.

Alternatives

None recommended.

  • Cited Literature
    1. Roffman, J. G., Pagano, M. E., & Hirsch, B. J. (2000). Social support and rejection scale. Evanston, IL: Human Development and Social Policy, Northwestern University.

Multi-dimensional

Network of Relationships Inventory-Social Provisions Version (NRI-SPV)

The NRI-SPV is a 30-item scale that includes ten 3-item subscales: Companionship, Conflict, Instrumental Aid, Antagonism, Intimate Disclosure, Nurturance, Affection, Reassurance of Worth, Relative Power, and Reliable Alliance.

Two global dimensions of relationship quality, Support and Negative Interactions, also can be computed. The items for the youth and mentor versions of the scale are identical except for slight differences in the items in the Instrumental Aid subscale (e.g., “How much do you teach this person how to do things that they don’t know?” versus “How much does this person teach you how to do things that you don’t know?”). The scale was developed to assess the quality of a wide range of relationships. However, it can also be administered focusing only on the relationship(s) of interest. The current 30-item NRI-SPV is a slightly revised version of the original NRI-SPV (Furman & Buhrmester, 1985). See the NRI manual for a description of this revision. Response options are: Little or none, Somewhat, Very much, Extremely much, or The most.

What It Measures:

10 characteristics of personal relationships (e.g., the mentoring relationship) as rated by the youth or mentor.

Intended Age Range

8- to 18-year-old mentees; Mentors of all ages.

Rationale

The NRI-SPV was selected because it has a strong theoretical framework, evidence of reliability and validity, and a mentor and youth version. It also assesses a variety of relationship dimensions using only a few items per dimension.

Cautions

The application of the NRI-SPV to the assessment of mentor-mentee relationship characteristics is limited to only a few studies, and these studies have used the global dimensions of relationship quality or formed a support construct by averaging the Support subscale of the NRI-SPV with other measures of relationship satisfaction. Thus, although the validity of the NRI subscales is well established, the validity of individual subscales in the context of mentoring relationships is less clear.

Special Administration Information

A description of the scale’s items and how they map on to subscales of the NRI-SPV can be found on page 3 of the NRI manual. The manual describes how to administer the scale to ask about multiple relationships in the youth’s life. Programs interested in only assessing the mentoring relationship may simply use the following introduction prior to administering the scale items: “These questions ask about your relationship with your mentor. Please think about your mentor when answering them.” A mentor-report version replaces the term “mentor” in the introduction with “mentee.”

How to Score

Each item is scored on a 5-point scale from 1 (Little or none) to 5 (The most). Subscale scores are calculated by averaging the 3 items corresponding to each of the 10 dimensions. Support is calculated by averaging items from the Companionship, Instrumental Aid, Intimate Disclosure, Nurturance, Affection, Reassurance of Worth, and Reliable Alliance subscales. Negative Interactions is calculated by averaging items from the Conflict and Antagonism subscales.
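
These scoring rules amount to simple item averages. A minimal sketch in Python, using hypothetical responses; the actual item-to-subscale mapping is given in the NRI manual:

```python
from statistics import mean

# Hypothetical responses keyed by subscale (3 items each, scored 1-5).
responses = {
    "companionship": [4, 5, 4],
    "instrumental_aid": [3, 4, 3],
    "intimate_disclosure": [2, 3, 3],
    "nurturance": [4, 4, 5],
    "affection": [5, 5, 4],
    "reassurance_of_worth": [4, 4, 4],
    "reliable_alliance": [5, 4, 5],
    "conflict": [1, 2, 1],
    "antagonism": [1, 1, 2],
}

# Each subscale score is the mean of its 3 items.
subscale_scores = {name: mean(items) for name, items in responses.items()}

# The global composites average the items of their constituent subscales.
support_subscales = [
    "companionship", "instrumental_aid", "intimate_disclosure",
    "nurturance", "affection", "reassurance_of_worth", "reliable_alliance",
]
negative_subscales = ["conflict", "antagonism"]

support = mean(x for s in support_subscales for x in responses[s])
negative_interactions = mean(x for s in negative_subscales for x in responses[s])
```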

How to Interpret Findings

Higher scores on NRI-SPV subscales reflect higher levels of the assessed relationship dimension.

Access and Permissions

The Network of Relationships Inventory-SPV is available for non-commercial use with no charge. You may request permission to use the measure here. A full copy of the manual and measure can be obtained here.

Alternatives

None recommended.

  • Cited Literature
    1. Furman, W. & Buhrmester, D. (1985). Children’s perceptions of the personal relationships in their social networks. Developmental Psychology, 21, 1016-1022.

Multi-dimensional

Mentoring Processes Scale (MPS)

Youth- and mentor-report versions of the MPS are available.

The 26-item youth-report measure assesses positive mentor-mentee engagement. The items are organized around the five mentoring processes described below; however, one study suggests the items may be best used as a single 26-item scale. Response options are: Not at all true, Almost always not true, Usually not true, Somewhat true, Usually true, Almost always true, or Very true.

The 39-item mentor-report measure assesses five distinct mentoring processes: (a) role modeling and identification (10 items); (b) advocacy (6 items); (c) relationship and emotional support (9 items); (d) teaching and information providing (8 items); and (e) shared activities (6 items). Mentors respond using the same response options as youth.

What It Measures:

A youth and/or mentor’s experience of five mentoring processes essential in promoting strong mentoring relationships and positive youth outcomes: (a) role modeling and identification; (b) advocacy; (c) relationship and emotional support; (d) teaching and information providing; and (e) shared activities.

Intended Age Range
Youth 10 to 21 years old; Mentors 18 years old or older who are mentoring a young person who is 10 to 21 years old.

Rationale
The MPS was selected because it is one of the first reliable and valid measures of relationship quality that assesses a broad range of relationship processes (e.g., activities and behaviors) involved in mentoring relationships that are theoretically linked with positive outcomes, from both the mentor's and the youth's perspectives.

Cautions
Evidence of reliability and validity is limited to one study. In addition, although the item wording is simple, the large number of items and 7-point response scale may be difficult for younger children or those with reading challenges. Studies also have not yet shown how well these measures perform across age, gender, racial/ethnic, or match-length differences.

Special Administration Information
None.

How to Score
The youth and mentor versions are scored from 1 (Not at all true) to 7 (Very true). Prior to scoring, negatively worded items are reverse scored. This includes two items in the youth scale (“My mentor does not stand up for me,” and “I don’t learn much from my mentor”) and three items in the mentor scale (“I do not stand up for my mentee,” “My mentee doesn’t seem to want to be like me,” and “My mentee doesn’t seem to learn much from me”).

Subscale scores are created by averaging ratings across all items within each subscale. Scores for each subscale should be calculated if at least 75% of items are completed. The overall mentoring processes score is computed as the average of items across all subscales.

A detailed scoring guide for the MPS can be found here.
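
The reverse-scoring and 75%-completion rules above can be sketched as follows; the item values and the subscale used are hypothetical:

```python
from statistics import mean

SCALE_MAX = 7  # items scored 1 (Not at all true) to 7 (Very true)

def reverse(score):
    """Reverse-score a negatively worded item on the 1-7 scale."""
    return SCALE_MAX + 1 - score

def subscale_score(items, min_complete=0.75):
    """Average completed items; return None unless at least 75% are answered.

    `items` is a list of item scores, with None for skipped items.
    """
    answered = [x for x in items if x is not None]
    if len(answered) < min_complete * len(items):
        return None
    return mean(answered)

# Hypothetical 6-item subscale with one reversed item and one skipped item.
raw = [6, 5, reverse(2), 7, None, 6]  # reverse(2) -> 6
score = subscale_score(raw)           # 5 of 6 answered (83%) -> 6.0
```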

How to Interpret Findings
Higher scores reflect more positive perceptions of the mentoring relationship.

Access and Permissions
Both the youth-report and the mentor-report measures are available for non-commercial use with no charge and are made available here. A formatted version can be found here.

Alternatives
None recommended.

  • Cited Literature

    1. Tolan, P. H., McDaniel, H. L., Richardson, M., Arkin, N., Augenstern, J., & DuBois, D. L. (2020). Improving understanding of how mentoring works: Measuring multiple intervention processes. Journal of Community Psychology, 48(6), 2086-2107. https://doi.org/10.1002/jcop.22408

Unidimensional

Youth Strength of Relationship (YSoR) and Mentor Strength of Relationship (MSoR)

The youth version of this scale consists of 10 items assessing both positive (6 items, e.g., “My Big has lots of good ideas about how to solve a problem”) and negative (4 items, e.g., “When I am with my Big, I feel ignored”) perceptions of the relationship with their mentor.

Youth respond on a 5-point scale: Never true, Hardly ever true, Sometimes true, Most of the time true, or Always true. The mentor version consists of 14 items assessing both positive and negative perceptions of the relationship using two subscales: Affective (12 items, e.g., “I enjoyed the experience of being a Big,” “Sometimes I feel frustrated with how few things have changed with my Little”) and Logistical (2 items, e.g., “It is hard for me to find the time to be with my Little”). Mentors respond on a 5-point scale: Strongly disagree, Disagree, Neutral, Agree, or Strongly agree.

What It Measures:

A youth or mentor’s perceptions of, and experiences in, the mentoring relationship.

Intended Age Range

5- to 21-year-old mentees; Mentors 17 and over (though the items also appear relevant for slightly younger mentors).

Rationale

The YSoR and MSoR scales were selected because of their brevity and the fact that they capture both negative and positive experiences within the mentoring relationship. Both mentor and youth versions also have demonstrated good reliability for the total scores and associations with match length in a sample of BBBS community-based matches.

Cautions

Although promising, evidence of reliability and validity is limited to one study.

Special Administration Information

When administering, references to “Big” can be substituted with “mentor,” and “Little” can be substituted with “mentee.”

How to Score

The mentor version is scored on a 5-point scale from 1 (Strongly disagree) to 5 (Strongly agree). The youth version is scored on a 5-point scale from 1 (Never true) to 5 (Always true). Prior to scoring, negatively worded items are reverse scored (items 3, 4, 6, & 8 on the YSoR and items 2, 5, 6, 8, 9, 10, 12, & 13 on the MSoR). The total score is the average of all 10 YSoR items or 14 MSoR items. For the YSoR, subscale scores are computed as the average for the Positive (items 1, 2, 5, 7, 9, & 10) and Negative subscales (items 3, 4, 6, & 8). For the MSoR, subscale scores are computed as the average for the Affective (items 1-4, 6-9, & 11-14) and Logistical (items 5 & 10) subscales.
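
As an illustration, the YSoR steps above (reverse-score items 3, 4, 6, and 8, then average) can be sketched in Python; the response values are hypothetical:

```python
from statistics import mean

SCALE_MAX = 5  # both versions use 5-point response scales

# Item numbers (1-based) taken from the scoring instructions above.
YSOR_REVERSED = {3, 4, 6, 8}
YSOR_POSITIVE = [1, 2, 5, 7, 9, 10]
YSOR_NEGATIVE = [3, 4, 6, 8]

def score_ysor(raw):
    """Score a 10-item YSoR response list (raw[0] is item 1)."""
    # Reverse-score negatively worded items before averaging.
    scored = {
        i: (SCALE_MAX + 1 - x) if i in YSOR_REVERSED else x
        for i, x in enumerate(raw, start=1)
    }
    return {
        "total": mean(scored.values()),
        "positive": mean(scored[i] for i in YSOR_POSITIVE),
        "negative": mean(scored[i] for i in YSOR_NEGATIVE),
    }

# Hypothetical responses: high agreement on positive items, low agreement
# on the negatively worded items (items 3, 4, 6, and 8).
result = score_ysor([5, 4, 1, 2, 5, 1, 4, 2, 5, 4])
```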

How to Interpret Findings

Higher scores reflect more positive perceptions of the mentoring relationship.

Access and Permissions

Both the youth-report and the mentor-report measures are available for non-commercial use with no charge and are made available here.

Alternatives

The Relationship Quality Scale (Rhodes et al., 2005) is an earlier version of the scale that was revised to become the YSoR (see link in Citations below).

  • Cited Literature
    1. Rhodes, J. E., Reddy, R., Roffman, J. G., & Grossman, J. B. (2005). Promoting successful youth mentoring relationships: A preliminary questionnaire. The Journal of Primary Prevention, 26, 147-167. https://doi.org/10.1007/s10935-005-1849-8
    2. Rhodes, J. E., Schwartz, S. E. O., Willis, M. M., & Wu, M. B. (2017). Validating a mentoring relationship quality scale: Does match strength predict match length? Youth & Society, 49, 415-437. https://doi.org/10.1177/0044118X14531604

Unidimensional

Mentor-Youth Alliance Scale (MYAS)

This 10-item scale includes items assessing both the youth’s own feelings about and experiences within the mentoring relationship as well as perceptions of the mentor’s feelings toward them.

Sample items include: “I feel my mentor cares about me, even when I do things s/he does not approve of” and “My relationship with my mentor is important to me.” Youth respond on a 4-point scale: Very false, False, True, or Very true.

What It Measures:

Youth’s feelings of compatibility with the mentor and satisfaction with different aspects of the mentoring relationship.

Intended Age Range

9- to 19-year-olds.

Rationale

This measure was selected based on its brevity and evidence of reliability and validity.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Very false) to 4 (Very true) and averaged to create one overall score.

How to Interpret Findings

Higher scores indicate more positive youth perceptions of relationship quality.

Access and Permissions

The measure is available for non-commercial use with no charge and is made available here.

Alternatives

None recommended.

  • Cited Literature
    1. Zand, D. H., Thomson, N., Cervantes, R., Espiritu, R., Klagholz, D., LaBlanc, L., & Taylor, A. (2009). The mentor-youth alliance: The role of mentoring relationships in promoting youth competence. Journal of Adolescence, 32, 1-17. https://doi.org/10.1016/j.adolescence.2007.12.006
    2. Thomson, N., & Zand, D. H. (2010). Mentees’ perceptions of their interpersonal relationships: The role of the mentor-youth bond. Youth & Society, 41, 434-445. https://doi.org/10.1177/0044118X09334806

Specific Facets of Relationship

Group Mentoring Climate

The scale consists of 11 items assessing three dimensions of group climate experienced in group mentoring programs.

These dimensions are: group cohesion (4 items, e.g., “Kids in this group care about each other.”), engagement (3 items, e.g., “Do you think the activities you do in your group are interesting?”), and mutual help (4 items, e.g., “How much did the group help you deal with everyday problems?”). Response options are: Not a lot, A little bit, Somewhat, or Very much.

What It Measures:

A youth’s report of group processes within a group mentoring program including group engagement, group cohesion, and mutual help.

Intended Age Range

14- to 18-year-olds.

Rationale

This measure was chosen based on its strong theoretical support and use with group mentoring programs, its comprehensive assessment of group mentoring processes, and its promising evidence of reliability and validity.

Cautions

Reliability and validity evidence for the measure is based on a sample of ninth-grade students who were at high risk for dropping out of school; the measure has not been assessed with other group mentoring programs or across more diverse samples.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not a lot) to 4 (Very much). Each subscale score is the average of the items that make up the subscale.

How to Interpret Findings

Higher scores on the subscales reflect more positive climate within the group mentoring program.

Access and Permissions

The scale is available for non-commercial use with no charge and is made available here.

Alternatives

Fostering program belonging is a key goal for many group mentoring programs. A global measure of Program Belonging has been used with out-of-school-time programs and mentoring programs. This scale may be particularly useful when assessing the group mentoring experiences of fairly young children. A brief five-item version of the measure is available here.

  • Cited Literature
    1. Kuperminc, G., Sanford, V., & Chan, W. Y. (2017, February). Building effective group mentoring programs: Lessons from research and practice on Project Arrive. Workshop presented at the National Mentoring Summit, Washington, DC.

Specific Facets of Relationship

Mentor Support for Racial/Ethnic Identity

This instrument consists of 6 items developed for a mentoring program for girls of color, although items also appear relevant for boys.

Sample items are: “My Big/mentor is respectful of my racial/ethnic background and culture,” “My Big/mentor makes me feel proud of my racial/ethnic background and culture,” and “My Big/mentor helps me learn new things about my racial/ethnic background and culture.” Youth respond on a 4-point scale: Not at all true, A little true, Pretty true, or Very true.

What It Measures:

Youth perceptions of the mentor’s support for their racial/ethnic background, culture and identity.

Intended Age Range

10- to 14-year-olds, but items also appear appropriate for older adolescents.

Rationale

Very few existing instruments focus on issues of culture and race within a mentoring context. This measure was chosen because of its theoretical grounding in positive youth development and critical race feminism for girls of color (i.e., how society organizes itself along intersections of race, gender, and class), its use with girls of color in a mentoring program, and promising evidence of reliability and validity.

Cautions

This instrument was developed as part of a study with predominantly African American and Latina girls and has not been widely used. Thus, evidence of reliability and validity is limited to this group; the measure has not been tested with boys of color or with youth of other racial/ethnic backgrounds.

Special Administration Information

When administering, references to “Big” can be substituted with “mentor.”

How to Score

Each item is scored from 1 (Not at all True) to 4 (Very True) with one reverse-scored item (i.e., “My Big/mentor seems uncomfortable talking to me about my racial/ethnic background and culture”). The total score is the average across items.

How to Interpret Findings

Higher scores indicate more mentor support for the mentee’s racial/ethnic identity.

Access and Permissions

The scale is available for non-commercial use with no charge and is made available here.

Alternatives

None recommended.

  • Cited Literature
    1. Sánchez, B., Pryce, J., Silverthorn, N., Deane, K., & DuBois, D. L. (in press). Do mentor support for ethnic/racial identity and mentee cultural mistrust matter for girls of color? A preliminary investigation. Cultural Diversity and Ethnic Minority Psychology.

Specific Facets of Relationship

Youth-Centered Relationship

This scale consists of 5 items assessing the extent to which the youth feels the mentor considers their preferences and interests when selecting activities.

Sample items include: “My mentor almost always asks me what I want to do” and “My mentor and I like to do a lot of the same things.” Youth respond on a 4-point scale: Not true at all, Not very true, Sort of true, or Very true.

What It Measures:

Youth’s perceptions of the extent to which the activities engaged in with the mentor are centered on the youth’s interests.

Intended Age Range

10- to 18-year-olds.

Rationale

This measure was chosen because of its brevity and evidence of validity and reliability across youth of differing gender, race/ethnicity and risk profiles.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not true at all) to 4 (Very true). The overall perception of youth-centeredness is created by averaging across all five items.

How to Interpret Findings

Higher scores on the scale reflect higher levels of youth-centeredness in the mentoring relationship.

Access and Permissions

The scale is available for non-commercial use with no charge and is made available here.

Alternatives

None recommended.

Specific Facets of Relationship

Youth-Report Measure of Growth/Goal Focus in Youth Mentoring Relationships

This 6-item measure assesses the degree to which the youth perceives that their mentor is working to help them achieve goals or personal growth as part of the mentoring relationship.

Sample items include: “My mentor helps me to set and reach goals” and “My mentor and I spend time working on how I can improve as a person.” Each item is rated on a 4-point scale: Very false, Mostly false, Mostly true, or Very true.

What It Measures:

Youth perceptions of a focus on personal growth and goal attainment in the mentoring relationship.

Intended Age Range

Youth aged 9 and older.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Very false) to 4 (Very true). The scale score is the average across all items.

How to Interpret Findings

Higher scores indicate a greater perception on the part of the youth that their mentoring relationship is focused on their goal attainment and personal growth.

Access and Permissions

The measure is available for non-commercial use with no charge and is made available here.

Alternatives

None recommended.

  • Cited Literature
    1. DuBois, D. L. (2008). Youth-report measure of growth/goal focus in youth mentoring relationships. Unpublished measure, University of Illinois at Chicago.

Measures of Program Quality

Mentoring programs include a wide range of interdependent components and activities. Prominent among these are mentor recruitment, screening, and training, mentor-mentee matching, and robust support and oversight throughout all stages of mentoring relationships from initiation to closure. The multi-faceted concept of program quality encompasses the extent to which appropriate practices are in place to support such areas of activity, whether they are being implemented as intended, and, importantly, the degree to which they are experienced as helpful by youth, parents, mentors, and others to whom they are directed. Assessments of program quality, therefore, must take into account not only multiple types of activities, but also varied vantage points from the relatively objective status of extant policies and standards to the more subjective experiences of those who are on the receiving end of a program’s practices. Doing so in a reliable and valid manner, understandably, can be a daunting challenge for many programs.

This section of the Measurement Guidance Toolkit provides relatively brief and easily accessible tools that programs can use to assess the quality of their practices and service delivery. Programs are likely to find it helpful to augment these tools with other available resources for assessing program quality. These include the National Quality Mentoring System (NQMS), which is available through Affiliates of MENTOR, and the Youth Program Quality Intervention (YPQI), a continuous quality improvement approach developed for out-of-school time (OST) programs. Mentoring programs interested in these systems can find more information at the above links.

A Framework for Understanding Program Quality

In considering measures to add to this section of the toolkit, we used the framework of MENTOR’s Elements of Effective Practice for Mentoring, Fourth Edition (Garringer et al., 2015). The original edition of the Elements focused on a set of research-informed guidelines to help develop high-quality mentoring programs. With successive editions, priority has been placed on integrating research evidence with practitioner experience in identifying standards reflecting high-quality mentoring programs. The current, fourth edition of the Elements identifies six core standards of practice: (1) Recruitment, (2) Screening, (3) Training, (4) Matching and Initiation, (5) Monitoring and Support, and (6) Closure. Each core standard includes specific benchmarks and enhancements. Benchmarks are practices that must be followed to meet the core standard and are determined by two criteria: (1) there is evidence that the practice is associated with effective mentoring relationships; and (2) the practice is designed to protect the safety of mentees. Enhancements are practices that are not required for programs to meet the core standards, but were determined by the authors of the Elements to be promising, innovative, and useful based on practitioner input and research evidence. Measures that reflected elements of these core standards were prioritized for inclusion in this section of the Measurement Guidance Toolkit.

The Elements of Effective Practice for Mentoring are informed by research that supports links between program practices and mentor commitment (Drew et al., 2020), mentoring relationship quality (Herrera et al., 2013; McMorris et al., 2018; McQuillin et al., 2015), match length (Herrera et al., 2013; Kupersmidt et al., 2017) and youth outcomes (DuBois et al., 2002; DuBois et al., 2011; Herrera et al., 2008; Jarjoura et al., 2018). Most of these studies have examined one or more program features in isolation. However, one recent study tested a more comprehensive measure which reflected all standards in the third edition of the Elements. The study included Big Brothers Big Sisters matches across the U.S. and found that the number of standards implemented (of the six possible standards), as reported by programs, predicted greater match length and having a long-term match of 24 months or more (Kupersmidt et al., 2017). A meta-analysis by DuBois and his colleagues (2002) similarly found that programs with greater numbers of practices suggested as important by theory and/or research yielded stronger youth outcomes.

Selected Measures

Six measures of mentoring program quality are included in this section of the Toolkit (a brief description of how measures are selected can be found in the Toolkit’s overview). The recommended instruments are organized by respondent (staff, mentor, youth, and caregiver).

As the individuals responsible for implementing a mentoring program’s policies and practices, staff are arguably in the best position to provide detailed information about them and their implementation. Program staff may also provide reports of their own perception of the agency’s culture and support of staff in providing services to youth and families. Staff reports of program practices have been associated with aspects of mentoring relationship quality, such as match length (Kupersmidt et al., 2017; Stelter et al., 2018). In addition, staff reports of specific program practices, such as ongoing training and regular support contacts, have been linked to lower rates of premature match closure (McQuillin & Lyons, 2021).

Of those to whom practices may be directed in mentoring programs (mentors, youth, and their caregivers), mentors are typically the recipients of the broadest variety of program practices and supports, as they are the central conduit through which the program serves youth. For these reasons, mentoring researchers have developed a variety of instruments to measure mentor experiences of program delivery (for example, their experiences of training and support procedures) as well as their perceptions of the program more broadly and the staff they have interacted with. Mentor reports of their experiences with program practices, such as matching, mentor training, and staff support contact time, have been linked with mentor commitment and satisfaction in the mentoring relationship and in their volunteer experience (Drew et al., 2020; Keller et al., 2020). Similarly, mentor perceptions of program support have been associated with more positive mentoring relationship experiences as reported by both mentors and youth (Sass & Karcher, 2013; Weiler et al., 2019). The toolkit includes reviews of two measures of mentor-reported program quality—one reflects mentor experiences across a number of practice areas, and the other provides a more focused assessment of various aspects of program support.

Youth reports of their programmatic experiences—apart from their experiences within the mentoring relationship itself—have been less common in mentoring studies than in evaluations of other types of youth-serving programs, such as site-based after-school programs in which youth typically have more frequent interactions with staff members and wider-ranging involvement in programming (e.g., curriculum-based instruction, group activities). These latter studies (e.g., Zeldin et al., 2014) suggest that youth perceptions of program qualities, such as whether they are given a voice in program decision-making and experiencing supportive relationships with program adults, are associated with youth agency and empowerment (i.e., confidence in their ability to effect change; Zeldin & Collura, 2010) and other positive outcomes such as academic attitudes and achievement (Seitz et al., 2021). In recognition of the site-based format of many mentoring programs, the toolkit includes a measure that draws from this broader out-of-school time program literature. A more preliminary measure also is included that can be used with community-based programs.

There is currently a lack of comprehensive measures that assess aspects of caregiver experiences of program practices. Thus, we know very little about how these experiences may be associated with program implementation, the development and progression of the mentoring relationship, and youth outcomes. Research, however, has yielded findings that are consistent with the importance of caregiver experiences within mentoring programs (Basualdo-Delmonico & Spencer, 2016). One study, for example, found that mentor-youth meetings were more frequent when the youth’s parent received more regular support contacts and check-ins from staff (Herrera et al., 2013). However, large-scale studies of caregivers’ experience of other program practices and how their overall program experience may affect their child’s mentoring relationship have not been conducted. Thus, while the measure reflecting caregiver experiences included in this section of the toolkit is preliminary and in need of further validation work, it was judged important to include as a tool for facilitating integration of a caregiver perspective in assessments of program quality.

We hope this section of the toolkit provides a rich set of options for tools that programs can use to assess program quality. Please remember that mentoring programs can request free technical assistance through the NMRC; MENTOR Affiliates also can access the NQMS, as noted previously.

  • Cited Literature

    Basualdo-Delmonico, A. M., & Spencer, R. (2016). A parent’s place: Parent’s, mentors’ and program staff members’ expectations for and experiences of parental involvement in community-based youth mentoring relationships. Children and Youth Services Review, 61, 6-14. https://doi.org/10.1016/j.childyouth.2015.11.021

     

    Drew, A. L., Keller, T. E., Spencer, R., & Herrera, C. (2020). Investigating mentor commitment in youth mentoring relationships: The role of perceived program practices. Journal of Community Psychology, 48(7), 2264-2276. https://doi.org/10.1002/jcop.22409

     

    Garringer, M., Kupersmidt, J., Rhodes, J., Stelter, R., & Tai, T. (2015). Elements of Effective Practice for Mentoring (4th ed.). MENTOR: The National Mentoring Partnership.

     

    Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. A Public/Private Ventures project distributed by MDRC. http://www.mdrc.org/sites/default/files/Role%20of%20Risk_Final-web%20PDF.pdf

     

    Herrera, C., Kauh, T. J., Cooney, S. M., Grossman, J. B., & McMaken, J. (2008). High school students as mentors. Public/Private Ventures. https://www.sophe.org/wp-content/uploads/2017/01/mentoring_1149.pdf

     

    Jarjoura, G. R., Tanyu, M., Forbush, J., Herrera, C., & Keller, T. E. (2018). Evaluation of the Mentoring Enhancement Demonstration Program: Technical report (Office of Justice Programs’ National Criminal Justice Reference Service Document Number 252167). https://www.ncjrs.gov/pdffiles1/ojjdp/grants/252167.pdf

     

    Keller, T. E., Drew, A.L., Clark-Shim, H., Spencer, R., & Herrera, C. (2020). It’s about time: Staff support contacts and mentor volunteer experiences. Journal of Youth Development, 15(4), 145-161. https://doi.org/10.5195/jyd.2020.879

     

    Kupersmidt, J., Stump, K. N., Stelter, R. L., & Rhodes, J. E. (2017). Mentoring program practices as predictors of match longevity. Journal of Community Psychology, 45(5), 630-645. https://doi.org/10.1002/jcop.21883

     

    McMorris, B. J., Doty, J. L., Weiler, L. M., Beckman, K. J., & Garcia-Huidobro, D. (2018). A typology of school-based mentoring relationship quality: Implications for recruiting and retaining volunteer mentors. Children and Youth Services Review, 90, 149–157. https://doi.org/10.1016/j.childyouth.2018.05.019

     

    McQuillin, S. D., & Lyons, M. D. (2021). A national study of mentoring program characteristics and premature match closure: The role of program training and ongoing support. Prevention Science, 22(3), 334-344. https://link.springer.com/article/10.1007/s11121-020-01200-9

     

    McQuillin, S. D., Straight, G. G., & Saeki, E. (2015). Program support and value of training in mentors’ satisfaction and anticipated continuation of school-based mentoring relationships. Mentoring & Tutoring: Partnership in Learning, 23(2), 133-148. https://doi.org/10.1080/13611267.2015.1047630

     

    Sass, D.A., & Karcher, M.J. (2013). Analyses of the contribution of case managers to mentor support and match outcomes. In Herrera, DuBois & Grossman (Eds.), The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles (pp. 120-125). New York, NY: Public/Private Ventures project distributed by MDRC. https://www.mdrc.org/publication/role-risk/file-full

     

    Seitz, S., Khatib, N., Guessous, O., & Kuperminc, G. (2021). Academic outcomes in a national afterschool program: The role of program experiences and youth sustained engagement. Applied Developmental Science, 1-19. https://doi.org/10.1080/10888691.2021.1993855

     

    Stelter, R. L., Kupersmidt, J. B., & Stump, K. N. (2018). Supporting mentoring relationships of youth in foster care: Do program practices predict match length? American Journal of Community Psychology, 61(3-4), 398–410. https://doi.org/10.1002/ajcp.12246

     

    Weiler, L. M., Boat, A. A., & Haddock, S. A. (2019). Youth risk and mentoring relationship quality: The moderating effect of program experiences. American Journal of Community Psychology, 63(1-2), 73-87. https://doi.org/10.1002/ajcp.12304

     

    Zeldin, S. & Collura, J. (2010, June). Being Y-AP savvy: A primer on creating & sustaining youth-adult partnerships. Ithaca, NY: ACT for Youth Center of Excellence. https://ecommons.cornell.edu/bitstream/handle/1813/19325/YAP-Savvy.pdf

    Zeldin, S., Krauss, S., Collura, J., Lucchesi, M., & Sulaiman, A. H. (2014). Conceptualizing and measuring youth-adult partnership in community programs: A cross-national study. American Journal of Community Psychology, 54(3-4), 337-347. https://doi.org/10.1007/s10464-014-9676-9


Measures of Program Quality

Elements Quality Improvement Process (EQUIP) 3.0

Applicable age range:

Adult mentoring program staff

What It Measures:

A mentoring program’s alignment with the benchmark practices outlined in the Elements of Effective Practice for Mentoring, 3rd Edition (EEP).

Description

This is a 31-item index. Sample items include, “Do you think that the prospective mentors have a good idea of what it means to be a mentor in your program?” and “Does your agency have a stated expectation for the length of commitment for matches in your program?” Items represent the six EEP standards of recruitment, screening, training, matching, monitoring and support, and closure. For most items, response options are: Yes or No. Eleven items ask respondents to check all that apply from a list of options.

Rationale

The EQUIP 3.0 scale was selected because it is easy to complete and assesses a broad range of research-informed practices. Higher scores on this measure were found to be associated with longer average match length.

Cautions

The EQUIP 3.0 is based on benchmarks established in the third edition of the EEP. Whether and how scores are associated with the use of practices outlined in the fourth edition of the EEP is not clear. An EQUIP 4.0, based on this more recent edition of the EEP, is currently being developed. Reliability of the EQUIP 3.0 (e.g., consistency of scores across different respondents for a given program) has not been established.

Special Administration Information

Ideally, the measure should be completed by a staff person at the program with knowledge of the organization’s current operations and procedures across all requested areas.

How to Score

Items are first combined to represent 22 practices. A score of “1” is assigned for each practice with “Yes” responses on all relevant items. For items in which respondents select all responses that apply, scoring instructions vary. In most cases, if all responses are selected, the item is noted as “Yes”. The Total Benchmark Score is the sum of all practices and ranges from 0 to 22. A Total Standard Score representing the total number of standards implemented can also be calculated. Respondents scoring “1” for all practices within a standard receive a score of “1” for that standard; otherwise, they receive a score of “0”. The Total Standard Score ranges from 0 to 6.
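Under these rules, scoring is a two-level aggregation: practices roll up into a benchmark total, and a standard is credited only when every one of its practices is met. A minimal sketch in Python (the practice names and practice-to-standard groupings here are hypothetical, for illustration only):

```python
# Illustrative sketch of EQUIP 3.0 two-level scoring.
# The practice and standard names below are hypothetical; the actual
# instrument combines 31 items into 22 practices across 6 EEP standards.

def score_equip(practices_met, standards):
    """practices_met: practice name -> True if all relevant items were 'Yes'.
    standards: standard name -> list of the practice names it comprises."""
    # Total Benchmark Score: one point per fully implemented practice (0-22)
    benchmark = sum(1 for met in practices_met.values() if met)
    # Total Standard Score: one point per standard with all practices met (0-6)
    standard = sum(1 for names in standards.values()
                   if all(practices_met[p] for p in names))
    return benchmark, standard

# Hypothetical example showing 4 of the 22 practices
practices = {"eligibility_criteria": True, "written_application": True,
             "pre_match_training": True, "closure_procedure": False}
standards = {"Screening": ["eligibility_criteria", "written_application"],
             "Training": ["pre_match_training"],
             "Closure": ["closure_procedure"]}
print(score_equip(practices, standards))  # (3, 2): 3 practices, 2 full standards
```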

How to Interpret Findings

Higher scores indicate greater compliance with the Elements of Effective Practice, 3rd Edition benchmark practices.

Access and Permissions

This measure is available for non-commercial use with no charge. To request permission to use the measure, please contact the authors at mentoringcentral@irtinc.us.

Alternatives

For a measure that includes both program practices outlined in the EEP and aspects of infrastructure and management practices that have been identified as key to running a successful mentoring program, see the Staff Perceptions of Program Practices scale developed by Keller and his colleagues. The measure is available for non-commercial use with no charge and a list of items can be found here.

  • Citations

    Keller, T. E., Herrera, C., & Spencer, R. Unpublished manual. Portland State University.

    Kupersmidt, J. B., Stelter, R. L., & Rhodes, J. E. (2011). Elements Quality Improvement Process (EQUIP) 3.0: Web-based program self-assessment questionnaire. Innovation Research and Training.

    Kupersmidt, J. B., Stump, K. N., Stelter, R. L., & Rhodes, J. E. (2017). Predictors of premature match closure in youth mentoring relationships. American Journal of Community Psychology, 59(1-2), 25-35. https://doi.org/10.1002/ajcp.12124

Measures of Program Quality

Mentor Perceptions of Program Practices (MPPP)

What It Measures:

Mentor experiences of program practices aligned with the Elements of Effective Practice for Mentoring (EEP).

Description

This is a 24-item scale. Sample items include, “Prior to matching you with your mentee, to what extent did your mentoring program realistically portray the benefits and challenges of being a mentor in the program?” and “Since your match was made, to what extent has your mentoring program provided suggestions and ideas for activities?” Response options are: Not at all true, Not very true, Sort of true, Mostly true, and Very true.

Rationale

The MPPP was selected because of its comprehensive assessment of components of program implementation aligned with the EEP and its validation across a wide range of mentoring programs. Scores on the MPPP have been associated with staff-reported practices and mentor reports of the quality of their relationships with their mentee and with program staff.

Cautions

Evidence of reliability and validity are limited to one study. Although the tool is designed to capture change over time in program operations, its ability to do so has not been rigorously tested.

Special Administration Information

Ideally the measure should be self-administered in a private setting, responded to anonymously, and viewed only across all respondents (i.e., in the aggregate) to ensure mentors feel comfortable providing honest responses.

How to Score

Each item is scored on a 5-point scale from 1 (Not at all true) to 5 (Very true). The total score is the average of all 24 items.

How to Interpret Findings

Higher scores on the MPPP indicate greater mentor experience of program practices aligned with the EEP.

Access and Permissions

A copy of the MPPP can be found here and is available for non-commercial use with no charge.

Alternatives

The Mentors’ Perceived Program Support Scale (MPPSS; reviewed here) offers a more focused assessment of various aspects of support. For a brief mentor-report measure of mentoring program quality, see the 6-item program quality subscale of the Match Characteristics Questionnaire (MCQ), reviewed here.

  • Citation

    Keller, T. E., Drew, A., Herrera, C., Clark-Shim, H., & Spencer, R. (2022). Do program practices matter for mentors?: How implementation of empirically supported practices is associated with youth mentoring relationship quality [Manuscript submitted for publication]. School of Social Work, Portland State University.

Measures of Program Quality

Mentors’ Perceived Program Support Scale (MPPSS)

What It Measures:

Mentor experiences of program support.

Description

This 11-item scale assesses the extent to which a program provides support to mentors in four areas: emotional, informational, tangible assistance, and appraisal. Response options are: Not at all, A little, Mostly, and Very much.

Rationale

This measure was chosen because of its evidence of reliability and validity in community-based, school-based, group, and one-to-one mentoring programs, as well as the breadth of types of support it assesses.

Cautions

The MPPSS is relatively new and has been validated in only one study with mentors who were primarily female (75%) and White (74%) and resided in rural and suburban regions. Studies have not tested the measure with other groups of mentors.

Special Administration Information

Ideally the measure should be self-administered in a private setting, responded to anonymously, and viewed only across all respondents (i.e., in the aggregate) to ensure mentors feel comfortable providing honest responses.

How to Score

Each item is scored on a 4-point scale from 1 (Not at all) to 4 (Very much). The total score is the average of all 11 items.

How to Interpret Findings

Higher scores reflect mentor perceptions of higher levels of program support.

Access and Permissions

A copy of the MPPSS can be found here. It is available for non-commercial use with no charge. The original validation study for the measure is noted below.

Alternatives

For a measure assessing the quality of the relationship between the mentor and mentoring program staff, see the 14-item Mentor-Staff Working Alliance scale, adapted by Keller and colleagues (Keller et al., 2022) from the Working Alliance Inventory (Horvath & Greenberg, 1989). Scores on the measure are associated with mentor reports of the implementation of several mentoring program practices and mentor-reported mentoring relationship quality. It is available for non-commercial use with no charge and listed in its entirety here.

  • Citations

    Horvath, A. O., & Greenberg, L. S. (1989). Development and validation of the Working Alliance Inventory. Journal of Counseling Psychology, 36(2), 223-233. https://doi.org/10.1037/0022-0167.36.2.223

    Keller, T. E., Drew, A., Herrera, C., Clark-Shim, H., & Spencer, R. (2022). Do program practices matter for mentors?: How implementation of empirically supported practices is associated with youth mentoring relationship quality [Manuscript submitted for publication]. School of Social Work, Portland State University.

    Marshall, J. H., Davis, M. C., Lawrence, E. C., Peugh, J. L., & Toland, M. D. (2016). Mentors’ Perceived Program Support Scale: Development and initial validation. Journal of Community Psychology, 44(3), 342-357.  https://doi.org/10.1002/jcop.21772

Measures of Program Quality

Youth Experiences of Program Quality (YEPQ)

What It Measures:

Youth experiences of program supports before and during their mentoring relationship.

Description 

This is a 13-item scale. Each item begins with: “There is someone at my mentoring program (other than my mentor) who…” Items include, “…asked me about the kind of mentor I wanted” and “…I could go to if I had a problem with my mentor.” Six items are recommended in addition to the original seven items to obtain broader coverage of practices in the Elements of Effective Practice for Mentoring (EEP). Response options are: Not at all true, A little true, Mostly true, Very true, and Not applicable.

Rationale

This is one of the only comprehensive youth-report measures of program practices created specifically for mentoring programs and used in several large-scale mentoring evaluations. It was selected because it measures several program components included in the EEP and has broad application across youth mentoring program types.

Cautions

Although the tool has been used in several large-scale evaluations of mentoring programs, evidence of reliability and validity are limited.

Special Administration Information

Ideally the measure should be administered in a private setting, responded to anonymously, and viewed only across all respondents (i.e., in the aggregate) to ensure youth feel comfortable providing honest responses.

How to Score

Each item is scored on a 4-point scale from 1 (Not at all true) to 4 (Very true) with “Not applicable” an additional option for one item. The total score is the average of all items completed.
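The rule of averaging only the completed items (so that a “Not applicable” response neither lowers nor raises the score) can be sketched as follows; the function name and input format are illustrative:

```python
# Sketch of YEPQ scoring: average only the items actually completed.
# "Not applicable" responses are excluded from numerator and denominator alike.
RESPONSE_VALUES = {"Not at all true": 1, "A little true": 2,
                   "Mostly true": 3, "Very true": 4}

def score_yepq(responses):
    """responses: list of response strings, possibly including 'Not applicable'."""
    scored = [RESPONSE_VALUES[r] for r in responses if r != "Not applicable"]
    return sum(scored) / len(scored) if scored else None

print(score_yepq(["Very true", "Mostly true", "Not applicable", "A little true"]))
# (4 + 3 + 2) / 3 = 3.0
```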

How to Interpret Findings

Higher scores on the YEPQ reflect experience of more practices aligned with the EEP.

Access and Permissions

A copy of the YEPQ can be found here. The measure is available for non-commercial use with no charge.

Alternatives

For group and other site-based mentoring programs in which youth experience more frequent contact with program staff and other site-based aspects of program supports, see The Big Three and Perceptions of Safety measure, reviewed here.

  • Citation

    Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. A Public/Private Ventures project distributed by MDRC.

Measures of Program Quality

The Big Three and Perceptions of Safety

What It Measures:

Youth’s experience of program quality in site-based programs.

Description

This 24-item scale assesses three program features essential to delivering effective youth development programming: positive and sustained adult-youth relationships (7 items, e.g., “Adults at the program care for me.”), life-skill-building activities (6 items, e.g., “At the program, I learn skills that help me succeed in life.”), and opportunities for participation in and leadership of valued activities (6 items, e.g., “I make meaningful contributions to the program.”). Perceptions of program safety (5 items, e.g., “My program takes place in a safe space.”) are also assessed. Response options are: Strongly disagree, Disagree, Agree, Strongly agree.

Rationale

This measure was chosen because it captures core ideas shared across theoretical frameworks of program quality in site-based youth programming, focusing on the aspects of programming that youth themselves are best positioned to report on.

Cautions

The Big Three and Perceptions of Safety measure is relatively new. Evidence of reliability and validity comes from a sample of Rwandan youth (49% female), with a subset of items further tested in a sample of Salvadoran youth (50% female). The measure was translated into Kinyarwanda and Spanish for administration, and the reliability and validity of an English version have not yet been tested. In addition, this measure does not assess youth experiences of inclusion and equity. Additional measures should be used to evaluate these important aspects of site-based programming.

Special Administration Information

Ideally the measure should be administered in a private setting, responded to anonymously, and viewed only across all respondents (i.e., in the aggregate) to ensure youth feel comfortable providing honest responses.

How to Score

In published studies, each item is rated on a scale from (0) Completely Disagree to (100) Completely Agree. However, this response scale can be difficult to use reliably outside of a computer-based survey platform. Thus, a 4-point scale from (1) Strongly disagree to (4) Strongly agree is recommended. Items for each of the four subscales are averaged to create one score for each subscale.
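Subscale scoring on the recommended 4-point scale reduces to a per-subscale mean. A minimal sketch (the item identifiers and groupings below are illustrative, not the measure's actual item assignments):

```python
# Sketch of Big Three / Perceptions of Safety subscale scoring.
# Ratings run 1 (Strongly disagree) to 4 (Strongly agree); the
# item-to-subscale assignments below are illustrative only.

def score_subscales(item_scores, subscales):
    """item_scores: item id -> rating (1-4).
    subscales: subscale name -> list of item ids."""
    return {name: sum(item_scores[i] for i in items) / len(items)
            for name, items in subscales.items()}

items = {"rel1": 4, "rel2": 3, "safe1": 2, "safe2": 4}
subscales = {"Relationships": ["rel1", "rel2"], "Safety": ["safe1", "safe2"]}
print(score_subscales(items, subscales))
# {'Relationships': 3.5, 'Safety': 3.0}
```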

How to Interpret Findings

Higher scores reflect more positive experiences of program quality in each of the four assessed areas.

Access and Permissions

The measure is available for non-commercial use with no charge and can be found here. The original validation study for the scale is noted below and available here.

Alternatives

For a measure of youth’s experience of group processes within a group mentoring program including group engagement, group cohesion, and mutual help, see a review of the Group Mentoring Climate scale here.

  • Citation

    Tirrell, J. M., Dowling, E. M., Gansert, P., Buckingham, M., Wong, C. A., Suzuki, S., Naliaka, C., Kibbedi, P., Namurinda, E., Williams, K., Geldhof, G. J., Lerner, J. V., Ebstyne King, P., Sim, A. T. R., & Lerner, R. M. (2019). Toward a measure for assessing features of effective youth development programs: Contextual safety and the “Big Three” components of positive youth development programs in Rwanda. Child and Youth Care Forum, 49(2), 201-222. https://doi.org/10.1007/s10566-019-09524-6

Measures of Program Quality

Caregiver Experiences of Program Quality (CEPQ)

What It Measures:

A caregiver’s experiences of mentoring program practices.

Description

This is a 15-item index. Sample items include, “I have attended a program-sponsored activity or event (not training) with my child,” and “There is someone I can go to at the program if I have concerns about my child’s mentor.” Five items are recommended in addition to the original 10 items to obtain broader coverage of practices in the Elements of Effective Practice for Mentoring. Response options are: Yes, No, and Not Applicable.

Rationale

This is one of the only comprehensive measures of caregiver-focused program practices created specifically for mentoring programs and used in several large-scale mentoring evaluations. It was selected because it measures several program components included in the Elements of Effective Practice for Mentoring and has broad application across youth mentoring program types.

Cautions

Although the measure has been administered to caregivers in a wide range of mentoring programs across the U.S., it has not been validated in peer-reviewed research. No information on the reliability or validity of this measure is currently available.

Special Administration Information

Ideally the measure should be self-administered in a private setting, responded to anonymously, and viewed only across all respondents (i.e., in the aggregate) to ensure caregivers feel comfortable providing honest responses.

How to Score

Each item is scored 0 (No) or 1 (Yes), except the last item on closure which also has a “Not Applicable” option. The total score is the sum of all 15 items and ranges from 0 to 15 (or 0 to 14 for those responding “Not Applicable” to the last item). Programs may also examine subsets of program practices, summing only items of interest.
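Because a “Not Applicable” response on the closure item shrinks the possible range, it is useful to track the maximum alongside the total. A sketch under these assumptions (the input format is illustrative):

```python
# Sketch of CEPQ scoring: sum of Yes (1) / No (0) responses across 15 items.
# A "Not Applicable" on the final (closure) item reduces the possible
# range from 0-15 to 0-14.

def score_cepq(responses):
    """responses: 15 strings: 'Yes', 'No', or (last item only) 'Not Applicable'."""
    total = sum(1 for r in responses if r == "Yes")
    max_score = len(responses) - responses.count("Not Applicable")
    return total, max_score

responses = ["Yes"] * 10 + ["No"] * 4 + ["Not Applicable"]
print(score_cepq(responses))  # (10, 14): 10 out of a possible 14
```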

How to Interpret Findings

Higher scores on the measure reflect caregiver receipt of and/or satisfaction with a greater number of caregiver-focused program practices.

Access and Permissions

A copy of the measure can be found here and is available for non-commercial use with no charge.

Alternatives

None.

  • Citation

    Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. A Public/Private Ventures project distributed by MDRC.

Mental and Emotional Health

Emotional outcomes include feelings of distress, such as symptoms of depression or anxiety, as well as facets of positive well-being, such as optimism, self-esteem, life satisfaction, and a sense of meaning or purpose in life. Historically, mentoring programs have devoted substantially less attention to emotional health relative to academic and behavioral outcomes.

A recent survey of mentoring programs in Illinois, for example, found that mental health outcomes were a “top 5” priority area for less than 10% of programs, whereas academic success was a top priority for over 80% and risk behavior prevention was among the chief concerns for nearly 60%.1 A number of factors may account for this pattern. These include the reality that federal funding opportunities for mentoring programs have been most focused on education and juvenile justice, areas in which mental health is not traditionally a primary concern. Yet, there is good reason to expect that mentoring can improve emotional outcomes and that these merit greater attention within the field. DuBois et al.’s recent meta-analysis,2 which synthesized the results of 73 evaluations of mentoring programs, found that, as a group, these studies showed evidence of positive effects on psychological/emotional outcomes. Available data also suggest that substantial numbers of youth served by mentoring programs are likely to be experiencing significant mental health concerns. In their Role of Risk study, Herrera and colleagues3 found that almost one in four youth involved in this research reported high levels of depressive symptoms at the time of program referral.

In selecting initial mental and emotional health outcomes to consider for this Toolkit, care was taken to include outcomes on both the negative and positive sides of the continuum. The former are represented by depressive symptoms and the latter by life satisfaction, hopeful future expectations, self-esteem, and sense of meaning in life. Adaptive coping with stress also was included. This is not an emotional outcome per se. Yet, the strategies that youth rely on to manage difficult circumstances in their lives have been shown convincingly to have important implications for their mental health.4

Depressive Symptoms

Even when not rising to the level of a clinical disorder, symptoms of depression reported by children and adolescents merit attention. In a 2-year longitudinal study of 435 school-age children and adolescents, for example, those with stable elevations in depressive symptoms exhibited a pattern of significantly greater impairment across several areas of functioning including clinically significant levels of anxiety, markedly lower self-esteem, and higher levels of acting-out behavior as rated by teachers.5 Similarly, adolescent depressive symptoms, even when mild, have been found to be associated with increased health care utilization and costs, only a minority of which were attributable to mental health care.6 In the Role of Risk study,3 youth who were randomly assigned to receive mentoring through the Big Brothers Big Sisters community-based program improved significantly in their reports of depressive symptoms over the 13-month time period of the study in comparison to their non-mentored peers. Interestingly, it appears that the benefits observed may have been most attributable to mentoring reducing existing levels of depressive symptoms as opposed to helping youth avoid symptom onset or worsening. A recent longitudinal study of youth in the BBBS of Canada community-based program found similarly that mentored youth, especially those whose relationships lasted 12 months or more, reported significantly fewer symptoms of depression at an 18-month follow-up than did non-mentored youth.7 Peaslee and Teye8 also found significant reductions in reported levels of depression for BBBS mentored youth in both community- and site-based programs. These researchers recommended that mentoring agencies routinely include depressive inventories in youth outcome assessments. Many important questions remain, however. 
For example, to what extent are beneficial effects on depressive symptoms evident across a broader range of program models than those studied to date (e.g., cross-age peer, group)? Do they extend to youth whose symptom picture is serious enough to meet clinical criteria for a depressive disorder? If programs and researchers heed the call for more consistent measurement of this outcome, answers to such questions will begin to be developed.

 

Life Satisfaction

It is difficult to imagine an outcome with more fundamental intuitive importance than the degree to which youth are experiencing their lives as enjoyable and rewarding while growing up. Research supports this idea, with youth who report greater life satisfaction being less prone to a wide range of maladaptive psychosocial outcomes, including feelings of depression and anxiety, loneliness, social stress, and aggressive behaviors.9 It also appears that greater life satisfaction may promote engagement with school, with such engagement then further reinforcing feelings of life satisfaction.10 These types of findings, coupled with the reality that many of the youth served by mentoring programs are precisely those for whom life satisfaction tends to be lower (e.g., those from lower socioeconomic status backgrounds and higher-stress family environments),10 were influential in our selection of life satisfaction as an outcome. Furthermore, although limited, available evidence suggests that mentoring can indeed promote stronger feelings of life satisfaction, with McQuillin and colleagues11,12 providing support for this in two separate randomized controlled evaluations of a brief instrumental school-based mentoring program for middle schoolers.

 

Hopeful Future Expectations

Theoretically, mentors are in a good position to help their mentees both sustain and enhance positive views of what the future holds for them. Rhodes described how this might occur through a variety of processes, ranging from mentees directly internalizing their mentors’ optimism and confidence in their potential, to mentors’ more strategic efforts to connect their mentees to experiences and opportunities that open the door to new “possible selves” (ideas of what they would like to become).13 In line with these possibilities, Bowers and colleagues14 found that both the quantity and emotional closeness of youths’ relationships with important non-parental adults predicted more hopeful expectations for their futures. In contrast, in the Role of Risk study,3 evidence of effects of mentoring program participation on youths’ scores on the Children’s Hope Scale was lacking. These mixed findings notwithstanding, there is solid evidence to indicate that any success that is achieved in cultivating hopeful future expectations will yield meaningful dividends.15,16

 

Adaptive Coping with Stress

Mentors would appear to be in a good position to help their mentees develop effective skills for coping. DuBois and colleagues17, for example, noted that “In the process of helping youth negotiate different types of stressors, mentors may model and instruct youth in skills and techniques that they can apply in similar situations.” Research suggests that it could be of particular value for mentors to foster active or so-called “approach” strategies for coping, such as seeking support, positive reframing, and problem-solving, as these (rather than more passive or avoidant strategies) have been most consistently associated with better adaptive outcomes for youth.4 There is scant research that bears directly on the potential for mentoring to promote adaptive forms of coping. In their research with the BBBS community-based program, DuBois and colleagues17 found no difference in mentored youths’ reported use of approach-focused coping strategies (problem-solving and support seeking) at 6 or 12 months after being matched relative to a comparison group of non-mentored youth. In research with the participants in the landmark P/PV evaluation of the BBBS community-based program,18 reports of receiving help from a Big Brother or Sister with coping were associated with matches that lasted longer and in which outings tended to last longer. The effectiveness of mentoring programs that more intentionally target development of coping skills is also an emerging area of inquiry. Grant and colleagues,19 for example, recently have developed and piloted an intervention that provides early adolescents in low-income urban communities with a) training in contextually relevant coping, b) connection to mentors who support youth’s developing coping strategies, and c) connection to youth-serving community organizations, where youth receive additional support.

 

Self-Esteem

Research suggests that youth who report higher levels of self-esteem experience fewer psychological, behavioral, academic, and economic difficulties later in life.20,21 In light of such links, there is considerable interest in interventions aimed at strengthening youth self-esteem. Because youth relationships characterized by emotional support and social approval appear to have a positive influence on the development of self-esteem,22 mentors may be in a good position to strengthen youth’s feelings of self-worth. In fact, some research supports this hypothesis. For example, in one study, mentoring program participants showed greater improvements in self-esteem at a 15-month follow-up than youth in a comparison group.3 In addition, in a small randomized trial, Silverthorn and colleagues found hints that youth participating in Girl Power! (a youth mentoring program with an explicit focus on strengthening self-esteem) experienced larger gains in self-esteem than youth participating in standard BBBSA community-based mentoring.23 Although this effect was not statistically significant, the estimated effect size of .25 is meaningful and worth noting. Natural mentoring relationships may also foster positive self-esteem,24 and improvements in self-esteem may be one of the routes through which mentoring works. For example, DuBois and colleagues found that the positive effects of participation in BBBSA on emotional and behavior problems were explained through several variables, one of which was self-esteem.25

Despite these promising findings, results from three large randomized trials found no significant impact of youth mentoring on self-esteem.3,26,27 Thus, the viability of youth mentoring as an intervention to promote self-esteem is still an open question. Mentoring programs that place explicit emphasis on strengthening self-esteem may show particular promise in producing effects, and to the extent that mentors can effect positive change in self-esteem, youth may reap significant benefits.

 

Sense of Meaning and Purpose

Scholars studying positive youth development have begun to emphasize youth’s sense of meaning and purpose in life.28 Youth with a sense of meaning and purpose experience greater psychological health, quality of life, and academic functioning, and engage in fewer health risk behaviors.28,29,30 However, establishing a sense of purpose requires youth to first contemplate and actively search for their purpose and meaning in life.31 Some research suggests that this search for meaning and purpose can be confusing and stressful for adolescents, negatively affecting their self-esteem.31 Mentors may be well positioned to provide valuable support and guidance to youth as they explore their own meaning and purpose. In two studies of supportive adult relationships developing outside of formal mentoring programs, youth reported that significant adults in their lives (e.g., parents, teachers, staff, mentors) provided valuable guidance or inspiration in their initial search for purpose in life, and critical support as they pursued activities consistent with their identified purpose.32,33 Another study of natural mentoring relationships involving 207 females in middle and high school34 found an association between participation in “growth-fostering” mentoring relationships (i.e., characterized by mutual empathy and engagement, authenticity, empowerment, and the ability to deal with conflict) and engagement in purposeful activities (i.e., activities consistent with one’s purpose in life). Moreover, it was through effects on participation in purposeful activities that these mentoring relationships were associated with higher levels of self-esteem. Although these findings suggest that natural mentoring relationships may be important in fostering youth sense of meaning and purpose, studies have yet to clearly outline the role of formal, program-based mentoring relationships in this process.

 

Ethnic Identity

Research suggests that ethnic identity, or youth’s identification with their ethnic group, may be an important mental health outcome, particularly for racial and ethnic minority youth. Ethnic identity is positively related to self-esteem35,36 and negatively associated with internalizing37 and externalizing symptoms.38,39 Many youth served by mentoring programs are likely to be grappling with the developmentally appropriate task of identity formation as they seek to learn more about their ethnic group, participate in its cultural practices, and develop positive (or negative) feelings about their group membership. We selected a measure of ethnic identity that reflects youth’s efforts to learn about their ethnic group (i.e., exploration) and their sense of commitment to the group. In addition to the relevance of these aspects of identity to youth’s mental health, both dimensions have been shown to mitigate the negative mental health correlates of racial/ethnic discrimination, which is a common experience for racial and ethnic minority youth.40 Few studies have examined the link between mentoring program participation and positive ethnic identity. One study found that African American boys who participated in a mentoring program had more positive Black identity scores and lower pre-encounter scores (i.e., identity attitudes that minimize racism and race-related issues).41 Other work further suggests that ethnicity and ethnic identity may also play an important role in how mentoring influences developmental trajectories.42,43 For example, Hurd and colleagues found that natural mentoring relationships promoted positive educational attainment for academically at-risk African American adolescents, in part through effects on racial identity.43 Thus, ethnic identity may be worthwhile to assess not only because of its potential to be affected by mentoring, but also because of the important role it may play in shaping mentoring outcomes.

  • Cited Literature
    1. DuBois, D. L., Felner, J., & O’Neal, B. (2014). State of mentoring in Illinois. Chicago, IL: Illinois Mentoring Partnership.
    2. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    3. Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. New York, NY: A Public/Private Ventures project published by MDRC. Retrieved from http://www.mdrc.org/sites/default/files/Role%20of%20Risk_Exec%20Sum-web%20final.pdf
    4. Compas, B. E., Conner-Smith, J. K., Saltzman, H., Harding, A., & Wadsworth, M. E. (2001). Coping with stress during childhood and adolescence: Problems, progress, and potential in theory and research. Psychological Bulletin, 127, 87–127. http://dx.doi.org/10.1037//0033-2909.127.1.87
    5. DuBois, D. L., Felner, R. D., Bartels, C., & Silverman, M. M. (1995). Stability of self-reported depressive symptoms in a community sample of children and adolescents. Journal of Clinical Child Psychology, 24, 386–396. http://dx.doi.org/10.1207/s15374424jccp2404_3
    6. Wright, D. R., Katon, W. J., Ludman, E., McCauley, E., Oliver, M., Lindenbaum, J., & Richardson, L. P. (2016). Association of adolescent depressive symptoms with health care utilization and payer-incurred expenditures. Academic Pediatrics, 16, 82–89. http://dx.doi.org/10.1016/j.acap.2015.08.013
    7. DeWit, D. J., DuBois, D. L., Erdem, G., Larose, S., & Lipman, E. L. (2016). The role of program-supported mentoring relationships in promoting youth mental health behavioral and developmental outcomes. Prevention Science, 17, 646–657. http://dx.doi.org/10.1007/s11121-016-0663-2
    8. Peaslee, L., & Teye, A. C. (2015). Testing the impact of mentor training and peer support on the quality of mentor-mentee relationships and outcomes for at-risk youth: Final report. Rockville, MD: U.S. Department of Justice. Retrieved from https://www.ncjrs.gov/pdffiles1/ojjdp/grants/248719.pdf
    9. Gilman, R., & Huebner, S. (2003). A review of life satisfaction research with children and adolescents. School Psychology Quarterly, 18, 192–205. http://dx.doi.org/10.1023/B:SOCI.0000007497.57754.e3
    10. Lewis, A. D., Huebner, E. S., Malone, P. S., & Valois, R. F. (2011). Life satisfaction and student engagement in adolescents. Journal of Youth and Adolescence, 40, 249–262. http://dx.doi.org/10.1007/s10964-010-9517-6
    11. McQuillin, S., Strait, G., Smith, B., & Ingram, A. (2015). Brief instrumental school-based mentoring for first- and second-year middle school students: A randomized evaluation. Journal of Community Psychology, 43, 885–899. http://dx.doi.org/10.1002/jcop.21719
    12. McQuillin, S. D., & Lyons, M. D. (2016). Brief instrumental school-based mentoring for middle school students: Theory and impact. Advances in School Mental Health Promotion 9, 73–89. http://dx.doi.org/10.1080/1754730X.2016.1148620
    13. Rhodes, J. E. (2005). A model of youth mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 30–43). Thousand Oaks, CA: SAGE.
    14. Bowers, E. P., Geldhof, G. J., Schmid, K. L., Napolitano, C. M., Minor, K., & Lerner, J. V. (2012). Relationships with important nonparental adults and positive youth development: An examination of youth self-regulatory strengths as mediators. Research in Human Development, 9, 298–316. http://dx.doi.org/10.1080/15427609.2012.729911
    15. Callina, K. S., Mueller, M. K., Buckingham, M. H., & Gutierrez, A. S. (2015). Building hope for positive youth development: Research, practice, and policy. In E. P. Bowers et al. (Eds.), Promoting positive youth development (pp. 71–94). Springer International Publishing. http://dx.doi.org/10.1007/978-3-319-17166-1_5
    16. Schmid, K. L., Phelps, E., & Lerner, R. M. (2011). Constructing positive futures: Modeling the relationship between adolescents’ hopeful future expectations and intentional self-regulation in predicting positive youth development. Journal of Adolescence, 34, 1127–1135. http://dx.doi.org/10.1016/j.adolescence.2011.07.009
    17. DuBois, D. L., Neville, H. A., Parra, G. R., & Pugh-Lilly, A. O. (2002). Testing a new model of mentoring. In G. G. Noam (Ed.-in-chief) & J. E. Rhodes (Ed.), A critical view of youth mentoring (New Directions for Youth Development: Theory, Research, and Practice, No. 93, pp. 21–57). San Francisco, CA: Jossey-Bass.
    18. Rhodes, J. E., Reddy, R., Roffman, J., & Grossman, J. (2005). Promoting successful youth mentoring relationships: A preliminary screening questionnaire. Journal of Primary Prevention, 26, 147–168. http://dx.doi.org/10.1007/s10935-005-1849-8
    19. Grant, K. E., Farahmand, F., Meyerson, D. A., DuBois, D. L., Tolan, P. H., Gaylord-Harden, N. K., . . . Johnson, S. (2014). Development of Cities Mentor Project: An intervention to improve academic outcomes for low-income urban youth through instruction in effective coping supported by mentoring relationships and protective settings. Journal of Prevention and Intervention in the Community, 42, 221–242. http://dx.doi.org/10.1080/10852352.2014.916586
    20. DuBois, D. L., & Tevendale, H. D. (1999). Self-esteem in childhood and adolescence: Vaccine or epiphenomenon? Applied and Preventive Psychology, 8, 103-117. http://dx.doi.org/10.1016/S0962-1849(99)80002-X
    21. Trzesniewski, K. H., Donnellan, M. B., Moffitt, T. E., Robins, R. W., Poulton, R., & Caspi, A. (2006). Low self-esteem during adolescence predicts poor health, criminal behaviors, and limited economic prospects during adulthood. Developmental Psychology, 42, 381–390. https://psycnet.apa.org/record/2006-03514-015
    22. Harter, S. (1990). Causes, correlates, and the functional role of global self-worth: A life-span perspective. In R. J. Sternberg & J. Kolligian, Jr. (Eds.), Competence considered (pp. 67-97). New Haven, CT: Yale University Press.
    23. Silverthorn, N., DuBois, D. L., Pryce, J. M., Sanchez, B., Zmiewski, M. R., Hauber, S., Chianelli, J., Jones, V., Long, L., & Cheng, L. (2008). Can we improve on the “Gold Standard”? Evaluation of a program to enhance relationships in the Big Brothers Big Sisters Program. Paper presented at the Society for Research in Adolescence Biennial Meeting, Chicago, Illinois.
    24. DuBois, D. L. & Silverthorn, N. (2005). Natural mentoring relationships and Adolescent Health: Evidence from a national study. American Journal of Public Health, 95, 518-524. http://dx.doi.org/10.2105/AJPH.2003.031476
    25. DuBois, D. L., Neville, H. A., Parra, G. R., & Pugh-Lilly, A. O. (2002). Testing a new model of mentoring. New Directions for Youth Development, 93, 21–57. http://dx.doi.org/10.1002/yd.23320029305
    26. Herrera, C., Grossman, J. B., Kauh, T. J., & McMaken, J. (2011). Mentoring in schools: An impact study of Big Brothers Big Sisters school‐based mentoring. Child Development, 82, 346-361. http://dx.doi.org/10.1111/j.1467-8624.2010.01559.x
    27. Tierney, J. P., Grossman, J. B., & Resch, N. L. (1995). Making a difference. An impact study of Big Brothers Big Sisters. Philadelphia, PA: Public/Private Ventures.
    28. Damon, W., Menon, J., & Bronk, K. C. (2003). The development of purpose during adolescence. Applied Developmental Science, 7, 119–128. http://dx.doi.org/10.1207/S1532480XADS0703_2
    29. Brassai, L., Piko, B. F., & Steger, M. F. (2010). Meaning in life: Is it a protective factor for adolescents’ psychological health? International Journal of Behavioral Medicine, 18, 44-51. http://dx.doi.org/10.1007/s12529-010-9089-6
    30. Leffert, N., Benson, P.L., Scales, P.C., Sharma, A. R., Drake, D.R., & Blyth, D.A. (1998). Developmental assets: Measurement and prediction of risk behaviors among adolescents. Applied Developmental Science, 2, 209-230. http://dx.doi.org/10.1207/s1532480xads0204_4
    31. Blattner, M.C., Liang, B., Lund, T., & Spencer, R. (2013). Searching for sense of purpose: The role of parents and effects on self-esteem among female adolescents. Journal of Adolescence, 36, 839-848. http://dx.doi.org/10.1016/j.adolescence.2013.06.008
    32. Liang, B., White, A., Rhodes, H., Strodel, R., Mousseau, A.M.D., Lund, T., & Gutowski, E. (in press). Pathways to purpose among impoverished youth from the Guatemala City Dump Community. Community Psychology in Global Perspective.
    33. Liang, B., White, A., Mousseau, A.M.D., Hasse, A., Knight, L., Berado, D., & Lund, T. J. (2017). The four P’s of purpose among college bound students: People, propensity, passion, prosocial benefits. The Journal of Positive Psychology, 12, 281-294. http://dx.doi.org/10.1080/17439760.2016.1225118
    34. Liang, B., Lund, T. J., Mousseau, A. M. D., & Spencer, R. (2016). The mediating role of engagement in mentoring relationships and self-esteem among affluent adolescent girls. Psychology in the Schools, 53, 848-860. http://dx.doi.org/10.1002/pits.21949
    35. Bracey, J.R., Bámaca, M.Y., & Umaña-Taylor, A.J. (2004). Examining ethnic identity and self-esteem among biracial and monoracial adolescents. Journal of Youth and Adolescence, 33, 123-132. https://psycnet.apa.org/record/2004-10744-004
    36. Umaña-Taylor, A. J., & Updegraff, K. A. (2007). Latino adolescents’ mental health: Exploring the interrelations among discrimination, ethnic identity, cultural orientation, self-esteem, and depressive symptoms. Journal of Adolescence, 30, 549-567. http://dx.doi.org/10.1016/j.adolescence.2006.08.002
    37. Wong, C.A., Eccles, J.S., & Sameroff, A. (2003). The influence of ethnic discrimination and ethnic identification on African American adolescents’ school and socioemotional adjustment. Journal of Personality, 71, 1197-1232. http://dx.doi.org/10.1111/1467-6494.7106012
    38. Belgrave, F.Z., Van Oss Marin, B., & Chambers, D.B. (2000). Culture, contextual, and intrapersonal predictors of risky sexual attitudes among urban African American girls in early adolescence. Cultural Diversity and Ethnic Minority Psychology, 6, 309-322. http://dx.doi.org/10.1037/1099-9809.6.3.309
    39. McMahon, S.D. & Watts, R.J. (2002). Ethnic identity in urban African American youth: Exploring links with self-worth, aggression, and other psychosocial variables. Journal of Community Psychology, 30, 411-431. http://dx.doi.org/10.1002/jcop.10013
    40. Romero, A.J., & Roberts, R.E. (2003). The impact of multiple dimensions of ethnic identity on discrimination and adolescents’ self-esteem. Journal of Applied Social Psychology, 33, 2288-2305. http://dx.doi.org/10.1111/j.1559-1816.2003.tb01885.x
    41. Gordon, D.M., Iwamoto, D.K., Ward, N., Potts, R., & Boyd, E. (2009). Mentoring urban black middle school male students: Implications for academic achievement. Journal of Negro Education, 78, 277-289.
    42. Darling, N., Bogar, G.A., Cavell, T.A., Murphy, S.E., & Sánchez, B. (2006). Gender, ethnicity, development and risk: Mentoring and the consideration of individual differences. Journal of Community Psychology, 34, 765-779. http://dx.doi.org/10.1002/jcop.20128
    43. Hurd, N. M., Sánchez, B., Zimmerman, M. A., & Caldwell, C. H. (2012). Natural mentors, racial identity, and educational attainment among African American adolescents: Exploring pathways to success. Child Development, 83, 1196-1212. http://dx.doi.org/10.1111/j.1467-8624.2012.01769.x

Mental and Emotional Health

Life Satisfaction

This measure consists of 6 items. Sample items include “How satisfied or dissatisfied are you with your family life?” and “How satisfied or dissatisfied are you with your life overall?” Response choices are Very dissatisfied, Somewhat dissatisfied, Neither satisfied nor dissatisfied, Somewhat satisfied, and Very satisfied.

Scale

Brief Multidimensional Students’ Life Satisfaction Scale – Peabody Treatment Progress Battery

What It Measures:

A youth’s reported overall life satisfaction; it also provides a profile of feelings of satisfaction across five different life domains (family, friends, school, self, and living environment).

Intended Age Range

8- to 18-year-olds.

Rationale

A number of developmentally appropriate measures of life satisfaction exist for children and adolescents. The Multidimensional Students’ Life Satisfaction Scale, however, is unique because, in addition to assessing global life satisfaction, it provides a profile of feelings of satisfaction across different life domains. This information may be useful in mentoring programs for identifying areas in which a youth is most in need of additional encouragement or support. As an evaluation tool, the domain-specific satisfaction ratings also may enhance sensitivity to benefits of mentoring that otherwise could be missed (e.g., improved feelings of satisfaction with friends or school).

Cautions

Because levels of satisfaction with particular life domains are assessed based on answers to single items, these are likely to be less reliable than the total score on the measure. It should also be kept in mind that ratings on this and other measures of life satisfaction are susceptible to social desirability bias (i.e., a tendency to answer questions in a manner that will be viewed favorably by others). For this reason, scores obtained may indicate a higher level of life satisfaction than is actually the case.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Very dissatisfied) to 5 (Very satisfied). The total score on the measure is obtained by averaging across all items.

How to Interpret Findings

According to the developers of the measure, a total score greater than 4.5 is considered high, while a score less than 3.3 is considered low. However, these guidelines are based on validation research with youth receiving mental health services and thus should be utilized with this caveat in mind.

Access and Permissions

The measure is available for non-commercial use with no charge and can be accessed online here. A Spanish language version can be found here.

Alternatives

Those interested in a more robust assessment of feelings of life satisfaction may want to consider using the full Multidimensional Students’ Life Satisfaction Scale. This 40-item measure assesses life satisfaction in each domain based on answers to multiple questions rather than just one. Briefer measures of overall life satisfaction are also available. These include a 3-item measure for adolescents from Child Trends and a 1-item visual analogue measure from the Health Behaviors in School-Age Children Survey that has been used with youth ages 11 and older.

  • Cited Literature
    1. Athay, M. M., Douglas Kelley, S., & Dew-Reeves, S. E. (2012). Brief Multidimensional Students’ Life Satisfaction Scale – PTPB Version (BMSLSS-PTPB): Psychometric properties and relationship with mental health symptom severity over time. Administration and Policy in Mental Health, 39, 30–40. http://dx.doi.org/10.1007/s10488-011-0385-5

Mental and Emotional Health

Depressive Symptoms

This measure consists of 8 items. Youth are asked to consider their feelings over the past 7 days. Sample items include: “I felt sad,” “I felt lonely,” and “It was hard for me to have fun.” Response choices are Never, Almost never, Sometimes, Often, and Almost always.

Scale

Pediatric Depressive Symptoms – Short Form from the Patient-Reported Outcomes Measurement Information System (PROMIS)

What It Measures:

A youth’s reported level of depressive symptoms.

Intended Age Range

8- to 17-year-olds.

Rationale

This scale was selected based on its relative brevity, appropriateness for use with a wide age range of youth, promising evidence of reliability and validity, and the availability of normative data that can be used to help gauge the severity of a responding youth’s depressive symptoms.

Cautions

This measure is not intended to assess the presence or absence of a depressive disorder. Therefore, care should be taken to describe and interpret resulting scores as referring to depressive symptoms rather than depression or a depressive disorder. In addition, although normative data are available to help with interpretation of scores, they may not apply fully to the youth served by a given mentoring program.

Special Administration Information

Provisions should be in place for appropriate follow-up and referral to mental health treatment for youth who report notably elevated levels of depressive symptoms (i.e., a T-score of 60 or higher per scoring procedures described below). Youth should be made aware of this possibility prior to completing the measure (i.e., that their responses will be reviewed to determine whether it may be appropriate to provide their parent or guardian with information about counseling or other resources).

How to Score

Each item is scored on a 5-point scale from 0 (Never) to 4 (Almost always). A total score is computed by averaging across items.

How to Interpret Findings

Higher T-scores reflect greater levels of depressive symptoms. The T-score corresponding to a youth’s score, based on normative sample data, can be determined using a table in the scoring manual for the Pediatric PROMIS Profile Instruments (see the table titled “Depressive Symptoms 8b” on p. 9). T-scores are constructed to have an average of 50 and a standard deviation of 10. For example, a T-score of 50 represents a score equal to the average score for youth in the normative sample, whereas a T-score of 60 is one standard deviation above this average and thus indicates an elevated level of depressive symptoms compared to the typical youth in the normative sample.
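As a rough illustration, the arithmetic behind interpreting a T-score (and applying the follow-up threshold noted above) can be sketched in a few lines. The function names here are hypothetical, and converting a raw score to a T-score still requires the lookup table in the scoring manual:

```python
# Illustrative sketch only: interpreting a T-score, not the official PROMIS
# raw-score-to-T-score conversion (that requires the scoring manual's table).

T_MEAN = 50.0  # T-scores are standardized to an average of 50...
T_SD = 10.0    # ...with a standard deviation of 10

def sds_above_mean(t_score):
    """How many standard deviations above the normative average a T-score falls."""
    return (t_score - T_MEAN) / T_SD

def flag_for_follow_up(t_score, threshold=60.0):
    """Flag notably elevated symptoms (T-score of 60 or higher, per the guidance above)."""
    return t_score >= threshold

print(sds_above_mean(60))      # 1.0 (one standard deviation above the average)
print(flag_for_follow_up(55))  # False
print(flag_for_follow_up(62))  # True
```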

Access and Permissions

This scale is available for non-commercial use with no charge and can be obtained here. A Spanish version is also available.

Alternatives

The Patient Health Questionnaire (PHQ-9) is the self-administered version of the depression portion of the PRIME-MD interview, which uses Diagnostic and Statistical Manual-IV criteria to screen for possible mental disorders in primary care settings. The measure and its documentation can be found here. Although originally developed for adults, validity evidence for use of this measure with adolescents is encouraging. It can be scored for evidence of a depressive disorder and to grade symptom severity via a continuous score. This measure may be a good alternative for programs interested in identifying youth who may be experiencing a depressive disorder. As with all survey-based measures, however, it does not provide a sufficient basis for making a clinical diagnosis of depression.

  • Cited Literature
    1. Irwin, D. E., Stucky, B. D., Langer, M. M., Thissen, D., DeWitt, E. M., Lai, J. S., . . . DeWalt, D. A. (2010). An item response analysis of the pediatric PROMIS anxiety and depressive symptoms scales. Quality of Life Research, 19, 595–607. http://dx.doi.org/10.1007/s11136-010-9619-3

Mental and Emotional Health

Adaptive Coping with Stress

The KIDCOPE (child version) asks about 11 different types of coping strategies, using 1 or 2 questions per strategy for a total of 15 questions.

Four of the strategies asked about are approach-oriented and thus generally considered to be positive or adaptive (i.e., problem solving, positive emotion regulation, cognitive restructuring, seeking social support), whereas seven are escape-oriented and thus generally considered to be negative or maladaptive (i.e., distraction, negative emotion regulation, social withdrawal, wishful thinking, self-criticism, blaming others, resignation). Youth are asked to indicate both how often a particular coping strategy was used (i.e., frequency) and how much it helped (i.e., efficacy). Sample items include: “I tried to fix the problem by thinking of answers” (problem-solving) and “I just tried to forget it” (distraction). Frequency is assessed by asking youth whether they made use of each strategy (Yes or No); efficacy is assessed by asking youth to rate how helpful the strategy was (if used) on a 3-point scale: Not at all, A little, or A lot. Youth can be asked to self-identify a recent stressor to consider when responding to the questions on the measure or, alternatively, to consider a pre-identified type of stressor (e.g., a recent difficulty experienced in getting along with peers).

The adolescent version of the measure assesses the same coping strategies, but with fewer items (11); these items have a higher reading level and more refined response scales for ratings of frequency and efficacy.

Scale

KIDCOPE – Child version

What It Measures:

The frequency with which different coping strategies are used by a youth in response to stressors and the youth’s perceptions of their effectiveness.

Intended Age Range

7- to 12-year-olds; an adolescent version is also available for use with 13- to 17-year-olds.

Rationale

Numerous measures of coping exist for children and adolescents. Nearly all of these are quite lengthy and thus likely to be impractical for use by most mentoring programs. The KIDCOPE was selected based on its relative brevity, assessment of specific coping strategies, and promising evidence of reliability and validity. The child version of the measure was selected for primary consideration due to its greater overlap with the age range of youth served by most mentoring programs.

Cautions

Research indicates that the effectiveness of coping strategies used by youth may vary depending on the specific features of the stressor involved and the context in which it occurs. For example, although “approach” strategies generally have been found to be most helpful, some research suggests this may not necessarily be the case for urban youth who frequently confront violence and related types of stressors.

Special Administration Information

Careful consideration should be given to the stressor that youth are asked to consider in completing the measure. In many instances, it may be most appropriate to have each youth self-identify a stressor to ensure its relevance and personal importance. In these cases, the stressors that youth self-identify should be reviewed immediately after administration to determine whether any follow-up support or referral may be necessary; youth should be made aware of this possibility prior to completing the measure.

How to Score

There are multiple options available for scoring the KIDCOPE. One is to consider each of the 11 assessed types of coping strategies separately. This approach may be desirable when there is interest in understanding the potential effects of a program on youths’ use of particular approaches to coping (e.g., problem-solving); however, the resulting scores will be less reliable (dependable) because each is based on only 1 or 2 items. Alternatively, separate scores can be computed for positive and negative coping strategies by averaging across responses for the items that ask about each type of coping (as listed above). This approach provides distinct information about youths’ use of both “adaptive” and “maladaptive” coping strategies, while likely offering enhanced reliability of scores and thus sensitivity to potential program effects. Ratings of frequency and efficacy can be considered separately or in combination when scoring the KIDCOPE. The most straightforward approach is to score them separately: frequency can be scored as whether a given coping strategy was used (when strategies are considered individually) or as the total number of strategies used within a given category (e.g., positive), and efficacy can be computed as the average of the helpfulness ratings (0 for Not at all, 1 for A little, and 2 for A lot) across the strategies endorsed.
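The category-level frequency and efficacy scoring described above can be sketched as follows. This is a minimal illustration, not official scoring code; the item names and their grouping into the positive category are hypothetical, so consult the measure itself for the actual item-to-strategy mapping:

```python
# Hypothetical sketch of scoring frequency and efficacy separately for one
# KIDCOPE category (e.g., positive/adaptive strategies).

def score_category(responses, items):
    """Score one category of coping strategies.

    `responses` maps item ids to (used, helpfulness) pairs, where `used` is
    True/False and `helpfulness` is 0 (Not at all), 1 (A little), or 2 (A lot),
    or None if the strategy was not used.
    Returns (frequency, efficacy): the count of strategies endorsed and the
    average helpfulness rating across endorsed strategies (None if none used).
    """
    used = [item for item in items if responses[item][0]]
    frequency = len(used)
    ratings = [responses[item][1] for item in used if responses[item][1] is not None]
    efficacy = sum(ratings) / len(ratings) if ratings else None
    return frequency, efficacy

# Example with three hypothetical positive-strategy items:
positive_items = ["problem_solving", "cognitive_restructuring", "social_support"]
responses = {
    "problem_solving": (True, 2),          # used, helped "A lot"
    "cognitive_restructuring": (False, None),
    "social_support": (True, 1),           # used, helped "A little"
}
print(score_category(responses, positive_items))  # (2, 1.5)
```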

How to Interpret Findings

Higher scores reflect greater reported use and/or perceived helpfulness of the indicated coping strategy or type of coping (e.g., positive or negative).

Access and Permissions

This scale is available for non-commercial use with no charge and is made available here.

Alternatives

An overview of several widely used measures of coping for youth can be found here. These measures typically assess similar types of coping strategies to those asked about on the KIDCOPE, but do so with a greater number of items and thus are likely to offer enhanced reliability and sensitivity.

  • Cited Literature
    1. Spirito A., Stark L. J., & Williams, C. (1988). Development of a brief checklist to assess coping in pediatric patients. Journal of Pediatric Psychology, 13, 555–574. http://dx.doi.org/10.1093/jpepsy/13.4.555

Mental and Emotional Health

Hopeful Future Expectations

This scale consists of 4 items from the original 13-item HFE measure.

Youth respond to the prompt: “Think about how you see your future. What are your chances for the following?” The 4 items are: “having friends you can count on,” “being healthy,” “being involved in helping other people,” and “being safe.” Response options are Very low, Low, About 50/50, High, or Very high.

Scale

Abbreviated version of the Hopeful Future Expectations (HFE) Scale

What It Measures:

A youth’s hopefulness or positive expectations about his or her future.

Intended Age Range

10- to 18-year-olds.

Rationale

This measure was selected because of its brevity, evidence of reliability and validity, and assessment of expectations related to specific situations later in life.

Cautions

The abbreviated 4-item version of the HFE scale has been examined only in a single published study. However, results from that study are consistent with evidence of reliability and validity for the original 13-item version.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Very low) to 5 (Very high). The total score is the average of all 4 items.

How to Interpret Findings

Higher scores on the HFE items reflect higher personal expectations that positive future outcomes will occur in one’s life.

Access and Permissions

Both the 4-item abbreviated and full 13-item versions of the measure are available for non-commercial use with no charge and can be requested from the Institute for Applied Research in Youth Development at Tufts University. The 4-item abbreviated version is also available here.

Alternatives

Child Trends developed a measure of more general feelings of hope for youth ages 12 to 17. More information on this 3-item measure is available here.

  • Cited Literature
    1. Bowers, E. P., Geldhof, G. J., Schmid, K. L., Napolitano, C. M., Minor, K., & Lerner, J. V. (2012). Relationships with important nonparental adults and positive youth development: An examination of youth self-regulatory strengths as mediators. Research in Human Development, 9, 298–316. http://dx.doi.org/10.1080/15427609.2012.729911

Mental and Emotional Health

Self-Esteem

The Global Self-Worth Scale is a subscale of the Self-Esteem Questionnaire, which measures 5 contextual dimensions of self-esteem (e.g., peer relations, family, sports/athletics) in addition to global self-esteem.

Sample items include “I am happy with the way I can do most things” and “I sometimes think I am a failure (a loser).” Each item is rated on a 4-point scale: Strongly disagree, Disagree, Agree, or Strongly agree.

Scale

Self-Esteem Questionnaire – Global Self-Worth Scale

What It Measures:

A youth’s level of global self-esteem.

Intended Age Range

8- to 18-year-olds.

Rationale

The scale was selected because of its relative brevity, evidence of reliability and validity across cultures, and appropriateness for use with youth from a wide age range. Chinese, Italian, isiXhosa, and Afrikaans translations of the measure are also available.

Cautions

It is important to keep in mind that a youth’s self-reported self-esteem may be influenced both by limitations in introspective abilities and by a desire to present oneself in a positive light. Research indicates that additional aspects of self-esteem involving less controlled (i.e., automatic) and non-conscious self-evaluations can affect behaviors and well-being. These aspects of self-evaluation are best assessed by measures of implicit self-esteem, whereas the SEQ (like nearly all other survey-based measures) assesses explicit (i.e., conscious) self-esteem. Programs might also consider administering the context-specific scales of the SEQ when the aim of the program is to raise self-esteem in a particular domain.

Special Administration Information

None.

How to Score

Each item is scored on a 4-point scale from 1 (Strongly disagree) to 4 (Strongly agree). Three items are reverse coded: “I sometimes think I am a failure (a loser),” “I often feel ashamed of myself,” and “I wish I had more to be proud of.” The total score is the average of all 8 items (after reverse coding).
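The reverse-coding-then-averaging procedure described above can be sketched in a few lines. This is a minimal illustration under the assumption that responses are recorded in item order; the positions flagged as reverse coded below are hypothetical and would need to match the actual item order on the questionnaire:

```python
# Minimal scoring sketch: responses are coded 1 (Strongly disagree) to
# 4 (Strongly agree), stored in item order.

REVERSE_CODED = {1, 4, 6}  # hypothetical 0-based positions of the 3 reverse-coded items

def score_global_self_worth(responses):
    """Average of all 8 items after reverse coding (1<->4, 2<->3)."""
    if len(responses) != 8:
        raise ValueError("expected 8 item responses")
    adjusted = [5 - r if i in REVERSE_CODED else r  # reverse code: 1->4, 2->3, 3->2, 4->1
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

print(score_global_self_worth([4, 1, 4, 3, 1, 4, 2, 4]))  # 3.75
```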

How to Interpret Findings

Higher scores on the Global Self-Worth Scale reflect a greater level of self-reported overall self-esteem.

Access and Permissions

A copy of the Global Self-Worth Scale can be found here and the full version of the SEQ can be found here. The measure is available for non-commercial use with no charge.

Alternatives

The Rosenberg Self-Esteem Scale is a good alternative for those interested in measuring self-esteem among preadolescents, adolescents, and young adults. This 10-item measure has demonstrated reliability and validity in middle school, high school, and young adult samples and has been translated into a number of different languages. A list of items and a description of the measure can be found here.

  • Cited Literature
    1. DuBois, D. L., Felner, R. D., Brand, S., Phillips, R. S. C., & Lease, A. M. (1996). Early adolescent self-esteem: A developmental-ecological framework and assessment strategy. Journal of Research on Adolescence, 6, 541-578.

Mental and Emotional Health

Sense of Meaning and Purpose

This measure consists of 5 items that assess the extent to which a youth feels his or her life has meaning and a clear purpose.

Sample items include “I understand my life’s meaning” and “My life has a clear sense of purpose.” Response options are: Absolutely untrue, Mostly untrue, Somewhat untrue, Can’t say true or false, Somewhat true, Mostly true, or Absolutely true.

Scale

Meaning in Life Questionnaire (MLQ) – Presence of Meaning Scale

What It Measures:

A youth’s general sense of meaning in his or her life.

Intended Age Range

12-year-olds to young adults.

Rationale

The scale was selected because of its relative brevity and its evidence of reliability and validity across gender, age, and racial and national groups. The questionnaire has been translated into over two dozen languages.

Cautions

It is important to keep in mind a youth’s developmental level when administering and interpreting responses on this measure. Some children at the lower end of the intended age range may not understand the abstract concept of meaning or purpose in life.

Special Administration Information

None.

How to Score

Each item is scored on a 7-point scale from 1 (Absolutely untrue) to 7 (Absolutely true). One item (“My life has no clear purpose”) is reverse coded. The total score is the average of all 5 items (after reverse coding of the one item).

How to Interpret Findings

A higher score indicates a greater perceived sense of meaning in life. A scoring and interpretation guide for the Meaning in Life Questionnaire can be found here.

Access and Permissions

The measure is available for non-commercial use with no charge and can be found here. The original validation study for the scale is available here.

Alternatives

The Meaning and Purpose Scale – short form (4- and 8-item versions are available) is a good alternative to the Presence of Meaning Scale, especially for use with younger children. The measure is part of the PROMIS item bank and is designed for use with children as young as 8 years old. It is available here in a 4-item version or here in an 8-item version.

  • Cited Literature
    1. Steger, M. F., Frazier, P., Oishi, S., & Kaler, M. (2006). The Meaning in Life Questionnaire: Assessing the presence of and search for meaning in life. Journal of Counseling Psychology, 53, 80-93. doi: 10.1037/0022-0167.53.1.80

Mental and Emotional Health

Ethnic Identity

The MEIM-R consists of 6 items, 3 assessing exploration and 3 assessing commitment.

Exploration refers to efforts to learn more about one’s ethnic group and to participation in the cultural practices of this group. Commitment reflects positive affirmation of one’s group and a sense of commitment to the group. The items are preceded by an open-ended question that elicits the respondent’s spontaneous ethnic self-label (i.e., “In terms of ethnic group, I consider myself to be ___________.”) Sample exploration items include: “I have spent time trying to find out more about my ethnic group, such as its history, traditions, and customs” and “I have often talked to other people in order to learn more about my ethnic group.” Sample commitment items include: “I have a strong sense of belonging to my own ethnic group” and “I understand pretty well what my ethnic group membership means to me.” Youth respond on a 5-point scale: Strongly disagree, Disagree, Neutral, Agree, or Strongly agree. The measure concludes with a list of ethnic groups that the respondent can check to indicate both their own and their parents’ ethnic backgrounds.

Scale

Multigroup Ethnic Identity Measure – Revised (MEIM-R)

What It Measures:

A youth’s identification with his or her ethnic group.

Intended Age Range

This scale has been used with 13- to 18-year-olds and is most appropriate for adolescent populations.

Rationale

This measure was selected based on its brevity, evidence of reliability and validity across both male and female and ethnically diverse adolescents, and flexibility of use with youth regardless of the specific racial or ethnic group with which the youth identifies.

Cautions

Several items on the scale ask about the respondent’s ethnic group, group membership, or ethnic background. It is possible that some youth may not identify with a particular ethnic group or be familiar with these terms or concepts. However, an introductory prompt is provided that may be useful for respondents who have limited familiarity with these concepts.

Special Administration Information

Some youth may identify with more than one ethnic group. Options in these cases include instructing youth to think about the group with which they identify most strongly when responding or having youth respond to the questions separately for each ethnic group with which they identify.

How to Score

Each item is scored on a 5-point scale ranging from 1 (Strongly disagree) to 5 (Strongly agree). The measure yields subscale scores for Exploration and Commitment, each of which is computed as the average of the 3 items for that subscale. If the overall strength of ethnic identity or the degree to which ethnic identity is achieved is of primary interest, the two scales can be combined by taking the average of the 6 items.
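The subscale and overall scoring described above can be sketched as below, as an illustration only. The assumption that the first three responses are the Exploration items and the last three the Commitment items is hypothetical; check the item order on the actual measure before scoring.

```python
def score_meim_r(responses):
    """Score the MEIM-R from 6 responses, each 1-5
    (1 = Strongly disagree ... 5 = Strongly agree).

    Assumes -- for illustration only -- that responses[0:3] are the
    Exploration items and responses[3:6] the Commitment items.
    Returns (exploration, commitment, overall) as subscale means.
    """
    if len(responses) != 6 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 6 responses, each between 1 and 5")
    exploration = sum(responses[:3]) / 3
    commitment = sum(responses[3:]) / 3
    overall = sum(responses) / 6  # combined score, if of interest
    return exploration, commitment, overall
```

A youth who strongly agreed with all three Exploration items and strongly disagreed with all three Commitment items would score 5.0 on Exploration, 1.0 on Commitment, and 3.0 overall.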

How to Interpret Findings

Higher Exploration scores indicate greater engagement in learning about one’s group and participation in ethnic cultural practices. Higher Commitment scores indicate positive affirmation of one’s group and a sense of commitment to that group.

Access and Permissions

The measure is available for non-commercial use with no charge and can be obtained here.

Alternatives

None recommended.

  • Cited Literature
    1. Phinney, J. S., & Ong, A. D. (2007). Conceptualization and measurement of ethnic identity: Current status and future directions. Journal of Counseling Psychology, 54, 271-281. https://doi.org/10.1037/0022-0167.54.3.271

Social-Emotional Skills

Social-emotional (SE) skills include the knowledge, attitudes, and skills necessary for youth to recognize and control their emotions and behaviors; establish and maintain positive relationships; make responsible decisions and resolve challenging situations; and set and achieve positive goals.1,2 Sometimes labeled as 21st century skills,3 soft skills,4 non-cognitive skills,5 or character attributes,6 SE skills have been shown to be malleable and linked to academic, career, and life success.7 Based on this evidence, promoting these skills in young people has become a priority for both schools and afterschool settings.

Rhodes’ model of youth mentoring8 points to an important role for mentors in promoting SE skills. Mentoring relationships that are emotionally engaging (e.g., through trust, empathy, mutuality) are expected to produce social and emotional growth in young people that will improve their relationships with peers, parents, and other adults as well as their overall well-being and success in life. Indeed, meta-analyses have linked quality mentoring programs9,10 as well as quality afterschool programs11 to improvements in social and emotional development. These impacts extend across program types and across youth background and demographic characteristics. For example, cross-age peer mentoring programs have been found to contribute to improvements in mentees’ communication skills and social adjustment.12 In addition, youth with learning and behavioral difficulties have also shown social gains in areas of self-control and cooperation after engaging with a mentor.13 However, it is important to note that these effects are often small in magnitude and have not been consistent across all outcome measures relevant to SE skills or across all programs.

In deciding which SE skills from the broad array to include in this Toolkit, priority was given to those that have most consistently been linked to short- and long-term success in multiple domains, such as mental health, behavior, and academics. Emphasis was also given to SE skills that, based on available evidence, seem most likely to be responsive to mentoring. Caution is warranted, however, in relying solely on youth self-report data to assess SE skills. Because self-awareness is itself a key facet of SE skills, youth with poor self-awareness may not accurately report their other SE skills. In assessing SE skills as outcomes, it thus may be especially valuable to gather data from additional informants, such as mentors, parents, or teachers, as well as through objective assessments such as observations of behavior.

Self-Control

Self-control refers to one’s ability to regulate one’s emotions and behaviors.14 It may involve delaying gratification, controlling impulses, focusing attention, and following rules. Self-control is seen as foundational to the other SE skills. For example, successfully maintaining positive peer relationships or working constructively with others often requires the ability to control one’s emotions and act in socially appropriate ways. The ability to control one’s emotions and behaviors has been linked to success in all domains of life including educational, social, and vocational contexts. For example, seminal research by Mischel and colleagues has suggested that there are long-term effects of self-control on positive outcomes later in life; preschoolers’ ability to delay gratification was linked to academic, behavioral, and social success in adolescence as well as higher SAT scores, college completion rates, and income levels.15 Following a cohort of 1,000 children from birth to the age of 32, Moffitt and colleagues16 also linked greater childhood self-control to more favorable outcomes in several developmental domains, including physical health, personal finances, substance use, and criminal behavior. Surprisingly little research has directly investigated the effects of youth mentoring on self-control abilities. In a notable exception, an evaluation of Across Ages, an intergenerational mentoring program, found that program participants exhibited greater improvements in their self-reported self-control abilities.

However, these effects were not maintained following the end of program involvement.17 Another exception to the dearth of work examining the link between mentoring and self-control is a longitudinal study by Kogan and colleagues.18 Using data from a sample of rural African American youth, they found that positive natural mentoring relationships predicted greater self-control in these youth as rated by their parents, which, in turn, was linked to less anger, rule-breaking behavior, and aggression.18

Social Competence

Social competence is the set of abilities needed to be assertive and to create and maintain positive relationships.19 These skills are necessary to get along well with others and to work constructively with others within established social norms across multiple contexts. For example, a consistent and robust body of research indicates that social competence predicts career success in terms of employment, workplace performance, income, and entrepreneurial success.4 Mentors can serve as a key resource for promoting social competence. Rhodes’ model of mentoring8 describes this process, and empirical evidence supports it. DuBois and colleagues’ meta-analysis of 55 evaluations of youth mentoring programs9 suggested that, on average, participation in mentoring programs significantly improved the social competence of youth. In the landmark Public/Private Ventures evaluation of the Big Brothers Big Sisters program,20 for example, findings indicated that mentored youth improved in their relationships with peers and parents as compared to non-mentored youth.

Problem-Solving Ability

Problem-solving ability involves the capacity to identify a problem, collect information from multiple sources to consider options, and select a reasonable solution to that problem. Effective problem-solving has been conceptualized as involving “planning, flexibility, and resourcefulness.”21 Resilient children and adolescents growing up in adverse environments are often found to have strong problem-solving abilities.22,23 Theoretically, mentors can serve as role models for positive problem-solving by modeling a calm, thoughtful, and flexible approach to dealing with problems. In addition, mentors can serve as resources for advice to young people as they work through problems, such as in their relationships with family members, peers, and teachers. However, there has been very little empirical evidence examining the role of mentoring in affecting problem-solving ability (or similar higher-order thinking constructs such as decision making and critical thinking).

Skills for Setting and Pursuing Goals

The ability to set appropriate goals and effectively pursue them is widely understood to be central to healthy development.24,25 In line with this view, results from the Lerner and Lerner 4-H Study of PYD26 have consistently linked goal-directed skills to positive youth development outcomes. Helping youth to develop goal-setting and goal-pursuit skills is a key aim in the organization and structuring of many mentoring programs.27 Mentors may prove helpful in building these skills in youth in several ways.28 Mentors serve as teachers, role models, and advocates for youth as they provide opportunities for youth to practice these skills, examples of success and failure in pursuing goals, and access to social networks that align with youth goals. Only limited research has addressed these possibilities. In one recent study using data from 415 mentor-mentee dyads from mentoring programs around the United States, Bowers and colleagues29 found that mentor-mentee relationship quality predicted growth in youth goal-directed skills. In another recent study involving Big Brothers Big Sisters community-based mentoring programs,30 youth who were randomly assigned to receive support with development of skills for goal-setting and pursuit (along with other facets of thriving) did not show any greater improvement in this area than youth assigned to receive mentoring as usual. However, for a subgroup of these youth who reported positive exposure to the activities designed to build goal-directed skills, the activities did appear beneficial for promoting thriving and, in turn, reducing problem behavior. Further research will be needed to better clarify the conditions under which mentoring is most likely to help youth cultivate skills for setting and working toward goals effectively.

Perseverance

Perseverance refers to the ability to pursue one’s tasks to completion. It has attracted considerable interest from practitioners, researchers, and policy-makers, particularly in relation to its potential role in facilitating academic and career success. Youth self-reports of greater perseverance have been positively linked to measures of their GPA, healthy habits, and (as rated by teachers) academic performance, cooperation, effort, and organization, and have been negatively linked to youth depression, anxiety, and aggression.31 A systematic review of program and learning models suggests that perseverance can be taught and developed.32

Most studies on the role of adults in promoting youth perseverance have been conducted in school settings. Reviews of this work indicate that youth are more likely to persist when they view adults as showing they care about them, having high expectations for their success, and holding them to high standards.33 In an evaluation of five after-school centers from the San Francisco Beacon Initiative,34 young people who participated in the Beacon centers for a year or more were 33 percent less likely to show a decline in self-reported perseverance (identified as “self-efficacy” in the study) over an 18-month period than youth who either did not participate in the Beacon center programming or who participated for less time. Increased participation in Beacon centers was also linked to increased levels of non-family adult support, which, in turn, significantly predicted positive changes in perseverance. A quasi-experimental evaluation of OneGoal, a college preparation program with the goal of college graduation and emphasis on social support, found that youth in the OneGoal program had higher rates of college enrollment and retention than comparison-group youth, and that growth in the SE skills of persistence and self-control was linked to college enrollment and retention for OneGoal participants.35

A construct related to perseverance is grit:36 sustained interest in, and perseverance of effort toward, a long-term goal over a period of years. A meta-analysis of findings from 88 independent study samples indicated that the perseverance of effort dimension of grit was much more strongly related to measures of academic performance than the consistency of interest dimension or overall grit scores,37 providing additional evidence for an important contribution of perseverance to youth success. It should be noted, however, that the potential for mentoring programs to promote perseverance has not been rigorously investigated.

Self-Advocacy

Self-advocacy is the ability to communicate one’s needs and engage in actions that mobilize the identified supports required to achieve those needs.38 Central building blocks of self-advocacy are knowledge of self (i.e., an understanding of one’s own strengths and growing points), knowledge of one’s rights and the steps needed to advocate for change, the ability to communicate with others and locate needed resources, and the capacity to exhibit relevant leadership skills such as working with and motivating others.39 The development of self-advocacy skills is correlated with improvement in academic performance, stronger social-emotional support systems, and greater access to health care services.40,41,42

Mentors may help their mentees acquire and strengthen their self-advocacy skills through several routes, for example, by discussing strategies youth could use to further their goals or by modeling these behaviors for youth (e.g., connecting their mentees to information or people who could help them43). Help-seeking behavior is one aspect of self-advocacy entailing the capacity to seek out needed resources and supports.44 This skill is important in healthy development, as it can foster resilience and coping. High-quality mentoring relationships have been significantly tied to improvements in youths’ help-seeking behavior.43

Career Exploration

During adolescence, a youth’s sense of personal future develops, and identity exploration and commitment become important developmental tasks.38 Educational and career achievements are a typical focus of adolescent thoughts of the future.39 How youth approach career exploration has been linked to variation in indicators of positive youth development40 including school engagement41 and academic success.42 Career and academic processes, in turn, have been linked to youth perseverance and success in the subsequent school-to-work transition.43

Theoretically, mentoring relationships can help place young people on a path to career success. Social Cognitive Career Theory (SCCT)44 posits that career development is a lifelong process that can be facilitated in childhood and adolescence through career exploration and support, modeling, resources, and feedback from others including mentors, teachers, and counselors. For example, support from parents, close friends, and non-parental adults such as extended family members and teachers has been linked to career development in urban youth,45 and natural mentoring relationships have been linked to reported use of planful strategies to pursue long-term career goals among rural African American youth and emerging adults46 as well as indices of the career development of pregnant and parenting African-American teenagers.47 Natural mentoring relationships in adolescence have also been associated positively with work hours per week in the early twenties48,49 and with intrinsic job rewards (creativity, authority, and autonomy) in the early thirties.50 Analyses based on the same nationally representative sample further found that the reported presence of a natural mentor was linked to greater annual earnings during adulthood for males without fathers and especially so for African-American males without fathers.51 Similar possible benefits for annual earnings for young men, especially those at risk for high school dropout, were identified in an evaluation of the Career Academy mentoring program.52, 53 Males enrolled in the Career Academy earned more than those in the non-Academy control group over the 4- and 8-year follow-up periods via increased wages, hours worked, and employment stability. 
A recent evaluation of the iMentor program also found evidence of an effect of program participation on career planning.54 Evidence regarding the ability of programs geared toward younger youth and without a specific focus on career development to promote career exploration, however, is lacking.

Youth-Centered Outcomes

As previously indicated, helping youth set and pursue their goals is a central task in many mentoring programs.27 However, with youth setting goals in diverse domains, programs may find it difficult to assess youth progress toward achieving these goals. Measures of universal outcomes provide programs with a way to track youth progress toward their individualized goals using a standardized questionnaire format.55 Reviews suggest that these types of outcome measures (often referred to as idiographic in the research literature and referred to here as youth-centered) have the potential to capture change better than standardized measures often used in practice.56 For example, in a sample of 137 youth receiving mental health services, Edbrooke-Childs and colleagues reported that changes in progress toward goals were linked to changes in clinician-reported functioning and parent-reported satisfaction with care.57 The process of setting and monitoring progress toward goals, which youth are encouraged to undertake collaboratively with relevant adults such as therapists or, in the case of mentoring programs, program staff and mentors,58,59 has itself been linked to a range of behavioral outcomes60,61 and psychological well-being and distress.62 In clinical research, for example, both practitioners and youth have reported that the process of reviewing and tracking goals motivated youth, empowered them to take ownership of their progress, and improved communication between the practitioner and youth and parents.63 Similarly, in a study of 176 mentors trained to use Goal Attainment Scaling, a youth-centered outcomes measure, Balcazar and colleagues found that this process provided mentors with a helpful framework for working with youth, including clear direction for how to focus their support.64 However, it is important to note that ratings on youth-centered outcomes are subjective and vulnerable to social desirability bias (i.e., a motivational tendency or investment on the part of raters, such as youth or mentors, to report positive progress).55 Several strategies may be helpful for mitigating the risk of bias in youth-centered outcomes. These include 1) developing a system and structure in which goals are discussed and reviewed in a consistent manner; 2) collecting goal progress data from multiple sources; and 3) encouraging mentors and others involved with gauging goal progress (e.g., staff) to temper unrealistic expectations of goal attainment and to consider how youth may be affected in the long run if assessments of progress are biased to be more favorable than is warranted.55 To summarize, the use of youth-centered measures has been uncommon to date in the mentoring literature and carries with it a number of important considerations. At the same time, these tools hold significant promise for enhancing sensitivity to detecting changes in meaningful outcomes among youth receiving mentoring as well as for enhancing beneficial processes in the mentoring relationship itself.

  • Cited Literature

    Cited Literature

    1. Collaborative for Academic, Social, and Emotional Learning. (2005). Safe and sound: An educational leader’s guide to evidence-based social and emotional learning programs— Illinois edition. Chicago, IL: Author. Retrieved from https://bit.ly/361Xxia
    2. Elias, M. J., Zins, J. E., Weissberg, R. P., Frey, K. S., Greenberg, M. T., Haynes, … Shriver, T.P. (1997). Promoting social and emotional learning: Guidelines for educators. Alexandria, VA: Association for Supervision and Curriculum Development.
    3. Partnership for 21st Century Skills. (2014). Framework for 21st Century skills. Washington, DC: Author. Retrieved from http://www.p21.org/about-us/p21-framework
    4. Lippman, L., Ryberg, R., Carney, R., & Moore, K. A. (2015). Key ‘soft skills’ that foster youth workforce success: Toward a consensus across fields. Retrieved from http://www.childtrends.org/wp-content/uploads/2015/06/2015-24WFCSoftSkills1.pdf
    5. Heckman, J. J., Stixrud, J., & Urzua, S. (2006). The effects of cognitive and noncognitive abilities on labor market outcomes and social behavior (No. w12006). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w12006
    6. Ji, P., Flay, B. R., & DuBois, D. L. (2013). Social-Emotional and Character Development Scale: Development and initial validation with urban elementary school students. Journal of Research in Character Education, 9, 121–147. Retrieved from http://bit.ly/2boKJHM
    7. Gutman, L. M., & Schoon, I. (2016) A synthesis of causal evidence linking non-cognitive skills to later outcomes for children and adolescents. In M. S. Khine & S. Areepattamannil (Eds.), Non-cognitive skills and factors in educational attainment (pp.171–198). Rotterdam, The Netherlands: Sense Publishers.
    8. Rhodes, J. E. (2005). A model of youth mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 30–43). Thousand Oaks, CA: SAGE.
    9. DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30, 157–197. http://dx.doi.org/10.1023/A:1014628810714
    10. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    11. Durlak, J. A., Weissberg, R. P., & Pachan, M. (2010). A meta‐analysis of after‐school programs that seek to promote personal and social skills in children and adolescents. American Journal of Community Psychology, 45, 294–309. http://dx.doi.org/10.1007/s10464-010-9300-6
    12. Karcher, M. J. (2005). The effects of school-based developmental mentoring and mentors’ attendance on mentees’ self-esteem, behavior, and connectedness. Psychology in the Schools, 42, 65–77. http://dx.doi.org/10.1002/pits.20025
    13. Muscott, H. S., & O’Brien, S. T. (1999). Teaching character education to students with behavioral and learning disabilities through mentoring relationships. Education and Treatment of Children, 22, 373–90. http://www.jstor.org/stable/42899580
    14. Baumeister, R. F., Vohs, K. D., & Tice, D. M. (2007). The strength model of self-control. Current Directions in Psychological Science, 16, 351–355. http://dx.doi.org/10.1111/j.1467-8721.2007.00534.x
    15. Shoda, Y. Mischel, W., & Peake, P. K. (1990). Predicting adolescent cognitive and self-regulatory competencies from preschool delay of gratification: Identifying diagnostic conditions. Developmental Psychology, 26, 978–986. http://dx.doi.org/10.1037/0012-1649.26.6.978
    16. Moffitt, T. E., Arseneault, L., Belsky, D., Dickson, N., Hancox, R. J., Harrington, H., . . . Caspi, A. (2011). A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences of the United States of America, 108, 2693–2698. http://www.pnas.org/content/108/7/2693.full
    17. Aseltine, R. H., Dupre, M., & Lamlein, P. (2000). Mentoring as a drug prevention strategy: An evaluation of Across Ages. Adolescent and Family Health, 1, 11–20.
    18. Kogan, S. M., Brody, G. H., & Chen, Y. F. (2011). Natural mentoring processes deter externalizing problems among rural African American emerging adults: A prospective analysis. American Journal of Community Psychology, 48, 272–283. http://dx.doi.org/10.1007/s10464-011-9425-2
    19. Muris, P. (2001). A brief questionnaire for measuring self-efficacy in youths. Journal of Psychopathology and Behavioral Assessment, 23, 145–149. http://dx.doi.org/10.1023/A:1010961119608
    20. Tierney, J. P., Grossman, J. B., & Resch, N. L. (1995). Making a difference: An impact study of Big Brothers/Big Sisters. Philadelphia: Public/Private Ventures. Retrieved from http://bit.ly/2bUT3gf
    21. Hanson, T. L., & Kim, J. O. (2007). Measuring resilience and youth development: The psychometric properties of the Healthy Kids Survey. (Issues & Answers Report, REL 2007–No. 034). Washington, DC: U.S. Department of Education. Retrieved from http://www.ies.ed.gov/ncee/edlabs/regions/west/pdf/REL_2007034.pdf
    22. Benard, B. (1991). Protective factors in the family, school, and community. Portland, OR: Western Center for Drug-Free Schools and Communities. https://eric.ed.gov/?id=ED335781
    23. Dumont, M., & Provost, M. A. (1999). Resilience in adolescents: Protective role of social support, coping strategies, self-esteem, and social activities on experience of stress and depression. Journal of Youth and Adolescence, 28, 343–363. http://dx.doi.org/10.1023/A:1021637011732
    24. Bowers, E. P., Geldhof, G. J., Chase, P., Gestsdottir, S., Urban, J. B., & Lerner, R. M. (2015). Self-regulation during adolescence: Variations associated with individual ↔ context relations. In J. Berry, & H. Keller (Eds.) and J. D. White (Ed. in Chief), International Encyclopedia of Social and Behavioral Sciences (Vol. 21, 2nd ed., pp. 547–552). Oxford, England: Elsevier.
    25. Gestsdóttir, S., & Lerner, R. M. (2007). Intentional self-regulation and positive youth development in early adolescence: Findings from the 4-H Study of Positive Youth Development. Developmental Psychology, 43, 508–521. http://dx.doi.org/10.1037/0012-1649.43.2.508
    26. Lerner, R. M., Lerner, J. V., Almerigi, J., Theokas, C., Phelps, E., Gestsdottir, S. . . . von Eye, A. (2005). Positive youth development, participation in community youth development programs, and community contributions of fifth grade adolescents: Findings from the first wave of the 4-H Study of Positive Youth Development. Journal of Early Adolescence, 25, 17–71. http://dx.doi.org/10.1177/0272431604272461
    27. Balcazar, F. E., & Keys, C. B. (2014). Goals in mentoring relationships. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (2nd ed., pp. 83–98). Thousand Oaks, CA: SAGE.
    28. Rhodes, J. E., Spencer, R., Keller, T. E., Liang, B., & Noam, G. (2006). A model for the influence of mentoring relationships on youth development. Journal of Community Psychology, 34, 691–707. http://dx.doi.org/10.1002/jcop.20124
    29. Bowers, E. P., Wang, J., Tirrell, J. M., & Lerner, R. M. (2016). A cross-lagged model of the development of mentor-mentee relationships and intentional self regulation in adolescence. Journal of Community Psychology, 44, 118–138. http://dx.doi.org/10.1002/jcop.21746
    30. DuBois, D. L., & Keller, T. E. (in press). Investigation of the integration of supports for youth thriving into a community-based mentoring program. Child Development.  
    31. Kern, M. L., Benson, L., Steinberg, E.A., &, Steinberg, L. (2016). The EPOCH measure of adolescent well-being. Psychological Assessment, 28, 586–597. http://dx.doi.org/10.1037/pas0000201
    32. Shechtman, N., DeBarger, A. H., Dornsife, C., Rosier, S., & Yarnall, L. (2013). Promoting grit, tenacity, and perseverance: Critical factors for success in the 21st century. U.S. Department of Education, Office of Educational Technology. Retrieved from https://studentsatthecenterhub.org/resource/promoting-grit-tenacity-and-perseverance-critical-factors-for-success-in-the-21st-century/
    33. National Research Council and Institute of Medicine. (2003). Engaging schools: Fostering high school students’ motivation to learn. Washington, DC: The National Academies Press. Retrieved from https://www.nap.edu/download/10421
    34. Walker, K. E., & Arbreton, A. J. A. (2004). After-school pursuits: An examination of outcomes in the San Francisco Beacon Initiative. Philadelphia: Public/Private Ventures. Retrieved from http://admin.issuelab.org/permalink/download/1982
    35. Kautz T. D., Zanoni W. (2014). Measuring and fostering non-cognitive skills in adolescence: Evidence from Chicago Public Schools and the OneGoal Program. Unpublished manuscript, Department of Economics, University of Chicago, Chicago, IL. Retrieved from http://www.issuelab.org/resources/26888/26888.pdf
    36. Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92, 1087–1101. http://dx.doi.org/10.1037/0022-3514.92.6.1087
    37. Credé, M., Tynan, M. C., & Harms, P. D. (2016, June 16). Much ado about Grit: A meta-analytic synthesis of the Grit literature. Journal of Personality and Social Psychology. Advance online publication. http://dx.doi.org/10.1037/pspp0000102
    38. Stodden, R. A., Conway, M. A., & Chang, K. B. T. (2003). Findings from the study of transition, technology and postsecondary supports for youth with disabilities: Implications for secondary school educators. Journal of Special Education Technology, 18(4) 29-44. https://doi.org/10.1177/016264340301800403

     

    1. Test, D., Fowler, C., Wood, W., Brewer, D., & Eddy, S. (2005). A conceptual framework of self-advocacy for students with disabilities. Remedial and Special Education, 26(1), 43-54. https://doi.org/10.1177/07419325050260010601

     

    1. Adams, K. S., & Proctor, B. E. (2010). Adaptation to college for students with and without disabilities: Group differences and predictors. The Journal of Postsecondary Education and Disability, 22(3), 166-184.

     

    1. Maslow, G., Adams, C., Willis, M., Neukirch, J., Herts, K., Froehlich, W., Calleson, D., & Rickerby, M. (2013). An evaluation of a positive youth development program for adolescents with chronic illness. Journal of Adolescent Health, 52(2), 179-185. https://doi.org/10.1016/j.jadohealth.2012.06.020

     

    1. Murray, C., Goldstein, D. E., Nourse, S., & Edgar, E. (2000). The postsecondary school attendance and completion rates of high school graduates with learning disabilities. Learning Disabilities Research and Practice, 15(3), 119-127. https://doi.org/10.1207/SLDRP1503_1

     

    1. Austin, L. J., Parnes, M. F., Jarjoura, G. R., Keller, T. E., Herrera, C., Tanyu, M., & Schwartz, S. E. (2020). Connecting youth: The role of mentoring approach. Journal of Youth and Adolescence, 49(12), 2409-2428. https://doi.org/10.1007/s10964-020-01320-z

     

    1. Skinner, E. A., & Zimmer-Gembeck, M. J. (2007). The development of coping. Annual Review of Psychology, 58(1), 119–144. https://doi.org/10.1146/annurev.psych.58.110405.085705
    2. Brandtstädter, J. (1998). Action perspectives on human development. In W. Damon & R. M. Lerner, (Eds.), Handbook of child psychology: Theoretical models of human development (pp. 807-863). New York: Wiley.
    3. Nurmi, J. E. (1991). How do adolescents see their future? A review of the development of future orientation and planning. Developmental Review, 11, 1–59. http://dx.doi.org/10.1016/0273-2297(91)90002-6
    4. Skorikov, V. B., & Vondracek, F. W. (2011). Occupational identity. In S. J. Schwartz, K. Luyckx, & V. L. Vignoles (Eds.), Handbook of identity theory and research (pp. 693–714). New York: Springer.
    5. Kenny, M. E., Blustein, D. L., Haas, R. F., Jackson, J., & Perry, J. C. (2006). Setting the stage: Career development and the student engagement process. Journal of Counseling Psychology, 53, 272–279. http://dx.doi.org/10.1037/0022-0167.53.2.272
    6. Oliveira, Í. M., Taveira, M. C., & Porfeli, E. J. (2017). Career preparedness and school achievement of Portuguese children: Longitudinal trend articulations. Frontiers in Psychology, 8, 618–630. http://dx.doi.org/10.3389/fpsyg.2017.00618
    7. Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (2001). Self-efficacy beliefs as shapers of children’s aspirations and career trajectories. Child Development, 72, 187–206. http://dx.doi.org/10.1111/1467-8624.00273
    8. Lent, R. W., Hackett, G., & Brown, S. D. (1999). A social cognitive view of school-to-work transition. Career Development Quarterly, 47, 297–311. http://dx.doi.org/10.1002/j.2161-0045.1999.tb00739.x
    9. Kenny, M. E., Blustein, D. L., Chaves, A., Grossman, J. M., & Gallagher, L. A. (2003). The role of perceived barriers and relational support in the educational and vocational lives of urban high school students. Journal of Counseling Psychology, 50, 142–155. http://dx.doi.org/10.1037/0022-0167.50.2.142
    10. Kogan, S. M., Brody, G. H., & Chen, Y. F. (2011). Natural mentoring processes deter externalizing problems among rural African American emerging adults: A prospective analysis. American Journal of Community Psychology, 48(3–4), 272–283. http://dx.doi.org/10.1007/s10464-011-9425-2
    11. Klaw, E. L., & Rhodes, J. E. (1995). Mentor relationships and the career development of pregnant and parenting African-American teenagers. Psychology of Women Quarterly, 19, 551–562. http://dx.doi.org/10.1111/j.1471-6402.1995.tb00092.x
    12. DuBois, D. L., & Silverthorn, N. (2005). Natural mentoring relationships and adolescent health: Evidence from a national study. American Journal of Public Health, 95(3), 518–524. http://dx.doi.org/10.2105/AJPH.2003.031476
    13. McDonald, S., Erickson, L. D., Johnson, M. K., & Elder, G. H., Jr. (2007). Informal mentoring and young adult employment. Social Science Research, 36(4), 1328–1347. https://doi.org/10.1016/j.ssresearch.2007.01.008
    14. McDonald, S., & Lambert, J. (2014). The long arm of mentoring: A counterfactual analysis of natural youth mentoring and employment outcomes in early careers. American Journal of Community Psychology, 54, 262–273. http://dx.doi.org/10.1007/s10464-014-9670-2
    15. Timpe, Z. C., & Lunkenheimer, E. (2015). The long-term economic benefits of natural mentoring relationships for youth. American Journal of Community Psychology, 56, 12–24. https://pubmed.ncbi.nlm.nih.gov/26148978/
    16. Kemple, J. J., & Scott–Clayton, J. (2004). Career Academies: Impacts on labor market outcomes and educational attainment. San Francisco, Calif.: Manpower Demonstration Research Corporation. Retrieved from http://www.mdrc.org/sites/default/files/full_49.pdf
    17. Kemple, J. J., & Willner, C. J. (2008). Career Academies: Long-term impacts on labor market outcomes, educational attainment, and transitions to adulthood. San Francisco, CA: Manpower Demonstration Research Corporation. Retrieved from http://www.mdrc.org/publications/482/full.pdf
    18. Merrill, L., Siman, N., Wulach, S., & Kang, D. (2015). Bringing together mentoring, technology, and whole-school reform. New York, NY: The Research Alliance for New York City Schools. Retrieved from http://steinhardt.nyu.edu/research_alliance/publications/imentor_first_look
    19. Lloyd, C. E., Duncan, C., & Cooper, M. (2019). Goal measures for psychotherapy: A systematic review of self‐report, idiographic instruments. Clinical Psychology: Science and Practice, e12281. https://doi.org/10.1111/cpsp.12281
    20. Edbrooke‐Childs, J., Jacob, J., Law, D., Deighton, J., & Wolpert, M. (2015). Interpreting standardized and idiographic outcome measures in CAMHS: What does change mean and how does it relate to functioning and experience? Child and Adolescent Mental Health, 20, 142–148. https://doi.org/10.1111/camh.12107
    21. Edbrooke‐Childs, J., Jacob, J., Law, D., Deighton, J., & Wolpert, M. (2015). Interpreting standardized and idiographic outcome measures in CAMHS: What does change mean and how does it relate to functioning and experience? Child and Adolescent Mental Health, 20, 142–148. https://doi.org/10.1111/camh.12107
    22. Law, D., & Jacob, J. (2015). Goals and Goal Based Outcomes (GBOs): Some useful information (3rd Ed.). London: CAMHS Press. Retrieved from https://www.annafreud.org/media/3189/goals-booklet-3rd-edition220715.pdf
    23. Karcher, M. J., Herrera, C., & Hansen, K. (2010). “I dunno, what do you wanna do?” Testing a framework to guide mentor training and activity selection. New Directions for Youth Development, 126, 51-69.
    24. Epton, T., Currie, S., & Armitage, C. J. (2017). Unique effects of setting goals on behavior change: Systematic review and meta‐analysis. Journal of Consulting and Clinical Psychology, 85, 1182–1198. https://doi.org/10.1037/ccp0000260
    25. Harkin, B., Webb, T. L., Chang, B. P. I., Prestwich, A., Conner, M., Kellar, I., . . . Sheeran, P. (2016). Does monitoring goal progress promote goal attainment? A meta‐analysis of the experimental evidence. Psychological Bulletin, 142, 198-229. https://doi.org/10.1037/bul0000025
    26. Lloyd, C. E., Duncan, C., & Cooper, M. (2019). Goal measures for psychotherapy: A systematic review of self‐report, idiographic instruments. Clinical Psychology: Science and Practice, e12281. https://doi.org/10.1111/cpsp.12281
    27. Pender, F., Tinwell, C., Marsh, E., & Cowell, V. (2013). Evaluating the use of goal-based outcomes as a single patient rated outcome measure across CWP CAMHS: A pilot study. Child and Family Clinical Psychology Review, 1, 29–40.
    28. Balcazar, F. E., Davies, G. L., Viggers, D., & Tranter, D. (2006). Goal Attainment Scaling as an effective strategy to assess the outcomes of mentoring programs for troubled youth. International Journal of School Disaffection, 4(1), 43-52.

Social-Emotional Skills

Self-Control

This measure consists of 4 items. Sample items include: “I wait my turn in line patiently,” “I keep my temper when I have an argument with other kids,” and “I follow the rules even when nobody is watching.” Each item is rated on a 4-point scale: NO!, no, yes, and YES!

Scale

Social-Emotional and Character Development Scale (SECDS) – Self-control subscale

What It Measures

A youth’s ability to control and regulate his or her behavior and emotions in positive ways.

Intended Age Range

8- to 13-year-olds.

Rationale

This scale, which is one of several in the SECDS (a comprehensive measure of social-emotional and character development), was selected based on its brevity as well as its evidence of reliability, validity, and sensitivity to the effects of interventions.

Cautions

This measure was developed for use with elementary-school-age children; please consider in particular that the content of the items that comprise the scale may not be appropriate for older youth. For example, with older youth, waiting one’s turn in line may not be a salient situation for exercising self-control.

Special Administration Information

None.

How to Score

Each item is scored from 1 (NO!) to 4 (YES!). The total score is computed by averaging across all items.
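
As a concrete illustration, the scoring rule above can be sketched in a few lines of Python. The response labels and their 1–4 values come from the description above; the function name and the example responses are illustrative only.

```python
# Map each SECDS Self-Control response label to its score, per the rule above.
RESPONSE_VALUES = {"NO!": 1, "no": 2, "yes": 3, "YES!": 4}

def score_self_control(responses):
    """Return the mean of the 1-4 item scores for one youth's responses."""
    values = [RESPONSE_VALUES[r] for r in responses]
    return sum(values) / len(values)

# Example: four item responses from one youth (hypothetical data).
print(score_self_control(["YES!", "yes", "no", "yes"]))  # -> 3.0
```

The same average-of-items pattern applies to several of the other scales in this Toolkit that are scored by averaging.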

How to Interpret Findings

Higher scores on this measure reflect greater self-reported ability to control one’s behaviors and emotions.

Access and Permissions

The measure is available for non-commercial use with no charge and is made available here.

Alternatives

Information about several more in-depth measures of self-control as well as scales that may be more appropriate for older youth can be found here.

  • Cited Literature
    1. Ji, P., DuBois, D. L., & Flay, B. R. (2013). Social-Emotional and Character Development Scale: Development and initial validation with urban elementary school students. Journal of Research in Character Education, 9, 121–147. (Available here.)

Social-Emotional Skills

Social Competence

This measure consists of 7 items. The scale is adapted from the Social Self-Efficacy subscale on the Self-Efficacy Questionnaire for Children (SEQ-C).

Sample items include: “I can make friends with other kids” and “I can talk with people I don’t know.” Youth respond on a 4-point scale: Not at all true, A little true, Mostly true, or Really true.

Scale

Social Competencies Scale of the Youth Outcome Measures Online Toolbox

What It Measures

A youth’s perceived ability to be assertive and to create and maintain positive peer relationships.

Intended Age Range

8- to 18-year-olds.

Rationale

This measure was selected based on its relative brevity, promising evidence of reliability and validity, and support for its use with diverse populations of youth and types of afterschool programs.

Cautions

Although intended for use with both children and adolescents, there is only limited research addressing the scale’s use with older, high-school-aged youth.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not at all true) to 4 (Really true). The total score is the average across all items.

How to Interpret Findings

Higher scores reflect greater perceptions of assertiveness and capabilities to make and maintain positive relationships with peers.

Access and Permissions

The scale is available for non-commercial use with no charge. To receive a list of the survey items, contact the tool developers at this site or by e-mailing afterschool@uci.edu. A ready-to-use format is also available here.

Alternatives

Child Trends has developed a 9-item measure of social competence using a nationally representative sample of 12- to 17-year-old youth. This measure may be useful to consider for programs that serve primarily older youth. More information on the measure is available here.

  • Cited Literature
    1. Muris, P. (2001). A brief questionnaire for measuring self-efficacy in youths. Journal of Psychopathology and Behavioral Assessment, 23, 145–149. http://dx.doi.org/10.1023/A:1010961119608

Social-Emotional Skills

Problem-Solving Ability

This measure consists of 4 items. Sample items include: “When you have a problem to solve, one of the first things you do is get as many facts about the problem as possible” and “After carrying out a solution to a problem, you usually try to analyze what went right and what went wrong.” Each item is rated on a 5-point scale: Strongly agree, Agree, Neither agree nor disagree, Disagree, or Strongly disagree.

Scale

National Longitudinal Study of Adolescent Health (Add Health) – Problem-solving items

What It Measures

A youth’s reported use of strategies to identify, select, and evaluate solutions to problems.

Intended Age Range

10- to 21-year-olds.

Rationale

This measure was selected based on its relative brevity, promising evidence of reliability and validity, and support for its use with diverse populations of youth.

Cautions

The measure has been used with a national sample of youth ages 10 and older. However, the language of the items may prove difficult for some younger youth and for youth who require reading support.

Special Administration Information

None.

How to Score

Each item is scored from 5 (Strongly agree) to 1 (Strongly disagree), so that higher ratings reflect greater agreement with each statement. The total score is computed by summing across the items.

How to Interpret Findings

A higher score reflects a youth’s greater perceived ability to solve problems.

Access and Permissions

The scale is available for non-commercial use with no charge and is provided here.

Alternatives

The Social Problem-Solving Inventory-Revised provides a more in-depth assessment of problem-solving abilities, with separate scales for several different facets of problem-solving (e.g., Positive Problem Orientation, Generation of Alternative Solutions). More information about this measure, which has been used in research with adolescents, and how to purchase it is available here. The American Camping Association has also developed several measures of problem-solving confidence for youth from ages 6 to 17 years old. More information on these measures and how to purchase them is available here.

  • Cited Literature
    1. Harris, K. M., Halpern, C. T., Whitsel, E., Hussey, J., Tabor, J., Entzel, P., & Udry, J. R. (2009). The National Longitudinal Study of Adolescent to Adult Health: Research design. Retrieved from http://www.cpc.unc.edu/projects/addhealth/design
    2. Brown, J. S., Meadows, S. O., & Elder, G. J. (2007). Race-ethnic inequality and psychological distress: Depressive symptoms from adolescence to young adulthood. Developmental Psychology, 43, 1295-1311. http://dx.doi.org/10.1037/0012-1649.43.6.1295

Social-Emotional Skills

Skills for Setting and Pursuing Goals

This measure consists of 9 items. Youth are asked “How much do each of these statements describe you?” Sample items include: “When I decide upon a goal, I stick to it,” “I always pursue goals one after the other,” and “I keep trying as many different possibilities as are necessary to succeed at my goal.” Each item is rated on a 5-point scale from Not at all like me to Very much like me.

Scale

Global scale of Selection, Optimization, and Compensation (SOC)

What It Measures

Youth goal-directed behaviors.

Intended Age Range

10- to 18-year-olds.

Rationale

This scale was selected based on its brevity, ease of administration, appropriateness for use with school-aged youth, and evidence of good reliability and validity.

Cautions

This measure is based on the selection, optimization, and compensation (SOC) model. The original scale used a forced-choice response format. Research suggests that forced-choice and Likert-type formats of this measure produce similar results; however, additional validation is needed because this set of items has not yet been examined with the Likert-type response format, which is recommended here for its greater ease of use.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not at all like me) to 5 (Very much like me). The total score is the average across all items.

How to Interpret Findings

A higher score reflects a greater level of goal-directed behavior.

Access and Permissions

This scale is available for non-commercial use with no charge and upon request from the Institute for Applied Research in Youth Development at Tufts University. A ready-to-use format is also provided here.

Alternatives

The Goal Orientation—Adolescent scale is a 7-item self-report scale that measures an adolescent’s motivation and ability to make viable plans and take action toward achieving desired goals. This measure is a good alternative for programs interested in assessing the frequency of goal-setting/pursuit behaviors. It is available for download here.

  • Cited Literature
    1. Gestsdottir, S., & Lerner, R. M. (2007). Intentional self regulation and positive youth development in early adolescence: Findings from the 4-H Study of Positive Youth Development. Developmental Psychology, 43, 508–521. http://dx.doi.org/10.1037/0012-1649.43.2.508
    2. Geldhof, G. J., Gestsdottir, S., Stefansson, K. K., Johnson, S. K., Bowers, E. P., & Lerner, R. M. (2015). The selection, optimization, and compensation questionnaire: The validity and reliability of forced-choice versus Likert-type measurement of intentional self-regulation in late adolescence. International Journal of Behavioral Development, 39, 171–185. http://dx.doi.org/10.1177/0165025414560447

Social-Emotional Skills

Perseverance

This scale consists of 4 items assessing youth’s ability to pursue tasks to completion. Sample items include: “I finish whatever I begin” and “Once I make a plan to get something done, I stick to it.” Youth respond on a 5-point scale. Responses for the first two items are: Almost never, Sometimes, Often, Very often, or Almost always. Responses for the last two items are: Not at all like me, A little like me, Somewhat like me, Mostly like me, or Very much like me.

Scale

EPOCH Measure of Adolescent Well-Being – Perseverance Scale

What It Measures

A youth’s reported perseverance or persistence in completing tasks.

Intended Age Range

10- to 18-year-olds.

Rationale

This measure was selected based on its brevity, promising evidence of reliability and validity, and support for use with diverse populations of youth.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored on a 5-point scale from 1 (Almost never or Not at all like me) to 5 (Almost always or Very much like me). The total score is computed by averaging across all items.

How to Interpret Findings

A higher score reflects a youth’s greater perceived ability to persevere in completing a task or achieving a goal.

Access and Permissions

The scale is available for non-commercial use with no charge, after registering here. The Perseverance Scale is provided here. The full EPOCH measure is available here.

Alternatives

A construct related to perseverance is grit. Grit has been conceptualized as one’s sustained interest in and efforts toward achieving a long-term goal. Angela Duckworth’s 8-item Grit scale intended for use with children is available here.

  • Cited Literature
    1. Kern, M. L., Benson, L., Steinberg, E. A., & Steinberg, L. (2016). The EPOCH Measure of Adolescent Well-Being. Psychological Assessment, 28, 586-597. http://dx.doi.org/10.1037/pas0000201

Social-Emotional Skills

Youth Self Advocacy

This 7-item measure assesses the extent to which youth seek help from others when confronted with challenges. Two of the items come from the California Healthy Kids Survey Problem Solving Scale (i.e., “When I need help I find someone to talk with” and “I know where to go for help with a problem”). The other five were developed to cover additional contexts for help-seeking (e.g., “I ask for help from teachers or friends when I have difficult school work”) and to add two negatively worded items. Response options are Not at all true, A little true, Mostly true, or Very true.


Scale
Help-Seeking Scale

What It Measures
The extent to which a young person engages in help-seeking behaviors.

Intended Age Range
Youth ages 10 and older.

Rationale
This measure was selected because it has evidence of reliability and validity and has shown sensitivity to change in a mentoring context.

Cautions
Although promising, evidence of reliability is limited to one study. While the sample for the study included 30 mentoring programs across a range of U.S. locations, some groups of youth were not well represented.

Special Administration Information
None.

How to Score
Each item is scored on a 4-point scale from 1 (Not at all true) to 4 (Very true). The score is computed by averaging ratings across all 7 items after reverse scoring the two negatively worded items (“I don’t like to ask for help from other people” and “I try not to ask others for help even when I have trouble figuring things out on my own”).
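
A minimal sketch of this scoring procedure in Python. The reverse-scoring rule (subtract the 1–4 rating from 5) and the averaging follow the description above; the item ordering, and therefore the positions of the two negatively worded items, are illustrative rather than taken from the published scale.

```python
def score_help_seeking(ratings, negative_items=(5, 6)):
    """Average seven 1-4 ratings after reverse-scoring the negatively worded items.

    ratings: list of seven ratings from 1 (Not at all true) to 4 (Very true).
    negative_items: 0-based positions of the two negatively worded items
    (assumed here to be the last two; adjust to match your survey's order).
    """
    # Reverse-score a 1-4 rating by subtracting it from 5 (so 4 -> 1 and 1 -> 4).
    adjusted = [5 - r if i in negative_items else r for i, r in enumerate(ratings)]
    return sum(adjusted) / len(adjusted)

# Example with hypothetical ratings; the two final items are the negative ones.
print(round(score_help_seeking([4, 4, 3, 4, 3, 1, 2]), 2))  # -> 3.57
```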

How to Interpret Findings
Higher scores indicate a greater willingness to seek help when needed.

Access and Permissions
The Help-Seeking Scale is available for non-commercial use with no charge. A formatted version can be found here.

Alternatives
None.

  • Cited Literature

    1. Jarjoura, G. R., Tanyu, M., Forbush, J., Herrera, C., & Keller, T. E. (2018). Evaluation of the Mentoring Enhancement Demonstration Program: Technical report (Office of Justice Programs’ National Criminal Justice Reference Service Document Number 252167).
    2. California Department of Education. (2021). Problem Solving Scale, California Healthy Kids Survey.

Social-Emotional Skills

Career Exploration

This scale consists of 5 items, which are introduced with, “When you explore careers, to what extent do you agree with the following statements? Right now I am…” Sample items include: “Learning what I can do to improve my chances of getting into my chosen career” and “Learning as much as I can about the particular educational requirements of the career that interests me the most.” Youth respond on a 5-point scale: Strongly disagree, Disagree, Agree and disagree, Agree, or Strongly agree.

Scale

Vocational Identity Status Assessment – In-Depth Career Exploration Scale

What It Measures

A youth’s reported efforts and activities to explore and understand a field of work or career.

Intended Age Range

13- to 25-year-olds.

Rationale

This measure was selected based on its relative brevity, promising evidence of reliability and validity, and support for its use with diverse populations of youth.

Cautions

Use of this scale may be more developmentally appropriate for older adolescents as they are closer in age to when work roles are established and in-depth career exploration is more likely to occur.

Special Administration Information

One item in this scale asks about a “chosen career,” which many youth may not have yet. It may be useful to have such youth either skip that question or think about careers that interest them.

How to Score

Each item is scored on a 5-point scale from 1 (Strongly disagree) to 5 (Strongly agree). The total score is computed by averaging across all items.

How to Interpret Findings

A higher score reflects greater levels of involvement in career exploration.

Access and Permissions

The scale is available for non-commercial use with no charge and is provided here.

Alternatives

Within the Vocational Identity Status Assessment (VISA), the authors have also published the In-Breadth Career Exploration Scale which assesses youth’s broader exploration of a number of careers. This scale might be more appropriate for youth who are earlier in their exploration of careers. The full VISA measure is available here. To assess a young person’s readiness to make career decisions and occupational choices, the Career Maturity Inventory is also available here.

Social-Emotional Skills

Youth-Centered Outcomes

On the Goal-Based Outcomes tool, youth identify up to three goals for themselves. The tool was originally developed for use in clinical service contexts.

In this context, developers recommend that goals be established collaboratively with a service provider but be agreed upon and owned by the person seeking help. In the context of a mentoring program, goals could be set with the assistance of program staff, mentors, and/or parents. After youth goals are set and recorded, progress toward each goal is rated on a scale from 0 = Goal not at all met to 10 = Goal fully met/reached, with a midway anchor point of 5. The tool, with the same goals listed, can then be completed again at later points in time to assess progress toward those goals. Ratings of progress toward each goal can be completed independently by youth as well as with the assistance of program staff, mentors, and/or parents.

Scale

Goal Based Outcomes tool

What It Measures

A youth’s reported progress toward their chosen goals.

Intended Age Range

10- to 18-year-olds; the tool has also been used with adults.

Rationale

This measure was chosen based on its applicability across diverse populations and settings, demonstrated ability to detect change in youth, evidence of reliability and validity, and brevity. Reported progress on goals as assessed on the Goal Based Outcomes tool has been positively associated with improvements in emotional symptoms and functioning and negatively associated with psychosocial difficulties in youth.

Cautions

The simplicity of the goal-generation procedure may lead youth and others involved in helping them to set goals to overlook underlying or less conscious concerns of youth. Most evidence for the use of the Goal Based Outcomes tool is based on samples of youth in clinical settings. It also should be taken into account that ratings on the Goal Based Outcomes tool (and other similar measures) involve a subjective component and may be subject to social desirability bias (i.e., a motivational tendency or investment on the part of raters, such as youth or mentors, to report positive progress). Related to this, it is important to remember that youth should not be expected to fully meet their goals, that youth may only make limited progress on their goals in a given period of time, and that a youth’s goals may change. In view of these considerations, it may be useful to have a structure in place to review and discuss goal progress ratings as well as to collect data on goal progress from other sources.

Special Administration Information

The developers of the Goal Based Outcomes tool note that it may be useful to periodically provide opportunities for goals to be reset.

How to Score

Each goal is scored from 0 (Goal not at all met) to 10 (Goal fully met/reached). Progress on up to three goals can be assessed at each meeting with youth, at the beginning and end of an intervention, or at set time points. Goal progress for up to 12 meetings can be monitored using the goal progress chart. Reported progress on up to three goals is averaged at each meeting. Change is calculated by averaging the differences between scores on each goal. Although not discussed by the developers, the measure also appears potentially appropriate for use with youth who are not participating in an intervention (for example, youth in a non-mentoring comparison group in an evaluation study).
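
The change calculation described above (averaging the differences between per-goal scores at two time points) can be sketched in Python. The function and variable names are illustrative; the 0–10 ratings follow the scoring rule above.

```python
def gbo_change(earlier, later):
    """Average the per-goal differences between two sets of 0-10 progress ratings.

    earlier/later: ratings for the same goals (up to three), in the same order.
    A positive result indicates overall progress toward the goals.
    """
    diffs = [after - before for before, after in zip(earlier, later)]
    return sum(diffs) / len(diffs)

# Example: three goals rated at the start and end of a period (hypothetical data).
print(gbo_change([2, 4, 6], [6, 8, 7]))  # -> 3.0
```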

How to Interpret Findings

Higher scores reflect greater progress toward the goals set.

Access and Permissions

The scale is available for non-commercial use with no charge and is available here.

Alternatives

Goal Attainment Scaling (GAS) is a widely used process that provides greater depth and nuance in assessment of goal attainment; however, GAS is more complex and time-intensive and requires training. More information on this measure and its application within mentoring is available here.

Healthy and Prosocial Behavior

Healthy and prosocial behavior includes behaviors that are directly promotive of physical health, such as exercise and a nutritional diet, as well as those that are directed to supporting the health and well-being of others, such as peers and members of one’s community more generally.

Attention to promoting these positive outcomes is consistent with the strengths-oriented perspective that is widely emphasized in the mentoring field.1,2,3,4 Theoretically, the presence of a mentor in the life of a young person may support healthy and prosocial behavior.5,6 Along with processes such as mentor role modeling of relevant behaviors, mentors may help young people to both learn and apply strategies for incorporating healthy and prosocial behaviors into their lives.7,8 In deciding which health and prosocial behaviors to include, priority was given to those behaviors that theory and research suggest mentors may be most likely to influence. The following types of healthy and prosocial behaviors were selected for inclusion in the Toolkit: healthy eating, physical activity, prosocial behavior, and civic engagement.

Healthy Eating

Healthy eating in this context refers to choosing an appropriate calorie level to help achieve and maintain a healthy body weight, support nutrient adequacy, and reduce the risk of chronic disease.9 Given that dietary behaviors initiated during childhood and adolescence are often carried into adulthood, mentors have the potential to influence healthy eating behaviors not only over the short term, but also across the lifespan of the young people with whom they work. At the same time, there also may be powerful influences in the lives of young people that make it difficult for mentors to support healthy eating behaviors. Access to healthy food, for example, spans multiple contexts for young people (home, school, community) and, where limited, may compromise the efforts of mentors. The social and behavioral norms that exist in these settings, too, may work against healthy eating behavior in many instances. One useful strategy for addressing contextual influences on healthy eating is the use of peer mentors. Unlike adult mentors, cross-age peer mentors may be present in many of the same contexts as the young people they are mentoring (school and community), and thus are in key positions to demonstrate both the importance of healthy eating and where healthy food can be accessed in their schools and communities. In line with these possibilities, research suggests that cross-age peer mentoring programs hold promise as an effective method for delivering health curricula and enhancing the development of younger children6 and for helping mentees improve nutritional knowledge, attitudes, and intentions toward healthy eating.10 The evidence base regarding effects of mentoring on healthy eating among youth, however, is surprisingly limited.

Physical Activity

Excessive weight during childhood is a growing public health problem that is linked to obesity-related challenges, such as childhood asthma, hypertension, diabetes, and weight-related criticism (a type of peer victimization targeting weight or body shape).11,12 The combination of inadequate physical activity and an unhealthy diet contributes directly to overweight status among youth.13 Research with ethnic minority middle school students suggests parental modeling of healthy behaviors is a strategy that may help children maintain a healthy weight.14 Mentors may be able to play a useful role in supporting parents with these efforts. Mentors may engage in physical activities with youth as part of their time together, for example, and also may promote behaviors in this area by encouraging or facilitating their mentees’ participation in sports and recreational programs that help to keep youth active. Results from an evaluation of a youth development and mentoring program for girls found that one of the areas of greatest benefit was the domain of healthy behavior, with program participants reporting greater improvements in physical activity relative to a comparison group.15 In this study, physical activity included the activities program participants were involved in during the past week, including walking, running, bicycling, skating, and swimming. In general, however, there has been only limited attention to mentoring’s potential effects on physical activity level and the conditions under which such benefits are likely to be realized. Thus these topics remain poorly understood.

Treatment Adherence

Treatment adherence is often defined as the extent to which a person’s behavior aligns with the recommendations of a health care provider.16 Following provider recommendations is critical to the effectiveness of prescribed therapies, but the rate of treatment adherence in pediatric populations is often below 50 percent across health conditions and therapies.16,17,18 Low adherence is associated with poor health outcomes and is particularly problematic for youth with severe or chronic health conditions. Low treatment adherence can also affect health care provider behavior—for example, providers may modify treatment frequency or dosage because they believe the treatment is ineffective.19

Promoting good treatment adherence is critical to improving health outcomes for youth.20 Youth with chronic health problems benefit from mentoring.21 Mentors may be particularly well positioned to influence the treatment adherence behaviors of youth who struggle with health conditions that require care from a health professional. Mentors could encourage and reinforce treatment plan compliance, work collaboratively with youth to identify and problem-solve barriers to adherence, and support youth as they take an active role in managing their health condition. Some work supports this possibility. For example, an evaluation of a mentoring program targeting youth receiving mental health care found that mentored youth utilized their mental health services to a greater extent than a comparison group of similar non-mentored youth.22 Another study of the same program found that mentored youth were also less likely than a matched comparison group to have an unplanned, client-initiated ending of their mental health treatment and more likely to have a planned ending of treatment.23

Prosocial Behavior

Prosocial behavior is “voluntary behavior that benefits others or promotes harmonious relations with others.”24,25 As such, prosocial behavior can be conceived of as lying at the core of a mentor’s behavior and intentions (i.e., to benefit the mentee) and thus, not surprisingly, is often a desired youth outcome for mentoring programs. Research has found that youth who are considered “prosocial” are good problem solvers, are considerate, and tend not to be aggressive.26,27 Mentors may promote prosocial behavior in their mentees by teaching and helping them to practice social skills, by modeling prosocial behavior themselves, and by introducing their mentees to diverse social interactions and contexts.28 Despite this potential, research evidence for effects of mentoring on prosocial behavior is equivocal. Of note, evidence of overall effects of mentoring program participation on measures of prosocial behavior was absent in the Role of Risk study29 as well as in randomized controlled trials of the Big Brothers Big Sisters school-based mentoring program30 and school-based mentoring programs funded through the Student Mentoring Program.31

Civic Engagement

Civic engagement refers to the ways in which citizens participate in the life of a community to improve conditions for others or to help shape the community’s future.32 Civic engagement behaviors are, in fact, what mentors are ideally modeling in every interaction with the youth they are mentoring. They are thus, in theory, outcomes that mentoring relationships have the potential to impact, and research has found a link between civic engagement and mentoring. In the 4-H Study of Positive Youth Development, youth who participated in 4-H were 2.1 times more likely than youth not in 4-H to make contributions to their communities and 1.8 times more likely to have higher scores on measures of active and engaged citizenship.33 Active and engaged citizenship is a construct that reflects young people’s responses to measures of civic duty, civic skills, neighborhood connection, and civic participation. 4-H youth also reported more mentors than did non-4-H youth. Taken together, these findings suggest a link between having a mentor and the promotion of civic engagement, among other behaviors associated with positive youth development.33 In the Role of Risk study, however, there was no evidence of an effect of program participation on youths’ reported levels of involvement in community service activities.29

  • Cited Literature

     

    1. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    2. Eccles, J., & Gootman, J. (Eds.). (2002). Community programs to promote youth development. Washington, DC: National Academy Press.
    3. Lewin-Bizan, S., Bowers, E. P., & Lerner, R. M. (2010). One good thing leads to another: Cascades of positive youth development among American adolescents. Development and Psychopathology, 22, 759–770. http://dx.doi.org/10.1017/S0954579410000441
    4. Scales, P. C., Benson, P. L., & Mannes, M. (2006). The contribution to adolescent well-being made by nonfamily adults: An examination of developmental assets as contexts and processes. Journal of Community Psychology, 34, 401–413. http://dx.doi.org/10.1002/jcop.20106
    5. Portwood, S. G., Ayers, P. M., Kinnison, K. E., Waris, R. G., & Wise, D. L. (2005). YouthFriends: Outcomes from a school-based mentoring program. Journal of Primary Prevention Special Issue: Mentoring with Children and Youth, 26, 129–145. http://dx.doi.org/10.1007/s10935-005-1975-3
    6. Smith, L. H. (2011). Piloting the use of teen mentors to promote a healthy diet and physical activity among children in Appalachia. Journal for Specialists in Pediatric Nursing, 16, 16–26. http://dx.doi.org/10.1111/j.1744-6155.2011.00286.x
    7. Martinek, T., Schilling, T., & Johnson, D. (2001). Transferring personal and social responsibility of underserved youth to the classroom. Urban Review, 33, 29–45. http://dx.doi.org/10.1023/A:1010332812171
    8. Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.
    9. Odoms-Young, A. M., & Fitzgibbon, M. (2008). Familial and environmental factors that contribute to pediatric overweight in African American populations: Implications for prevention and treatment. Progress in Pediatric Cardiology, 25, 147–151. http://dx.doi.org/10.1016/j.ppedcard.2008.06.002
    10. Smith, L. H. (2011). Piloting the use of teen mentors to promote a healthy diet and physical activity among children in Appalachia. Journal for Specialists in Pediatric Nursing, 16, 16–26. http://dx.doi.org/10.1111/j.1744-6155.2010.00264.x
    11. Webber, K. J., & Loescher, L. J. (2013). A systematic review of parent role modeling of healthy eating and physical activity for their young African American children. Journal for Specialists in Pediatric Nursing, 18, 173–188. http://dx.doi.org/10.1111/jspn.12033
    12. Hayden-Wade, H. A., Stein, R. I., Ghaderi, A., Saelens, B., Zabinski, M., & Wilfley, D. E. (2005). Prevalence, characteristics, and correlates of teasing experiences among overweight children vs. non-overweight peers. Obesity Research, 13, 1381–1392. http://dx.doi.org/10.1038/oby.2005.167
    13. Glover, S., Piper, C., Hassan, R., Preston, G., Wilkinson, L., Bowen-Seabrook, J., & Williams, S. (2011). Dietary, physical activity, and lifestyle behaviors of rural African Americans South Carolina children. Journal of the National Medical Association, 103, 300–304. http://dx.doi.org/10.1016/S0027-9684(15)30310-2
    14. Stevens, C. J. (2010). Obesity prevention interventions for middle school-age children of ethnic minority: A review of the literature. Journal for Specialists in Pediatric Nursing, 15, 233–243. http://dx.doi.org/10.1111/j.1744-6155.2010.00242.x
    15. Kuperminc, G. P., Thomason, J., DiMeo, M., & Broomfield-Massey, K. (2011). Cool Girls, Inc.: Promoting the positive development of urban preadolescent and early adolescent girls. Journal of Primary Prevention, 32, 171–183. http://dx.doi.org/10.1007/s10935-011-0243-y
    16. World Health Organization. (2003). Adherence to long-term therapies: Evidence for action. Geneva, Switzerland: World Health Organization. Retrieved from http://www.who.int/chp/knowledge/publications/adherence_report/en/
    17. Dunbar-Jacob, J., & Mortimer-Stephens, M. K. (2001). Treatment adherence in chronic disease. Journal of Clinical Epidemiology, 54(Suppl 1), S57–S60. https://doi.org/10.1016/s0895-4356(01)00457-7
    18. La Greca, A. M., & Bearman, K. J. (2003). Adherence to prescribed medical regimens. In M. C. Roberts (Ed.), Handbook of pediatric psychology (3rd ed., pp. 119–140). Guilford.
    19. DiMatteo, M. R., Giordani, P. J., Lepper, H. S., & Croghan, T. W. (2002). Patient adherence and medical treatment outcomes: A meta-analysis. Medical Care, 40(9), 794–811. https://doi.org/10.1097/00005650-200209000-00009
    20. Quittner, A. L., Modi, A., Lemanek, K. L., Ievers-Landis, C. E., & Rapoff, M. A. (2008). Evidence-based assessment of adherence to medical treatments in pediatric psychology. Journal of Pediatric Psychology, 33(9), 916–936. https://doi.org/10.1093/jpepsy/jsm064
    21. Lipman, E. L., DeWit, D., DuBois, D. L., Larose, S., & Erdem, G. (2018). Youth with chronic health problems: how do they fare in mainstream mentoring programs? BMC Public Health, 18(1), 102. https://doi.org/10.1186/s12889-017-5003-3
    22. Walker, S. C. (2013). 4Results Mentoring evaluation. Division of Public Behavioral Health & Justice Policy, Department of Psychiatry and Behavioral Sciences, University of Washington.
    23. DuBois, D. L., Herrera, C., & Higley, E. (2018). Investigation of the reach and effectiveness of a mentoring program for youth receiving outpatient mental health services. Children and Youth Services Review, 91, 85–93. https://doi.org/10.1016/j.childyouth.2018.05.033
    24. Dovidio, J. F., Piliavin, J. A., Schroeder, D. A., & Penner, L. A. (2006). The social psychology of prosocial behavior. Mahwah, NJ: Erlbaum.
    25. Eisenberg, N., Guthrie, I. K., Murphy, B. C., Shepard, S. A., Cumberland, A., & Carlo, G. (1999). Consistency and development of prosocial dispositions: A longitudinal study. Child Development, 70, 1360–1372. http://dx.doi.org/10.1111/1467-8624.00100
    26. Marsh, D. T., Serafica, F. C., & Barenboim, C. (1981). Interrelationships among perspective taking, interpersonal problem solving, and interpersonal functioning. Journal of Genetic Psychology, 138, 37–48. http://dx.doi.org/10.1080/00221325.1981.10532840
    27. Eisenberg, N., Carlo, G., Murphy, B., & Van Court, P. (1995). Prosocial development in late adolescence: A longitudinal study. Child Development, 66, 1179–1197. https://doi.org/10.2307/1131806
    28. Schirm, V., Ross-Alaolmolki, K., & Conrad, M. (1995). Collaborative education through a foster grandparent program: Enhancing intergenerational relations. Gerontology & Geriatrics Education, 15, 85-94. http://dx.doi.org/10.1300/J021v15n03_07
    29. Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. New York, NY: A Public/Private Ventures project distributed by MDRC. Retrieved from http://www.mdrc.org/sites/default/files/Role%20of%20Risk_Exec%20Sum-web%20final.pdf
    30. Herrera, C., Grossman J. B., Kauh, T. J., Feldman, A. F., McMaken, J., & Jucovy, L. Z. (2007). Making a difference in schools: The Big Brothers Big Sisters School-based Mentoring Impact Study. Philadelphia, PA: Public/Private Ventures. Retrieved from http://bit.ly/2c3qxrP
    31. Bernstein, L., Rappaport, C. D., Olsho, L., Hunt, D., & Levin, M. (2009). Impact evaluation of the US Department of Education’s Student Mentoring Program (NCEE Report 2009-4047). Washington, DC: National Center for Education Evaluation and Regional Assistance Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/pubs/20094047/pdf/20094047.pdf
    32. Adler, R. P., & Goggin, J. (2005). What do we mean by “civic engagement”? Journal of Transformative Education, 3, 236–253. http://dx.doi.org/10.1177/1541344605276792
    33. Lerner, R. M., & Lerner, J. V. (2013). The Positive Development of Youth: Report of the findings from the first seven years of the 4-H Study of Positive Youth Development. Medford, MA: Tufts University, Institute for Applied Research in Youth Development. Retrieved from http://bit.ly/2bLXCYB

Healthy and Prosocial Behavior

Healthy Eating

This measure consists of 5 items that ask about the frequency with which the youth ate different types of food the previous day. Examples of the types of food asked about are milk, yogurt, and cheese; vegetables; and cookies, doughnuts, pie, and cake. Item response choices are Didn’t eat, Ate once, and Ate twice or more.

Scale

National Longitudinal Study of Adolescent to Adult Health (Add Health) – nutrition/dietary intake items

What It Measures:

The frequency of the youth’s daily consumption of food from various food groups (e.g., fruits, vegetables, dairy).

Intended Age Range

12- to 18-year-olds; however, somewhat younger youth also may be able to provide reliable responses.

Rationale

The items were selected based on their ability to provide a short, easy-to-administer tool for assessing dietary intake among youth, the use of a short, one-day recall period (expected to be less subject to recall inaccuracies than longer periods, especially for youth), and promising evidence of reliability. Additionally, normative data for responses to this set of questions for a large, nationally representative sample of 7th to 12th graders are available through the Add Health study.

Cautions

The use of self-reported data to assess dietary intake is less reliable and less accurate than more objective measures (e.g., the doubly labeled water method or biomarkers such as plasma carotenoids), especially among children younger than 10 years old and those who are overweight or obese. These items also do not provide information on portion sizes of foods eaten and thus are not appropriate for measuring total diet (i.e., daily energy intake). Additionally, these items would not be adequate for mentoring programs that have a focus on reducing nutritional risk behaviors, such as skipping breakfast, eating dinner while watching TV, etc.

Special Administration Information

None.

How to Score

Responses to each item can be used to assess the frequency of consumption of food from individual food groups. In addition, although precedent for doing so has not been identified, a total score could be computed by averaging responses across all items. Each item would be scored on the same 3-point response set noted above, from 0 (Didn’t eat) to 2 (Ate twice or more) for the 4 items that refer to nutritious types of food, and in the reverse direction for the remaining item, which asks about consumption of sweets.
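
As an illustration only, the total-score computation described above can be sketched in a few lines of Python. The item keys below are hypothetical placeholders (the actual items name specific food groups), and responses are assumed to be coded 0, 1, and 2 per the 3-point response set:

```python
# Illustrative sketch of the total-score computation described above.
# Item names are hypothetical placeholders; responses are coded 0-2,
# with the sweets item reverse-coded so that higher always = healthier.

def healthy_eating_total(responses):
    """Average the 5 items, reverse-coding the sweets item (0 <-> 2)."""
    nutritious = ["dairy", "vegetables", "fruit", "grains"]  # placeholder labels
    scores = [responses[item] for item in nutritious]
    scores.append(2 - responses["sweets"])  # higher = less frequent sweets
    return sum(scores) / len(scores)

print(healthy_eating_total(
    {"dairy": 2, "vegetables": 1, "fruit": 2, "grains": 1, "sweets": 0}
))  # (2 + 1 + 2 + 1 + 2) / 5 = 1.6
```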

How to Interpret Findings

Responses to these items can be compared to those of youth in the Add Health study to help determine how program participants compare to youth more generally. These comparisons will be more informative when conducted with reference to the youth’s specific demographic subgroup (e.g., 9th grade girls).

Access and Permissions

The measure is available for non-commercial use free of charge and is made available here.

Alternatives

A more detailed and longer assessment (e.g., the National Health and Nutrition Examination Survey Food Frequency Questionnaire) can also be used in assessing dietary intake among young people. This measure may also be used with children younger than 10 years of age when completed by a parent or other adult informant.

  • Cited Literature
    1. Harris, K. M., Halpern, C. T., Whitsel, E., Hussey, J., Tabor, J., Entzel, P., & Udry, J. R. (2009). The National Longitudinal Study of Adolescent to Adult Health: Research design. Retrieved from http://www.cpc.unc.edu/projects/addhealth/design

Healthy and Prosocial Behavior

Physical Activity

This measure consists of the following question: During the past 7 days, on how many days were you physically active for a total of at least 60 minutes per day? (Add up all the time you spent in any kind of physical activity that increased your heart rate and made you breathe hard some of the time.) Response choices are 0 days; 1 day; 2 days; 3 days; 4 days; 5 days; 6 days; 7 days.

Scale

Youth Risk Behavior Survey (YRBS)—Physical activity item

What It Measures:

This question assesses the frequency of a youth’s participation in vigorous physical activity.

Intended Age Range

12- to 18-year-olds (grades 7–12); however, somewhat younger youth also may be able to provide reliable responses.

Rationale

This one-item measure provides a simplified, easy-to-administer tool for assessing physical activity among children and adolescents in grades 7 to 12. It also has a relatively short recall period, which is ideal for use among youth. A number of variations of this measure have been used to assess physical activity in children and adolescents, and comparisons of self-reports with more objective measures (e.g., heart rate monitors, pedometers) have found moderate to high reliability. There are also normative data available for responses to the question, derived from a large, nationally representative sample of 7th to 12th graders. To date, however, no studies have used this item or the longer (5-item) physical activity subscale to assess mentoring program impacts.

Cautions

Studies examining self-report measures of physical activity among children younger than 10 years old have found them to be less reliable and less accurate than more objective measures such as pedometry. Thus, the YRBS item recommended here may not be suitable for children younger than 10 years. It is also possible that responses on this item may capture physical activity that is a part of school-based physical education (PE), which is not likely to be affected by participation in a mentoring program. One approach that programs may wish to use in addressing this concern is to collect information on PE participation and then control statistically for PE participation when evaluating program effects on physical activity as assessed by the YRBS item. An additional question from the YRBS can be used for this purpose (“In an average week when you are in school, on how many days do you go to physical education (PE) classes?”).

Special Administration Information

None.

How to Score

Responses on this measure can be categorized into physical activity levels. For example, those reporting vigorous physical activity on 3 or more days may be categorized as “physically active,” those reporting vigorous activity on 1-2 days may be categorized as “moderately active,” and those reporting vigorous activity on 0 days may be categorized as “inactive.”
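
The categorization just described amounts to simple thresholding of the 0-to-7 day count; a minimal sketch in Python, using the example cutoffs and category labels from the text:

```python
# Map days of vigorous activity in the past week (0-7) to the
# example categories described above (cutoffs are illustrative).
def activity_level(days_active):
    if days_active >= 3:
        return "physically active"
    if days_active >= 1:
        return "moderately active"
    return "inactive"

print(activity_level(5))  # physically active
print(activity_level(2))  # moderately active
print(activity_level(0))  # inactive
```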

How to Interpret Findings

Higher scores reflect greater frequency of engaging in vigorous physical activity.

Access and Permissions

The measure is available for non-commercial use free of charge from the YRBS website and is also provided here.

Alternatives

The longer (5-item) physical activity subscale of the YRBS, as well as the Daily Activities subscale of the National Longitudinal Study of Adolescent to Adult Health (11 items), can be used by programs that are interested in measuring different types of physical activity (e.g., frequency of participation in team sports) and sedentary behaviors (e.g., time spent watching television). Additionally, more objective measures, such as accelerometers and pedometers, can be used to measure physical activity in children and adolescents. A good review of these methods is available here.

Healthy and Prosocial Behavior

Treatment Adherence

This 4-item measure was developed following a review of existing treatment adherence measures for physical and mental health treatments and can be used to capture treatment adherence across a broad range of treatment types and conditions. Response options are Never, Rarely, Sometimes, Often, or Always.


What It Measures:

The extent to which a young person follows the recommendations of their health care provider.

Intended Age Range

Youth 10 and older.

Rationale

Mentoring programs that serve youth with physical or mental health challenges may want to assess the impact of mentoring on the youth’s treatment adherence, which is an important indicator of whether the youth will ultimately benefit from treatment. Unlike most existing measures, this measure does not refer to a specific health condition or treatment.

Cautions

The development of these scale items was informed by existing scales that show evidence of reliability and validity. However, such evidence does not exist for this new measure. Although the items have face validity (i.e., they appear to reflect treatment adherence well), there is no evidence to date to support combining the four items to form a broader measure. In addition, self-report measures of treatment adherence often overestimate the extent to which an individual follows the treatment recommendations of their health care provider.

Special Administration Information

This version of the measure uses the term “health care provider” to indicate the individual who is providing care to the youth. This term can be changed to “mental health care provider” or another specific individual providing care.

How to Score

Each item is scored from 0 (Never) to 4 (Always), and the items are used independently as single items rather than combined into a total score. The item, “I miss appointments with my health care provider,” is reverse coded.
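
Reverse coding simply mirrors a response across the endpoints of the scale. A small sketch of a helper that works for whatever minimum and maximum a scale uses (the 0-to-4 default below is illustrative):

```python
# Reverse-code a single item response by mirroring it across the scale.
# The default 0-4 range below is illustrative; pass other endpoints as needed.
def reverse_code(value, scale_min=0, scale_max=4):
    return scale_min + scale_max - value

print(reverse_code(0))  # 4  (a "Never" on a reverse-worded item scores highest)
print(reverse_code(3))  # 1
```

The same helper applies to items scored on other ranges, e.g., `reverse_code(1, 1, 4)` for an item coded 1 to 4.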

How to Interpret Findings

Higher scores indicate a greater degree of treatment adherence.

Access and Permissions

The Treatment Adherence measure is available for non-commercial use with no charge. A formatted version can be found here.

Alternatives

The Partners in Health (PIH) scale measures the degree of a person’s active involvement in managing their care across a range of chronic conditions. The 12 PIH items include 4 subscales: Knowledge; Partnership in Treatment; Recognition and Management of Symptoms; and Coping. PIH items are rated on a scale from 0 (Very good) to 8 (Very poor). The revised 12 items of the PIH are publicly available and can be found here.

  • Cited Literature
    1. Smith, D., Harvey, P., Lawn, S., Harris, M., & Battersby, M. (2017). Measuring chronic condition self-management in an Australian community: Factor structure of the revised Partners in Health (PIH) scale. Quality of Life Research, 26, 149–159. https://doi.org/10.1007/s11136-016-1368-5
    2. Peñarrieta-de Córdova, I., Barrios, F. F., Gutierrez-Gomes, T., Piñonez-Martinez, M. del S., Quintero-Valle, L. M., & Castañeda-Hidalgo, H. (2014). Self-management in chronic conditions: Partners in Health scale instrument validation. Nursing Management, 20(10), 32–37. https://doi.org/10.7748/nm2014.02.20.10.32.e1084

Healthy and Prosocial Behavior

Prosocial Behavior

This measure consists of 6 items. Sample items include: “I am nice to kids who are different from me,” “I try to cheer up other kids if they are feeling sad,” and “I am a good friend to others.” Each item is rated on a 4-point scale: NO!, no, yes, and YES!

Scale

Social-Emotional Character Development Scale (SECDS) – Prosocial Behavior subscale

What It Measures:

Youth engagement in prosocial behavior with peers.

Intended Age Range

6- to 11-year-olds.

Rationale

This scale was selected based on its brevity, ease of administration, and evidence of good reliability and validity as well as sensitivity to effects of interventions. It is one of several scales in the SECDS—a comprehensive measure of social-emotional and character development.

Cautions

This measure was developed for use with elementary-school-age children; note in particular that the wording of the items that comprise the scale may not be appropriate for older youth. For example, the item, “I play nicely with others,” may not be a salient descriptor of prosocial behavior for older youth.

Special Administration Information

None.

How to Score

Each item is scored from 1 (NO!) to 4 (YES!). The total score is computed by averaging across all items.
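
As a minimal sketch, assuming the six item responses are already coded 1 through 4, the averaging described above looks like:

```python
# Average the 6 SECDS Prosocial Behavior items, each scored
# 1 (NO!) to 4 (YES!); higher values reflect more prosocial behavior.
def prosocial_score(item_scores):
    return sum(item_scores) / len(item_scores)

print(prosocial_score([4, 3, 4, 4, 3, 4]))  # 22 / 6, about 3.67
```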

How to Interpret Findings

Higher scores on this measure reflect greater self-reported prosocial behavior.

Access and Permissions

This scale is available for non-commercial use with no charge and can be obtained here. A ready-to-use version is also available here.

Alternatives

The Informal Helping Youth Scale measures prosocial behavior across several contexts (e.g., home, school, community). It has good reliability across elementary, middle, and high-school-aged youth. This measure may be a good alternative for programs interested in assessing prosocial behaviors for a wide age range of youth. The measure and its documentation can be found here.

  • Cited Literature
    1. Ji, P., Flay, B. R., & DuBois, D. L. (2013). Social-Emotional and Character Development Scale: Development and initial validation with urban elementary school students. Journal of Research in Character Education, 9, 121–147. (Available here.)

Healthy and Prosocial Behavior

Civic Engagement

This measure consists of 8 items. Youth are asked to report how often in the last year they engaged in various civic activities.

Sample items include: “How often do you help make your city or town a better place for people to live?” and “During the last 12 months, how many times have you been a leader in a group or organization?” Some items are rated on a 5-point scale (Never to Very often/5 or more times), whereas other items are rated on a 6-point scale (Never to Every day).

Scale

Active and Engaged Citizenship (AEC) – Civic Participation scale

What It Measures:

The frequency with which the youth engages in civic-related behaviors.

Intended Age Range

13- to 16-year-olds (grades 8–10).

Rationale

This scale was selected based on its brevity, ease of administration, and promising evidence of reliability and validity.

Cautions

This tool has been assessed only with youth in grades 8 to 10 and may be less relevant for younger youth and for those living in urban areas. Although the required reading level is below 8th grade, the activities asked about may not be as relevant for younger youth.

Special Administration Information

None

How to Score

Each item is scored from 1 (Never) to 5 (Very often/5 or more times) or from 1 (Never) to 6 (Every day). The total score is computed by summing across the items.

How to Interpret Findings

Higher scores indicate more frequent participation in civic activities during the last 12 months.

Access and Permissions

The measure is available for non-commercial use with no charge or advance permission required. A link to the measure is made available here.

Alternatives

None recommended.

  • Cited Literature
    1. Zaff, J., Boyd, M., Li, Y., Lerner, J. V., & Lerner, R. M. (2010). Active and engaged citizenship: Multi-group and longitudinal factorial analysis of an integrated construct of civic engagement. Journal of Youth and Adolescence, 36, 736–750. http://dx.doi.org/10.1007/s10964-010-9541-6

Problem Behavior

Problem behaviors include a wide range of negative youth behaviors that could result in significant personal, social, academic, or legal consequences for the youth. The impacts of those behaviors can also extend beyond the individual youth to affect family, peers, and the larger community.

Problem behaviors include such outcomes as school misbehavior, aggression, delinquency, hyperactivity, inattention, and non-compliance. These types of behaviors can be a source of great distress for parents, teachers, and peers, and a wealth of intervention research has focused on preventing, deflecting, or remediating them.

Children who are served by community- or school-based mentoring programs often exhibit problem behaviors or are at heightened risk for developing such behaviors in adolescence and adulthood. Mentoring programs are generally successful in reducing some of these behaviors.1 In fact, the recent meta-analysis by DuBois and colleagues2 suggests that mentoring programs are particularly effective when directed at youth with pre-existing difficulties (e.g., problem behaviors) or those who experience significant levels of environmental risk. Yet it is not currently known whether the benefits of mentoring will accrue for youth with more severe antisocial tendencies.

In selecting problem behavior outcomes to review for this Toolkit, emphasis was given to outcomes in this domain that are often of concern to families, schools, mentoring programs, and program funders. The selected outcomes are youth delinquency (both self-report and official records), bullying, disruptive behavior at school and school discipline (both self-report and official records), substance use, opioid misuse, and school absenteeism and truancy (both self-report and official records). Each of these types of behavior places youth at significant risk for difficulties later in life, and each is thus an important target for intervention and prevention efforts.

Delinquency

Youth delinquency is often viewed as a precursor to later criminality, particularly for those children who engage in delinquent acts at an early age. Youth who go on to perpetrate violent and nonviolent crimes impose a substantial cost on society.3 In the US, Cohen and Piquero4 estimate that each high-risk youth who becomes a career criminal costs society $2.6–$5.3 million over a lifetime (e.g., lost wages, medical expenses, stolen property, costs of incarceration). Interventions that are effective in reducing youth delinquency thus have the potential both to improve the lives of individual youth and to reduce the social burden imposed by delinquent activity. Youth mentoring is arguably one of the most commonly applied interventions to prevent delinquency or deflect youth off the path from delinquency to later adult criminality.1 In fact, as part of its prevention efforts, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) alone invested $615 million in mentoring programs from 2008 to 2014. Studies support this investment, with meta-analyses showing small to medium effects in this area.1,2

Youth self-report and official records are two common sources of data on delinquency. Programs interested in a broad measure of delinquent behavior may choose to collect youth-reported delinquency, as this method is convenient, relatively inexpensive, and generally reliable and accurate in most contexts. There is some concern, however, that youth may under- or over-report delinquent behavior or arrests depending on the nature or frequency of these behaviors.5,6,7 Thus, a program interested in a more nuanced understanding of youth arrests, offenses, and sentencing may choose to collect records data on these outcomes. Collection of this information through official records requires considerably more program resources and comes with several steps and potential challenges, including: identifying agencies within and across jurisdictions from whom to collect records; clarifying the agencies’ documentation requirements around accessing data; applying for access to the data; ensuring that you have signed permission for collecting these records; and, in some cases, extracting needed information from case files. In addition, not all jurisdictions will grant the release of identified information; an agency may agree to provide only “deidentified” data, making it difficult to link the information collected to program records. Thus, the goals of the evaluation and the financial and staff resources of the organization should be considered when selecting how to measure delinquency.


Bullying

Bullying refers to aggressive behavior that is perpetrated repeatedly with the intention to inflict physical, psychological, or social harm on a peer.8 Children who bully their peers are often also aggressive with teachers, parents, and siblings and tend to experience a range of problems throughout childhood and young adulthood. For example, children who display early-onset conduct problems, including aggression, are known to experience school difficulties9,10 and peer relationship problems,11 and are less interpersonally skilled than their peers.12 During adolescence these children are likely to engage in higher rates of criminal activity13 and substance use,14 are more likely to experience teenage pregnancy,15 and are more apt to be diagnosed with a mental illness.16 As young adults, they are more likely to be unemployed and impoverished,17 to experience difficulties in romantic relationships,18 and to be diagnosed with antisocial personality disorder.19 These findings, coupled with the fact that many at-risk youth who are served by youth mentoring programs exhibit behavioral problems, were influential in our selection of bullying as an outcome. It is also clear that bullying is becoming a major public health problem in the US and abroad, and youth mentoring could complement school-wide interventions aimed at reducing bullying. Although limited, available evidence suggests that mentoring can lead to reductions in aggressive behavior,1 but more research is needed in this domain, particularly as it relates to the effect of mentoring on bullying per se.

 

Disruptive Behavior at School and School Discipline

The Toolkit contains measures of both disruptive behavior at school (through a self-report measure and the collection of school records) and the disciplinary actions that result from these behaviors (through school records).

Disruptive behavior refers to actions that disrupt or disturb academic or social activities, undermining learning conditions for students. Children who display heightened levels of disruptive behavior at school perform less well academically,20 have impaired social relations,21 and are at heightened risk for antisocial outcomes.22 Children’s disruptive behavior also negatively impacts the performance of teachers and other students in the classroom.

School disciplinary actions can include office disciplinary referrals (ODRs) and formal disciplinary actions, ranging from written reprimands to law enforcement referrals. Exclusionary discipline (i.e., removing students from class or school) includes in- and out-of-school suspension, temporary placement in an alternative educational setting, and expulsion from school, and can have particularly significant consequences for youth.23

Students who are suspended from school have lower course completion rates and standardized test scores, are more likely to repeat a grade, and are less likely to graduate from high school or go to college.24,25,26,27 These deleterious effects occur not only with out-of-school removal but also with in-school suspension (ISS). A study of high school students found that ISS was negatively associated with students’ grade point averages and high school completion.28 Students who are suspended or expelled are also more likely to come in contact with the criminal justice system and to be arrested.27,29,30 The effects of school suspension are cumulative: the more times a student is suspended, the lower their odds of school completion and postsecondary enrollment and the higher their odds of arrest.29,31

Due to the significant negative effects of punitive disciplinary actions, many schools are moving away from zero-tolerance policies in favor of less severe measures like peer mediation or positive behavioral reinforcement, especially for minor infractions.32,33

Mentoring programs have been shown to have a positive impact both on student behavior and on disciplinary referrals. One study found that Big Brothers Big Sisters school-based mentoring was effective in decreasing fighting at school, as well as principal’s office visits and suspensions.34 Evaluations of two middle-school mentoring programs also found a significant reduction in disciplinary referrals, in- and out-of-school suspensions, and the number of serious infractions committed on school property for mentored youth compared to their non-mentored peers.35,36

 

Substance Use

Substance use by adolescents is a national concern.37 By the time they are seniors, just over 60 percent of high school students will have tried alcohol, nearly half will have taken an illegal drug, over a quarter will have smoked a cigarette, and 18 percent will have used a prescription drug for a nonmedical purpose.38 The desire for new experiences, an attempt to deal with problems or perform better in school, and simple peer pressure have been offered as some of the reasons adolescents begin to experiment with substance use.39 Mentors can provide a safe context for discussions and disclosures related to substance use and simultaneously transmit prosocial values, advice, and perspectives about the dangers of substance use.40,41 Dunn and colleagues conducted a review of 15 studies on mentoring and substance use in adolescents.37 Findings were inconsistent across the reviewed studies (only 6 of the 15 showed effects of mentoring on substance use). However, the authors concluded that higher-quality mentoring programs (e.g., providing mentor training and support) and more exposure to mentoring (i.e., mentoring relationships lasting more than one year) were linked with stronger effects in this area. Similarly, two systematic reviews conducted by Thomas and colleagues42,43 found evidence for favorable effects of mentoring on substance use (as did a meta-analysis by Tolan and colleagues1), but, again, findings were inconsistent across studies and were stronger for alcohol use than for other drug or tobacco use. Among several issues in need of clarification in this area is the question of whether mentoring works equally well in preventing the onset of substance use or in curbing existing use.

 

Opioid Misuse

In 2017, the U.S. Department of Health and Human Services declared the opioid crisis (i.e., the misuse of, addiction to, and deaths related to opioids) a public health emergency.44 This crisis involves significant numbers of youth. In a 2016 national survey, 3.6 percent of 12- to 17-year-old youth reported misusing opioids (i.e., using opioids without a doctor’s prescription or differently than how a doctor prescribed them) over the past year.45 In 2015 alone, 4,235 youth and young adults ages 15 to 24 died from a drug-related overdose; over half of these were attributable to opioids.46 Nonmedical prescription opioid use in adolescence is also linked with several negative outcomes later in life such as substance use disorder symptoms47 and the transition to heroin use.48 In adults, opioid misuse is also predictive of both mood and anxiety disorders.49

The misuse of prescription opioids has only recently become a large-scale public concern. Thus, the field has few examples of scales used to measure it in adolescents, and differences in how these scales are framed and administered to youth (e.g., in their definitions of misuse, use of the term “opioids,” inclusion of specific types of opioids or pictures of sample pills) make interpretation and comparison of findings and trends across studies difficult (see Voon & Kerr50; Palamar et al.51). To date, there is also very little published information on youth mentoring programs specifically targeting opioid misuse or the potential for mentoring relationships to reduce risk for opioid misuse among young persons. Yet, there is much potential for these types of programs, as well as those supporting mentoring opportunities for youth more generally, to contribute to approaches for tackling this problem.52

 

School Absenteeism and Truancy

Students miss school for many reasons. While occasionally missing class for a valid reason (e.g., illness or a dentist appointment) is not typically considered a “problem behavior,” reasons for absenteeism are not always clear. Moreover, missing school—whether excused or unexcused—is negatively associated with student academic outcomes. The effects of absences begin as early as prekindergarten and can establish a trajectory of missed school, reduced academic achievement, and slower school progression in later childhood and adolescence.53,54,55 Absenteeism is associated with lower standardized test scores, school dropout, and decreased rates of high school graduation and enrollment in postsecondary education.56 Students with attendance problems exhibit lower school efficacy, more depressive symptoms, and lower self-esteem than those with regular attendance.57

The negative effects of absenteeism are particularly acute for students who are chronically absent and those who are truant. The U.S. Department of Education defines chronic absenteeism as missing more than 10% of the school year (18 days in a 180-day school year).58 Although truancy is generally considered to be any unexcused or unverified absence from school, the number of unexcused absences that a student can accrue before they are legally considered truant varies by state and school district.59 Unexcused absenteeism—whether or not it meets this legal threshold—is a strong predictor of academic failure, school dropout, substance use, and criminal activity.59,60
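The federal threshold described above can be illustrated with a short calculation. This is a sketch only: the 180-day default, the inclusive comparison (which treats 18 of 180 days as qualifying, consistent with the parenthetical above), and the function name are assumptions for illustration, since the exact rule varies by state and district.

```python
def is_chronically_absent(days_missed: int, school_year_days: int = 180) -> bool:
    """Flag chronic absenteeism using the 10%-of-school-year threshold.

    The inclusive comparison and 180-day default are assumptions for
    illustration; actual definitions vary by jurisdiction.
    """
    return days_missed >= 0.10 * school_year_days

print(is_chronically_absent(18))  # True: 18 of 180 days is 10%
print(is_chronically_absent(17))  # False: below the threshold
```

Note that truancy determinations are separate: they depend on how many of those absences are unexcused and on the legal threshold in a given state or district.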

In some states, students who are habitually truant can be suspended or expelled from school,61 further reducing learning opportunities and increasing the likelihood of negative outcomes. Additionally, because truancy is considered a status offense in most states (i.e., a noncriminal act that is a law violation only because of a youth’s status as a minor), students who are truant may face additional legal consequences and juvenile justice system involvement.59 Thus, the link between truancy and negative outcomes is complex—truancy itself may not necessarily cause delinquent behavior, but may lead to other processes that increase the likelihood of these outcomes. For example, being involved in a court appearance during high school is associated with increased likelihood that a youth will drop out of school, independent of their involvement in delinquency.62 In addition, delinquent youth are more likely to be arrested at times when they are kept out of school through suspensions, expulsions or truancy.63 Such contact with the juvenile justice system may increase youth’s risk of moving deeper into the system and, in turn, increase their risk for future delinquency.64,65 These links have led to calls for rethinking current labeling of, and response to, truancy, in favor of more preventive approaches.65

Studies suggest that mentoring programs may be an effective strategy to reduce absenteeism and truancy, although the effect varies depending on program design and student characteristics, such as risk and grade level.66,67,68 For example, mentoring effects may be especially strong in school-based programs, where school attendance may be particularly salient for the relationship. In fact, in a systematic review of three large-scale, rigorous evaluations of school-based mentoring programs, a decrease in unexcused absences was the outcome with the largest estimated effect across all of the outcomes tested.69 There is also some evidence suggesting the potential for community-based mentoring programs to decrease absenteeism and truancy.70

Programs interested in assessing student absences and truancy can survey mentees about the number of times they missed school in a specific period, or they can collect official records of absenteeism. Official records include student report cards, which can be collected directly from students or parents, and school administrative data, which can be requested from schools or school districts once appropriate data sharing agreements are developed (more information on data sharing can be found here). Surveying mentees about absenteeism is often easier and less resource-intensive for mentoring programs, as many programs routinely administer surveys to program participants (see the Recent and Lifetime Truancy Scale for a suggested measure on self-reported unexcused absences/truancy). However, research has shown that self-reports of school absences differ from school records in important ways. Students tend to under-report the number of days missed, and under-reporting may be greatest among students with frequent unexcused absences, perhaps in part due to the negative consequences that often result from truancy.71,72 Moreover, because ethnic and racial minorities often face greater penalties for truancy (i.e., greater likelihood of suspension or expulsion), such students may be more hesitant to self-report truant behavior.57

Administrative data also have shortcomings, especially when trying to distinguish between excused and unexcused absences. For example, school data may overestimate the number of unexcused absences among particular groups (e.g., students without health insurance are often unable to get a doctor’s note to document illness). Students may also routinely miss specific class periods without being counted as absent for the day.57 (See School Attendance Records for more considerations when collecting administrative data.) Still, many funders and other stakeholders are interested in official records to support program impact. Programs should think carefully about these tradeoffs when considering absenteeism and truancy as potential outcome measures.

  • Cited Literature
    1. Tolan, P., Henry, D., Schoeny, M., & Bass, A. (2008). Mentoring interventions to affect juvenile delinquency and associated problems. Chicago, IL: University of Chicago, Institute for Juvenile Research. Retrieved from http://www.campbellcollaboration.org/lib/project/48/
    2. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    3. Welsh, B. C., Loeber, R., Stevens, B. R., Stouthamer-Loeber, M., Cohen, M.A., & Farrington, D. P. (2008). Costs of juvenile crime in urban areas: A longitudinal perspective. Youth Violence and Juvenile Justice, 6, 3–27. http://dx.doi.org/10.1177/1541204007308427
    4. Cohen, M. A., & Piquero, A. R. (2009). New evidence on the monetary value of saving a high risk youth. Journal of Quantitative Criminology, 25, 25–49. http://dx.doi.org/10.1007/s10940-008-9057-3
    5. Kirk, D. S. (2006). Examining the divergence across self-report and official data sources of inferences about the adolescent life-course of crime. Journal of Quantitative Criminology, 22, 107–129. https://doi.org/10.1007/s10940-006-9004-0
    6. Krohn, M. D., Lizotte, A. J., Phillips, M. D., Thornberry, T. P., & Bell, K. A. (2013). Explaining systematic bias in self-reported measures: Factors that affect the under- and over-reporting of self-reported arrests. Justice Quarterly, 30, 501–528. https://doi.org/10.1080/07418825.2011.606226
    7. Thornberry, T. P., & Krohn, M. D. (2000). The self-report method of measuring delinquency and crime. In D. Duffee (Ed.), Criminal justice 2000 (pp. 33–84). Washington, DC: U.S. Department of Justice, National Institute of Justice.
    8. Olweus, D. (1993). Bullying at school: What we know and what we can do. Cambridge, MA: Blackwell.
    9. Dishion, T. J., Patterson, G. R., Stoolmiller, M., & Skinner, M. L. (1991). Family, school, and behavioral antecedents to early adolescent involvement with antisocial peers. Developmental Psychology, 27, 172–180. http://dx.doi.org/10.1037/0012-1649.27.1.172
    10. Patterson, G. R., Capaldi, D. M., & Bank, L. (1991). An early starter model for predicting delinquency. In D. J. Pepler & K. H. Rubin (Eds.), The development and treatment of child aggression (pp. 139–168). Hillsdale, NJ: Erlbaum.
    11. Coie, J. D., & Kupersmidt, J. B. (1983). A behavioral analysis of emerging social status in boys’ groups. Child Development, 54, 1400–1416. http://dx.doi.org/10.2307/1129803
    12. Panella, D., & Henggeler, S. W. (1986). Peer interactions of conduct-disordered, anxious-withdrawn, and well-adjusted black adolescents. Journal of Abnormal Child Psychology, 14, 1–11. http://dx.doi.org/10.1007/BF00917217
    13. Loeber, R. (1990). Development and risk factors of juvenile antisocial behavior and delinquency. Clinical Psychology Review, 10, 1–42. http://dx.doi.org/10.1016/0272-7358(90)90105-J
    14. Lansford, J. E., Erath, S., Yu, T., Pettit, G. S., Dodge, K. A., & Bates, J. E. (2008). The developmental course of illicit substance use from age 12 to 22: Links with depressive, anxiety, and behavior disorders at age 18. The Journal of Child Psychology and Psychiatry, 49, 877–885. http://dx.doi.org/10.1111/j.1469-7610.2008.01915.x
    15. Yampolskaya, S., Brown, E. C., & Greenbaum, P. E. (2002). Early pregnancy among adolescent females with serious emotional disturbances: Risk factors and outcomes. Journal of Emotional and Behavioral Disorders, 10, 108–115. http://dx.doi.org/10.1177/10634266020100020501
    16. Kim-Cohen, J., Caspi, A., Moffitt, T. E., Harrington, H., Milne, B. J., & Poulton, R. (2003). Prior juvenile diagnoses in adults with mental disorder: Developmental follow-back of a prospective-longitudinal cohort. Archives of General Psychiatry, 60, 709–717. http://dx.doi.org/10.1001/archpsyc.60.7.709
    17. Caspi, A., Wright, B. R. E., Moffitt, T. E., & Silva, P. A. (1998). Early failure in the labor market: Childhood and adolescent predictors of unemployment in the transition to adulthood. American Sociological Review, 63, 424–451. https://www.jstor.org/stable/2657557
    18. Patterson, G. R., Reid, J. B., & Dishion, T. J. (1992). Antisocial boys. Eugene, OR: Castalia.
    19. American Psychiatric Association (2000). The diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
    20. Frick, P. J., Kamphaus, R. W., Lahey, B. B., Loeber, R., Christ, M. A. G., Hart, E. L., & Tannenbaum, L. E. (1991). Academic underachievement and the disruptive behavior disorders. Journal of Consulting and Clinical Psychology, 59, 289–294. http://dx.doi.org/10.1037/0022-006X.59.2.289
    21. Hanish, L. D., & Guerra, M. G. (2000). Predictors of peer victimization among urban youth. Social Development, 9, 521–543. http://dx.doi.org/10.1111/1467-9507.00141
    22. Schaefer, C. M., Petras, H., Ialongo, M., Masyn, K. E., Hubbard, S., Poduska, J., & Kellam, S. (2006). A comparison of girls and boys aggressive-disruptive behavior trajectories across elementary school: Prediction to young adult antisocial outcomes. Journal of Consulting and Clinical Psychology, 74, 500–510. http://dx.doi.org/10.1037/0022-006X.74.3.500
    23. Nishioka, V. (2017). School Discipline Data Indicators: A Guide for Districts and Schools. REL 2017-240. Regional Educational Laboratory Northwest.
    24. Chu, E. M., & Ready, D. D. (2018). Exclusion and urban public high schools: Short-and long-term consequences of school suspensions. American Journal of Education, 124 (4), 479–509. https://doi.org/10.1086/698454
    25. Marchbanks III, M. P., Blake, J. J., Smith, D., Seibert, A. L., Carmichael, D., Booth, E. A., & Fabelo, T. (2014). More than a drop in the bucket: The social and economic costs of dropouts and grade retentions associated with exclusionary discipline. Journal of Applied Research on Children: Informing Policy for Children at Risk, 5 (2), 17.
    26. Raffaele Mendez, L. M. (2003). Predictors of suspension and negative school outcomes: A longitudinal investigation. New directions for youth development, 2003 (99), 17–33. https://doi.org/10.1002/yd.52
    27. Rosenbaum, J. (2020). Educational and criminal justice outcomes 12 years after school suspension. Youth & Society, 52 (4), 515–547. https://doi.org/10.1177/0044118X17752208
    28. Cholewa, B., Hull, M. F., Babcock, C. R., & Smith, A. D. (2018). Predictors and academic outcomes associated with in-school suspension. School Psychology Quarterly, 33 (2), 191. https://doi.org/10.1037/spq0000213
    29. Mowen, T., & Brent, J. (2016). School discipline as a turning point: The cumulative effect of suspension on arrest. Journal of Research in Crime and Delinquency, 53 (5), 628–653. https://doi.org/10.1177/0022427816643135
    30. Fabelo, T., Thompson, M. D., Plotkin, M., Carmichael, D., Marchbanks, M. P., & Booth, E. A. (2011). Breaking schools’ rules: A statewide study of how school discipline relates to students’ success and juvenile justice involvement. New York: Council of State Governments Justice Center.
    31. Balfanz, R., Byrnes, V., & Fox, J. H. (2015). Sent home and put off track. Closing the school discipline gap: Equitable remedies for excessive exclusion, 17–30.
    32. Camacho, K. A., & Krezmien, M. P. (2020). A statewide analysis of school discipline policies and suspension practices. Preventing School Failure: Alternative Education for Children and Youth, 64 (1), 55–66. https://doi.org/10.1080/1045988x.2019.1678010
    33. Rafa, A. (2019). The status of school discipline in state policy. Education Commission of the States. https://www.ecs.org/wp-content/uploads/The-Status-of-School-Discipline-in-State-Policy.pdf
    34. Herrera, C., Grossman, J. B., Kauh, T. J., Feldman, A. F., McMaken, J., & Jucovy, L. Z. (2007). Making a Difference in Schools: The Big Brothers Big Sisters School-Based Mentoring Impact Study. Philadelphia, PA: Public/Private Ventures. Retrieved from http://bit.ly/2c3qxrP
    35. Gordon, J., Downey, J., & Bangert, A. (2013). Effects of a School-Based Mentoring Program on School Behavior and Measures of Adolescent Connectedness. School Community Journal, 23 (2), 227–250.
    36. Rollin, S. A., Kaiser‐Ulrey, C., Potts, I., & Creason, A. H. (2003). A school‐based violence prevention model for at‐risk eighth grade youth. Psychology in the Schools, 40 (4), 403–416. https://doi.org/10.1002/pits.10111
    37. Dunn, S., Jones, J., Mekjavich, E., Mukai, G., & Varenas, D. (2012). “Understanding the impact of mentoring on substance abuse patterns in adolescents.” Pediatrics CATs. Paper 16. http://commons.pacificu.edu/otpeds/16
    38. Johnston, L. D., O’Malley, P. M., Miech, R. A., Bachman, J. G., & Schulenberg, J. E. (2017). Monitoring the Future national survey results on drug use, 1975-2016: Overview, key findings on adolescent drug use. Ann Arbor: Institute for Social Research, The University of Michigan. http://www.monitoringthefuture.org/pubs/monographs/mtf-overview2016.pdf
    39. NIDA. (2014, January 14). Principles of Adolescent Substance Use Disorder Treatment: A Research-Based Guide. Retrieved July 14, 2017, from https://www.drugabuse.gov/publications/principles-adolescent-substance-use-disorder-treatment-research-based-guide. https://d14rmgtrwzf5a.cloudfront.net/sites/default/files/podata_1_17_14.pdf
    40. Darling, N., Hamilton, S. F., & Niego, S. (1994). Adolescents’ relations with adults outside the family. In R. Monemayor & G. R. Adams (Eds.), Personal relationships during adolescence: Advances in adolescent development, pp 216–235. Thousand Oaks, CA: Sage Publications.
    41. Rhodes, J. E. (2002). Stand by me. The risks and rewards of mentoring today’s youth. Cambridge, MA: Harvard University Press.
    42. Thomas, R. E., Lorenzetti, D. L., & Spragins, W. (2011). Mentoring adolescents to prevent drug and alcohol use. Cochrane Database of Systematic Reviews (11). https://bit.ly/33JX6pS
    43. Thomas, R.E., Lorenzetti, D.L., & Spragins, W. (2013). Systematic Review of Mentoring to Prevent or Reduce Alcohol and Drug Use by Adolescents. Academic Pediatrics, 13, 292–299. doi: http://dx.doi.org/10.1016/j.acap.2013.03.007
    44. U.S. Department of Health and Human Services (2017). HHS Acting Secretary Declares Public Health Emergency to Address National Opioid Crisis. Retrieved from https://www.hhs.gov/about/news/2017/10/26/hhs-acting-secretary-declares-public-health-emergency-address-national-opioid-crisis.html
    45. Substance Abuse and Mental Health Services Administration. (2017). Key substance use and mental health indicators in the United States: Results from the 2016 National Survey on Drug Use and Health (HHS Publication No. SMA 17-5044). Rockville, MD: Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration. Retrieved from https://www.samhsa.gov/data/.
    46. The National Institute on Drug Abuse Blog Team. (2017). Drug overdoses in youth. Retrieved from https://teens.drugabuse.gov/drug-facts/drug-overdoses-youth
    47. McCabe, Se. E., Veliz, P. T., Boyd, C. J., Schepis, T. S., McCabe, V. V., & Schulenberg, J. E. (2019). A prospective study of nonmedical use of prescription opioids during adolescence and subsequent substance use disorder symptoms in early midlife. Drug and Alcohol Dependence, 194, 377–385. https://doi.org/10.1016/j.drugalcdep.2018.10.027
    48. Cerdá, M., Santaella, J., Marshall, B. D. L., Kim, J. H., & Martins, S. S. (2015). Nonmedical prescription opioid use in childhood and early adolescence predicts transitions to heroin use in young adulthood: A national study. The Journal of Pediatrics, 167, 605–612.e2. https://doi.org/10.1016/j.jpeds.2015.04.071
    49. Martins, S. S., Fenton, M. C., Keyes, K. M., Blanco, C., Zhu, H., & Storr, C. L. (2012). Mood and anxiety disorders and their association with non-medical prescription opioid use and prescription opioid-use disorder: longitudinal evidence from the National Epidemiologic Study on Alcohol and Related Conditions. Psychological Medicine, 42(6), 1261–1272. http://doi.org/10.1017/S0033291711002145
    50. Voon, P., & Kerr, T. (2013). “Nonmedical” prescription opioid use in North America: A call for priority action. Substance Abuse Treatment, Prevention, and Policy, 8, 39. https://doi.org/10.1186/1747-597X-8-39
    51. Palamar, J. J., Shearston, J. A., Dawson, E. W., Mateu-Gelabert, P., & Ompad, D. C. (2016). Nonmedical opioid use and heroin use in a nationally representative sample of US high school seniors. Drug and Alcohol Dependence, 158, 132–138. doi:10.1016/j.drugalcdep.2015.11.005
    52. Garringer, M. (2018). The promise and potential of mentors in combating the opioid crisis. National Mentoring Resource Center Blog.
    53. Connolly, F., and Olson, L. S. (2012). Early Elementary Performance and Attendance in Baltimore City Schools’ Pre-Kindergarten and Kindergarten. Baltimore, MD: Baltimore Education Research Consortium. Retrieved November 28, 2016 from http://files.eric.ed.gov/fulltext/ED535768.pdf
    54. Gottfried, M. A. (2014). Chronic Absenteeism and Its Effects on Students’ Academic and Socioemotional Outcomes. Journal of Education for Students Placed at Risk (JESPAR), 19 (2), 53–75. Retrieved April 24, 2017 from http://dx.doi.org/10.1080/10824669.2014.96269
    55. London, R. A., Sanchez, M., & Castrechini, S. (2016). The Dynamics of Chronic Absence and Student Achievement. Education Policy Analysis Archives, 24 (112). Retrieved April 14, 2017 from http://dx.doi.org/10.14507/epaa.24.2471
    56. Balfanz, R., and Byrnes, V. (2012). The Importance of Being in School: A Report on Absenteeism in the Nation’s Public Schools. Baltimore, MD: Johns Hopkins University Center for Social Organization of Schools. Retrieved August 19, 2016, from http://new.every1graduates.org/the-importance-of-being-in-school/
    57. Keppens, G., Spruyt, B., & Dockx, J. (2019). Measuring school absenteeism: Administrative attendance data collected by schools differ from self-reports in systematic ways. Frontiers in Psychology, 10, 2623. https://doi.org/10.3389/fpsyg.2019.02623
    58. Attendance Works and Healthy Schools Campaign (2015). Mapping the Early Attendance Gap. Retrieved August 19, 2016, from http://www.attendanceworks.org/research/mapping-the-gap/.
    59. McKinney, S. (2013). Truancy: A research brief. Status Offence Reform Center. http://www.modelsforchange.net/publications/679/Truancy_A_Research_Brief.pdf
    60. McCluskey, C. P., Bynum, T. S., & Patchin, J. W. (2004). Reducing chronic absenteeism: An assessment of an early truancy initiative. Crime and Delinquency, 50, 214–234. https://doi.org/10.1177/0011128703258942
    61. National Center of Safe Supportive Learning Environments (n.d.). School Discipline Laws & Regulations by Category & State. https://safesupportivelearning.ed.gov/discipline-compendium/choose-state/results?field_sub_category_value=Attendance+and+truancy
    62. Sweeten, G. (2006). Who will graduate? Disruption of high school education by arrest and court involvement. Justice Quarterly, 23, 462–480. https://doi.org/10.1080/07418820600985313
    63. Monahan, K. C., VanDerhei, S., Bechtold, J., & Cauffman, E. (2014). From the school yard to the squad car: School discipline, truancy, and arrest. Journal of Youth and Adolescence, 43, 1110–1122. https://doi.org/10.1007/s10964-014-0103-1
    64. McAra, L., & McVie, S. (2007). Youth justice? The impact of system contact on patterns of desistance from offending. European Journal of Criminology, 4, 315–345. https://doi.org/10.1177/1477370807077186
    65. Ricks, A., & Esthappan, S. (2018, August 20). States are looking beyond the juvenile justice system to address school truancy [Blog post]. Retrieved from https://www.urban.org/urban-wire/states-are-looking-beyond-juvenile-justice-system-address-school-truancy
    66. Rhodes, J. E., Grossman, J. B., & Resch, N. L. (2000). Agents of change: Pathways through which mentoring relationships influence adolescents’ academic adjustment. Child development, 71 (6), 1662–1671. https://doi.org/10.1111/1467-8624.00256
    67. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12 (2), 57–91. https://doi.org/10.1177/1529100611414806
    68. Christenson, S. L., & Pohl, A. J. (2020). The Relevance of Student Engagement: The Impact of and Lessons Learned Implementing Check & Connect. In Student Engagement (pp. 3–30). https://doi.org/10.1007/978-3-030-37285-9_1
    69. Wheeler, M. E., Keller, T. E., & DuBois, D. L. (2010). Review of Three Recent Randomized Trials of School-Based Mentoring: Making Sense of Mixed Findings. Social Policy Report. Volume 24, Number 3. Society for Research in Child Development. http://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=1063&context=socwork_fac
    70. Grossman, J. B., & Tierney, J. P. (1998). Does mentoring work? An impact study of the Big Brothers Big Sisters program. Evaluation review, 22(3), 403–426. https://doi.org/10.1177/0193841X9802200304
    71. Hancock, K. J., Gottfried, M. A., & Zubrick, S. R. (2018). Does the reason matter? How student‐reported reasons for school absence contribute to differences in achievement outcomes among 14–15 year olds. British Educational Research Journal, 44 (1), 141–174. https://doi.org/10.1002/berj.3322
    72. Teye, A. C., & Peaslee, L. (2015). Measuring educational outcomes for at-risk children and youth: Issues with the validity of self-reported data. Child & Youth Care Forum, 44 (6), 853–873. https://doi.org/10.1007/s10566-015-9310-5

Problem Behavior

Delinquent Behavior

This scale of the PBFS consists of 8 items, each referring to a different behavior related to delinquency. Participants are asked to report how many times in the last month they’ve engaged in each behavior.

Sample items include: “Taken something from a store without paying for it” and “Written things or sprayed paint on walls or sidewalks or cars where you were not supposed to.” Each item is rated on a 6-point scale: 0, 1-2, 3-5, 6-9, 10-19, or 20 or more (times).

Scale

Problem Behavior Frequency Scale (PBFS) — the Self-Report Delinquency Scale

What It Measures:

The frequency with which a youth engages in delinquent behavior.

Intended Age Range

11- to 14-year-olds; the items appear applicable to older adolescents as well.

Rationale
The Self-Report Delinquency Scale of the PBFS is relatively brief, has evidence of reliability and validity in a school-based sample and a sub-sample of high-risk students, and provides information on the frequency of non-violent delinquent behavior, a form of delinquency that may be more commonly exhibited by youth participating in many mentoring programs.

Cautions

This measure does not assess arrests, encounters with the law, or violent delinquent behavior.

Special Administration Information

Care should be taken to ensure that youth understand that they are reporting on the frequency of their engagement in the behaviors asked about over the last month.

How to Score

Each item is scored on a six-point scale from 1 (None) to 6 (20 or more) and a total score is derived by summing across the items.
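The scoring rule above can be sketched in a few lines of code. This is a minimal illustration rather than an official scoring script: the category labels mirror the response options described earlier, and the function name is an assumption for the example.

```python
# Minimal sketch of scoring the PBFS Self-Report Delinquency Scale.
# The six response categories map to item scores 1 (None) through 6
# (20 or more), and the total is the sum across the 8 items.

CATEGORIES = ["0", "1-2", "3-5", "6-9", "10-19", "20 or more"]
SCORES = {label: i + 1 for i, label in enumerate(CATEGORIES)}

def pbfs_delinquency_total(responses):
    """Sum the item scores for a youth's 8 responses."""
    if len(responses) != 8:
        raise ValueError("expected responses to all 8 items")
    return sum(SCORES[r] for r in responses)

# Totals therefore range from 8 (no reported behaviors) to 48.
print(pbfs_delinquency_total(["0"] * 8))           # 8
print(pbfs_delinquency_total(["20 or more"] * 8))  # 48
```

In practice, programs should also decide in advance how to handle missing item responses (e.g., excluding incomplete surveys), since summing across fewer than 8 items would understate a youth’s total.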

How to Interpret Findings

A higher score reflects a higher self-reported frequency of delinquent behavior.

Access and Permissions

This scale is available for non-commercial use with no charge and is included in the following resource from the National Center for Injury Prevention and Control of the Centers for Disease Control and Prevention: Measurement of Violence-Related Attitudes, Behaviors, and Influences Among Youth: A Compendium of Assessment Tools, 2nd edition (see p. 215).

Alternatives

The revised Self-Reported Delinquency Scale (SRD) is a good alternative for those interested in measuring both violent and nonviolent forms of delinquency. The SRD-revised is a 9-item brief version of the original 24-item SRD scale. Sample items include: “been involved in gang fights,” “hit one of your parents,” and “used force to get money or things from a teacher or adult at school.”

  • Cited Literature
    1. Dahlberg, L. L., Toal, S. B., Swahn, M., & Behrens, C. B. (2005). Measuring violence-related attitudes, behaviors, and influences among youths: A compendium of assessment tools (2nd ed.). Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. (Available here.)

Problem Behavior

Aggression

This measure consists of 5 items, each referring to aggressive behavior. Youth are asked to report how many times in the last month they’ve engaged in each behavior.

Sample items include: “I called other students names,” “I pushed, shoved, slapped, or kicked other students,” and “I threatened to hit or hurt another student.” Response options are Never, 1-2 times, 3-4 times, or 5 or more times.

Scale

Modified Aggression Scale (MAS) — Bullying subscale

What It Measures:

A youth’s level of bullying behavior.

Intended Age Range

11- to 14-year-olds; the items appear applicable to older adolescents as well.

Rationale

Several developmentally appropriate measures of aggression exist for children and adolescents. The Bullying subscale of the MAS was selected because of its brevity, developmental applicability, evidence of reliability and validity, and assessment of both physical and verbal overt aggression.

Cautions

The Bullying subscale of the MAS does not assess relational aggression (i.e., harm caused by damaging someone’s relationships or social status). Although overt and relational aggression are highly correlated, available evidence suggests there is utility in distinguishing between these and other forms of aggressive behavior. Nor does the measure distinguish between reactive and proactive/instrumental aggression, which research indicates have different functions and predict different social, emotional, and behavioral outcomes. Finally, those interested in assessing child aggression through a self-report measure should consider the characteristics of the youth involved in their assessment; there is some evidence to suggest that certain children (e.g., those with ADHD) tend to underreport their level of aggression.

Special Administration Information

None.

How to Score

Each item is scored on a 4-point scale from 0 (Never) to 3 (5 or more times). The total score is derived by summing across the items.

How to Interpret Findings

Higher scores reflect a higher frequency of self-reported bullying behavior.

Access and Permissions

The measure is available for non-commercial use with no charge and is made available here.

Alternatives

The 36-item Forms and Functions of Aggression (FFA) measure is a good alternative for those who are interested in a more in-depth assessment of aggression that distinguishes between overt, relational, proactive, and reactive forms of aggression. Sample items include: “When I’m hurt by someone, I often fight back” (reactive), “I often tell my friends to stop liking someone to get what I want” (relational), and “I often start fights to get what I want” (proactive).

Problem Behavior

School Misbehavior

This measure consists of 5 items. Sample items include “I sometimes annoy my teacher during class,” “I sometimes don’t follow my teacher’s directions during class,” and “I sometimes disturb the lesson that is going on in class.” Each item is rated on a 5-point scale ranging from Not at all true to Very true.

Scale

Patterns of Adaptive Learning Scales (PALS) – Disruptive Behavior subscale

What It Measures:

Students’ engagement in behaviors that disrupt or disturb the classroom.

Intended Age Range

12- to 18-year-olds; may be appropriate for use with older elementary school students as well.

Rationale

This scale was selected based on its brevity, ease of administration, appropriateness for use with school-aged youth, and promising evidence of reliability and validity.

Cautions

This measure is not intended to assess the presence or absence of school-related bullying, aggression, or violence. Care should be taken not to interpret the resulting scores as referring to specific disciplinary infractions; rather, the scores reflect behaviors that disrupt or disturb the classroom and may be associated with other behaviors.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not at all true) to 5 (Very true). A total score is computed by averaging across all items.

How to Interpret Findings

Higher scores reflect greater levels of self-reported disruptive behavior.

Access and Permissions

This scale is available for non-commercial use with no charge and can be obtained here (scale items appear on p. 26 and an example of how to format the items with the response choices can be found on p. 40).

Alternatives

The Disruptive Behavior Scale Professed by Students (DBS-PS) is a 16-item self-report scale that measures transgression of school rules, aggression toward other students, and aggression toward school authorities. This measure may be a good alternative for programs interested in a more comprehensive array of problematic behaviors in the school context. The measure and its documentation can be found here.

  • Cited Literature
    1. Midgley, C., Maehr, M. L., Hruda, L. Z., Anderman, E., Anderman, L., Freeman, K. E., … & Urdan, T. (2000). Manual for the Patterns of Adaptive Learning scales. Ann Arbor, MI: University of Michigan. Retrieved from http://www.umich.edu/~pals/PALS%202000_V12Word97.pdf

Problem Behavior

Substance Use

The scale consists of 9 items in 2 domains: the first three questions focus on alcohol and substance use; the remaining six focus on substance use risk behavior.

Sample items in the first domain include: “During the past 12 months, did you smoke any marijuana or hashish?” Sample items in the second domain include: “Have you ever ridden in a car driven by someone (including yourself) who was ‘high’ or had been using alcohol or drugs?” Youth respond with Yes or No. Youth who respond No to all three of the substance use questions receive a score of 0 and do not proceed to the risk behavior questions. Those responding Yes to any of these questions are also asked the risk behavior questions.

Scale

The CRAFFT Screening Questionnaire

What It Measures:

The scale identifies adolescents who are at high risk for substance and alcohol abuse.

Intended Age Range

12- to 18-year-olds.

Rationale

This measure was selected based on its brevity, availability in 10 languages, and ability to identify young people who are at risk for substance use problems.

Cautions

The CRAFFT is good at identifying youth at risk for substance use problems but is less able to identify the specific type of substance use problem a youth may be experiencing.

Special Administration Information

This measure can be administered via self-report or in an interview format.

How to Score

Each of the six substance use risk behavior questions is scored 1 for a “Yes” response and 0 for a “No” response; the item scores are summed to yield a total score.
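The skip pattern and summing described above can be sketched as follows. This is an illustrative sketch, not official CRAFFT scoring code; the function name and the True/False encoding of Yes/No answers are our own.

```python
def score_crafft(part_a_answers, part_b_answers):
    """Score the CRAFFT as described above (True = Yes, False = No).

    part_a_answers: the three substance use questions.
    part_b_answers: the six risk behavior questions, asked only when
    any Part A answer is Yes.
    """
    if not any(part_a_answers):   # all "No": score 0, Part B is skipped
        return 0
    return sum(1 for answer in part_b_answers if answer)

# A youth answering No to all three substance use questions scores 0.
print(score_crafft([False, False, False], []))  # 0
# Two Yes answers among the risk behavior questions yield a score of 2.
print(score_crafft([True, False, False], [True, True, False, False, False, False]))  # 2
```

A returned score of 2 or more would, per the interpretation guidance, indicate a need for additional assessment.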

How to Interpret Findings

A score of 2 or more indicates a need for additional assessment.

Access and Permissions

The measure is available for no fee here.

Alternatives

There are a few well-known epidemiologic measures of alcohol and substance use, including the Monitoring the Future Survey (part of an ongoing study of the behaviors, attitudes, and values of American youth and young adults) and the National Survey on Drug Use and Health (NSDUH), which provides national and state-level data on the use of tobacco, alcohol, and illicit drugs and on mental health in the United States. Although these measures offer useful comparison data across time and demographic populations, they require more time to administer.

  • Cited Literature
    1. Knight, J. R., Shrier, L. A., Bravender, T. D., Farrell, M., Vander Bilt, J., & Shaffer, H. J. (1999). A new brief screen for adolescent substance abuse. Archives of Pediatrics & Adolescent Medicine, 153(6), 591-596. DOI: 10.1001/archpedi.153.6.591

Problem Behavior

Opioid Misuse

The YRBS includes three questions on opioid use: lifetime misuse of prescription pain medication (i.e., “use of prescription pain medicine without a doctor’s prescription or differently than how a doctor told you to use it”); 30-day misuse of prescription pain medication; and lifetime heroin use. Each item is rated on a 6-point scale: 0 times; 1 or 2 times; 3 to 9 times; 10 to 19 times; 20 to 39 times; 40 or more times.

Scale

Youth Risk Behavior Survey Questionnaire (YRBS) from the Youth Risk Behavior Surveillance System – Opioid misuse items

What It Measures:

Lifetime and 30-day misuse of prescription pain medication and heroin.

Intended Age Range

The items are intended for use with middle and high school students.

Rationale

This measure was selected based on its brevity, appropriateness for use with youth, coverage of use of different types of opioids, availability of national norms, and evidence of validity.

Cautions

This measure does not include pictures of different pain medications, which could help youth distinguish among types of prescription medication (e.g., opioids versus methamphetamines). It also does not assess accidental exposure to opioid drugs, such as might occur through use of a substance that is contaminated with an opioid drug.

Special Administration Information

The items could be adapted to assess use of specific opioids, such as by replacing “prescription pain medicine” with the names of specific drugs of interest. However, such items are not included on the YRBS, and thus validity data and national norms would not be available.

How to Score

Each item is scored from 1 (0 times) to 6 (40 or more times) to yield three descriptive measures of opioid misuse.

How to Interpret Findings

Higher scores indicate more frequent opioid misuse.

Access and Permissions

The YRBS is available for use with no charge here. The specific questions on opioid misuse are available here.

Alternatives

The Monitoring the Future survey includes questions asking about 30-day and/or 12-month use of specific opioids, including OxyContin, Vicodin, non-prescription cough or cold medicine, and heroin. The full survey codebook for the 8th and 10th grade surveys can be found here. The National Institute on Drug Abuse has also created a screener to assist clinicians in screening adult patients for drug use that appears potentially appropriate for use with adolescents. The screener asks about frequency of opioid (and other drug) use, followed by questions designed to assess risk for having or developing a substance use disorder and suggested next steps. Information on the screener can be found here.

  • Cited Literature
    1. Centers for Disease Control and Prevention (2018). 2019 Youth Risk Behavior Survey Questionnaire. Retrieved from www.cdc.gov/yrbs.

Problem Behavior

Truancy

The scale consists of 2 items: “How often in the last three months of school have you skipped a full day of school without your parent or guardian knowing?” and “How often in the last three months of school have you skipped a class without being allowed?”

Youth respond on a 4-point scale: I have never done this in my whole life; I have done this, but not in the last 3 months of school; I did this 1–2 times in the last 3 months of school; or I did this 3 or more times in the last 3 months of school.

Scale

Recent and Lifetime Truancy Scale

What It Measures:

The number of times a student has been absent for a full day or part of the day from school without permission.

Intended Age Range

8- to 18-year-olds.

Rationale

This measure was selected based on its brevity and use of a similarly worded set of questions in previous large-scale mentoring evaluations.

Cautions

The definition of truancy is usually established by school district policy and may vary across districts.

Special Administration Information

For younger elementary-age children, the “skipping class” question may not be as relevant as it is for older youth who attend more than one class. For this reason, you may wish to omit the question on skipping class when administering the measure to younger children.

How to Score

Each item is scored on a 4-point scale from 0 (Never in my whole life) to 3 (3 or more times in the last 3 months of school). The total score is computed by adding scores across both items.

How to Interpret Findings

A score of zero indicates that the child has never skipped class or school; higher scores indicate more frequent and/or more recent truancy.

Access and Permissions

The measure is available for no fee here.

Alternatives

None.

Problem Behavior

School Attendance (records)

This measure consists of school records of student absences (both excused and unexcused).

Rationale

Evaluators and researchers often rely on students to report the number of days they miss school. However, research has shown that there is a weak association between self-reported absenteeism and absences reported in administrative data, with students frequently underestimating days absent, particularly when absences are unexcused.

Cautions

When planning to collect student absences directly from schools or school districts, be sure to set aside adequate time and staff resources to work with school personnel. Interpreting school records requires the ability to collate and analyze data. Programs without on-staff expertise may want to work with an external program evaluator.

Access and Permissions

When working with an outside agency (e.g., a school or district) to collect school records, access to their data typically involves strict confidentiality conditions (see FERPA guidelines). You may be required to provide written parent permission with very specific information included (that can vary across schools or districts). Standard permission/consent language can be incorporated into program enrollment forms (see sample). Also, consider budgeting funds to reimburse time for school officials to gather needed data. More extensive guides on federal privacy guidelines and how to establish data sharing partnerships with school districts can be found in the Evaluation Guidance and Resources section of this Toolkit.

What to Collect

Suggestions for variables to request from schools or school districts can be found in a formatted data collection guide, here. If you are collecting report cards from parents or youth, this guide can be used to help structure your database for storage and analysis.

How to Collect

Sources: One option for collecting school absence records is to get them directly from parents or youth (e.g., copies of the student’s report card). A small incentive for providing this information, when possible, may be helpful. Another option is to get administrative data directly from schools or school district offices, in which case a formal MOU will typically be required. Schools or districts may agree only to provide “deidentified” data (i.e., data that do not include student names or other identifying information). If so, it is advisable to request that basic information, such as demographics (gender or race/ethnicity) or program participation status, be attached to each line of data before the youth’s name is removed, so that this information can be used in analyses. Care must be taken, however, to ensure this attached information does not allow a youth to be inadvertently identified; a general rule of thumb is to ensure that the data once obtained do not include subgroups (e.g., male Native American youth) of fewer than 10 youth.
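The subgroup rule of thumb above can be checked automatically before analyzing a deidentified file. The sketch below assumes each youth’s row has been reduced to a tuple of the attached attributes (e.g., gender and race/ethnicity); the function name and the threshold constant are our own.

```python
from collections import Counter

MIN_SUBGROUP_SIZE = 10  # rule of thumb noted in the guidance above

def small_subgroups(records, min_n=MIN_SUBGROUP_SIZE):
    """Return demographic combinations represented by fewer than min_n youth.

    records: one tuple per youth of the attributes attached to the
    deidentified data, e.g., (gender, race/ethnicity).
    """
    counts = Counter(records)
    return {group: n for group, n in counts.items() if n < min_n}

# Example: the cell of 3 youth would risk inadvertent identification.
data = [("male", "White")] * 12 + [("male", "Native American")] * 3
print(small_subgroups(data))  # {('male', 'Native American'): 3}
```

Any cell flagged by such a check could be collapsed into a broader category (or dropped) before the data request is finalized.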
Additional Considerations: If you are interested in assessing changes over time, make sure to collect a “baseline” in the period before the student began program involvement. If mentoring program participation is less than the full school year, be sure to collect a similar time period for comparison to account for seasonal variations in absence rates (e.g., fall semester to fall semester). If possible, you may also want to consider collecting absence data for a comparable group of students not participating in the mentoring program. These data can be used to compare outcomes for program and non-program participants, which is a more robust evaluation design than simply looking at changes over the course of program involvement for program participants.

How to Analyze

It is important to work closely with school or district officials to understand how they record absences/non-attendance. For example, there may be variations in whether absences include times when students arrive late or leave school early, how staff determine what counts as an excused absence, and whether suspensions are included in the number of days absent. Attendance records may also vary across grade levels, dual enrollment programs, and virtual learning environments. Additionally, school records may or may not flag chronic absenteeism, defined by the U.S. Department of Education as missing 10% of the academic year (e.g., 18 days in a 180-day school year). If the schools you are working with do not track chronic absenteeism, ask for the total number of days in the academic year and divide the number of days a student has been absent (both excused and unexcused) by the total number of school days. Finally, excessive unexcused absences can trigger legal action under a state or locality’s truancy statute. School records typically indicate whether a student is considered truant; however, the number of allowable absences before the threshold for truancy is met may vary across schools or districts.
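The chronic absenteeism calculation described above (total absences divided by total school days, compared against the 10% threshold) can be sketched as follows; the function name is our own.

```python
CHRONIC_THRESHOLD = 0.10  # U.S. Dept. of Education definition: missing 10% of the year

def is_chronically_absent(days_absent, school_days_in_year):
    """Flag chronic absenteeism: (excused + unexcused absences) / school days >= 10%."""
    if school_days_in_year <= 0:
        raise ValueError("school_days_in_year must be positive")
    return days_absent / school_days_in_year >= CHRONIC_THRESHOLD

# 18 absences in a 180-day year is exactly the 10% threshold.
print(is_chronically_absent(18, 180))  # True
print(is_chronically_absent(9, 180))   # False
```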

How to Interpret Findings

Programs can report changes in total days missed, with a reduction in this number indicating student improvement. Here, it is important to distinguish between excused and unexcused absences, as the latter are more strongly associated with poor academic outcomes. Changes from chronically absent or truant during the baseline period to no chronic absenteeism or truancy in the follow-up period also indicate improvement in school attendance over time.

  • Cited Literature
    1. Attendance Works. (2014). The Attendance Imperative: How States Can Advance Achievement by Reducing Chronic Absence. Retrieved from http://www.attendanceworks.org/state-policy-brief-attendance-imperative
    2. Hancock, K. J., Gottfried, M. A., & Zubrick, S. R. (2018). Does the reason matter? How student‐reported reasons for school absence contribute to differences in achievement outcomes among 14–15 year olds. British Educational Research Journal, 44 (1), 141-174.
    3. Keppens, G., Spruyt, B., & Dockx, J. (2019). Measuring school absenteeism: Administrative attendance data collected by schools differ from self-reports in systematic ways. Frontiers in Psychology, 10, 2623.
    4. National Forum on Education Statistics. (2018). Forum Guide to Collecting and Using Attendance Data (NFES 2017-007). U.S. Department of Education. Washington, DC: National Center for Education Statistics. https://nces.ed.gov/pubs2017/NFES2017007.pdf
    5. Teye, A. C., & Peaslee, L. (2015, December). Measuring educational outcomes for at-risk children and youth: Issues with the validity of self-reported data. In Child & Youth Care Forum (Vol. 44, No. 6, pp. 853-873). Springer US.

Problem Behavior

School Discipline (records)

This measure consists of school records of student discipline. These include both descriptions of the disciplinary incident or event (e.g., fighting, disruptive behavior) and corrective action taken by the school (e.g., detention, in-school suspension, out-of-school suspension, expulsions).

Rationale

Evaluators and researchers frequently rely on students to self-report school misbehavior and discipline. However, the accuracy of such reports can be influenced by a number of factors, including student age, cognitive ability, and actual behavior. For these reasons, collecting records of school discipline is desirable where feasible.

Cautions

When planning to collect school disciplinary records be sure to set aside adequate time and staff resources. Accessing school records can be complex, particularly when working with multiple schools and grade levels. Schools and districts may use different terminology and classifications of student behavior. Additionally, the frequency of office disciplinary referrals and formal disciplinary actions may differ across school districts or even within schools, which is critical to consider when using disciplinary data for program evaluation or research purposes. Interpreting school disciplinary records requires the ability to collate and analyze data. Programs without on-staff expertise may want to work with an external program evaluator.

Access and Permissions

When working with an outside agency (e.g., a school or district) to collect school records, access to their data typically involves strict confidentiality conditions (see FERPA guidelines). You may be required to provide written parent permission with very specific information included (that can vary across schools or districts). Standard permission/consent language can be incorporated into program enrollment forms (see sample). Also, consider budgeting funds to reimburse time for school officials to gather needed data. More extensive guides on federal privacy guidelines and how to establish data sharing partnerships with school districts can be found in the Evaluation Guidance and Resources section of this Toolkit.

What to Collect

A formatted data collection guide of variables you might consider requesting directly from schools can be found here. If you are collecting data directly from parents/youth, this guide can be used to help structure your database for storage and analysis.

How to Collect

Sources: Programs can request student disciplinary records directly from schools or school district offices, in which case a formal MOU will typically be required. Schools or districts may agree only to provide “deidentified” data (i.e., data that do not include student names or other identifying information). If so, it is advisable to request that basic information, such as demographics (gender or race/ethnicity) or program participation status, be attached to each line of data before the youth’s name is removed, so that this information can be used in analyses. Care must be taken, however, to ensure this attached information does not allow a youth to be inadvertently identified; a general rule of thumb is to ensure that the data once obtained do not include subgroups (e.g., male Native American youth) of fewer than 10 youth.

Additional Considerations: If you are interested in assessing changes over time, make sure to collect a “baseline” in the period before the student began program involvement. In addition, request disciplinary records for the entire time period of the student’s program participation and after the end of the program, as these data can help to assess longer-term program effects. And be sure to align data requests with the specific timeframe of program enrollment for each student (e.g., one student may need disciplinary records starting in the spring quarter of one school year through the fall of the next school year, whereas another student may have a very different time frame of participation). If possible, you may also want to consider collecting school discipline data for a comparable group of students not participating in the mentoring program. These data can be used to compare outcomes for program and non-program participants, which is a more robust evaluation design than simply looking at changes over the course of program involvement for program participants.

How to Analyze

Scoring: It is important to work closely with school or district officials to interpret scoring differences across years, grade levels, and types of disciplinary incidents. If available, an annual district interpretation guide can be useful.

Disciplinary Action: Schools commonly report the number and type of formally recorded decisions that result from student behavior within a given term (quarter or semester). Here, it is helpful to distinguish between exclusionary or non-exclusionary actions (i.e., those that exclude or do not exclude the student from class or school). When comparing disciplinary action across groups, such as the groups receiving or not receiving your mentoring program, you may want to analyze differences in (1) the percentage of students in each group receiving one or more disciplinary actions; (2) the percentage receiving each type of action; and (3) the average number of disciplinary actions received. Other disciplinary consequences, such as reports made to law enforcement, are important to include as well.
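As a sketch of the three group statistics listed above, assuming each youth’s disciplinary record has been reduced to a list of action labels (the function name and data layout are our own, not a standard district format):

```python
def discipline_summary(actions_per_youth):
    """Summarize disciplinary actions for one group of youth.

    actions_per_youth: one list of action labels per youth, e.g.,
    [["detention"], [], ["in-school suspension", "detention"]].
    Returns (1) percent of youth with any action, (2) percent of youth
    receiving each type of action, and (3) mean actions per youth.
    """
    n = len(actions_per_youth)
    if n == 0:
        raise ValueError("No youth records provided.")
    pct_any = 100 * sum(1 for acts in actions_per_youth if acts) / n
    counts_by_type = {}
    for acts in actions_per_youth:
        for kind in set(acts):  # count each youth at most once per action type
            counts_by_type[kind] = counts_by_type.get(kind, 0) + 1
    pct_by_type = {kind: 100 * c / n for kind, c in counts_by_type.items()}
    mean_actions = sum(len(acts) for acts in actions_per_youth) / n
    return pct_any, pct_by_type, mean_actions

group = [["detention"], [], ["in-school suspension", "detention"]]
pct_any, pct_by_type, mean_actions = discipline_summary(group)
```

Running the same summary separately for program and comparison groups supports the between-group contrasts described here.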

Type of Behavior: Student behaviors should be analyzed carefully, as they vary in severity and are associated in different ways with student risk. For example, class disruption leading to an in-school suspension is very different from aggravated assault that leads to an out-of-school suspension.

Subgroup Differences: Research has shown that some racial and ethnic minority groups, males, low-achieving students, special education students, and students from low socioeconomic backgrounds experience disproportionate rates of school suspensions and expulsions, which is likely the result of decisions made by school administrators rather than actual differences in student behavior. Therefore, programs may want to analyze disciplinary actions separately for relevant subgroups and include findings from a similar group of youth that does not receive program services.

How to Interpret Findings

A reduction in the number of disciplinary actions or in severity of behavior indicates better student behavior at school. Keep in mind, however, that serious misbehavior and more severe exclusionary disciplinary actions may occur infrequently, especially at lower grade levels. Moreover, disciplinary rates vary widely across schools, which could be due to a variety of factors in addition to student behavior, including referral processes and teacher tolerance for disruptive behavior. Detailed district-level data on discipline is available through the U.S. Department of Education Office for Civil Rights.

Alternatives

School disciplinary records are limited to incidences where a formal office referral was made. As such, they provide a good indicator of school discipline but may miss many instances of school misbehavior or problem behavior more generally. Office disciplinary referrals often focus on serious infractions and may undercount less serious behavior, such as classroom disruptions. Therefore, programs that want to assess a wider range of youth behavior may want to collect self-reports of misbehavior (a measure of self-reported school misbehavior can be found here).

  • Cited Literature
    1. Cholewa, B., Hull, M. F., Babcock, C. R., & Smith, A. D. (2018). Predictors and academic outcomes associated with in-school suspension. School Psychology Quarterly, 33 (2), 191.
    2. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12 (2), 57-91.
    3. Fabelo, T., Thompson, M. D., Plotkin, M., Carmichael, D., Marchbanks, M. P., & Booth, E. A. (2011). Breaking schools’ rules: A statewide study of how school discipline relates to students’ success and juvenile justice involvement. New York: Council of State Governments Justice Center.
    4. Huang, F. L., & Cornell, D. G. (2017). Student attitudes and behaviors as explanations for the Black-White suspension gap. Children and youth services review, 73, 298-308.
    5. Morrison, G. M., Peterson, R., O’Farrell, S., & Redding, M. (2004). Using office referral records in school violence research: Possibilities and limitations. Journal of School Violence, 3(2/3), 39-61.
    6. Nishioka, V. (2017). School Discipline Data Indicators: A Guide for Districts and Schools. REL 2017-240. Regional Educational Laboratory Northwest.
    7. U.S. Department of Education Office for Civil Rights. (n.d.). Civil Rights Data Collection. Available at https://ocrdata.ed.gov/Home.

Problem Behavior

Juvenile Offending (records)

This is a measure of juvenile offending based on information gathered from juvenile justice agencies on arrests, offenses, and sentencing.

Rationale

Evaluators frequently rely on youth self-reports of delinquent behavior and arrests. However, some youth under-report delinquent behavior or misreport the nature of their offense (Thornberry & Krohn, 2000), perhaps due to consequences that could arise from their disclosure or a failure to recall these behaviors. In addition, some groups of youth tend to over-report their arrests, whereas others (i.e., those with more arrests) tend to under-report arrests (Kirk, 2006; Krohn, Lizotte, Phillips, Thornberry, & Bell, 2013). For these reasons, collecting records of offending is desirable when feasible.

Cautions

Be sure to set aside adequate time and staff resources to collect records on juvenile offending. Accessing juvenile justice records often requires extensive planning and negotiation with the state (e.g., Office of Juvenile Justice, Department of Corrections) and/or the reporting agency or agencies (e.g., juvenile court, family court), particularly around issues of data privacy and confidentiality. The juvenile justice system is complex in structure and process, including jurisdictional variations and ongoing reforms and other changes over time. See the flow chart here for an overview. It is also important to note that an arrest should not be interpreted as an indication that a youth necessarily committed the offense with which they were charged, and that the behavior of certain groups is more likely to result in an arrest, charge, and conviction than the behavior of individuals from other groups (Developmental Services Group, 2014; Huizinga et al., 2007; Kakade et al., 2012).

Access and Permissions

Access to juvenile offending records typically involves strict confidentiality conditions. Organizations wanting access to these records typically will be required to complete a formal written application, part of which will involve detailing the specific information being requested. If the request is approved, it should be expected that written parent permission, and potentially also youth consent or assent (depending on the age of the youth), will be required prior to release of any data that identifies individual youth (i.e., is not deidentified). Consideration also should be given to collecting the youth’s social security number with parent permission, because agencies granting access to juvenile offending records often require it to facilitate identification of youth within available records. Also, consider budgeting funds to reimburse time for agency staff to gather these data.

How to Collect

Sources: It is possible that the agency providing juvenile justice records could vary across different jurisdictions or that these records reside in multiple agencies in the same jurisdiction. Programs that serve youth in more than one geographical area should anticipate potentially needing to work with a range of differing types of agencies (e.g., office of juvenile justice, juvenile court, Department of Corrections) or with agencies that have different names in different jurisdictions but serve the same function (e.g., juvenile court, family court), and each of these agencies may have slightly different requirements for access. Establishing contact with administrators at the agencies from which there is a plan to collect juvenile justice records prior to an evaluation is critical. This will clarify the documentation needed (e.g., it may be necessary to develop permission/consent forms that include very specific information) and potential barriers to collecting the data. Depending on the nature of the evaluation and the jurisdiction’s policies, the agency may agree to provide only “deidentified” data (i.e., data that do not include youth names or other identifying information). If so, it is advisable to request that basic information, such as demographics (gender or race/ethnicity) or program participation status, be attached to each line of data before the youth’s name is removed, so that this information can be used in analyses. Care must be taken, however, to ensure this attached information does not allow a youth to be inadvertently identified; a general rule of thumb is to ensure that the data once obtained do not include subgroups (e.g., male Native American youth) of fewer than 10 youth.

Additional Considerations: If there is interest in assessing change over time in outcomes, the time period for which data are requested should be specified accordingly, along with a request for information (e.g., dates of arrest) that will allow for the desired type of analysis (e.g., before and after program participation). It is also advisable to consider requesting data not only for the entire period of a youth's program participation, but also for a period of time after participation has ended, as this information can be helpful for evaluating possible longer-term effects of program involvement. Care also should be taken to account for possible variation in the time period for which data should be requested for different youth, such as when youth enroll in the program at different points in time. If possible, consideration also should be given to collecting similar records for a comparable group of youth not participating in the mentoring program. These data can be used to compare outcomes for program and non-program participants, which is a more robust evaluation design than simply looking at changes over the course of program involvement for participants (for further discussion of evaluation design considerations, see the Evaluation Guidance and Resources section of this Toolkit).

How to Analyze

Format: It is important to work closely with reporting agencies to interpret differences in juvenile justice terminology and offense classifications across agencies. It may also be necessary to review key terms (e.g., informed by an interpretation guide if available) prior to collecting or “coding” this information (i.e., translating values or categories into numbers that can be analyzed). A list of some of these terms can be found here. Additionally, records data on arrests, juvenile offenses, and sentencing may be presented in many different formats across agencies. For example, some agencies may provide an Excel table with columns containing each piece of information requested, whereas others may provide photocopied case files with the information in a narrative form across multiple documents, in which case, it will be necessary to read through these descriptions to pull out the information needed.

Scoring: How the collected information is coded or "scored" will depend on your specific aims. For example, a program might be interested in a broad count of arrests, in which case it may be sufficient to simply add up the total number of arrests reported. Similarly, if there is interest only in whether the youth was arrested, a simple "yes/no" indicator can be used, in which a score of 1 reflects at least one arrest and a score of 0 reflects none. However, not all arrests involve offenses of equal severity. Distinctions that may be useful to consider include whether an offense involves violence and whether an offense is person-related (e.g., robbery) or property-related (e.g., vandalism). As detailed in the data collection guide (here), for example, the Uniform Crime Reporting Program of the Federal Bureau of Investigation distinguishes between a set of "index" crimes (e.g., aggravated assault, burglary) and other types of offenses (e.g., disorderly conduct), with the former further subdivided into violent and property crimes. Considering such distinctions (e.g., the total number of violence-related arrests or an indication of whether a youth was arrested for any person-related offense) may provide a more nuanced understanding of a program's effects or of the needs of the youth it serves.
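The scoring options above can be sketched in code. The per-arrest record format here (an offense type plus a violence flag) is hypothetical and intended only to illustrate the count, yes/no, and offense-type distinctions described; it is not a standard of any reporting agency.

```python
# Three ways to "score" a youth's arrest records, per the options above:
# a total count, a yes/no indicator, and offense-type distinctions.
# The record format is hypothetical, for illustration only.

def arrest_scores(arrests):
    return {
        "total_arrests": len(arrests),                      # simple count
        "any_arrest": 1 if arrests else 0,                  # yes/no indicator
        "violent_arrests": sum(a["violent"] for a in arrests),
        "any_person_offense": int(any(a["type"] == "person" for a in arrests)),
    }

youth_arrests = [
    {"type": "property", "violent": False},   # e.g., vandalism
    {"type": "person", "violent": True},      # e.g., robbery
]
print(arrest_scores(youth_arrests))
```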

How to Interpret Findings

When using frequency counts or the presence/absence of arrests or offenses to evaluate a program's possible effects, results for a group of youth (e.g., those participating in a program) can be expressed as the average number of arrests per youth or as the percentage of youth with one or more arrests within a given time period. Programs interested in gauging their effectiveness will generally be looking for declines in these numbers during or after youth's participation in the program. However, such change could occur for reasons unrelated to program participation (e.g., initiation of local reforms such as pre-arrest diversion programs); conversely, an absence of change, or even an increase in arrests, might be observed due to non-program factors such as a developmental trend toward greater involvement in delinquent behavior with age. In the absence of an appropriate comparison group, findings should never be interpreted as indicative of program effectiveness or the lack thereof (for further discussion, see the Evaluation Guidance and Resources section of this Toolkit).
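The two group-level summaries described above amount to simple arithmetic. A minimal sketch, using made-up per-youth arrest counts:

```python
# Group-level summaries of arrest data: mean arrests per youth and the
# percentage of youth with at least one arrest. Example data are made up.

def group_summary(arrest_counts):
    n = len(arrest_counts)
    return {
        "mean_arrests": sum(arrest_counts) / n,
        "pct_any_arrest": 100 * sum(1 for c in arrest_counts if c > 0) / n,
    }

counts = [0, 0, 1, 3, 0, 2, 0, 0]  # arrests per youth in the program group
print(group_summary(counts))  # mean 0.75 arrests; 37.5% with any arrest
```

As the surrounding text cautions, comparing such summaries before and after program participation, without a comparison group, does not by itself establish program effects.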

Alternatives

Although youth self-reports of arrests may not be as accurate or precise as data collected from official records, youth often recall arrest information with good accuracy, particularly when arrests are few in number (Thornberry & Krohn, 2000). In addition, although some youth may be reluctant to report delinquent behavior, official arrest records are also limited in that they do not capture delinquent acts for which youth are not caught and arrested. For these reasons, youth self-reports of arrests and/or delinquent activity can be a good alternative.

  • Cited Literature
    1. Development Services Group, Inc. (2014). Disproportionate minority contact. Washington, DC: Office of Juvenile Justice and Delinquency Prevention. Prepared by Development Services Group, Inc., under cooperative agreement number 2013–JF–FX–K002. Points of view or opinions expressed in this document are those of the author and do not necessarily represent the official position or policies of OJJDP or the U.S. Department of Justice. Available at: https://www.ojjdp.gov/mpg/litreviews/Disproportionate_Minority_contact.pdf
    2. Huizinga, D., Thornberry, T. P., Knight, K. E., Lovegrove, R. L., Hill, K., & Farrington, D. P. (2007). Disproportionate minority contact in the juvenile justice system: A study of differential minority arrest/referral to court in three cities. Washington, DC: Office of Juvenile Justice and Delinquency Prevention.
    3. Kakade, M., Duarte, C. S., Liu, X., Fuller, C. J., Drucker, E., Hoven, C. W., . . . Wu, P. (2012). Adolescent substance use and other illegal behaviors and racial disparities in criminal justice system involvement: Findings from a US national survey. American Journal of Public Health, 102, 1307–1310. https://doi.org/10.2105/AJPH.2012.300699
    4. Kirk, D. S. (2006). Examining the divergence across self-report and official data sources of inferences about the adolescent life-course of crime. Journal of Quantitative Criminology, 22, 107-129. https://doi.org/10.1007/s10940-006-9004-0
    5. Krohn, M. D., Lizotte, A. J., Phillips, M. D., Thornberry, T. P., & Bell, K. A. (2013). Explaining systematic bias in self-reported measures: Factors that affect the under- and over-reporting of self-reported arrests. Justice Quarterly, 30, 501-528. https://doi.org/10.1080/07418825.2011.606226
    6. Thornberry, T. P., & Krohn, M. D. (2000). The self-report method of measuring delinquency and crime. In D. Duffee (Ed.), Criminal justice 2000 (pp. 33-84). Washington, DC: U.S. Department of Justice, National Institute of Justice.

Interpersonal Relationships

Children’s social networks are complex and can include relationships with mothers, fathers, siblings, peers, and other non-parental adults. The quality of these relationships can have implications for children’s wellbeing.

Youth who do not have supportive interpersonal relationships or struggle to develop these relationships are at risk for poor academic, social, behavioral, and mental and physical health outcomes. The impact of negative interpersonal relationships on youth distress is compounded when multiple relationships within the youth’s life are troubled.1,2

Well-established mentoring relationships may enhance the capacity for youth to develop more effective interpersonal relationships. Rhodes3 proposed a theoretical model describing the processes through which youth mentoring may impart its effect. Rhodes suggests that mentors who model effective adult communication create a context that promotes socio-emotional skill development.4 Over time, these skills may generalize to other relationships within the youth’s social network.5,6 Consistent with this notion is evidence that the effects of youth mentoring are in part explained by improvements in children’s relationships with parents, peers, and other non-parental adults.4,6

In selecting outcomes pertaining to youths’ interpersonal relationships for this Toolkit, emphasis was given to those that have demonstrated potential sensitivity to youth mentoring. The selected outcomes are parental support, significant non-parental adult relationships, peer connectedness, loneliness, and community connectedness.

Parent-Child Support

According to attachment theory, children’s early experiences with parents influence later interpersonal adjustment. Children develop a sense of security and autonomy when parents are affectionate, sensitive, and supportive. In the context of this type of relationship, children develop expectations for the availability of significant others and a sense of personal value.7 Early experiences with parents also influence a youth’s ability to develop close relationships with other individuals.8 Parental support is linked with a number of important outcomes in childhood and adolescence. For example, parental support is associated with lower substance use9,10 and can protect youth from the effects of interpersonal stress on depression.11 There is reason to believe that children’s positive interactions with mentors could lead to improvements in aspects of the parent-child relationship.3 For example, drawing on data from the BBBSA outcome study, Rhodes, Reddy & Grossman6 found that mentored youth reported lower substance use than youth in a control condition, and that these gains were explained by improvements in the parent-child relationship brought about by mentoring.

Significant Non-Parental Adult Relationships

A significant non-parental adult is any adult in the youth’s life with whom they feel they have a meaningful positive connection. Over the course of development, youth may seek out non-parental adults as a source of support and guidance. Non-parental adults may serve as a resource for youth in ways that are both similar and distinct from parents.12 Similar to parents, non-parental adults can provide youth with affection, guidance, and support, but unlike parents they are not hindered by expectations of managing problem behavior or providing a home environment with structure, rules, and routines. Available evidence suggests that youth with a positive relationship with a non-parental adult experience heightened psychosocial functioning, improved capacity to navigate peer relationships and friendships, greater peer acceptance, and improved academic and employment outcomes.13,14,15 Youth may benefit from mentoring relationships when they are lacking positive connections with supportive non-parental adults. In fact, there is evidence that youth who participate in formal mentoring are more likely than non-participants to report having a very important adult in their life.16

Student-Teacher Relationship Quality

Many young people report having a significant non-parental adult relationship with one of their teachers,17 and the quality of this relationship is linked with a wide range of positive cognitive, behavioral, social, and emotional outcomes both within and outside of the school context.18,19 For example, high-quality student-teacher relationships are linked with positive outcomes in several school-related areas including student behavior, motivation, engagement, grades, test scores, rates of retention, and dropping out.19,20,21,22,23 Positive student-teacher relationships are also associated with greater life satisfaction and fewer symptoms of depression in youth.24 These associations have been identified across child and adolescent development and across diverse populations of youth.25 In addition, high-quality student-teacher relationships predict positive outcomes both concurrently and in later periods of development.26

Youth who are in high-quality relationships with their teachers get along with their teachers and enjoy being with them. These relationships are also marked by mutually earned trust and respect.27 Studies have linked mentoring with improvements in these relationships. For example, in one study,28 students with more positive relationships with their mentors (relative to those with lower-quality relationships) experienced greater improvements, over time, in the quality of their relationships with teachers and parents; these improvements were, in turn, related to impacts in other outcome areas.

Peer Connectedness

Peer connectedness refers to the degree of positive feelings toward peers, positive engagement with peers, and a sense of social acceptance. It is clear that peers play a significant role in youth development. Positive experiences with peers are associated with health, wellbeing, and academic success while negative experiences are associated with poor psychosocial and academic adjustment.29 Mentoring relationships that foster socio-emotional skill development in youth may improve youth’s relationships with peers. In one study evaluating the effects of a school-based mentoring program in a sample of predominately Latino students, children in a mentoring condition reported higher levels of peer connectedness than children in a control condition following the intervention.30

Loneliness

Loneliness refers to the subjective experience of dissatisfaction with social relationships that is often accompanied by a negative emotional state.31 Youth who are lonely often struggle to develop and maintain effective interpersonal relationships. Studies reveal that loneliness is associated with impaired peer relations and friendship, and places children at risk for becoming the target of bullying.32,33,34 Moreover, prolonged loneliness can lead to the development of depression.35 Mentors can provide youth who are lonely with affection, emotional support, and social contact. There are few studies that have examined the role of mentoring in affecting youth loneliness, but available evidence suggests a positive effect of mentoring on youth’s emotional functioning.36

Community Connectedness

Community connectedness reflects the perception of youth that they (and other young people) are cared about, trusted, and respected by adults in their community, both as individuals and as a collective group.37 The construct of connectedness is intended to be transactional and reflects a sense of belonging that is both received and reciprocated on the part of youth. Community connectedness may be important in youth’s willingness to engage in a relationship with an adult mentor; and positive experiences with a mentor may also serve to enhance youth feelings of being valued by adult members of their community. Few studies to date have examined associations between mentoring and community connectedness. However, there is some preliminary evidence that participation in a school-based mentoring program is associated with increased community connectedness for youth with low connectedness prior to being matched with a mentor.38

  • Cited Literature


    1. Cohen, J. R., Spiro, C. N., Young, J. F., Gibb, B. E., Hankin, B. L., & Abela, J. R. (2015). Interpersonal risk profiles for youth depression: A person-centered, multi-wave, longitudinal study. Journal of Abnormal Child Psychology, 43, 1415–1426. http://dx.doi.org/10.1007/s10802-015-0023-x
    2. Criss, M. M., Shaw, D. S., Moilanen, K. L. Hitchings, J. E., & Ingoldsby, E. M. (2009). Family, neighborhood, and peer characteristics as predictors of child adjustment: A longitudinal analysis of additive and mediational models. Social Development, 18, 511–535. http://dx.doi.org/10.1111/j.1467-9507.2008.00520.x
    3. Rhodes, J. E. (2005). A model of youth mentoring. In D. L. DuBois & M. J. Karcher (Eds.) Handbook of youth mentoring (pp. 30–43). Thousand Oaks, CA: SAGE.
    4. Rhodes, J. E., Grossman, J. B., & Resch, N. R. (2000). Agents of change: Pathways through which mentoring relationships influence adolescents’ academic adjustment. Child Development, 71, 1662–1671. http://dx.doi.org/10.1111/1467-8624.00256
    5. Rhodes, J. E., & DuBois, D. L. (2008). Mentoring relationships and programs for youth. Current Directions in Psychological Science, 17, 254–258. http://dx.doi.org/10.1111/j.1467-8721.2008.00585.x
    6. Rhodes, J. E., Reddy, R., & Grossman, J. B. (2005). The protective influence of mentoring on adolescents’ substance use: Direct and indirect pathways. Applied Developmental Science, 9, 31–47. http://dx.doi.org/10.1207/s1532480xads0901_4
    7. Elicker, J., Englund, M., & Sroufe, L. A. (1992). Predicting peer competence and peer relationships in childhood from early parent–child relationships. In R. D. Parke & G. W. Ladd (Eds.), Family-peer relationships: Modes of linkage. Hillsdale, NJ: Lawrence Erlbaum.
    8. Furman, W., & Shomaker, L. B. (2008). Patterns of interaction in adolescent romantic relationships: Distinct features and links to other close relationships. Journal of Adolescence, 31, 771–788. http://dx.doi.org/10.1016/j.adolescence.2007.10.007
    9. Barber, B. K. (1992). Family, personality, and adolescent problem behaviors. Journal of Marriage and the Family, 54, 66–79. https://doi.org/10.2307/353276
    10. Branstetter, S. A., Low, S., & Furman, W. (2011). The influence of parents and friends on adolescent substance use: A multidimensional approach. Journal of Substance Use, 16, 150–160. http://dx.doi.org/10.3109/14659891.2010.519421
    11. Hazel, N. A., Oppenheimer, C. W., Technow, J. R., Young, J. F., & Hankin, B. L. (2014). Parent relationship quality buffers against the effect of peer stressors on depressive symptoms from middle childhood to adolescence. Developmental Psychology, 50, 2115–2123. http://dx.doi.org/10.1037/a0037192
    12. Spencer, R. (2007). Naturally occurring mentoring relationships involving youth. In T. D. Allen & L. T. Eby (Eds.), The Blackwell handbook of mentoring: A multiple perspectives approach (pp. 99 –117). Oxford, England: Blackwell. https://psycnet.apa.org/record/2007-00535-007
    13. DuBois, D. L., & Silverthorn, N. (2005). Characteristics of natural mentoring relationships and adolescent adjustment: Evidence from a national study. The Journal of Primary Prevention, 25, 69–92. http://dx.doi.org/10.1007/s10935-005-1832-4
    14. Franco, M., & Levitt, M. J. (1998). The social ecology of middle childhood: Family support, friendship quality, and self-esteem. Family Relations: An Interdisciplinary Journal of Applied Family Studies, 47, 315–321. https://doi.org/10.2307/585262
    15. Kuperminc, G. P., Thomason, J., DiMeo, M., & Broomfield-Massey, K. (2011). Cool Girls, Inc.: Promoting the positive development of urban adolescent girls. The Journal of Primary Prevention, 32, 171–183. http://dx.doi.org/10.1007/s10935-011-0243-y
    16. Herrera, C., Grossman, J. B., Kauh, T. J., Feldman, A. F., & McMaken, J. (2007). Making a difference in schools: The Big Brothers Big Sisters School-based Mentoring Impact Study. Philadelphia, PA: Public/Private Ventures. Retrieved from http://bit.ly/2c3qxrP
    17. Van Dam, L., Smit, D., Wildschut, B., Branje, S. J. T., Rhodes, J. E., Assink, M., & Stams, G. J. J. (2018). Does natural mentoring matter? A multilevel meta-analysis on the association between natural mentoring and youth outcomes. American Journal of Community Psychology, 62(1-2), 203–220. https://doi.org/10.1002/ajcp.12248
    18. Pianta, R. C. (1999). Enhancing relationships between children and teachers. American Psychological Association. https://psycnet.apa.org/doi/10.1037/10314-000
    19. Wentzel, K. R. (2012). Teacher-student relationships and adolescent competence at school. In T. Wubbels, P. den Brok, J. van Tartwijk & J. Levy (Eds.), Advances in learning environments research (Vol 3): Interpersonal relationships in education (pp. 19–35). Sense Publishers.
    20. Klem, A., & Connell, J. (2004). Relationships matter: Linking teacher support to student achievement and engagement. Journal of School Health, 74(7), 262–273. https://doi.org/10.1111/j.1746-1561.2004.tb08283.x
    21. Lee, V. E., & Burkam, D. T. (2003). Dropping out of high school: The role of school organization and structure. American Educational Research Journal, 40(2), 353–393. https://doi.org/10.3102/00028312040002353
    22. Muller, C. (2001). The role of caring in the teacher-student relationship for at-risk students. Sociological Inquiry, 71(2), 241-255. https://doi.org/10.1111/j.1475-682X.2001.tb01110.x
    23. Scales, P. C., Van Boekel, M., Pekel, K., Syvertsen, A. K., & Roehlkepartain, E. C. (2020). Effects of developmental relationships with teachers on middle‐school students’ motivation and performance. Psychology in the Schools, 57(4), 646-677. https://doi.org/10.1002/pits.22350
    24. Murray, C., & Zvoch, K. (2011). Teacher-student relationships among behaviorally at-risk African American youth from low-income backgrounds: Student perceptions, teacher perceptions, and socioemotional adjustment correlates. Journal of Emotional and Behavioral Disorders, 19(1), 41–54. https://doi.org/10.1177/1063426609353607
    25. Roorda, D. L., Koomen, H. M., Spilt, J. L., & Oort, F. J. (2011). The influence of affective teacher–student relationships on students’ school engagement and achievement: A meta-analytic approach. Review of Educational Research, 81(4), 493–529. https://doi.org/10.3102/0034654311421793
    26. Bernstein-Yamashiro, B., & Noam, G. G. (2013). Teacher-student relationships: A growing field of study. New Directions for Youth Development, 2013(137), 15–26. https://doi.org/10.1002/yd.20045
    27. Karcher, M. J. & Sass, D. (2010). A multicultural assessment of adolescent connectedness: Testing measurement invariance across gender and ethnicity. Journal of Counseling Psychology, 57(3), 274-289. https://doi.org/10.1037/a0019357
    28. Chan, C. S., Rhodes, J. E., Howard, W. J., Lowe, S. R., Schwartz, S. E., & Herrera, C. (2013). Pathways of influence in school-based mentoring: The mediating role of parent and teacher relationships. Journal of School Psychology, 51(1), 129-142. https://doi.org/10.1016/j.jsp.2012.10.001
    29. Bukowski, W. M., Buhrmester, D., & Underwood, M. (2011). Peer relations as a developmental context. In M. K. Underwood & L. H. Rosen (Eds.), Social development: Relationships in infancy, childhood, and adolescence (pp. 153–179). New York, NY: Guilford Press.
    30. Karcher, M. J. (2008). The study of mentoring in learning environment (SMILE): A randomized evaluation of the effectiveness of school-based mentoring. Prevention Science, 9, 99–113. http://dx.doi.org/10.1007/s11121-008-0083-z
    31. Asher, S. R., & Paquette, J. A. (2003). Loneliness and peer relations in childhood. Current Directions in Psychological Science, 12, 75-78. http://dx.doi.org/10.1111/1467-8721.01233
    32. Boivin, M., Hymel, S., & Bukowski, W. M. (1995). The roles of social withdrawal, peer rejection, and victimization by peers in predicting loneliness and depressed mood in childhood. Development and Psychopathology, 7, 765–785. http://dx.doi.org/10.1017/S0954579400006830
    33. Cassidy, J., & Asher, S. R. (1992). Loneliness and peer relations in young children. Child Development, 63, 350–365. http://dx.doi.org/10.2307/1131484
    34. Crick, N. R., & Ladd, G. W. (1993). Children’s perceptions of their peer experiences: Attributions, loneliness, social anxiety, and social avoidance. Developmental Psychology, 29, 244–254. http://dx.doi.org/10.1037/0012-1649.29.2.244
    35. Qualter, P., Brown, S. L., Munn, P., & Rotenberg, K. L. (2010). Childhood loneliness as a predictor of adolescent depression symptoms: an 8-year longitudinal study. European Child and Adolescent Psychiatry, 19, 493–501. http://dx.doi.org/10.1007/s00787-009-0059-y
    36. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    37. Whitlock, J. (2007). The role of adults, public space, and power in adolescent community connectedness. Journal of Community Psychology, 35, 499–518. http://dx.doi.org/10.1002/jcop.20161
    38. Portwood, S. G., Ayers, P. M., Kinnison, S. E., Waris, R. G., & Wise, D. L. (2005). YouthFriends: Outcomes from a school-based mentoring program. The Journal of Primary Prevention, 26, 129–145. https://www.researchgate.net/publication/7766776_YouthFriends_Outcomes_from_a_School-Based_Mentoring_Program


Interpersonal Relationships

Parent-Child Relationship Quality

This measure consists of 7 items that assess a youth’s perception of support from an important person in their life (e.g., mother, father, or another primary caregiver).

Sample items include: “How much does this person treat you like you’re admired and respected?” “How often do you tell this person everything that you are going through?” and “How much does this person have a strong feeling of affection (loving or liking) toward you?” Each item is rated on a 5-point scale: Little or none, Somewhat, Very much, Extremely much, or The most.

Scale

Network of Relationships Inventory (NRI)-short form — Parent Support subscale

What It Measures:

A youth’s perception of support from a primary caregiver.

Intended Age Range

8- to 15-year-olds (grades 3–9); items are likely appropriate for youth older than grade 9.

Rationale

This measure was selected because of its grounding in theory, wide developmental applicability, and evidence of reliability and validity. This measure also can be administered more than once to assess support from multiple primary caregivers.

Cautions

None.

Special Administration Information

It is possible that mentoring programs serve youth where one or both parents are not primary caregivers or that another individual, like a grandparent, may serve as a primary caregiver or co-parent. In this case, a slight revision to the NRI instructions may be appropriate. For example, a revision could read, “Everyone has a number of people who are important in his or her life. These questions ask about your relationships with your mother, father, or other adult in your life that has been most important in raising you.”

How to Score

Each item is scored on a 5-point scale from 1 (Little or none) to 5 (The most). A support score is computed by taking the average of the 7 items.
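The scoring rule above is a simple average of the seven item ratings. A minimal sketch, using made-up ratings:

```python
# NRI Parent Support subscale scoring as described above: average the seven
# item ratings, each on a 1 (Little or none) to 5 (The most) scale.
# The example ratings are made up.

def parent_support_score(ratings):
    if len(ratings) != 7 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected seven ratings, each between 1 and 5")
    return sum(ratings) / 7

print(parent_support_score([4, 5, 3, 4, 4, 5, 3]))  # → 4.0
```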

How to Interpret Findings

Higher scores reflect higher levels of perceived support.

Access and Permissions

The measure is available for non-commercial use with no charge and can be accessed online here as part of the manual for the NRI. Prior to use, review the terms of use for this measure (see p. 4 of the manual).

Alternatives

The 6-item Connectedness to Parents subscale of the Hemingway is an alternative measure with strong research support. Two other subscales from the Hemingway serve as recommended measures for other outcome domains (i.e., Connectedness to School; Connectedness to Peers).


Interpersonal Relationships

Very Important Non-parent Adult

This scale consists of the following single item: I’d like to ask you about any Very Important Adults you might have in your life right now. A Very Important Adult is someone who spends a lot of time with you, someone you can really count on, who gets you to do your best, and who cares about what happens to you. Please check the boxes that describe any Very Important Adults in your life right now. If you have more than one Very Important Adult, you may check more than one box. If you do not happen to have a Very Important Adult in your life right now, please check the very last box.

Examples of the categories of adults asked about are: “My parent or other person who raises me,” “Another adult relative (grandparent, aunt or uncle, etc.),” “Teacher, guidance counselor, or other adult at school,” and “A mentor through this program.” Youth can also select “I do not have a Very Important Adult in my life right now.”

Scale

Presence of a Very Important Adult.

What It Measures:

Whether a youth has an adult in his or her life who fills the role of a mentor, and who that adult is (more than one adult may be identified).

Intended Age Range

8- to 18-year-olds.

Rationale

Variations of this measure have been used in several evaluations of youth mentoring programs. As would be expected, program participants have been more likely than non-participants to report the presence of a very important (or “special”) adult in their lives. The version provided here is simplified from that used in prior work.

Cautions

None.

Special Administration Information

Administrators should highlight that youth can check more than one box if they have more than one very important adult.

How to Score

Responses on this measure can be used flexibly to assess: (1) the number of different categories of mentor-like adults in the youth’s life (based on the number of categories endorsed); (2) the presence of a non-parental mentor-like adult in the youth’s life (based on whether a category other than “My parent or other person who raises me” is endorsed); and (3) the youth’s view of the program mentor as filling the role of a Very Important Adult (based on whether “A mentor through this program” is endorsed). Response options can be changed as desired. However, retaining at least a few categories in addition to “A mentor through this program” is advised as this helps to ensure that youth do not feel that the “right” answer is to endorse their program mentor.
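The three scoring options above can be sketched as follows. The category labels here are shorthand for the measure’s checkboxes, not its exact wording.

```python
# Three scoring options for the Very Important Adult measure, per the text:
# (1) number of endorsed categories, (2) presence of a non-parental
# mentor-like adult, (3) whether the program mentor was endorsed.
# Category labels are illustrative shorthand, not the measure's wording.

PARENT = "parent"
MENTOR = "program mentor"
NONE = "no very important adult"

def via_scores(endorsed):
    endorsed = set(endorsed) - {NONE}
    return {
        "num_categories": len(endorsed),                        # option 1
        "has_nonparental_via": int(bool(endorsed - {PARENT})),  # option 2
        "mentor_is_via": int(MENTOR in endorsed),               # option 3
    }

print(via_scores({"parent", "teacher", "program mentor"}))
```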

How to Interpret Findings

A youth who endorses at least one category of non-parental adults is considered to have an adult in his or her life who fills a mentor-like role.

Access and Permissions

The measure is available for non-commercial use with no charge and is provided here.

Alternatives

Other available measures provide more in-depth assessments of a youth’s relationships with important adults. One such measure includes separate scales asking about each of four distinct functional roles that may be addressed in a youth’s relationship with an important non-familial adult: Supporter, Model/Compass, Challenger, and Connector. This measure consists of 14 items.

  • Cited Literature
    • Herrera, C., Grossman, J. B., Kauh, T. J., Feldman, A. F., & McMaken, J. (2007). Making a difference in schools: The Big Brothers Big Sisters School-based Mentoring Impact Study. Philadelphia, PA: Public/Private Ventures. (Accessible here.)

Interpersonal Relationships

Student-Teacher Relationship Quality 

This 6-item measure assesses the extent to which youth are concerned about their relationships with their teachers, enjoy being with them, and are emotionally involved in these relationships. Sample items include: “I care what my teachers think of me,” “I try to get along with my teachers,” and “I always try hard to earn my teachers’ trust.” Response options are Not at all true, Not really true, Sort of true, True, or Very true.


Scale
Hemingway Measure of Adolescent Connectedness (MAC) — Connectedness to Teachers subscale

What it measures
A youth’s feelings of connection to their teachers.

Intended age range
11- to 18-year-olds (grades 6-12); versions for pre-adolescents (grades 3-6) and college students are also available.

Rationale
This measure was selected because of its grounding in theory, potential to use with youth of various ages, evidence of reliability and validity, and brevity. One evaluation of a school-based mentoring program also supports the potential for mentoring to be linked to improvements over time on this outcome. A teacher report form of the subscale is also available.

Cautions
The items in this measure focus on youth’s feelings of connectedness to teachers in general, not on a young person’s relationship with a specific teacher. Please see the suggested alternative scale to assess the multiple dimensions that may characterize a youth’s relationship with an individual teacher.

Special administration information
None.

How to score
Each item is scored from 1 (Not at all true) to 5 (Very true). The item, “I do not get along with some of my teachers” is reverse coded. A total score is computed by averaging ratings across all six items.
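For programs that tally responses in a spreadsheet export or a short script, the scoring rule above can be sketched as follows. This is a minimal illustration, not an official scoring tool; the function name and the position of the reverse-coded item are assumptions made for the example.

```python
def score_connectedness(ratings, reverse_positions=(5,)):
    """Average 1-5 ratings after reverse-coding the flagged items.

    ratings: one youth's responses in item order, each 1-5.
    reverse_positions: zero-based positions of reverse-coded items
    (hypothetical here: assumes the negatively worded item is last).
    On a 1-5 scale, reverse-coding maps a rating r to 6 - r.
    """
    scored = [6 - r if i in reverse_positions else r
              for i, r in enumerate(ratings)]
    return sum(scored) / len(scored)

# Example: five positive items rated 4, and the reverse-coded item
# rated 2 (the youth disagrees that they do not get along with
# teachers, which reverse-codes to 4).
score_connectedness([4, 4, 4, 4, 4, 2])  # returns 4.0
```

The same reverse-then-average pattern applies to the other averaged subscales in this Toolkit; only the number of items and the reverse-coded positions change.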

How to interpret findings
Higher scores reflect stronger connectedness to teachers.

Access and permissions
The measure is available for non-commercial use with no charge and can be accessed online here as part of the manual for the Hemingway Measure of Adolescent Connectedness. Prior to use, the author requests that you review the terms of use (see p. 9 of the manual) and email him to indicate that these are acceptable (michaelkarcher@mac.com). Spanish, French, and Chinese language versions can be found on the Hemingway measure’s website. A ready-to-use version is also available here.

Alternatives
Those interested in a measure of the positive and negative dimensions of a young person’s relationship with an individual teacher might consider the Inventory of Teacher-Student Relationships (IT-SR). The IT-SR assesses three dimensions of these relationships: communication, trust, and alienation. More information on this measure is available here. A ready-to-use version is also available here.

  • Cited Literature

    Karcher, M. J., & Sass, D. (2010). A multicultural assessment of adolescent connectedness: Testing measurement invariance across gender and ethnicity. Journal of Counseling Psychology, 57(3), 274–289. https://doi.org/10.1037/a0019357

    Murray, C., & Zvoch, K. (2011). The inventory of teacher-student relationships: Factor structure, reliability, and validity among African American youth in low-income urban schools. The Journal of Early Adolescence, 31(4), 493-525.  https://doi.org/10.1177/0272431610366250

Interpersonal Relationships

Peer Relationship Quality

This measure consists of 6 items that assess the extent to which the youth feels positive about his or her peers and enjoys working with peers on projects and school-related tasks.

Sample items include: “I like pretty much all of the other kids in my grade,” “I like working with my classmates,” and “I get along well with the other students in my classes.” Each item is rated on a 5-point scale: Not at all, Not really, Sort of, True, or Very true.

Scale

Hemingway Measure of Adolescent Connectedness (MAC) — Connectedness to Peers subscale

What It Measures

A youth’s positive feelings of connection with his/her peers.

Intended Age Range

11- to 18-year-olds (grades 6-12); versions for pre-adolescents (grades 3-6) and college students are also available.

Rationale

This measure was selected because of its grounding in theory, wide developmental applicability, evidence of reliability and validity, and potential for improvement through mentoring (i.e., significant impacts on this outcome in an evaluation of a school-based mentoring program).

Cautions

The items in this measure focus solely on children’s connectedness to peers at school rather than to peers more generally, and youth may function differently in these two contexts.

Special Administration Information

None.

How to Score

Each item is scored on a 5-point scale from 1 (Not at all) to 5 (Very true). The item “My classmates often bother me” is a reverse coded item. A total score is computed by averaging across all items.

How to Interpret Findings

Higher scores reflect stronger connectedness to peers at school.

Access and Permissions

The measure is available for non-commercial use with no charge and can be accessed online here as part of the manual for the Hemingway Measure of Adolescent Connectedness. Prior to use, the author requests that you review the terms of use (see p. 9 of the manual) and email him to indicate that these are acceptable (michaelkarcher@mac.com). Spanish, French, and Chinese language versions can be found on the Hemingway measure’s website. A ready-to-use version is also available here.

Alternatives

Those interested in a measure of peer acceptance might consider the Peer Affiliation and Social Acceptance (PASA) measure. It is a single item administered to a child, mother, father, or teacher and has demonstrated a moderate association with peer-rated acceptance. More information on this measure is available here.

  • Cited Literature
    • Karcher, M. J., & Sass, D. (2010). A multicultural assessment of adolescent connectedness: Testing measurement invariance across gender and ethnicity. Journal of Counseling Psychology, 57, 274–289. http://dx.doi.org/10.1037/a0019357

Interpersonal Relationships

Loneliness

This measure consists of 9 items. Sample items include: “I have nobody to talk to” and “I’m lonely.” Each item is rated on a 5-point scale: Not true at all, Hardly ever true, True sometimes, True most of the time, or Always true. The LQ-Short was modified from the original Loneliness Questionnaire (LQ) by removing the reverse-coded items (items worded positively), 8 filler items, and one item (“I feel lonely”) that overlapped with another item (“I’m lonely”).

Scale

Loneliness Questionnaire — Short Version (LQ-Short)

What It Measures

A youth’s level of loneliness.

Intended Age Range

8- to 18-year-olds.

Rationale

The LQ-Short was selected because of its brevity, evidence of reliability and validity, and direct assessment of loneliness without reverse-coded items.

Cautions

The reliability and validity of the LQ-Short have been assessed in only a single study. Results from this study are, however, consistent with validation findings for the original version of the LQ.

Special Administration Information

None.

How to Score

Each item is scored on a 5-point scale from 1 (Not true at all) to 5 (Always true). The total score is computed by averaging across all items.

How to Interpret Findings

Higher scores on the LQ-Short reflect a greater reported sense of loneliness.

Access and Permissions

Both the LQ-Short and full LQ measures are available for non-commercial use with no charge. The manuscript describing the LQ-Short measure and items can be found here. The manuscript listing the full set of LQ items is available here.

Alternatives

The Roberts Revision of the UCLA Loneliness Scale (RULS-8) is a good alternative for those interested in measuring loneliness in adolescence. This 8-item measure has demonstrated reliability and validity in middle school and high school samples and has been used to assess loneliness in Mexican American youth. A list of items and a description of the measure can be found in the original publication, which is available here.

  • Cited Literature
    • Ebesutani, C., Drescher, C. F., Reise, S., Heiden, L., Hight, T. L., Damon, J., & Young, J. (2012). The Loneliness Questionnaire–short version: An evaluation of reverse-worded and non-reverse worded items via item response theory. Journal of Personality Assessment, 94, 427–437. http://dx.doi.org/10.1080/00223891.2012.662188
    • Asher, S. R., Hymel, S., & Renshaw, P. D. (1984). Loneliness in children. Child Development, 55, 1456–1464. http://dx.doi.org/10.2307/1130015

Interpersonal Relationships

Community Connectedness

This scale consists of 5 items that assess the extent to which the youth feels a sense of connectedness to his or her community, perceives being cared for, trusted, and respected by adults, and feels that this sense of connectedness is reciprocated. Sample items include: “I trust most of the people in my town,” and “Adults in my town respect what people my age think.” Each item is rated on a 5-point scale: Strongly disagree, Disagree, Unsure, Agree, or Strongly agree.

Scale

Community Engagement and Connections Survey – Connection to Community Subscale

What It Measures

A youth’s feelings of connectedness to his or her community.

Intended Age Range

This scale has been used with adolescents from Grade 8 upwards and is most appropriate for adolescent populations.

Rationale

This measure was selected based on evidence of reliability and validity and potential sensitivity to improvements in community connectedness that may occur through mentoring.

Cautions

None.

Special Administration Information

Youth in different geographical regions may refer to their communities using different terms (e.g., “town”, “city”, “community”). The term “town” can be replaced or augmented with more appropriate terms as needed (e.g., “…town or city”).

How to Score

Each item is scored on a 5-point scale ranging from 1 (Strongly disagree) to 5 (Strongly agree). The subscale score is computed by summing ratings across all 5 items after reverse scoring the one negatively worded item (“Adults in my town don’t care about people my age”).
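Unlike the averaged subscales elsewhere in this Toolkit, this subscale is summed. A minimal sketch of that rule follows; the function name and the position of the negatively worded item are assumptions for the example.

```python
def score_community_connection(ratings, reverse_position=4):
    """Sum five 1-5 ratings after reverse-scoring the one negatively
    worded item (its position here is an assumption for the example;
    on a 1-5 scale, reverse-scoring maps a rating r to 6 - r).
    Subscale totals range from 5 to 25.
    """
    scored = [6 - r if i == reverse_position else r
              for i, r in enumerate(ratings)]
    return sum(scored)

# Example: four positive items rated 4, 5, 3, 4 and the negatively
# worded item rated 2 (disagreement, which reverse-scores to 4).
score_community_connection([4, 5, 3, 4, 2])  # returns 20
```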

How to Interpret Findings

Higher scores reflect stronger feelings of connectedness to the community.

Access and Permissions

The scale is available for non-commercial use with no charge and is provided here.

Alternatives

Those interested in a measure of connectedness to the youth’s more immediate neighborhood could consider the Neighborhood Connectedness subscale of the Hemingway Measure of Adolescent Connectedness (MAC) for adolescents (Grades 6 and up) or the equivalent subscale of the Pre-Adolescent Measure of Adolescent Connectedness (PreMAC). The items in these scales measure the youth’s feelings about the neighborhood in which they reside. More information on the measure is available here.

  • Cited Literature
    • Whitlock, J. (2007). The role of adults, public space, and power in adolescent community connectedness. Journal of Community Psychology, 35, 499-518. http://dx.doi.org/10.1002/jcop.20161

Academics

Academic outcomes include performance in school, acquired skills in different subject areas (e.g., reading, math), longer-term levels of educational attainment, and attitudes and behaviors that can support school success. Many mentoring programs include academic outcomes in their logic model or theory of change as one of their most central goals for youth.

In fact, a recent survey of mentoring programs in Illinois found that academic success was a top priority for over 80% of these programs.1 This is likely due to the strong policy relevance of academic outcomes—youth “success” is often measured, at least in part, through their ability to succeed in school and in related distal milestones, such as graduation from high school and acceptance into post-secondary education institutions. Mentoring also has a solid track record of benefiting youth in academic areas. DuBois et al.’s recent meta-analysis2, which synthesized the results of 73 evaluations of mentoring programs, found that, as a group, these studies showed evidence of positive effects on academic outcomes (e.g., grades, attendance, standardized test scores). Although modest in size (i.e., what would be statistically considered “small” effects), these benefits are nonetheless noteworthy and of a magnitude that would be widely considered to have policy relevance.

But deciding what to measure in this domain can be challenging. Affecting more “objective” measures of academic achievement or performance (such as official grades on report cards or standardized test scores) is often programs’ ultimate goal. Yet these measures can be difficult and costly to gather, and it can be hard to ensure they are assessed in comparable ways across schools and districts. Many programs thus rely on youth reports of their academic performance as well as attitudinal or behavioral measures of factors that are associated with either performance or other longer-term goals like high school or college graduation.

One of the academic outcomes selected for inclusion in this Toolkit is academic performance, due to its direct policy relevance. The other four outcomes are attitudinal factors (growth mindset, academic self-efficacy, school engagement, and school connectedness) that show promising evidence of both influencing academic achievement and potentially being shaped by mentoring. As discussed, the recommended measures are not without limitations. Self-reported grades, for example, are clearly not a perfect proxy for achievement and other long-term outcomes of interest. School records can provide a more definitive measure of school success, but they can be costly and challenging to collect and analyze, particularly when assessing youth across schools and grade levels. Nevertheless, when utilized in a well-designed evaluation, the selected measures should give programs a good sense for whether the youth they are serving are benefiting academically.

Consideration was also given to reviewing measures for other important school-related factors, including educational aspirations, perceived value of school, and attendance. Some of these will likely be included in future updates. It is also worth noting that the recommended measure of school connectedness touches on bonding. School misbehavior and truancy, furthermore, are considered in the Problem Behavior section of the Toolkit.

When using any academic measure, it is important to understand that some academic outcomes change naturally over time, without intervention. This makes a comparison group of non-mentored, but otherwise similar youth essential when testing for program effects on these outcomes. For example, youth grades, on average, decrease during the transition from elementary to middle school.3 Thus, even if programs do not see improvement over time on this outcome, youth may actually be benefiting, relative to what would be expected during that developmental period. As a case in point, in the Public/Private Ventures evaluation of the BBBS Community-Based Mentoring program, mentored youth actually declined slightly in their self-reported grades over time.4 However, the control group declined more during the same period, yielding an overall positive impact of mentoring. The study was thus able to show that mentoring helped to prevent some of this steep decline. Without a comparison group, this benefit would have been missed.
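The comparison-group logic above can be illustrated with a toy calculation. All numbers below are invented for illustration only; they are not figures from the cited evaluation.

```python
# Hypothetical average self-reported grade scores (1-5 scale)
# at baseline ("pre") and follow-up ("post").
mentored_pre, mentored_post = 3.4, 3.3   # slight decline
control_pre, control_post = 3.4, 3.0     # steeper decline

mentored_change = mentored_post - mentored_pre   # about -0.1
control_change = control_post - control_pre      # about -0.4

# The estimated program effect is the difference between the two
# changes: mentored youth declined about 0.3 points less than
# comparable non-mentored youth -- a positive impact even though
# both groups' scores went down.
program_effect = mentored_change - control_change
```

Without the control group's change, the mentored group's small decline would look like a null or negative result; the comparison is what reveals the benefit.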

Academic Performance

The recommended measure of academic performance focuses on grades. Grades can be collected through self-reports of performance or direct school records. School grades are of great interest to funders, policy makers, and potential program partners—especially for school-based programs whose partners’ main goal is often to foster youth’s academic skills. Even as early as elementary school, grades predict high school dropout and college attendance.5,6 Yet, asking youth to report on their own grades can be challenging. Timing issues (i.e., the last report card may have been issued several months before your follow-up assessment), framing differences (i.e., many schools have different grading systems and some do not use traditional grades at all), and potential challenges with memory (i.e., children may simply not remember their grades accurately) make it far from a perfectly “objective” measure. These factors also help explain why youth aren’t always completely accurate in their reporting of their own grades. One study,7 for example, found a moderate correlation (about .5) between reported grades and actual grades in all but one subject area (Language Arts/Reading).i However, similar to other studies,8 it also found that lower-performing students (which many mentoring programs target) and younger children tend to be less accurate. Thus, self-reported measures of grades should be used with these cautions in mind and considered mainly when resources preclude gathering more objective measures. At the same time, collecting school records also comes with several steps and potential challenges, including: developing a relationship with associated schools or districts; ensuring that you have signed permission for collecting these records; combining grades across multiple grading systems; and interpreting change over time or across different schools.

 

Growth Mindset

A “growth mindset” is the belief that intelligence can be improved with effort. This contrasts with a “fixed” mindset which holds that intelligence is a stable trait that can’t be changed. Research, most notably by Carol Dweck, suggests that a growth mindset can contribute to better academic achievement. One study,9 for example, measured the growth mindset of 373 7th graders and followed these youth for two years. They found that the students with a growth mindset were more likely to report focusing on effort and learning and not giving up in the face of challenges, whereas the students with a fixed mindset were more likely to report giving up easily and ignoring feedback. Over time, despite similar math skills at the start of the study, those youth with growth mindsets outperformed their peers. This same study tested an intervention which taught growth mindset to a group of students and compared their progress to a randomly assigned control group not receiving the intervention. Findings indicated that the intervention prevented the declining grades seen in the control group. Thus, growth mindset shows evidence of both promoting academic success and being a malleable trait that—with intervention—can be changed over time. Evidence of the ability of interventions to shape growth mindset thus far is limited to those that target this kind of cognitive process. Programs are thus advised to keep this caveat in mind when deciding whether to assess growth mindset as a potential outcome.

 

Academic Self-Efficacy

Academic self-efficacy refers to youths’ confidence in their performance capabilities related to schoolwork, including not only overall abilities but also their capacity to succeed at tasks (e.g., homework) with perseverance and effort. Numerous studies support links between academic self-efficacy and both academic performance and persistence.10,11,12 Efficacy beliefs about a task or activity are thought to influence how an individual will approach that task, including how much effort the individual will invest as well as how resilient he/she will be and how long he/she will persevere when confronting academic challenges.11 In line with these ideas, studies find that, when holding ability constant, students with high academic self-efficacy put more effort into their schoolwork, are more cognitively engaged in school, and use more effective self-regulatory strategies relative to students with lower academic self-efficacy.11,13 Research also points to academic self-efficacy as an important mediator of the effects of knowledge and skills on subsequent academic performance.11 Academic self-efficacy has received relatively little consideration in research on mentoring, although one study indicates that school-based mentoring has the potential to positively influence self-perceptions of academic ability (i.e., how capable one is relative to other students).14

 

School Engagement

Although definitions vary, for purposes of this toolkit school engagement refers to a young person’s level of behavioral engagement and active participation with learning activities in the classroom, including effort, attention, and contribution to classroom discussions.15,16 More active participation in classroom learning activities among children is associated positively with academic achievement and negatively with skipping school and dropping out.15,16 A number of studies suggest that mentoring and other supportive relationships with adults within the school setting (e.g., teachers) may promote academic engagement among youth.17,18,19 The potential for mentoring received outside of school to offer similar benefits is not clear.

 

School Connectedness

School connectedness is the youth’s feelings of connection to the school environment. Researchers have defined it in many different ways—some include factors like feelings of safety and support from teachers and peers, whereas others include aspects of school bonding, school climate, engagement, and involvement.20 Studies support associations (when one item goes up or down, another follows the same trend) between school connectedness and a wide range of youth outcomes including school achievement and overall health status.21 It is also a potential protective factor, exhibiting negative associations with cigarette, alcohol21 and drug use,22 delinquency and gang membership,22 sexual activity,22 emotional distress,23 violence,22 and suicidality.23 Michael Karcher’s work24,25 further suggests that participation in school-based mentoring can improve school connectedness, hinting that it may be a particularly relevant outcome in school-based programs.


i This difference may have been caused, in part, by the fact that some youth may have had only one, both or neither of these subjects in school, which could affect how they rated their performance in this area and how this rating was associated with their actual performance.

  • Cited Literature
    1. DuBois, D. L., Felner, J., & O’Neal, B. (2014). State of mentoring in Illinois. Chicago, IL: Illinois Mentoring Partnership. Retrieved from http://ilmentoring.org/images/pdf/SoM-Full-Report.pdf
    2. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    3. Gutman, L. M., & Midgley, C. (2000). The role of protective factors in supporting the academic achievement of poor African American students during the middle school transition. Journal of Youth and Adolescence, 29, 223–249. http://dx.doi.org/10.1023/A:1005108700243
    4. Tierney, J. P., Grossman, J. B., & Resch, N. L. (1995). Making a difference: An impact study of Big Brothers/Big Sisters. Philadelphia, PA: Public/Private Ventures. Retrieved from http://ppv.issuelab.org/resources/11972/11972.pdf
    5. Lloyd, D. N. (1978). Prediction of school failure from third-grade data. Educational and Psychological Measurement, 38, 1193–1200. http://dx.doi.org/10.1177/001316447803800442
    6. Entwisle, D. R., Alexander, K. L., & Olson, L. S. (2005). First grade and educational attainment by age 22: A new story. American Journal of Sociology, 110, 1458–1502. http://dx.doi.org/10.1086/428444
    7. Teye, A. C., & Peaslee, L. (2015). Measuring educational outcomes for at-risk children and youth: Issues with the validity of self-reported data. Child & Youth Care Forum, 44, 853–873. http://dx.doi.org/10.1007/s10566-015-9310-5
    8. Kuncel, N. R., Credé, M., & Thomas, L. L. (2005). The validity of self-reported grade point averages, class ranks, and test scores: A meta-analysis and review of the literature. Review of Educational Research, 75, 63–82. http://dx.doi.org/10.3102/00346543075001063
    9. Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development, 78, 246–263. http://dx.doi.org/10.1111/j.1467-8624.2007.00995.x
    10. Multon, K. D., Brown, S. D., & Lent, R. W. (1991). Relation of self-efficacy beliefs to academic outcomes: A meta-analytic investigation. Journal of Counseling Psychology, 38, 30–38. http://dx.doi.org/10.1037/0022-0167.38.1.30
    11. Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66, 543–578.
    12. Schunk, D. H., & Pajares, F. (2001). The development of academic self-efficacy. In A. Wigfield & J. Eccles (Eds.), Development of achievement motivation (pp. 15–31). San Diego, CA: Academic Press.
    13. Linnenbrink, E. A., & Pintrich, P. R. (2002). Motivation as an enabler for academic success. School Psychology Review, 31, 313–327.
    14. Herrera, C., Grossman, J. B., Kauh, T. J., Feldman, A. F., McMaken, J., & Jucovy, L. (2007). Making a difference in schools: The Big Brothers Big Sisters school-based mentoring impact study. Philadelphia, PA: Public/Private Ventures. Retrieved from https://eric.ed.gov/?id=ED503245
    15. Finlay, K. A. (2006). Quantifying school engagement: A research report. Denver, CO: National Center for School Engagement. Retrieved from http://schoolengagement.org/wp-content/uploads/2013/12/QuantifyingSchoolEngagementResearchReport-2.pdf
    16. Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59–109.
    17. Anderson, A. R., Christenson, S. L., Sinclair, M. F., & Lehr, C. A. (2004). Check & Connect: The importance of relationships for promoting engagement with school. Journal of School Psychology, 42(2), 95–113.
    18. Murray, C. (2009). Parent and teacher relationships as predictors of school engagement and functioning among low-income urban youth. The Journal of Early Adolescence, 29(3), 376–404.
    19. Woolley, M. E., & Bowen, G. L. (2007). In the context of risk: Supportive adults and the school engagement of middle school students. Family Relations, 56(1), 92–104.
    20. Libbey, H. P. (2004). Measuring student relationships to school: Attachment, bonding, connectedness, and engagement. Journal of School Health, 74, 274–283. https://eric.ed.gov/?id=EJ743600
    21. Bonny, A. E., Britto, M. T., Klostermann, B. K., Hornung, R. W., & Slap, G. B. (2000). School disconnectedness: Identifying adolescents at risk. Pediatrics, 106, 1017–1021. http://dx.doi.org/10.1111/j.1746-1561.2009.00415.x
    22. Catalano, R. F., Oesterle, S., Fleming, C. B., & Hawkins, J. D. (2004). The importance of bonding to school for healthy development: Findings from the Social Development Research Group. Journal of School Health, 74, 252–261. http://dx.doi.org/10.1111/j.1746-1561.2004.tb08281.x
    23. Resnick, M. D., Bearman, P. S., Blum, R. W., Bauman, K. E., Harris, K. M., Jones, J., . . . Ireland, M. (1997). Protecting adolescents from harm: Findings from the National Longitudinal Study on Adolescent Health. JAMA, 278, 823–832. http://dx.doi.org/10.1001/jama.1997.03550100049038
    24. Karcher, M. J. (2005). The effects of developmental mentoring and high school mentors’ attendance on their younger mentees’ self‐esteem, social skills, and connectedness. Psychology in the Schools, 42, 65–77. http://dx.doi.org/10.1002/pits.20025
    25. Karcher, M. J. (2008). The study of mentoring in the learning environment (SMILE): A randomized evaluation of the effectiveness of school-based mentoring. Prevention Science, 9, 99–113. http://dx.doi.org/10.1007/s11121-008-0083-z

Academics

Academic Performance

This measure consists of the following set of questions: Think back to the grades or marks you got on your most recent report card. Please check the box that shows how you did in each subject. The four subjects are listed as: “Math”, “English or Language Arts”, “Social Studies or History” and “Science.” Response choices are F (Not good at all), D (Not so good), C (Okay), B (Good), A (Excellent), or I don’t have this subject in school.

Scale

Academic Performance

What It Measures

The youth’s academic performance in four subject areas.

Intended Age Range

8- to 18-year-olds.

Rationale

Variations of this measure have been used in several large-scale mentoring studies and in the Youth Outcomes Survey administered by Big Brothers Big Sisters agencies nationwide, and at least one study has reported mentoring program impacts on this measure. Research comparing responses on this measure to actual report card data found modest correlations with actual overall GPA and with grades in individual subject areas. The version provided here is simplified from that used in prior work.

Cautions

Similar to findings in other studies comparing self-reported grades to actual grades, research using an earlier version of this measure found that relatively low performers (and younger youth) tended to be less accurate in their reports. Thus, findings for these groups should be viewed with caution. Also, as noted, although researchers have found modest correlations with actual grades, this association is not particularly strong. Thus, programs should consider self-reported grades only if they do not have easy access to report cards.

Special Administration Information

None.

How to Score

Responses are scored from 1 (F or Not good at all) to 5 (A or Excellent). Responses to individual items on this measure can be used to assess academic performance in individual subject areas. The youth’s overall academic performance, analogous to a GPA, can be computed by averaging the youth’s responses across all items.
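One wrinkle in computing the GPA-like average is the “I don’t have this subject in school” option, which should be excluded rather than scored. A minimal sketch, assuming responses are recorded as letter grades (the function name and the simplified response coding are assumptions for the example):

```python
# Point values for the letter-grade responses; the "I don't have
# this subject in school" option is deliberately absent so it is
# skipped rather than scored.
GRADE_POINTS = {"F": 1, "D": 2, "C": 3, "B": 4, "A": 5}

def overall_performance(responses):
    """Average responses across subjects, skipping any subject the
    youth reports not having. Returns None if no gradable responses.
    """
    points = [GRADE_POINTS[r] for r in responses if r in GRADE_POINTS]
    return sum(points) / len(points) if points else None

# Example: three graded subjects and one the youth does not take.
overall_performance(["A", "B", "C", "I don't have this subject"])
# returns 4.0
```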

How to Interpret Findings

Higher scores reflect better overall academic performance.

Access and Permissions

The measure is available for non-commercial use with no charge and is made available here.

Alternatives

Most available self-report measures of academic performance are very similar to this one or ask youth to report their actual GPA, which is more appropriate for older students. Options other than self-report include obtaining school records of actual grades and having teachers or parents report on the youth’s academic performance.

  • Cited Literature
    • Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: Mentoring experiences and outcomes for youth with varying risk profiles. New York, NY: A Public/Private Ventures project published by MDRC. (Accessible here.)

Academics

Growth Mindset for Intelligence

This measure, which is a revision of Carol Dweck’s original scale, consists of two subscales: Entity Self Beliefs (4 items) and Incremental Self Beliefs (4 items). Sample items include: “I don’t think I personally can do much to increase my intelligence” (Entity Self Beliefs) and “With enough time and effort I think I could significantly improve my intelligence level” (Incremental Self Beliefs). Youth respond on a 6-point scale: Strongly disagree, Disagree, Mostly disagree, Mostly agree, Agree, or Strongly agree.

Scale

Revised Implicit Theories of Intelligence (Self-Theory) Scale

What It Measures

The youth’s beliefs about his or her inability to change his or her intelligence (i.e., the absence of a “growth mindset”).

Intended Age Range

12- to 19-year-olds.

Rationale

The classic Dweck measure of growth mindset has been used much more often than the recommended measure. However, the personalized framing of items in the recommended scale (e.g., “I cannot change my own intelligence”) seems more amenable to change through mentoring program participation than the third-person framing in Dweck’s measure (“People in general cannot change their intelligence”). Scores on this newer measure also have been found to predict several key academic measures (e.g., truancy, disengagement, self-reported grades) above and beyond scores on the original Dweck scale.

Cautions

Younger youth and those with less developed cognitive abilities may experience difficulty responding to questions on this topic.

Special Administration Information

None.

How to Score

Items are scored from 1 (Strongly disagree) to 6 (Strongly agree). The youth’s score on the measure is obtained by reverse scoring the 4 items on the Incremental Self Beliefs scale, then averaging ratings across all 8 items.
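Because this measure uses a 6-point scale, reverse scoring maps a rating r to 7 - r (not 6 - r as on the 5-point scales elsewhere in this Toolkit). A minimal sketch of the rule above, with a hypothetical function name:

```python
def score_self_theory(entity_items, incremental_items):
    """Score the Revised Implicit Theories of Intelligence scale.

    entity_items: four Entity Self Beliefs ratings, each 1-6.
    incremental_items: four Incremental Self Beliefs ratings, each
    1-6, which are reverse-scored (r -> 7 - r) before averaging.
    Higher scores reflect stronger fixed (entity) beliefs.
    """
    reversed_incremental = [7 - r for r in incremental_items]
    all_items = list(entity_items) + reversed_incremental
    return sum(all_items) / len(all_items)

# Example: agreement with entity items, disagreement with
# incremental items -- a fairly fixed-mindset response pattern.
score_self_theory([5, 5, 4, 4], [2, 2, 3, 3])  # returns 4.5
```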

How to Interpret Findings

Higher scores reflect a stronger belief on the part of the youth that he or she can’t do much to change his or her own intelligence.

Access and Permissions

The measure is available for non-commercial use with no charge and is made available here.

Alternatives

Dweck’s original measure of “growth mindset” is also a viable option and is noted as appropriate for youth 10 years and older. The full measure can be found here. A shorter 3-item version that includes only the items referring to fixed views of intelligence can be found in Dweck’s book, Self-theories: Their Role in Motivation, Personality, and Development.

  • Cited Literature
    • De Castella, K., & Byrne, D. (2015). My intelligence may be more malleable than yours: The Revised Implicit Theories of Intelligence (Self-Theory) Scale is a better predictor of achievement, motivation, and student disengagement. European Journal of Psychology of Education, 30, 245–267. http://dx.doi.org/10.1007/s10212-015-0244-y

Academics

Academic Self-Efficacy

This scale consists of 5 items. Sample items include: “I’m certain I can master the skills taught in class this year” and “I can do even the hardest work in this class if I try”. Youth respond on a 5-point scale: Not at all true, A little true, Somewhat true, Mostly true, or Very true. The options A little true and Mostly true are additions to the original version of the scale.

Scale

Patterns of Adaptive Learning Scales (PALS) – Academic Efficacy subscale

What It Measures

A youth’s perception of competence to do his/her class work.

Intended Age Range

5th to 9th graders

Rationale

A number of developmentally appropriate measures of academic efficacy exist for children and adolescents. However, available measures vary widely; some focus on specific subjects or classes, whereas others emphasize efficacy beliefs specifically related to homework completion or include questions about the youth’s ability to enlist social resources (e.g., help from teachers). The Academic Efficacy subscale provides an assessment of a youth’s beliefs about his or her academic abilities more generally. Additionally, the measure is relatively brief and has good evidence of reliability and validity in school-aged samples.

Cautions

None.

Special Administration Information

More general wording (e.g., “school work” versus “work in this class”) may be more appropriate for elementary school students, who tend to spend most of their day in the same classroom and/or with the same teacher. When used with older students, the questions can be modified to refer to a specific academic domain (e.g., Math), as domain- or task-specific questions may provide a more accurate assessment of self-efficacy, which can differ across domains or tasks.

How to Score

Each item is scored from 1 (Not at all true) to 5 (Very true). The total score is computed by averaging across all 5 items.

How to Interpret Findings

Higher scores reflect greater levels of perceived academic ability.

Access and Permissions

The measure is available for non-commercial use at no cost and can be accessed here; it is located on page 20 of the manual.

Alternatives

Programs interested in a more robust assessment of academic self-efficacy may want to consider Bandura’s Children’s Self-Efficacy Scale. This 23-item measure assesses three types of academic self-efficacy: self-efficacy for academic achievement, self-efficacy for self-regulated learning, and self-efficacy in enlisting social resources. A list of items and a description of the measure can be found in the original publication, which is available here.

  • Cited Literature
    • Midgley, C., Maehr, M. L., Hruda, L. Z., Anderman, E., Anderman, L., Freeman, K. E., Gheen, M., Kaplan, A., Kumar, K., Middleton, M. J., Nelson, J., Roeser, R. & Urdan, T. (2000). Manual for the Patterns of Adaptive Learning Scales. Ann Arbor, MI: University of Michigan. Retrieved from http://www.umich.edu/~pals/PALS%202000_V13Word97.pdf

Academics

School Engagement

This measure consists of 5 items reflecting a student’s effort, attention or interest, and persistence. Sample items include: “I try hard to do well in school” and “I pay attention in class”. Each item is rated on a 4-point scale: Not at all true, Not very true, Sort of true, or Very true.

Scale

Engagement versus Disaffection with Learning (EvsD) – Behavioral Engagement subscale

What It Measures

A youth’s active participation with learning activities in the classroom.

Intended Age Range

3rd to 10th graders

Rationale

The measure provides a short, easy-to-administer tool for assessing behavioral aspects of school engagement and has good evidence of reliability and validity in diverse samples of youth.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not at all true) to 4 (Very true). The total score is the average of all 5 items.

How to Interpret Findings

Higher scores reflect greater levels of behavioral engagement in school.

Access and Permissions

The measure is available for non-commercial use with no advance permission required. The full EvsD scale can be accessed online here.

Alternatives

For a more complete assessment of school engagement, programs may want to consider using the full EvsD scale. This 20-item measure assesses both behavioral (effort, interest) and emotional (connectedness, belongingness) features of engagement, and distinguishes between engagement and disaffection. A description of the measure can be found in the original publication. A measure of School Connectedness, which is similar to emotional engagement, is also provided here in this toolkit.

  • Cited Literature
    • Skinner, E. A., Kindermann, T. A., & Furrer, C. J. (2009). A motivational perspective on engagement and disaffection: Conceptualization and assessment of children’s behavioral and emotional participation in academic activities in the classroom. Educational and Psychological Measurement, 69, 493-525. http://dx.doi.org/10.1177/0013164408323233

Academics

School Connectedness

This measure consists of 6 items. Sample items include: “I work hard at school,” “I enjoy being at school,” and “I do well in school.” Each item is rated on a 5-point scale: Not at all true, Not really true, Sort of true, True, or Very true.

Scale

The Hemingway Measure of Adolescent Connectedness – School Connectedness subscale

What It Measures

How engaged youth are at school, how much they enjoy school, how successful they feel at school, and how much they value this success.

Intended Age Range

11- to 18-year-olds; versions for pre-adolescents (grades 3–6) and college students are also available.

Rationale

School connectedness measures vary widely in content. Many contain items that address feelings about safety while at school as well as rule fairness and teacher support. The Hemingway scale focuses more on aspects of school liking, engagement in school work, and feelings of success in the school context. These latter facets of school connectedness appear to be more amenable to change through mentoring because they focus more on youths’ perceptions and behaviors as opposed to more “objective” features of their school environment.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not at all true) to 5 (Very true). The total score on the measure is computed by reverse coding one item (“I get bored in school a lot”) and averaging across all items.

How to Interpret Findings

A higher score indicates stronger connectedness to school.

Access and Permissions

The measure is available for non-commercial use at no charge and can be accessed online here as part of the manual for the Hemingway. Prior to use, the author requests that you review the terms of use (see p. 9 of the manual) and email him to indicate that these are acceptable (michaelkarcher@mac.com). Spanish, French, and Chinese language versions can be found on the Hemingway measure’s website. A ready-to-use version is also available here.

Alternatives

Another frequently used measure is the School Connectedness Scale (SCS). The 6-item scale was originally developed for the National Longitudinal Study of Adolescent Health (Add Health) and has been used in several studies (in some cases omitting one of the original items). More information about this measure can be found here.

  • Cited Literature
    • Karcher, M. J., & Sass, D. (2010). A multicultural assessment of adolescent connectedness: Testing measurement invariance across gender and ethnicity. Journal of Counseling Psychology, 57, 274–289. http://dx.doi.org/10.1037/a0019357

Academics

Grades (records)

This measure consists of school records of student grades (e.g., letter grades).

Rationale

Evaluators and researchers frequently rely on student reports of their own grades. However, the accuracy of such reports can be influenced by a number of factors, including student age, cognitive ability, and actual school performance. For example, lower performing students tend to overestimate their grades, and younger students often have difficulty recalling them accurately. For these reasons, collecting records of grades is desirable when feasible.

Cautions

When planning to collect grades, be sure to set aside adequate time and staff resources. Accessing school records can be complex, particularly when working with multiple schools, grade levels, and academic subjects. Additionally, the format of grades may differ across or even within schools, which is critical to consider when using grade data for program evaluation or research purposes.

Access and Permissions

When working with an outside agency (e.g., a school or district) to collect school records, access to their data typically involves strict confidentiality conditions (see FERPA guidelines). You may be required to provide written parent permission with very specific information included (that can vary across schools or districts). Standard permission/consent language can be incorporated into program enrollment forms (see sample). Also, consider budgeting funds to reimburse time for school officials to gather needed data.

What to Collect

Suggestions for variables to request from schools or school districts can be found in a formatted data collection guide, here. If you are collecting report cards from parents/youth, this template also can be used to help structure your database for storage and analysis.

How to Collect

Sources: One option for collecting grades is to get them directly from parents/youth (e.g., copies of the students’ report cards). A small incentive for providing this information, when possible, may be helpful. Another option is to get grade records directly from schools or school district offices, in which case a formal MOU will typically be required. Schools or districts may agree to provide only “deidentified” data (i.e., data that do not include student names or other identifying information). If so, it is advisable to request that basic information, such as demographics (gender or race/ethnicity) or program participation status, be attached to each line of data before names are removed, so that this information can still be used in analyses. Care must be taken, however, to ensure that this attached information does not allow a youth to be inadvertently identified; a general rule of thumb is to ensure that the data, once obtained, do not include subgroups (e.g., male Native American youth) of fewer than 10 youth.
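The subgroup-size rule of thumb can be checked mechanically before finalizing a data request. The sketch below is illustrative only: the column names and the list-of-dictionaries format stand in for whatever structure your deidentified data actually take.

```python
# Flag demographic subgroups smaller than a minimum cell size, so they
# can be collapsed or dropped before data are requested or shared.
from collections import Counter

def small_subgroups(records, keys, min_size=10):
    """Return subgroup combinations (and their counts) below min_size."""
    counts = Counter(tuple(rec[k] for k in keys) for rec in records)
    return {combo: n for combo, n in counts.items() if n < min_size}

# Hypothetical deidentified records: one dict per youth, names removed,
# demographics attached to each line of data.
records = (
    [{"gender": "F", "race_ethnicity": "Black"}] * 14
    + [{"gender": "M", "race_ethnicity": "Native American"}] * 3
)
flagged = small_subgroups(records, ["gender", "race_ethnicity"])
print(flagged)  # {('M', 'Native American'): 3}
```

Any flagged combination identifies a cell small enough that a youth could be inadvertently identified once the data are linked with other information.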

Weighting: Be sure to consider how you will “weight” advanced, honors, and AP courses (e.g., a B in science = 4; a B in AP science = 5).

Additional Considerations: If you are interested in assessing changes over time, make sure to collect a “baseline” in the period before the student began program involvement. In addition, request grades for the entire time period of the student’s program participation and after the end of program involvement, as these data can help to assess longer-term program effects. And be sure to align grade requests with the specific timeframe of program enrollment for each student (e.g., one student may need grade information starting in the spring quarter of one school year through the fall of the next school year, whereas another student may have a very different time frame of participation). Typically, student report cards are issued quarterly but these records may vary in their meaning or purpose. For example, in some cases, quarterly grades may simply reflect student progress, while semester grades serve as the formal record of performance. If possible, you may also want to consider collecting grade information for a comparable group of students not participating in the mentoring program. These data can be used to compare outcomes for program and non-program participants, which is a more robust evaluation design than simply looking at changes over the course of program involvement for program participants.

How to Analyze

Scoring: It is important to work closely with school or district officials to interpret scoring differences across years, grade levels, and subjects. If available, an annual district interpretation guide can be useful.

Grades and Subjects: Grades are commonly reported as letters, which need to be converted to numeric values (e.g., F (Failing) = 1, A (Excellent) = 5). Additionally, schools may use different scales (e.g., O, S, N, & U; Pass/Fail; A, B, C, D, & F), which need to be converted to a single system so that youth can be compared across schools. Depending on program goals, you may want to combine related subjects into a broader field (e.g., Biology and Chemistry can both be designated as Science) or combine all course grades into a single GPA measure (which can be helpful when students are missing grades in particular subjects or their progress is being compared across multiple grades or scoring systems). Failing grades also can be used as a single indicator of performance (e.g., the number of failing grades a student has, or whether the student has any failing grades in a given time period).

How to Interpret Findings

When using GPA or individual course grades, higher scores indicate better academic performance. When using a count, or “presence or absence” of failing grades, a percent reduction can indicate improved academic performance. Remember that grade maintenance may be a positive outcome for some groups of youth (e.g., students sustaining the same level of grades before and after program participation during a period when grades often decline, such as the transition to junior high school).

Alternatives

While useful, grades are not the only formal measure of student academic performance. Standardized test scores, grade retention, descriptive (“open-ended”) classroom-based assessments, portfolios or individual projects also can be collected. Depending on your program goals, these measures of performance can provide depth, richness, and perspective and increase your ability to detect program effectiveness.

  • Cited Literature
    1. Karcher, M. J. (2008). The study of mentoring in the learning environment (SMILE): A randomized evaluation of the effectiveness of school-based mentoring. Prevention Science, 9(2), 99.
    2. Kuncel, N. R., Credé, M., & Thomas, L. L. (2005). The validity of self-reported grade point averages, class ranks, and test scores: A meta-analysis and review of the literature. Review of Educational Research, 75(1), 63–82.
    3. Teye, A. C., & Peaslee, L. (2015). Measuring educational outcomes for at-risk children and youth: Issues with the validity of self-reported data. Child & Youth Care Forum, 44(6), 853–873.

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Most studies focusing on the impact of youth mentoring programs have examined the effects on the young people served by those programs.1 Far less research attention has focused on the impact of mentoring on those serving as mentors to young people,1 and on others outside of, but nevertheless connected to, the mentoring relationship (e.g., parents, teachers, program staff). This section of the Toolkit focuses on measures that assess some of these potential benefits for both mentors (i.e., cultural humility, perspective taking, career identity development, and generativity) and parents/guardians (i.e., parenting stress and family functioning).

Potential Benefits to Mentors

A foundational element of youth mentoring is the interpersonal connection between mentor and mentee.2 Research suggests that reciprocity is an important part of this relationship and that mentors learn from their mentees just as mentees can learn from their mentors.3 Based on the Helper Therapy Principle,4 in which providing assistance to another person may benefit the helper as much as the person receiving assistance, it is reasonable to expect that engaging in a mentoring relationship to support a young person may also benefit the mentors themselves. A recent review of studies assessing how the mentoring experience may benefit mentors suggested outcomes in a wide range of domains: professional/career development, social/interpersonal relationships, cultural humility, mental health/well-being, personal growth, and generativity/purpose.1 Most of the published studies included in this review focused on younger adult mentors; other studies, however, suggest that a larger share of mentors represent middle adulthood stages (i.e., 35-44 and 45-54).5 Mentors in middle and older adulthood may benefit just as much as younger mentors, but perhaps in different ways. The following four domains were selected as areas of potential influence on mentors’ personal development. Included domains have research evidence in mentoring; two are particularly relevant for mentors at distinct developmental stages.

 

Cultural Humility (Mentor)

Mentors are often matched with youth whose backgrounds (e.g., race/ethnicity, socioeconomic status) and lived experiences differ greatly from their own.6 For example, formal mentoring programs predominantly serve youth of color, whereas most of the mentors in these programs are White.7 Although negative dynamics related to class and cultural differences can be linked with mentoring relationship failure,8-9 qualitative and mixed-methods research also suggest that engagement in the mentoring relationship can increase mentors’ awareness of, and sensitivity to, the cultural and socioeconomic circumstances of youth.10-12 These gains, in turn, may foster stronger mentoring relationships. Namely, research suggests that mentors with higher levels of cultural competence are ultimately more satisfied with their mentoring relationships,13 and youth who experience stronger mentor support for their ethnic-racial identity report increases in their relationship satisfaction over time.14

Perspective Taking (Mentor)

Rhodes’ conceptual model15 identifies empathy as a central component of effective mentoring relationships. An important aspect of empathy is perspective taking. Mentors engage in perspective taking when they try to relate to their mentees’ experiences and understand their mentees’ point of view.16 Efforts to take their mentees’ (and, similarly, their mentees’ parents’) perspectives can help mentors learn about people or communities with which they have previously not had close contact. This can expand their worldview and provide the foundation for developing cultural humility. Indeed, research has found that serving as a mentor is associated with the development of empathy and a greater understanding of others’ perspective.17 These gains may be important for the mentoring relationship: Mentors who report being able to engage with their mentees in a way that shows empathy and sincere perspective taking have reported greater satisfaction with their mentoring relationship.16 Mentor empathy (including perspective taking) has also been found to predict later mentoring relationship quality as reported by both the mentor and mentee.18


Career Identity Development (Mentor)

Older adolescence and emerging adulthood are developmental stages in which identity development is particularly salient.19 The development of career knowledge, goals, and skills is a significant component of identity development during these stages. Because many youth mentoring programs specifically recruit adolescents (e.g., high school students) or young adults (e.g., college students) to serve as mentors, career identity development is an important area of potential benefit for these mentors.

Mentoring can foster career identity development by providing mentors a view of what a career working with youth could look like, and whether and how their skills and interests fit with that career path. In fact, learning about a potential career is an important motivation for younger adult volunteers (under 40), who report higher levels of career-related motivation to volunteer as mentors than do older adults.20 Reflecting the importance of career identity, Anderson and DuBois’1 review notes that many of the existing studies on how mentors benefit from mentoring have examined benefits in career and academic domains. In fact, several studies have found that being a mentor is associated with exploring and clarifying professional goals.17,21-24 In addition, a long-term follow-up study of entrepreneurship mentoring found that almost 80% of former mentors reported that participation positively affected their career plans and helped prepare them for work.25


Generativity (Mentor)

Middle and later adulthood are developmental stages in which concern for others, particularly younger people, becomes more salient. Erikson26 defined generativity as “the interest in establishing and guiding the next generation” (p. 231). In Erikson’s final work, he described generativity as manifesting in the “strength of care” for the next generation, and passing on a legacy, being remembered, contributing to the community, and engaging in creative works.27 McAdams and de St. Aubin28 outlined a more complex model of generativity that identified generative concern and generative behavior (action). Subsequent research confirmed that both generative concern and behavior are associated with greater well-being in adults.29-30 More recently, Gruenewald et al.31 suggested an additional component of generativity, “generative achievement,” as the experience of having been generative. This is useful, because someone might engage in generative behaviors yet still not feel their desire for generativity has been achieved.

Youth mentoring offers an opportunity to engage in generative behaviors. In an evaluation of the Experience Corps program, adults aged 60 and over assigned to provide mentoring to participating youth reported higher levels of generative desire (or concern) and generative achievement relative to a control group that engaged in non-mentoring volunteering.31 Changes in generativity associated with being a mentor are not limited to studies with older mentors: increases in generativity are also associated with engagement in mentoring among emerging adults.32


Potential Benefits to Parents

A relationship between a mentor and a young person exists within a larger network of relationships that includes the youth’s parent or guardian as well as program staff.33 These relationship networks are bidirectional and thus can both influence, and be influenced by, the relationship between a mentor and a young person. Consistent with this framework, mentoring relationships may also affect other interconnected relational networks, such as siblings and other family members, friends of the mentored youth, and teachers.

Parents and guardians may be particularly prone to benefit from the mentoring relationship, given their interactions with the mentor and extensive interactions with the youth who are being mentored. Parents may benefit directly from the mentoring relationship, for example, by learning about community resources from the mentor and receiving support from the mentor in navigating school, medical, or other systems.34 They may also benefit indirectly, for example, by experiencing improvements in their relationship with their child.35-36 Mentors can support the parent-child relationship by serving as a sounding board to their mentees, coaching youth in strategies to communicate with caregivers, and promoting greater understanding between youth and their parents.37

These potential gains for parents can lead to additional gains for youth (see Rhodes15). For example, findings by Rhodes et al.38 suggest that the academic impacts of mentoring program involvement are achieved, at least in part, through improvements in the parent-child relationship.


Parent Stress (Parent)

One potential area of benefit for parents is the reduction of parenting-related stress. Parenting stress may be alleviated through positive changes in youth attitudes and behavior that improve the quality of their interactions with others, including those with their caregivers.39 Caregivers have also reported that having their child involved in mentoring provided respite from the stress of parenting demands.39

Studies support this potential benefit. Jent and Niec,40 for example, report decreases in parenting stress for parents of mentored youth with emotional and behavioral disorders. A recent meta-analysis of five empirical studies also found a reduction in parenting stress for parents of youth with internalizing/externalizing problems who received specialized mentoring services.41


Family Functioning (Parent)

Youth mentoring may also improve parents’ perceptions of overall family functioning through mechanisms similar to those noted for parenting stress.33,39 For example, the mentor may be able to serve as a resource for youth in more successfully navigating their relationship with caregivers,37 and may provide support and reinforcement for the caregiver’s parenting.39 Longitudinal research suggests that parents of youth matched with a mentor reported greater gains in family functioning relative to a non-matched comparison group.42 Benefits in family functioning have also been reported in a recent large-scale randomized controlled trial of Big Brothers Big Sisters community-based mentoring.43

  • Cited Literature

    1. Anderson, A. J., & DuBois, D. L. (2023). Are adults influenced by the experience of mentoring youth? A scoping review. Journal of Community Psychology, 51(3), 1032-1059. https://doi.org/10.1002/jcop.22954
    2. Rhodes, J. E. (2002). Stand by me: The risks and rewards of mentoring today’s youth. Harvard University Press.
    3. Lester, A. M., Goodloe, C. L., Johnson, H. E., & Deutsch, N. L. (2019). Understanding mutuality: Unpacking relational processes in youth mentoring relationships. Journal of Community Psychology, 47(1), 147–162. https://doi.org/10.1002/jcop.22106
    4. Riessman, F. (1965). The “helper” therapy principle. Social Work, 10(2), 27–32. https://www.jstor.org/stable/23708219
    5. Raposa, E. B., Dietz, N., & Rhodes, J. E. (2017). Trends in volunteer mentoring in the United States: Analysis of a decade of census survey data. American Journal of Community Psychology, 59(1–2), 3–14. https://doi.org/10.1002/ajcp.12117
    6. Herrera, C., DuBois, D. L., & Grossman, J. B. (2013). The role of risk: mentoring experiences and outcomes for youth with varying risk profiles. MDRC. https://files.eric.ed.gov/fulltext/ED544233.pdf
    7. Garringer, M., McQuillin, S., & McDaniel, H. (2017). Examining youth mentoring services across America: Findings from the 2016 National Mentoring Program Survey. MENTOR: The National Mentoring Partnership. https://doi.org/10.13140/RG.2.2.18166.70728
    8. Spencer, R. (2007). “It’s not what I expected”: A qualitative study of youth mentoring relationship failures. Journal of Adolescent Research, 22(4), 331–354. https://doi.org/10.1177/0743558407301915
    9. Spencer, R., McCormack, M. J., Drew, A. L., Gowdy, G., & Keller, T. E. (2022). (Not) minding the gap: A qualitative interview study of how social class bias can influence youth mentoring relationships. Journal of Community Psychology, 50(3), 1579–1596. https://doi.org/10.1002/jcop.22737
    10. Duron, J. F., Williams‐Butler, A., Schmidt, A. T., & Colon, L. (2020). Mentors’ experiences of mentoring justice‐involved adolescents: A narrative of developing cultural consciousness through connection. Journal of Community Psychology, 48(7), 2309–2325. https://doi.org/10.1002/jcop.22415
    11. Lee, J. M., Germain, L. J., Lawrence, E. C., & Marshall, J. H. (2010). “It opened my mind, my eyes. It was good.” Supporting college students’ navigation of difference in a youth mentoring program. Educational Horizons, 89(1), 33–46. http://www.jstor.org/stable/42926942
    12. Marshall, J. H., Lawrence, E. C., Lee Williams, J., & Peugh, J. (2015). Mentoring as service‐learning: The relationship between perceived peer support and outcomes for college women mentors. Studies in Educational Evaluation, 47, 38–46. https://doi.org/10.1016/j.stueduc.2015.07.001
    13. Suffrin, R. L. (2014). The role of multicultural competence, privilege, attributions, and team support in predicting positive youth mentor outcomes [Master’s thesis, DePaul University].
    14. Sánchez, B., Pryce, J., Silverthorn, N., Deane, K. L., & DuBois, D. L. (2019). Do mentor support for ethnic–racial identity and mentee cultural mistrust matter for girls of color? A preliminary investigation. Cultural Diversity and Ethnic Minority Psychology, 25(4), 505–514. https://doi.org/10.1037/cdp0000213
    15. Rhodes, J. E. (2005). A Model of Youth Mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 30–43). Sage Publications Ltd. https://doi.org/10.4135/9781412976664.n3
    16. Spencer, R., Pryce, J., Barry, J., Walsh, J., & Basualdo-Delmonico, A. (2020). Deconstructing empathy: A qualitative examination of mentor perspective-taking and adaptability in youth mentoring relationships. Children and Youth Services Review, 114, 105043. https://doi.org/10.1016/j.childyouth.2020.105043
    17. Haddock, S., Weiler, L., Krafchick, J., Zimmerman, T. S., McLure, M., & Rudisill, S. (2013). Campus Corps therapeutic mentoring: Making a difference for mentors. Journal of Higher Education Outreach and Engagement, 17(4), 225–256.
    18. Deane, K. L., Boat, A. A., Haddock, S. A., Henry, K. L., Zimmerman, T. S., & Weiler, L. M. (2022). The comparative roles of mentor self-efficacy and empathy in fostering relationship quality with youth. Journal of Youth and Adolescence, 51(4), 805–819. https://doi.org/10.1007/s10964-022-01584-7
    19. Branje, S., de Moor, E. L., Spitzer, J., & Becht, A.I. (2021). Dynamics of identity development in adolescence: A decade in review. Journal of Research on Adolescence, 31(4), 908– https://doi.org/10.1111/jora.12678
    20. Caldarella, P., Gomm, R. J., Shatzer, R. H., & Wall, D. G. (2010). School‐based mentoring: A study of volunteer motivations and benefits. International Electronic Journal of Elementary Education, 2(2), 199–216. Retrieved from https://www.iejee.com/index.php/IEJEE/article/view/248
    21. Reddick R. J., Griffin K. A., & Cherwitz, R. A. (2011). Viewpoint: Answering President Obama’s call for mentoring: It’s not just for mentees anymore. Planning for Higher Education, 39(4), 59–65.
    22. Schmidt, M. E., Marks, J. L., & Derrico, L. (2004). What a difference mentoring makes: Service learning and engagement for college students. Mentoring & Tutoring: Partnership in Learning, 12(2), 205–217. https://doi.org/10.1080/1361126042000239947
    23. Slaughter‐Defoe, D., & English‐Clarke, T. (2010). Mentoring in the Philadelphia GO‐GIRL program: Impact on Penn’s graduate school of education student mentors. Educational Horizons, 89(1), 80–92. https://www.jstor.org/stable/42926946
    24. Taussig, H. N., Culhane, S. E., Raviv, T., Fitzpatrick, L. E., & Hodas, R. W. (2010). Mentoring children in foster care: Impact on graduate student mentors. Educational Horizons, 89(1), 17–32. https://www.jstor.org/stable/42926941
    25. Newman, C. M., & Hernandez, S. A. (2011). Minding our business: Longitudinal effects of a service‐learning experience on alumni. Journal of College Teaching & Learning (TLC), 8(8), 39–48. https://doi.org/10.19030/tlc.v8i8.5321
    26. Erikson, E. H. (1950). Childhood and society. W. W. Norton and Company.
    27. Erikson, E. H., & Erikson, J. M. (1998). The life cycle completed. W.W. Norton and Company.
    28. McAdams, D. P., & de St. Aubin, E. (1992). A theory of generativity and its assessment through self-report, behavioral acts, and narrative themes in autobiography. Journal of Personality and Social Psychology, 62(6), 1003–1015. https://doi.org/10.1037/0022-3514.62.6.1003
    29. Keyes, C. L. M., & Ryff, C. D. (1998). Generativity in adult lives: Social structural contours, quality of life consequences. In D. P. McAdams & E. de St. Aubin (Eds.), Generativity and adult development: How and why we care for the next generation (pp. 227–263). American Psychological Association.
    30. Stewart, A. J., Ostrove, J. M., & Helson, R. (2001). Middle aging in women: Patterns of personality change from the 30s to the 50s. Journal of Adult Development, 8(1), 23–37. https://doi.org/10.1023/A:1026445704288
    31. Gruenewald, T. L., Tanner, E. K., Fried, L. P., Carson, M. C., Xue, Q.-L., Parisi, J. M., Rebok, G. W., Yarnell, L. M., & Seeman, T. E. (2016). The Baltimore Experience Corps Trial: Enhancing generativity via intergenerational activity engagement in later life. Journals of Gerontology: Psychological Sciences, 71(4), 661–670. https://doi.org/10.1093/geronb/gbv005
    32. Hastings, L. J., Griesen, J. V., Hoover, R. E., Creswell, J. W., & Dlugosh, L. L. (2015). Generativity in college students: Comparing and explaining the impact of mentoring. Journal of College Student Development, 56(7), 651–669. https://doi.org/10.1353/csd.2015.0070
    33. Keller, T. (2005). A systemic model of the youth mentoring intervention. Journal of Primary Prevention, 26(2), 169–188. https://doi.org/10.1007/s10935-005-1850-2
    34. Salazar, A. M., Haggerty, K. P., Walsh, S., Noell, B., & Kelley‐Siel, E. (2019). Adapting the friends of the children programme for child welfare system‐involved families. Child & Family Social Work, 24(4), 430–440. https://doi.org/10.1111/cfs.12622
    35. Tierney, J. P., Baldwin, J. G., & Resch, N. L. (1995). Making a difference: An impact study of Big Brothers Big Sisters. Public/Private Ventures. https://ppv.issuelab.org/resource/making-a-difference-an-impact-study-of-big-brothers-big-sisters-re-issue-of-1995-study.html
    36. Chan, C. S., Rhodes, J. E., Howard, W. J., Lowe, S. R., Schwartz, S. E. O., & Herrera, C. (2013). Pathways of influence in school-based mentoring: The mediating role of parent and teacher relationships. Journal of School Psychology, 51(1), 129–142. https://doi.org/10.1016/j.jsp.2012.10.001
    37. Billingsley, J. T., Rivens, A. J., Charity-Parker, B. M., Chang, S. H., Garrett, S. L., Li, T., & Hurd, N. M. (2022). Familial mentor support and Black youths’ connectedness to parents across adolescence. Youth & Society, 54(4), 547– https://doi.org/10.1177/0044118X211058215
    38. Rhodes, J. E., Grossman, J. B., & Resch, N. L. (2000). Agents of change: Pathways through which mentoring relationships influence adolescents’ academic adjustment. Child Development, 71(6), 1662– https://doi.org/10.1111/1467-8624.00256
    39. Keller, T. E., Overton, B., Pryce, J. M., Barry, J. E., Sutherland, A., & DuBois, D. L. (2018). “I really wanted her to have a Big Sister”: Caregiver perspectives on mentoring for early adolescent girls. Children and Youth Services Review, 88, 308–315. https://doi.org/10.1016/j.childyouth.2018.03.029
    40. Jent, J. F., & Niec, L. N. (2006). Mentoring youth with psychiatric disorders: The impact on child and parent functioning. Child & Family Behavior Therapy, 28(3), 43–58. https://doi.org/10.1300/J019v28n03_03
    41. La Valle, C. (2015). The effectiveness of mentoring youth with externalizing and internalizing behavioral problems on youth outcomes and parenting stress: A meta-analysis. Mentoring & Tutoring: Partnership in Learning, 23(2), 213–227. https://doi.org/10.1080/13611267.2015.1073565
    42. Erdem, G., DuBois, D. L., Larose, S., De Wit, D. J., & Lipman, E. L. (2015, November 11-14). The impact of mentoring on parental well-being and family functioning [Conference presentation]. National Council on Family Relations Annual Conference, Vancouver, British Columbia, Canada.
    43. DuBois, D. L., Herrera, C., Rivera, J., Brechling, V., & Root, S. (2022). Randomized controlled trial of the effects of the Big Brothers Big Sisters Community-Based Mentoring Program on crime and delinquency: Interim report of findings. University of Illinois Chicago. https://doi.org/10.25417/uic.20767438.v1

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Cultural Humility

This 31-item scale includes four subscales: empathic feeling and expression (15 items; e.g., “I express my concern about discrimination to people from other racial or ethnic groups”); empathic perspective taking (7 items; e.g., “I know what it feels like to be the only person of a certain race or ethnicity in a group of people”); acceptance of cultural differences (5 items; e.g., “I feel irritated when people of different racial or ethnic backgrounds speak their language around me”); and empathic awareness (4 items; e.g., “I am aware of how society differentially treats racial or ethnic groups other than my own”). Response options are: Strongly disagree that it describes me, Disagree that it describes me, Slightly disagree that it describes me, Slightly agree that it describes me, Agree that it describes me, and Strongly agree that it describes me.

Scale

Scale of Ethnocultural Empathy

What It Measures:

A mentor’s empathy toward other racial/ethnic groups.

Intended Age Range

11- to 18-year-olds.

Rationale

The Scale of Ethnocultural Empathy was selected because of its use in studies of youth mentoring, evidence of reliability and validity across cultures, and appropriateness for use with adolescent and adult mentors.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored on a 6-point scale from 1 (Strongly disagree that it describes me) to 6 (Strongly agree that it describes me). Items 1, 2, 5, 8, 10, 16, 17, 21, 27, 28, 29, and 31 are reverse coded (see Wang et al., 2003, p. 225). The total score is the average of all 31 items after reverse coding. Subscale scores are the average of the items within each subscale after reverse coding.
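For programs scoring responses electronically, the procedure above can be sketched in Python. This is a minimal illustration, not an official implementation; the function name and data layout are our own, though the reverse-coded item numbers follow Wang et al. (2003):

```python
# Sketch of Scale of Ethnocultural Empathy scoring: reverse code the 12
# negatively worded items, then average all 31 items (illustrative helper).

REVERSE_CODED = {1, 2, 5, 8, 10, 16, 17, 21, 27, 28, 29, 31}

def score_see(responses):
    """responses: dict mapping item number (1-31) to a rating from 1 to 6."""
    if sorted(responses) != list(range(1, 32)):
        raise ValueError("Expected one rating for each of items 1-31")
    adjusted = [
        7 - r if item in REVERSE_CODED else r  # on a 1-6 scale, reverse = 7 - rating
        for item, r in responses.items()
    ]
    return sum(adjusted) / len(adjusted)
```

Subscale scores follow the same pattern, averaging only the items that belong to the subscale.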

How to Interpret Findings

Higher scores on the Scale of Ethnocultural Empathy reflect a higher level of ethnocultural empathy.

Access and Permissions

The measure is available for non-commercial use at no charge. The original validation study for the scale includes the scale items and is available here.

Alternatives

The Miville-Guzman Universality-Diversity Scale – Short Form is a good alternative for those interested in measuring cultural humility across a wider range of identity areas (e.g., disability, race/ethnicity, culture). The 15-item measure has evidence of reliability and validity across different cultures and age groups. A list of items can be requested by contacting the author, Dr. Marie L. Miville (email: mlm2106@tc.columbia.edu).

  • Citations

    Fuertes, J. N., Miville, M. L., Mohr, J. J., Sedlacek, W. E., & Gretchen, D. (2000). Factor structure and short form of the Miville-Guzman Universality-Diversity Scale. Measurement and Evaluation in Counseling and Development, 33(3), 157–169. https://doi.org/10.1080/07481756.2000.12069007

    Wang, Y. W., Davidson, M. M., Yakushko, O. F., Savoy, H. B., Tan, J. A., & Bleier, J. K. (2003). The Scale of Ethnocultural Empathy: Development, validation, and reliability. Journal of Counseling Psychology, 50(2), 221–234. https://doi.org/10.1037/0022-0167.50.2.221

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Perspective Taking

This 7-item measure assesses the motivation and tendency to take another’s perspective. Sample items include: “I believe that there are two sides to every question and try to look at them both,” “Before criticizing somebody, I try to imagine how I would feel if I were in their place,” and “I sometimes find it difficult to see things from the ‘other guy’s’ point of view.” Response anchors in the original scale are “Does not describe me well” (0) to “Describes me very well” (4), with only these two endpoints labeled and the numbers 1, 2, and 3 shown as intermediate choices.

Scale

Perspective Taking Scale

What It Measures:

The extent to which an individual tries to take the perspective of others.

Intended Age Range

Individuals 12 and older.

Rationale

This measure was selected because it is the most commonly used assessment of perspective taking in the mentoring literature and has evidence of reliability and validity. There is also research on the influence of training on changes in perspective taking using this measure and the conditions under which these shifts are most likely to occur.

Cautions

Some studies suggest that people tend to agree on the importance of perspective taking and affirm their tendency to take others’ perspectives. Therefore, many individuals may score high on this measure, leaving limited room for improvement.

Special Administration Information

None.

How to Score

Each item is scored on a 5-point scale from 0 (Does not describe me well) to 4 (Describes me very well). The score is computed by averaging ratings across all 7 items after reverse coding the two negatively worded items (“I sometimes find it difficult to see things from the ‘other guy’s’ point of view” and “If I’m sure I’m right about something, I don’t waste much time listening to other people’s arguments”).
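Note that the reverse-coding formula depends on the scale’s endpoints: because this scale is anchored at 0 rather than 1, a reversed rating is 4 − r (not 7 − r as on a 1–6 scale). A short Python sketch, with the caveat that the positions of the two reverse-coded items below are placeholders rather than the published item order:

```python
# Sketch of Perspective Taking Scale scoring (illustrative only): 7 items
# rated 0-4; reverse code the two negatively worded items, then average.

REVERSE_CODED = {4, 7}  # hypothetical positions of the negatively worded items

def score_perspective_taking(ratings):
    """ratings: list of 7 ratings in item order, each 0-4."""
    if len(ratings) != 7 or not all(0 <= r <= 4 for r in ratings):
        raise ValueError("Expected 7 ratings between 0 and 4")
    adjusted = [
        4 - r if pos in REVERSE_CODED else r  # on a 0-4 scale, reverse = 4 - rating
        for pos, r in enumerate(ratings, start=1)
    ]
    return sum(adjusted) / 7
```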

How to Interpret Findings

Higher scores indicate a greater tendency to take others’ perspectives and a higher value placed on doing so.

Access and Permissions

The Perspective Taking Scale is available for non-commercial use at no charge and can be accessed online from the author here. The original study may be accessed here. The NMRC ready-to-use version can be accessed here.

Alternatives

None.

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Career Identity Development

This 36-item scale includes 12 subscales assessing different aspects of career preparedness. Four scales are of particular relevance to mentoring programs: Soft Skills (3 items; e.g., “I have many skills that I could apply to different occupational fields”); Career Confidence (3 items; e.g., “I am confident that I will achieve my occupational goals”); Career Clarity (3 items; e.g., “I know which occupational field I am intending to pursue”); and Self-Exploration (3 items; e.g., “I have often thought about what is important to me in an occupation”). Response options are Not true at all, Slightly not true, Moderately true, Mostly true, and Completely true.

Scale

Career Resources Questionnaire–Adolescent Version (CRQ-A)

What It Measures:

Career preparedness.

Intended Age Range

Adolescents and young adults (ages 14 to 22).

Rationale

This measure was selected based on its theory-informed comprehensive assessment of adolescent and young adult career preparedness and evidence of reliability and validity.

Cautions

Evidence of reliability and validity of the CRQ-A is reported in two studies published in a single report. Although item content appears relevant for young adults, the measure was only validated in a sample of adolescents.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Not true at all) to 5 (Completely true). Each subscale score is the average of the items that make up the subscale.

How to Interpret Findings

Higher scores indicate greater career preparedness in the area assessed by each subscale.

Access and Permissions

The scale is available for non-commercial use. The recommended subscales are available here.

Alternatives

The original Career Resources Questionnaire (CRQ) is a good alternative for mentoring programs interested in the impact of youth mentoring on adult mentors with more career experience. The CRQ was validated in a sample of US and German employees and university students.

  • Citations

    Marciniak, J., Hirschi, A., Johnston, C. S., & Haenggli, M. (2021). Measuring career preparedness among adolescents: Development and validation of the Career Resources Questionnaire—Adolescent Version. Journal of Career Assessment, 29(1), 164–180. https://doi.org/10.1177/1069072720943838

    Hirschi, A., Nagy, N., Baumeler, F., Johnston, C. S., & Spurk, D. (2018). Assessing key predictors of career success: Development and validation of the Career Resources Questionnaire. Journal of Career Assessment, 26(2), 338-358. https://doi.org/10.1177/1069072717695584

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Generativity

This 6-item measure assesses the extent to which individuals feel they are giving back to their community and supporting the next generation, and that they will leave a legacy or be remembered when they die. Sample items include: “I feel like I make a difference in my community” and “I feel like I am giving back.” Response options are Disagree strongly, Disagree somewhat, Disagree slightly, Agree slightly, Agree somewhat, and Agree strongly.

Scale

Generative Achievement

What It Measures:

The perception that one is contributing to the well-being of others, giving back to one’s community, and acting on concern for the next generation.

Intended Age Range

18 and older.

Rationale

This measure was selected because it captures the elements of caring, giving back to the community, and being remembered that are central to most measures of generativity. It has strong face validity (that is, the items appear to be relevant to mentor experiences), and in one mentoring program evaluation, the experience of participating as a mentor was associated with increases in this measure over time.

Cautions

This measure has been used in very few mentoring evaluations (i.e., one evaluation of a program working with older adult volunteer mentors; see citation below). It lacks evidence of validity in other program settings and with other age groups of mentors.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Disagree strongly) to 6 (Agree strongly). A total score is computed by averaging ratings across all 6 items.

How to Interpret Findings

Higher scores indicate greater perceptions of generative contributions.

Access and Permissions

The measure is available for non-commercial use at no charge. A formatted version can be found here.

Alternatives

Whereas the Generative Achievement Scale measures the feeling of having been generative, the Loyola Generativity Scale measures adults’ conscious concern for having a positive effect on the next generation. A formatted version of the Loyola Generativity Scale can be found here and related research here.

  • Citations

    Gruenewald, T. L., Tanner, E. K., Fried, L. P., Carlson, M. C., Xue, Q., Parisi, J. M., Rebok, G. W., Yarnell, L. M., & Seeman, T. E. (2016). The Baltimore Experience Corps trial: Enhancing generativity via intergenerational activity engagement in later life. The Journals of Gerontology: Series B, 71(4), 661-670. https://doi.org/10.1093/geronb/gbv005

    McAdams, D. P., & de St. Aubin, E. (1992). A theory of generativity and its assessment through self-report, behavioral acts, and narrative themes in autobiography. Journal of Personality and Social Psychology, 62(6), 1003–1015. https://doi.org/10.1037/0022-3514.62.6.1003

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Parenting Stress

This 18-item measure assesses the extent to which a parent feels stress and a lack of satisfaction in their parenting role. It includes two subscales: Parental Stress (10 items; e.g., “The major source of stress in my life is my child(ren)”); and Lack of Parental Satisfaction (8 reverse-coded items; e.g., “I am happy in my role as a parent”). Response options are Strongly Disagree, Disagree, Undecided, Agree, and Strongly Agree.

Scale

Parental Stress Scale

What It Measures:

Experiences of stress related to parenting.

Intended Age Range

Parents or caregivers of young people of any age.

Rationale

This measure was selected based on its relative brevity and evidence of reliability and validity in US and international samples.

Cautions

Earlier evidence supports using the Parental Stress Scale total score, but more recent evidence suggests the Parental Stress and Lack of Parental Satisfaction subscales should be scored and used independently (Nielsen et al., 2020). In addition, items 2 and 11 have performed poorly in some studies.

Special Administration Information

Each parent or caregiver should complete the scale separately. In households with multiple children, parents should respond based on their typical relationship with their children.

How to Score

Each item is scored from 1 (Strongly Disagree) to 5 (Strongly Agree). A Parental Stress Scale total score is calculated by summing all 18 items after reverse coding all items in the Lack of Parental Satisfaction subscale (i.e., 1, 2, 5, 6, 7, 8, 17, and 18).
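Unlike the averaged scores above, this total is a sum, so it ranges from 18 to 90. A Python sketch of the calculation (an illustrative helper, not official scoring code; the reverse-coded item numbers follow the text above):

```python
# Sketch of Parental Stress Scale total scoring: reverse code the 8 Lack of
# Parental Satisfaction items, then sum all 18 items (illustrative helper).

SATISFACTION_ITEMS = {1, 2, 5, 6, 7, 8, 17, 18}  # reverse coded before summing

def pss_total(responses):
    """responses: dict mapping item number (1-18) to a rating from 1 to 5."""
    if sorted(responses) != list(range(1, 19)):
        raise ValueError("Expected one rating for each of items 1-18")
    return sum(
        6 - r if item in SATISFACTION_ITEMS else r  # on a 1-5 scale, reverse = 6 - rating
        for item, r in responses.items()
    )  # total ranges from 18 (lowest stress) to 90 (highest stress)
```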

How to Interpret Findings

The Parental Stress Scale total score ranges from 18 to 90, with lower scores indicating lower levels of parental stress.

Access and Permissions

The scale is available for non-commercial use in its original form. Adaptations are not authorized without written permission from the developer. Additional details about the measure can be found here. A copy of the measure can be found here.

Alternatives

The Parenting Stress Index-4-Short form (PSI-4-SF) is a good alternative for mentoring programs interested in a more comprehensive assessment of parental stress. The PSI-4-SF is one of the most frequently used measures of parenting stress and has shown evidence of reliability and validity across many studies. It is available for purchase for about $223 per Short Form Kit, which includes 25 survey, record, and profile forms and a user manual.

  • Citations

    Berry, J. O., & Jones, W. H. (1995). The Parental Stress Scale: Initial psychometric evidence. Journal of Social and Personal Relationships, 12(3), 463-472. https://doi.org/10.1177/0265407595123009

    Nielsen, T., Pontoppidan, M., & Rayce, S. B. (2020). The Parental Stress Scale revisited: Rasch-based construct validity for Danish parents of children 2-18 years old with and without behavioral problems. Health and Quality of Life Outcomes, 18, 1-16. https://doi.org/10.1186/s12955-020-01495-w

Benefits of Mentoring for Mentors and Others Outside the Mentoring Relationship

Family Functioning

This 12-item measure assesses a family member’s perspective on their family’s overall functioning. The subscale covers several aspects of family health: problem solving, communication, roles, affective responsiveness and involvement, and behavioral control. Sample items include: “In times of crisis we can turn to each other for support” and “Making decisions is a problem for our family.” Response options are: Strongly agree, Agree, Disagree, and Strongly disagree.

Scale

Family Assessment Device (FAD) – General Functioning subscale

What It Measures:

A family’s overall health.

Intended Age Range

Ages 12 and up.

Rationale

This measure was selected because of its development in both clinical and nonclinical samples, its evidence of reliability and validity, and its demonstrated impacts in mentoring research. This subscale, like the other subscales of the Family Assessment Device, is appropriate for use with parents and other family members.

Cautions

None.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Strongly agree) to 4 (Strongly disagree). The subscale score is the average of all 12 items after reverse coding the 6 positively worded items.

How to Interpret Findings

Higher scores on the General Functioning subscale indicate higher levels of distress (i.e., worse functioning).

Access and Permissions

The measure is available for non-commercial use at no charge. The original validation study for the scale includes the scale items and is available here.

Alternatives

None.

Risk and Protective Factors

“Risk factors” are characteristics of the youth or his/her surroundings that increase the likelihood of negative outcomes, while “protective factors” are those that mitigate the impact of risk on positive youth outcomes.1,2 There is evidence that various types of risk and protective factors can influence the effectiveness of mentoring programs for youth.

Drawing on Rhodes’4,5 theory of how youth mentoring achieves its impacts, DuBois et al.3 identified categories of potential moderators of the effectiveness of mentoring programs (i.e., factors that may increase or decrease the observed impact of mentoring program participation on outcomes). These include: youth interpersonal history (e.g., parental maltreatment, delinquency); social competencies (e.g., interpersonal sensitivity, capacity for engaging others); youth’s age; and family and community context (e.g., family relationships, school climate, neighborhood risk). While the evidence base with respect to moderator effects is clearer for some factors than others, there is broad consensus (and empirical evidence) that individual and environmental risk variables interact to influence the overall impacts of mentoring programs.3

Several types of risk and protective factors were selected for inclusion in the Toolkit. In most cases, these factors are not expected to be outcomes of mentoring programs, but rather, factors that could be measured prior to mentoring and tested as moderators of program benefits. The risk and protective factors considered thus far for the Toolkit include two family and community context factors (i.e., family management, neighborhood risk), three interpersonal history factors (i.e., deviant peer affiliation, peer victimization, and trauma exposure), factors that may increase susceptibility for opioid misuse as well as being impacted negatively by others’ opioid misuse, and out-of-school-time (OST) structured activity involvement as a protective influence.i By incorporating assessments of these various risk and protective factors in their evaluations, programs may be able to gain useful information about factors that are most likely to influence the effectiveness of their programs. It is also possible that some of these risk and protective factors may be useful to assess as outcomes themselves, depending on program characteristics and aims (e.g., trauma symptoms).

Family Management

Family risk factors, including family structure (e.g., single-parent household), resources (e.g., socioeconomic status), and relationships (e.g., family conflict/dysfunction, parent-child relationship quality), have been noted as important potential influences on mentoring program effectiveness.3,4,5,6 Because many family context variables may influence the effectiveness of mentoring programs, selecting which measures to use is challenging. To represent family risk, we selected a measure of family management that taps into unclear expectations for behavior, monitoring of behavior, and inconsistent parenting practices (e.g., rewards and punishment). Limit setting and parental monitoring have been linked with other factors that are also hypothesized to moderate program effectiveness (e.g., youth problem behaviors, social competencies) as well as with negative behavioral outcomes such as drug use, violence, and delinquency.1

 

Neighborhood Risk

A lack of neighborhood resources and other neighborhood risk factors (e.g., crime, drug use, and violence) have also been implicated as possible moderators of mentoring program effectiveness.3,4,5,6 Interest in the potential moderating role of such factors appears to stem, in part, from their well-documented contributions to negative youth outcomes. For example, physical deterioration of the neighborhood and high crime rates are associated with higher rates of youth problem behaviors such as juvenile crime and drug use, outcomes which themselves are potential moderators of intervention effectiveness.1 In this initial iteration of the Toolkit, a measure of community disorganization was selected that taps into physical deterioration, crime, and drug sales.

 

Deviant Peer Affiliation

Deviant peer affiliation (DPA), or youths’ involvement with deviant peers, has long been identified as a cause of adjustment problems that include adolescent drug use, high-risk sexual behavior, and violent offenses.7,8,9,10,11 DPA may promote increases in deviant behavior through social modeling or peer pressure and various types of reinforcement that in turn reinforce the youth’s affiliation with deviant peers.12 DPA is also associated with several other aspects of the youth’s interpersonal history that may influence mentoring program effectiveness, such as gang involvement, delinquency, social competence, and peer rejection.3

 

Peer Victimization

Peer victimization refers to the experience of being targeted by hurtful teasing and aggressive behavior (e.g., experiencing bullying13). DuBois et al.3 and Rhodes4,5 identified peer rejection as a potential moderator of mentoring-program effectiveness. DuBois et al.3 note that youth who have experienced peer rejection may enter mentoring relationships with heightened interpersonal sensitivity14 and this may in turn affect the developing relationship and its ability to benefit youth (see also Kanchewa, Yoviene, Schwartz, Herrera, & Rhodes15). As such, peer victimization may be particularly important to assess as a moderator of program effects.

 

OST Structured Activity Involvement

Out-of-school-time (OST) structured activity involvement is characterized by youth involvement in organized activities such as sports and other afterschool and community-based programs or clubs. OST involvement is related to positive youth outcomes such as social competence and may be a protective factor against problem behaviors including gang involvement and delinquency.16,17 Thus, whether a youth begins a mentoring program with this protective factor in place may very well influence the outcomes he or she experiences over the course of his or her involvement in mentoring.

 

Symptoms of Trauma Exposure

Traumatic experiences such as parental abandonment, or experiences of abuse or neglect have been identified as important potential moderators of program effectiveness.3,4,5 Traumatic experiences may be particularly important as mentoring is a relationship-based intervention, and prior negative experiences in significant relationships may influence how youth respond to a program.3

 

Risk Factors for Opioid Misuse and the Negative Impact of Others’ Opioid Misuse

The opioid epidemic in the United States is well-documented and involves significant numbers of youth.18 Compared to other drugs (e.g., alcohol, nicotine), less is known about the factors that influence whether or not a youth will misuse opioids or the severity of youth opioid misuse.19 Efforts are ongoing to identify factors that can reliably predict which youth are at increased risk for misusing prescription opioids, with the goal of informing opioid misuse prevention and intervention efforts (see SAMHSA, 2018, for a review). This section of the Toolkit includes a collection of survey items addressing a range of potential risk factors for opioid misuse among youth, including having previously been prescribed an opioid by a doctor, school grade of first opioid prescription, perceived availability of opioids, having friends who are engaged in various types of substance use, and misperceptions of risk associated with opioid use. To date, there is very little published information on the potential for mentoring relationships to reduce risk for opioid misuse among young persons. However, mentors may be well positioned to provide youth with support and guidance that directly counters some sources of risk (e.g., lack of understanding of potentially harmful consequences of opioid use, such as dependency or addiction). Awareness of a youth’s susceptibility to other risk factors, such as friends engaged in substance use, may furthermore prompt mentors and programs to strengthen potentially counteracting protective factors, such as those addressed by measures in this section of the Toolkit (e.g., OST structured activity involvement).

Equally important are the ways in which youth may be susceptible to the negative effects of opioid misuse on the part of their parents or other family members, as well as other adults and peers in their communities. Assessing these impacts can position mentoring programs to better support youth through enhanced staff and mentor awareness (e.g., training in trauma-informed approaches to care). This type of information also may be used to help catalyze and inform partnerships with other service providers to address the problem of parental substance abuse and, if needed, support youth in out-of-home placements resulting from parental addiction.

It is also important to note that there are media reports of exposure to opioids occurring accidentally through the consumption of other drugs (e.g., methamphetamine) contaminated by or mixed with fentanyl or other opioids, resulting in subsequent opioid misuse or addiction.20,21 There is no definitive research evidence linking accidental opioid exposure to subsequent misuse or addiction, nor is there a validated measure of accidental opioid exposure. However, this topic appears to be receiving growing attention and, as knowledge and tools evolve, merits consideration by both researchers and practitioners.


i Consideration was also given to including one other risk/protective factor—impulsivity and attentiveness. However, it was ultimately decided to not include this factor in this initial version of the Toolkit for three reasons. First, self-control is already considered in the Mental/Emotional Health section of the Toolkit and should be highly correlated with impulsivity. Second, ADHD is likely to be better reported by parents (e.g., based on youth being in treatment). Finally, it can be challenging to find good public domain measures of impulsivity and attentiveness.

  • Cited Literature
    1. Arthur, M. W., Hawkins, J. D., Pollard, J. A., Catalano, R. F., & Baglioni, A. J. J. (2002). Measuring risk and protective factors for substance use, delinquency, and other adolescent problem behaviors: The Communities That Care Youth Survey. Evaluation Review, 26, 575–601. http://dx.doi.org/10.1177/019384102237850
    2. Rutter, M. (1987). Psychosocial resilience and protective mechanisms. American Journal of Orthopsychiatry, 57, 316–331. http://dx.doi.org/10.1111/j.1939-0025.1987.tb03541.x
    3. DuBois, D. L., Portillo, N., Rhodes, J. E., Silverthorn, N., & Valentine, J. C. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 57–91. http://dx.doi.org/10.1177/1529100611414806
    4. Rhodes, J. E. (2002). Stand by me: The risks and rewards of mentoring today’s youth. Cambridge, MA: Harvard University Press.
    5. Rhodes, J. E. (2005). A model of youth mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 30–43). Thousand Oaks, CA: SAGE.
    6. DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30, 157–197. http://dx.doi.org/10.1023/A:1014628810714
    7. Dishion, T. J., & Medici Skaggs, N. (2000). An ecological analysis of monthly “bursts” in early adolescent substance use. Applied Developmental Science, 4, 89–97. http://dx.doi.org/10.1207/S1532480XADS0402_4
    8. Elliott, D. S., Huizinga, D., & Ageton, S. (1985). Explaining delinquency and drug use. Beverly Hills, CA: SAGE.
    9. Rudolph, K. D., Lansford, J. E., Agoston, A. M., Sugimura, N., Schwartz, D., Dodge, K. A., … Bates, J. E. (2014). Peer victimization and social alienation: Predicting deviant peer affiliation in middle school. Child Development, 85, 124–139. http://dx.doi.org/10.1111/cdev.12112
    10. Short, J. F., Jr. (1957). Differential association and delinquency. Social Problems, 4, 233–239. http://dx.doi.org/10.2307/798775
    11. Short, J. F., Jr., & Strodtbeck, F. L. (1965). Group process and gang delinquency. Chicago, IL: University of Chicago Press.
    12. Van Ryzin, M. J., & Dishion, T. J. (2014). Adolescent deviant peer clustering as an amplifying mechanism underlying the progression from early substance use to late adolescent dependence. Journal of Child Psychology and Psychiatry, 55, 1153–1161. http://dx.doi.org/10.1111/jcpp.12211
    13. Desjardins, T., Yeung Thompson, R. S., Sukhawathanakul, P., Leadbeater, B. J., & MacDonald, S. W. S. (2013). Factor structure of the Social Experience Questionnaire across time, sex, and grade among early elementary school children. Psychological Assessment, 25, 1058–1068. http://dx.doi.org/10.1037/a0033006
    14. Downey, G., Lebolt, A., Rincón, C., & Freitas, A. L. (1998). Rejection sensitivity and children’s interpersonal difficulties. Child Development, 69, 1074–1091. http://dx.doi.org/10.2307/1132363
    15. Kanchewa, S. S., Yoviene, L. A., Schwartz, S. E. O., Herrera, C., & Rhodes, J. E. (2016). Relational experiences in school-based mentoring: The mediating role of rejection sensitivity. Youth & Society. Advanced online publication. http://dx.doi.org/10.1177/0044118X16653534
    16. Metzger, A., Crean, H. F., & Forbes-Jones, E. L. (2009). Patterns of organized activity participation in urban, early adolescents: Associations with academic achievement, problem behaviors, and perceived adult support. The Journal of Early Adolescence, 29, 426–442. http://dx.doi.org/10.1177/0272431608322949
    17. Rose-Krasnor, L., Busseri, M. A., Willoughby, T., & Chalmers, H. (2006). Breadth and intensity of youth activity involvement as contexts for positive development. Journal of Youth and Adolescence, 35, 385–399. http://dx.doi.org/10.1007/s10964-006-9037-6
    18. Center for Behavioral Health Statistics and Quality. (2017). 2016 National survey on drug use and health: Detailed tables. Substance Abuse and Mental Health Services Administration, Rockville, MD.
    19. Center for Application of Prevention Technologies (2016). Preventing prescription drug misuse: Understanding who is at risk. Substance Abuse and Mental Health Services Administration, Rockville, MD. https://www.michigan.gov/documents/mdhhs/UnderstandingWhoIsAtRisk_547024_7.pdf
    20. Firth, S. (2018, July 17). Growing array of street drugs now laced with Fentanyl – Physicians, officials spotlight grim trends and possible solutions. MedPage Today. Retrieved from https://www.medpagetoday.com/primarycare/opioids/74071
    21. Bebinger, M. (2019, March 21). Fentanyl-linked deaths: The U.S. opioid epidemic’s third wave begins. Shots: Health News from NPR. Retrieved from https://www.npr.org/sections/health-shots/2019/03/21/704557684/fentanyl-linked-deaths-the-u-s-opioid-epidemics-third-wave-begins

Risk and Protective Factors

Family Management

This measure consists of 8 items. Sample items include: “My parents ask if I’ve gotten my homework done,” “When I am not at home, one of my parents knows where I am and who I am with,” and “The rules in my family are clear.” Youth respond on a 4-point scale: NO!, no, yes, or YES!

Scale

Communities That Care (CTC) Youth Survey — Poor Family Management subscale

What It Measures:

Family management practices characterized by unclear expectations for behavior and poor parental monitoring of behavior.

Intended Age Range

11- to 18-year-olds.

Rationale

The Poor Family Management scale was selected because it is relatively brief and has evidence of reliability and validity for use with males and females and with diverse racial and ethnic groups.

Cautions

One item on the scale, which asks respondents whether they would be caught by parents if they carried a handgun without permission, may be less appropriate for some youth and programs. The scale appears likely to retain its validity with this item removed or refocused (e.g., more broadly asking about carrying a weapon).

Special Administration Information

None.

How to Score

Each item is scored from 4 (NO!) to 1 (YES!). The total score is computed by averaging across all items.
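For programs that score responses in a script or spreadsheet export, the averaging described above can be sketched in Python as follows (a minimal illustration; the function and dictionary names are ours, not part of the measure):

```python
# Item scoring for the Poor Family Management subscale, following the
# 4 (NO!) to 1 (YES!) coding described above.
SCORING = {"NO!": 4, "no": 3, "yes": 2, "YES!": 1}

def score_family_management(answers):
    """Average the 8 item scores; higher averages = poorer family management."""
    if len(answers) != 8:
        raise ValueError("expected 8 responses")
    scores = [SCORING[a] for a in answers]
    return sum(scores) / len(scores)
```

A youth answering "NO!" to every item would receive the maximum score of 4.0.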

How to Interpret Findings

Higher scores reflect youth reports of poorer family management/parental monitoring.

Access and Permissions

The measure is available for non-commercial use with no charge or advance permission required, and the items can be accessed online here (page 8).

Alternatives

None recommended.

  • Cited Literature
    1. Arthur, M. W., Hawkins, J. D., Pollard, J. A., Catalano, R. F., & Baglioni, A. J., Jr. (2002). Measuring risk and protective factors for substance use, delinquency, and other adolescent problem behaviors: The Communities That Care Youth Survey. Evaluation Review, 26, 575–601. http://dx.doi.org/10.1177/0193841X0202600601

Risk and Protective Factors

Deviant Peer Affiliation

The Peer Affiliation scale consists of 4 items. Sample items include: “Percent of friends who misbehave or break rules” and “Percent of friends who experiment with smoking and drugs.” Youth respond on a 5-point scale: Very Few (less than 25%), Some (around 25%), About half (50%), Most (around 75%), or Almost All (more than 75%).

Scale

Peer Affiliation and Social Acceptance (PASA) Measure — Peer Affiliation subscale

What It Measures:

Deviant peer affiliations.

Intended Age Range

12- to 13-year-olds; appears to be most appropriate for young adolescent/middle-school-age populations.

Rationale

This measure was selected for its brevity and its evidence of reliability and validity with males and females and with ethnically diverse early adolescents. Additionally, the measure has well-defined response choices and includes a positive peer affiliation item (i.e., “Percent of friends who are well-behaved in school”).

Cautions

One item on the scale, which asks about the percentage of peers who dress or act like gang members, may be less appropriate for some youth and programs. It is anticipated that the scale would retain its validity and still have adequate reliability with this item removed, if doing so would be appropriate based on the ages and backgrounds of youth served by a program. Also, because youth may not report deviant peer affiliations accurately, programs may want, when possible, to obtain ratings from multiple informants (e.g., parent or teacher ratings) or to combine self-report with direct observation.

Special Administration Information

None.

How to Score

Each item is scored from 1 (Very Few – less than 25%) to 5 (Almost All – more than 75%), with the exception of one reverse-coded item (i.e., “Percent of friends who are well-behaved in school”). The total score is computed by averaging across all items.
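The reverse-coding step is easy to get wrong when scoring by hand. A minimal Python sketch of the procedure above (illustrative only; the function name and the position of the reverse-coded item are assumptions for the example):

```python
def score_peer_affiliation(item_scores, reverse_index=3):
    """Average the 4 Peer Affiliation items, each rated 1-5.

    The positively worded item ("Percent of friends who are well-behaved
    in school") is reverse-coded (1 <-> 5, 2 <-> 4) before averaging;
    reverse_index marks where that item sits in the list (an assumption here).
    """
    scores = list(item_scores)
    scores[reverse_index] = 6 - scores[reverse_index]  # reverse a 1-5 rating
    return sum(scores) / len(scores)
```

A youth rating every deviant-peer item "Almost All" and the well-behaved-peers item "Very Few" would receive the maximum score of 5.0.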

How to Interpret Findings

Higher scores indicate greater deviant peer affiliations.

Access and Permissions

The scale is available for non-commercial use with no charge and can be obtained by contacting Dr. Thomas J. Dishion (dishion@asu.edu). A ready-to-use version of the items in this measure is provided here.

Alternatives

None recommended.

  • Cited Literature
    1. Dishion, T. J., Kim, H., Stormshak, E. A., & O’Neill, M. (2014). A brief measure of peer affiliation and social acceptance (PASA): Validity in an ethnically diverse sample of early adolescents. Journal of Clinical Child & Adolescent Psychology, 43, 601–612. http://dx.doi.org/10.1080/15374416.2013.876641

Risk and Protective Factors

Peer Victimization

This measure consists of 4 items. Sample items include: “Other students picked on me” and “I got hit and pushed by other students.” Response options are Never, 1-2 times, 3-4 times, 5-6 times, or 7 or more times.

Scale

University of Illinois Victimization Scale – Peer Victimization items

What It Measures:

A youth’s level of overt victimization from peers over the last 30 days.

Intended Age Range

8- to 18-year-olds.

Rationale

This scale was selected because it is brief, has evidence of reliability and validity, and provides information on the frequency of peer victimization over a specified period of time.

Cautions

This measure does not assess relational peer victimization, a form of peer victimization aimed at damaging children’s social relationships or reputation. Although overt and relational peer victimization tend to be correlated, available research suggests value in distinguishing between these forms of victimization. There is some evidence, for example, suggesting that levels of overt and relational victimization vary by gender and are related differently to other psychosocial outcomes.

Special Administration Information

Care should be taken to ensure that youth understand that they are reporting on the frequency with which acts of peer victimization happened to them over the last 30 days.

How to Score

Each item is scored on a 5-point scale from 1 (Never) to 5 (7 or more times). A total score is obtained by averaging scores across all items. Item scores have also been used to classify children as victims or non-victims. Children who score 2 (“1-2 times”) or higher on more than 2 items are classified as “victims”; those who do not are classified as “non-victims.”
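For programs computing both the average score and the victim/non-victim classification described above, a minimal Python sketch (the function name is ours; the classification rule follows the "2 or higher on more than 2 items" criterion stated above):

```python
def score_victimization(items):
    """items: the four peer victimization ratings, each 1 (Never) to 5 (7 or more times).

    Returns (average_score, is_victim), where is_victim applies the rule that
    a child scoring 2 ("1-2 times") or higher on more than 2 items is
    classified as a "victim".
    """
    average = sum(items) / len(items)
    is_victim = sum(score >= 2 for score in items) > 2
    return average, is_victim
```

For example, a youth answering "1-2 times" on three items and "Never" on the fourth would be classified as a victim even though the average score is below 2.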

How to Interpret Findings

Based on past research classifying children as victims or non-victims, children whose total score (their average across items) is greater than 2 (corresponding to a response choice of “1-2 times”) are often considered “repeated victims.” The developers of the measure have similarly used a cut-off score of 1.5 to classify children as victims (1.5 or above) or non-victims (lower than 1.5).

Access and Permissions

The University of Illinois Victimization Scale is available for non-commercial use with no charge here (the peer victimization items are items 4, 5, 6, and 7).

Alternatives

The Social Experiences Questionnaire (SEQ) is a good alternative for those interested in measuring relational victimization. The SEQ is a 15-item measure with 3 subscales assessing overt victimization (5 items), relational victimization (5 items), and receipt of prosocial acts (5 items). The measure shows good reliability and validity, and there is support for its use in diverse samples (e.g., German, Chinese, and Mexican-American youth). Additional information on this scale is available here.

Risk and Protective Factors

Out-of-School Time (OST) Structured Activity Involvement

This measure consists of 4 items. The items ask youth about their weekly participation or involvement in structured, adult-supervised OST activities (e.g., a club, program, sports team, lessons, or tutoring/homework help) at different places. Items ask about the youth’s involvement in OST activities at school, a center or activity area in the community, a place of worship, and any other places outside the home. Youth choose from the following options: 0 hours a week, Some time but less than 2 hours a week, 2-5 hours a week, and More than 5 hours a week.

Scale

Out-of-School-Time Structured Activity Involvement

What It Measures:

Youth’s level of participation in structured out-of-school-time (OST) activities.

Intended Age Range

10- to 17-year-olds.

Rationale

This measure was selected based on its relative brevity, appropriate coverage of the breadth of possible OST activities, and anticipated ease of use with youth.

Cautions

None.

Special Administration Information

Before youth respond to these items, they should be provided with the designated instructions, which will ensure that they have a clear understanding of the types of activities being asked about. It will also be necessary to specify the time frame that youth should use in responding to the questions. Differing time frames may be appropriate based on considerations such as the length of the mentoring program in which youth are participating and the time of year during which it operates (e.g., academic year vs. summer). For example, for a semester-long program during the school year, it may be useful to have youth consider their OST activity involvement over a time period of similar duration (e.g., “In a usual week over the past three months, how many hours did you spend…”). The selected time frame should be included as part of the written instructions.

How to Score

Each item is scored on a 4-point scale from 0 (0 hours a week) to 3 (more than 5 hours a week). There are several ways to score these items. An overall involvement score can be calculated based on the average level of involvement across all items, such that higher scores indicate greater overall involvement. A breadth of involvement score can be calculated by counting the number of activity settings (one per item) in which a youth reports at least some level of involvement. An intensity of involvement score also can be calculated as the average of the youth’s ratings across all of the activities in which he or she reports some level of involvement.
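The three scoring approaches above differ only in which items they average or count. A minimal Python sketch (function name is ours; assumes item ratings already coded 0-3 as described):

```python
def ost_scores(items):
    """items: per-setting OST involvement ratings, each 0 (none) to 3 (5+ hrs/week).

    Returns (overall, breadth, intensity):
      overall   - average rating across all items
      breadth   - count of settings with at least some involvement
      intensity - average rating across only those involved settings
    """
    overall = sum(items) / len(items)
    involved = [score for score in items if score > 0]
    breadth = len(involved)
    intensity = sum(involved) / len(involved) if involved else 0.0
    return overall, breadth, intensity
```

Note how the scores diverge: a youth heavily involved in a single setting has low breadth but high intensity, while a youth lightly involved everywhere shows the reverse.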

How to Interpret Findings

Higher scores reflect greater youth involvement in OST activities.

Access and Permissions

This scale is available for non-commercial use with no charge and is provided here.

Alternatives

It also may be of interest to ask youth about their involvement in different types of OST activities (e.g., sports, academic, volunteer). The research cited below illustrates this approach.

  • Cited Literature
    1. This measure was developed specifically for the Measurement Guidance Toolkit. The development of the measure was informed by the following research: Scales, P. C., Benson, P. L., Bartig, K., Streit, K., Moore, K. A., … & Theokas, C. (2006). Keeping America’s Promises to Children and Youth: A Search Institute– Child Trends Report on the Results of the America’s Promise National Telephone Polls of Children, Teenagers, and Parents. Minneapolis, MN: Search Institute. (Available here)

Risk and Protective Factors

Symptoms of Trauma Exposure

The CPSS consists of 26 items that include 2 event items, 17 symptom items, and 7 functional impairment items.

Youth are first asked to write down: (1) their most distressing event; and (2) the length of time that has lapsed since that event occurred. Youth then respond to symptom items on a 4-point scale: “Not at all or only at one time”, “Once a week or less/once in a while”, “2-4 times a week/half the time”, or “5 or more times a week/almost always.” Sample symptom items include: “Having bad dreams or nightmares,” “Trying to avoid activities, people or places that remind you of the traumatic event,” and “Having trouble falling or staying asleep.” Youth then indicate (“Yes” or “No”) whether the symptoms they experienced have gotten in the way of any of 7 areas of life (i.e., led to functional impairment). Sample items/domains include “Relationships with friends,” “Fun and hobby activities,” and “Schoolwork.”

Scale:

Child Post Traumatic Stress Disorder Symptom Scale (CPSS)

What it measures:

Posttraumatic Stress Disorder (PTSD) diagnostic criteria and symptom severity; Trauma exposure.

Intended age range:

8- to 18-year-olds.

Rationale:

There are few sound child measures of trauma exposure, and the CPSS appears to provide a comprehensive, reliable, and valid assessment of trauma exposure and, for those interested, specific characteristic aspects of PTSD (e.g., re-experiencing the event, avoidance, and hyperarousal). Several recent studies provide strong psychometric evidence for males and females and diverse racial and ethnic groups.

Cautions:

A clinical professional or researcher with relevant training should be involved in the use of this measure. This type of support will be essential for interpreting responses and scores on the measure appropriately as well as for ensuring follow-up with individual youth when appropriate. Moreover, an important consideration is the sensitive nature of the questions (e.g., asking youth about an upsetting event). As noted, programs using this measure should be prepared to respond appropriately to content that may be shared (e.g., follow-up and possible referral for mental health treatment). It is also strongly recommended that programs administer the CPSS with a staff member present in view of the sensitive nature of the questions (e.g., do not mail out for independent completion). In cases where brevity is a top priority, users may prefer to use the symptom scale alone. Several studies support adequate to excellent reliability of this scale; however, evidence regarding its predictive validity (i.e., how well it predicts other key outcomes) is less clear.

Special administration information:

The full scale requires about 10 to 15 minutes for completion.

How to score:

Each symptom item is scored on a 4-point scale (0 = Not at all or only at one time; 1 = Once a week or less/once in a while; 2 = 2-4 times a week/half the time; 3 = 5 or more times a week/almost always), yielding a total symptom-severity score ranging from 0 to 51. Each functional impairment item answered “Yes” is scored 1, yielding a total severity-of-impairment score ranging from 0 to 7. Measure developers recommend viewing a score of 11 or higher as suggestive of the presence of PTSD; however, there is ongoing debate regarding the most appropriate cut score, with recommendations ranging from 11 to 20.
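The two CPSS totals are simple sums, which can be sketched in Python as follows (illustrative only; the function name is ours, and scoring should still be reviewed by a clinically trained professional as noted in the Cautions above):

```python
def score_cpss(symptom_items, impairment_items):
    """symptom_items: 17 ratings, each 0-3 -> symptom severity, 0 to 51.
    impairment_items: 7 Yes/No responses (True = impaired) -> impairment, 0 to 7.
    """
    if len(symptom_items) != 17 or len(impairment_items) != 7:
        raise ValueError("expected 17 symptom items and 7 impairment items")
    severity = sum(symptom_items)
    impairment = sum(bool(answer) for answer in impairment_items)
    return severity, impairment
```

Comparing the severity total against a chosen cut score (e.g., 11) is then a separate, clinically supervised step.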

How to interpret findings:

Higher scores reflect greater PTSD symptom severity and functional impairment.

Access and permissions:

The measure is available for non-commercial use with no charge and can be accessed online here. Translations are available in Spanish, Korean, Russian, Armenian, Chinese, German, Hebrew, Norwegian, Polish, and Swedish.

Alternatives:

Those interested in a briefer measure of trauma exposure may want to consider the 4-item Post-Traumatic Stress (PTS) subscale of the SCARED-Brief Assessment of Anxiety and PTS. Preliminary evidence suggests promising psychometric properties of this scale, and it does not require specialized clinical training for administration or interpretation. However, the measure does not yet have an adequate base of research support to be fully confident in its reliability and validity.

  • Cited Literature
    1. Foa, E. B., Johnson, K. M., Feeny, N. C., & Treadwell, K. R. H. (2001). The Child PTSD Symptom Scale: A preliminary examination of its psychometric properties. Journal of Clinical Child Psychology, 30, 376–384. http://dx.doi.org/10.1207/S15374424JCCP3003_9

Risk and Protective Factors

Risk Factors for Opioid Misuse and the Negative Impact of Others’ Opioid Misuse

Survey items include doctor prescribed opioid use (1 item) and, if yes, age of first opioid prescription (1 item), perceived availability of opioids (2 items), friends’ substance use, including opioids (1 item), perceived risk of harm from the use of opioids (2 items), awareness of opioid misuse by others (1 item), and negative impact of others’ opioid misuse (1 item).

Sample items include “How difficult or easy would it be for you to get some heroin, if you wanted some?” and “How much do you think people risk harming themselves (physically or in other ways), if they try opioid drugs other than heroin once or twice?” Response scales vary across items (see measure).

Scale

Risk Factors for Opioid Misuse and the Negative Impact of Others’ Opioid Misuse

What it measures:

Items address a range of potential risk factors for opioid use among youth and negative impact on youth of others’ opioid misuse.

Intended Age Range

Youth aged 12 and older.

Rationale

These items were selected due to their relative simplicity and brevity, their use in large-scale surveys of drug use (e.g., Monitoring the Future, National Survey on Drug Use and Health), and their appropriateness for use with adolescents.

Cautions

This survey is not an exhaustive list of factors that may influence risk that a youth will misuse opioids. Youth mentoring programs specifically targeting opioid risk or opioid misuse should consider conducting a more comprehensive assessment of risk (e.g., psychological distress, other substance use). Items addressing opioid misuse by others and its potential impact on youth were developed for purposes of this measure and thus have not been field tested or assessed for reliability or validity.

Special Administration Information

None.

How to Score

There is no total or aggregate score created. Users of the measure should examine responses to individual items to facilitate understanding of risk reported by individual youth or groups of youth.

How to Interpret Findings

Responses indicating doctor prescription of opioids at an earlier age, greater access to opioids, less perceived risk of harm associated with opioid use, and opioid use by others (e.g., friends, parents) may indicate greater risk for opioid misuse by the youth. Reports of being negatively impacted by others’ opioid use, furthermore, indicate a potential for youth to be harmed by opioid use occurring in their family, peer, and community environments.

Access and Permissions

A copy of the measure can be found here. The measure is available for non-commercial use with no charge.

Alternatives

The Screener and Opioid Assessment for Patients with Pain–Revised (SOAPP-R) is a measure of risk for opioid medication misuse (Butler, Fernandez, Benoit, et al., 2008) and may be a good alternative to the current measure for use with young adults or adults. The 24 items are rated from 0 (“never”) to 4 (“very often”) by respondents based on frequency of occurrence. Total scores range from 0 to 96. Scores ≥ 18 indicate high risk for opioid misuse. It should be kept in mind that this measure was validated with pain patients and thus broader applicability to other populations remains to be established. A copy of the measure can be found here.
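The SOAPP-R total and its published cut-off can be sketched in Python as follows (a minimal illustration; the function name is ours, and the ≥ 18 threshold is the cut-off reported above for pain patients, not a general-population norm):

```python
def soapp_r_total(item_ratings):
    """item_ratings: 24 SOAPP-R ratings, each 0 ("never") to 4 ("very often").

    Returns (total, high_risk), where high_risk applies the published
    cut-off of a total score of 18 or above.
    """
    if len(item_ratings) != 24:
        raise ValueError("expected 24 item ratings")
    total = sum(item_ratings)
    return total, total >= 18
```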

  • Cited Literature
    1. Arthur, M. W., Hawkins, J. D., Pollard, J. A., Catalano, R. F., & Baglioni, A. J., Jr. (2002). Measuring risk and protective factors for substance use, delinquency, and other adolescent problem behaviors: The Communities That Care Youth Survey. Evaluation Review, 26, 575–601. http://dx.doi.org/10.1177/0193841X0202600601
    2. Center for Behavioral Health Statistics and Quality. (2018). 2019 National Survey on Drug Use and Health (NSDUH): CAI Specifications for Programming (English Version). Substance Abuse and Mental Health Services Administration, Rockville, MD. Retrieved from https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHmrbCAISpecs2019.pdf
    3. Glaser, R. R., Van Horn, M. L., Arthur, M. W., Hawkins, J. D., & Catalano, R. F. (2005). Measurement properties of the Communities That Care® youth survey across demographic groups. Journal of Quantitative Criminology, 21, 73–102. http://dx.doi.org/10.1007/s10940-004-1788-1
    4. Kecojevic, A., Wong, C. F., Schrager, S. M., Silva, K., Bloom, J. J., Iverson, E., & Lankenau, S. E. (2012). Initiation into prescription drug misuse: Differences between lesbian, gay, bisexual, transgender (LGBT) and heterosexual high-risk young adults in Los Angeles and New York. Addictive Behaviors, 37, 1289–1293. http://dx.doi.org/10.1016/j.addbeh.2012.06.006
    5. Miech, R. A., Johnston, L. D., O’Malley, P. M., Bachman, J. G., Schulenberg, J. E., & Patrick, M. E. (2019). Monitoring the Future national survey results on drug use, 1975–2018: Volume I, Secondary school students. Ann Arbor: Institute for Social Research, The University of Michigan. Available at http://monitoringthefuture.org/pubs.html#monographs

Evaluation Guidance and Resources

Identifying valid and reliable measures for assessing targeted outcomes is only one small part of the work of capturing a mentoring program’s impact. There are many other considerations that need to be addressed if programs are to get the most out of the measures recommended in this Toolkit. Please see the links below for additional information on conducting meaningful, practical program evaluations and where you can get help in doing this for your program.

Evaluation Guidance and Resources

Advice for Designing and Administering Evaluation Tools

Below you can find our Research Board’s answers to some of the most common questions asked about measuring outcomes and using instruments like the ones found in this Toolkit. Don’t see your question here? Please email us and we can offer some guidance and perhaps add your question to the list!

  • What’s a reasonable number of questions for a youth survey?

    One of the biggest challenges faced by youth mentoring programs when designing a comprehensive survey for participants is limiting the questions to a reasonable number. This is especially true when surveying youth participants, as they may struggle to provide accurate information over a lengthy survey that tests their patience and energy.

    While there is no magic number that serves as a maximum, in general, you are likely to want the survey to be something youth could complete in a relatively short period of time (20 minutes or so). You may consider breaking up a longer survey over multiple administration points, although that also increases the likelihood of getting incomplete responses from more participants. Yes/No, True/False, and Likert-style ratings (e.g., noting the extent to which you agree or disagree with something on a 5-point scale) generally will allow for more total questions to be asked than open-ended questions that require the writing of detailed thoughts.

    Youth participants are also more likely to provide robust and accurate information when the questions are relevant to their time in the program, use appropriate language for their reading levels, and are sequenced in a way that flows logically and makes it easy for them to think about their response to each in context. If you have doubts about the length or content of your survey, consider testing it with a small group of youth before finalizing it to get a sense for how quickly and thoroughly they can complete the items and if there are any words or questions that are difficult for them.

  • Most youth will respond well to either pencil and paper or (for older youth) online surveys. In either case, it may be appropriate to have questions read aloud for younger youth and those who do not have strong reading skills. This can serve both to enhance the quality of the data and reduce the time and effort required on the part of the youth to complete the survey. Ideally, youth will be administered surveys at the program site or in another location with the oversight of program staff. If this is not possible, care should be taken to ensure that youth are in a quiet and private location when completing the survey, without the close presence of a parent or their mentor. Verbal responses are great for qualitative evaluations or evaluations that are focused on program implementation, as they can contain nuanced information that provides rich feedback about the program’s services. But responding to an outcome survey verbally introduces a host of concerns (e.g., that the question asker is influencing the responses given, that youth may not want to respond honestly to some types of questions) and should only be used when trying to solicit responses from very young children who might struggle with a written or online survey.

  • The surveys and scales offered in this Toolkit were selected by the NMRC Research Board for their general appropriateness with mentoring program goals and outcomes. But, these are by no means the only scales available to assess the outcomes (and risk and protective factors) involved. You may find that our recommended measures are not quite what you are looking for in terms of reading level/language or the nuances of how they represent a particular outcome (e.g., examining changes in attitudes rather than changes in behavior).

    For many of the measures recommended here, links are provided to alternative measures that may be closer to what you are looking for. However, we strongly caution programs about using truly “homegrown” questions or scales that have not been tested for validity (i.e., that the scale truly measures what it’s intended to measure) and reliability (i.e., it measures these things consistently across testings or situations). In rare circumstances, a program may have unique needs that warrant the development and testing of a brand new set of survey questions. But chances are that an existing set of questions can meet your needs. If you don’t find what you are looking for in this Toolkit, remember that you can always request free, personalized technical assistance through the NMRC to get help with exactly this kind of instrument identification.

  • In a couple of cases, it is noted that you may want to alter a recommended measure for your own use. In most cases, however, we recommend you use the scale exactly as provided. You will make the strongest case for your program if you use pre-existing sets of questions in their entirety. Typically—especially when asking youth about attitudes—this means asking several similar questions, rather than only one or two. Picking a subset of items out of the recommended scales or changing the wording of these items—even very small changes—may not yield valid and reliable measurement of the outcome (or risk or protective factor) in which you are interested, and you may not be able to compare your findings with those of other programs using the original instrument.

    You can, in some instances, make small edits to the wording of a question to make it more age appropriate, but this should be undertaken only with the guidance of an experienced evaluator. If you find yourself wanting to substantially rewrite a set of questions, you may be better off identifying a different, more appropriate scale.

  • Overarching principles for good survey design, several of which are addressed above, include:

    • Keep a close eye on overall length, especially for youth surveys.
    • Make sure that all the questions use age-appropriate and culturally-relevant language, concepts, and response options, especially if drawn from different sources. The scales included in this Toolkit were selected with these considerations in mind.
    • Try to group questions on a given topic together and consider grouping questions with similar response options together.
    • Ensure that the survey has a logical flow to it and that the major sections are sequenced in a way that allows for efficient answering.
    • Consider adding brief “introductory” sections that orient youth to a new topic or set of response options (e.g., moving from “true/false” to “agree…disagree”).
    • Consider placing scales that ask about more personally sensitive topics (e.g., depressive symptoms or delinquent behavior) later in the survey; confronting youth with these topics right at the start may reduce their comfort in responding to these types of questions.

    Also consider the following when developing your survey:

    Make sure you are selecting outcomes that fit precisely with your program’s logic model and/or theory of change. While it can be tempting to cast a wide net when looking for outcomes from your program’s good work, remember that every set of questions added to a survey increases the overall burden on respondents and may decrease both the volume and accuracy of the information you collect. Be as targeted in your surveying as possible, sticking to the outcomes your logic model says are tied most closely to your program vision, and perhaps try to uncover additional rich information about your program via exit interviews, focus groups, or other qualitative approaches.

    Don’t forget about the time investment for your staff. The more questions asked, the more data will need to be stored, analyzed, and reported on. Even if you have hired an external evaluator, your staff is still likely to have a role in these types of tasks (see the Key Evaluation Considerations page for more information about evaluation roles and staffing).

    Take care to ask personal or sensitive questions only when they are critical to your program’s goals and you have considered the ramifications. Many mentoring programs address serious youth needs and circumstances and may need to demonstrate real progress on these challenges to accurately capture the good they are doing. But asking youth about highly personal and potentially painful topics also carries with it the potential for youth to experience negative feelings both during and following the completion of a survey. It also can place an ethical requirement on programs to follow up appropriately depending on the information that a youth shares. Youth should always be informed that they have the option of not responding to any question on a survey or providing the information in another way if they feel uncomfortable. Personal and sensitive questions are also good examples of ones that are likely best placed toward the end of surveys.

    Consider how you can gather multiple opinions on the same outcomes. While most programs emphasize their youth surveys in efforts to get direct feedback from mentees about their gains in the program, additional perspectives from parents, mentors, teachers, and other caring adults or stakeholders in the relationship can be invaluable. As you are adding sets of questions to your youth survey, think about which topics might be meaningful to ask one or more other parties about. Their responses can often confirm or help to explain youth responses, providing added rigor and useful nuance to evaluation findings. For some topics, they may even be better respondents than youth. Future versions of this Toolkit will incorporate measures from sources other than youth in order to help programs take advantage of their potential benefits.

  • As a general rule, the scales recommended in this Toolkit are intended to be used in a pre-post design (ideally with a control/comparison group)—administering the scales before the youth begins receiving program services, and then again later—except for the Risk and Protective Factors, which are likely to be administered only at baseline to help clarify which types of children seem to benefit most from the program’s services. Outcome evaluation designs that do not establish a baseline starting point of where their participants are before they get a mentor or that lack a thoughtfully-identified comparison group of non-mentored youth have greatly reduced ability to attribute any subsequent “outcomes” to the program itself or to understand just how much the mentored youth changed compared to where they started.

  • As noted above, without a valid comparison group of youth, your evaluation is unlikely to say much about how big an impact your program has made. Having a comparative group of youth, ideally as similar as possible to those who were mentored in your program, can provide much needed context for findings. For example, a mentoring program serving middle schoolers might find that mentored youth are experimenting with drugs and alcohol more than they were at the beginning of the year. But, a comparison group might show even greater increased drug and alcohol use—critical context that might otherwise have led people to believe incorrectly that the program was, in fact, harmful. In another example, a program might find that mentored youth are faring much better in reading scores, but might take undue credit for that improvement if a comparison group would have shown that all students improved because of other factors (e.g., a new reading curriculum), not because of the work of the mentors.

    Smaller programs in particular may face challenges in assembling a reasonable comparison group. Programs, for example, may have ethical concerns about denying mentoring to some youth to create a “comparison” group, or they may simply operate in too small a setting to find a reasonable group of youth against whom to compare mentees. Programs are encouraged to be creative in how they find groups of comparable youth: working with similar youth at another school or in another city as a comparison group, using the incoming wave of mentees as the comparison group for the currently mentored youth, etc. These designs are beyond the scope of this Toolkit, but are options to consider with the guidance of a professional evaluator or through technical assistance requested through the NMRC. It should also be noted that if your program wants to know with the greatest confidence and credibility that it is making meaningful change in the youth it serves, an experimental design is likely to be necessary, in which youth are randomly assigned to either the program or a non-program control group.

  • In general, the scales in this Toolkit are valid based on the questions being answered in their entirety. If youth are skipping questions, it can greatly impact the accuracy of the measures or prevent you from determining an overall score on the measure altogether. So when possible, encourage mentees to honestly and accurately answer all questions in a particular measure, while still making clear their opportunity to skip any question. There are, however, strategies available to score measures when some of the items have been skipped; these will be addressed as part of the further development of this Toolkit.

    What if youth in our program are slightly younger or older than the recommended ages for a scale?

  • Can we stretch the age limits any? Within reason, scales may be able to be used with youth who are slightly younger than the lower end of the measure’s intended age range. This should only be considered when it is confirmed that such youth can still read and understand the questions. Even then, it should be kept in mind that younger-than-intended youth may find it difficult to understand and discriminate reliably among the response options that accompany each question. In general, it is easier for older youth to answer questions intended for a younger audience. Yet, it should be considered that questions designed for younger mentees may be asking about attitudes or behaviors that are developmentally inappropriate for older youth. In sum, to ensure the most accurate measurement, it is always best to use instruments with youth who are within the intended age range.

  • Yes, and in some instances they are likely to be much more accurate than asking parents, mentors, teachers, or other adults about the outcome (or risk or protective factor) as exhibited or experienced by the youth. In general, younger mentees may struggle to conceptualize their internal feelings, beliefs, and even actions. Older youth may be more likely to falsify answers, either to hide things adults may disapprove of or to tell an adult what they think they want to hear. But in general, self-reports are a useful and well-accepted way of assessing attitudes, values, perceptions, and behaviors. One tip for helping to avoid bias in youths’ responses is to make sure that they know how their answers will be used—for example, that their answers will be used only when combined with other mentees for the purposes of your evaluation. Also, make sure that youth understand with whom their responses will and will not be shared, keeping in mind that promises of confidentiality may not extend to portions of surveys that ask about actions that may indicate self-harm or a potential for harm to others. The bottom line is that you should always tell youth exactly how you will and will not use their responses and who will see them, before they complete their survey.

  • The purpose of an IRB review is to ensure that sensitive information about individuals participating in a research study is protected and potential harm to them is minimized. For most nonprofit mentoring programs, the purpose of collecting survey data is to inform internal program improvements or to report performance measures to a funding agency. If data are being collected for one of these purposes, this is probably not considered “research” (for additional guidance, see these helpful resources developed by the Department of Justice or Child Trends). But keep in mind that federally funded initiatives that include research activities, or research being conducted by external evaluators, especially those affiliated with a higher education institution, may be required to go through IRB approval before surveying youth or families, so be sure to plan for what can sometimes be a lengthy delay. And while they may not have formal IRBs, many schools or school districts can have strict policies regarding surveying or collecting other data from students (tribal governments may also have similar policies). So make sure that you have explored all necessary protocols and, if you are receiving grant funding, discussed your plans with grant officials before collecting data from your program participants.

    Even if you are not required to undergo an IRB review, it is still a good idea to try and protect the sensitive information that the youth and families may provide you in surveys. Some recommendations for protecting this information include: collecting anonymous surveys, removing identifying information from surveys, ensuring that surveys are secured in locked files, and working with a data information management specialist in designing protected data files.

  • How can I help them understand the meaning of the questions without influencing their answers? In general, providing students with additional information or guidance can taint your results or change the accuracy of your data as even simple explanations or supports can make the survey experience different for some mentees. However, if you do need to help a mentee understand a concept, read a challenging word, or clarify the intent of a question, make sure that you provide the same information to all of the mentees taking the survey (e.g., you might want to create a survey “dictionary” that defines difficult words for youth and provide it to all youth who take your survey). This is more easily done in group settings where you can explain a concept to everyone at once. But generally, program staff should avoid providing much help and focus their energy on selecting an appropriate instrument that will be easy for their mentees to understand and complete on their own.

  • The requirements of government or private organizations funding mentoring programs vary; however, many include the collection and reporting of performance measures. These survey resources may be able to assist with the data collection for these purposes, but you should carefully review the funder’s requirements.

    For OJJDP-funded projects, additional information on grant performance measures can be found on OJJDP’s Performance Measures Webpage.

  • National Mentoring Resource Center Research Board (Bowers, E., DuBois, D., Elledge, C., Hawkins, S., Herrera, C., Neblett, E.) with Garringer, M., & Alem, F. (2016). Measurement guidance toolkit for mentoring programs. Washington, DC: Office of Juvenile Justice and Delinquency Prevention National Mentoring Resource Center. Available at Measurement Guidance Toolkit

    Please note: This project was supported by Grant No. 2013-JU-FX-K001 awarded by the Office of Juvenile Justice and Delinquency Prevention, Office of Justice Programs, U.S. Department of Justice. Points of view or opinions in this document are those of the author and do not necessarily represent the official position or policies of the U.S. Department of Justice.

Evaluation Guidance and Resources

Key Evaluation Considerations

There are several considerations that should be kept in mind when using the instruments recommended in this Toolkit as part of an overall formal evaluation of a mentoring program’s effectiveness:
  1. Ensure fidelity to a program’s design, including the expected roles and behaviors of mentors, as well as the quality and duration of the mentoring relationships established, before investing any resources in impact evaluation.

    Simply put, make sure that a mentoring program is both being implemented as intended and fostering the desired types of mentoring relationships before trying to determine whether its services are effective in benefiting participating youth. Skipping this step is unfair to mentors, youth, and other stakeholders who deserve an accurate assessment of the ultimate outcomes of their efforts.

  2. Always evaluate impact in the context of a relevant and plausible counterfactual (i.e., what youth outcomes would be in the absence of program involvement).

    This can be a challenge for many mentoring programs, especially ones that operate in remote communities or within small populations. But there are many options for addressing this concern and the NMRC can provide technical assistance to mentoring programs in helping them determine how to best find a comparison or control group of youth.

  3. Always use a well-developed program logic model to guide both process and impact evaluation activities, including especially the selection of what to measure.

    Programs may want to tightly focus their evaluation efforts on the outcomes that they are likely to achieve based on program activities and areas of emphasis. This can save time, energy, and money when it comes to conducting evaluation activities and is likely to increase the chances of finding meaningful positive impacts.

  4. Always assess targeted youth outcomes using well-validated measures.

    This is, in essence, why this Toolkit was developed. An unproven instrument is not a good choice for “proving” anything about your program’s results.

  5. Prioritize assessment of more proximal (i.e., initial and thus relatively immediate) anticipated youth outcomes over those that are more distal (i.e., emerging over relatively extended periods of time and likely to be contingent on attainment of proximal outcomes).

    Many mentoring programs are designed to help youth grow in ways that set the stage for eventual “big” outcomes like graduation, entering the workforce, or overcoming a major life challenge. But those outcomes can often be elusive and are subject to a number of forces outside of a program’s control. Start by measuring whether your program is effectively preparing youth in subtler ways for those big goals ahead.

  6. Always allow enough time for targeted youth outcomes to realistically be influenced by program participation.

    While shorter-term mentoring models have shown some ability to be impactful for youth, looking for big changes early in a mentoring relationship will usually not be realistic. The result of such premature measurement can be that a program seems like it’s less effective than it may actually be.

  7. Always collect youth outcome data from all participants, not only those who have desired amounts of program involvement (e.g., mentoring relationships lasting a full school year).

    Part of assessing the value of a program is determining whether it can actually deliver effective and impactful services to all the young people whom it sets out to serve. A truly accurate picture of what a program has achieved requires that evaluations include outcome data from the youth who quit, left, or otherwise could not meet the ideal level of participation.

  8. Always have individuals with formal evaluation training and experience involved in designing, conducting, and analyzing and reporting the results of a mentoring program evaluation.

    The training and skill set needed to effectively design and lead an evaluation may be available internally within some mentoring programs. For most, however, there will likely be need for external assistance. Indeed, as the level of “evidence” desired increases (e.g., persuasive evidence of impact on youth outcomes), the more complicated and technically demanding the required evaluation activities (e.g., data collection and analysis) are likely to be. So make sure your program has access to the skills it needs to provide stakeholders with accurate and compelling evaluation results.

  9. Test for differences in program implementation fidelity, mentoring relationship duration/quality, and effects on youth outcomes.

    Many factors determine whether a program is implemented as planned, establishes and sustains high-quality mentoring relationships, and achieves meaningful outcomes. These include the backgrounds and characteristics of the youth served, the skill and experience levels of mentors and staff, and features of the program’s design that may vary over time or across settings. Examining as many of these potential contributing factors as possible within an evaluation will help to paint a more accurate and nuanced picture of why a program did (or did not) achieve its goals.

  10. Always evaluate potential harmful effects of program participation on youth (e.g., adverse experiences with mentors).

    It can be hard to think of mentoring as a negative experience. Yet, the reality is that sometimes mentoring relationships (like all relationships) can be challenging or even harmful experiences. Make sure your program fulfills its ethical obligations, in part, by at least “doing no harm” to the youth served.

  11. Recognize the limitations of what it is possible to do in an evaluation.

    While evaluations of all types can yield valuable and actionable information for programs, they must be done well whatever their purpose. Consider, for example, impact evaluations in which the aim is to assess the effects of program participation on the outcomes of youth served against a comparison group. Such evaluations are invariably highly challenging to implement and can be expected to necessitate considerable investments of time not only from program staff, but also from persons with advanced training in program evaluation and statistical analysis. Likewise, smaller programs may face additional challenges in generating enough statistical “power” to draw firm conclusions about their effects due to their small numbers of participants. Programs (big or small) can also face challenges around creating a relevant control or comparison group. This may especially be the case for so-called randomized control designs, given that these may require denying, or at least delaying, mentoring opportunities to some youth, which has moral, ethical, and sometimes funding implications. With such considerations in mind, programs should take great care in choosing evaluation aims and approaches that best fit their size and resources.

  12. Report evaluation findings accurately and honestly.

    It is an understandable temptation for mentoring programs to present the results of evaluations of their services in the most favorable light possible, or to underreport findings that seem to question their efficacy. But even poor or mediocre results are a positive in that they provide the needed information and impetus to improve a program’s services. And, even more importantly, stakeholders deserve an honest accounting of the program’s successes and failures. Their investments of time, energy, and other resources (e.g., funding) should be honored with accurate reporting and the use of evaluation findings as a foundation for program improvement.

  • Tips for Working with an External Evaluator

    One of the biggest decisions around program evaluation is whether to work with an external evaluator or to try and collect, interpret, and report on program data “in-house” using existing staff (i.e., an “internal” evaluation). A good starting point for making this decision is to take a look at the tasks associated with doing a quality evaluation and see how many of these your staff feels they can reasonably handle themselves given their existing job duties and skill levels:

    Tasks often required for both internal and external evaluations:

    • Train staff on data collection procedures;
    • Set up systems to collect, store and organize key information;
    • Train staff on the use of these systems;
    • Regular, consistent staff use of these systems (e.g., inputting data);
    • Staff administer assessments (e.g., surveys) to participants and/or encourage their completion.

    Additional tasks required for internal evaluations:

    • Design the evaluation;
    • For strong outcome evaluations, identify and collect data for a comparison group;
    • Identify and, as needed, develop tools to assess outcomes of interest;
    • Track administration of these tools;
    • Compile data from systems and tools used to collect information;
    • Statistically analyze the data to answer questions of interest;
    • Summarize the findings;
    • Develop a communications strategy for dissemination.

    If you do decide to work with an external evaluator, picking the right one can be surprisingly complicated. The following tips can help you find an external evaluator that’s right for your needs:

    • Start by studying other evaluations conducted by this evaluator (look at tone, focus, and accessibility, particularly to the audiences you are trying to reach). You’ll also want to ask several questions of potential evaluators:
      • How much input will you have on the measurement tools that are used?
      • How long will the study take and what will be required of your participants?
      • Will the study they have in mind answer the questions you want/need answered?
      • When will findings be shared with you?
      • How much staff time will you need to contribute and when?
      • Will results be disseminated more broadly or can they be solely for your internal use? (Most evaluators, especially if they are low or no cost, will only work on the study if they can disseminate the findings more broadly.)
      • Who will be working with you on the project (who is on the study team)? And how much time will each person on the team have on the project?
      • What kind of input will the partner allow in any reports that are disseminated on the study? (To remain “external”, most evaluators will have the final say on any formal evaluation reports that are prepared and disseminated.)
      • What will the final report include? Who will it be shared with?
    • Once you set up a relationship with them, make sure to create an MOU that outlines:
      • Specific roles for both your program and the evaluator—how much time (and what roles) your staff will need to devote to the study;
      • How much time will be required from your program participants;
      • Cost (Is it a “fixed” cost budget where you pay a set amount no matter what, or a budget that may increase or decrease depending on what’s needed to complete the study?);
      • What kind of products will result from the study (reports, briefs, etc.);
      • How much input you will have in any report that gets written (e.g., how much time you will be given to review any products, what kind of input you will be able to have);
      • What questions will be answered in the evaluation;
      • Timeline for the project;
      • Who will own the data; and
      • Who will be able to use the data going forward.
  • Costs for an external evaluation vary widely depending on who you partner with and the questions you want answered. Process evaluations can be less expensive because much of the required data can come directly from your program. But they may require more staff time to set up systems to allow an evaluator to assess your program’s activities. Outcome evaluations may not require as many systems to be in place, but will require more of your participants’ time and, in some cases, staff time to collect some or all of the data. They are also very time-intensive for evaluators, so they are typically more expensive.

    You’ll need to budget staff time for:

    • Regular meetings with the evaluator;
    • Training staff in study consent and data collection procedures;
    • Setting up systems (spreadsheets, databases) for data collection;
    • Training staff in the use of these systems;
    • Regular staff use of these systems (e.g., inputting data regularly);
    • Reviewing tools selected or developed;
    • Administration of these tools (e.g., administering surveys);
    • Tracking administration of tools (i.e., who has been assessed and when);
    • Reviewing reports and other communications about the project;
    • (in some cases) Administering surveys to participants, including potentially those who are no longer attending your program.


Selected Reading and Resources

The following resources can help mentoring programs both to design good overall evaluations and to build strategies for ensuring that surveys are successful in measuring youth outcomes accurately.

General Evaluation


Survey Design/Administration


Logic Models and Theories of Change


Resources on Data Sharing with Schools


Training and Technical Assistance on Program Evaluation

The NMRC offers mentoring programs a dedicated training on the topic of program evaluation that can be requested via the TA Portal. This training covers the purposes and types of evaluation to consider, determining which outcomes to measure based on a program’s logic model, and planning steps for implementing an evaluation with or without external help. NMRC technical assistance providers are also available for one-on-one coaching on the topic.


Glossary

This resource provides explanations of the key terms encountered when exploring the Measurement Guidance Toolkit.

Association

A measure of the relation between two variables (or factors). If the values of one variable increase as the values of another increase, then the two variables have a positive association. If the values of one variable decrease as the values of another increase, the two variables have a negative association. An association does not necessarily indicate that one variable causes the other. It could be, for example, that two variables are associated because they are both influenced by another, third variable, and thus tend to increase or decrease together for this reason. Also see Correlation.


Biomarker

A measurable substance in an organism that indicates biological processes, pathogenic processes, or pharmacologic responses. It refers to indications of an objective physical state, as opposed to symptoms, which are limited to indications of health or illness as perceived by persons themselves.


Comparison group

A group of people that does not receive the program being studied and that is compared to a group of program recipients on measures of interest. Comparison groups differ from “control groups” in that participants are not necessarily assigned randomly to be in the comparison group or the group receiving the program. Also see Control group.


Condition

A variation in services received as part of an evaluation. In a study that examines the difference between youth who receive an intervention (i.e., “intervention group”) and those who do not (i.e., “control” or “comparison group”), each of these groups would represent a condition in the study.


Continuous score

A score that can take on a large number of possible values within a particular range. In comparison, discrete scores are those that can take on only a limited number of possible values. For example, a question that asks a youth whether he or she engages in a behavior or not would generate discrete values (e.g., “yes” or “no”). The scoring for some of the scales provided in this toolkit assumes that the score generated for a respondent is continuous and thus that a youth responding to the scale can score anywhere between the minimum and maximum values. For example, a scale with 8 questions, each scored from 0 to 4, would have a minimum value of 0 and a maximum value of 32. A youth responding to the questions could have a score anywhere within that range. The dividing line between whether a score is best regarded as continuous or discrete is not always clear-cut and often may be informed by different types of statistical analyses conducted as part of the scale validation process.
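As a concrete illustration of the 8-item example above, a continuous total score is simply the sum of the item responses. The sketch below is illustrative only (the function name and validation checks are ours, not part of any Toolkit scale):

```python
def scale_score(responses, n_items=8, item_min=0, item_max=4):
    """Sum item responses into a continuous total score.

    With 8 items each scored 0-4, the possible range is 0 to 32,
    and a respondent can fall anywhere within that range.
    """
    if len(responses) != n_items:
        raise ValueError("expected a response to all %d items" % n_items)
    if any(r < item_min or r > item_max for r in responses):
        raise ValueError("response outside the allowed range")
    return sum(responses)

print(scale_score([3, 2, 4, 1, 0, 3, 2, 4]))  # 19
```

Note that the validation step matters in practice: as discussed elsewhere in this Toolkit, skipped items can prevent you from computing an overall score at all.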


Control group; Randomly assigned control group

A group that had the chance of being offered an intervention, but was randomly selected to not receive it. The data collected for this group are intended to represent what would have been the case for the intervention group had the youth in this group not been offered the program. Also see Comparison group.


Control statistically

Reducing the potential effect of a variable or set of variables (or factors) on an association between two other variables. For example, when looking at the association between mentoring and high school graduation, other variables that could affect graduation (e.g., academic performance, parent education, school connectedness) could be held constant to ensure that their influence on high school graduation does not distort or bias conclusions that are made about the potential contributions of mentoring to this outcome.


Correlation

A statistical measure that indicates the degree to which two or more variables are associated with each other. A positive correlation indicates that as one variable increases (or decreases), the other does as well; a negative correlation indicates that as one variable increases, the other decreases (or vice versa). The correlation between two variables can be anywhere from -1.0 (perfect negative correlation) to 1.0 (perfect positive correlation). A correlation does not necessarily indicate that one variable causes the other. It could be, for example, that two variables are correlated because they are both influenced by another, third variable, and thus tend to increase or decrease together for this reason. Also see Association.
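The definition above can be made concrete with a small Pearson correlation calculation: the covariance of two variables divided by the product of their standard deviations. This is a minimal sketch with made-up numbers; real evaluations would typically use statistical software:

```python
import statistics


def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of standard deviations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfect positive correlation)
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0 (perfect negative correlation)
```

As the definition notes, even a perfect correlation says nothing by itself about whether one variable causes the other.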


Cut-off score

An established score that categorizes responses on a particular outcome measure. For example, in assessing youth physical activity levels using the measure provided in this toolkit, a cut-off score of 3 or more days of physical activity a week can be used to classify youth as “physically active.”
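The physical activity example above amounts to a simple classification rule. The sketch below illustrates applying a cut-off score (the function and label names are ours, for illustration only):

```python
ACTIVE_CUTOFF = 3  # days of physical activity per week, per the Toolkit example


def classify_activity(days_active_per_week):
    """Apply the cut-off: 3 or more active days classifies a youth as physically active."""
    if days_active_per_week >= ACTIVE_CUTOFF:
        return "physically active"
    return "not physically active"


print(classify_activity(4))  # physically active
print(classify_activity(2))  # not physically active
```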


Demographic subgroup

A group that shares demographic characteristics. For example, African American youth are a demographic subgroup of youth, and female African American youth are a subgroup of African American youth.


Developmental applicability; Developmentally appropriate

An approach that considers the developmental stage of the young person. A developmentally appropriate measure is one that takes the age and cognitive abilities of youth into account in its structure (e.g., wording/reading level, applicability of terms/scenarios, length, etc.).


Dimension

An aspect of a construct being measured. For example, school engagement typically includes three dimensions in its measurement – behavioral, emotional, and cognitive engagement.


Effect size

A measure of the magnitude of the difference between groups, such as intervention and control groups, on an outcome measure of interest. Effect sizes are often expressed in standardized units to permit comparisons across both different outcomes and different measures of the same outcome. Cohen’s d is an example of this type of index; by Cohen’s widely used benchmarks, an effect size of about 0.2 standard deviation difference between groups is considered small, about 0.5 medium, and 0.8 or greater large. For a more detailed description of effect size and how to calculate it, see: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/
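For readers who want to see the arithmetic, Cohen’s d is the difference between the two group means divided by their pooled standard deviation. This is a minimal sketch with invented scores (not data from any real evaluation):

```python
import statistics


def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd


intervention = [12, 15, 14, 13, 16]  # hypothetical outcome scores
control = [10, 12, 11, 13, 11]
print(round(cohens_d(intervention, control), 2))  # 1.89
```

A d of about 1.89 would count as a large effect, though real program evaluations rarely see differences this big.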


Empirical evidence; Evidence base

Current knowledge within a particular field that is based on scientifically acquired information (i.e., experiments or observation).


Environmental risk factors

A challenge in the young person’s surrounding life circumstances, such as poverty or living in a dangerous neighborhood, that is associated with an increased likelihood of later difficulties (e.g., not graduating from high school).


Forced choice

Questions that ask a respondent to select one or more responses from a fixed set of options. “Open-ended” questions, by contrast, allow respondents to answer in their own words.


Grounded in theory

Being informed by, or based on, the principles or assumptions of a particular theory. For example, a program that aims to increase helping behavior in youth by exposing them to peers exhibiting this behavior is grounded in social modeling theory.


Individual risk factors

A challenge rooted in a young person’s own behavior or characteristics, including personality traits or cognitive abilities, that is associated with an increased likelihood of later difficulties.


Interaction

When the strength or direction of the association between one variable and an outcome depends on the level of one or more other variables. For example, when examining gender in relation to the association between mentoring and school misbehavior, the presence of an interaction would suggest that receiving mentoring or not has a different association with school misbehavior among boys (for example, a weaker or stronger association) than it does among girls.


Intervention group; Program group; Treatment group

A group that is offered the intervention or program that is being evaluated in a study.


Likert-type response

Options provided to survey respondents that use ordered response levels from which respondents choose one option that best aligns with their view (for example, the extent to which they agree or disagree with a particular statement). These response levels are anchored with consecutive numbers and/or labels that connote fairly evenly-spaced gradations along a continuum from least to most (e.g., strongly disagree, disagree, neutral, agree, strongly agree). More information available at https://www.cdc.gov/dhdsp/pubs/docs/cb_february_14_2012.pdf


Meta-analysis

A statistical technique that systematically combines results from several studies to develop a single conclusion. For example, results from several mentoring evaluations could be combined to assess the effectiveness of mentoring programs across those studies.


Nationally representative sample

A group of participants selected in a way that makes their characteristics closely match those of the population across the nation as a whole.


Normative data

Data that characterize how a characteristic, attitude, or skill is distributed in a given population to give a sense for what is usual in that population. Normative data are typically obtained from a large, randomly selected, representative sample from the wider population, and often may be provided separately for different subgroups (e.g., males and females).


Out-of-school-time

The time young people spend outside of the typical school day including, for example, time before school, after school, on weekends, or during school breaks (e.g., summer).


Psychometric properties; Psychometric evidence

How well the instrument measures what it is designed to measure (e.g., the reliability and validity of the instrument). Also see Reliability and Validity.


Psychosocial

Involving a combination of social and psychological factors.


Public domain

Material that is not protected by copyright and is therefore freely available for public use.


Randomized controlled trial; Randomized control trial; Random assignment

A study in which, or the process through which, individuals are assigned to intervention and control groups on the basis of chance, so that every individual has the same probability as any other of being placed in either group. Using chance to assign people to groups helps to ensure that the groups are similar on all characteristics except for the intervention group’s opportunity to receive the intervention. Also see intervention group and control group.


Reinforcement

Encouraging or establishing a belief or pattern of behavior (e.g., through reward).


Reliability; Reliable

The consistency of scores on a measure over repeated use. For a more detailed discussion of reliability as it applies to youth program outcomes, see page 53 of the Forum for Youth Investment’s soft skills measures compendium.


Response format; Response choices

How the answers to a question are collected from respondents. For example, a Likert response scale is one type of response format. Also see Likert-type responses.


Reverse-scoring; Reverse direction

The process of reversing the numerical values assigned to responses on an item that reflects the absence (or opposite) of the outcome being measured. For example, when assessing Social Acceptance, a child’s response to a question asking to what extent he/she feels lonely might be reversed, so that a response of greater disagreement reflects higher Social Acceptance and can then be averaged with responses to the other items in the scale. The formula for reverse-scoring an item is: New score = Number of response options possible plus 1 minus the original score. For example, if a youth chooses a response that would normally be scored a “4” on a 6-point scale and that item is reverse-scored, her new score for the item would be 6 + 1 - 4 = 3.
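The reverse-scoring formula above can be expressed directly in code; this is only a sketch of the arithmetic, reusing the glossary’s own example values:

```python
def reverse_score(original: int, num_options: int) -> int:
    """New score = number of response options + 1 - original score."""
    return num_options + 1 - original

# The glossary's example: a "4" on a 6-point scale reverse-scores to 3.
print(reverse_score(4, 6))  # 3
```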


Scale vs. subscales

A set of questions used to assess an outcome of interest. A subscale is a part of an overall scale that measures a component of the outcome of interest. For example, the Self-Control subscale is part of the larger Social-Emotional and Character Development Scale (SECDS).


School-based sample

A group of participants selected from the larger body of students attending a school.


Sensitivity

A measure’s ability to correctly identify those respondents who exhibit a state or outcome of interest (e.g., a measure of depressive symptoms as being “sensitive” to detecting actual clinical depression). It is formally calculated as the number of persons correctly identified by the measure as having the state or outcome of interest (referred to as “true positives”) divided by the sum of all persons who actually have the state or outcome (i.e., true positives + false negatives). In contrast, a measure’s ability to correctly identify those respondents who do not exhibit a state or outcome of interest is referred to as its “specificity.” Specificity is formally calculated as the number of persons correctly identified by the measure as not having the state or outcome of interest (“true negatives”) divided by the sum of all persons for whom the state or outcome is in fact not present (i.e., true negatives + false positives).
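The two formulas can be sketched as follows; the counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """True positives divided by all persons who actually have the outcome."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """True negatives divided by all persons who actually lack the outcome."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening results: 40 true positives, 10 false negatives,
# 45 true negatives, 5 false positives.
print(sensitivity(40, 10))  # 0.8
print(specificity(45, 5))   # 0.9
```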


Standard deviation

A measure of the extent to which the scores for the members of a group vary from the group’s mean (i.e., average). For example, if a T-score of 50 represents a score equal to the average score for youth in the normative sample, a score of 60 would indicate a score that is one standard deviation higher than this average and thus an elevated level on the outcome measure compared to the typical youth in the normative sample.


Social modeling

The idea that we learn by observing others’ behavior.


Summing vs. averaging scores

When an outcome measure uses a Likert or Likert-type response set, you can compute a score to represent the youth’s overall response on the scale by either summing or averaging the values across each item in the scale. In this toolkit, the scoring method used by the developers of the scale is provided. For purposes of most statistical analyses (for example, looking at differences in an outcome between those in a mentored group and those in a comparison group), findings and conclusions will not be affected by whether summing or averaging is used in scoring the outcome measure. Also see Likert-type responses.
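A small sketch of the two scoring methods, using hypothetical item responses. Because the average is simply the sum divided by the number of items, the two methods order respondents identically:

```python
responses = [3, 4, 5, 2]  # hypothetical responses to a 4-item Likert-type scale

summed_score = sum(responses)                     # 14
averaged_score = sum(responses) / len(responses)  # 3.5

print(summed_score, averaged_score)
```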


T-score

A measure of how far a given score is (in standard deviations) from a group average or mean. T-scores are constructed so that they have a standard deviation of 10 and an average of 50. For example, a T-score of 60 for a respondent would indicate that his or her score on a measure is one standard deviation higher than the average score of a reference group (for example, youth in the sample that was used to develop norms for the measure).
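Given a normative mean and standard deviation, a raw score converts to a T-score as 50 + 10 * (raw - mean) / SD. The norms below are hypothetical, chosen to reproduce the one-standard-deviation example in the definition:

```python
def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score to a T-score (mean 50, standard deviation 10)."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

# Hypothetical norms: mean 20, SD 5. A raw score of 25 is one SD above
# the normative mean, so the T-score is 60.
print(t_score(25, 20, 5))  # 60.0
```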


Validation

The process of assessing and confirming the reliability and validity of a measure.


Validity

The extent to which a measure actually measures what it is intended to measure. For a more detailed discussion of validity as it applies to youth program outcomes, see page 55 of the Forum for Youth Investment’s soft skills measures compendium.