Findings from an Innovative Teen Pregnancy Prevention Program Evaluation of mCircle of Life in Tribes in the Northern Plains Final Impact Report for University of Colorado Denver Centers for American Indian and Alaska Native Health August 18, 2015 Prepared by Traci Schwinn,1 Carol E. Kaufman,2 Kirsten Black, 2 Ellen M. Keane,2 Nicole R. Tuitt,2 Cecelia K. Big Crow,3 Carly Shangreau,3 Greg Schaffer,2 Steven Schinke1 1 Columbia School of Social Work, Columbia University, New York, NY 2 Centers for American Indian and Alaska Native Health, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO 3 Centers for American Indian and Alaska Native Health, Northern Plains Field Office, Pine Ridge, SD Recommended Citation: Schwinn, T, Kaufman, CE, Black, K, Keane, EM, Tuitt, NR, Big Crow, CK, Shangreau, C, Schaffer, G, Schinke, S. 2015. “Evaluation of mCircle of Life in Tribes of the Northern Plains: Findings from an Innovative Teen Pregnancy Prevention Program.” Final behavioral impact report submitted to the Office of Adolescent Health. August 18. The authors report no conflict of interest. Acknowledgements: We would like to thank the Native Boys and Girls Clubs’ staffs for their enduring support and collaboration on this project, participating youth for their generosity of time and willingness to engage with sensitive topics, their parents who supported them, and finally the people of the tribal communities of the Northern Plains, for their commitment to improving the lives of their children. This publication was prepared under Grant Number TP2AH000003 from the Office of Adolescent Health, U.S. Department of Health & Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the policies of HHS or the Office of Adolescent Health. EVALUATION OF MULTIMEDIA CIRCLE OF LIFE IN NORTH AND SOUTH DAKOTA: FINDINGS FROM AN INNOVATIVE TEEN PREGNANCY PREVENTION PROGRAM I. Introduction A. Introduction and study overview American Indian and Alaska Native (AIAN) rates of teen pregnancy are substantially higher than those of non-Hispanic Whites (31 per 1,000 vs. 18 per 1,000), and are the third highest across all race groups. 1 Additionally, in 2012, 20% of all AIAN teens giving birth had already given birth at least once before. This rate was about the same as for African Americans and Hispanics, but higher than for Whites (14%). 2 High teen birth rates are not the only potentially harmful reproductive outcome disproportionately shouldered by AIAN teens. In 2011, rates of chlamydia and gonorrhea in AIAN girls ages 15-19 years were 2.8 and 3.5 times greater, respectively, than those of their White counterparts. Only African American teens had higher rates than AIANs. 3 Risk behaviors are strongly associated with these birth and sexual health statistics for AIAN teens. According to the 2011 Youth Risk Behavior Survey, a national sample of youth attending high school, AIAN teens had the highest levels of ever having had sex (69%), having had sex in the last three months (46%), and substance use before last sex (32%). 4 (See Table I.A.1.) The epidemiological and behavioral profile suggests the importance of prevention at early ages. Yet, there are no culturally grounded evidence-based sexual risk reduction (SRR) interventions for AIAN youth of any age appearing on federal lists. 5,6 To address this gap, we were awarded funding from the Office of Adolescent Health, Teen Pregnancy Prevention Initiative (Tier 2). 
Our main interests in the project were assessing precursors of behavior change––knowledge and self-efficacy––among young AIAN adolescents. Prevention messages are perhaps most important in young adolescence, before sexual risk behaviors become common. Although sexual risk behaviors are still rare in this age group, we hypothesized that changes in knowledge and self-efficacy brought about by the intervention may subsequently reduce sexual risk behavior later, in the teen years. Behavioral outcomes remain important, however, and in this report we assess those outcomes.

Table I.A.1. Percent reporting sexual risk among high school youth, 2011, by race/ethnicity
Measure | Total | AIAN | Black | Hispanic | White
Ever had sexual intercourse | 47.4 | 69.0 | 60.0 | 48.6 | 44.3
Ever had sexual intercourse w/ 4+ persons | 15.3 | 21.9 | 24.8 | 14.8 | 13.1
Sexually active in last 3 months | 33.7 | 45.5 | 41.3 | 33.5 | 32.4
Did not use a condom at last sex | 39.8 | 33.8 | 34.7 | 41.6 | 40.5
Did not use effective contraception at last sex*,** | 76.7 | 79.5 | 83.4 | 84.9 | 70.9
Drank alcohol or used drugs before last sex | 22.1 | 31.6 | 18.1 | 21.8 | 23.4
Source: 2011 Youth Risk Behavior Survey
*Among those who were sexually active
**Effective contraception=birth control pills; IUD/implant; shot, patch, or ring

We adapted Circle of Life (COL), a theory-based SRR program, to a multimedia format (mCOL) for AIAN youth ages 10-12 years. The original COL curriculum was developed specifically for AIAN youth more than 10 years ago by ORBIS Associates, an AIAN-owned and -operated not-for-profit education organization. 7 The COL curriculum was developed with extensive community review and input from parents, educators, and health experts across the country. 8 In addition to earlier qualitative pilots,8,9 COL was rigorously evaluated from 2006 to 2009 in a group-randomized controlled trial with 13 middle schools (youth ages ranged from 11 to 16 years at baseline) in a Northern Plains tribal community. Results showed COL was effective for delaying the onset of sexual activity among AIAN youth who received the program as young adolescents compared to those who received it at older ages or not at all. Students and facilitators reported that they liked the program and suggested including additional opportunities to integrate local cultural content and developing a digital version to reach and engage youth. 10,11 mCOL is an adaptation that addresses those recommendations, and it is available either online or through DVDs. Additional class activities were added to supplement the mCOL online/DVD lessons.

We evaluated mCOL in a group-randomized controlled trial to assess its effectiveness in sexual risk reduction. We partnered with Native Boys and Girls Clubs (NBGCs) to implement the study in 16 communities across six tribes of the Northern Plains. The evaluation team consisted of external evaluators from Columbia University––experts in research with Boys and Girls Clubs––and University of Colorado Denver evaluation staff, who had expertise in intervention research with AIAN communities. The NBGC program staff implemented the intervention.

B. Primary research question(s)
The effectiveness study for mCOL addresses this primary research question:
• What is mCOL's impact, relative to the control condition (After-School Science Plus), on the proportion of youth ever having had sex at 9 months after the intervention?
C. Secondary research question(s)
The effectiveness study for mCOL also addresses this secondary research question:
• What is mCOL's impact, relative to the control condition (After-School Science Plus), on the proportion of youth ever having had sex directly after the intervention?

II. Program and comparison programming
A. Description of program as intended
mCOL is a culturally grounded, theory-based, comprehensive HIV-prevention program. Its theoretical foundation is the Medicine Wheel, a heuristic facilitating indigenous ways of learning and understanding familiar to most tribes in the United States.10 The Medicine Wheel symbolizes wholeness derived from balance among four quadrants: spiritual, emotional, physical, and mental. The concept of "volition" is included to emphasize each person's power to make decisions to balance and strengthen their circle. Western behavioral theories (Social Cognitive Theory, Theory of Reasoned Action, and Theory of Planned Behavior) support skill acquisition for goal setting, decision making, and standing up to peer pressure through activities such as storytelling, games, crafts, and role playing. Both the Medicine Wheel and Western behavioral theories are interwoven into the content and activities that comprise the online and class lessons.

Working in partnership with the Office of Minority Health Resource Center and the Indian Health Service, we adapted COL, a curriculum using materials specifically designed for low-resourced communities, to a multimedia format, retaining core components and theoretical constructs. mCOL also incorporates a number of enhancements, including: a) artistic regionalization (artwork tailored to cultural symbols typical of the region, based on zip code); b) flexible implementation so facilitators can tailor content to specific needs; c) online resources for facilitators; and d) updated, expanded, and medically reviewed content on teen pregnancy prevention, sexually transmitted diseases (STDs), and hepatitis C. Maintaining cultural integrity was a key objective in adapting COL content to a slightly younger audience. mCOL uses AIAN characters to guide youth on the journey through the chapters. Cultural ways of learning, including symbols, storytelling, and games, are included throughout the program. The goal was not to be pan-AIAN; instead, if a tribal-specific quotation or story was used, youth were invited to think about a similar story in their own tribe or tradition. (See Appendix A for mCOL's logic model.)

Similar to the original COL program, mCOL was designed for implementation in school, after-school, or community settings. The digital format expands program accessibility, ensures consistent delivery of medically sound content, and reduces the burden on program staff for teaching. mCOL consists of seven online chapters; each online chapter takes about 20-25 minutes to complete. The first three chapters provide content on adolescence, puberty, and healthy relationships; the next three cover teen pregnancy, STDs/HIV, and hepatitis C; and the final chapter reviews the content and presents youth with a certificate of completion. Our community steering committee recommended that we also develop class activities to complement the online material for each chapter (see Appendix B for intervention content). Class sessions include discussions, instruction, demonstrations, games, and crafts. Classes are taught by local program staff, who may invite health professionals or community members to deliver portions of the material.
To implement the program in after-school settings, we recommended that, each week, youth complete one online chapter and then, preferably on a different day, attend the accompanying class session. By offering one online and one class session per week, the intervention would require 7 weeks to complete. mCOL facilitators received about 4 hours of in-person training and about 2 hours of practice covering instruction and procedures around research requirements (attendance, implementation logs, etc.). In addition, we created videos, available through the program website, showing how to teach each class lesson. 12 We provided each site with copies of the lesson plans and materials for teaching the class sessions (i.e., printed handouts, craft supplies).

B. Description of counterfactual condition
Youth randomized to the control condition received the After-School Science Plus (AS+) program, a copyrighted science curriculum published by the Educational Equity Center at FHI 360. 13 This program was selected with input from our community steering committee. AS+ is designed for after-school programs and is consistent with NBGC goals for teaching about physical science. The target age group for AS+ is 8- to 14-year-olds. Although the full curriculum consists of 11 lessons, we used only the first 7 lessons to match the intervention condition. (See Appendix B for intervention content.) Each lesson requires approximately 1 hour to complete and involves hands-on activities. Lessons were taught weekly in group sessions by NBGC program staff. Similar to mCOL, we developed seven online videos showing how to teach each session, and we provided all the materials necessary to deliver the program. All AS+ facilitators received two published guides for delivering the program and a brief in-person orientation to the materials and support provided.

III. Study design
A. Sample recruitment
We contacted and subsequently recruited six Native Boys and Girls Clubs (NBGCs) in North and South Dakota, located on separate American Indian reservations. These NBGCs operate in some of the poorest areas of the nation, most with three times the national poverty rate for children and half the national median household income. Most clubs have multiple sites, known as units, which are dispersed in distinct communities across the reservation. In total, the six NBGCs operated 16 units, which were the recruited sample and the unit of randomization. Youth ages 10-12 who attended the units were eligible to be recruited for the study. No exclusion was made on the basis of race, although most, but not all, youth at the participating units were American Indian or Alaska Native. The protocol for recruiting and consenting was the same for treatment and control units; there were no differences in process or materials. Recruitment and consenting occurred between September 2012 and July 2014, for several months at each unit. Most units completed recruitment within 3-6 months; however, some experienced unforeseen events and closures due to financial difficulties, staffing changes, moving, etc. These factors interrupted recruitment, sometimes resulting in a recruitment period of longer than 6 months, albeit with no recruitment activity during the interruption. 14 Once recruitment ended, each unit provided the mCOL program (seven chapters) one time to enrolled participants.
For recruitment, unit program staff identified eligible youth from their records and invited them and their parents/guardians to attend an informational meeting about the study. A modest dinner and small gifts (an aluminum water bottle and a first aid kit) were given to each family. At the meetings, research team members described the study, answered questions, and explained the consent process to interested families. Families could complete consent and enrollment materials at that time or take the materials home to review and decide on the child's participation later. For identified families that did not attend the meetings, additional recruitment methods were used, including individual meetings and telephone calls with research team members and sending enrollment packets home with youth.

B. Study design
The study used a group-randomized controlled trial design. Random assignment was conducted by the principal investigator using the random number generator routine in Microsoft Excel. Club units were the unit of randomization. The units were grouped into strata according to state (eight in South Dakota and eight in North Dakota) and size (2 large and 14 small). Random assignment of units was conducted before enrollment of youth (June 2012), but the assignments were not shared with the unit program staff charged with assisting in recruitment efforts until after recruitment. One AS+ unit closed after randomization but before any youth were enrolled, leaving a total of 15 units.
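For readers who want to see the logic of stratified unit-level assignment spelled out, the sketch below is a hypothetical re-creation in Python. It is not the evaluators' actual procedure (assignment was done with Excel's random number routine); the unit labels, strata composition, seed, and function name are illustrative assumptions only.

```python
import random

# Hypothetical roster of the 16 club units: (unit_id, state, size).
units = [(f"unit_{i:02d}", "ND" if i <= 8 else "SD", "large" if i in (1, 9) else "small")
         for i in range(1, 17)]

def stratified_assignment(units, seed=2012):
    """Randomly assign units to mCOL or AS+ within strata defined by state and size."""
    rng = random.Random(seed)
    strata = {}
    for unit_id, state, size in units:
        strata.setdefault((state, size), []).append(unit_id)

    assignment = {}
    for members in strata.values():
        arms = ["mCOL", "AS+"]
        rng.shuffle(members)   # random order of units within the stratum
        rng.shuffle(arms)      # random arm for any odd (or single) unit
        for position, unit_id in enumerate(members):
            assignment[unit_id] = arms[position % 2]
    return assignment

if __name__ == "__main__":
    for unit_id, arm in sorted(stratified_assignment(units).items()):
        print(unit_id, arm)
```

Balancing within strata in this way keeps the two arms comparable on state and unit size even with only 16 clusters available for randomization.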
C. Data collection
1. Impact evaluation
After completing an online assent, youth accessed the online baseline survey. Baseline surveys were administered at discrete times across the recruitment period when evaluation staff were present. After baseline surveys were collected at each unit, the unit program staff delivered their assigned intervention. Follow-up surveys were administered immediately post intervention and 9 months after intervention completion. Evaluation staff traveled to each unit to facilitate data collection, troubleshoot technical problems, and compensate respondents. Like the baseline, the two online post-test surveys (immediate post-test and 9-month post-test) were administered by evaluation staff at the units. Each survey took about 20 minutes to complete. Audio Computer Assisted Self Interview (ACASI) software was used to ensure youth could understand the surveys, regardless of reading ability. Youth received compensation for completing the surveys ($10 for the baseline and immediate post-test surveys, and $15 for the 9-month post-test survey). To maximize survey participation, unit program staff or evaluation staff contacted all consented youth to let them know about upcoming survey dates; multiple opportunities for completing surveys were provided. The survey administration protocol was the same for the intervention and comparison groups. Nevertheless, many units assigned to implement mCOL also experienced operational delays (i.e., delays not related to the intervention). As a result, mCOL units averaged a 7.5-month delay between the baseline and immediate post-test surveys. These delays were longer than those in the AS+ group. Please see Table III.C.1 for details on data collection schedules.

Table III.C.1. Data collection schedule
Notes explaining delays in recruitment and implementation:
Unit 1: This unit did not experience delays.
Unit 2: Located in a remote, low-resourced community, this unit moved after completing the baseline but prior to implementation. The intervention was delayed due to closure associated with renovation of the new site, staff turnover, and poor internet access.
Unit 3: Inadequate staffing and low attendance for youth in our age group resulted in a long recruitment period. Recruitment remained open until there was sufficient staffing to launch the intervention.
Unit 4: This unit recruited youth quickly but experienced delays during implementation. The delays were caused by moving locations and renovations (a 1-month closure), poor internet connectivity, and inconsistent attendance of youth, which made it difficult to conduct classes.
Unit 5: This is a small unit, and recruitment was completed quickly. Shortly after recruitment and baseline completion, the unit closed and remained closed until the 9-month data collection point (estimated based on a neighboring unit). Although there was discussion of having youth attend mCOL at another unit, this never materialized. Youth at this unit did not receive the mCOL intervention.
Unit 6: This unit experienced delays in recruitment and implementation due to intermittent closures associated with financial difficulties and problems with the facility. Additional reasons for delay were staff turnover and poor internet access.
Unit 7: This unit experienced delays in launching the intervention due to closure of the unit. We anticipated that the unit would reopen; eventually, we engaged a different location to host the program.
Unit 8: This unit experienced intermittent closures, which delayed recruitment activities. During the summer of 2014, the unit was open and staffed. We recruited youth and allowed staff to implement the program using a condensed schedule (2 weeks).
Units 9-13 & 15: These units did not experience delays.
Unit 14: This club closed due to financial difficulties after AS+ implementation was completed. We were unable to conduct post-test surveys until the club reopened three months later.

2. Implementation evaluation
The mCOL program consists of seven chapters that are delivered online and supplemented with class sessions. To evaluate site-level provision of the online lessons, we used electronic and written records of chapter completion. Since the content and delivery for the online program were consistent across all sites, we did not measure "quality" for this component. Attendance at class sessions was recorded by unit program staff using paper attendance sheets. Staff also completed fidelity monitoring logs (FMLs) to report the number of activities completed, adaptations made, and level of youth engagement. Research team members monitored and supported implementation through regular calls with program staff and site visits to observe at least 10% of mCOL class lessons. Observation visit information was collected using the Observation Visit form, a standardized form designed to capture quality of instruction and engagement of youth in the lesson. To improve our understanding of the factors affecting program implementation, we collected additional qualitative information across both study conditions. Each monitoring call and site visit was documented with detailed field notes, and all email correspondence was archived. After program implementation was completed, we conducted telephone interviews with each facilitator and surveyed youth about their experience with either the AS+ or mCOL program. For a summary of elements included in the implementation evaluation, see Appendix C.
D. Outcomes for impact analyses
To measure sexual activity in this young sample, we used the question: Have you ever had sexual intercourse? Participants who answered "yes" were coded 1; those who answered "no" were coded 0. Following our institutional review board (IRB) regulations, youth were allowed to skip this question if they did not want to answer. This question was asked in all three surveys: baseline, immediate post-test, and 9-month post-test. See Table III.D.1.

Table III.D.1. Behavioral outcomes used for primary impact analyses research question
Outcome name | Description of outcome | Timing of measure relative to program
Ever had sexual intercourse | The variable is a yes/no measure of whether a person has ever had sexual intercourse. The measure is taken directly from the following survey item: "Have you ever had sexual intercourse?" The variable is constructed as a dummy variable where respondents who respond that they have had sex are coded as 1 and all others are coded as 0. Because our IRB requires an option for participants to skip questions they do not want to answer, non-response is coded as missing. | Baseline; immediately after program ends (post-test); 9 months after program ends

E. Study sample
Sixteen units were randomized, with 8 in each study arm. One unit closed prior to recruitment, leaving 8 units randomized to mCOL and 7 units to AS+. After receiving parental consent, evaluation staff entered tracking information about each youth and parent into the study database and assigned each youth a unique number. Next, prior to completing the baseline survey, each youth completed an online assent. The assent was audio assisted and included follow-up questions to ensure youth fully understood the risks and benefits of participation in the study. On the basis of club enrollment figures, we anticipated that several hundred youth in the eligible age group would enroll in the study. Although we enrolled a relatively high percentage of eligible youth at each unit, total study enrollment was lower than anticipated (n=167) due to declining club attendance for youth in this age range. Declining attendance was due to local after-school program competition, financial instability of clubs, and local mismatches between youth interests and club programming. At baseline, 167 youth completed a survey across 15 units. Of these, 93 completed an immediate post-test survey and 89 completed a 9-month post-test survey, representing 45% and 43% response rates, respectively. Aside from the unit that closed prior to recruitment, no units withdrew from the project. However, one of the 15 units, an mCOL unit, was not operational for the immediate post-test survey, though it was open for the 9-month post-test survey. See Table III.E.1 for the sample flow, Table III.E.2 for characteristics of the communities, and Table III.E.3 for characteristics of the youth sample. Responses on outcomes and key measures by data collection point are located in the baseline equivalence section (Tables III.F.1a and b).

Table III.E.1. Cluster and youth sample sizes by intervention status – cluster designs
Number of: | Time period | Total sample size | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate
Clusters: At beginning of study | | 16 | 8 | 8 | N/A | N/A | N/A
Clusters: Contributed at least one youth | Baseline | 15 | 8 | 7 | 93.75% | 100% | 87.50%
Clusters: Contributed at least one youth at follow-up | Immediately post-programming | 14 | 7 | 7 | 87.50% | 87.50% | 87.50%
Clusters: Contributed at least one youth at follow-up | 9 months post-programming | 15 | 7 | 7 | 93.75% | 100% | 87.50%
Youth: In non-attriting clusters/sites at time of assignment* | | 208 | 123 | 85 | N/A | N/A | N/A
Youth: Who consented** | | 167 | 98 | 69 | 80.29% | 79.76% | 81.18%
Youth: Contributed a baseline survey | | 167 | 98 | 69 | 80.29% | 79.76% | 81.18%
Youth: Contributed a follow-up survey | Immediately post-programming | 93 | 42 | 51 | 44.71% | 34.15% | 60.00%
Youth: Contributed a follow-up survey | 9 months post-programming | 89 | 44 | 45 | 42.79% | 35.77% | 52.94%
*Defined as any youth in the age range for whom we had some paperwork.
**Defined as parental consent and youth assent.
Table III.E.2. Characteristics of reservations of study sites
Measure | U.S. | American Indian Reservation Communities* | North Dakota | South Dakota
Total population | 311.5 million | 9,036 | 689,781 | 825,198
High school or more (%) | 86.0 | 70.8 | 90.9 | 90.4
Bachelor's degree+ (%) | 28.8 | 11.9 | 25.5 | 26.2
Median HH income ($) | 53,046 | 29,137 | 53,741 | 49,495
HHs below poverty (%) | 15.4 | 37.9 | 11.9 | 14.1
Unemployment rate (%) | 9.7 | 25.5 | 3.3 | 5.0
* To maintain community confidentiality, we do not name specific reservation communities, but instead estimate indicators based on weighted averages of those included in the project. HH=household.
Source: 2009-2013 American Community Survey 5-Year Estimates. Margins of error not reported here.

Table III.E.3. Characteristics of youth sample
Measure | Baseline mean or proportion (standard deviation) | Immediate post-test mean or proportion (standard deviation) | 9-month post-test mean or proportion (standard deviation)
My tribe is important to me (range 1-4, strongly disagree to strongly agree) | 3.72 (.054) | 3.38 (.114) | 3.67 (.118)
Ever had alcohol (more than a sip) | .152 (.026) | .207 (.034) | .264 (.036)
Ever had a cigarette | .259 (.038) | .297 (.053) | .348 (.041)
Ever used marijuana | .147 (.025) | .233 (.054) | .352 (.058)
Played on a sports team | .877 (.025) | .890 (.032) | .897 (.027)
Sample size | 167 | 93 | 89
Note: All estimates adjusted for clusters.

F. Baseline equivalence
We assessed baseline equivalence for the two analytic samples (immediate post-test and 9-month post-test) on age, race, gender, and ever having had sex. Equivalence was assessed using linear probability models regressing each baseline characteristic on a condition indicator (mCOL or AS+). No significant differences were found. Models also included binary blocking variables for unit size (small or large) and unit state (North Dakota or South Dakota), and standard errors (SEs) were adjusted for clustering at the unit level using the Huber-White sandwich estimator (completed in SPSS Statistics 22 using Complex Samples plans for the General Linear Models procedure). See Tables III.F.1a and III.F.1b, respectively.
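To make the equivalence check concrete, the sketch below shows one way such models could be run. The analyses reported here were carried out in SPSS Statistics 22; this Python/statsmodels version is only an illustrative analogue, and the data file and column names (treat, large_unit, state_nd, unit_id, and so on) are assumptions rather than the project's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per youth in an analytic sample, with
# baseline characteristics, the condition indicator, and blocking variables.
df = pd.read_csv("baseline_analytic_sample.csv")

# "ever had sex" is omitted from the loop because no youth reported sex at
# baseline, so that measure has no variation to model.
for measure in ["age", "female", "aian"]:
    model = smf.ols(f"{measure} ~ treat + large_unit + state_nd", data=df)
    # Huber-White sandwich SEs, clustered on the club unit (unit of assignment)
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["unit_id"]})
    print(f"{measure}: intervention-comparison difference = "
          f"{fit.params['treat']:.2f}, p = {fit.pvalues['treat']:.3f}")
```

A non-significant coefficient on treat for each baseline measure corresponds to the equivalence reported in Tables III.F.1a and III.F.1b.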
Table III.F.1a. Summary statistics of key baseline measures for youth completing the 9-month post-test
Baseline measure | Intervention mean or % (standard deviation) | Comparison mean or % (standard deviation) | Intervention versus comparison mean difference | Intervention versus comparison p-value of difference
Age | 11.23 (0.86) | 10.96 (0.85) | 0.26 | .164
Gender | 0.59 | 0.42 | 0.17 | .180
AI/AN* | 0.93 | 0.93 | 0.00 | .896
Ever had sex | 0 | 0 | 0 | NA
Sample size | 44 | 45 | |
* AI/AN=American Indian or Alaska Native alone or in combination with some other race

Table III.F.1b. Summary statistics of key baseline measures for youth completing the immediate post-test
Baseline measure | Intervention mean or % (standard deviation) | Comparison mean or % (standard deviation) | Intervention versus comparison mean difference | Intervention versus comparison p-value of difference
Age | 11.26 (0.86) | 11.10 (0.99) | 0.16 | .112
Gender | 0.55 | 0.47 | 0.08 | .173
AI/AN* | 0.93 | 0.90 | 0.03 | .578
Ever had sex | 0 | 0 | 0 | NA
Sample size | 42 | 51 | |
* AI/AN=American Indian or Alaska Native alone or in combination with some other race

G. Methods
1. Impact evaluation
We used linear probability modeling to estimate the treatment effects of mCOL, relative to the comparison intervention, on ever having had sex at the 9-month post-test. This approach is appropriate for binary outcomes in the context of experimental impact estimation and provides easily interpretable parameter estimates. Because youth are nested within a small number of club units (n=15), the Huber-White sandwich estimator was used to adjust the SEs. The analysis is a complete case analysis (i.e., our analytic sample included only youth who were present for the focal 9-month post-test survey). Statistical significance is based on p < .05 in a two-tailed test. The linear probability model for assessing the impact of mCOL, relative to the comparison group, treated "ever had sex" (yes = 1, no = 0) as the dependent variable and a treatment indicator (mCOL vs. AS+) as the independent variable, controlling for age, gender, race (AIAN or not), unit size (two large, 14 small units), and state (North Dakota or South Dakota), and using the Huber-White sandwich estimator to adjust the SEs. Sensitivity analyses were conducted in which a youth's "yes" response on a previous survey to the item "ever had sex" was used for the focal survey if the youth did not respond. There were no differences in findings with the sensitivity analyses (see Appendix D for details on the impact evaluation equations and sensitivity analyses). The analytic approach for the secondary research question, estimating the treatment effects of mCOL, relative to the comparison intervention, on ever having had sex at the immediate post-test survey, was the same as for the primary research question, including sensitivity analyses.
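As an illustration of this specification, the sketch below fits the same kind of linear probability model with cluster-robust (Huber-White) standard errors. The study's analyses were run in SPSS; this Python/statsmodels version is a hypothetical re-creation, and the file and variable names are assumed rather than taken from the project.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed analytic file: one row per youth present at the 9-month post-test, with
#   ever_sex (0/1 outcome), treat (1=mCOL, 0=AS+), age, female, aian,
#   large_unit, state_nd, and unit_id (the cluster / unit of assignment).
df = pd.read_csv("nine_month_analytic_sample.csv")  # hypothetical file

lpm = smf.ols("ever_sex ~ treat + age + female + aian + large_unit + state_nd",
              data=df)
# Huber-White sandwich SEs adjusted for clustering at the unit of assignment
fit = lpm.fit(cov_type="cluster", cov_kwds={"groups": df["unit_id"]})

# The coefficient on `treat` is the adjusted difference in the proportion ever
# having had sex between mCOL and AS+ youth (reported in the text as .007, p = .901).
print("Impact estimate:", round(fit.params["treat"], 3),
      "SE:", round(fit.bse["treat"], 3),
      "p:", round(fit.pvalues["treat"], 3))
```

The same call with the immediate post-test file would address the secondary research question.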
2. Implementation evaluation
The mCOL intervention consisted of a total of 14 activities: seven online chapters and seven supplemental class sessions. Youth received the main content of the intervention through the online chapters; the class sessions were designed to reinforce the online content. Although the full intervention consisted of 14 activities, the contribution of each learning activity was not assumed to be equal. Youth who completed at least 70% of the online chapters (5 of 7 chapters) were defined as having received a significant portion of the content. Consequently, program implementation was reported for the full intervention as well as for each component. Below, we outline our data sources and measures for adherence, dosage, fidelity, and quality, the main dimensions of our implementation evaluation.

Adherence: Adherence data were derived from online usage reports, attendance records, and FMLs completed by class session facilitators. Descriptive statistics (counts, percentages) were used to show the extent to which units delivered the full intervention (14 activities) and the separate components (seven activities each). Across all units, the number of activities delivered was compared to the total number possible (8 units x 14 lessons = 112 possible) to determine the percentage of the curriculum that was delivered.

Dosage Received: Dosage data were from usage reports (online chapters) and attendance records reported by each unit (class sessions). Dosage measures were calculated in multiple ways, including the percentage of youth who received at least 70% of the full curriculum (10 of 14 activities), the percentage of youth who completed at least 70% of the online chapters (5 of 7 chapters), and the percentage of youth who completed at least 70% of the class sessions (5 of 7 sessions).

Fidelity and Quality: For online chapters, standardized online content ensured 100% fidelity and consistent quality across units. For class sessions, fidelity was defined as the number of activities completed compared to the number possible, expressed as a percentage. The number of activities delivered by all units was compared to the total number of activities possible (units that did not provide an activity were not included in this analysis). Quality was assessed by averaging scores for question #7, "Rate the overall quality of the program session" (scale: 1=poor to 5=excellent), from the Program Observation Form for TPP Grantees. The names of facilitators (from FMLs) were compared to lists of trained program staff to assess the percentage of sessions that were conducted by trained facilitators. For the control group, implementation was measured as the percentage of lessons delivered across all sites compared to the total number of lessons possible (7 lessons x 7 units = 49).
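The dosage thresholds described above (at least 70% of the full curriculum, of the online chapters, or of the class sessions) reduce to simple counts over attendance and usage records. The snippet below is a minimal sketch of that calculation under assumed record layouts; the file names and columns (youth_id, component, activity_number) are illustrative, not the project's actual data structures.

```python
import pandas as pd

# Assumed long-format activity log: one row per youth per completed activity,
# with columns youth_id, component ("online" or "class"), activity_number (1-7).
log = pd.read_csv("mcol_activity_log.csv")        # hypothetical file
roster = pd.read_csv("mcol_enrolled_youth.csv")   # all enrolled mCOL youth

# Count distinct completed activities per youth and component, including
# enrolled youth who never appear in the log (count = 0).
counts = (log.drop_duplicates(["youth_id", "component", "activity_number"])
             .groupby(["youth_id", "component"]).size()
             .unstack(fill_value=0)
             .reindex(roster["youth_id"], fill_value=0))
counts["total"] = counts["online"] + counts["class"]

# 70% thresholds: 5 of 7 chapters, 5 of 7 class sessions, 10 of 14 activities overall.
met_online = (counts["online"] >= 5).mean()
met_class = (counts["class"] >= 5).mean()
met_full = (counts["total"] >= 10).mean()
no_show = (counts["total"] == 0).mean()

print(f"Completed >=70% of online chapters: {met_online:.0%}")
print(f"Completed >=70% of class sessions:  {met_class:.0%}")
print(f"Completed >=70% of full curriculum: {met_full:.0%}")
print(f"Received no activities:             {no_show:.0%}")
```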
IV. Study findings
A. Implementation study findings
Below, we report implementation findings on adherence, dosage, fidelity, and quality for the units assigned the mCOL program.

Adherence: Fifty percent (4/8) of the mCOL units delivered the full intervention (seven online lessons and seven class sessions). One unit closed after youth were enrolled and did not deliver any activities. All of the remaining units (n=7) delivered all seven of the online chapters. (See Chart IV.A.1.)

Chart IV.A.1. Online and class activities completed by unit

Compared to the online sessions, units had more difficulty delivering the class lessons. Only four units delivered all seven class lessons; one unit delivered three class lessons, one unit delivered two class lessons, and two units did not deliver any class lessons. Across all units, 80 activities were delivered, which equaled 71% of the total number of activities possible (8 units x 14 lessons/sessions = 112). Overall, the implementation was carried out as intended in only one unit; that is, only one unit was able to complete one online chapter and its corresponding class lesson activities each week over seven weeks. Due to closures, technological issues (limited bandwidth, incompatible platforms, etc.), weather, personnel turnover, unpredictable youth attendance, or other events, implementation length varied from four weeks to 15 months in the other mCOL units.

Dosage: Of the 84 youth assigned to receive the mCOL intervention, only 30% (n=25) completed at least 70% (10 of 14 activities) of the intervention. (See Table IV.A.1.) Completion rates were higher for online chapters than for class lessons. Forty-five percent (n=38) completed at least 70% (5 of 7 chapters) of the online curriculum; of these, 92% (n=35) completed all the online lessons. Twenty-seven percent (n=23) completed at least 70% (5 of 7 sessions) of the class lessons, and, of these, 91% (n=21) completed all class lessons. In many cases, club closures and program staff turnover caused long delays between youth enrollment and intervention commencement. Long delays led to attrition, especially for youth who were not regular club attendees. Fifty-five percent (n=46) of youth in the intervention group (mCOL) did not receive any portion of the intervention.

Table IV.A.1. Percentage of youth who completed at least 70% of the mCOL curriculum
 | Full intervention (10/14 activities) | Online chapters (5/7 chapters) | Class lessons (5/7 lessons)
Total sample (n=84) | 30% (n=25) | 45% (n=38) | 27% (n=23)

Fidelity: Online lessons consisted of standardized material that required sequential completion; fidelity for online lessons was 100%. The online program produced tracking data (e.g., length of time for each activity, skipped activities) for only one unit; the DVDs, used in many units, produced no tracking data. For units that conducted class sessions, 69% of possible activities were delivered according to FMLs. Of note, some unit personnel reported that they had to make unplanned adaptations to class sessions to accommodate shifting attendance and unit closures; for example, facilitators combined some class lessons or shortened others. These adaptations were not recorded in the FMLs, so the extent of these types of adaptations is not fully known. When fidelity data from the online and class sessions were combined, implementation fidelity was 85%.

Quality of Implementation: Quality of implementation was observed only for the class activities. Observation visits were attempted at the six units that delivered class activity sessions; four visits were completed. A total of 33 class sessions were delivered, and observations were made for 12% of them. The average score for youth engagement was 4.25/5, and the average score for overall quality was 4.5/5. One hundred percent of the facilitators were NBGC program staff who had received mCOL training. Youth engagement was also assessed through interviews with facilitators. They reported that youth enjoyed and were interested in the content of the online program. Similarly, facilitators reported that youth engaged in the class lessons and that there were many rich discussions. These reports were corroborated by surveys in which youth consistently chose the words "interesting" and "fun" to describe the program.

Experiences of Comparison Group: One hundred percent (49/49) of the lessons for the control intervention (AS+) were delivered.

Context: Several environmental or exogenous factors may have influenced the results of the evaluation. Throughout the project, at least nine units experienced external events that resulted in delays affecting various aspects of the project (i.e., recruitment, implementation, data collection). Of these, seven were mCOL units. The most common reason for delay was closure due to financial difficulties. Among the seven units that closed for financial reasons, the delays were intermittent and lasted from weeks to months. Shorter closures occurred due to units moving to new sites, community tragedies, weather, and facility problems (loss of heat, plumbing, etc.). These delays were associated with declines in attendance at the club units and, consequently, participation in our project. The combination of external events amplified the program delays and resulted in attrition.
In order to stimulate engagement and fit the program into a shorter time frame, units experiencing delays were advised to focus on completing the online material rather than trying to deliver both the online and class sessions.

B. Impact study findings
Students in the intervention group were no more or less likely than those in the comparison group to have ever had sex 9 months after the intervention. We examined the outcomes of 89 youth (44 mCOL and 45 AS+) to answer the primary research question: What is mCOL's impact, relative to the control condition (AS+), on the proportion of youth ever having had sex at nine months after the intervention? Three youths (6.8%) in the mCOL group reported ever having had sex, while two youths (4.5%) in the AS+ group reported ever having had sex. Linear probability modeling showed no difference in proportion between the two groups (mCOL vs. AS+) on "ever had sex" at the 9-month post-test, controlling for age, gender, race, unit size, and unit state and adjusting SE estimates for clustering at the unit of assignment (β = .007; SE = 0.054; p = .901; see Table IV.B.1).

Table IV.B.1. Post-intervention estimated effects using data from the 9-month post-test survey to address the primary research question
Outcome measure | Intervention % | Comparison % | Intervention compared to comparison: mean difference (p-value of difference)
Ever had sex | 0.06 | 0.05 | 0.01 (p = 0.901)
Source: Nine-month post-test surveys administered 8-10 months after intervention delivery.
Notes: Prevalence adjusted for age, gender, race, unit size, and unit state; see section G.1 for details on model specification.

Secondary Research Question: Students in the intervention group were no more or less likely than those in the comparison group to have ever had sex immediately after the intervention. We examined the outcomes of 93 youth (42 mCOL and 51 AS+) to answer the secondary research question: What is mCOL's impact, relative to the control condition (AS+), on the proportion of youth ever having had sex immediately after the intervention? Three youths (7.5%) in the mCOL group reported ever having had sex, while two youths (4%) in the AS+ group reported ever having had sex. Linear probability modeling showed no difference in proportion between the two groups (mCOL vs. AS+) on "ever had sex" at the immediate post-test, controlling for age, gender, race, unit size, and unit state and adjusting SE estimates for clustering at the unit of assignment (β = .05; SE = 0.044; p = .274; see Table IV.B.2).

Table IV.B.2. Post-intervention estimated effects using data from the immediate post-test survey to address the secondary research question
Outcome measure | Intervention % | Comparison % | Intervention compared to comparison: mean difference (p-value of difference)
Ever had sex | 0.03 | 0.08 | 0.05 (p = 0.274)
Source: Post-test surveys administered immediately after intervention delivery.
Notes: Prevalence adjusted for age, gender, race, unit size, and unit state; see section G.1 for details on model specification.

V. Conclusion
AIAN youth are disproportionately at risk for teen pregnancy and compromised sexual health compared to their non-AIAN counterparts nationally. Unfortunately, to date, no evidence-based SRR intervention for this population is listed on federal websites. This project was intended to address that gap. With respect to behavioral outcomes, we found that mCOL had no significant effect on sexual activity.
Future reports will examine the effects of mCOL on precursors to behavior, which are of particular interest given that youth were under 14 years old. At the 9-month post-test (primary outcome) and at the immediate post-test (secondary outcome), we found no difference between groups in the proportion of youth who reported ever having had sex. These findings were consistent across additional sensitivity analyses that imputed a "yes" response to the question "ever had sex" from an earlier survey to a later survey for youth who did not respond to the focal survey. Baseline equivalence of the analytic samples at the 9-month post-test and the immediate post-test was demonstrated. Notably, rates of ever having had sex were low; at the immediate post-test and the 9-month post-test, only five youths reported ever having had sex (approximately 5% of the youth who responded to a particular survey).

Several limitations to the study exist, including: (1) it was not powered to detect statistically significant differences in behavioral outcomes; (2) 47% of the sample did not complete follow-up surveys; (3) a high percentage (55%) of youth in the intervention group did not receive any portion of the intervention; (4) more than half of comparison youth were exposed to sexual health education in the past 12 months; and (5) there was variation in the length of the intervention period and in the time between baseline and post-test surveys. Our non-significant results, therefore, likely reflect a research design that was not sensitive enough to detect intervention effects on this behavioral outcome, as well as substantial limitations in sample retention and program implementation.

While behavioral outcomes were not shown to differ by intervention condition, the program itself was enthusiastically embraced by unit staff. However, these units also experienced a large number of implementation challenges, such as closures and technological problems. Additionally, mCOL facilitators made unplanned adaptations to the intervention. Despite these challenges, staff remained committed to providing the program, adapting it as necessary so that they could implement it within their unique circumstances. Interviews with staff and survey feedback from youth both showed positive responses to the content and format of the program. They also provided information that can be used to improve the program in the future.

The study demonstrates the many challenges of implementing a rigorous study in resource-poor, remote communities and with a population that most needs effective interventions.14 While mCOL was developed specifically for a young AIAN audience and was well received by unit staff, we cannot, from the data collected, determine its behavioral effectiveness. Future reports examining precursors to behavior change will add to an understanding of how mCOL may work.

VI. References
1 Hamilton BE, Martin J, Ventura S. Births: Preliminary data for 2012. National Vital Statistics Reports. 2013;62(3).
2 Martin J, Hamilton B, Osterman M, et al. Births: Final data for 2013. Hyattsville, MD: National Center for Health Statistics; 2015.
3 Centers for Disease Control and Prevention, Indian Health Service. Indian Health Surveillance Report--Sexually Transmitted Diseases 2011. Atlanta, GA: US Department of Health and Human Services; 2014. http://www.cdc.gov/std/stats/ihs/ihs-surv-report-2011_062314.pdf. Accessed February 24, 2015.
4 Centers for Disease Control and Prevention. Youth Risk Behavior Surveillance System (YRBSS). 2011; http://www.cdc.gov/HealthyYouth/yrbs/index.htm. Accessed February 27, 2015.
5 Department of Health and Human Services. TPP Resource Center: Evidence-Based Programs. 2014; http://www.hhs.gov/ash/oah/oah-initiatives/teen_pregnancy/db/. Accessed January 21, 2015.
6 Centers for Disease Control and Prevention. Effective Interventions: HIV Prevention That Works. 2014; https://effectiveinterventions.cdc.gov/en/HighImpactPrevention/Interventions.aspx. Accessed January 21, 2015.
7 ORBIS Associates. Circle of Life HIV/AIDS and STD Prevention Curriculum: Wellness Education for American Indian and Alaska Native Middle School Students. Washington, DC; 2002.
8 ORBIS Associates. Pilot Test Report for the Circle of Life Middle School Curriculum. Washington, DC: Office of American Indian Education Programs; 2002.
9 National Congress of American Indians (NCAI), FirstPic Inc. Native American Boys & Girls Club HIV Prevention Initiative, 2005-2006 - Fourth Quarter and Final Report. Washington, DC: Author; 2006.
10 Kaufman CE, Litchfield A, Schupman E, Mitchell CM. Circle of Life HIV/AIDS-prevention intervention for American Indian and Alaska Native youth. American Indian and Alaska Native Mental Health Research. 2012;19(1):140-153.
11 Kaufman CE, Whitesell NR, Keane EM, et al. Effectiveness of Circle of Life, an HIV-preventive intervention for American Indian middle-school youth: A group-randomized trial in a Northern Plains tribe. American Journal of Public Health. 2014;104(6):e106-e112.
12 mCircle of Life community dissemination presentations. Big Crow CK, "Medicine Horse Society," Kyle, SD, December 2013; Shangreau C, "Native Boys and Girls Clubs Summit Conference," Bismarck, ND, August 2014; "Lakota Nations Education Conference," Rapid City, SD, December 2014.
13 FHI 360, Educational Equity Center. After School Science PLUS. 1999; http://www.edequity.org/programs/science-and-math-programs#23. Accessed November 6, 2014.
14 Kaufman CE, Black K, Keane EM, et al. Planning for a group-randomized trial with American Indian youth. Journal of Adolescent Health. 2014;54(3 Suppl):S59-S63.
15 Office of Adolescent Health. "Using the Linear Probability Model to Estimate Impacts on Binary Outcomes in Randomized Controlled Trials." Evaluation Technical Assistance Brief for OAH & ACYF Teenage Pregnancy Prevention Grantees. Brief #6, December 2014.

APPENDIX A: MCOL LOGIC MODEL

APPENDIX B: COL AND AS+ INTERVENTION CONTENT
mCOL Curriculum
Chapter 1: Introduction to the Circle of Life
Chapter 2: Learning about Adolescence
Chapter 3: Decision Making
Chapter 4: Learning About Diseases
Chapter 5: Learning About HIV/AIDS/STIs and How They Are Spread
Chapter 6: Protecting Yourself from HIV and STIs
Chapter 7: Revisiting the Circle of Life
After-School Science Plus Curriculum
Session 1: Who Does Science?
Session 2: Oobleck: Solid or Liquid?
Session 3: Creating a Mystery Bottle
Session 4: Sink and Float
Session 5: Bubble Science
Session 6: Making and Tossing Bean Bags
Session 7: Building with Wonderful Junk

APPENDIX C: IMPLEMENTATION EVALUATION DATA AND METHODS
Table C.1. Data used to address implementation research questions
Implementation element: Adherence – To what extent did units deliver the intervention? The intervention consists of 2 modalities, each with 7 activities: a) online chapters (7); b) class lessons (7); c) combined online and class sessions (14). (Note: Some units received the electronic material [e.g., chapters] via DVD due to site-level technology problems; consequently, there were two sources of attendance data, which were combined.)
Data sources: Online chapters: on-demand electronic data reports generated from the host website. DVD version of chapters: paper records kept by NBGC program staff, collected at the end of implementation. Class lessons: paper attendance records and Fidelity Monitoring Logs (FMLs) completed by NBGC program staff, collected at the end of the project.
Operationalization: 1. Number of electronic chapters delivered: combine data from electronic and paper reports to determine the number of electronic chapters delivered, expressed for (1) all units and (2) individual units. 2. Number of class lessons delivered: counts of class sessions delivered taken from attendance sheets, expressed for (1) all units and (2) individual units. 3. Total number of activities delivered compared to total possible: combine data for online and class sessions delivered for all units and compare to the total possible (8 units x 14 lessons/sessions = 112); express as a percentage.
Frequency/sampling of data collection: Online data collected electronically, automatically at user log-in; DVD version, daily records; class lessons, daily records.
Party responsible for data collection: Electronic data collection; program staff.

Implementation element: Dosage – How much of the intervention did youth in the COL arm receive? (Note: Although there are two parts to the intervention, the primary content is delivered through the online program; consequently, we believe receiving >70% of the online chapters is the most appropriate metric for measuring dosage.)
Data sources: Online chapters: electronic report from the host website showing completion of chapters and discrete activities within each chapter, by ID. DVD: paper records kept by NBGC staff listing each chapter completed, by ID. Class lessons: paper attendance records listing each class lesson attended, by ID.
Operationalization: 1. % of youth who received >70% of the full curriculum (10 of 14 activities). 2. % of youth who completed >70% of online chapters (5 of 7). 3. % of youth who completed >70% of class sessions (5 of 7). 4. % of the sample that did not receive any sessions (no-shows). 5. % of the sample that received any activities who completed >70% of a) online chapters, b) class sessions, and c) the full curriculum.
Frequency/sampling of data collection: Online data collected electronically, automatically at user log-in; DVD version, daily records; class lessons, daily records.
Party responsible for data collection: Electronic data collection; program staff.

Implementation element: Fidelity – For each online chapter and supplemental class session, what percentage of activities were delivered with fidelity? (Note: The electronic medium ensures consistent delivery of all topics [100% fidelity].)
Data sources: 1. Online chapters and DVDs: electronic reports from the host website showed completed activities within each chapter by ID; units that used DVDs did not receive activity-level reports. However, electronic chapters (online and DVD) require sequential completion of activities, so youth who finished a chapter completed all activities. 2. Classes: Fidelity Monitoring Forms and observation data.
Operationalization: 1. Online chapters: all activities were fully delivered by the online program, thus fidelity was 100%. 2. Class sessions: the number of activities delivered by all units was compared to the total number of activities possible (units that did not provide an activity were not included in this analysis). 3. Full intervention: combine the percentage of activities delivered with fidelity for the online and class sessions and divide by 2.
Frequency/sampling of data collection: Electronic reports; fidelity monitoring forms completed at the end of each class; classroom observations occurred once per site.
Party responsible for data collection: Electronic; program staff; evaluation staff.

Implementation element: Fidelity – Was the intervention delivered by trained facilitators?
Data sources and operationalization: List of trained program staff compared to information about who completed the fidelity monitoring form.
Frequency/sampling of data collection: Training records updated as needed; fidelity monitoring forms collected continuously.
Party responsible for data collection: Project director.

Implementation element: Quality – Quality of staff-participant interactions.
Data sources and operationalization: Observation form, question #7.
Frequency/sampling of data collection: Classroom observations occurred once per site.
Party responsible for data collection: Evaluation staff.

Implementation element: Quality – Quality of youth engagement with the program.
Data sources and operationalization: Observation form; scores averaged for question #5.
Frequency/sampling of data collection: Classroom observations occurred once per site.
Party responsible for data collection: Evaluation staff.

Implementation element: Counterfactual – Experiences of the comparison condition.
Data sources: Monitoring calls with program staff; survey question asking whether youth participated in AS+.
Operationalization: Lessons delivered compared to total possible, expressed as a percentage.
Frequency/sampling of data collection: Weekly calls; end-of-project feedback survey completed by youth.
Party responsible for data collection: Evaluation staff.

Implementation element: Context – Other TPP programming available or offered to study participants (both intervention and comparison).
Data sources: Survey question on the baseline, immediate post-test, and 9-month post-test surveys.
Operationalization: % of youth who were exposed to other TPP content.
Frequency/sampling of data collection: Baseline, immediate post-test, and 9-month post-test.
Party responsible for data collection: Evaluation staff.

Implementation element: Context – External events affecting implementation.
Data sources: Monitoring calls, email updates.
Operationalization: Qualitative reporting of external circumstances reported by club staff and evaluation staff during the project.
Frequency/sampling of data collection: Weekly calls with program staff; ongoing email communication.
Party responsible for data collection: Field staff, project director, evaluation staff.

Implementation element: Context – Substantial unplanned adaptation(s).
Data sources: Monitoring calls, email updates.
Operationalization: Qualitative reporting of circumstances reported by program staff and evaluation staff during the project.
Frequency/sampling of data collection: Weekly calls with program staff.
Party responsible for data collection: Field staff, project director, evaluation staff.
APPENDIX D: SENSITIVITY ANALYSIS AND IMPACT EVALUATION EQUATIONS
1. Impact Evaluation Equations
The Linear Probability Model (LPM) is simply the application of ordinary least squares (OLS) to binary outcomes instead of continuous outcomes. Equation 1 provides an example of the LPM in the context of experimental impact estimation:
(1) Y = β0 + β1T + β2X + ε
where Y is the outcome, T is a binary indicator of treatment status, X is a covariate, β1 is the impact on Y of being assigned to the treatment group, and β2 is the mean marginal effect of X on Y.15

2. Sensitivity Analyses
This appendix evaluates the sensitivity of the estimates to two alternative data analytic approaches for handling missing data. The benchmark approach described in the evaluation report employs complete case analysis. Sensitivity analyses were conducted in which a youth's "yes" response on a previous survey to the item "ever had sex" was used for the focal survey if the youth did not respond. Based on the assertion that a "yes" to "ever had sex" remains true, we imputed 5 cases (two from baseline and three from the immediate post-test) into the immediate post-test and 9-month post-test samples. The results of this analysis appear in Table 1, under the column marked "Yes Forward." There were no differences in findings with this sensitivity analysis.
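The "Yes Forward" rule can be expressed as a short data step. The sketch below shows one way the carry-forward imputation could be implemented; it is illustrative only, and the wide-format file, column names, and helper function are assumptions rather than the project's actual code.

```python
import pandas as pd

# Assumed wide file: one row per youth with ever_sex_base, ever_sex_post, and
# ever_sex_9mo coded 1 = yes, 0 = no, NaN = survey missed or item skipped.
df = pd.read_csv("ever_sex_wide.csv")  # hypothetical file

def yes_forward(current, earlier_waves):
    """Fill a missing response with 1 only if some earlier wave was a 'yes'."""
    carried = current.copy()
    earlier_yes = earlier_waves.eq(1).any(axis=1)
    carried[current.isna() & earlier_yes] = 1
    return carried

df["ever_sex_post_yf"] = yes_forward(df["ever_sex_post"], df[["ever_sex_base"]])
df["ever_sex_9mo_yf"] = yes_forward(df["ever_sex_9mo"],
                                    df[["ever_sex_base", "ever_sex_post"]])

# Youth with no earlier "yes" stay missing, so only the handful of
# carried-forward cases described above are added to each analytic sample.
print(df[["ever_sex_post_yf", "ever_sex_9mo_yf"]].notna().sum())
```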
The "Yes Forward" analysis used baseline values for "ever had sex," age, gender, race, unit size, and unit state. Because the immediate post-test and 9-month post-test surveys were each collected across several months, varying by club (unit of assignment), baseline values for age are likely not appropriate as controls in the model. Therefore, a second sensitivity analysis imputed age at the 9-month post-test for the five youths who did not complete a survey for that wave but whose previous "yes" response to "ever had sex" warranted their inclusion in the analytic sample for the primary research question. Age was imputed using the average difference in age from baseline to immediate post-test and from baseline or immediate post-test to 9-month post-test, by unit of assignment. Results of this analysis appear in Table 1 under the column marked "Yes/Age Forward." Again, this sensitivity analysis did not result in a change in the direction or significance of the impact estimate. Identical "Yes Forward" and "Yes/Age Forward" sensitivity analyses were repeated to assess the secondary research question using the immediate post-test data. Results appear in Table 2; interpretation of the impact estimates did not differ from the benchmark analysis.

Table 1. Sensitivity of impact analyses using data from the 9-month post-test to address the primary research question
Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | Yes Forward difference | Yes Forward p-value | Yes/Age Forward difference | Yes/Age Forward p-value
Ever had sex | 0.01 | .901 | 0.02 | .880 | 0.02 | .761
Source: Nine-month surveys administered 8 to 10 months after intervention delivery.
Notes: Prevalence adjusted for baseline scores for "ever had sex," age, gender, race, unit size, and unit state.

Table 2. Sensitivity of impact analyses using data from the immediate post-test survey to address the secondary research question
Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | Yes Forward difference | Yes Forward p-value | Yes/Age Forward difference | Yes/Age Forward p-value
Ever had sex | 0.05 | .274 | 0.02 | .726 | 0.05 | .308
Source: Post-test surveys administered immediately after intervention delivery.
Notes: Prevalence adjusted for baseline scores for "ever had sex," age, gender, race, unit size, and unit state.