What is program evaluation? Ask social sector leaders this question and you will get a variety of answers. To some it means a quick survey at the completion of a training class, to others it involves qualitative assessment such as focus groups and interviews, and still others will describe a very rigorous randomized controlled trial. Each of these answers is correct; they are all forms of program evaluation. The key for program leaders is to understand what type of evaluation is needed for their organization’s goals.
At the national level, a lot of attention and money are given to organizations that can demonstrate program effectiveness through a randomized controlled trial (RCT), sometimes referred to as the “gold standard” in evaluation. If an organization wants its service-delivery model to become a recognized evidence-based approach, then it is very likely that the organization will need to invest in an RCT. The RCT is a research methodology in which subjects from a larger pool are randomly assigned either to receive or not receive an experimental product or service. The benefit of this approach is that it is the most rigorous method for determining whether a cause-and-effect relationship exists between a given service and the desired outcome.
The RCT provides the necessary structure to raise and sufficiently answer the question, “What would have happened to the same individuals at the same time had the program not been implemented?” Of course, it is impossible to answer this question completely because it implies an alternate reality. Therefore, randomized controlled trials work by creating a comparison group that approximates that alternate reality.
An RCT randomly divides the population into a group that receives a program, service, or intervention and one that does not. The group that receives no intervention is designated as the control group. The trial then compares outcomes between those two groups, and this comparison reveals the impact of the program. RCTs do not necessarily require a “no treatment” control: control groups can be composed of individuals randomized into different versions of the same program, into different programs trying to tackle the same problem, or into no intervention at all.
The important component of the RCT is that whether a participant is assigned to the control group or the intervention group is completely random. Using a randomization approach requires the program implementer to identify a target population. Following this step, program access is randomized within that population. For example, if we wanted to use an RCT to compare the impact of a new child welfare approach against the standard treatment program, the organization would first need to decide the criteria that define the target population for the treatment.
Next, as individuals were referred to the agency, program leaders would randomly assign those who meet the identified qualifications to one of two groups: the treatment group, which will receive the new approach, or the control group, which will continue with the standard treatment. It is important that the two groups be as similar as possible in their characteristics, as this helps reduce intervening factors, unrelated to the intervention itself, that could influence outcomes.
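The assignment step described above can be sketched in a few lines of code. This is only an illustrative sketch; the function name and participant IDs are hypothetical, and a real trial would typically rely on dedicated randomization software with stratification and audit trails.

```python
import random

def randomize_participants(participants, seed=None):
    """Randomly split eligible participants into a treatment group
    (new approach) and a control group (standard treatment).
    Hypothetical helper for illustration only."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                 # random order removes selection bias
    midpoint = len(pool) // 2
    return {
        "treatment": pool[:midpoint], # receives the new approach
        "control": pool[midpoint:],   # continues the standard treatment
    }

# 20 referred individuals who meet the qualifications
groups = randomize_participants([f"P{i:02d}" for i in range(1, 21)], seed=42)
print(len(groups["treatment"]), len(groups["control"]))  # 10 10
```

Because assignment is driven purely by chance, any remaining differences between the two groups are random rather than systematic, which is exactly what makes the later outcome comparison credible.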
Instead of random assignment at the individual level, an RCT design will sometimes use random assignment at a larger unit level. This approach is best used when the control and treatment groups may interact with each other. For example, if a school district wanted to implement a new anti-bullying program, it might not be appropriate to randomly assign students within a single school to the program. Instead, the district could randomly assign whole schools to participate. This approach is fairer to the children within each school, and it also prevents program effects from spilling over to students who were not exposed to the program. For example, if participating students began communicating better with their peers and the school saw less conflict as a result, even non-participating students might adopt this behavior through peer influence alone. Randomization at the school level prevents such spillover and ensures greater accuracy in RCT findings.
Programs that know they have more demand for services than they are able to meet are good candidates for RCTs. In these cases, the organization accepts applicants into a program, and participants are then randomly assigned either to the current class or to a waitlist for the next available session, one that will begin after the study period ends. Program outcomes are then compared between those currently enrolled in the program and those individuals still waiting to enroll. Because both groups expressed interest in the program or course, their members are likely to be more similar to one another than would be the case if program leaders had chosen an unrelated control group.
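The waitlist design above amounts to a simple lottery: fill the current class up to capacity at random, and let everyone else form the comparison group. A minimal sketch, assuming a hypothetical `waitlist_lottery` helper, not any particular enrollment system:

```python
import random

def waitlist_lottery(applicants, capacity, seed=None):
    """Randomly enroll applicants up to the program's capacity; the
    remainder join the waitlist and serve as the comparison group.
    Illustrative sketch only, not a production enrollment system."""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)  # every applicant has an equal chance of a seat
    return {
        "enrolled": pool[:capacity],  # current class
        "waitlist": pool[capacity:],  # comparison group, served later
    }

# 30 applicants competing for 20 seats in the current session
result = waitlist_lottery([f"applicant_{i}" for i in range(30)], capacity=20, seed=7)
print(len(result["enrolled"]), len(result["waitlist"]))  # 20 10
```

A design choice worth noting: because waitlisted applicants eventually receive the program, this approach sidesteps some of the ethical concerns about permanently withholding services.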
The current interest in RCTs is an encouraging sign of the growing momentum for linking nonprofit and government funding to proven results and investing in what works. Although RCTs are considered the gold standard for determining causation, they should not be viewed as the only option organizations have for understanding the effectiveness of their program offerings.
For a variety of reasons, RCTs are not the right approach for every program. A randomized trial requires the sustained commitment of an organization, including a financial investment and staff or outside consultants who can ensure the trial is conducted correctly. Because funding is one of the main challenges nonprofit organizations face, it is unlikely that the majority of nonprofits are ready to conduct RCTs for their programs.
In addition, to obtain statistically significant results, programs need to be large enough to produce sample sizes that support significance testing. There are also often ethical concerns about denying individuals access to a treatment program. Given these concerns, if an RCT is not feasible for an organization, elements of the RCT design or other similar activities can still be implemented to obtain stronger evidence of impact.
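To give a sense of the scale involved, a textbook back-of-envelope calculation shows roughly how many participants each group needs to detect an effect of a given size. This uses the standard normal-approximation formula for a two-sided test at alpha = 0.05 with 80% power; a real study should use dedicated power-analysis software rather than this sketch.

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate participants needed per group to detect a
    standardized effect size (Cohen's d) with a two-sided test at
    alpha = 0.05 and 80% power. Textbook normal-approximation
    formula; for illustration only."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # 63 per group for a medium effect
```

A medium effect (d = 0.5) already calls for roughly 63 participants per group, and smaller effects require far more, which is why small programs often cannot field a trial large enough for significance testing.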
If you are looking to evaluate your programs and are wondering what design is best for your unique situation, Measurement Resources is here to help! We can assess your evaluation goals and program situation to help you design the evaluation that will lead to the greatest impact. Our favorite part is celebrating our clients’ success and their increased impact on the world! We’d love to help you make data-driven decisions with confidence. Contact us today for your free 20-minute strategy session.
Want more information on how to increase funding, morale, positive press, and organizational impact? Join the Achieving Excellence Community and receive our free eBook, Ten Tips to Open the Door to More Grants (and Other Funding): Overcoming Common Mistakes in Outcomes Measurement.
Sheri Chaney Jones
Measurement Resources Company