Build Evidence Through Evaluations: Choosing Your Approach

Evaluations come in a variety of types, or methodologies, across a spectrum of complexity. A workforce agency should choose an evaluation approach that will help it answer the questions of greatest interest to the agency.

Timing is an important consideration for all evaluation types. If your intervention follows a specific set of steps each time, starts and ends on specified dates, or requires a certain critical mass to meet its objective, it can be important to evaluate the intervention once these conditions have been met. If the service or intervention being evaluated is ongoing in nature (e.g., responding to customer queries) or provides differentiated services based on individual need, it can make sense to take a representative sample, both in terms of time and of participants.

It's important to note that the world of impact evaluation is continuously evolving. Workforce agencies should select the tool that best fits their needs, operational realities, available data sets and budgets. Over the course of a multi-year intervention, new evaluation approaches may become available, or the additional data collected may make quantitative approaches more accessible. The most important consideration is to start incorporating evaluation into the procurement process as soon as possible. Agencies can add new approaches, or additional requirements, as they build evaluation capacity, both internally and among their service providers, and as data become available. All of the evaluation approaches described here can provide valuable new insights into the work and shed light on opportunities for improvement.

  • Realist Evaluations - Test and refine the program theory before the program is launched, or early in its delivery, based on initial data collection, needs assessment or process evaluation. These evaluations generally examine the met and unmet needs of the target population, the prioritization of the intervention and the specific strategies that can be applied, and often focus on determining whether the approach is fit for purpose.

  • Theory of Change (logic model) - Outlines a perspective or theory of how the project should work once operationalized. It describes a set of planned inputs and activities that should create a specific series of outputs, outcomes and impacts.

Such evaluations are often performed at early stages, while the work is still in progress. Rapid-Cycle Evaluations (RCEs), for example, allow an agency to test tweaks to programs and services, offering data to iterate and improve programs more quickly instead of letting them run to their conclusion before making changes. An agency might use an RCE to measure the impact of new outreach strategies on participant enrollment and then apply the findings to ongoing recruitment efforts. This two-pager from Mathematica provides more information on RCEs, including the types of questions they can help an agency answer and examples in workforce settings.

Qualitative data from surveys, interviews or focus groups can help an agency understand how participants experience programs, and can offer valuable insights alone or in conjunction with quantitative data. For example, qualitative approaches can help shed light on the specific elements of a program that participants and/or employers found most helpful.

  • Interviews provide an opportunity for one-on-one interaction and lend themselves to open-ended questions that can draw out details of the individual’s experience or perception. Their personal nature can help make individuals feel more at ease and enable an interviewer to address considerations such as English language fluency and cultural context.

  • Focus groups are small group conversations around a specific topic, intended to create understanding, gather insights and foster connection. They generally focus on dialogue rather than on specific data points. Participants can benefit from hearing from one another, though care should be taken to ensure that all participants have space to contribute ideas and that the focus group leader establishes trust and sets expectations up front for how the information will be used.

  • Surveys are useful for capturing information from a larger group of individuals and can also capture the group's demographic makeup. Common types of surveys include (a) opinion and satisfaction surveys, which measure views, attitudes and perceptions; (b) culture surveys, which measure whether the point of view of employees or participants aligns with that of the program, the organization or its departments; and (c) engagement surveys, which measure commitment, motivation, sense of purpose and passion for the experience or work.

  • Case studies, often assembled from a series of interviews, focus groups and additional research, are another mechanism that can provide a more in-depth qualitative analysis of the intervention.

Urban Institute describes different approaches for evaluating workforce programs as part of its Local Workforce System Guide. Chicago Beyond’s Why Am I Always Being Studied? guidebook includes a chart (Page 37) showing different types of evaluation approaches and considerations for deciding which approach is right for an agency.

  • In an experimental evaluation, one or more randomly selected groups receive an intervention or program and a randomly selected control group does not. This approach, known as a randomized controlled trial (RCT), is a rigorous form of quantitative, causal evaluation that allows an agency to attribute outcomes to a particular intervention. RCTs can provide valuable information on how well a program is serving participants and, when they show a program to be effective, can help an agency advocate for more resources. For example, randomly assigning individuals who are experiencing homelessness to workforce services and comparing this group to those who do not receive services could shed light on the impact of workforce services on an individual's ability to obtain or retain stable housing (a minimal sketch of this kind of comparison appears after this list). However, RCTs generally require high levels of financial investment and staff capacity, and the randomization needed for an RCT is not always possible due to funding, programmatic or policy considerations.

  • Quasi- or non-experimental evaluation designs include a broad range of quantitative methods and are often used when randomization is not logistically feasible or ethical. These studies aim to establish a cause-and-effect relationship between a program and an outcome without the use of randomization; statistical or qualitative methods are used instead to account for potential differences between the group that benefited from the intervention and a similar group that did not. For example, because randomly assigning WIOA-enrolled participants to a control group that will not receive services might be politically or ethically problematic, other methods can be used to compare these participants with TANF participants who have similar characteristics but are not receiving WIOA services. Types of quasi-experimental designs include:

    • Pre- and post-evaluation design is a simple technique that involves assessing the current state of the individual or process, applying the intervention, and then assessing the individual or process again to capture any changes. This approach can provide useful insights, but care should be taken, as it does not provide the basis needed to determine cause and effect. For example, assessing an individual's understanding of what makes a quality job, delivering a training on job quality elements and then assessing their understanding again after the training would constitute a pre- and post-evaluation.

    • Interrupted time series design applies a set of repeated measurements before and after implementation of the intervention as a means of helping to rule out other explanations for the outcome. It can be applied to a single group or to multiple groups, and taking multiple observations can improve the reliability of the results. Similarly, if an intervention includes multiple facets (education, transportation assistance and childcare, for example), implementing one facet, allowing for a passage of time, and then adding in the next can improve the evaluator's ability to understand the impact of each facet of the intervention (see the segmented regression sketch after this list).

  • Economic evaluation generally includes cost-benefit, cost-utility, cost-effectiveness, cost-minimization and/or cost-consequence calculations to determine how to deploy resources to maximize desired impacts. For example, weighing the cost of a training against the financial benefit to the individual upon entering their next job is a Return on Investment (ROI) calculation, one form of economic evaluation (see the ROI sketch after this list).
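
To make the RCT logic concrete, below is a minimal sketch in Python using simulated, entirely hypothetical outcome data; a real evaluation would use observed program records and more formal statistical tooling. Because assignment is random, the difference in mean outcomes between the two groups estimates the intervention's effect.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Hypothetical outcomes: 1 = obtained stable housing, 0 = did not.
    # Simulated here for illustration only.
    treatment = rng.binomial(1, 0.45, size=200)  # randomly assigned to services
    control = rng.binomial(1, 0.30, size=200)    # randomly assigned to none

    # With random assignment, the difference in mean outcomes is an
    # unbiased estimate of the intervention's effect.
    effect = treatment.mean() - control.mean()

    # Standard error of the difference, for a rough 95% interval.
    se = (treatment.var(ddof=1) / len(treatment)
          + control.var(ddof=1) / len(control)) ** 0.5

    print(f"Estimated effect: {effect:.3f} +/- {1.96 * se:.3f}")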
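
For the interrupted time series design, a common analysis is a segmented regression that estimates both the immediate level change and the change in trend once the intervention begins. The sketch below uses simulated monthly placement counts; all names and numbers are illustrative assumptions, not figures from any actual program.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Simulated monthly job placements: 12 months before the
    # intervention begins and 12 months after. Illustrative only.
    months = np.arange(24)
    before = 50 + 0.5 * np.arange(12) + rng.normal(0, 2, 12)
    after = 62 + 1.5 * np.arange(12) + rng.normal(0, 2, 12)
    y = np.concatenate([before, after])

    started = (months >= 12).astype(float)          # 1 once intervention begins
    since = np.where(months >= 12, months - 12, 0)  # months since it began

    # Segmented regression: y = b0 + b1*month + b2*started + b3*since,
    # where b2 is the immediate level change and b3 the change in trend.
    X = np.column_stack([np.ones(24), months, started, since])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    print(f"Level change at intervention: {b[2]:.1f} placements")
    print(f"Change in monthly trend:      {b[3]:.1f} placements/month")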
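
Finally, a Return on Investment calculation can be as simple as the arithmetic below. The cost and earnings figures are made-up placeholders; an actual analysis would draw on program cost records and post-program earnings data.

    # Hypothetical placeholder figures, not drawn from any real program.
    cost_per_participant = 4_000   # cost of delivering the training
    annual_earnings_gain = 6_500   # participant's earnings gain in the next job

    # Simple first-year ROI: net benefit relative to cost.
    roi = (annual_earnings_gain - cost_per_participant) / cost_per_participant
    print(f"First-year ROI: {roi:.1%}")   # -> First-year ROI: 62.5%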

Mixed methods combine two or more of the other approaches listed above. A mixed-methods approach can provide a more robust understanding of the intervention's results, and its component methods can be performed in tandem or at different points during the process.

Resources: