
ATD Blog

Evaluation Hacks

August 8, 2023

Measuring the success of our training efforts is critical if we are to be seen as a value-added function in our organizations. With a series of proven successes on measures that matter to senior leaders, we can more easily secure resources and endorsements for future initiatives. Yet measuring the success of our programs continues to vex even the most seasoned instructional designers: What data might we have access to? What measurable results might we actually be able to produce? How do we determine whether any results we see were the result of training and not some other environmental factor?

Ideally, a tangible, quantitative measure of our programs’ success can be collected before and after the training intervention, both for those who participated and for a control group of those who did not. Additionally, whatever we are measuring should be evidence-based; that is, it should have a strong, proven correlation to successful on-the-job outcomes. For instance, after a leadership development program, we could measure how many leaders were producing organizational superstars by tracking how many of their direct reports were promoted. But if the number of direct reports promoted isn’t correlated with increased organizational success, that isn’t a meaningful measure. If, instead, you asked each leader who participated in the program to have their team members complete an evidence-based survey, such as Gallup’s Q12, both pre- and post-training, you could rest assured that its results have been positively correlated with all kinds of enhanced performance measures, from increased morale to increased productivity.

When the ideal is impossible, there are some workarounds and considerations that can help. They clearly aren’t all scientific, but they can be a starting point when empirical performance measures are difficult to obtain. Here are some evaluation hacks:

  • Utilize a random sample – When it’s too costly or time-consuming to measure the results of every participant in your program, select a random sample. This also makes it easier to compare the results of those who completed the program with those who didn’t. For example, for a program that 100 people attended, select a dozen who went through the program and a dozen who didn’t and compare their results (see the sketch after this list).

  • Invent do-it-yourself (DIY) measures – When you can’t find an existing metric that aligns with your program outcomes and objectives, create your own. Clearly, that won’t be an evidence-based solution, and it can take much longer to analyze results than an existing measure might, but it will be truly aligned to your curriculum. I’ve used this when an organization insisted on a nebulous outcome, like “employees will have increased confidence in this tool.” In that case, we created a confidence self-assessment survey and administered it before, during, immediately after, and three and six months after the training. We also asked our IT department to start tracking how many people were using a certain function of the tool, something that hadn’t been tracked before.

  • Randomize the measures used – When your team isn’t 100 percent sure that a particular metric will tell the story you want it to tell, or when you can’t find one measure that aligns with all your learning objectives, consider using more than one. In one instance, my team used a tested, industry-standard measure for some of the program participants and a homegrown measure for others; that way, we would have some evidence-based data for the organization and, for our own use, some data that more closely aligned with our program objectives. In another, every participant was measured using a combination of organizational metrics and 360 results, as neither told the whole story on its own.

  • Have participants identify outcomes and related metrics – After sharing course outcomes and objectives, ask participants to define their own desired outcome and identify the measure they can use to prove they’ve achieved it. Then have them submit that measure before and after the program. The main advantage is that the program will measure something relevant and meaningful to them. The downside is that they may identify a measure that isn’t important to the organization or isn’t valid. I’ve had supervisors decide that they wanted more people to speak up in their meetings, which they would measure by asking a team member to keep a tally of how many people spoke up in each meeting. While they reported significant increases, that wasn’t a measure senior leaders were interested in hearing about when we reported on the success of the overall program.
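
To make the random-sample hack concrete, here is a minimal sketch in Python. It assumes a hypothetical export named program_results.csv with an attended flag and a post_score column; the file, column names, and group sizes are illustrative, not part of the article.

```python
import csv
import random
import statistics

# Hypothetical export: one row per employee, with an "attended" flag and a
# post-program metric (for example, a survey score). Illustrative only.
with open("program_results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

attended = [r for r in rows if r["attended"] == "yes"]
not_attended = [r for r in rows if r["attended"] == "no"]

# Draw a dozen from each group, as in the example above.
random.seed(42)  # fixed seed so the sample can be reproduced
sample_attended = random.sample(attended, 12)
sample_control = random.sample(not_attended, 12)

mean_attended = statistics.mean(float(r["post_score"]) for r in sample_attended)
mean_control = statistics.mean(float(r["post_score"]) for r in sample_control)

print(f"Participants (n=12):     {mean_attended:.2f}")
print(f"Non-participants (n=12): {mean_control:.2f}")
```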

As you are choosing a way to measure and evaluate your programs, whether from the list above, or more broadly, keep these considerations in mind:

  • Tie measures of training to desired organizational outcomes – This is the most important aspect of a results-based (Level 4) evaluation: Does the training move the needle on a business metric that matters to your sponsor and organization? If your sponsor doesn’t have a clear idea of what outcomes they seek, you may need to help them clarify those outcomes from the start.

  • Will the selected measure tell a good story? – Be realistic about what performance or behavioral changes you can expect as a result of your training. You may aspire to change organizational culture, for example, but will a 20-hour training for a select group of leaders achieve that?

  • Measure the difference, not the absolute – Star performers who come to your programs may not see huge changes in performance after your training because they were already at the top of their game. Weaker performers who come to your trainings may indeed improve yet still score below a certain threshold. What matters in these instances is the improvement, not the actual score: for example, report that “90 percent of participants increased their score by more than 2 points” rather than that “50 percent of participants scored a 5 after training” (a small sketch of this calculation follows this list).

  • Consider environmental influencers – What else, other than training, might be influencing performance results and outcomes? You can never completely isolate a specific result to the training someone received. One way to factor in the broader context is to do a trend line analysis as part of your overall evaluation. This analysis tracks changes in a particular metric over time, noting when the training occurred and when other external influences happened: a pandemic, a hiring freeze, a merger, or a new organizational structure, for example.

  • Factor in quality – When your measure is a simple count that doesn’t account for how well the things being counted were done (how many stay interviews were conducted, for instance), ensure your quantitative measures include quality factors. Consider how many items were produced without any errors, not just how many items were produced. And rarely, if ever, use how many people attended the training as an evaluation measure. That figure matters to our L&D teams, not to our senior leaders; just because we got people to attend does not mean performance or business results improved. The only time attendance works as a measure is when it is combined with organizational compliance: ABC Company is in 100 percent compliance with governmental training requirements, or ABC Company decreased its fines for failing to train a required percentage of the organization by 30 percent.

  • Consider your capacity to analyze the results – If, for example, a pre- and post-program 360 is how you will measure the success of your leadership program, do you have the time and resources to administer the 360 twice? If your measure is how the direct reports of a particular leader responded to an employee engagement survey before and after the program, do you have the ability and time to break out the data that way?
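
To illustrate “measure the difference, not the absolute,” here is a minimal sketch in Python with made-up pre- and post-training scores; the names, scores, and thresholds are hypothetical and only show how the two framings can tell different stories about the same data.

```python
# Made-up pre/post scores on a 1-7 scale; illustrative only.
scores = {
    "pre":  {"Ana": 1, "Ben": 2, "Chloe": 3, "Dev": 1, "Eli": 4},
    "post": {"Ana": 4, "Ben": 5, "Chloe": 6, "Dev": 3, "Eli": 7},
}

# Per-person improvement (post minus pre).
improvements = {
    name: scores["post"][name] - scores["pre"][name]
    for name in scores["pre"]
}

n = len(improvements)

# Absolute framing: how many scored a 5 or higher after training?
hit_threshold = sum(1 for s in scores["post"].values() if s >= 5)

# Difference framing: how many improved by more than 2 points?
improved = sum(1 for d in improvements.values() if d > 2)

print(f"Scored 5 or higher after training: {hit_threshold / n:.0%}")
print(f"Improved by more than 2 points:    {improved / n:.0%}")
```

With this sample data, the absolute framing reports 60 percent while the difference framing reports 80 percent, which is why the improvement often tells the better story for weaker performers.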

Learning and development functions are increasingly being asked to show the results of their efforts. The benefits of reporting on a successful program are many and well worth the effort. With a few hacks, performance and results-based evaluations are within your reach.
