This page contains a list of resources for conducting training program evaluations. Feel free to use them – they are a great way to make your courses shine.

What is training program evaluation?

It’s a way of making sure your courses are the best they can be. An evaluation assesses the course against the organisation’s priorities and analyses its past performance. It is a thorough, methodical investigation of the training program from all angles.

This is a worthwhile exercise. It identifies how valuable your courses are and how they can be improved. It also analyses alternatives to the training, which leads to two possibilities: either you prove that your course is the best option for people, or you identify a better option, freeing you from the need to put resources into the course.

Program evaluation gathers evidence of your course’s effectiveness. Whatever the outcome, your next step becomes easier with this evidence in hand.

Conducting a training program evaluation

Program evaluations are a five-step process:

  1. Planning the strategy,
  2. Designing the tools,
  3. Gathering the data,
  4. Analysing the information,
  5. Sharing the results.

The rest of this post is a description of these steps. You can find the resources at the bottom of the page.

Planning the strategy

The first step is to map out your approach. The evaluation strategy template has three parts.

First is a background and overview section. This is where you write the purpose of the course, the purpose of the evaluation and the context of both. If a change prompted the evaluation, mention it here. If there is a problem facing the training program – for example, falling popularity or rising costs – also mention it here.

The second part of the evaluation strategy is a logic map. A logic map captures the key elements of the course and how it affects the community. It lists:

  • Drivers – what motivates learners to take the course and what problems the course addresses;
  • Activities – what the learners do on the course;
  • Resources – what the learners need to do the Activities;
  • Outcomes – what the course should achieve under ideal circumstances. Note that this is aspirational, not necessarily what the course actually achieves.
    • Short-term outcomes – what the learners can do at the end of the course (this corresponds to the Learning level of the Kirkpatrick model);
    • Medium-term outcomes – what the learners do differently following the course (this corresponds to the Behaviour level of the Kirkpatrick model);
    • Long-term outcomes – how the course changes the capabilities of the community (this corresponds to the Results level of the Kirkpatrick model);
    • Ultimate outcomes – how the changes in capability address the problems in the Drivers section. Think of this as the course’s ‘Why’ – what is the transformation your learning program aspires to achieve?
  • Assumptions – the things that must be true for the course to work. Writing down your assumptions helps identify where the program might be failing and why. Examples might be that the community will embrace your course and that training is the best way to resolve the issue.

Note that the logic map identifies things that the Learning and Development (L&D) Team can directly control and those it can’t. For most situations, the L&D Team can control the short- and some medium-term outcomes, but not the long-term outcomes.
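
If you like to keep working documents in plain text, the logic map also fits naturally into a simple data structure. Below is a minimal sketch in Python; the course, entries and field names are hypothetical and purely illustrative.

```python
# A hypothetical logic map captured as plain data, mirroring the structure above.
# Every entry here is illustrative – replace it with your own course details.
logic_map = {
    "drivers": [
        "New reporting software rolled out across the department",
        "Monthly reports are arriving late and full of errors",
    ],
    "activities": ["Hands-on exercises building a standard monthly report"],
    "resources": ["Training room, one facilitator, a sandbox copy of the software"],
    "outcomes": {
        "short_term": ["Learners can build a standard report unaided"],       # Learning
        "medium_term": ["Learners produce their own reports on the job"],     # Behaviour
        "long_term": ["Departmental reporting is faster and more accurate"],  # Results
        "ultimate": ["Decisions are made on timely, reliable data"],
    },
    "assumptions": [
        "Supervisors will release staff to attend",
        "Training, rather than better documentation, is the right fix",
    ],
}
```

Laying it out this way makes it easy to check, at a glance, that every Driver is answered by an Ultimate outcome and that every Assumption is written down rather than left implied.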

The third part of the strategy is a list of questions that the evaluation will answer. I like to ask questions about the:

  • Program Goal – whether the course’s purpose is worthwhile;
  • Appropriateness – whether the outcomes of the course match the needs of the organisation;
  • Effectiveness – whether the course creates meaningful change;
  • Resources – what the course needs to succeed;
  • Value of Investment – whether the outcomes are worth the cost.

This document lays out the plan for the evaluation. It covers the purpose of the course, its role within the community and what the evaluation aims to achieve. The next step is to use it as a guide for developing the data collection tools.

Designing the tools

By now, you have created the evaluation strategy. The step after this one will be gathering the data to address the strategy’s requirements. Between these two steps comes designing the tools you need to gather that data.

The tools will be things that you are already familiar with – at some stage, you will have given or received formal feedback on something. These tools operate the same way. They might be a list of questions to ask a focus group, an online questionnaire, a script to follow during an interview, or anything else that suits your circumstances.

There are two things to ask when designing a tool for gathering data: what information you need, and who you are gathering it from. The first part is easy – the information you need is defined by the questions in the strategy document, and you’ll usually copy them straight from its third section (see above). The second part needs some thought.

What do you know about the people you’ll be gathering data from? Are they co-located or geographically dispersed? How many of them are there? Do they like technology and paperwork, or do they conduct all business face-to-face? Are they learners, teachers, supervisors, subject matter experts, or stakeholders from outside your organisation?

Each type of data collection tool has its own strengths and weaknesses. For example, an online questionnaire is simple to disseminate, scales well and provides quantitative data, but it can be less effective at gathering qualitative data, and many people dislike filling in forms. Interviews and focus groups have the human touch, allowing you to engage with people directly, though collating the data can be difficult. Chances are, you’ll use a combination of approaches.
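
As an illustration, here is a minimal sketch of an online questionnaire defined as data in Python, mixing rating-scale items (quantitative) with a free-text item (qualitative). The wording, identifiers and field names are hypothetical; your real questions come from the third section of the strategy document.

```python
# A hypothetical questionnaire mixing quantitative and qualitative items.
questionnaire = [
    {"id": "q1", "type": "rating_1_to_5",
     "text": "The course matched the needs of my role."},           # Appropriateness
    {"id": "q2", "type": "rating_1_to_5",
     "text": "I can apply what I learned in my day-to-day work."},  # Effectiveness
    {"id": "q3", "type": "free_text",
     "text": "What one change would most improve the course?"},     # open-ended
]

# Print the questions for a quick review before loading them into a survey tool.
for question in questionnaire:
    print(f"{question['id']} ({question['type']}): {question['text']}")
```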

Gathering the data

This is probably the simplest step. Once you have the data collection tools, you use them to collect the data. The most common problem people encounter here is managing the relationships: you need the relevant stakeholders to offer their input in a timely and accurate way. This is why it helps to make the process as easy on them as possible – you want to head off any hesitation before it arises.

Analysing the information

You probably had an idea about what sorts of feedback people would give. The data from the tools usually aligns with your intuition, but be alert for surprises. Often there’ll be one or two points (hopefully minor ones) that you never expected. These can be valuable, though sometimes they are just isolated remarks.

Your data should be straightforward and easy to understand – after all, you designed the surveys and questionnaires. For quantitative data, you can identify trends and calculate percentages. For qualitative data, you can group the comments into common themes. Both approaches give you an overarching view of how the community views your training program. It will likely be a nuanced picture, often filled with contradictions. It’s important to note these and make a judgement call.
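
As a rough illustration, here is a minimal sketch of that analysis in Python, assuming the questionnaire responses have been exported to a CSV file with a 1-to-5 rating column and a free-text comment column. The file name, column names and theme keywords are all hypothetical.

```python
import csv
from collections import Counter

# Load responses exported from the questionnaire tool (hypothetical file and columns).
with open("learner_feedback.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Quantitative: the share of learners rating the course 4 or 5 out of 5.
ratings = [int(r["q1_rating"]) for r in rows if r["q1_rating"].strip()]
favourable = sum(1 for rating in ratings if rating >= 4)
print(f"Favourable ratings: {favourable / len(ratings):.0%} of {len(ratings)} responses")

# Qualitative: a rough first pass at grouping free-text comments into themes by keyword.
themes = {
    "pace": ["too fast", "too slow", "rushed"],
    "relevance": ["relevant", "useful", "my job"],
}
theme_counts = Counter()
for r in rows:
    comment = r["comment"].lower()
    for theme, keywords in themes.items():
        if any(keyword in comment for keyword in keywords):
            theme_counts[theme] += 1
print(theme_counts.most_common())
```

Keyword matching only gets you so far – treat it as a starting point and read the comments themselves before settling on the themes.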

Sharing the results

These results will need to go to the relevant decision-makers in the appropriate format. For some organisations, that will be a casual chat with the boss; for others, a decision brief will be pushed up the chain. You might share the report with stakeholders, or not. You know your circumstances better than anyone – do what people normally do to get a point across.

What is your point, exactly? Your argument will be a recommendation, backed by the data. It might be that the course needs more resources, an overhaul, or replacement by something better. It will answer the question: what does the community need to do to achieve the Ultimate Outcome (see the evaluation strategy)?

Both kinds of data back up this recommendation. The qualitative data tells the story; the quantitative data provides the hard evidence that supports it. You will need both to get your message across.

Aftermath

What happens next is up to you. If the evaluation finds that no change is needed, it should provide the cover for you to keep the course alive. If some sort of change is needed, then it’s up to you to pursue it. Giving the decision-makers your report is only the start – you need to ensure that they follow up on it.

Training program evaluations are powerful things. They put a course under the microscope, examining each component and its place in the community. The insights they provide are unique and meaningful, giving you the evidence to enhance a course, scrap it, or anything in between. Neglecting this process means your programs become less relevant, drifting further from their purpose (if they ever had one) with each iteration. Like putting your car in for a check-up, it’s not something that should be delayed.

Templates

Evaluation Strategy

This document guides the first step in the evaluation – Planning the Strategy. This strategy will save you a lot of effort in the long run by ensuring you follow a strong plan.

Learner Feedback

To run a great course, you need input from the learners. This simple yet robust questionnaire gathers data from the learners, without the fuss.

Feedback Summary

This Excel spreadsheet calculates and summarises the learner responses to the questionnaire. It is easily adaptable to any format of questionnaire, providing instant statistical analysis of the responses.

Interview

Getting feedback face-to-face is a great way to gain unique insights into your course. Unfortunately, this is rarely done in a systematic manner. This template removes the pain from stakeholder interviews.

Supervisor Feedback

Gauge the training program’s impact with this questionnaire for supervisors of learners. The supervisors are the ones who release staff for training – they often have a great perspective on the value of courses.

Evaluation Report

This is where the action is. Capture the results and recommendations in this streamlined, simple reporting format. Get your point across and ensure your voice is heard.

TPE Tools

Download all the TPE Tools in this convenient .zip file.