Training evaluation

Monitoring and evaluation should be a key component of any work with social outcomes, so that you can determine – and demonstrate – what outcome your work has actually had. You may see the positive impact of your work yourself, but having clear evidence from an evaluation process enables you to make the case to a funder or partner for supporting the work you do. Remember: a funder wants to fund social outcomes, and good evaluation processes will help you to show an external body the beneficial outcomes of your work.

What to evaluate and monitor

Evaluation is about measuring outcomes.

  • An outcome is a change (in a person, a group or society) that you have brought about.
  • Outputs, often confused with outcomes, are the activities, products, services, etc. that you deliver (e.g. a course). Outputs lead to outcomes.
  • An outcome is measured by outcome targets. These can be ‘hard outcome’ targets and ‘soft outcome’ targets (see below).

 

How to evaluate and monitor

There are, of course, many different ways that projects can be evaluated. Sometimes funders will dictate the kind of information that you’ll need to collect. Often this will be numerical and involve keeping accurate registers and data on individual participants. Or it may involve conducting interviews, or encouraging participants to keep learning journals or video diaries to capture their experiences as they progress through your training programme. There are plenty of resources out there to help you evaluate, but it is worth considering setting up your own monitoring and evaluation processes in order to support your station’s development as a training provider. You should also think, when designing a project or programme, about what specifically you need to measure and how you should go about it.

 

Why to evaluate and monitor

Here is Dave Chambers of Preston FM on why we need to monitor and evaluate.

 

Evaluation is also a means by which you can assess and improve the quality of your training. Shane Carey of Reprezent makes the case for incorporating student feedback into the continuous improvement of your training – and emphasises that this shouldn’t be considered an onerous task: “It has to be dynamic – it’s not paperwork for the sake of paperwork”.

Monitoring and evaluation can:

  • Provide constant feedback on the extent to which the training (or wider project) is achieving its goals, and flag the need for any adjustments or improvements
  • Identify potential problems at an early stage and propose possible solutions
  • Monitor the accessibility of the training to the community or target group/population
  • Monitor the efficiency of the training and suggest improvements
  • Evaluate the extent to which the training (or wider project) is able to achieve its general objectives
  • Provide guidance for planning future projects and training
  • Improve project design by reviewing the soundness of project objectives
  • Incorporate the views of stakeholders, to enhance their participation in, and ownership of, the training offer

 

 

Hard and soft outcomes

In monitoring and evaluating projects and training, we often talk about ‘hard outcomes’ and ‘soft outcomes’. An outcome is simply something that you have achieved.

  • Hard outcomes can be directly determined and quantified – such as the number of participants who got jobs within two months of the end of the project.
  • Soft outcomes are often more subjective and harder to measure – such as a person’s level of confidence.

 

Hard outcomes

Hard outcomes have targets that are clear and numerical, and may include:

  • Number of participants achieving a certificate;
  • Number of participants who went on to a positive destination (e.g. further training or education); and
  • Number of participants who got a job within a certain time period (e.g. 2 months after the course).

 

Each outcome will have outputs and outcome targets.

Outputs: the detailed activities we can actually provide. An output is often expressed as a number.

Outcome targets: these define what we choose to measure and monitor.

  • They describe what we hope to achieve in terms of activity.
  • We should monitor and evaluate performance against these targets.
  • They should be useful and relevant, and can help us plan for the future.
  • They can be set for many things, including quantity, take-up, accessibility, timescale, user satisfaction and cost.

 

Below is an example of an approach to measuring hard outcomes, and how this approach relates to the aims and objectives of the project.

Outcomes
  • To improve media, IT and employability skills

Outputs (activities)
  • Radio station based learning delivered by tutor
  • IT/digital media training delivered by tutor
  • Employability training/qualification delivered

Outcome targets
  • 30 hours of employability training delivered
  • 12 participants started the course
  • 8 participants completed the course
  • 8 participants achieved the qualification(s)
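If you keep a simple participant register, figures like these are straightforward to produce. Below is a minimal sketch in Python; the register structure and all the data are invented for illustration, not taken from any real project.

```python
# Illustrative sketch with invented data: derive hard outcome
# figures from a simple participant register.

register = [
    {"name": "P01", "started": True, "completed": True,  "qualified": True},
    {"name": "P02", "started": True, "completed": True,  "qualified": False},
    {"name": "P03", "started": True, "completed": False, "qualified": False},
]

# Count each hard outcome target across the register.
started = sum(p["started"] for p in register)
completed = sum(p["completed"] for p in register)
qualified = sum(p["qualified"] for p in register)

print(f"Started the course:        {started}")
print(f"Completed the course:      {completed} ({completed / started:.0%} of starters)")
print(f"Achieved qualification(s): {qualified}")
```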

 

 

Soft outcomes

Soft outcomes may include achievements relating to:

  • interpersonal skills, e.g. social skills and the ability to work in a team;
  • communication skills, such as speaking and listening;
  • organisational skills, such as personal organisation and the ability to plan and prioritise;
  • analytical skills, such as exercising judgement, managing time and solving problems; and
  • personal skills, e.g. motivation, confidence, reliability and health awareness.

 

It is crucial to establish a baseline of soft skills, aptitudes and attitudes from which individual progress can be measured. This can normally be done during the initial assessment phase when learners’ needs are established, personal barriers are identified and personal development targets are set.

A potentially useful approach to measuring an individual’s skills, or the impact of your training, is to measure ‘distance travelled’ – i.e. the change in a person’s outlook, capacity or skill over a period of time (usually the duration of the training/project). One such tool is the Outcome Star, developed by Triangle Consulting – see their website for details of the Work Star, which was designed with employability skills in mind: http://www.outcomesstar.org.uk/work/

You can also download the Speaking and Listening Star, which was used as part of the Connect:Transmit project, and the Evaluation Handbook – details of the project are below and on the Connect:Transmit website: http://www.connecttransmit.org.uk/

Using these types of evaluation tools is one way of generating quantitative data (which is likely to be useful for certain funders, partners and/or audiences). It is much better to use them on an individual basis with learners if at all possible: if you simply hand out a questionnaire in the first session of a course, people are likely to answer the questions in a way that makes them look better, which risks invalidating your data later on.
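As a concrete (and entirely invented) illustration of what ‘distance travelled’ data can look like, the sketch below averages the change in star-style ratings between the start and end of a course. The learner names, categories and 1–10 scale are assumptions made for the example; this is not the official Outcome Star scoring method.

```python
# Illustrative sketch with invented data: average 'distance travelled'
# per category, from star-style ratings taken at the start and end
# of a course (a 1-10 scale is assumed here).

from statistics import mean

baseline = {
    "speaking": {"Learner A": 4, "Learner B": 6, "Learner C": 3},
    "listening": {"Learner A": 5, "Learner B": 5, "Learner C": 4},
}
endpoint = {
    "speaking": {"Learner A": 7, "Learner B": 7, "Learner C": 6},
    "listening": {"Learner A": 7, "Learner B": 6, "Learner C": 7},
}

for category, start_scores in baseline.items():
    # Distance travelled = end-of-course rating minus baseline rating.
    changes = [endpoint[category][name] - score for name, score in start_scores.items()]
    print(f"{category}: average distance travelled = {mean(changes):+.1f}")
```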

Another approach is to use diaries and interviews to track people’s progress along a learning path. Diaries can be difficult to get working, as they may require people to overcome nervousness and scepticism; however, they can be extremely useful in capturing people’s ‘in the moment’ lived experience of a programme – e.g. their personal fears, challenges and triumphs. Interviews can capture this kind of data too, but at more of a distance: interviewees are usually talking about an experience they had in the past, so what they say can be less vivid. However, interviews are a very good way of focusing on particular points that you are interested in measuring but which the participant may not have thought about before, and they give the participant a fresh opportunity to reflect on their experience. The quality of an interview will largely depend on the skill of the interviewer – as you already know!

 

Below is a list of collection methods that could be used to measure soft outcomes, together with points to consider in using each method.

  • Individual action planning, personal action planning and goal setting: Individual action plans are normally drawn up during the initial assessment session and then reviewed at regular intervals to gauge whether goals have been met. An action plan can include personal objectives, priorities and reflections on progress.

  • Informal reviews between trainers/assessors and clients to record soft outcomes: Improvements over time can be noted and recorded during regular formal or informal reviews. This system relies on sound judgement from the client and/or project worker, and will not necessarily provide an absolute or formal measure of distance travelled. Baseline information is particularly useful here, as data can be compared over time.

  • Tool-based monitoring of skills/competences: A formalised tool can be used or developed to assess the learner’s skill at different points in time (usually the beginning and end of the training). Specific skills are rated on a numerical scale; this is best done as a one-to-one conversation between assessor and learner. Having a tool such as an Outcome Star helps by removing the reliance on the assessor’s judgement. A recognised challenge is that learners often have inflated or unrealistic assessments of their skills at the beginning and a more realistic sense at the end, so the numerical data may not fully reflect their progress.

  • Daily diary or personal journal: Clients can be encouraged to write about progress towards soft outcomes. Issues of confidentiality should be considered.

  • In-depth reflection during or after the course: Beneficiaries could be asked to consider and review their progress as they come to the end of their training course, or of a particular element of the project (such as a work placement). This could be set as an assignment and included in a beneficiary’s portfolio of evidence of achievement. Alternatively, you could use a social media platform to encourage learners to reflect on a daily or weekly basis, or hold a focus group or interview individual members of the group.

  • Recorded observations of group or individual activities: It is important to have comprehensive documentation systems that allow anecdotal evidence of outcomes achieved and progress made to be recorded. This method requires a high level of observer skill, and there is a danger of observer bias, and also that the observer will influence the behaviour being observed.

  • Presentation of material in a portfolio: This could include evidence of tasks completed successfully, indicating achievement of outcomes or progress towards them. An evidence-based portfolio is a concrete output that could be presented to an employer.

  • Tests: Some projects use psychometric testing within the assessment process. This is generally a diagnostic procedure, but could be adapted to establish a baseline and measure distance travelled. Tests may be useful in establishing a person’s existing skill level; the test can then be repeated at a later stage to illustrate any progress made.

 

 

When you define the soft outcomes you want to measure, and depending on how you are measuring them, you may want or need to list indicators that relate to each soft outcome. Below is a sample list of soft outcome types and some associated indicators.

 

 

Types of ‘soft’ outcomes and examples of indicators:

Key work skills
  • The acquisition of key skills, e.g. team working, problem solving, numeracy skills, ICT and digital media skills
  • The acquisition of language and communication skills
  • Completion of work placements/tasks
  • Lower rates of sickness-related absence
  • Observing rules and behaviours

Attitudinal skills
  • Increased levels of motivation
  • Increased levels of confidence
  • Recognition of prior skills
  • Increased feelings of responsibility
  • Increased levels of self-esteem
  • Higher personal and career aspirations
  • Showing initiative

Personal skills
  • Improved personal appearance/presentability
  • Improved levels of attendance
  • Improved timekeeping
  • Improved personal hygiene
  • Greater levels of self-awareness
  • Better health and fitness
  • Greater levels of concentration and/or engagement
  • Willingness to learn new things

Practical skills
  • Ability to complete forms
  • Ability to write a CV
  • Improved ability to manage money
  • Improved awareness of rights and responsibilities

Interpersonal skills
  • Social skills such as positive social networks, community participation and cultural integration

 

 

Below is an example of an outcome (in this instance relating to employability), together with the outcome targets, the indicators and the evidence you would need to collect in order to measure a specific social outcome.

 

Outcome (the change we want to achieve)
  • To increase the work skills of long-term unemployed participants

Outcome targets
  • All (or a given percentage of) participants will have improved their work skills in at least two of these areas:
  1. ICT skills
  2. Communication skills
  3. Teamwork
  4. Planning
  5. Research

Indicators
  • Participants report that they have developed a range of skills for work.
  • Trainers report that participants have developed a range of skills for work.
  • Stakeholders evidence and report that participants have developed skills for work.

Evidence
  • Soft outcomes questionnaire
  • Focus groups or individual interviews/learning reviews with participants
  • Observations, photos and notes from trainer and participant logs and other accredited work
  • Group blog/Facebook page
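To show how an outcome target like “improved in at least two areas” could be checked against before/after ratings, here is a minimal sketch; all the names, skill areas, scores and the 1–5 scale are invented for illustration.

```python
# Illustrative sketch with invented data: check the target
# "participants improved their work skills in at least two areas"
# from before/after ratings (a 1-5 scale is assumed here).

before = {
    "Participant A": {"ICT": 2, "Communication": 3, "Teamwork": 3, "Planning": 2, "Research": 2},
    "Participant B": {"ICT": 3, "Communication": 2, "Teamwork": 4, "Planning": 3, "Research": 2},
}
after = {
    "Participant A": {"ICT": 4, "Communication": 4, "Teamwork": 3, "Planning": 3, "Research": 2},
    "Participant B": {"ICT": 3, "Communication": 3, "Teamwork": 4, "Planning": 3, "Research": 3},
}

# A participant meets the target if their rating rose in 2+ skill areas.
met_target = sum(
    sum(after[name][skill] > score for skill, score in start.items()) >= 2
    for name, start in before.items()
)

print(f"{met_target} of {len(before)} participants improved in at least two areas")
```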

 

 

 

Examples

It may be useful to see a real evaluation approach ‘in action’. As illustrative examples, see the evaluation resources for a training course on employability delivered by Radio Regen. These list the evaluation methods used, when and how they were used, and give the evaluator’s comments on how useful each method was, along with any issues that arose with it. You can also download the evaluator’s observation sheet used throughout the course.

See also the Radio Regen student questionnaire (doc).

 

Connect:Transmit monitoring and evaluation

In Connect:Transmit, which successfully evaluated the use of community radio training to develop young people’s speaking and listening skills, a number of evaluation tools were used. The main tools were of the ‘distance travelled’ type, used to gauge the development in learners’ (and trainers’) skills and understanding over the course of the project. Two specific tools we used were the ‘Speaking and Listening Star’ and the ‘Community Quiz’; you can see both in more detail in the Connect:Transmit evaluation handbook. You can also download and use the Speaking and Listening Star spreadsheet from Connect:Transmit (and adapt it for your own purposes); it was developed to easily produce statistics about a learner’s ability in various categories at the beginning and end of the project. Instructions for using the spreadsheet are on the cover page.

More resources on evaluation and monitoring from Connect:Transmit can be found on the following pages:

 

 

Building an initiative around evaluation

Before your training-based project or initiative even begins, think about the main reasons you have for gathering information. For instance, these could be:

  • to provide evidence to current or future funders and stakeholders that the project is achieving its aims
  • to provide feedback to trainees or client groups and to tailor provision to their identified needs
  • to support overall project evaluation, project development and lessons for the future
  • to provide evidence for your trainees to use on CVs or to access further training, education or job opportunities.

 

Things to do and/or think about in developing an evaluation approach:

  • check if there is any requirement or expectation on the part of project funders, wider stakeholders or managers
  • identify core outcomes and indicators appropriate to your own project, and target group
  • identify any additional outcomes and indicators specific to your particular target group
  • consider the alternative methods of collecting information and select those that best fit your needs and resources
  • decide if the methods chosen are appropriate to your target group as a whole
  • think about how robust the findings will be. How far do they depend on value judgements and how can you guard against any problems this might lead to?