Budgetary pressure is nothing new to safety departments. Especially during economic slowdowns, when executives focus on cutting costs, it has always been vital for safety professionals to justify their activities.
Beyond budget pressures, being able to demonstrate the value and impact of your safety programs is crucial to building momentum and support for future initiatives. Without the support of executive management, supervisors, and employees, safety programs will not reach their full impact or help drive a cultural shift. One of the most effective ways to gain this kind of company-wide buy-in is to demonstrate the effectiveness of previous programs and build upon that success.
Let’s take a deeper look at the challenges that safety professionals face when trying to define success, and outline best practices to help you demonstrate the value of your company’s safety programs and initiatives.
Making invisible success visible
Every safety department faces the same challenge of figuring out how to measure its positive impact. Oftentimes, safety teams are viewed as a cost of doing business with no tangible value, but we all know that’s not the case. Safety incidents can be costly, and they hit the bottom line even harder when the public loses trust in the company. While injuries, illnesses, absences and accidents are visible, and the negative effects of an unsafe environment are readily apparent to anyone in the company, it’s what safety professionals do behind the scenes to prevent these things from happening that often gets overlooked.
So how do you shine a light on these efforts that prevent injuries and illnesses? Much of the work that safety professionals do results in benefits that are only apparent through careful measurement and reporting. Being able to attribute a measurable impact to your efforts is critical to get buy-in for current and future safety programs and initiatives.
How to evaluate success
Despite the complexity in justifying your safety programs and initiatives, there are ways to define and measure the results of your work. The first step is to select an appropriate methodology.
There are many ways that the impact of a project or program can be measured. Each is calculated and measured differently and often relates to a different stage of your safety program. Let’s look at four examples:
If your safety program is at its earliest stages, a needs assessment determines which areas of your organization would most benefit from your safety improvement efforts. By analyzing safety statistics, incident reports and/or employee surveys, you can identify areas of improvement and allocate resources accordingly.
If you have already determined the most appropriate use of your resources, a process evaluation looks at the implementation of a program, measures whether the planned processes have been put in place, and measures how individuals are responding to the new processes.
An effectiveness evaluation, in its simplest terms, determines whether the results of a specific program met the outlined objectives. The success measurements for this method identify the impacts of a program and look at the magnitude of its effect. Areas that safety professionals would look to measure include injury rates, near misses, events with significant injuries, and workers’ compensation claims. It’s important to note that a number of variables can affect the perceived effectiveness of a particular program, so safety professionals may need to dig in to learn why a program wasn’t as successful as expected. Factors such as employee turnover, changing operations, and mergers and acquisitions can all influence the results.
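When comparing injury rates before and after a program, it helps to normalize for hours worked. One widely used metric is the OSHA-style total recordable incident rate (TRIR), which scales incident counts to 200,000 hours worked so that sites and periods of different sizes can be compared. A minimal sketch in Python, using hypothetical incident counts and hours:

```python
def incident_rate(recordable_incidents: int, hours_worked: float) -> float:
    """OSHA-style total recordable incident rate (TRIR).

    200,000 is the number of hours worked by 100 full-time employees
    in a year (100 workers x 40 hours x 50 weeks); dividing by it
    normalizes the rate so differently sized populations compare fairly.
    """
    return recordable_incidents * 200_000 / hours_worked

# Hypothetical figures: the year before and the year after a program.
before = incident_rate(recordable_incidents=12, hours_worked=480_000)
after = incident_rate(recordable_incidents=7, hours_worked=505_000)
print(f"TRIR before: {before:.2f}, after: {after:.2f}")
```

A falling TRIR is only suggestive on its own; as noted above, turnover or changing operations could explain part of the drop.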
Executive management wants to see financial analyses, such as cost-outcome analysis, cost-benefit analysis or cost-effectiveness analysis. All of these analyses work in much the same way, though one may be more appropriate than the others for a given program. First, you estimate the net cost of your program by defining how much it costs to implement, then subtracting the cost savings that can be attributed to the project. Determining the cost savings can be challenging, since these are typically avoided costs, so it’s important to demonstrate quantitatively that the program had a direct positive impact on outcomes such as injury rates, absenteeism, and occupational health costs.
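The arithmetic behind this is simple; the hard part is defending the avoided-cost estimates. A minimal sketch, with entirely hypothetical dollar figures:

```python
def net_cost(implementation_cost: float, avoided_costs: float) -> float:
    """Net program cost: what you spent minus what the program saved.

    A negative result means the avoided costs exceeded the spend.
    """
    return implementation_cost - avoided_costs

def benefit_cost_ratio(implementation_cost: float, avoided_costs: float) -> float:
    """Benefits returned per dollar spent; above 1.0 the program paid for itself."""
    return avoided_costs / implementation_cost

# Hypothetical: a $50,000 ergonomics program credited with avoiding
# $30,000 in workers' compensation claims and $35,000 in absenteeism
# and occupational health costs.
avoided = 30_000 + 35_000
print(net_cost(50_000, avoided))           # negative, i.e. a net saving
print(benefit_cost_ratio(50_000, avoided))
```

Presenting both figures is useful: the net cost speaks to the budget, while the benefit-cost ratio gives executives a per-dollar comparison across competing programs.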
Planning comes first
Demonstrating program value should be a primary goal before designing the program. The requirement to justify your work has numerous knock-on effects that dictate what kind of program to implement and what to measure. If this value calculation is not front of mind during the planning phase, there’s a good chance you’ll get to the end of the program without the data you need to demonstrate to management that it was a success. Typically, for every safety program, the planning phase should cover the following:
- Define the scope: Work collaboratively - involve employees and managers to define the purpose of the program and its main questions, identify available resources, establish goals for the project, and specify a deadline.
- Organize a committee of stakeholders: Be sure to include those who will communicate results, such as managers, worker representatives and evaluation experts, and look for members who bring different perspectives. Getting buy-in from different divisions, departments and disciplines will ensure that your program benefits a wider segment of the organization, rather than a small niche.
- Develop models: Attempt to predict how the program will work and try to identify any outside variables that may affect the validity of your results. Work done at this stage should save you time and energy - no one wants to redesign a program after it’s been implemented.
- Choose your evaluation criteria: As already discussed, you need to determine what the goals are and how you are going to measure the program’s impact. Consider giving a higher weighting to certain outcomes. Be careful not to make data collection too onerous, and think about scalability - this group may have buy-in, but will everyone else? It’s important that you understand exactly what you are going to measure. Questions to ask yourself and others include: “Does the program lean towards being measured in a certain way?” and “Will the results be statistically valid?” Asking these sorts of questions will help ensure that you aren’t overlooking things when you analyze the results.
- Resources: Do you have the resources to introduce experimental design elements into your evaluation, such as control groups, random selection, and accurate pre-program measurements? Knowing the answers to these questions will help ensure that the results you get are defensible.
If you want to be able to demonstrate the impact of specific interventions, you should set up your program using the scientific method.
There are two different levels you can choose here – a simple ‘before and after’ test or a more complex experimental design. You will want to choose the one that suits the resources available to you.
Before and After
A before and after test is one where you take readings before the project begins and after it ends. If you don’t have the ‘before’ data, it is impossible to prove any effect. The longer a project runs, the more outside factors can influence what you are measuring, and the less certain you can be in your conclusions.
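In its simplest form, a before and after comparison is just the change between two baselines. A minimal sketch, using hypothetical monthly incident counts:

```python
from statistics import mean

# Hypothetical monthly recordable-incident counts, six months before
# and six months after a program launch.
before = [4, 5, 3, 6, 4, 5]
after = [3, 2, 4, 2, 3, 2]

# Relative change in the monthly average.
change = (mean(after) - mean(before)) / mean(before)
print(f"Mean incidents/month: {mean(before):.1f} -> {mean(after):.1f} "
      f"({change:+.0%})")

# Without the 'before' baseline this comparison is impossible, and even
# with it, outside factors (turnover, seasonality, changing operations)
# may explain part of the drop.
```

Six data points either side is deliberately small here; in practice, the more periods you can include on each side, the less a single unusual month distorts the comparison.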
Threats to the validity of before and after tests could include:
- Outside influences – external variables
- Regression to the mean
- Test design
- Placebo effect
With all of these, the simplest way to mitigate their impact is to consult outside data to compare with your results.
Experimental design
With an experimental design, you need two groups: a control group and a randomly selected experimental group that, ideally, is representative of the wider population. The control group must not learn about the program second-hand from the experimental group, and any communications must make it clear that one group is not losing out by not having the program rolled out in its factory or department.
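The key mechanical step is random assignment, which helps ensure the two groups are comparable before the program starts. A minimal sketch, with a hypothetical pool of employee IDs:

```python
import random

def assign_groups(employee_ids, seed=42):
    """Randomly split employees into experimental and control groups.

    Random assignment makes the groups comparable on average, so a
    difference in outcomes can more plausibly be attributed to the
    program rather than to pre-existing differences between teams.
    The fixed seed just makes this illustration repeatable.
    """
    ids = list(employee_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (experimental, control)

experimental, control = assign_groups(range(100))
print(len(experimental), len(control))
```

In practice you may have to randomize at the site or department level rather than by individual, since employees who work side by side cannot realistically be kept in separate groups.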
How you measure the success of your program will depend on the model you outlined during the planning phase. Ideally, you should measure all the way through – implementation, during the project and final outcome - to get an idea of change over time. There are many different ways you can measure your impact, so you will have to consider the most valid and reliable measures for your particular circumstances.
Most of all, you should beware of headline figures and statistics. It is often easy to add one and one together and get three, but it’s important to remember that correlation does not equal causation. It is important to measure the underlying data points as well as the headline statistics.
At the end of a project you want to be able to see whether the program as designed is suitable for a wider application in the organization. If something in the data indicates that the design of the program should be changed – test it again. Don’t look at the data and see a positive effect and be lulled into thinking that the positive effect will surely be replicated in the rest of the company. If there are indications that something might not be right, drill down into it and think about ways to improve the program design. This is the whole point of continuous improvement.
Hopefully, it is clear by now that there is no silver bullet when it comes to proving the value of safety programs and initiatives. Additionally, every organization has different priorities, so there is no standard definition of ‘value’ that is consistent across industries. It's because of these shifting priorities that we have such an abundance and variety of tools available to us to plan and construct a safety program. By taking the time to plan and tailor your programs to reflect the priorities of your stakeholders, you can make use of metrics and KPIs that are most appropriate and are most likely to gain you the support you require to continue investing in the safety and health of your employees. We invite you to learn more about Cority's safety and business intelligence solutions that can help you track, manage and measure your programs.
About the Author
Ian Cohen, MS is the Product Marketing Manager responsible for Cority's Environmental and Safety initiatives. Before taking this role, Ian was Cority's Environmental Product Manager, where he was responsible for developing Cority's Environmental Compliance and Data Management Suite. Prior to working with Cority, Ian was an environmental specialist at Florida Power & Light Company, a NextEra Energy, Inc., company, where he led the development, implementation, and management of various environmental management systems and programs. Ian is well versed in the development of enterprise environmental management information systems and is a subject matter expert in corporate sustainability, including program development, annual reporting and stakeholder communications. Ian earned a Bachelor of Science degree in Biology and a Master of Science in Environmental Science, both from The University of Tennessee at Chattanooga.