Good evening. I want to start this presentation by congratulating the Civil Engineering Community on the excellent progress you have made in changing the safety culture throughout the industries that you work in. It is not so many years ago – maybe a little over ten – that the prevailing culture in many of the sectors you work in – especially Construction – was that it was an inherently dangerous business and that serious injuries and sometimes fatalities were somehow inevitable. If I had suggested to you at the turn of the millennium that we could build the venues for and stage the Olympics in 2012 without a single worker fatality, I suspect few would have thought that possible.
But we did it - you did it - and it is what we have come to expect on major projects like Terminal 5 and now of course, Crossrail. I suspect most of you now recognise safety as a priority in your business – some of you may even say it's your "number one priority". I hope you don’t – I’d much rather you position it as a core value and really mean it.
But before you all get too comfortable, let me make it clear that I am not here tonight simply to give you a warm glow about what you’ve achieved so far. I mention the progress you’ve made because I want you to recognise what you can achieve collectively when you give your commitment and show leadership – when you recognise that there is a problem/challenge that needs to be addressed. The reason for that is that I believe there is still a significant piece of work in relation to safety that remains largely undone in your industries.
The theme of tonight’s event is learning from other industries. You don’t have to look too far to other sectors to see that they have learned some very hard lessons about the need to address both personal safety and what they call process safety. Major hazards sites in the chemical and oil and gas sectors have prided themselves on their outstanding performance in personal safety for years. They have encouraged the reporting of near misses, investigating many of them and even very minor injuries in depth to understand the causes. Behavioural safety programmes are embedded in their thinking. But the harsh reality is that holding the handrails, wearing PPE and avoiding slips and trips on site may well be successful in driving down injury rates, but it will do very little to address issues of process safety – and it is process safety issues that lead to disasters like Buncefield, Texas City, Macondo and many more.
Process safety is a well-understood concept among chemical engineers like me – we often refer to ourselves as process engineers. We design and operate chemical plants which run continuously and where we need to control chemical reactions. More often than not, some if not all of the chemicals contained within the system are hazardous and will cause serious harm to people and to the environment if there is a loss of containment. To give it its full title, the subject is actually Process Safety and Loss Prevention, which starts to provide a better insight into what it is all about. Some sectors and organisations now use terms like Operational Integrity or Asset Integrity Management; others draw the distinction between Occupational Safety and Operational Safety. But at the heart of the concept of Process Safety and Loss Prevention lie two very simple principles which are much more widely applicable to sectors beyond chemical processing:
Inherently safer design is a principle which can be applied during certain “windows of opportunity” in the life of any project or facility. Trevor Kletz – who has sadly died recently, but who will be remembered by chemical engineers around the world as a leader on process safety – once put it very succinctly: “What you don’t have can’t leak”. In design terms, to chemical engineers this means eliminating inventories of hazardous materials within the process.
In civil engineering terms, the principle of inherent safety in design lies at the heart of the Construction (Design and Management) Regulations. One of the intentions of these regulations is to get people involved in projects to consider safety issues beyond the construction phase, and to build into the design the means for the facility to be used, operated, maintained and ultimately demolished more safely. I have to say that, despite the many successes we can attribute to the introduction of CDM, I have yet to see overwhelming evidence that architects pay sufficient attention to the practical risks of changing light bulbs and other such mundane maintenance tasks in some of the undoubtedly eye-catching and often beautiful structures which they develop.
Inherently safer design is of course not limited to conceptual grass-roots design. There will be many other opportunities during the life span of any project to make a difference and reduce inherent levels of risk. The recent breakthrough – thanks to chemistry, of course – of coating the Forth railway bridge with a long-lasting, corrosion-resistant paint will eliminate the customary reference to painting the bridge as a continuous operation.
But inherently safer design may also require some different thinking in the companies and organisations which commission major projects. All too often, competitive bids for projects are assessed on price alone, and by “price” what is actually meant is the capital expenditure required to build the facility, with insufficient regard – if any – to ongoing operating cost. It may well be the case that an inherently safer design will cost incrementally more to build, but the assessment must look at the full life cycle costs: the potential savings in operating expense over the life of the project could dwarf the increased capital outlay.
But I now want to come onto my main theme of preventing catastrophe and look at what lessons can be learned from other industries to apply to your own.
The first is: never assume that the worst can’t happen and that you have “nailed” the problem. While it is true that we have made significant advances in safety systems – instrumentation, detection, monitoring, surveillance and so on – remember that those new systems can also fail. We may have added layers of protection, but if those layers are not maintained they can provide a false comfort blanket.
Despite the tragic consequences and hard lessons of catastrophes like Flixborough, Bhopal, Piper Alpha and others, there has been a strong tendency for complacency to creep in over time. Commitments made in the 70s and 80s to ensure that such disasters would never happen again were allowed to develop into that sense that process safety had been addressed and would take care of itself. Process control was much more sophisticated and “the computer wouldn’t let that happen any more”.
But catastrophes can only be prevented if the potential for them to happen is recognised throughout the organisation, from the very top to the very bottom – and if that recognition creates a feeling of constant unease and vulnerability, not a sense of complacency.
Catastrophic incidents, when they happen, send shock waves through whole sectors of industry. Sometimes those shock waves extend beyond the sector and get picked up more broadly. So it would be reasonable to expect that industry performance in general, as measured by the number of major incidents, is improving. It is not – and you might assume that this is because new things are happening, new risks that we hadn’t anticipated catching us out. But that isn’t the case either.
What we are seeing is a real failure to learn across sectors, and sometimes even between different business units or locations of the same company. So what is happening? What is going wrong? In today’s world, when a major industrial catastrophe occurs anywhere in the world, everyone knows about it – Buncefield, Texas City and Macondo all made headlines around the world, often for days or weeks. There are exceptions, of course: the explosion which occurred at an ammonium nitrate facility in Toulouse, France in September 2001 did not make the global public impact it would have done had it not occurred just a few days after 9/11.
But when a catastrophe occurs people are very keen to know what happened and why. A major investigation will be undertaken, sometimes more than one. But motives will vary:
Businesses too will be very interested to know what happened, but what often happens is that the focus of their attention is too narrow. There is a strong tendency to try to pinpoint something which enables them to distance themselves from the catastrophe.
So it’s not too surprising that if that degree of rationalisation and distancing is happening within the same and closely related sectors, it makes it that much more challenging for different sectors to learn from one another.
In the case of Buncefield in 2005, a level gauge that should have cut the flow to a gasoline storage tank when the tank was full failed to operate, and the tank overflowed.
So the superficial response is that if you don’t use that type of cut-off valve, or if you don’t have a hazardous storage facility, there is nothing to learn. Not true. The lessons which everyone needs to take from Buncefield are many and widely applicable:
Faults were being overlooked; assumptions were being made. These are the lessons which every industry sector needs to learn from Buncefield.
I said earlier that companies are often keen to find what differentiates them from the installation where the catastrophe occurred. That reaction is driven by the culture of the organisation, and that culture is set at the very top. It is all about leadership.
If the boss hears about the competitor’s misfortune and says to his engineer, “Find out what happened and tell me why that can’t happen here” – that’s exactly what he will be told. “That won’t happen to us, boss, because we don’t use XYZ valves.”
It takes a very brave engineer who will respond to that question by saying “It could easily happen to us because it’s really difficult to raise concerns and get the funds to do the urgent maintenance work on safety critical systems”.
As engineers, we all buy the statement that you can’t manage what you don’t measure. But how do you measure the potential for catastrophe? I spoke earlier about safety now being a priority – or, better still, a core value – in many of the companies where you work. I’d be confident that safety is on the agenda of every Board meeting, probably as the first item. But what gets reported? What are you measuring? The safety triangle is very effective for personal safety: by measuring near misses as well as minor injuries, the Board can track improvement, assess the potential for a serious injury or fatality to occur, and compare performance between sites.
But catastrophe prevention – operational safety – requires different indicators … and different questions to be asked.
Catastrophe prevention and operational safety indicators must be predominantly leading, not lagging. The absence of a major incident for several years is not a good measure of performance. You need to know and understand your vulnerabilities. What are the worst things that could go wrong in your particular industry, and what measures do you need to have in place to tell you how you’re doing?
I can suggest some such measures which could be widely applicable:
Process safety – catastrophe prevention – is not easy. It requires the asking of searching questions. What you need to measure is likely to be specific to your business and process, so it’s hard simply to copy what others do – but you can still learn from them.
Only last week I heard an excellent presentation from First Group, who are very clear about the distinction between Operational and Occupational Safety. Their examples of catastrophic risk included a major train derailment and a major road traffic accident involving a school bus carrying 60–70 children. What struck me about these examples is that in both cases the failure leading to the catastrophe could well be outside the direct control of the company – but they were no less clear that the operational catastrophe would be theirs.
Fukushima should teach us that companies must take responsibility for assumptions they make about external events which they can’t control as well as the risks they create.
The CTV (Canterbury TV) building in Christchurch was not built to withstand an earthquake, and so 115 of the 185 fatalities in the 2011 Christchurch earthquakes occurred in that single building. Even though Christchurch had not hitherto been considered a high earthquake-risk zone, those lost lives were neither inevitable nor an act of God.
I cannot emphasise enough how important it is that this rather specialised and industry-specific notion of “process safety” is seen in a much broader context. Every branch of engineering which builds and operates facilities on a large scale has the capacity for catastrophic events which could not only claim lives but destroy the business itself. Operational safety – catastrophe prevention – is an essential part of a comprehensive and effective safety management system.
It really does mean you!