
Use Near-Misses to Learn about Construction Safety

by Hal on June 18, 2009

in Safety, lean


[Image: "ladder lark" by Elliot Moore via Flickr]

I haven't written about construction safety in a while. I used to write about it every Thursday. I just read an ENR editorial, "Analyzing Near-Misses Is Key to An Effective Safety Plan." It reminded me of how far we need to go in construction. Our industry kills about 1,300 people in the US every year. Thousands of others are seriously injured. Yet there are far more dangerous industries where people are not getting hurt at anywhere near construction's rates. Alcoa has made amazing strides to create an injury-free workplace in its smelters. DuPont's chemical operations, as dangerous as those processes are, don't result in anything near the injuries of construction. These companies, and many others across industries, all have one thing in common that is fundamentally missing in construction: they systematically learn from each anomaly, variance, problem, and near-miss. It's an approach that separates Toyota from all the other auto manufacturers. It's an approach that we can adopt today for safety.

Near-misses happen all the time. I could be working on a ladder and drop a screwdriver. That's a near-miss. No one needs to be under the ladder; they don't even need to be in the work area. That I dropped the screwdriver is unintended and potentially injurious. In the usual situation I might say, "Oops!", get down off the ladder, retrieve my screwdriver, and go back to work. However, someone could have been injured, or worse. It's exactly this kind of situation that we need to investigate. If we can learn why that incident happened, then we have a chance to prevent it from ever happening again. How do we do that? We call attention to our mistake and get to the root cause.

Toyota practices getting to the root cause by asking why 5 times. It's a process that takes place at the time of the variance, with the people who were present, and for the purpose of learning. There's no finger-pointing or blame. People bring sincere curiosity to learn. It doesn't take long. But it does take the courage to call attention to what isn't working, especially when it's something that I did wrong.
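To make the practice concrete, here is a minimal sketch of how a crew could capture a 5-Why conversation for a near-miss so the learning doesn't evaporate once everyone goes back to work. The structure and names are mine, not Toyota's or any firm's standard, and the specific "why" answers are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class FiveWhy:
    """One near-miss, investigated by asking 'why' until a root cause surfaces."""
    incident: str                                   # what happened, in plain words
    reported_by: str                                # who called attention to it -- no blame attached
    whys: List[str] = field(default_factory=list)   # each successive answer to "why?"
    countermeasure: str = ""                        # what changes so it can't happen again
    when: datetime = field(default_factory=datetime.now)

    def ask_why(self, answer: str) -> None:
        """Record the next 'why'; five is a guideline, not a hard stop."""
        self.whys.append(answer)

# The dropped screwdriver from the post, worked through on the spot
# (the answers below are hypothetical):
nm = FiveWhy(incident="Screwdriver dropped from a ladder", reported_by="Hal")
nm.ask_why("The screwdriver was lying loose on the top rung.")
nm.ask_why("There was no tool holster or lanyard in use.")
nm.ask_why("Tool tethering isn't part of the ladder-work standard.")
nm.ask_why("Nobody has asked for tethers because drops have never been investigated.")
nm.ask_why("Near-misses get shrugged off as 'oops' instead of being reported.")
nm.countermeasure = "Add tool tethering to the ladder-work checklist."
```

The point isn't the data structure; it's that the incident, the chain of whys, and the countermeasure get captured together, at the time of the variance, by the people who were there.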

Amazing results come from doing a Good 5-Why™. One top-ten design and engineering firm that has started the practice is getting a dozen or more improvements from each 5-why. Imagine what we could do on the construction job site if we began the practice of investigating each variance and "oops." It wouldn't be long before real gains were made in making it a far safer place.

7 Comments

1 Alan Mossman June 19, 2009 at 6:20 am

Great post Hal
what led me to put fingers to keyboard is your reference to ladders. I don't know if it is the same in the US, but in the UK the HSE's key message is that ladders should only be used for low-risk, short-duration work. On average in the UK, 13 people a year die at work falling from ladders and nearly 1,200 suffer major injuries. More than a quarter of falls happen from ladders. http://www.hse.gov.uk/falls/ladders.htm
HSE is the UK Health & Safety Executive, equivalent to OSHA – their job is to learn from other people's misfortunes.
Alan

2 Glen B. Alleman June 19, 2009 at 10:04 am

Great work Hal. I missed the Thursday posts.

At Rocky Flats a "near miss" was treated the same as a reportable, but handled internally. The Lessons Learned team – who used the Phoenix Method from Nuclear Safety Review Concepts – then held their "little get together" to capture the causal source. Over time the near misses went down for repetitive work, and the pre-work briefing increased the number of checklist items for new work.

3 Hal June 19, 2009 at 10:34 am

Glen mentions "pre-work briefing," a topic of a future post. Pre-work briefings are keeping us safe from the dangers of all kinds of nasty situations. We live safely with nuclear power in part because of the industry's practice of doing pre-work briefings for all maintenance activities. Power linemen keep themselves safe working with live power by doing pre-work briefings. They work. We need to do them throughout construction.

4 Dennis Sowards June 20, 2009 at 5:38 pm

Masaaki Imai talks about 'scare reports' in a hospital in his book Gemba Kaizen. He also refers to Heinrich's Law of Safety, which says that to avoid the serious accident one must avoid all the near misses (scare reports). We in construction need to follow what has already proven to be effective.

5 Tim Eiler June 21, 2009 at 9:23 pm

Just as with "lessons learned" in general project management, we must be careful not to let the analysis stop short of practical improvements. We can't, particularly where safety is concerned, let hope be the guiding management principle. Analysis is critical, but no more critical than actually systematizing improvements through process improvement and wide communication.

6 Scott Stribrny June 22, 2009 at 1:56 pm

Heinrich, a noted pioneer in the scientific approach to accident prevention, invoked the Iceberg Model that I described in the following article, "Shaken, Not Stirred," originally published April 26, 2007, by the Cutter Consortium Enterprise Risk Management & Governance Resource Center.

Shaken, Not Stirred
by Scott Stribrny

The crash happened suddenly. The other car appeared as a brief blur in my windshield. In an instant, the airbag deployed, ruptured, and surrounded me in white dust. My vehicle and I came to an abrupt and violent halt. It was a lot like one of those Volkswagen “Safe Happens” spots — the ones that caused so much controversy by depicting casual conversations interrupted by graphic car crashes. While my car is a total loss, thankfully my injuries are minor (apparently I was merely shaken, not stirred). The other driver also escaped with minor injuries and was ticketed for not yielding to traffic.

For most of us, an accident is an unexpected outcome of car travel. Yet, in the time it takes to read this piece, someone less fortunate will die in a car accident. [1] According to the National Center for Statistics and Analysis, in 2005 there were an estimated 6,159,000 police-reported traffic crashes, in which 39,189 people were killed and 1,816,000 people were injured. Some 4,304,000 of these crashes involved property damage only. One can't help but wonder how many more mishaps and near misses are never reported.

As we intensify our study of errors in software systems, their impact on business, and the effectiveness of risk management principles, processes, and behaviors, we need to keep in mind that software system errors are not unique. They share many causal factors with errors in complex situations encountered in military aviation and health care (see “Disaster prevention: lessons learned from the Titanic”). We can — and should — learn from those sectors’ efforts to study error and its prevention. In addition, we need to remember that errors and near misses are extremely important sources of useful information.

The Iceberg Model

H. W. Heinrich, a noted pioneer in the scientific approach to accident prevention, developed the Iceberg Model of accidents and errors. [2] The part of an iceberg above the water represents errors that cause major harm; below the water are no-harm events, events that cause only minor injuries, and near misses. After studying accidents for many years, Heinrich suggested that for every one event that causes major injury, there are 29 that cause minor injury and 300 that result in no injuries. A near miss is defined as an error process that is caught or interrupted (i.e., someone intervenes to prevent the error).
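As a rough illustration of what the ratio implies, here is a sketch of my own (using only the 1:29:300 numbers above, not anything from Heinrich's or the article's data) that scales from the visible tip to what sits below the water line:

```python
# Heinrich's 1 : 29 : 300 iceberg ratio, scaled by the count of observed major-injury events.
HEINRICH_RATIO = {"major_injury": 1, "minor_injury": 29, "no_injury_or_near_miss": 300}

def below_the_waterline(major_injuries: int) -> dict:
    """Estimate the event counts the model implies for a given number of major injuries."""
    return {tier: weight * major_injuries for tier, weight in HEINRICH_RATIO.items()}

# A site that logged 4 major injuries would, by this model, have had on the order of
# 116 minor injuries and 1,200 no-injury events or near misses alongside them.
print(below_the_waterline(4))
# {'major_injury': 4, 'minor_injury': 116, 'no_injury_or_near_miss': 1200}
```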

Heinrich emphasizes that the importance of an individual mishap lies in its potential for creating injury and not in the fact that it actually does or does not. Therefore, any analysis as to cause and remedial action is limited and misleading if based on one major accident out of a total of 330 similar accidents, all of which are capable of causing injuries or damage. In other words, those who limit their study to isolated, spectacular cases (e.g., major aircraft accidents or news-making software failures; see "Software failure cited in August blackout investigation") are looking only at the tip of an iceberg.

Relatively few potential problem areas are identified either through accident investigation or post-project inquiry. One reason for this is that most errors do not result in accidents. In aviation, this is because someone — usually the pilot — saves the aircraft. Airborne emergencies that are safely recovered belong in this category, as they are events that could have been accidents. The Federal Aviation Administration (FAA) has had a confidential program called the Aviation Safety Reporting System in place for several decades to report near misses, as these incidents provide on-going insight into low-probability but high-consequence situations. Similar reporting systems have been in place for fire fighters, doctors, and railway engineers, all to try to better understand and manage the risks these people face.

In software projects, we’re all too familiar with stories of those few individuals that step in with late-stage heroic effort to avert project blunders. In reality, these blunders should be considered as accidents, albeit accidents that did not result in injury or damage. It is here that a fallacy becomes apparent: in most organizations, these “accidents” will not be analyzed as risks because there was no injury or damage.

A superior approach in risk analysis would be to exploit the frequent no-harm and near-miss occasions to better identify the risks involved in achieving our business objectives. Obviously, we have to respond to and learn from disasters, but if we want to be proactive, we also need to deal with the less serious but more frequent mishaps.

Looking Below the Water Line — Analyzing the Mishaps

Has mishap analysis been validated in practice in other professions? In other words, has the extra effort to better avert risks been worth it? In military aviation, the degree of success has been significant and widely praised as a strong deterrent to aircraft accidents. According to Colonel David L. Nichols, in Mishap Analysis: An Improved Approach to Aircraft Accident Prevention:

In 1970 Pacific Air Forces Command suffered 60 major accidents and 1739 minor accidents and reportable incidents. Thus for every major accident there were 28.9 accidents of lesser damage. Heinrich says there should be 29. [3] Also in 1970, the Air Force experienced 200 major accidents and 5800 minor accidents and reportable incidents. [4] This represents exactly 29 accidents of lesser degree for every major accident. In other words, the findings of Heinrich and the Air Force are compatible as far as the top blocks of the pyramid are concerned … the mishap analysis program appears to relate to all segments of the pyramid. The twenty months of data collected in one wing revealed three major accidents, 87 reportable incidents, and 885 non-reportable. [5] This is a relationship of 295 accidents with no damage or injury and 29 accidents with little damage for each major accident.

Admittedly, the sample is small, but the correlation to Heinrich’s ratios is so close as to indicate that mishap analysis could help to fill in the risk knowledge gap and provide the needed database to inform a proactive and coherent approach to long-term risk aversion.
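As a quick sanity check on the arithmetic quoted above, the ratios can be recomputed directly. This is my own calculation, using nothing beyond the figures already given in the excerpt:

```python
# Ratios of lesser accidents (and, where given, no-damage events) per major accident,
# recomputed from the Air Force figures quoted above.
datasets = {
    "PACAF, 1970":         {"major": 60,  "lesser": 1739},
    "US Air Force, 1970":  {"major": 200, "lesser": 5800},
    "One wing, 20 months": {"major": 3,   "lesser": 87, "no_damage": 885},
}

for name, d in datasets.items():
    line = f"{name}: {d['lesser'] / d['major']:.2f} lesser accidents per major accident"
    if "no_damage" in d:
        line += f", {d['no_damage'] / d['major']:.0f} with no damage or injury"
    print(line)

# PACAF, 1970: 28.98 lesser accidents per major accident
# US Air Force, 1970: 29.00 lesser accidents per major accident
# One wing, 20 months: 29.00 lesser accidents per major accident, 295 with no damage or injury
```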

“Safe Is Not the Equivalent of Risk Free”

Even with near-miss information at hand, there is no guarantee that something untoward won't still happen. In a 1980 decision, the US Supreme Court said that "safe is not the equivalent of risk free." If "safe" meant "freedom from the possibility of harm," few human activities would meet the standard. While any individual risk does not guarantee injury or make an activity unsafe in and of itself, that alone does not mean it should be ignored.

When in an enterprise we find ourselves merely shaken, but not stirred, we need to seize the opportunity to recognize the potential impact that might have been realized. The level of risk can change dramatically between assessment periods without being noticed, leaving the enterprise overexposed. Before enterprise-level risks outgrow your capabilities to manage them, you need to aggressively analyze mishaps, learn about potential risks, and take early steps to monitor and control them.

Notes

[1] According to the National Highway Traffic Safety Administration, on average, a person was injured in a police-reported motor vehicle crash every 12 seconds, and someone was killed every 12 minutes.

[2] Heinrich, H.W. Industrial Accident Prevention. New York and London, 1941.

[3] "Unit Safety Officers Guide to Statistical Analysis," Headquarters PACAF, Safety Analysis Division, Hickam AF Base, Hawaii, 1971, pp. 5-6.

[4] US Air Force Accident Bulletin 1970, Directorate of Aerospace Safety, Norton AF Base, California, 1971, p. 1.

[5] XX Tactical Fighter Wing's Safety System Analysis data (a PACAF subordinate unit), September 1970-April 1971.

7 los angeles project manager June 22, 2009 at 3:22 pm

Hal, you will love Dubai (not!) – there, builders simply fall off high-rises due to heat exhaustion – the sheiks don't give a damn about worker safety. It's rather disturbing! Regards, Billy

