The drilling rig Deepwater Horizon captured headlines in 2010 when an onboard explosion caused one of the largest oil spills in history. Numerous investigations followed, and the incident – which claimed the lives of 11 workers – has become a case study for analytics improvements in the oil and gas industry.


What happened?


According to New Scientist contributor Justin Mullins, several different system failures created the conditions that led to the Deepwater Horizon explosion. The crew followed standard procedure by pumping cement into the bottom of the borehole to seal it and prevent leaks, and checks were made to confirm the seal had been applied properly, yet oil and gas nevertheless began to leak through the cement toward the surface.

The leak was not spotted soon enough for crew members to react in a way that would have prevented the explosion. This, combined with the misinterpretation of a pressure test, multiple valve failures, the lack of an onboard gas detection alarm and an overwhelmed mud-gas separator, resulted in the catastrophic explosion and subsequent spill.


The issue of disparate data sets


As Mullins notes, the explosion was caused by failures across several different complex systems. An important underlying factor is that while each of these systems was monitored, the data sets it produced were likely viewed independently rather than alongside, or in context with, data from other critical rig systems.

This is common practice in offshore drilling: a single rig typically involves several different service companies, each of which maintains and monitors its own systems individually. While these organizations are required to take part in monitoring, many do not share their findings with the other service companies involved in the rig's operations.

This creates disparate data sets that, viewed individually, don't show a complete picture of the rig's operating conditions. For instance, one company's monitoring may show that its particular service is within allowable tolerance, but even small variances can cause problems, especially when each service operator's data set isn't viewed alongside the others in a holistic manner.
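To make that concrete, the short sketch below is a purely hypothetical illustration (the signal names, values and threshold are invented, not drawn from any real rig system). It shows how readings that each pass their own tolerance check in isolation can still, when viewed together, indicate that every system is drifting toward its limit at once:

```python
# Hypothetical illustration: three independently monitored readings,
# each expressed as a fraction of its own allowable tolerance.
readings = {
    "cement_seal_pressure": {"value": 0.92, "tolerance": 1.0},
    "annular_gas_level":    {"value": 0.88, "tolerance": 1.0},
    "mud_gas_separator":    {"value": 0.95, "tolerance": 1.0},
}

# Viewed in silos, every service company's check reports "OK".
for name, r in readings.items():
    status = "OK" if r["value"] <= r["tolerance"] else "ALARM"
    print(f"{name}: {status}")

# Viewed together, the combined drift toward the limits is itself a warning.
combined = sum(r["value"] / r["tolerance"] for r in readings.values()) / len(readings)
if combined > 0.85:  # threshold chosen purely for illustration
    print(f"Cross-system warning: average utilization at {combined:.0%} of tolerance")
```

The point is not the specific rule, but that a risk signal like this only exists when the data sets are examined together.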

As DataInformed contributor Atanu Basu wrote, the oil and gas sector has long applied data analytics to data sets such as well logs, seismic reports, recorded video and sound, production numbers and lift data.

“The oil and gas industry looked at images and numbers, but in separate silos,” Basu wrote. “But the ability to analyze hybrid data – a combination of unstructured and structured data – provides a much clearer and more complete picture of current and future problems and opportunities, along with the best actions to achieve the desired outcomes.”


Leveraging preemptive analytics


The process of using data to determine the best next action is known as preemptive analytics, and it offers actionable insights that organizations in the oil and gas industry can use to prevent failures before they happen.

When the barriers between disparate data sets are removed and the entire body of information is analyzed as a whole, the companies operating services on oil and gas rigs can work with crew members to proactively stop problems before they create unsafe conditions.

To achieve this, though, data must first be appropriately aggregated, linked and integrated. The result is a unified data set, ready for analysis, that can deliver the accurate, preemptive insights oil and gas organizations require.
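As a minimal sketch of what that aggregation and linking step might look like, assume each service company exports time-stamped readings to its own file (the file names and columns below are assumptions for illustration, not any vendor's actual format). Aligning the exports on a shared timestamp produces a single frame that downstream analysis can examine as a whole:

```python
import pandas as pd

# Hypothetical exports from separate service companies, each with its own
# time-stamped readings. File and column names are invented for this example.
cement = pd.read_csv("cement_monitoring.csv", parse_dates=["timestamp"])
valves = pd.read_csv("valve_telemetry.csv", parse_dates=["timestamp"])
mudgas = pd.read_csv("mud_gas_separator.csv", parse_dates=["timestamp"])

# Link the disparate data sets on a shared time axis so they can be
# analyzed together rather than in silos.
unified = (
    cement.merge(valves, on="timestamp", how="outer")
          .merge(mudgas, on="timestamp", how="outer")
          .sort_values("timestamp")
)

# Carry each system's most recent reading forward so every row reflects the
# last known state of all systems, despite differing reporting intervals.
unified = unified.ffill()

# The unified frame is now ready for whatever preemptive model or rule set
# the operator chooses to apply.
print(unified.tail())
```

The merge itself is straightforward; the harder organizational work is getting each service company to make its data available in the first place.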

To find out more about preparing data for preemptive analytics, contact the experts at Unifi today.