Why oil and gas companies must act on analytics

To a large extent, the financial results of companies in many sectors of the oil and gas (O&G) industry depend on the performance of complex, capital-intensive process facilities. But until very recently, O&G companies have not had the tools or the capabilities needed to operate these assets at their maximum capacity.

What’s changed? Put simply, analytics tools and techniques have advanced far and fast—to the point where they can unlock the production potential of complex process facilities and enhance asset investment returns in O&G. Applied properly, advanced analytics can yield returns as high as 30-50 times investment within a few months of implementation. Moreover, they can positively transform the organizations and fundamental operating models of O&G production systems.

O&G has a $200 billion performance gap

Lower oil prices, now in their third year, have lent real urgency to the ongoing efforts to improve O&G process efficiency. Those efforts deserve praise: operators have been investing in the best conventional technology at their disposal and they have been successful in capturing some value from those investments. In the North Sea, for instance, operators have been able to reverse declining process efficiencies and even increase their average production rates.

Yet the hard truth is that most operators have not maximized the production potential of their assets. McKinsey benchmarks reveal that the typical offshore platform runs at approximately 77 percent of its maximum production potential. Industry-wide, this shortfall represents something in the order of 10 million barrels per day, or US$200 billion in annual revenue.

Conventional control systems, tools, and training fall short

The primary source of O&G’s performance gap is the operational complexity of production and processing facilities. Think about a crew of two or three control room operators on an offshore rig. The crew works at the center of a massive data hub. As many as 30,000 sensors continuously feed data into this hub from downhole, subsea, and topside equipment. In theory, the crew controls 200 or so operating variables, each with a multitude of different settings. In addition to the millions of possible control combinations these variables represent, the crew must also consider exogenous factors that affect production, including wave heights, temperature, and humidity.

Of course, the crew is supported by SCADA systems, simulation tools, extensive training and onshore experts. But those control systems and practices are usually calibrated to the design capabilities of the asset. They do not update dynamically. They seldom take exogenous factors into account. They typically are not updated when the asset is modified. There are substantive flaws in the tools—and in the training that offshore crews receive. Simulation tools, for instance, have only a limited capacity to process actual performance and operational data.

The complexity translates into material performance differences. Analysis of real performance data from an offshore field in the North Sea reveals more than a 5 percent difference in production output between the highest- and lowest-performing control room crews. At another asset, the difference was a staggering 12 percent. These performance data were adjusted for scheduled downtime and larger unplanned production outages.

Advanced analytics can bridge the performance gap

To smooth out such glaring differences, and to raise the performance bar overall, operators must embrace advanced analytics. Today’s powerful tools use a combination of state-of-the-art engineering, data science, and computing power to identify superior solutions to complex production optimization problems. They will not replace the conventional models and physical understanding of O&G asset operation—they will supplement them, filling in the performance gaps that hold back production.

Advanced analytics are particularly effective in environments that involve copious amounts of data and highly complex and variable operating conditions—that is, the same environments that O&G operators currently struggle to manage with simulator training, rules of thumb, and on-the-job experience.

Advanced analytics are powered by machine learning, which uses statistical methods and computing power to spot patterns among hundreds of variables under continually changing conditions. Those patterns are used to build algorithms that analyze the parameters critical to production, quality, and efficiency, alert operators to conditions hours or days ahead, and enable them to respond quickly and effectively. In short, advanced analytics identify bottlenecks and recommend prescriptive actions aimed at ensuring optimal operating conditions.
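To make this concrete, here is a minimal sketch of the kind of early-warning model described above: a classifier trained on historical sensor data to flag a constraint condition several hours before it occurs. The file name, column names, pressure threshold, and six-hour horizon are illustrative assumptions, not details from any specific asset.

```python
# Minimal sketch of an early-warning model: predict whether a production
# constraint will be active several hours ahead, using historical sensor data.
# File name, column names, threshold, and the 6-hour horizon are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("platform_sensors.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Label: is the constraint (here, an assumed pressure limit) breached 6 hours from now?
HORIZON = 6  # hours; assumes hourly sensor readings
df["constraint_in_6h"] = (df["separator_pressure"] > 18.0).shift(-HORIZON)
df = df.dropna()

features = df.drop(columns=["constraint_in_6h"])
target = df["constraint_in_6h"].astype(int)

# Keep the split chronological so the model is evaluated on later, unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, shuffle=False
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The essential trick is shifting the label forward in time: the model learns to recognize the sensor patterns that precede a constraint, which is what gives operators the lead time to act.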

Advanced analytics at work offshore

Adoption of such tools is still in its early days in the O&G sector, but we have seen their promise at a North Sea operator. The company piloted advanced analytics on a mature semi-submersible production platform. Even though the platform was already one of the world’s top-performing installations, with a production efficiency of 95 percent, there were indications that analytics could help optimize production settings and raise production output. These included daily and weekly variations in production, material performance differences between operator crews, and the high complexity of the asset.

To understand the driving factors of production, the team of data scientists used three years’ worth of data collected from 5,000 sensors and controls, totaling hundreds of gigabytes. The team used a machine-learning algorithm to skim the data for correlations and causalities.

The first finding highlighted the limitations of human teams in maximizing production: of the 200 or so control variables available in the control room, about 50 were found to be essential to production. Yet the average operator team was working with only 10-20 control variables. Moreover, different teams were using their own “signature” control settings, based on their experience.
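A common way to surface such "essential" variables is to rank them by their influence on production in a model trained on historical data. The sketch below shows one such approach; the data file and column names are assumptions, and this kind of importance ranking reflects correlation rather than causation, so it is only a starting point for engineering review.

```python
# Sketch: rank the ~200 control-room variables by their influence on production
# rate, to surface the subset that matters most. File and column names are
# hypothetical; importances indicate association, not causation.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("control_variables_and_production.csv")
X = df.drop(columns=["production_rate"])   # ~200 control/operating variables
y = df["production_rate"]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

importance = (
    pd.Series(model.feature_importances_, index=X.columns)
      .sort_values(ascending=False)
)
print(importance.head(50))   # candidate "essential" variables for operator focus
```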

The data scientists also used machine learning to identify five bottlenecks that were constraining production on the platform. For example, a bottleneck caused by high gas pressure variation could be resolved with two algorithms: a simpler one to predict the risk of unplanned pressure spikes (slugging), and a second to minimize the size of slugs by optimizing the choke settings for each of the 100 producing wells. By predicting the probability of pressure build-ups and reducing the size of each pressure spike, the operator could capture a 1-2 percent gain in production if the solution is made fully operational.
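As a rough illustration of the second step, the sketch below fits a surrogate model from choke settings to slug size and then searches candidate settings for the configuration with the smallest predicted slugs. The synthetic data, model choice, and simple random search are assumptions for illustration only; a production system would use the operator's own well and riser data and a proper constrained optimizer.

```python
# Rough sketch: fit a surrogate model mapping choke settings to slug size, then
# search candidate settings for the smallest predicted slug. Data is synthetic;
# a real implementation would be trained on historical operating data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

N_WELLS = 100
rng = np.random.default_rng(0)

# Stand-in history: choke openings (fraction open) per well -> observed slug size.
X_hist = rng.uniform(0.2, 1.0, size=(5000, N_WELLS))
y_hist = X_hist.var(axis=1) * 50 + rng.normal(0, 0.1, size=5000)  # synthetic target

slug_model = GradientBoostingRegressor().fit(X_hist, y_hist)

# Evaluate a batch of candidate choke configurations within operating limits
# and keep the one with the smallest predicted slug size.
candidates = rng.uniform(0.2, 1.0, size=(2000, N_WELLS))
best = candidates[np.argmin(slug_model.predict(candidates))]
print("Suggested choke settings (first 5 wells):", np.round(best[:5], 2))
```

In practice the search would also be constrained to keep total production on target and to limit how quickly any individual choke can be moved.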

Algorithms were also developed to address a major bottleneck created by high oil content in water. For environmental reasons, operators are not allowed to release water with an oil content above 18 parts per million. When the oil content exceeds that level, the production rate must be reduced and the entire production team must work to resolve the issue. This problem was constraining production and consuming the energy of the production team on almost half of all operating days.

The team applied an algorithm to historical data to determine the probability of oil-in-water incidents and to formulate the most effective mitigating actions for specific conditions. A further algorithm was developed to predict such incidents early, giving the operator the time needed to prevent them altogether. The pilot result: an expected production increase of 0.25-0.5 percent, and more time to pursue other production-enhancing activities. The use case also revealed an important constraint on fully implementing and realizing the benefits of machine-learning tools: incident data must be recorded frequently enough to train the algorithms. For this particular asset, oil-in-water (OiW) sampling only once a day was not sufficient.
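The sampling-frequency constraint can be checked before any modeling is attempted. The short sketch below counts exceedance days from a daily OiW measurement file; the 18 ppm limit and once-a-day sampling follow the text, while the file and column names are assumptions.

```python
# Quick check of whether daily OiW sampling gives enough signal to train on.
# The 18 ppm limit and daily sampling follow the article; file and column
# names are assumptions.
import pandas as pd

oiw = pd.read_csv("oil_in_water_samples.csv", parse_dates=["sample_date"])

# One reading per day: flag each sampled day that exceeded the 18 ppm limit.
daily = oiw.set_index("sample_date")["oil_ppm"].resample("D").max()
incident_days = int((daily > 18).sum())
print(f"{incident_days} incident days out of {int(daily.notna().sum())} sampled days")

# A single reading per day tells a model little about how oil content builds up
# in the hours before an exceedance, so there is too little signal around each
# incident to train a reliable early-warning predictor.
```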

On several other assets, McKinsey teams have developed methods to reduce unplanned downtime caused by failures in production-critical equipment. Predictive maintenance is a well-known use case for achieving this. Another example is improving the reliability of the gas compression system, the most critical system on many platforms, where any downtime causes large losses. One team developed algorithms to predict failures in a gas compressor train with more than 70 percent accuracy. Here too, the increased production from implementing and acting on these models amounts to 0.25-0.5 percent.
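The method behind the compressor-failure predictor is not described here, but a typical formulation labels the hours leading up to each recorded failure as "at risk" and trains a classifier on sensor features, as in the hypothetical sketch below; the files, columns, and 24-hour window are assumptions.

```python
# Sketch of the labeling step behind a compressor-failure predictor: mark the
# 24 hours before each recorded failure as "at risk", then train a classifier
# on sensor features. Files, columns, and the window length are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

sensors = pd.read_csv("compressor_sensors.csv", parse_dates=["timestamp"])
sensors = sensors.set_index("timestamp").sort_index()
failures = pd.read_csv("compressor_failures.csv", parse_dates=["failure_time"])

# Label every reading within 24 hours before a failure as positive.
sensors["at_risk"] = 0
for t in failures["failure_time"]:
    sensors.loc[t - pd.Timedelta(hours=24): t, "at_risk"] = 1

X = sensors.drop(columns=["at_risk"])
y = sensors["at_risk"]

# Chronological split: train on the first 80% of history, test on the rest.
split = int(len(sensors) * 0.8)
model = RandomForestClassifier(n_estimators=200).fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])

# On imbalanced failure data, report precision and recall, not just accuracy.
print("precision:", precision_score(y.iloc[split:], pred))
print("recall:   ", recall_score(y.iloc[split:], pred))
```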

Five requisites for exploiting advanced analytics

So what does it take for O&G companies to tap the power of advanced analytics? These five elements are essential:

  1. Data availability: Data is the fuel for advanced analytics. The good news is that most O&G companies have vast volumes of semi-structured data on hand, and they are using less than 1 percent of it.
  2. Analytics infrastructure: Today, computational power and tools are cheap, and off-the-shelf analytics solutions are readily available. Although the market for solutions tailored to O&G processes is still evolving, there are plenty of niche offerings, such as predictive maintenance software for O&G, and no shortage of services that can be customized.
  3. Analytics skills and capabilities: Many O&G companies will want to begin developing an internal analytics capability from day one. First, they will need to hire or train data scientists. This may require creating a center of excellence in data science that can develop, implement, and train others to use advanced analytics, as well as ensure that external investments are well spent and properly managed. Second, companies will need to develop the ability to translate business problems into advanced analytics problems, and to convert analytics solutions into actionable insights.
  4. Redesigned work and governance: To capture the full benefits of analytics, O&G companies should be prepared to redesign work processes to improve process efficiency. They should also rethink the interfaces between their central production optimization centers and the resources located at the asset. Analytics projects should start with the end user in mind, seek to understand how processes currently work, and identify and eliminate process-based bottlenecks and inefficiencies.
  5. Business-driven agility: To build momentum for the adoption of advanced analytics, O&G companies should launch short, quick pilot projects designed to produce clearly measurable gains. When those pilots have demonstrated analytics’ value, companies should formulate a clear vision and road map for transforming their operating models, based on a clear understanding of where and how to create business value.

Given the speed at which advanced analytics and machine learning are developing, we expect that fully autonomous control systems for complex processing facilities will be available within five years. Moreover, we think that in as little as 10 years’ time, production facilities will be run very differently. That is because more and more industry executives are realizing that the big gains from analytics won’t happen unless they drastically rethink the operation of their production assets. This may involve centralizing control in one or a few production optimization facilities whose machine learning systems run the daily operations of a far-flung network of assets. The first O&G companies to make those moves will, we believe, become the industry leaders of tomorrow.
