Abstract

As a result of improved technology and declining conventional gas reserves, shale gas (SG) and other tight-rock reservoirs have emerged as significant sources of oil and natural gas. In recent years, natural gas prices in North America have been depressed as a result of supply outpacing demand. Suppressed commodity pricing has made unconventional gas production uneconomic or marginally economic in many areas, which places a greater emphasis on prospect analysis and careful selection of areas of investigation and drilling locations. This paper discusses a new tool, developed specifically for generating probabilistic P10, P50, and P90 type curves for shale plays from a series of input production wells, which can be used in the early stages of the stochastic analysis of shale gas prospects. The technique is discussed in detail, and a sample case demonstrates the methodology for a simulated prospect.
Published (Last): 27 January 2012
When there are significant uncertainties in our heterogeneous reservoir descriptions, as there almost always are, there is no such thing as a meaningful P10, P50, P90, or Px case description.
There are only meaningful P10, P50, or Px results, which must be determined from probabilistic analysis. Any valid question in reservoir modeling, regardless of the model used, must be asked of some number of cases, representing many combinations of the uncertainties, in order to obtain a probabilistic distribution of the answer.
Absolute predictions require a statistically significant set of cases, but optimizations may require only a small number (see SensorPx Example 3).
Individual scenarios have a near-zero probability of occurrence, and any desired number of Px cases of oil recovery, for example, can be found or constructed. When the uncertainties have large effects on the results, no single case can answer any question or represent any probability of description or behavior; a case is virtually meaningless in itself, beyond the Px probabilistic result that it reproduces and that was used to choose it among the considered scenarios.
In probabilistic analysis, one might run many realizations of equally probable combinations of the uncertain variables.
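This loop can be sketched minimally as follows. The forward model, variable names, and uncertainty ranges here are hypothetical stand-ins for illustration, not SensorPx's actual workflow:

```python
import random

random.seed(42)

def forward_model(k1, k2):
    # Hypothetical stand-in for one simulation run: recovery responds
    # nonlinearly to two uncertain permeabilities.
    return 100.0 * (k1 * k2) ** 0.5 / (k1 + k2)

# Run equally probable combinations of the uncertain variables.
results = sorted(
    forward_model(random.uniform(250.0, 1000.0),  # assumed K1 range
                  random.uniform(25.0, 100.0))    # assumed K2 range
    for _ in range(10_000)
)

n = len(results)
# Exceedance convention: P10 is the high/optimistic estimate,
# P90 the low/pessimistic one.
p90 = results[int(0.10 * n)]
p50 = results[int(0.50 * n)]
p10 = results[int(0.90 * n)]
```

The distribution of the answer, not any individual run, is the probabilistic result.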
A given case might give a P90 oil recovery and a P10 gas recovery. In general, there will be no realization that gives P90 results for more than one variable. Depending on how much effect the uncertain variables have on the results, there may be no such thing as "what happens in the neighborhood of the P10"; such a neighborhood exists only when the effects are small.
Two P90 cases of a given variable can easily be absolutely and completely different. Multiple P10 cases tend to exhibit less difference in description and behavior.
The differences among multiple cases giving a result of the same exceedance probability Px decrease with decreasing value of x. Consider a case that gives an optimistic P10 oil recovery and includes very many wells.
From that case one might easily construct two P90 (pessimistic) oil recovery cases that each operate a completely different subset of the wells in the P10 case. That same extreme difference among representative cases naturally results from stochastic or probabilistic representation of the most common uncertain variables, porosity and permeability.
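A toy demonstration of this point, under the (hypothetical) assumption that recovery depends on the sum of two uncertain layer multipliers, so very different descriptions can yield nearly identical recoveries:

```python
import random

random.seed(0)

# Toy model: "recovery" is the SUM of two uncertain layer multipliers,
# so very different descriptions can give the same recovery.
cases = [(random.uniform(0.5, 2.0), random.uniform(0.5, 2.0))
         for _ in range(20_000)]
results = sorted((a + b, a, b) for a, b in cases)

# P90 (pessimistic, low) recovery under the exceedance convention.
p90_value = results[int(0.10 * len(results))][0]

# All runs whose recovery is within 1% of the P90 value...
near_p90 = [(a, b) for s, a, b in results
            if abs(s - p90_value) < 0.01 * p90_value]
# ...still span a wide range of top-layer multipliers.
spread = max(a for a, _ in near_p90) - min(a for a, _ in near_p90)
```

Many runs reproduce essentially the same P90 recovery while disagreeing strongly about the description itself.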
Coming up with a Px case requires a reverse model calculation, no matter what model is used, to determine the inputs corresponding to a set of Px scalar outputs as a function of time.
That is a far more complex task than the forward model. Companies and processes insisting on single (or a few) representative probabilistic cases, rather than results, are not using reservoir models correctly. Providing a number, as an answer to a valid question, is never a problem. The same principles apply to any model with uncertain inputs that is used to obtain a probabilistic solution, including the use of analogs and experience.
For example, unless you have production data, initial well rate and decline curve parameters are always uncertain. Sufficient analogs, experience, and data allow engineers to compare wells with ranked analog results and estimate probabilistic results from them.
The uncertainty decreases with increasing applicable analogs, experience, and data. The applicable analogs are simply the realizations of the uncertainties. Local similarities in field geology and production behavior are learned from experience and significantly reduce uncertainty. Simulation can further reduce uncertainty by relating production and injection to input descriptive properties of the reservoir(s), well(s), facilities, and fluids, and to initial and boundary conditions versus time.
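As a sketch of probabilistic estimation from uncertain decline parameters: the Arps hyperbolic cumulative formula is standard, but the parameter ranges below are assumptions chosen only for illustration:

```python
import random

random.seed(1)

def arps_cum(qi, di, b, t):
    # Cumulative production for a hyperbolic Arps decline (0 < b < 1):
    #   Np = qi / (di*(1 - b)) * (1 - (1 + b*di*t)**(1 - 1/b))
    return qi / (di * (1.0 - b)) * (1.0 - (1.0 + b * di * t) ** (1.0 - 1.0 / b))

# Hypothetical uncertain decline parameters (ranges assumed).
eurs = []
for _ in range(5000):
    qi = random.uniform(800.0, 1200.0)  # initial rate, Mcf/d
    di = random.uniform(0.8, 1.5)       # initial nominal decline, 1/yr
    b = random.uniform(0.3, 0.9)        # hyperbolic exponent
    eurs.append(arps_cum(qi, di, b, 20.0) * 365.0)  # 20-yr cum, Mcf

eurs.sort()
p90 = eurs[int(0.10 * len(eurs))]  # pessimistic (exceedance convention)
p50 = eurs[int(0.50 * len(eurs))]
p10 = eurs[int(0.90 * len(eurs))]  # optimistic
```

Narrowing the parameter ranges (more analogs, more data) directly narrows the P90-P10 band.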
But much of that input data is potentially uncertain, especially in optimizations, when we are attempting to determine optimal values of input controls such as well locations, completions, and constraints. Simulation can be used to numerically quantify the uncertainty in predicted production and injection, and to probabilistically optimize it, as a function of uncertainty in input descriptive variables and of options in control variables.
SPE1 [1] makes a good example. It is a well-known 10x10x3 black-oil model of gas injection and oil recovery, with the single gas injector completed in block (1,1,1) and the single producer in block (10,10,3). Layer thicknesses are 20, 30, and 50 ft, respectively. Change the bhp limit of the gas injection well to avoid effects of the negative-compressibility error in the specified PVT data. Also complete both the injector and the producer in all 3 layers, and change Kz to be equal to.
The Sensor datafile is spe1pbase. Assume that the base case is our "best guess" case, defined by the most likely values of all uncertain inputs, and that only the areal layer permeabilities K1, K2, and K3 are uncertain. Base case values are 500, 50, and 200 md, respectively. Assume that the base case layer perms, estimated from analogs, are not in error by more than a factor of 2 in either direction.
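A factor-of-2 uncertainty of this kind might be sampled as follows. This is only a sketch: the log-uniform multiplier is an assumption (it makes halving and doubling equally probable), and it is not necessarily the scheme SensorPx uses:

```python
import random
from math import exp, log

random.seed(7)

base_perms = (500.0, 50.0, 200.0)  # SPE1 base layer permeabilities, md

def sample_perms():
    # Each layer perm is wrong by at most a factor of 2 either way;
    # the multiplier is sampled log-uniformly on [0.5, 2.0] (an
    # assumption -- any distribution consistent with the data could
    # be used instead).
    return tuple(k * exp(random.uniform(log(0.5), log(2.0)))
                 for k in base_perms)

realizations = [sample_perms() for _ in range(1000)]
```

Each realization is one equally probable combination of the three uncertain layer permeabilities.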
Results for the given individual cases, verifiable with any simulator (they should be the same or very close): The base case gives cumulative oil recovery of The Sensor data file spe1p. Each execution of spe1p. The Makespx datafile spe1p. The large number of runs required to obtain a statistically significant set, for only 3 total variables, is due to the fact that the variables have very large effects on results over their given ranges.
In another example using different assumptions of uncertainty (the link to SensorPx Example 1 is at the bottom of this page), 10,000 runs were found to be sufficient to quantify uncertainty in results for uncertain variables that are randomly populated according to their input probability distributions. The calculated Field Px,y,z values are given in the SensorPx output file spe1p. Results at the end of the run are: Note that each Px, Py, and Pz result in the entire table is from a different case (the case number giving each result is indicated below the result).
For example, of all runs, the case number shown in the above table has the lowest cumulative gas production and final producing GOR, and the highest final average pressure (all related to very low gas production).
But in general there is no such thing as a Px, Py, or Pz case. Consider the case that was found to give that result: to reproduce it, we can choose either to decrease the top layer perm or to increase the bottom layer perm, and there are a very large number of combinations of the 3 variables that will give the same result. In terms of what is allowed to vary in the base case, any conclusions of probable description or behavior that may be inferred from any chosen Px case, or from differences between any two chosen Px and Py cases, are virtually guaranteed to be wrong!
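The point that each Px result comes from a different case can be reproduced with synthetic data (the two outputs and their distributions below are hypothetical):

```python
import random

random.seed(3)

n = 2000
# Hypothetical scalar outputs (cumulative oil, cumulative gas) per run.
cases = [{"oil": random.gauss(100.0, 15.0), "gas": random.gauss(50.0, 10.0)}
         for _ in range(n)]

def px(cases, key, x):
    # Exceedance Px: the value that x percent of cases meet or exceed,
    # returned together with the index of the case that produced it.
    order = sorted(range(len(cases)), key=lambda i: cases[i][key],
                   reverse=True)
    i = order[int(x / 100.0 * len(cases)) - 1]
    return cases[i][key], i

oil_p10, i_oil = px(cases, "oil", 10)
gas_p10, i_gas = px(cases, "gas", 10)
# i_oil and i_gas are (almost surely) different runs: there is no
# single "P10 case" for both outputs.
```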
To efficiently compute probabilistic results from data uncertainties, the number of uncertain variables, generally equal to many times the number of gridblocks and wells, must be minimized. We must strive to build and evaluate, as quickly as we can, as many of the fastest, coarsest, and least detailed models that remain sufficient, rather than the most detailed ones. Detailed modeling is valid only at the fine scale, for subsequent upscaling to field-scale problems that can be practically solved.
In our discrete upscaled numerical models, any surface or feature is justifiably represented by nothing more than large coarse-block average permeability and porosity and rock type distributions.
Conforming the grid to detail or to surfaces is counter-productive. All issues of behavior, including history matching and optimization, must be investigated with respect to the probabilistic results given by a large set of possible realizations. We can determine the uncertainty in our predictions only by basing them on many equally probable history matches or scenarios. In optimization, questions that cannot be represented by changes in the data sets (history matches) cannot be answered by reservoir modeling.
The question of whether or not Option A is better than Option B is represented in all the realizations by some change in the data. The probability that A is better than B is given by the fraction of realizations in which the Option A run gives the better result.
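That win-fraction calculation can be sketched as follows; the response functions are hypothetical, chosen so that neither option dominates:

```python
import random

random.seed(11)

def recovery(option, k):
    # Hypothetical responses: A wins at low perm multiplier k, B wins
    # at high k (the crossover is near k = 1.56).
    return 100.0 * k ** 0.5 if option == "A" else 80.0 * k

# Evaluate both options against the SAME equally probable realizations.
ks = [random.uniform(0.5, 2.0) for _ in range(5000)]
wins_a = sum(recovery("A", k) > recovery("B", k) for k in ks)
prob_a_better = wins_a / len(ks)
```

Running both options on the same set of realizations (paired comparison) is what lets the win fraction be read as a probability.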
A simple component of probabilistic forecasting and optimization workflows for Sensor is provided by SensorPx. The Px,y,z results are output in files casename. Exceedance P10 is a high, optimistic estimate, and exceedance P90 is pessimistic. Cumulative P10 is a low, pessimistic estimate, and cumulative P90 is a high estimate. Both exceedance and cumulative probabilities are commonly used. The terms "at least" and "at most" appear in the above definitions because Pxi and Pyi values can be the same.
Usually, if enough cases are run and if significant uncertainty exists, Pxi results will be continuously variable in x, and the terms "at least" and "at most" do not apply. Runs for real cases in which the wells remain rate-limited at all times are very rare.
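The two conventions can be contrasted on a toy sample. The indexing choices here are one simple possibility for illustration, not SensorPx's exact definitions:

```python
def percentile(vals, p):
    # Cumulative convention: roughly p percent of cases fall at or below.
    s = sorted(vals)
    return s[min(len(s) - 1, int(p / 100.0 * len(s)))]

def exceedance(vals, x):
    # Exceedance convention: roughly x percent of cases meet or exceed.
    s = sorted(vals, reverse=True)
    return s[min(len(s) - 1, int(x / 100.0 * len(s)))]

vals = list(range(1, 101))      # 100 hypothetical run results
cum_p10 = percentile(vals, 10)  # low, pessimistic estimate
exc_p10 = exceedance(vals, 10)  # high, optimistic estimate
cum_p90 = percentile(vals, 90)  # high estimate
exc_p90 = exceedance(vals, 90)  # low estimate
```

The same label (P10) names a low estimate under one convention and a high estimate under the other, which is why the convention must always be stated.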
1. Odeh, A.S.: "Comparison of Solutions to a Three-Dimensional Black-Oil Reservoir Simulation Problem," JPT (January 1981).
Terminology Explained: P10, P50 and P90
When a risk simulation is complete, you are left with three forecasts based on their percentiles: a P10, a P50, and a P90 forecast. These forecasts are automatically exported to the Analysis Manager, where they can be viewed from other worksheets, such as forecast or decline worksheets. When a risk simulation is re-run, these forecasts are updated with the most recent results. The three lines visible on the rate-vs-time and rate-vs-cumulative plots are the P10, P50, and P90 curves.
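Percentile type curves of this kind can be sketched as follows; the exponential declines and parameter ranges are assumptions for illustration, not any vendor's algorithm:

```python
import math
import random

random.seed(9)

# Hypothetical input wells: exponential-decline monthly rate profiles.
def profile(qi, di, months=36):
    return [qi * math.exp(-di * t / 12.0) for t in range(months)]

wells = [profile(random.uniform(500.0, 1500.0), random.uniform(0.6, 1.2))
         for _ in range(200)]

# Probabilistic type curves: at each timestep, take a percentile of
# rate across all wells (exceedance convention: P10 curve is high).
def type_curve(wells, frac):
    curve = []
    for t in range(len(wells[0])):
        rates = sorted(w[t] for w in wells)
        curve.append(rates[int(frac * len(rates))])
    return curve

p90_curve = type_curve(wells, 0.10)  # pessimistic
p50_curve = type_curve(wells, 0.50)
p10_curve = type_curve(wells, 0.90)  # optimistic
```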
P50 (and P90, Mean, Expected and P10)
When working with Monte Carlo simulations, parameters that show up quite a lot are the P10, P50, and P90. The large amount of data produced by statistical methods sometimes makes it difficult to use the results effectively in the decision-making process. An example of their use in the oil and gas industry is the estimation of potential lifecycle outcomes. Sometimes, when running models with large variation, analysts will run simulations spanning many lifecycles. The P10, P50, and P90 are useful parameters for understanding how the numbers are distributed in a sample.
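A minimal sketch of reading P10, P50, and P90 from a Monte Carlo sample, using the cumulative (percentile) convention; the lognormal sample is hypothetical:

```python
import random

random.seed(5)

# Hypothetical Monte Carlo sample of lifecycle results (arbitrary units).
sample = sorted(random.lognormvariate(0.0, 0.5) * 100.0
                for _ in range(10_000))

# Cumulative (percentile) convention: P10 is the low outcome that 10%
# of trials fall below; P90 is the high outcome that 90% fall below.
p10 = sample[int(0.10 * len(sample))]
p50 = sample[int(0.50 * len(sample))]
p90 = sample[int(0.90 * len(sample))]
```

The three numbers summarize the whole distribution far more usefully than the raw list of trials.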