As computer technology has advanced, so has the complexity of our signal processing, noise attenuation, travel-time correction, and migration. So before embarking on any interpretation or analysis project involving seismic data, it is important to assess the quality and thoroughness of the processing flow lying behind the stack volumes, angle stacks, gathers or whatever our starting point is.
At the outset of 3D seismic surveying — and my career — about 25 years ago, computers were small. Migrations were performed first in the inline direction in small swaths, sorted into crossline ‘miniswaths’, which were stuck together, migrated in the crossline direction, then sorted back into the inline direction for post-migration processing. The whole process took perhaps two to three months — and that was just for post-stack migration. As computers evolved, so did the migration flow: through partial pre-stack migrations in the days of DMO (dip moveout), then to our first full pre-stack time migrations. These days, complex pre-stack time or depth migrations can be run in days or even hours, and we can have migrated gathers available, with a whole suite of enhancement tools applied, in the time it once took to do just the ‘simple’ migration part.
So when starting a new interpretation project, a first question might be: what is the vintage of the dataset or reprocessing that we are using? Anything more than a few years old might well be compromised by a limited processing flow, to say nothing of the acquisition parameters used at the time. Recorded fold and offset ranges have also grown hugely with technology.
For simple post-stack migrated datasets, the scope of any interpretation might be limited to picking a set of time horizons. Simple processing flows like this often involve automatic gain control (usually just called AGC), which removes any true-amplitude information from the data — the brightest bright spots might survive, but more subtle relationships will be gone. Quantitative interpretation of any kind is risky.
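To see why AGC is so destructive to relative amplitudes, consider a minimal sketch (the window length and the two test reflectors are hypothetical choices, not any particular vendor's implementation):

```python
import numpy as np

def agc(trace, window=51):
    """Automatic gain control: divide each sample by the RMS amplitude
    in a sliding window. The window length here is an arbitrary choice."""
    half = window // 2
    power = np.pad(trace.astype(float) ** 2, half, mode='edge')
    rms = np.sqrt(np.convolve(power, np.ones(window) / window, mode='valid'))
    return trace / np.maximum(rms, 1e-12)

# A bright spot and a subtle reflector, ten times weaker:
trace = np.zeros(200)
trace[50] = 10.0
trace[150] = 1.0
gained = agc(trace)
# After AGC both events come out with the same amplitude:
# the 10:1 relationship between them is gone.
```

The gain is driven by each trace's own local energy, so any amplitude contrast slower than the window length is simply divided away.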
In the pre-stack analysis world, optimally flattened gathers are critical. The flattening of gathers is a highly involved processing flow in itself, often requiring several iterations of migration velocity model building, a full pre-stack time migration, and repicking of imaging, focusing, and stacking velocities. What happens in the event of class 1 or 2p AVO? Both involve a phase reversal with offset and might require picking a semblance minimum in our velocity analysis. Is anisotropy present, and how shall we correct for it? If it is, we can build a simplified eta field into the migration, but is that too simple? We should scan for eta at the same time as we correct for NMO (normal moveout), but that is tricky. Eta is often picked after the NMO correction — but is that correct? Have we over-flattened our gathers in the NMO process so that we cannot recover the true value of eta (or truly flat gathers)? Do our gathers even want to be flat? AVO effects can play tricks on us, as illustrated by the phase change shown here:
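The flattening step itself rests on the hyperbolic moveout equation t(x) = sqrt(t0² + x²/v²). A minimal sketch, assuming a single constant velocity and a synthetic spike gather (all parameters hypothetical — a real flow picks a velocity function v(t0) and handles stretch, mutes, and anisotropic moveout):

```python
import numpy as np

def nmo_correct(gather, offsets, dt, velocity):
    """Flatten a CMP gather with the hyperbolic moveout equation
    t(x) = sqrt(t0**2 + x**2 / v**2), using linear interpolation.
    A constant velocity is assumed here for simplicity."""
    nt, ntraces = gather.shape
    t0 = np.arange(nt) * dt                      # zero-offset two-way times
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / velocity)**2)  # arrival time of each t0 at offset x
        corrected[:, j] = np.interp(tx, t0, gather[:, j], left=0.0, right=0.0)
    return corrected

# Synthetic gather: one reflector at t0 = 0.4 s, true velocity 2000 m/s
dt, nt = 0.004, 300
offsets = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
gather = np.zeros((nt, len(offsets)))
for j, x in enumerate(offsets):
    tx = np.sqrt(0.4**2 + (x / 2000.0)**2)
    gather[int(round(tx / dt)), j] = 1.0
flat = nmo_correct(gather, offsets, dt, 2000.0)
# With the correct velocity, the event lines up near sample 100 (0.4 s)
# on every trace; with the wrong velocity it would over- or under-correct.
```

The point of all the iteration in a real flow is that this only works once the velocity (and eta) field is right — and the picking problems above decide whether it ever is.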
Signal processing, especially noise and multiple attenuation, raises at least as many questions. Even our friend deconvolution can damage an AVO response if it is applied trace by trace. Multiples are a persistent problem: they are often difficult to remove without damaging our signal, and can sometimes only become apparent a long way down the analysis or inversion route.
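A toy illustration of how trace-by-trace deconvolution can damage AVO: a Wiener spiking filter designed from each trace's own autocorrelation rescales with that trace's amplitude, so a genuine amplitude-versus-offset contrast does not survive. The filter length, prewhitening level, and synthetic traces below are all hypothetical choices:

```python
import numpy as np

def spiking_decon(trace, nf=20, prewhiten=0.01):
    """Design a Wiener spiking filter from the trace's own autocorrelation
    (Toeplitz normal equations) and convolve it back onto the trace."""
    n = len(trace)
    r = np.correlate(trace, trace, mode='full')[n - 1:n - 1 + nf]
    r[0] *= 1.0 + prewhiten                       # multiplicative prewhitening
    R = np.array([[r[abs(i - j)] for j in range(nf)] for i in range(nf)])
    rhs = np.zeros(nf)
    rhs[0] = 1.0                                  # desired output: a spike at zero lag
    f = np.linalg.solve(R, rhs)
    return np.convolve(trace, f)[:n]

# The same reflectivity seen on a near and a far trace; the far trace is
# dimmed to half amplitude by an AVO effect:
rng = np.random.default_rng(0)
wavelet = np.array([1.0, -0.7, 0.3, -0.1])
reflectivity = rng.normal(size=200)
near = np.convolve(reflectivity, wavelet)[:200]
far = 0.5 * near
after_near = spiking_decon(near)
after_far = spiking_decon(far)
# Each operator is inversely scaled by its own trace's energy, so the
# 0.5 far/near amplitude ratio is inverted rather than preserved.
```

This is why surface-consistent (rather than trace-by-trace) deconvolution is the usual safeguard when amplitudes matter.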
So knowledge of the processing flow is a vital part of any interpretation exercise. It sets expectations as to what can be achieved. Ideally, the interpreter can get involved in the processing work and become familiar with the limitations of the seismic dataset and some of its pitfalls. Failing that, a detailed processing report may be available to help answer some of these questions. If neither direct nor indirect knowledge is to be found, then perhaps reprocessing is the only option.