Seismic Processing Timeline

The Evolution of Seismic Processing

Better pictures of the subsurface required better math.

Better math required better computing.

Core pattern

As seismic algorithms moved from stacking to migration, RTM, and FWI, compute requirements increased by orders of magnitude.

Data growth

Typical seismic projects grew from analog records and megabyte-scale digital surveys to modern programs spanning 100 TB to multiple petabytes.

Architecture shift

Mainframes gave way to vector systems, then distributed-memory clusters, and finally GPU-dense platforms with very high I/O throughput.

Why oil and gas mattered

Seismic processing consistently pushed memory bandwidth, storage bandwidth, and large-scale numerical simulation harder than most commercial workloads.

How the algorithms evolved

The cleanest way to understand this history is to track the progression of seismic algorithms. As computing power increased, the industry moved toward more complete representations of wave physics.

| Stage | What it added | Relative cost |
| --- | --- | --- |
| Migration | Transforms recorded events into a spatial image of the subsurface. | 1× baseline |
| Reverse Time Migration | Uses the full two-way wave equation and handles complex geology much better than simpler migration methods. | ~10× |
| Acoustic FWI | Fits full wavefields to update velocity models iteratively. | ~100× |
| Elastic FWI | Adds shear physics and more complete subsurface behavior. | ~500–1000× |
| Multi-physics inversion | Pushes toward richer coupled models and even larger optimization problems. | 1000×+ |
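
Both RTM and FWI in the table above ultimately reduce to propagating wavefields with the two-way wave equation, over and over, across very large grids. The sketch below is a minimal 2D constant-density acoustic finite-difference time step; the grid size, velocities, and variable names are illustrative assumptions, and production kernels add high-order stencils, absorbing boundaries, and GPU-tuned memory layouts.

```python
import numpy as np

def acoustic_step(p_prev, p_curr, vel, dx, dt):
    """One explicit time step of the 2D constant-density acoustic wave
    equation, p_tt = v^2 * laplacian(p), with a 2nd-order stencil.
    Illustrative only: np.roll gives periodic boundaries, whereas real
    RTM/FWI kernels use absorbing boundaries and tuned memory layouts."""
    lap = (
        -4.0 * p_curr
        + np.roll(p_curr, 1, axis=0) + np.roll(p_curr, -1, axis=0)
        + np.roll(p_curr, 1, axis=1) + np.roll(p_curr, -1, axis=1)
    ) / dx**2
    # Leapfrog update in time: p(t+dt) from p(t) and p(t-dt).
    return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap

# Toy model (assumed values): 400 x 400 grid, 1500 m/s water over a 2500 m/s layer.
nx = nz = 400
dx, dt = 10.0, 1e-3                      # 10 m grid spacing, 1 ms time step
vel = np.full((nz, nx), 1500.0)
vel[200:, :] = 2500.0

p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
p_curr[50, nx // 2] = 1.0                # impulsive source near the surface

for _ in range(1000):                    # 1 s of propagation
    p_next = acoustic_step(p_prev, p_curr, vel, dx, dt)
    p_prev, p_curr = p_curr, p_next
```

An RTM image needs forward and time-reversed propagations like this for every shot in the survey, and FWI wraps many such propagations inside an iterative optimization loop, which is where the ~100× to 1000×+ multipliers in the table come from.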

Processing Workload Advances

Seismic HPC repeatedly evolved across the same planes: compute, network, storage, control, and facility power. Algorithm demands shaped each of them.

| Era | Channels | Traces per Survey | Workload | Processing Scale |
| --- | --- | --- | --- | --- |
| 1930s | 6–12 | ~1,000 | Manual interpretation (travel-time measurement, hand calculations) | Concentrating really hard |
| 1940s | 12–24 | ~5,000 | Manual plus mechanical calculators | Discussing with others |
| 1950s | 24–28 | ~20,000 | Early analog filtering and stacking experiments | Analog Kiloscale |
| 1960s | 48–96 | ~100,000 | Digital stacking and velocity analysis | Megascale |
| 1970s | 96–240 | ~1 million | Digital filtering, deconvolution, NMO stacking | Tens of Megascale |
| 1980s | 240–1,000 | ~10 million | Early 3D processing and migration | Gigascale |
| 1990s | 1,000–3,000 | ~100 million | Large-scale 3D migration | Tens to Hundreds of Gigascale |
| 2000s | 3,000–10,000 | ~1 billion | Prestack time migration | Terascale |
| 2010s | 10,000–50,000 | ~10 billion | Reverse Time Migration | Petascale |
| 2020s | 50,000–200,000+ | ~100 billion | RTM + FWI and AI-assisted processing | Exascale |
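
The stacking and NMO workloads that dominate the middle of this table boil down to one operation: correct each trace in a common-midpoint gather for its offset-dependent travel time, then average the traces to suppress noise. A minimal sketch follows, assuming a single constant stacking velocity and illustrative names (`gather`, `offsets`); real velocity analysis scans many trial velocities and picks a time-variant function.

```python
import numpy as np

def nmo_stack(gather, offsets, dt, v_nmo):
    """Apply normal moveout correction to a CMP gather and stack.

    gather  : (n_traces, n_samples) array of recorded amplitudes
    offsets : source-receiver offset per trace, in metres
    dt      : sample interval in seconds
    v_nmo   : stacking velocity in m/s (constant here for simplicity)
    """
    n_traces, n_samples = gather.shape
    t0 = np.arange(n_samples) * dt                 # zero-offset two-way times
    corrected = np.zeros_like(gather, dtype=float)
    for i, x in enumerate(offsets):
        # Hyperbolic moveout: t(x) = sqrt(t0^2 + x^2 / v^2)
        t_x = np.sqrt(t0**2 + (x / v_nmo) ** 2)
        # Resample each trace at the moveout-corrected times.
        corrected[i] = np.interp(t_x, t0, gather[i], left=0.0, right=0.0)
    # Stacking: average the corrected traces to boost signal-to-noise.
    return corrected.mean(axis=0)
```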

Data Growth Forcing Architectural Changes

The rise of HPC in oil and gas was not driven by compute alone. Acquisition systems recorded more channels, at denser sampling, over longer records and larger areas. That created a storage and network problem just as serious as the CPU or GPU problem.
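
A back-of-envelope calculation shows why. The figures below are illustrative assumptions, not from any specific survey, but they land in the same range as the modern programs described in this section.

```python
# Back-of-envelope raw data volume for one modern survey (assumed, illustrative numbers).
channels        = 50_000          # live recording channels
sample_rate_hz  = 500             # 2 ms sampling
record_len_s    = 10              # seconds recorded per shot
bytes_per_samp  = 4               # 32-bit samples
shots           = 200_000         # shot points over the survey area

samples_per_trace = sample_rate_hz * record_len_s
bytes_total = channels * shots * samples_per_trace * bytes_per_samp
print(f"{bytes_total / 1e12:.0f} TB raw")   # ~200 TB before any processing copies
```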

| Era | Typical data size | Dominant acquisition pattern | Processing implication |
| --- | --- | --- | --- |
| 1950s | KB to MB equivalent | Analog 2D reflection surveys | Manual and analog processing |
| 1960s | MB to 100 MB | Digitized 2D surveys on magnetic tape | Mainframe batch processing |
| 1970s | 100 MB to 1 GB | Multi-channel marine acquisition | Industrial tape-library workflows |
| 1980s | 10 GB to 100 GB | Commercial 3D seismic surveys | Vector supercomputers for large numerical methods |
| 1990s | 100 GB to 1 TB | Larger 3D marine surveys | Massively parallel computing and distributed storage |
| 2000s | 1 TB to 10 TB | Wide-azimuth and ocean-bottom systems | Linux clusters and parallel filesystems |
| 2010s | 10 TB to 100 TB | High-density nodal and monitoring deployments | GPU acceleration and high-throughput storage |
| 2020s | 100 TB to multi-PB | DAS, permanent monitoring, ultra-dense arrays | Exascale-class hybrid HPC systems |