My collection of geophysical notes written as Jupyter notebooks.
Do magic things with well log data.
- well_2*.txt: raw log data from Well 2 of Quantitative Seismic Interpretation (QSI)
- qsiwell2.csv: all the logs assembled from the various files above
- qsiwell2_frm.csv: qsiwell2 + fluid-replaced elastic logs
- qsiwell2_augmented.csv: barebones well data, only Ip, Vp/Vs and LFC (litho-fluid class log)
- qsiwell2_synthetic.csv: synthetic data generated through Monte Carlo simulation, same logs as in qsiwell2_augmented.csv (Ip, Vp/Vs and LFC)
- qsiwell2_dataprep.py: Python script to assemble all the original QSI files
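As a quick orientation, here is a minimal sketch of how one of these CSV files could be loaded and inspected with pandas; the column names used below (LFC in particular) are assumptions, so check the file header first.

```python
import pandas as pd

# load the augmented well data; the exact column names are an assumption --
# print them first to see what the file actually contains
logs = pd.read_csv('qsiwell2_augmented.csv')
print(logs.columns.tolist())

# quick look at the litho-fluid classes, assuming a column named 'LFC'
if 'LFC' in logs.columns:
    print(logs['LFC'].value_counts())
```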
How to load and display SEG-Y files, plus some simple ways to play with the data, e.g. extracting amplitude information, adding noise and filtering. There is also a notebook entirely dedicated to wedge modeling, and a couple of notebooks showing how to reproduce figures from scientific publications. A minimal SEG-Y loading sketch follows the list of data files below.
- Playing with seismic
- Playing with seismic (interactive)
- Amplitude extraction
- Wedge modeling for variable angles of incidence
- Notes on spectral decomposition
- Top Heimdal map, or how to reproduce figure 1 from Avseth et al., 2001
- AVO projections
- How to calculate AVO attributes (a minimal intercept/gradient sketch follows this list)
- Elastic Impedance
- "The relationship between reflectivity and elastic impedance", or how to reproduce figure 5.62 from Seismic Amplitude by Simm & Bacon (2014)
- Notes on anisotropic AVO equations
- AVO Explorer v2: an interactive explorer of AVO responses and AVO classes, meant to be downloaded and run locally.
- Simple porosity modeling: how to model porosity variations and their effects on elastic properties using the concept of pore stiffness invariance.
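As a companion to the AVO notebooks above, here is a minimal sketch of a two-term Shuey intercept/gradient calculation for a single interface; the layer properties are made-up numbers used purely for illustration, not values from the QSI dataset.

```python
import numpy as np

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2):
    """Two-term Shuey approximation: R(theta) ~= R0 + G * sin(theta)**2.

    Layer 1 is the upper medium, layer 2 the lower one; returns the
    intercept R0 and gradient G of the interface between them.
    """
    # averages and contrasts of each property across the interface
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1

    r0 = 0.5 * (dvp / vp + drho / rho)
    g = 0.5 * dvp / vp - 2 * (vs / vp)**2 * (drho / rho + 2 * dvs / vs)
    return r0, g

# made-up shale-over-gas-sand properties (Vp, Vs in m/s, rho in g/cc)
r0, g = shuey_two_term(2770, 1350, 2.45, 2330, 1590, 2.11)

theta = np.radians(np.arange(0, 31))
refl = r0 + g * np.sin(theta)**2      # reflectivity between 0 and 30 degrees
print(f"intercept = {r0:.3f}, gradient = {g:.3f}")
```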
- 16_81_PT1_PR.SGY, 16_81_PT2_PR.SGY, 16_81_PT3_PR.SGY, 31_81_PR.SGY: 2D lines in SEG-Y format from the USGS Alaska dataset
- 3d_farstack.sgy, 3d_nearstack.sgy: 3D cubes from the QSI dataset (see above)
- Top_Heimdal_subset.txt: interpreted horizon for the QSI near and far angle cubes
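To get started with these files, here is a minimal loading and display sketch using segyio (more on SEG-Y libraries below); it assumes 3d_nearstack.sgy sits in the working directory and that its inline/crossline byte locations are the standard ones (189 and 193), which may need adjusting.

```python
import matplotlib.pyplot as plt
import segyio

# open the near-stack cube; iline/xline byte locations are assumed to be
# the SEG-Y standard ones (189/193) -- adjust if the headers differ
with segyio.open('3d_nearstack.sgy', iline=189, xline=193) as f:
    data = segyio.tools.cube(f)          # ndarray: (ilines, xlines, samples)
    xlines, twt = f.xlines, f.samples

# display the first inline, time increasing downwards
plt.imshow(data[0].T, cmap='RdBu', aspect='auto',
           extent=[xlines[0], xlines[-1], twt[-1], twt[0]])
plt.xlabel('crossline')
plt.ylabel('TWT [ms]')
plt.colorbar(label='amplitude')
plt.show()
```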
Other notebooks of interest, maybe only tangentially related to geophysics, such as one showing a comparison between colormaps (the dreadful jet against a bunch of better alternatives) and another that uses the well-known Gardner's equation as an excuse to practice data fitting in Python.
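On the data-fitting side, this is a minimal sketch of what fitting Gardner's relation (rho = a * Vp^b) with scipy looks like; the Vp/density data below are synthetic, generated only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gardner(vp, a, b):
    """Gardner's relation: density as a power law of Vp."""
    return a * vp**b

# synthetic data built from Gardner's classic coefficients
# (a=0.31, b=0.25 for Vp in m/s and rho in g/cc) plus some noise
rng = np.random.default_rng(0)
vp = np.linspace(2000, 4500, 200)
rho = gardner(vp, 0.31, 0.25) + rng.normal(0, 0.02, vp.size)

# recover the coefficients from the noisy data
(a, b), _ = curve_fit(gardner, vp, rho, p0=(0.3, 0.25))
print(f"a = {a:.3f}, b = {b:.3f}")
```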
I used to recommend either Enthought's Canopy Express or Anaconda. I haven't been using Canopy for a while now and I'm very particular about installing stuff on my computers, so right now what I use (and suggest everyone else do) is to install a minimal version of Anaconda called Miniconda. Starting from this, it's easy to install the packages you need for your work and nothing else. For example, this is how I set up my system for work:
$ conda install numpy scipy pandas matplotlib jupyter scikit-learn scikit-image xarray dask netCDF4 bottleneck
$ conda install -c bokeh colorcet
$ conda install -c conda-forge jupyterlab
Then I install some additional packages with pip:
$ pip install bruges lasio segyio
Instead of integrated environments (for example, Spyder), I simply use a modern (and free!) editor like VSCode (Atom is a good alternative) to code and write. However, JupyterLab gets better every day and can already be used to do everything in a browser window (though to me it's still slower than a text editor plus a Jupyter console window); I like the idea of Jupyter Notebooks both to distribute commented code and simply as a working tool that makes code interact with explanatory text and plots.
To read and write SEG-Y data in Python you need additional libraries such as ObsPy, Segpy or Equinor's segyio.
My current favourite, however, is segysak, which is built on top of segyio. In addition to the speed of the underlying segyio (a 340 MB file is read in about 1 second, while ObsPy needs almost 9; see the timeit results below), it adds some very cool features to map horizons onto seismic cubes (amplitude extraction!).
# timeit results using segyio:
1.11 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# timeit results using obspy:
8.85 s ± 1.07 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
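These numbers came from %timeit runs inside a notebook; a rough sketch of how a similar comparison could be reproduced in a plain script is shown below, where the filename is a placeholder for any reasonably large SEG-Y file.

```python
import timeit

import obspy
import segyio

FILENAME = 'some_large_file.sgy'    # placeholder: use any large SEG-Y file

def load_with_segyio():
    # assumes standard inline/crossline byte locations in the headers
    with segyio.open(FILENAME) as f:
        return segyio.tools.cube(f)

def load_with_obspy():
    return obspy.read(FILENAME, format='SEGY')

# single-shot timings; %timeit in a notebook averages over several runs
for func in (load_with_segyio, load_with_obspy):
    print(f"{func.__name__}: {timeit.timeit(func, number=1):.2f} s")
```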
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.