An implicit neural representation for full waveform inversion
Tianze Zhang, Jian Sun, Daniel O. Trad, Kristopher A. Innanen
We introduce and analyze implicit full waveform inversion (IFWI), which uses a neural network to generate velocity models on which full waveform inversion is performed. IFWI has two main parts: a neural network that generates velocity models, and a recurrent-neural-network FWI that performs the inversion. IFWI differs from conventional waveform inversion in two key ways. First, it does not require an initial model as conventional FWI does. Instead, it requires only general information about the target area, for instance the means and standard deviations of the medium properties, or alternatively well-log information. Second, within IFWI we update the weights of the neural network, unlike conventional FWI, which updates the velocity model directly. The network we use to generate velocity models is a fully connected network with sinusoidal activation layers, which has been shown to outperform ReLU and tanh activations because of its ability to learn high-order spatial derivatives. Through numerical tests, we demonstrate that, by controlling the random initialization of the network weights and the scale of the velocities the network generates, IFWI can in principle build accurate models in the absence of an initial model. In practice, IFWI itself may be a useful tool for building initial models for conventional or high-frequency FWI.
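The sketch below illustrates the general idea in PyTorch: a fully connected network with sinusoidal activations maps spatial coordinates to velocity, its output is rescaled by prior mean and standard deviation so no explicit initial model is needed, and optimization updates the network weights rather than the velocity grid. Layer sizes, the frequency factor omega_0, and the misfit term are illustrative assumptions; in particular, waveform_misfit is a stand-in for the recurrent-neural-network wave propagation and data comparison described in the paper, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Fully connected layer with a sinusoidal activation (SIREN-style)."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN-style initialization keeps activations well distributed.
        with torch.no_grad():
            bound = 1.0 / in_features if is_first else (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class VelocityGenerator(nn.Module):
    """Maps (z, x) coordinates to velocity, rescaled by prior statistics."""

    def __init__(self, hidden=128, v_mean=3000.0, v_std=500.0):
        super().__init__()
        self.net = nn.Sequential(
            SineLayer(2, hidden, is_first=True),
            SineLayer(hidden, hidden),
            SineLayer(hidden, hidden),
            nn.Linear(hidden, 1),
        )
        self.v_mean, self.v_std = v_mean, v_std

    def forward(self, coords):
        # Rescaling with the prior mean/standard deviation of the target area
        # replaces the explicit initial model of conventional FWI.
        return self.v_mean + self.v_std * self.net(coords)


def waveform_misfit(velocity):
    # Placeholder for the RNN-based wave propagation and data misfit;
    # here a trivial smoothness penalty so the sketch runs end to end.
    return ((velocity[1:, :] - velocity[:-1, :]) ** 2).mean()


# Coordinates of the model grid, normalized to [-1, 1].
nz, nx = 100, 200
z = torch.linspace(-1, 1, nz)
x = torch.linspace(-1, 1, nx)
coords = torch.stack(torch.meshgrid(z, x, indexing="ij"), dim=-1).reshape(-1, 2)

generator = VelocityGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

for it in range(10):
    velocity = generator(coords).reshape(nz, nx)
    loss = waveform_misfit(velocity)
    optimizer.zero_grad()
    loss.backward()   # gradients flow back to the network weights
    optimizer.step()  # the weights, not the velocity model, are updated
```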