Ultrasound computed tomography (USCT) is an emerging computed imaging modality that holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods, which are grounded in the wave equation, can produce speed of sound (SOS) images with improved spatial resolution over those produced by ray-based methods. However, waveform inversion methods are computationally demanding, and the computational burden increases significantly when wave propagation is modeled in the 3D domain. Experimental systems that are carefully designed with elevationally focused transducers allow the SOS distribution over a 3D volume to be reconstructed as a stack of 2D slices. This circumvents the computational burden associated with 3D waveform inversion by applying full-waveform inversion (FWI) algorithms in the computationally attractive 2D domain. In such a scenario, there is a model mismatch between the 2D model employed in the reconstruction process and the 3D model that represents the true physics of wave propagation. The mismatch is more pronounced when the medium properties are inhomogeneous in 3D and can have deleterious effects on the reconstructed FWI images. To overcome this issue, we propose to implement a convolutional neural network (CNN) that can map a 3D USCT dataset to its equivalent 2D USCT dataset. The transformed data can then be subsequently employed in a 2D waveform inversion algorithm, allowing for mitigation of artifacts due to the 3D-2D model mismatch without a significant increase in computational cost. Reconstructed images from realistic numerical breast phantoms are employed to demonstrate the feasibility and effectiveness of the approach.
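The abstract does not specify the CNN architecture, so the following is only a minimal, hypothetical sketch of the data-domain mapping idea: a per-receiver time trace simulated under 3D physics is passed through a small 1D convolutional network (NumPy forward pass, untrained random weights) to produce a trace intended to approximate its 2D-equivalent counterpart. All shapes, layer counts, and kernel sizes here are illustrative assumptions, not the authors' design.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution: x is (C_in, T), w is (C_out, C_in, K),
    b is (C_out,); returns (C_out, T - K + 1)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for o in range(c_out):
        for i in range(c_in):
            for t in range(t_out):
                y[o, t] += np.dot(w[o, i], x[i, t:t + k])
        y[o] += b[o]
    return y

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)

# Hypothetical input: one receiver's time trace from a 3D wave simulation
# (1 channel, 128 time samples).
trace_3d = rng.standard_normal((1, 128))

# Two small convolutional layers with random (untrained) weights.
w1, b1 = 0.1 * rng.standard_normal((8, 1, 5)), np.zeros(8)
w2, b2 = 0.1 * rng.standard_normal((1, 8, 5)), np.zeros(1)

# Forward pass: the output plays the role of the 2D-equivalent trace
# that would then be fed to a 2D FWI algorithm.
trace_2d = conv1d(relu(conv1d(trace_3d, w1, b1)), w2, b2)
print(trace_2d.shape)  # (1, 120): each valid conv shortens the trace by K - 1
```

In practice such a network would be trained on paired 3D/2D simulated datasets (e.g., from the numerical breast phantoms mentioned above), and the mapping would typically be applied trace-by-trace or to whole receiver gathers before 2D waveform inversion.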
|Original language||English (US)|
|State||Published - Mar 15 2019|
|Event||Ultrasonic Imaging and Tomography - San Diego, United States|
|Duration||Feb 16 2019 → Feb 21 2019|