Adaptive and Compressive Beamforming Using Deep Learning for Medical Ultrasound

Shujaat Khan*, Jaeyoung Huh, Jong Chul Ye

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

104 Scopus citations

Abstract

In ultrasound (US) imaging, various types of adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of the delay and sum (DAS) beamformers. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here, we propose a deep-learning-based beamformer to generate significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or subsampled radio frequency (RF) data acquired at various subsampling rates and detector configurations so that it can generate high-quality US images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using the B-mode focused US confirm the efficacy of the proposed methods.
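To make the baseline concrete: the delay-and-sum (DAS) beamformer that the abstract contrasts against aligns each channel's radio frequency (RF) samples by the geometric round-trip delay to an imaging point and sums across channels. The sketch below is a minimal, generic illustration of that principle only — the function name, single-scan-line geometry, and parameter defaults are assumptions for illustration, not the paper's implementation or its deep-learning beamformer.

```python
import numpy as np

def das_beamform(rf, element_x, depths, c=1540.0, fs=40e6):
    """Minimal delay-and-sum over one scan line (illustrative only).

    rf        : (n_channels, n_samples) RF channel data.
    element_x : (n_channels,) lateral element positions in meters,
                relative to the scan line at x = 0.
    depths    : imaging depths in meters along the scan line.
    c, fs     : assumed speed of sound (m/s) and sampling rate (Hz).
    Returns one beamformed sample per depth.
    """
    n_ch, n_samp = rf.shape
    out = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # Round-trip delay: transmit straight down to depth z,
        # then receive from (0, z) back to each element at (x, 0).
        tau = (z + np.sqrt(z**2 + element_x**2)) / c
        idx = np.round(tau * fs).astype(int)
        valid = idx < n_samp  # drop channels whose delay exceeds the record
        out[i] = rf[np.arange(n_ch)[valid], idx[valid]].sum()
    return out
```

Adaptive schemes (e.g., the Capon beamformer listed in the keywords) replace the uniform sum with data-dependent channel weights, which is where model mismatch and channel subsampling degrade performance — the problem the proposed deep neural network addresses.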

Original language: English
Article number: 9025198
Pages (from-to): 1558-1572
Number of pages: 15
Journal: IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Volume: 67
Issue number: 8
DOIs
State: Published - Aug 2020
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1986-2012 IEEE.

Keywords

  • Adaptive beamformer
  • B-mode
  • Capon beamformer
  • beamforming
  • ultrasound (US) imaging

ASJC Scopus subject areas

  • Instrumentation
  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
