Foundation Models and Generative AI for Medical Imaging Segmentation in Ultra-Low Data Regimes

Talk
Pengtao Xie
Time: 05.21.2024, 11:00 to 12:00
Location: 

Semantic segmentation of medical images is pivotal for disease diagnosis and treatment planning. While deep learning has excelled at automating this task, a major hurdle is the need for large numbers of annotated masks, which are resource-intensive to produce because of the expertise and time required. This often results in ultra-low data regimes, where annotated images are scarce and deep learning models struggle to generalize to test images. To address this, we introduce two complementary approaches. The first develops foundation models: a method based on bi-level optimization that effectively adapts the general-domain Segment Anything Model (SAM) to the medical domain with just a few medical images. The second generates high-fidelity training data consisting of paired segmentation masks and medical images: a method based on multi-level optimization that performs end-to-end generation of high-quality training data from a minimal number of real images. On eight segmentation tasks spanning diverse diseases, organs, and imaging modalities, our methods demonstrate strong generalization in both in-domain and out-of-domain settings, and they require 8 to 12 times less training data than baselines to achieve comparable performance.
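To give a flavor of the bi-level optimization idea mentioned in the abstract, here is a minimal, generic sketch: an inner level fits model parameters given a meta-parameter, and an outer level tunes that meta-parameter on a small held-out set. The ridge-regression setting, the variable names, and the tiny dataset sizes are all illustrative assumptions, not the speaker's actual method for adapting SAM.

```python
# Illustrative bi-level optimization (NOT the talk's method): inner level fits
# ridge-regression weights for a given regularization strength; outer level
# tunes that strength on a small validation set, echoing adaptation with only
# a handful of labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic "ultra-low data regime": 8 training, 4 validation examples.
X_tr, X_val = rng.normal(size=(8, 5)), rng.normal(size=(4, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.3 * rng.normal(size=8)
y_val = X_val @ w_true + 0.3 * rng.normal(size=4)

def inner_solve(lam):
    """Inner level: closed-form minimizer of ||X w - y||^2 + lam * ||w||^2."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_loss(lam):
    """Outer objective: validation loss of the inner-level solution."""
    w = inner_solve(lam)
    return float(np.mean((X_val @ w - y_val) ** 2))

# Outer level: pick the regularization strength minimizing validation loss.
candidates = np.logspace(-3, 2, 30)
best_lam = min(candidates, key=val_loss)
print(f"best lambda = {best_lam:.4g}, val loss = {val_loss(best_lam):.4g}")
```

In practice the inner problem (e.g., fine-tuning SAM's weights) has no closed form, so gradient-based hypergradient methods replace the grid search used here; the two-level structure is the same.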