Multimodal Learning for Cancer Detection and Characterization on Radiology Images
Friday, December 12, 2025
11:00am - 12:00pm
James H. Clark Center, Room S360, 3rd floor
Abstract:
Clinical care is inherently multimodal, with medical image data collected throughout the patient’s journey. For example, a patient at risk of cancer will undergo an ultrasound-guided biopsy, with MRI, when available, revealing regions to be targeted due to their higher risk of harboring aggressive disease. The biopsy procedure collects tissue samples for pathology and informs treatment strategies for the best outcomes. This common scenario provides unique opportunities for Artificial Intelligence (AI) methods to effectively integrate multimodal data and learn imaging signatures in patients with known outcomes, enabling early cancer detection for patients at risk. My research focuses on developing AI methods that bridge the gap between highly informative modalities, e.g., pathology or MRI, and lower-resolution modalities, e.g., ultrasound. These methods build on multimodal image registration, image feature fusion, or the integration of patient-specific data with population-level information, using AI approaches for effective integration. While the learning is done with multiple imaging modalities, inference requires only the low-resolution modality, e.g., ubiquitous conventional ultrasound, with applications in low-resource settings. These methods are applied to detect cancer and its aggressive extent in various cancers, e.g., prostate, kidney, and breast.
Biography:
Dr. Mirabela Rusu is an Assistant Professor in the Department of Radiology and, by courtesy, the Departments of Urology and Biomedical Data Science at Stanford University, where she leads the Personalized Integrative Medicine Laboratory (PIMed). The PIMed Laboratory takes a multi-disciplinary approach, focusing on developing analytic methods for biomedical data integration, with a particular interest in radiology-pathology fusion to facilitate radiology image labeling. Radiology-pathology fusion allows the creation of detailed spatial labels that can later be used as input for advanced machine learning, such as deep learning. The lab's recent focus has been on applying deep learning methods to detect and differentiate aggressive from indolent prostate cancers on MRI using pathology information (both labels and image content), work recently published in the Medical Physics and Medical Image Analysis journals. The lab is also interested in further developing these approaches for ultrasound images. Dr. Rusu received a Master of Engineering in Bioinformatics from the National Institute of Applied Sciences in Lyon, France. She continued her training at the University of Texas Health Science Center at Houston, where she received a Master of Science and a PhD in Health Informatics for her work on the integration of biomolecular structural data from cryo-electron micrographs and X-ray crystallography models. During her postdoctoral training at Rutgers and Case Western Reserve University, Dr. Rusu developed computational tools for the integration and interpretation of multimodal medical imaging data, focusing on prostate and lung cancers. Prior to joining Stanford, Dr. Rusu was a Lead Engineer and Medical Image Analysis Scientist at GE Global Research in Niskayuna, NY, where she developed analytic methods to characterize biological samples in microscopy images and pathologic conditions on MRI and CT.