
AI advances make the leap to 3D pathology possible

Human tissue is intricate, complex and, of course, three-dimensional. But the thin slices of tissue that pathologists most often use to diagnose diseases are two-dimensional and offer only a limited view of the tissue’s true complexity. There is a growing push in the field of pathology toward examining tissue in its three-dimensional form. But 3D pathology data sets can contain hundreds of times more data than their 2D counterparts, making manual examination infeasible.

In a new study, researchers at Mass General Brigham and their collaborators present Tripath: new deep learning models that can use 3D pathology data sets to predict clinical outcomes. In collaboration with the University of Washington, the research team imaged curated prostate cancer samples using two high-resolution 3D imaging techniques. The models were then trained to predict the risk of prostate cancer recurrence from volumetric biopsies of human tissue. By comprehensively capturing the 3D morphology of the entire tissue volume, Tripath outperformed both pathologists and deep learning models that rely on 2D morphology from thin tissue slices. The results are published in Cell.

While the new approach needs to be validated in larger data sets before it can be further developed for clinical use, the researchers are optimistic about its potential to help inform clinical decision making.

“Our approach underscores the importance of comprehensively analyzing the entire volume of a tissue sample for accurate prediction of patient risk, which is the hallmark of the models we develop and is only possible with the 3D pathology paradigm,” said lead author Andrew H. Song, PhD, of the Division of Computational Pathology in the Department of Pathology at Mass General Brigham.

“Using advances in AI and 3D spatial biology techniques, Tripath provides a framework for clinical decision support and may help reveal new biomarkers for prognosis and therapeutic response,” said co-corresponding author Faisal Mahmood, PhD, of the Division of Computational Pathology, Department of Pathology, Mass General Brigham.

“In our previous work in 3D computational pathology, we looked at specific structures such as the prostate gland network, but Tripath is our first attempt to use deep learning to extract subvisual 3D features for risk stratification, showing promising potential for guiding critical aspects of treatment decisions,” said co-author Jonathan Liu, PhD, of the University of Washington.

Disclosures: Song and Mahmood are inventors on a provisional patent corresponding to the technical and methodological aspects of this study. Liu is a co-founder and member of the board of directors of Alpenglow Biosciences, Inc., which licensed the OTLS microscopy portfolio developed in his laboratory at the University of Washington.

Funding: The authors report financial support from the Presidential Fund of Brigham and Women’s Hospital (BWH), Massachusetts General Hospital (MGH) Pathology, the National Institute of General Medical Sciences (R35GM138216), the Department of Defense (DoD) Prostate Cancer Research Program (W81XWH-18-1-0358 and W81XWH-20-1-0851), the National Cancer Institute (R01CA268207), the National Institute of Biomedical Imaging and Bioengineering (R01EB031002), the Canary Foundation, an NCI Ruth L. Kirschstein National Research Service Award (T32CA251062), the Leon Troper Chair in Computational Pathology at Johns Hopkins University, UKRI, mdxhealth, NHSX, and the Clarendon Fund.