This collection provides public access to a 3D pathology dataset of prostate cancer, allowing researchers to investigate various 3D tissue structures and their correlation with prostate cancer patient outcomes (biochemical recurrence). These 3D tissue structures are revealed through: (1) an H&E-analog stain, (2) synthetically generated immunofluorescence staining of cytokeratin-8 (CK8), targeting the luminal epithelial cells of all prostate glands, and (3) 3D segmentation masks of the gland lumen, epithelium, and stromal regions of prostate biopsies. The 3D pathology datasets provided in this collection were generated in Dr. Jonathan Liu's lab at the University of Washington with a custom open-top light-sheet (OTLS) microscope developed by the lab. All datasets are from the 50 patient cases studied in the associated publication, and all of the clinical outcomes data provided here have already been published within its supplement. Patients are referred to with coded identifiers, and there is no clinical metadata within the imaging files. In this TCIA collection, we provide the 2x down-sampled fused OTLS images (H&E-analog staining), the synthetic CK8 immunofluorescence images at 2x down-sampled resolution, the 3D semantic segmentation masks of glands at 4x down-sampled resolution, the clinical data for patient outcomes (biochemical recurrence), and the coordinates of the cancer-enriched regions of each biopsy. The Python code for the deep-learning models, and for 3D glandular segmentation based on synthetic-CK8 datasets, is available on GitHub. This data collection will promote research in the field of computational 3D pathology for clinical decision support.
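As a minimal sketch of how the 3D segmentation masks might be consumed, the snippet below computes per-class volume fractions from a labeled 3D array. The class labels, their names, and the array layout are assumptions for illustration and not the collection's documented encoding; a tiny synthetic array stands in for a real 4x down-sampled gland mask.

```python
import numpy as np

# Assumed label encoding (hypothetical, not from the collection docs).
LABELS = {0: "stroma", 1: "epithelium", 2: "lumen"}

def volume_fractions(mask: np.ndarray) -> dict:
    """Return the fraction of voxels belonging to each tissue class."""
    total = mask.size
    return {name: float(np.count_nonzero(mask == lbl)) / total
            for lbl, name in LABELS.items()}

# Tiny synthetic 3D mask standing in for a down-sampled segmentation volume.
rng = np.random.default_rng(0)
mask = rng.integers(0, 3, size=(8, 64, 64))
fractions = volume_fractions(mask)
print(fractions)  # three fractions summing to ~1.0
```

Per-gland features of this kind (e.g. lumen-to-epithelium ratios over a biopsy volume) are the sort of 3D morphology that could be correlated with biochemical recurrence.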
Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification, by Meng Zhou and 5 other authors

Abstract: Prostate Cancer (PCa) is often diagnosed using high-resolution 3.0 Tesla (T) MRI, which has been widely established in clinics. However, many medical centers still use 1.5T MRI units in the actual diagnostic process of PCa. In the past few years, deep learning-based models have been proven to be efficient on the PCa classification task and can be successfully used to support radiologists during the diagnostic process. However, training such models often requires a vast amount of data, which is sometimes unobtainable in practice. Additionally, multi-source MRIs can pose challenges due to cross-domain distribution differences. In this paper, we present a novel approach for unpaired image-to-image translation of prostate mp-MRI for classifying clinically significant PCa, to be applied in data-constrained settings. First, we introduce domain transfer, a novel pipeline to translate unpaired 3.0T multi-parametric prostate MRIs to 1.5T, to increase the number of training data. Second, we estimate the uncertainty of our models through an evidential deep learning approach and leverage a dataset-filtering technique during training. Furthermore, we introduce a simple yet efficient Evidential Focal Loss that incorporates the focal loss with evidential uncertainty to train our model. Our experiments demonstrate that the proposed method significantly improves the Area Under the ROC Curve (AUC) by over 20% compared to previous work, reaching 98.4%. We envision that providing prediction uncertainty to radiologists may help them focus on uncertain cases and thus expedite the diagnostic process.
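The combination of focal loss with evidential uncertainty can be sketched as follows. This is an illustrative formulation in the spirit of the abstract, not the paper's exact loss: it follows the standard evidential deep learning setup (Dirichlet parameters alpha = evidence + 1, uncertainty u = K / S) and applies the focal modulating factor (1 - p_t)^gamma to the expected class probabilities; `gamma` and the function names are assumptions.

```python
import numpy as np

def evidential_focal_loss(evidence: np.ndarray, targets: np.ndarray,
                          gamma: float = 2.0) -> float:
    """evidence: (N, K) non-negative network outputs; targets: (N,) class ids."""
    alpha = evidence + 1.0                 # Dirichlet parameters
    strength = alpha.sum(axis=1, keepdims=True)
    probs = alpha / strength               # expected class probabilities
    p_t = probs[np.arange(len(targets)), targets]
    # Focal modulation down-weights easy, confidently classified examples.
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t)
    return float(loss.mean())

def predictive_uncertainty(evidence: np.ndarray) -> np.ndarray:
    """Evidential uncertainty u = K / S; approaches 1 when evidence is scarce."""
    alpha = evidence + 1.0
    num_classes = alpha.shape[1]
    return num_classes / alpha.sum(axis=1)

# Example: one high-evidence (confident) and one low-evidence (uncertain) case.
ev = np.array([[9.0, 0.0], [0.5, 0.5]])
y = np.array([0, 0])
print(evidential_focal_loss(ev, y))
print(predictive_uncertainty(ev))  # low for the first row, high for the second
```

Per-sample uncertainty of this kind is also what would drive the dataset-filtering step and the triage of uncertain cases to radiologists.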