Matches in SemOpenAlex for { <https://semopenalex.org/work/W4365814183> ?p ?o ?g. }
- W4365814183 endingPage "912" @default.
- W4365814183 startingPage "899" @default.
- W4365814183 abstract "The accuracy and timeliness of the pathologic diagnosis of soft tissue tumors (STTs) critically affect treatment decision and patient prognosis. Thus, it is crucial to make a preliminary judgement on whether the tumor is benign or malignant with hematoxylin and eosin–stained images. A deep learning–based system, Soft Tissue Tumor Box (STT-BOX), is presented herein, with only hematoxylin and eosin images for malignant STT identification from benign STTs with histopathologic similarity. STT-BOX assumed gastrointestinal stromal tumor as a baseline for malignant STT evaluation, and distinguished gastrointestinal stromal tumor from leiomyoma and schwannoma with 100% area under the curve in patients from three hospitals, which achieved higher accuracy than the interpretation of experienced pathologists. Particularly, this system performed well on six common types of malignant STTs from The Cancer Genome Atlas data set, accurately highlighting the malignant mass lesion. STT-BOX was able to distinguish ovarian malignant sex-cord stromal tumors without any fine-tuning. This study included mesenchymal tumors that originated from the digestive system, bone and soft tissues, and reproductive system, where the high accuracy of migration verification may reveal the morphologic similarity of the nine types of malignant tumors. Further evaluation in a pan-STT setting would be potential and prospective, obviating the overuse of immunohistochemistry and molecular tests, and providing a practical basis for clinical treatment selection in a timely manner. The accuracy and timeliness of the pathologic diagnosis of soft tissue tumors (STTs) critically affect treatment decision and patient prognosis. Thus, it is crucial to make a preliminary judgement on whether the tumor is benign or malignant with hematoxylin and eosin–stained images. A deep learning–based system, Soft Tissue Tumor Box (STT-BOX), is presented herein, with only hematoxylin and eosin images for malignant STT identification from benign STTs with histopathologic similarity. STT-BOX assumed gastrointestinal stromal tumor as a baseline for malignant STT evaluation, and distinguished gastrointestinal stromal tumor from leiomyoma and schwannoma with 100% area under the curve in patients from three hospitals, which achieved higher accuracy than the interpretation of experienced pathologists. Particularly, this system performed well on six common types of malignant STTs from The Cancer Genome Atlas data set, accurately highlighting the malignant mass lesion. STT-BOX was able to distinguish ovarian malignant sex-cord stromal tumors without any fine-tuning. This study included mesenchymal tumors that originated from the digestive system, bone and soft tissues, and reproductive system, where the high accuracy of migration verification may reveal the morphologic similarity of the nine types of malignant tumors. Further evaluation in a pan-STT setting would be potential and prospective, obviating the overuse of immunohistochemistry and molecular tests, and providing a practical basis for clinical treatment selection in a timely manner. Recently, artificial intelligence (AI) has made great progress in assisting pathologic diagnosis of epithelial malignancy. Applications of deep learning based on convolutional neural networks (CNNs) in carcinomas of the skin,1Esteva A. Kuprel B. Novoa R.A. Ko J. Swetter S.M. Blau H.M. Thrun S. Dermatologist-level classification of skin cancer with deep neural networks.Nature. 
2017; 542: 115-118Crossref PubMed Scopus (54) Google Scholar breast,2Bejnordi B.E. Veta M. Van Diest P.J. Van Ginneken B. Karssemeijer N. Litjens G. Van Der Laak J.A. Hermsen M. Manson Q.F. Balkenhol M. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer.JAMA. 2017; 318: 2199-2210Crossref PubMed Scopus (1678) Google Scholar, 3Le H. Gupta R. Hou L. Abousamra S. Fassler D. Torre-Healy L. Moffitt R.A. Kurc T. Samaras D. Batiste R. Utilizing automated breast cancer detection to identify spatial distributions of tumor-infiltrating lymphocytes in invasive breast cancer.Am J Pathol. 2020; 190: 1491-1504Abstract Full Text Full Text PDF PubMed Scopus (51) Google Scholar, 4Li J. Mi W. Guo Y. Ren X. Fu H. Zhang T. Zou H. Liang Z. Artificial intelligence for histological subtype classification of breast cancer: combining multi-scale feature maps and the recurrent attention model.Histopathology. 2022; 80: 836-846Crossref PubMed Scopus (2) Google Scholar, 5Lin H. Chen H. Dou Q. Wang L. Qin J. Heng P.-A. Scannet: a fast and dense scanning framework for metastastic breast cancer detection from whole-slide image.in: IEEE Winter Conference on Applications of Computer Vision: IEEE. 2018: 539-546Google Scholar, 6Litjens G. Bandi P. Ehteshami Bejnordi B. Geessink O. Balkenhol M. Bult P. Halilovic A. Hermsen M. van de Loo R. Vogels R. 1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset.Gigascience. 2018; 7: giy065Crossref PubMed Scopus (171) Google Scholar prostate,7da Silva L.M. Pereira E.M. Salles P.G. Godrich R. Ceballos R. Kunz J.D. Casson A. Viret J. Chandarlapaty S. Ferreira C.G. Independent real-world application of a clinical-grade automated prostate cancer detection system.J Pathol. 2021; 254: 147-158Crossref PubMed Scopus (33) Google Scholar,8Ström P. Kartasalo K. Olsson H. Solorzano L. Delahunt B. Berney D.M. Bostwick D.G. Evans A.J. Grignon D.J. Humphrey P.A. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: a population-based, diagnostic study.Lancet Oncol. 2020; 21: 222-232Abstract Full Text Full Text PDF PubMed Scopus (273) Google Scholar lung,9Viswanathan V.S. Toro P. Corredor G. Mukhopadhyay S. Madabhushi A. The state of the art for artificial intelligence in lung digital pathology.J Pathol. 2022; 257: 413-429Crossref PubMed Scopus (18) Google Scholar kidney,10Bouteldja N. Hölscher D.L. Klinkhammer B.M. Buelow R.D. Lotz J. Weiss N. Daniel C. Amann K. Boor P. Stain-independent deep learning–based analysis of digital kidney histopathology.Am J Pathol. 2023; 193: 73-83Abstract Full Text Full Text PDF PubMed Scopus (3) Google Scholar,11Hermsen M. Ciompi F. Adefidipe A. Denic A. Dendooven A. Smith B.H. van Midden D. Bräsen J.H. Kers J. Stegall M.D. Convolutional neural networks for the evaluation of chronic and inflammatory lesions in kidney transplant biopsies.Am J Pathol. 2022; 192: 1418-1432Abstract Full Text Full Text PDF PubMed Scopus (6) Google Scholar stomach,12Park J. Jang B.G. Kim Y.W. Park H. Kim B.-H. Kim M.J. Ko H. Gwak J.M. Lee E.J. Chung Y.R. A prospective validation and observer performance study of a deep learning algorithm for pathologic diagnosis of gastric tumors in endoscopic BiopsiesDeep learning–assisted diagnosis in gastric biopsies.Clin Cancer Res. 2021; 27: 719-728Crossref PubMed Scopus (18) Google Scholar colorectum,13Kumar N. Verma R. Chen C. Lu C. Fu P. Willis J. Madabhushi A. 
Computer-extracted features of nuclear morphology in hematoxylin and eosin images distinguish stage II and IV colon tumors.J Pathol. 2022; 257: 17-28Crossref PubMed Scopus (3) Google Scholar, 14Echle A. Grabsch H.I. Quirke P. van den Brandt P.A. West N.P. Hutchins G.G. Heij L.R. Tan X. Richman S.D. Krause J. Clinical-grade detection of microsatellite instability in colorectal tumors by deep learning.Gastroenterology. 2020; 159: 1406-1416.e11Abstract Full Text Full Text PDF PubMed Scopus (146) Google Scholar, 15Xu H. Cha Y.J. Clemenceau J.R. Choi J. Lee S.H. Kang J. Hwang T.H. Spatial analysis of tumor-infiltrating lymphocytes in histological sections using deep learning techniques predicts survival in colorectal carcinoma.J Pathol Clin Res. 2022; 8: 327-339Crossref PubMed Scopus (8) Google Scholar and liver16Kim Y.J. Jang H. Lee K. Park S. Min S.-G. Hong C. Park J.H. Lee K. Kim J. Hong W. PAIP 2019: liver cancer segmentation challenge.Med Image Anal. 2021; 67101854Abstract Full Text Full Text PDF Scopus (41) Google Scholar have improved the diagnostic efficiency and accuracy, and even shown a more objective and forward-looking trend than the diagnoses made by pathologists. In contrast, AI is scarcely studied in the diagnosis of soft tissue tumors (STTs). It is challenging for pathologists to judge whether an STT is benign or malignant based only on hematoxylin and eosin (H&E)–stained slides. STT includes a variety of benign, borderline, and malignant tumors that originate from the mesenchymal connective tissue. Most tumors consist of spindle cells or contain a portion of spindle cell components.17Ywasa Y. Fletcher C. Flucke U. WHO Classification of Soft Tissue and Bone Tumours. IARC Publications, Lyon, France2020Google Scholar Routinely, malignant STTs are distinguished from benign STTs primarily according to the arrangement, atypia, and nuclear mitosis of tumor cells, tumor margin growth pattern, secondary changes as tumor necrosis, and hemorrhage. Unfortunately, some malignant STTs are similar in growth pattern and morphologic features to the benign ones.17Ywasa Y. Fletcher C. Flucke U. WHO Classification of Soft Tissue and Bone Tumours. IARC Publications, Lyon, France2020Google Scholar Immunohistochemistry and genetic tests are often performed to make an accurate diagnosis, which inevitably increases the burden on patients and difficulty of diagnosis in local hospitals.18Burns J. Brown J.M. Jones K.B. Huang P.H. The cancer genome atlas: impact and future directions in sarcoma.Surg Oncol Clin. 2022; 31: 559-568Abstract Full Text Full Text PDF PubMed Scopus (3) Google Scholar The present study focused on STTs originated from the digestive system, soft tissue, and bone, as well as mesenchymal tumors from reproductive system, to establish and test the deep learning–based system, called Soft Tissue Tumor Box (STT-BOX), in distinguishing malignant STTs from benign STTs (Supplemental Figure S1 provides the system interface of STT-BOX). First, the core model of the system was trained on gastrointestinal stromal tumors (GISTs). GIST is the most common malignant STT of the gastrointestinal tract,19Joensuu H. Hohenberger P. Corless C.L. Gastrointestinal stromal tumour.Lancet. 2013; 382: 973-983Abstract Full Text Full Text PDF PubMed Scopus (441) Google Scholar and has varying degrees of recurrence and metastasis risk.20Papke Jr., D.J. Hornick J.L. Recent developments in gastroesophageal mesenchymal tumours.Histopathology. 2021; 78: 171-186Crossref PubMed Scopus (8) Google Scholar,21Joensuu H. 
Vehtari A. Riihimäki J. Nishida T. Steigen S.E. Brabec P. Plank L. Nilsson B. Cirilli C. Braconi C. Risk of recurrence of gastrointestinal stromal tumour after surgery: an analysis of pooled population-based cohorts.Lancet Oncol. 2012; 13: 265-274Abstract Full Text Full Text PDF PubMed Scopus (689) Google Scholar Histopathologically, it is usually difficult to distinguish GISTs from benign tumors with spindle cell morphology, such as leiomyoma and schwannoma, both in gross specimens and H&E-stained slides. Pathologists must apply a panel of immunohistochemical (IHC) antibodies to distinguish GISTs from benign STTs with similar histopathologic characteristics.22Joensuu H. Risk stratification of patients diagnosed with gastrointestinal stromal tumor.Hum Pathol. 2008; 39: 1411-1419Crossref PubMed Scopus (893) Google Scholar,23Karakas C. Christensen P. Baek D. Jung M. Ro J.Y. Dedifferentiated gastrointestinal stromal tumor: recent advances.Ann Diagn Pathol. 2019; 39: 118-124Crossref PubMed Scopus (13) Google Scholar Genetically, most GISTs harbor gain-of-function mutations in either c-KIT or platelet-derived growth factor receptor α (PDGFRA) oncogene.19Joensuu H. Hohenberger P. Corless C.L. Gastrointestinal stromal tumour.Lancet. 2013; 382: 973-983Abstract Full Text Full Text PDF PubMed Scopus (441) Google Scholar,21Joensuu H. Vehtari A. Riihimäki J. Nishida T. Steigen S.E. Brabec P. Plank L. Nilsson B. Cirilli C. Braconi C. Risk of recurrence of gastrointestinal stromal tumour after surgery: an analysis of pooled population-based cohorts.Lancet Oncol. 2012; 13: 265-274Abstract Full Text Full Text PDF PubMed Scopus (689) Google Scholar,23Karakas C. Christensen P. Baek D. Jung M. Ro J.Y. Dedifferentiated gastrointestinal stromal tumor: recent advances.Ann Diagn Pathol. 2019; 39: 118-124Crossref PubMed Scopus (13) Google Scholar Pathologists use a panel of protein biomarkers and molecular detection to achieve the definite diagnosis and guide accurate target therapy. This brings great challenges to many pathology departments that lack diagnostic experience and auxiliary pathologic technology. Therefore, considering the achievements of CNNs in distinguishing carcinoma histopathologic features, a novel CNN-based system may help in the diagnosis of malignant STTs represented by GIST. By training a variety of CNNs, an effective hierarchical feature representation strategy was proposed so that only H&E-stained slides were sufficient to distinguish GIST. Furthermore, the STT-BOX system achieved higher diagnostic accuracy than experienced pathologists. Next, the H&E-stained images of six common types of soft tissue sarcomas were tested from The Cancer Genome Atlas (TCGA) data set (https://gdc.cancer.gov, last accessed February 24, 2023). This included a total of 235 cases from 32 centers, including leiomyosarcoma, dedifferentiated liposarcoma, undifferentiated pleomorphic sarcoma, myxofibrosarcoma, synovial sarcoma, and malignant peripheral nerve sheath tumor.24Lazar A.J. McLellan M.D. Bailey M.H. Miller C.A. Appelbaum E.L. Cordes M.G. Fronick C.C. Fulton L.A. Fulton R.S. Mardis E.R. Comprehensive and integrated genomic characterization of adult soft tissue sarcomas.Cell. 2017; 171: 950-965Abstract Full Text Full Text PDF PubMed Scopus (567) Google Scholar The STT-BOX system accurately highlighted the malignant mass lesion in each case. 
Lastly, the established CNN-based GIST model was transferred to the reproductive system, and the effectiveness of the STT-BOX system was tested in the diagnosis of mesenchymal tumors from different primary locations. There are three common types of ovarian sex-cord stromal tumors [SCSTs; ie, theca cell tumors (TCTs), adult granulosa cell tumors (AGCTs), and Sertoli-Leydig cell tumors (SLCTs)].25 TCT is benign, whereas AGCT and SLCT are malignant. All three types contain a spindle cell component and therefore have similar histopathology.26,27 Without any training or fine-tuning, the STT-BOX system showed excellent capability and stability in distinguishing benign from malignant SCSTs. Overall, this study demonstrated the potential of the STT-BOX system to distinguish malignant STTs from benign ones solely through H&E images. The code is available online (https://github.com/dreambamboo/STT-BOX-public, last accessed February 24, 2023).
This study was approved by the Institutional Review Board and Ethics Committee Board of Peking University Third Hospital (Beijing, China). The present study consisted of four steps (Figure 1A) and included 430 whole-slide images (WSIs) from 386 enrolled patients (Supplemental Table S1). Step 1 constructed the system for differentiating spindle cell–type GIST from benign schwannomas and leiomyomas, using WSIs collected from Peking University Third Hospital (Dataset-P). Step 2 performed cross-cohort validation of the system on WSIs of spindle cell–type GIST, schwannomas, and leiomyomas from Peking University Cancer Hospital and Institute (Dataset-T) and Beijing Luhe Hospital (Dataset-L). When selecting the GIST cases, the IHC results for CD117, Dog-1, CD34, S100, α-smooth muscle actin, and desmin, as well as gene mutation analysis of c-KIT and PDGFRA, were all known. The tumor size of all GISTs was at least 2 cm. Finally, a total of 145 WSIs, 6250 screenshots, and 50,000 patches were enrolled in training, cross-validation, and cross-cohort validation (Supplemental Tables S2 and S3 and Supplemental Figures S2 and S3). Dataset-P was used for training and three-fold cross-validation (Supplemental Table S3). Patients in different folds were independent of each other, ensuring that data from patients in the test set would not appear in the training set. Dataset-T and Dataset-L were used to validate the generalization of the present system across cohorts (Supplemental Table S2).
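To make the patient-level independence of the folds concrete, a minimal sketch of such a split follows; the record structure, field names, and round-robin assignment are illustrative assumptions, not taken from the STT-BOX repository.

```python
# Minimal illustration (not from the STT-BOX code): a patient-level three-fold
# split, so that all patches from one patient stay in a single fold.
from collections import defaultdict

def patient_level_folds(patch_records, n_folds=3):
    """patch_records: list of dicts with hypothetical keys 'patient_id' and 'path'."""
    by_patient = defaultdict(list)
    for rec in patch_records:
        by_patient[rec["patient_id"]].append(rec)
    folds = [[] for _ in range(n_folds)]
    # Assign whole patients round-robin to keep fold sizes roughly balanced.
    for i, patient in enumerate(sorted(by_patient)):
        folds[i % n_folds].extend(by_patient[patient])
    return folds

# Usage: train on two folds and test on the held-out fold, rotating three times.
```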
Step 3 applied the present system to direct testing and highlighting of malignant lesions on the public TCGA data set. There were 262 cases of soft tissue sarcoma in the TCGA data set, each containing H&E-stained images of frozen and paraffin sections. Two specialist pathologists (Y.L. and Y.W.) reviewed all the pathologic reports and images and, for each case, selected the best-qualified paraffin section image containing tumor and peritumoral areas. According to the pathologic report and morphologic observation of each case, these six types of tumors are either spindle cell subtypes or contain spindle cell components. Finally, 235 WSIs were used for testing, including leiomyosarcoma (n = 96), dedifferentiated liposarcoma (n = 54), undifferentiated pleomorphic sarcoma (n = 44), myxofibrosarcoma (n = 22), synovial sarcoma (n = 10), and malignant peripheral nerve sheath tumor (n = 9). Step 4 addressed automatic identification of ovarian TCTs, AGCTs, and SLCTs on Dataset-O without model fine-tuning. In Dataset-P/T/L/O, at least one tissue slide was available for every patient. All slides were completely anonymized, and images were scanned at ×40 magnification (0.12 μm/pixel) by two pathologists (Y.W. and B.H.) with a UNIC scanner (UNIC Technologies, Inc., Beijing, China). The clinicopathologic features of all cases in each cohort are summarized in Supplemental Table S1.
First, unlike conventional deep learning algorithms trained with large amounts of data, a feature extractor was built with limited data from Dataset-P (Figure 1B). Second, a hierarchical feature representation strategy was proposed for the inference of cross-validation and cross-cohort validation (Figure 1D). Third, without any fine-tuning, the STT-BOX system with the trained model was directly applied to label lesions on soft tissue sarcoma WSIs from the TCGA data set, to measure the performance of the present system on unfamiliar tissues and organs that were not included in the training set. Finally, the STT-BOX system was challenged to distinguish malignant from benign ovarian SCSTs.
A total of 121 patients, comprising 62 cases of spindle cell–type GIST, 27 cases of schwannoma, and 32 cases of leiomyoma, were enrolled in training, cross-validation, and cross-cohort validation, including 145 WSIs, 6250 screenshots, and 50,000 patches (Supplemental Tables S1–S3, Supplemental Figure S2, and Figure 1A). Each slide was annotated with a single label (namely, GIST, schwannoma, or leiomyoma) (Supplemental Figure S3). The labels were confirmed by pathologists with the assistance of IHC staining and gene mutation analysis. The screenshots were cropped within the tumor areas by the pathologists to fairly measure the morphologic differences among the three types of STTs. Eight patches were then cropped in an overlapping manner on each screenshot. Dataset-P was used for training and three-fold cross-validation (Supplemental Table S3). In addition, Dataset-T and Dataset-L were used to validate the generalization of the present system across cohorts (Supplemental Table S2). In the cross-cohort validation, only data from Dataset-P were used to train the model of the STT-BOX system, whereas data from the other two data sets were used only for testing, without any fine-tuning.
Figure 1B shows the acquisition process of the feature extractor. The slides in Dataset-P were diagnosed as GIST, schwannoma, or leiomyoma through IHC. The corresponding screenshots were cropped by pathologists (J.Y. and B.H.) from the tumor regions of interest on the slides. The average size of the screenshots was 1898 × 878 pixels. Every screenshot was cropped into eight patches of 512 × 512 pixels; adjacent patches therefore overlapped each other, ensuring that every part of the screenshot was considered. All patches made up the training set.
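The sketch below illustrates one way to obtain eight overlapping 512 × 512 patches from a screenshot of roughly this size; the 4 × 2 grid and evenly spaced start positions are our assumptions, as the exact cropping layout is not spelled out here.

```python
# A sketch of the overlapping crop described above (grid layout assumed, not
# stated in the paper): eight 512 x 512 patches on a 4 x 2 grid whose start
# positions are spread evenly, so neighbouring patches overlap.
import numpy as np

def crop_overlapping_patches(screenshot, patch=512, cols=4, rows=2):
    h, w = screenshot.shape[:2]                      # e.g. 878 x 1898 on average
    xs = np.linspace(0, w - patch, cols).astype(int)  # evenly spaced column starts
    ys = np.linspace(0, h - patch, rows).astype(int)  # evenly spaced row starts
    return [screenshot[y:y + patch, x:x + patch] for y in ys for x in xs]

patches = crop_overlapping_patches(np.zeros((878, 1898, 3), dtype=np.uint8))
assert len(patches) == 8
```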
Data were augmented through random cropping, flipping, rotation, superimposed Gaussian noise, and color transformation before being fed into the CNNs. H&E staining colors the nuclei and the cytoplasm of the cells blue and red, respectively. External factors, including dye concentration, staining time and temperature, and scanner version, inevitably lead to differences in the color distribution of digital images. Therefore, the color of each input patch was randomly perturbed to simulate different color distributions. As shown in Figure 1C, the color of an original patch was varied through random changes in hue, saturation, and brightness.
In the present study, five CNNs, namely Inception-v3,28 ResNet-18,29 ResNet-34,29 ResNet-50,29 and ResNet-101,29 were trained as the feature extractors F. The extractors were initialized with parameters pretrained on the large-scale ImageNet data set30 and were optimized via stochastic gradient descent during training. The learning rate $\tau_e$ was initialized with $\tau_{\max} = 0.001$ and then decreased according to a cosine annealing schedule:
$\tau_e = \tau_{\min} + \frac{1}{2}(\tau_{\max} - \tau_{\min})\left[1 + \cos\left(\frac{e}{100}\pi\right)\right]$, (1)
where $\tau_{\min} = 0.00001$ and $e$ is the current epoch. The batch size was set to 32. The tools adopted in this work were CUDA (10.0.130), PyTorch (1.2.0),31 Python (3.7.6), NumPy (1.18.1), OpenSlide,32 and Matplotlib.33 All experiments were conducted on a single Tesla T4 graphics processing unit with 16 GB of memory.
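Equation 1 is a standard cosine-annealing schedule; a minimal sketch follows, assuming the $e/100$ term corresponds to a 100-epoch training run. PyTorch's torch.optim.lr_scheduler.CosineAnnealingLR implements the same curve and could be used instead of a hand-written function.

```python
# Equation 1 as code: decay the learning rate from tau_max to tau_min with a
# cosine curve over an assumed 100-epoch schedule.
import math

TAU_MAX, TAU_MIN, TOTAL_EPOCHS = 1e-3, 1e-5, 100

def learning_rate(epoch):
    return TAU_MIN + 0.5 * (TAU_MAX - TAU_MIN) * (1 + math.cos(epoch / TOTAL_EPOCHS * math.pi))

# e.g. update an SGD optimizer once per epoch:
# for group in optimizer.param_groups:
#     group["lr"] = learning_rate(epoch)
```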
A hierarchical feature representation strategy was designed for the inference of STT-BOX (Figure 1D). Specifically, hierarchical features (ie, patch-, screenshot-, and slide-level features) were considered (Figure 1D). The patches and screenshots were obtained through the same strategy as during the training process in Figure 1B. Each screenshot $x_{ij}$ from the slide $X_i$ was cropped into eight patches, ie, $x_{ij} = \{x_{ij}^{1}, x_{ij}^{2}, \cdots, x_{ij}^{8}\}$. First, the patch-level features were extracted by the feature extractor F, which was trained on Dataset-P. As shown in Figure 1D, the output feature vector of the feature extractor F (ie, a classification model) was $F(x_{ij}^{m}) = \langle p_{ij}^{m1}, p_{ij}^{m2}, p_{ij}^{m3} \rangle$, where $p_{ij}^{m1}$ indicated the similarity between the input patch $x_{ij}^{m}$ and GIST, $p_{ij}^{m2}$ the similarity to schwannoma, and $p_{ij}^{m3}$ the similarity to leiomyoma. The output vector satisfied $p_{ij}^{m1} + p_{ij}^{m2} + p_{ij}^{m3} = 1$. Then, the patch-level output feature vectors were fused into screenshot-level feature vectors through a voting mode or a mapping mode. Similarly, the screenshot-level output feature vectors were finally fused into slide-level feature vectors.
Voting: When the voting mode of the STT-BOX system was adopted, a screenshot was diagnosed as GIST when more patches were diagnosed as GIST than as schwannoma or leiomyoma. The probability of the screenshot for each category was the proportion of patches predicted for that category. The plurality label (ie, the most common prediction) of the screenshots from a slide determined the classification of that slide, whereas the proportion of predicted screenshot categories determined the probabilities of the corresponding categories for the slide.
Mapping: There is also a mapping mode in the STT-BOX system. As mentioned before, a patch $x_{ij}^{m}$ generates a feature vector through the feature extractor, ie, $F(x_{ij}^{m}) = \langle p_{ij}^{m1}, p_{ij}^{m2}, p_{ij}^{m3} \rangle$; the feature of the corresponding screenshot can be represented as a matrix of shape (8 × 3) by concatenating the feature vectors of the eight patches. The screenshot-level feature vector $F(x_{ij}) = \langle p_{ij}^{1}, p_{ij}^{2}, p_{ij}^{3} \rangle$ of a screenshot $x_{ij}$ was then mapped by
$p_{ij}^{c} = \sum_{m=1}^{8} p_{ij}^{mc}$, (2)
where $c \in \{1, 2, 3\}$ is the channel of $F(x_{ij})$. Similarly, the slide-level feature vector $F(X_i) = \langle P_i^{1}, P_i^{2}, P_i^{3} \rangle$ of a slide $X_i$ was mapped by
$P_i^{c} = \sum_{j=1}^{N} p_{ij}^{c}$, (3)
where $N$ is the number of screenshots cropped from the slide $X_i$. The category with the highest probability was regarded as the final diagnostic category $D(X_i)$ of the input slide $X_i$:
$D(X_i) = \arg\max[F(X_i)] = \arg\max(P_i^{1}, P_i^{2}, P_i^{3})$, (4)
where $P_i^{1}, P_i^{2}, P_i^{3}$ represent the confidence probabilities that $X_i$ is predicted to be GIST, schwannoma, or leiomyoma, respectively. The above voting and mapping are the two modes of the hierarchical feature representation strategy embedded in the STT-BOX system. Through this strategy, the slide diagnosis was quickly obtained without IHC data.
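A minimal sketch of the mapping mode (Equations 2–4) is given below: patch-level probability vectors are summed channel-wise into a screenshot-level vector, screenshot-level vectors are summed into a slide-level vector, and the argmax gives the slide diagnosis. Array shapes and names are illustrative, not taken from the released code.

```python
# Mapping-mode fusion (Equations 2-4), sketched with NumPy.
import numpy as np

CLASSES = ["GIST", "schwannoma", "leiomyoma"]

def fuse_slide(patch_probs_per_screenshot):
    """patch_probs_per_screenshot: list of (8, 3) softmax arrays, one per screenshot."""
    screenshot_vecs = [p.sum(axis=0) for p in patch_probs_per_screenshot]  # Eq. 2
    slide_vec = np.sum(screenshot_vecs, axis=0)                            # Eq. 3
    return CLASSES[int(np.argmax(slide_vec))], slide_vec                   # Eq. 4
```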
The study included 235 WSIs of soft tissue sarcomas from the TCGA data set. Each WSI showed the tumor area and peritumoral tissues, including skeletal muscle, smooth muscle, collagen fiber, nerve, or fat. The performance of the STT-BOX system in distinguishing and outlining malignant from normal areas was validated on all images. Specifically, all the soft tissue sarcoma WSIs were cropped into 512 × 512-pixel patches with a stride of 256 pixels before being input to the feature extractor trained on Dataset-P. The present system generated a probability $p_i$ ($p_i \in [0, 1]$) for each patch $i$ to represent the similarity of this patch to GIST in the training data. The predictions were stitched together according to the cropping locations of the input patches, producing a heat map that highlighted the malignant lesions. To avoid missing small lesions, the predicted probabilities $p_i$ of the patches belonging to a WSI were sorted, and the $M$ patches with the highest probabilities constituted the region-of-interest set $\{p_i \mid r_i\% > 20\%\}$, where $r_i\%$ is the foreground ratio of each patch. The mean of the predicted probabilities of the regions of interest was regarded as the likelihood $P$ that the WSI contained a region with high similarity to GIST:
$P = \frac{1}{M}\sum_{i=1}^{M} \{p_i \mid r_i\% > 20\%\}$. (5)
When $P > 50\%$, the tested WSI was regarded as a positive sample containing the target area.
To further validate the capability of capturing the morphologic high-dimensional features of malignant SCSTs in the ovary, three common ovarian SCSTs with spindle cell morphology were selected, including TCTs, AGCTs, and SLCTs. Specifically, the WSIs from Dataset-O were cropped into 512 × 512-pixel patches with a stride of 256 pixels before being input to the system with the model trained on Dataset-P. After feature extraction, each patch was assigned a probability value $p_g$, which represented the similarity between the input patch and the GIST patches. The dissimilarity between the input patch and GIST was represented by $\bar{p}_g = 1 - p_g$. The similarity score $S$ between a WSI and GIST was obtained by aggregating the $p_g$ of all corresponding patches. $S$ was defined as the ratio of similarity to dissimilarity between the input WSI features and GIST features:
$S = \frac{(p_g^{1} \cdot r_1\% + p_g^{2} \cdot r_2\% + \cdots + p_g^{n} \cdot r_n\%)/n}{(\bar{p}_g^{1} \cdot r_1\% + \bar{p}_g^{2} \cdot r_2\% + \cdots + \bar{p}_g^{n} \cdot r_n\%)/n} = \frac{\sum_{i=1}^{n} p_g^{i} \cdot r_i\%}{\sum_{i=1}^{n} \bar{p}_g^{i} \cdot r_i\%}$, (6)
where $n$ is the total number of patches cropped from the corresponding WSI. The more patches in a WSI resembled GIST, the larger $S$ would be; conversely, when most tissue in a WSI was unlike GIST, $S$ was relatively small. Three-fold cross-validation was conducted on Dataset-P. All experiments were performed on the basis of finely tuned color transformation to avoid distribution deviation between the training set and the test set (Figure 1C)." @default.
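As a supplement to the abstract field above, a minimal sketch of the two WSI-level scores it describes (Equations 5 and 6) follows, assuming each patch i already has a GIST probability p[i] and a foreground (tissue) ratio r[i]; the top-M selection of Equation 5 is simplified here to all patches above the 20% foreground threshold, and all names are illustrative.

```python
# WSI-level scoring sketches for Equations 5 and 6.
import numpy as np

def wsi_probability(p, r, threshold=0.20):            # Eq. 5 (simplified)
    """Mean GIST probability over patches with enough tissue (r > 20%)."""
    keep = np.asarray(p, float)[np.asarray(r, float) > threshold]
    return float(keep.mean()) if keep.size else 0.0   # WSI called positive when > 0.5

def similarity_score(p, r):                            # Eq. 6
    """Ratio of tissue-weighted similarity to dissimilarity against GIST."""
    p, r = np.asarray(p, float), np.asarray(r, float)
    return float(np.sum(p * r) / np.sum((1.0 - p) * r))
```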
- W4365814183 created "2023-04-16" @default.
- W4365814183 creator A5013151110 @default.
- W4365814183 creator A5014224289 @default.
- W4365814183 creator A5015302215 @default.
- W4365814183 creator A5021293751 @default.
- W4365814183 creator A5026860986 @default.
- W4365814183 creator A5036476031 @default.
- W4365814183 creator A5048050496 @default.
- W4365814183 creator A5056382629 @default.
- W4365814183 creator A5060284538 @default.
- W4365814183 creator A5061224733 @default.
- W4365814183 creator A5073730933 @default.
- W4365814183 creator A5076834193 @default.
- W4365814183 creator A5079144156 @default.
- W4365814183 date "2023-07-01" @default.
- W4365814183 modified "2023-10-17" @default.
- W4365814183 title "A Deep Learning–Based System Trained for Gastrointestinal Stromal Tumor Screening Can Identify Multiple Types of Soft Tissue Tumors" @default.
- W4365814183 cites W1977653087 @default.
- W4365814183 cites W1987037759 @default.
- W4365814183 cites W2011301426 @default.
- W4365814183 cites W2059864409 @default.
- W4365814183 cites W2288892845 @default.
- W4365814183 cites W2550409828 @default.
- W4365814183 cites W2555058267 @default.
- W4365814183 cites W2581082771 @default.
- W4365814183 cites W2765468373 @default.
- W4365814183 cites W2772723798 @default.
- W4365814183 cites W2798643036 @default.
- W4365814183 cites W2805735218 @default.
- W4365814183 cites W2805886241 @default.
- W4365814183 cites W2902670323 @default.
- W4365814183 cites W2904343867 @default.
- W4365814183 cites W2974825848 @default.
- W4365814183 cites W2981420409 @default.
- W4365814183 cites W2983868555 @default.
- W4365814183 cites W2999399991 @default.
- W4365814183 cites W3016045558 @default.
- W4365814183 cites W3036122989 @default.
- W4365814183 cites W3043835773 @default.
- W4365814183 cites W3092103057 @default.
- W4365814183 cites W3094977690 @default.
- W4365814183 cites W3105771333 @default.
- W4365814183 cites W3113664182 @default.
- W4365814183 cites W3118855884 @default.
- W4365814183 cites W3157209223 @default.
- W4365814183 cites W4200363653 @default.
- W4365814183 cites W4205601010 @default.
- W4365814183 cites W4210999713 @default.
- W4365814183 cites W4225116060 @default.
- W4365814183 cites W4280550098 @default.
- W4365814183 cites W4282925719 @default.
- W4365814183 cites W4285679076 @default.
- W4365814183 cites W4307656790 @default.
- W4365814183 doi "https://doi.org/10.1016/j.ajpath.2023.03.012" @default.
- W4365814183 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/37068638" @default.
- W4365814183 hasPublicationYear "2023" @default.
- W4365814183 type Work @default.
- W4365814183 citedByCount "1" @default.
- W4365814183 countsByYear W43658141832023 @default.
- W4365814183 crossrefType "journal-article" @default.
- W4365814183 hasAuthorship W4365814183A5013151110 @default.
- W4365814183 hasAuthorship W4365814183A5014224289 @default.
- W4365814183 hasAuthorship W4365814183A5015302215 @default.
- W4365814183 hasAuthorship W4365814183A5021293751 @default.
- W4365814183 hasAuthorship W4365814183A5026860986 @default.
- W4365814183 hasAuthorship W4365814183A5036476031 @default.
- W4365814183 hasAuthorship W4365814183A5048050496 @default.
- W4365814183 hasAuthorship W4365814183A5056382629 @default.
- W4365814183 hasAuthorship W4365814183A5060284538 @default.
- W4365814183 hasAuthorship W4365814183A5061224733 @default.
- W4365814183 hasAuthorship W4365814183A5073730933 @default.
- W4365814183 hasAuthorship W4365814183A5076834193 @default.
- W4365814183 hasAuthorship W4365814183A5079144156 @default.
- W4365814183 hasBestOaLocation W43658141831 @default.
- W4365814183 hasConcept C136948725 @default.
- W4365814183 hasConcept C142724271 @default.
- W4365814183 hasConcept C154945302 @default.
- W4365814183 hasConcept C16930146 @default.
- W4365814183 hasConcept C2775922572 @default.
- W4365814183 hasConcept C2777007597 @default.
- W4365814183 hasConcept C41008148 @default.
- W4365814183 hasConcept C71924100 @default.
- W4365814183 hasConceptScore W4365814183C136948725 @default.
- W4365814183 hasConceptScore W4365814183C142724271 @default.
- W4365814183 hasConceptScore W4365814183C154945302 @default.
- W4365814183 hasConceptScore W4365814183C16930146 @default.
- W4365814183 hasConceptScore W4365814183C2775922572 @default.
- W4365814183 hasConceptScore W4365814183C2777007597 @default.
- W4365814183 hasConceptScore W4365814183C41008148 @default.
- W4365814183 hasConceptScore W4365814183C71924100 @default.
- W4365814183 hasFunder F4320321001 @default.
- W4365814183 hasFunder F4320325902 @default.
- W4365814183 hasFunder F4320335777 @default.
- W4365814183 hasIssue "7" @default.
- W4365814183 hasLocation W43658141831 @default.
- W4365814183 hasLocation W43658141832 @default.
- W4365814183 hasOpenAccess W4365814183 @default.