Matches in SemOpenAlex for { <https://semopenalex.org/work/W3048653896> ?p ?o ?g. }
Showing items 1 to 84 of 84, with 100 items per page.
- W3048653896 endingPage "4273" @default.
- W3048653896 startingPage "4262" @default.
- W3048653896 abstract "Fully convolutional networks (FCNs) are widely used for instance segmentation. One important challenge is to sufficiently train these networks to yield good generalizations for hard-to-learn pixels, correct prediction of which may greatly affect the success. A typical group of such hard-to-learn pixels are boundaries between instances. Many studies have developed strategies to pay more attention to learning these boundary pixels. They include designing multi-task networks with an additional task of boundary prediction and increasing the weights of boundary pixels’ predictions in the loss function. Such strategies require defining what to attend beforehand and incorporating this defined attention to the learning model. However, there may exist other groups of hard-to-learn pixels and manually defining and incorporating the appropriate attention for each group may not be feasible. In order to provide an adaptable solution to learn different groups of hard-to-learn pixels, this article proposes AttentionBoost, which is a new multi-attention learning model based on adaptive boosting, for the task of gland instance segmentation in histopathological images. AttentionBoost designs a multi-stage network and introduces a new loss adjustment mechanism for an FCN to adaptively learn what to attend at each stage directly on image data without necessitating any prior definition. This mechanism modulates the attention of each stage to correct the mistakes of previous stages, by adjusting the loss weight of each pixel prediction separately with respect to how accurate the previous stages are on this pixel. Working on histopathological images of colon tissues, our experiments demonstrate that the proposed AttentionBoost model improves the results of gland segmentation compared to its counterparts." @default.
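The abstract describes a loss adjustment mechanism in which each pixel's loss weight for the current stage is set according to how accurate the previous stage was on that pixel. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the weighting function `adaptive_pixel_weights` and the normalization to mean 1 are illustrative assumptions, chosen only to show the boosting-style intuition that confidently wrong pixels receive higher weight.

```python
import numpy as np

def adaptive_pixel_weights(prev_probs, labels, eps=1e-7):
    """Per-pixel loss weights for the next stage, growing with how wrong
    the previous stage was on each pixel (hypothetical weighting scheme).

    prev_probs: (H, W) foreground probabilities from the previous stage.
    labels:     (H, W) binary ground truth.
    Returns weights normalized to mean 1 so the overall loss scale is stable.
    """
    # Error of the previous stage on each pixel: near 0 when it was
    # confidently correct, near 1 when it was confidently wrong.
    error = np.abs(labels - prev_probs)
    weights = error + eps
    return weights / weights.mean()

def weighted_bce(probs, labels, weights, eps=1e-7):
    """Pixel-wise binary cross-entropy modulated by the adaptive weights."""
    p = np.clip(probs, eps, 1.0 - eps)
    loss = -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))
    return (weights * loss).mean()
```

Under this sketch, a pixel the previous stage misclassified with high confidence dominates the next stage's loss, which is the adaptive-boosting flavor the abstract attributes to the multi-stage design.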
- W3048653896 created "2020-08-18" @default.
- W3048653896 creator A5011846587 @default.
- W3048653896 creator A5043106100 @default.
- W3048653896 creator A5052857662 @default.
- W3048653896 date "2020-12-01" @default.
- W3048653896 modified "2023-09-24" @default.
- W3048653896 title "<i>AttentionBoost</i>: Learning What to Attend for Gland Segmentation in Histopathological Images by Boosting Fully Convolutional Networks" @default.
- W3048653896 cites W1903029394 @default.
- W3048653896 cites W1905829557 @default.
- W3048653896 cites W1988790447 @default.
- W3048653896 cites W2102605133 @default.
- W3048653896 cites W2117108042 @default.
- W3048653896 cites W2129259959 @default.
- W3048653896 cites W2164921999 @default.
- W3048653896 cites W2439951536 @default.
- W3048653896 cites W2482581235 @default.
- W3048653896 cites W2516903654 @default.
- W3048653896 cites W2550409828 @default.
- W3048653896 cites W2584471766 @default.
- W3048653896 cites W2592929672 @default.
- W3048653896 cites W2734349601 @default.
- W3048653896 cites W2805735218 @default.
- W3048653896 cites W2890587981 @default.
- W3048653896 cites W2955553907 @default.
- W3048653896 cites W2963351448 @default.
- W3048653896 cites W2963420686 @default.
- W3048653896 cites W2963881378 @default.
- W3048653896 cites W2963971305 @default.
- W3048653896 cites W2966967545 @default.
- W3048653896 cites W3048653896 @default.
- W3048653896 doi "https://doi.org/10.1109/tmi.2020.3015198" @default.
- W3048653896 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/32780699" @default.
- W3048653896 hasPublicationYear "2020" @default.
- W3048653896 type Work @default.
- W3048653896 sameAs 3048653896 @default.
- W3048653896 citedByCount "6" @default.
- W3048653896 countsByYear W30486538962020 @default.
- W3048653896 countsByYear W30486538962022 @default.
- W3048653896 countsByYear W30486538962023 @default.
- W3048653896 crossrefType "journal-article" @default.
- W3048653896 hasAuthorship W3048653896A5011846587 @default.
- W3048653896 hasAuthorship W3048653896A5043106100 @default.
- W3048653896 hasAuthorship W3048653896A5052857662 @default.
- W3048653896 hasBestOaLocation W30486538962 @default.
- W3048653896 hasConcept C124504099 @default.
- W3048653896 hasConcept C153180895 @default.
- W3048653896 hasConcept C154945302 @default.
- W3048653896 hasConcept C31972630 @default.
- W3048653896 hasConcept C41008148 @default.
- W3048653896 hasConcept C46686674 @default.
- W3048653896 hasConcept C81363708 @default.
- W3048653896 hasConcept C89600930 @default.
- W3048653896 hasConceptScore W3048653896C124504099 @default.
- W3048653896 hasConceptScore W3048653896C153180895 @default.
- W3048653896 hasConceptScore W3048653896C154945302 @default.
- W3048653896 hasConceptScore W3048653896C31972630 @default.
- W3048653896 hasConceptScore W3048653896C41008148 @default.
- W3048653896 hasConceptScore W3048653896C46686674 @default.
- W3048653896 hasConceptScore W3048653896C81363708 @default.
- W3048653896 hasConceptScore W3048653896C89600930 @default.
- W3048653896 hasFunder F4320322626 @default.
- W3048653896 hasIssue "12" @default.
- W3048653896 hasLocation W30486538961 @default.
- W3048653896 hasLocation W30486538962 @default.
- W3048653896 hasOpenAccess W3048653896 @default.
- W3048653896 hasPrimaryLocation W30486538961 @default.
- W3048653896 hasRelatedWork W1669643531 @default.
- W3048653896 hasRelatedWork W1721780360 @default.
- W3048653896 hasRelatedWork W2110230079 @default.
- W3048653896 hasRelatedWork W2117664411 @default.
- W3048653896 hasRelatedWork W2117933325 @default.
- W3048653896 hasRelatedWork W2122581818 @default.
- W3048653896 hasRelatedWork W2159066190 @default.
- W3048653896 hasRelatedWork W2739874619 @default.
- W3048653896 hasRelatedWork W2897195263 @default.
- W3048653896 hasRelatedWork W2979932740 @default.
- W3048653896 hasVolume "39" @default.
- W3048653896 isParatext "false" @default.
- W3048653896 isRetracted "false" @default.
- W3048653896 magId "3048653896" @default.
- W3048653896 workType "article" @default.