Matches in SemOpenAlex for { <https://semopenalex.org/work/W4313124789> ?p ?o ?g. }
- W4313124789 endingPage "106654" @default.
- W4313124789 startingPage "106641" @default.
- W4313124789 abstract "The critical challenge of image inpainting is to infer reasonable semantics and textures for a corrupted image. Typical methods for image inpainting are built upon some prior knowledge to synthesize the complete image. One potential limitation is that those methods often leave undesired blurriness or semantic mistakes in the synthesized image when handling images with large corrupted areas. In this paper, we propose a Collaborative Contrastive Learning-based Generative Model (C2LGM), which learns the content consistency within the same image to ensure that the inferred content of corrupted areas is reasonable with respect to the known content, via pixel-level reconstruction and high-level semantic reasoning. C2LGM leverages an encoder-decoder framework to directly learn the mapping from the corrupted image to the intact image and perform pixel-level reconstruction. To perform semantic reasoning, C2LGM introduces a Collaborative Contrastive Learning (C2L) mechanism that learns high-level semantic consistency between inferred and known content. Specifically, the C2L mechanism introduces high-frequency edge maps into the typical contrastive learning process, enabling the deep model to ensure semantic reasonableness between high-frequency structures and pixel-level content by pulling the representations of inferred content and known content close while pushing unrelated semantic content away in the latent feature space.
Moreover, C2LGM also directly absorbs prior knowledge of structural information from the proposed structural spatial attention module, and leverages texture distribution sampling to improve the quality of synthesized content. As a result, our C2LGM achieves a 0.42 dB improvement over competing methods in terms of PSNR when coping with a 40-50% corruption ratio on the Places2 dataset. Extensive experiments on three benchmark datasets, including Paris Street View, CelebA-HQ, and Places2, demonstrate the advantages of the proposed C2LGM over other state-of-the-art methods for image inpainting, both qualitatively and quantitatively." @default.
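The contrastive objective sketched in the abstract (pulling representations of inferred and known content together while pushing unrelated content away) is commonly realized as an InfoNCE-style loss. The sketch below is a minimal, assumption-laden illustration of that idea for a single anchor with precomputed scalar similarities; the function name and formulation are hypothetical and not the paper's actual implementation.

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.07):
    """InfoNCE-style contrastive loss for one anchor (illustrative sketch).

    sim_pos:  similarity between the anchor (e.g. inferred content) and its
              positive (e.g. known content from the same image).
    sim_negs: similarities to unrelated semantic content (negatives).

    Minimizing this loss pulls the positive pair close and pushes the
    negatives away in the latent feature space.
    """
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    # Negative log-probability that the positive is picked among all candidates.
    return -(logits[0] - m - math.log(denom))
```

A well-aligned positive pair yields a loss near zero, while a positive that is less similar to the anchor than the negatives yields a large loss, which is the gradient signal a contrastive mechanism like C2L would exploit.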
- W4313124789 created "2023-01-06" @default.
- W4313124789 creator A5019407908 @default.
- W4313124789 creator A5058167646 @default.
- W4313124789 creator A5082086115 @default.
- W4313124789 date "2022-01-01" @default.
- W4313124789 modified "2023-10-10" @default.
- W4313124789 title "Collaborative Contrastive Learning-Based Generative Model for Image Inpainting" @default.
- W4313124789 cites W1967577110 @default.
- W4313124789 cites W1993120651 @default.
- W4313124789 cites W1999360130 @default.
- W4313124789 cites W2100415658 @default.
- W4313124789 cites W2105038642 @default.
- W4313124789 cites W2113404166 @default.
- W4313124789 cites W2133665775 @default.
- W4313124789 cites W2145023731 @default.
- W4313124789 cites W2174817186 @default.
- W4313124789 cites W2194775991 @default.
- W4313124789 cites W2295936755 @default.
- W4313124789 cites W2559264300 @default.
- W4313124789 cites W2611104282 @default.
- W4313124789 cites W2732026016 @default.
- W4313124789 cites W2752782242 @default.
- W4313124789 cites W2796286534 @default.
- W4313124789 cites W2798365772 @default.
- W4313124789 cites W2944294033 @default.
- W4313124789 cites W2962770929 @default.
- W4313124789 cites W2962785568 @default.
- W4313124789 cites W2963255313 @default.
- W4313124789 cites W2963270367 @default.
- W4313124789 cites W2963420272 @default.
- W4313124789 cites W2964148878 @default.
- W4313124789 cites W2965965567 @default.
- W4313124789 cites W2981682056 @default.
- W4313124789 cites W2982763192 @default.
- W4313124789 cites W2985764327 @default.
- W4313124789 cites W2989207674 @default.
- W4313124789 cites W2990886896 @default.
- W4313124789 cites W2991377405 @default.
- W4313124789 cites W2997669187 @default.
- W4313124789 cites W2998075999 @default.
- W4313124789 cites W3003244550 @default.
- W4313124789 cites W3026446890 @default.
- W4313124789 cites W3034482833 @default.
- W4313124789 cites W3035251567 @default.
- W4313124789 cites W3035512475 @default.
- W4313124789 cites W3035524453 @default.
- W4313124789 cites W3035574324 @default.
- W4313124789 cites W3043547428 @default.
- W4313124789 cites W3080517269 @default.
- W4313124789 cites W3148142747 @default.
- W4313124789 cites W3176050697 @default.
- W4313124789 cites W3193508667 @default.
- W4313124789 cites W3203538104 @default.
- W4313124789 cites W4292787105 @default.
- W4313124789 doi "https://doi.org/10.1109/access.2022.3211961" @default.
- W4313124789 hasPublicationYear "2022" @default.
- W4313124789 type Work @default.
- W4313124789 citedByCount "0" @default.
- W4313124789 crossrefType "journal-article" @default.
- W4313124789 hasAuthorship W4313124789A5019407908 @default.
- W4313124789 hasAuthorship W4313124789A5058167646 @default.
- W4313124789 hasAuthorship W4313124789A5082086115 @default.
- W4313124789 hasBestOaLocation W43131247891 @default.
- W4313124789 hasConcept C115961682 @default.
- W4313124789 hasConcept C11727466 @default.
- W4313124789 hasConcept C154945302 @default.
- W4313124789 hasConcept C160633673 @default.
- W4313124789 hasConcept C167966045 @default.
- W4313124789 hasConcept C184337299 @default.
- W4313124789 hasConcept C199360897 @default.
- W4313124789 hasConcept C204321447 @default.
- W4313124789 hasConcept C23123220 @default.
- W4313124789 hasConcept C2776436953 @default.
- W4313124789 hasConcept C39890363 @default.
- W4313124789 hasConcept C41008148 @default.
- W4313124789 hasConceptScore W4313124789C115961682 @default.
- W4313124789 hasConceptScore W4313124789C11727466 @default.
- W4313124789 hasConceptScore W4313124789C154945302 @default.
- W4313124789 hasConceptScore W4313124789C160633673 @default.
- W4313124789 hasConceptScore W4313124789C167966045 @default.
- W4313124789 hasConceptScore W4313124789C184337299 @default.
- W4313124789 hasConceptScore W4313124789C199360897 @default.
- W4313124789 hasConceptScore W4313124789C204321447 @default.
- W4313124789 hasConceptScore W4313124789C23123220 @default.
- W4313124789 hasConceptScore W4313124789C2776436953 @default.
- W4313124789 hasConceptScore W4313124789C39890363 @default.
- W4313124789 hasConceptScore W4313124789C41008148 @default.
- W4313124789 hasFunder F4320327051 @default.
- W4313124789 hasLocation W43131247891 @default.
- W4313124789 hasOpenAccess W4313124789 @default.
- W4313124789 hasPrimaryLocation W43131247891 @default.
- W4313124789 hasRelatedWork W1971303903 @default.
- W4313124789 hasRelatedWork W2005185696 @default.
- W4313124789 hasRelatedWork W2780405157 @default.
- W4313124789 hasRelatedWork W2895462099 @default.
- W4313124789 hasRelatedWork W2953250761 @default.
- W4313124789 hasRelatedWork W2964148878 @default.