Matches in SemOpenAlex for { <https://semopenalex.org/work/W4312983485> ?p ?o ?g. }
Showing items 1 to 67 of 67 (100 items per page).
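This listing can be reproduced programmatically. As a minimal sketch, the Python snippet below issues the same quad pattern (`?p ?o ?g`, where `?g` is the named graph shown here as `@default`) against the public SemOpenAlex SPARQL endpoint, assumed to be reachable at https://semopenalex.org/sparql. The opaque identifiers it returns (W…, A…, C…) can be resolved to readable records; see the sketch after the listing.

```python
import requests

# Assumption: the public SemOpenAlex SPARQL endpoint lives at this URL
# (check the SemOpenAlex site if it has moved).
ENDPOINT = "https://semopenalex.org/sparql"

# Same quad pattern as in the header above: predicate, object, and the
# named graph that holds each triple.
QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W4312983485> ?p ?o .
  }
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Print one predicate/object/graph binding per line, mirroring the
# 67 matches shown in this listing.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"], row["g"]["value"])
```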
- W4312983485 abstract "Humans perceive the world through many channels, such as images viewed by the eyes or voices heard by the ears. Though any individual channel might be incomplete or noisy, humans can naturally align and fuse information collected from multiple channels in order to grasp the key concepts needed for a better understanding of the world. One of the core aspirations in Artificial Intelligence (AI) is to develop algorithms that endow computers with the ability to learn effectively from multimodal (or multi-channel) data, akin to the sights and sounds that humans attain through vision and language to make sense of the world around us. For example, computers could mimic this ability by retrieving the images most relevant to a text query (or vice versa), and by describing the content of an image using natural language. Vision-and-Language (VL), a popular research area that sits at the nexus of Computer Vision and Natural Language Processing (NLP), aims to achieve this goal. This monograph surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. Approaches are grouped into three categories: (i) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; (ii) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and (iii) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, a comprehensive review of state-of-the-art methods is presented, and the progress that has been made and the challenges still being faced are discussed, using specific systems and models as case studies. In addition, for each category, advanced topics being actively explored in the research community are presented, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few." @default.
- W4312983485 created "2023-01-05" @default.
- W4312983485 creator A5003075563 @default.
- W4312983485 creator A5028783832 @default.
- W4312983485 creator A5047233371 @default.
- W4312983485 creator A5048295582 @default.
- W4312983485 creator A5066666034 @default.
- W4312983485 creator A5073435344 @default.
- W4312983485 date "2022-01-01" @default.
- W4312983485 modified "2023-10-17" @default.
- W4312983485 title "Vision-Language Pre-Training: Basics, Recent Advances, and Future Trends" @default.
- W4312983485 doi "https://doi.org/10.1561/9781638281337" @default.
- W4312983485 hasPublicationYear "2022" @default.
- W4312983485 type Work @default.
- W4312983485 citedByCount "7" @default.
- W4312983485 countsByYear W43129834852023 @default.
- W4312983485 crossrefType "monograph" @default.
- W4312983485 hasAuthorship W4312983485A5003075563 @default.
- W4312983485 hasAuthorship W4312983485A5028783832 @default.
- W4312983485 hasAuthorship W4312983485A5047233371 @default.
- W4312983485 hasAuthorship W4312983485A5048295582 @default.
- W4312983485 hasAuthorship W4312983485A5066666034 @default.
- W4312983485 hasAuthorship W4312983485A5073435344 @default.
- W4312983485 hasBestOaLocation W43129834852 @default.
- W4312983485 hasConcept C115961682 @default.
- W4312983485 hasConcept C121332964 @default.
- W4312983485 hasConcept C1276947 @default.
- W4312983485 hasConcept C1517167 @default.
- W4312983485 hasConcept C154945302 @default.
- W4312983485 hasConcept C157657479 @default.
- W4312983485 hasConcept C171268870 @default.
- W4312983485 hasConcept C177264268 @default.
- W4312983485 hasConcept C195324797 @default.
- W4312983485 hasConcept C199360897 @default.
- W4312983485 hasConcept C204321447 @default.
- W4312983485 hasConcept C41008148 @default.
- W4312983485 hasConcept C44291984 @default.
- W4312983485 hasConceptScore W4312983485C115961682 @default.
- W4312983485 hasConceptScore W4312983485C121332964 @default.
- W4312983485 hasConceptScore W4312983485C1276947 @default.
- W4312983485 hasConceptScore W4312983485C1517167 @default.
- W4312983485 hasConceptScore W4312983485C154945302 @default.
- W4312983485 hasConceptScore W4312983485C157657479 @default.
- W4312983485 hasConceptScore W4312983485C171268870 @default.
- W4312983485 hasConceptScore W4312983485C177264268 @default.
- W4312983485 hasConceptScore W4312983485C195324797 @default.
- W4312983485 hasConceptScore W4312983485C199360897 @default.
- W4312983485 hasConceptScore W4312983485C204321447 @default.
- W4312983485 hasConceptScore W4312983485C41008148 @default.
- W4312983485 hasConceptScore W4312983485C44291984 @default.
- W4312983485 hasLocation W43129834851 @default.
- W4312983485 hasLocation W43129834852 @default.
- W4312983485 hasOpenAccess W4312983485 @default.
- W4312983485 hasPrimaryLocation W43129834851 @default.
- W4312983485 hasRelatedWork W128392744 @default.
- W4312983485 hasRelatedWork W1483367581 @default.
- W4312983485 hasRelatedWork W207304934 @default.
- W4312983485 hasRelatedWork W2795359650 @default.
- W4312983485 hasRelatedWork W2803367139 @default.
- W4312983485 hasRelatedWork W2962935746 @default.
- W4312983485 hasRelatedWork W3021346453 @default.
- W4312983485 hasRelatedWork W3041777105 @default.
- W4312983485 hasRelatedWork W3107474891 @default.
- W4312983485 hasRelatedWork W3198730297 @default.
- W4312983485 isParatext "false" @default.
- W4312983485 isRetracted "false" @default.
- W4312983485 workType "book" @default.
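The opaque subject and object identifiers in the listing (the W…, A…, and C… values) follow OpenAlex ID conventions, since SemOpenAlex publishes the OpenAlex corpus as RDF. As a hedged sketch, they can be resolved to human-readable records through the public OpenAlex REST API, assumed reachable at https://api.openalex.org:

```python
import requests

# Assumption: the OpenAlex REST API is served at this base URL and returns
# a JSON record with a top-level "display_name" field for each entity.
BASE = "https://api.openalex.org"

def resolve(entity_type: str, entity_id: str) -> str:
    """Return the display name for an OpenAlex entity ID."""
    resp = requests.get(f"{BASE}/{entity_type}/{entity_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()["display_name"]

# Resolve the work itself and its six creator IDs from the listing above.
print(resolve("works", "W4312983485"))
for author_id in ["A5003075563", "A5028783832", "A5047233371",
                  "A5048295582", "A5066666034", "A5073435344"]:
    print(" ", resolve("authors", author_id))
```

Concept IDs (the C… values behind hasConcept) resolve analogously via the `concepts` route, e.g. `resolve("concepts", "C41008148")`.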