Matches in SemOpenAlex for { <https://semopenalex.org/work/W1764703668> ?p ?o ?g. }
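The pattern above is an RDF quad pattern; in standard SPARQL 1.1 the fourth position (?g) is expressed with a GRAPH clause. A minimal sketch of the equivalent query follows, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql (the endpoint URL is an assumption, not part of this listing):

    # Retrieve every predicate, object, and graph for the work.
    # The ?g variable mirrors the fourth position of the quad pattern.
    SELECT ?p ?o ?g
    WHERE {
      GRAPH ?g {
        <https://semopenalex.org/work/W1764703668> ?p ?o .
      }
    }

Since every row below is tagged @default, a plain triple pattern without the GRAPH clause would return the same data from the default graph; whether GRAPH ?g also matches the default graph depends on how the store is configured.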
- W1764703668 abstract "Traditionally, taking experimental measurements of a physical or biological phenomenon was an expensive, laborious and very slow process. However, significant advances in device technologies and computational techniques have sharply reduced the costs of data collection. Capturing thousands of images of developing biological organisms, recording enormous amounts of video footage from a network of cameras monitoring an observation space, or obtaining a large number of neural measurements of brain signal patterns via non-invasive devices are some examples of this data proliferation. Analyzing such large volumes of multidimensional data through expert supervision is neither scalable nor cost-effective. In this context, there is a need for systems that complement the expert user by learning meaningful and compact representations from large collections of multidimensional data (images, videos, etc.) with minimal supervision. In this dissertation, we present minimally supervised solutions to two commonly encountered scenarios. The first scenario arises when a large set of labeled noisy observations is available from a given class (or phenotype) with an unknown generative model. An interesting challenge here is to estimate the underlying generative model and the distribution over the distortion parameters that map the observed examples to the generative model. For example, this is the scenario encountered while attempting to construct high-throughput data-driven spatial gene expression atlases from many thousands of noisy images of Drosophila melanogaster imaginal discs. We discuss improvements to an existing information-theoretic approach for joint pattern alignment (JPA) in order to address such high-throughput scenarios. Along with a discussion of the assumptions, advantages and limitations of our approach (Chapter 2), we show how this framework can be applied to a variety of applications (Chapters 3, 4, 5). The second scenario arises when observations are available from multiple classes (phenotypes) without any labels. An interesting challenge here is to estimate a data-driven organizational hierarchy that facilitates efficient retrieval and easy browsing of the observations. For example, this is the scenario encountered while organizing large collections of unlabeled activity videos based on the spatio-temporal patterns, such as human actions, embedded in the videos. We show how insights from computer vision and data compression can be efficiently leveraged to provide a high-speed and robust solution to the problem of content-based hierarchy estimation (based on action similarity) for large video collections with minimal user supervision (Chapter 6). We demonstrate the usefulness of our approach on a benchmark dataset of human action videos." @default.
- W1764703668 created "2016-06-24" @default.
- W1764703668 creator A5006728914 @default.
- W1764703668 creator A5062722286 @default.
- W1764703668 date "2008-01-01" @default.
- W1764703668 modified "2023-10-06" @default.
- W1764703668 title "Learning data driven representations from large collections of multidimensional patterns with minimal supervision" @default.
- W1764703668 cites W114610159 @default.
- W1764703668 cites W145708202 @default.
- W1764703668 cites W1493021400 @default.
- W1764703668 cites W1537335979 @default.
- W1764703668 cites W1538241598 @default.
- W1764703668 cites W1555683961 @default.
- W1764703668 cites W1586254614 @default.
- W1764703668 cites W1599260277 @default.
- W1764703668 cites W1605949098 @default.
- W1764703668 cites W1831185418 @default.
- W1764703668 cites W1857352599 @default.
- W1764703668 cites W186513846 @default.
- W1764703668 cites W1870026203 @default.
- W1764703668 cites W1874027545 @default.
- W1764703668 cites W1889738812 @default.
- W1764703668 cites W1974717034 @default.
- W1764703668 cites W2007153649 @default.
- W1764703668 cites W2016479629 @default.
- W1764703668 cites W2019502123 @default.
- W1764703668 cites W2024060531 @default.
- W1764703668 cites W2029441111 @default.
- W1764703668 cites W2034328688 @default.
- W1764703668 cites W2041398617 @default.
- W1764703668 cites W2045798786 @default.
- W1764703668 cites W2046799970 @default.
- W1764703668 cites W2057175746 @default.
- W1764703668 cites W2064365621 @default.
- W1764703668 cites W2070628092 @default.
- W1764703668 cites W2075280134 @default.
- W1764703668 cites W2084333685 @default.
- W1764703668 cites W2089208780 @default.
- W1764703668 cites W2096840836 @default.
- W1764703668 cites W2097327139 @default.
- W1764703668 cites W2097905551 @default.
- W1764703668 cites W2099801199 @default.
- W1764703668 cites W2100983000 @default.
- W1764703668 cites W2103042454 @default.
- W1764703668 cites W2104684930 @default.
- W1764703668 cites W2105581244 @default.
- W1764703668 cites W2107032474 @default.
- W1764703668 cites W2109553605 @default.
- W1764703668 cites W2113856781 @default.
- W1764703668 cites W2119799051 @default.
- W1764703668 cites W2121532090 @default.
- W1764703668 cites W2123238701 @default.
- W1764703668 cites W2123491442 @default.
- W1764703668 cites W2133774033 @default.
- W1764703668 cites W2138035768 @default.
- W1764703668 cites W2140199336 @default.
- W1764703668 cites W2141652419 @default.
- W1764703668 cites W2145579799 @default.
- W1764703668 cites W2152029399 @default.
- W1764703668 cites W2154346517 @default.
- W1764703668 cites W2156360858 @default.
- W1764703668 cites W2159455684 @default.
- W1764703668 cites W2161612652 @default.
- W1764703668 cites W2162630772 @default.
- W1764703668 cites W2336509789 @default.
- W1764703668 cites W2517784546 @default.
- W1764703668 cites W2612978538 @default.
- W1764703668 cites W3022859973 @default.
- W1764703668 cites W3169507310 @default.
- W1764703668 cites W66407604 @default.
- W1764703668 cites W2109421890 @default.
- W1764703668 hasPublicationYear "2008" @default.
- W1764703668 type Work @default.
- W1764703668 sameAs 1764703668 @default.
- W1764703668 citedByCount "1" @default.
- W1764703668 countsByYear W17647036682019 @default.
- W1764703668 crossrefType "journal-article" @default.
- W1764703668 hasAuthorship W1764703668A5006728914 @default.
- W1764703668 hasAuthorship W1764703668A5062722286 @default.
- W1764703668 hasConcept C111919701 @default.
- W1764703668 hasConcept C119857082 @default.
- W1764703668 hasConcept C124101348 @default.
- W1764703668 hasConcept C151730666 @default.
- W1764703668 hasConcept C154945302 @default.
- W1764703668 hasConcept C167966045 @default.
- W1764703668 hasConcept C199360897 @default.
- W1764703668 hasConcept C2779343474 @default.
- W1764703668 hasConcept C2780801425 @default.
- W1764703668 hasConcept C39890363 @default.
- W1764703668 hasConcept C41008148 @default.
- W1764703668 hasConcept C48044578 @default.
- W1764703668 hasConcept C77088390 @default.
- W1764703668 hasConcept C86803240 @default.
- W1764703668 hasConcept C98045186 @default.
- W1764703668 hasConceptScore W1764703668C111919701 @default.
- W1764703668 hasConceptScore W1764703668C119857082 @default.
- W1764703668 hasConceptScore W1764703668C124101348 @default.
- W1764703668 hasConceptScore W1764703668C151730666 @default.
- W1764703668 hasConceptScore W1764703668C154945302 @default.
- W1764703668 hasConceptScore W1764703668C167966045 @default.