Matches in SemOpenAlex for { <https://semopenalex.org/work/W177341915> ?p ?o ?g. }
Showing items 1 to 71 of 71, with 100 items per page.
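The listing below can be reproduced programmatically. The following is a minimal Python sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql (the endpoint URL is an assumption) and using the SPARQLWrapper library; the quad pattern `?p ?o ?g` from the query above is rewritten here as a standard GRAPH clause.

```python
# Minimal sketch: fetch all (predicate, object, graph) triples for the work
# W177341915 from SemOpenAlex. The endpoint URL below is an assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed SemOpenAlex SPARQL endpoint

QUERY = """
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W177341915> ?p ?o .
  }
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print one line per match, mirroring the listing below.
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```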
- W177341915 endingPage "103" @default.
- W177341915 startingPage "98" @default.
- W177341915 abstract "The explosive increase in computing power, network bandwidth and storage capacity has largely facilitated the production, transmission and storage of multimedia data. Compared to alpha-numeric databases, non-text media such as audio, image and video are different in that they are unstructured by nature, and although containing rich information, they are not quite as expressive from the viewpoint of a contemporary computer. As a consequence, an overwhelming amount of data is created and then left unstructured and inaccessible, boosting the desire for efficient content management of these data. This has become a driving force of multimedia research and development, and has led to a new field termed multimedia data mining. While text mining is relatively mature, mining information from non-text media is still in its infancy, but holds much promise for the future. In general, data mining is the process of applying analytical approaches to large data sets to discover implicit, previously unknown, and potentially useful information. This process often involves three steps: data preprocessing, data mining and postprocessing (Tan, Steinbach, & Kumar, 2005). The first step transforms the raw data into a format more suitable for subsequent data mining. The second step conducts the actual mining, while the last one validates and interprets the mining results. Data preprocessing is a broad area, and it is the part of data mining whose essential techniques are highly dependent on data types. Unlike textual data, which is typically based on a written language, image, video and some audio are inherently non-linguistic. Speech as a spoken language lies in between and often provides valuable information about the subjects, topics and concepts of multimedia content (Lee & Chen, 2005). The linguistic nature of speech makes information extraction from speech less complicated yet more precise and accurate than from image and video. This fact motivates content-based speech analysis for multimedia data mining and retrieval, where audio and speech processing is a key enabling technology (Ohtsuki, Bessho, Matsuo, Matsunaga, & Kayashi, 2006). Progress in this area can impact numerous business and government applications (Gilbert, Moore, & Zweig, 2005). Examples are discovering patterns and generating alarms for intelligence organizations as well as for call centers, analyzing customer preferences, and searching through vast audio warehouses." @default.
- W177341915 created "2016-06-24" @default.
- W177341915 creator A5090108098 @default.
- W177341915 date "2011-05-24" @default.
- W177341915 modified "2023-09-27" @default.
- W177341915 title "Audio and Speech Processing for Data Mining" @default.
- W177341915 cites W158835737 @default.
- W177341915 cites W1998720920 @default.
- W177341915 cites W2002890640 @default.
- W177341915 cites W2014474048 @default.
- W177341915 cites W2050309898 @default.
- W177341915 cites W2080921589 @default.
- W177341915 cites W2083837083 @default.
- W177341915 cites W2098318492 @default.
- W177341915 cites W2102346549 @default.
- W177341915 cites W2104212264 @default.
- W177341915 cites W2121940249 @default.
- W177341915 cites W2150637221 @default.
- W177341915 cites W2154221499 @default.
- W177341915 cites W2159591770 @default.
- W177341915 cites W2166533296 @default.
- W177341915 cites W2169091586 @default.
- W177341915 cites W2107615347 @default.
- W177341915 doi "https://doi.org/10.4018/978-1-60566-010-3.ch017" @default.
- W177341915 hasPublicationYear "2011" @default.
- W177341915 type Work @default.
- W177341915 sameAs 177341915 @default.
- W177341915 citedByCount "3" @default.
- W177341915 countsByYear W1773419152013 @default.
- W177341915 countsByYear W1773419152015 @default.
- W177341915 crossrefType "book-chapter" @default.
- W177341915 hasAuthorship W177341915A5090108098 @default.
- W177341915 hasConcept C157968479 @default.
- W177341915 hasConcept C204201278 @default.
- W177341915 hasConcept C28490314 @default.
- W177341915 hasConcept C41008148 @default.
- W177341915 hasConcept C61328038 @default.
- W177341915 hasConceptScore W177341915C157968479 @default.
- W177341915 hasConceptScore W177341915C204201278 @default.
- W177341915 hasConceptScore W177341915C28490314 @default.
- W177341915 hasConceptScore W177341915C41008148 @default.
- W177341915 hasConceptScore W177341915C61328038 @default.
- W177341915 hasLocation W1773419151 @default.
- W177341915 hasOpenAccess W177341915 @default.
- W177341915 hasPrimaryLocation W1773419151 @default.
- W177341915 hasRelatedWork W1541261507 @default.
- W177341915 hasRelatedWork W1541790149 @default.
- W177341915 hasRelatedWork W199112251 @default.
- W177341915 hasRelatedWork W2002276695 @default.
- W177341915 hasRelatedWork W2029199293 @default.
- W177341915 hasRelatedWork W2122802944 @default.
- W177341915 hasRelatedWork W2122924390 @default.
- W177341915 hasRelatedWork W2188841161 @default.
- W177341915 hasRelatedWork W2401036325 @default.
- W177341915 hasRelatedWork W2539140714 @default.
- W177341915 hasRelatedWork W2547431347 @default.
- W177341915 hasRelatedWork W2746607154 @default.
- W177341915 hasRelatedWork W2758849341 @default.
- W177341915 hasRelatedWork W3132946990 @default.
- W177341915 hasRelatedWork W3159882232 @default.
- W177341915 hasRelatedWork W3177035523 @default.
- W177341915 hasRelatedWork W46679383 @default.
- W177341915 hasRelatedWork W99318944 @default.
- W177341915 hasRelatedWork W1599408644 @default.
- W177341915 hasRelatedWork W2970718645 @default.
- W177341915 isParatext "false" @default.
- W177341915 isRetracted "false" @default.
- W177341915 magId "177341915" @default.
- W177341915 workType "book-chapter" @default.
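The abstract stored above outlines a three-step data-mining process: data preprocessing, data mining and postprocessing. The sketch below only illustrates that flow on toy speech transcripts; the function names and the simple frequent-term mining step are hypothetical placeholders, not methods taken from the cited chapter.

```python
# Illustrative sketch of the preprocessing -> mining -> postprocessing flow
# described in the abstract. All names here are hypothetical.
from collections import Counter

def preprocess(raw_transcripts):
    """Step 1: transform raw data (toy speech transcripts) into a suitable format."""
    return [t.lower().split() for t in raw_transcripts]

def mine(token_lists, min_count=2):
    """Step 2: apply a simple analytical approach; here, find frequently occurring terms."""
    counts = Counter(tok for tokens in token_lists for tok in tokens)
    return {tok: n for tok, n in counts.items() if n >= min_count}

def postprocess(patterns):
    """Step 3: order the mined patterns for validation and interpretation."""
    return sorted(patterns.items(), key=lambda kv: kv[1], reverse=True)

transcripts = [
    "customer asks about billing issue",
    "customer reports billing error again",
]
print(postprocess(mine(preprocess(transcripts))))
```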