Matches in SemOpenAlex for { <https://semopenalex.org/work/W2168698906> ?p ?o ?g. }
- W2168698906 endingPage "45" @default.
- W2168698906 startingPage "45" @default.
- W2168698906 abstract "In recent years, the spread of acquisition devices such as digital cameras, advances in storage and transmission techniques, and the success of tablet computers have facilitated the creation of many large image databases as well as richer interactions with their users. This thesis [1] deals with the problem of Content-Based Image Retrieval (CBIR) on these huge masses of data. Traditional CBIR systems generally rely on three phases: feature extraction, feature space structuring and retrieval. In this thesis, we are particularly interested in the structuring phase (usually called the indexing phase), which plays a very important role in finding information in large databases. This phase aims at organizing the visual feature descriptors of all images into an efficient data structure in order to facilitate, accelerate and improve further retrieval. We assume that the feature extraction phase is completed and that the image feature descriptors, which are usually low-level features describing the color, shape, texture, etc. of all images, are available. Instead of traditional structuring methods, we study clustering methods, which organize image descriptors into groups of similar objects (clusters) without any constraint on the cluster size. The aim is to obtain an indexed structure better adapted to the retrieval of high-dimensional and unbalanced data. The clustering process can be performed without prior knowledge (unsupervised clustering) or with a limited amount of prior knowledge (semi-supervised clustering). Due to the “semantic gap” between the high-level semantic concepts expressed by the user via the query and the low-level features automatically extracted from the images, the clustering results, and therefore the retrieval results, generally differ from the wishes of the user. 
In this thesis, we propose to involve the user in the clustering phase so that he/she can interact with the system to improve the clustering results, and thus the performance of further retrieval. The idea is as follows. Firstly, images are organized into clusters by an initial clustering. Then, the user visualizes the clustering result and provides feedback to the system in order to guide the re-clustering phase. The system then re-organizes the dataset using not only the similarity between objects, but also the feedback given by the user, in order to reduce the semantic gap. The interactive loop can be iterated until the clustering result satisfies the user. In the case of large database indexing, we assume that the user has no prior knowledge about the image database. Therefore, an unsupervised clustering method is suitable for the initial clustering, when no supervised information is available yet. Then, after receiving the user feedback in each interactive iteration, a semi-supervised clustering can be used for the re-clustering process. Based on a deep study of the state of the art of different unsupervised clustering methods [4] as well as semi-supervised clustering approaches [2, 3], we propose in this thesis a new interactive semi-supervised clustering model [3] involving the user in the clustering phase in order to improve the clustering results. From the formal analysis of different unsupervised clustering methods [4], we chose to experiment with the methods that appear most suitable for an incremental context involving the user in the clustering stage. The hierarchical BIRCH unsupervised clustering (Zhang et al., 1996), which gave the best performance in these experiments [4], is chosen as the initial clustering in our model. 
Then, an interactive loop, in which the user provides feedback to the system and the system re-organizes the image database using the new semi-supervised clustering method proposed in this thesis, is iterated until the clustering result satisfies the user. As the user has no prior knowledge about the image database, it is difficult for him/her to label the clusters, or the images in the clusters, with class names. Therefore, we provide the user with an interactive interface allowing him/her to easily visualize the clustering result and give feedback to the system. Based on the majority of the images displayed for each cluster, the user can specify, with a few simple clicks, relevant or non-relevant images for each cluster. The user can also drag and drop images between clusters in order to change the cluster assignment of some images. Supervised information is then deduced from the user feedback and used in the re-clustering phase by the proposed semi-supervised clustering method. According to our study of the state of the art of different semi-supervised clustering methods, supervised information may consist of class labels for some objects or of pairwise constraints (must-link or cannot-link) between objects. The experimental analysis of different semi-supervised clustering methods in the interactive context [2, 3] shows that HMRF-kmeans (Basu et al., 2004), which uses pairwise constraints, outperforms the other methods. Inspired by the HMRF-kmeans method, we propose a new semi-supervised clustering method [3] for the re-clustering process. Instead of using pairwise constraints between images, our method uses pairwise constraints between the leaf entries (CF entries) of the BIRCH tree as supervised information for guiding the re-organization of the CF entries in the re-clustering phase. 
As each CF entry groups a set of similar images, pairwise constraints between images can be replaced by a smaller number of pairwise constraints between CF entries without reducing the quality of the supervised information. The processing time can therefore be reduced without decreasing the performance. In our model, after receiving user feedback in each interactive iteration, pairwise constraints are deduced based not only on the user feedback but also on neighbourhood information. Neighbourhood information groups images according to the willingness of the user to classify them in the same clusters (via the user feedback of all interactive iterations). This kind of information helps to maximize the supervised information (pairwise constraints) gained from the same number of user clicks. In order to avoid a subjective dependence of the clustering results on the human user, a software agent simulating the behaviour of the human user in providing feedback to the system is used for the experimental analysis of our system on different image databases of increasing sizes (Wang, PascalVoc2006, Caltech101, Corel30k). Moreover, different strategies for deducing pairwise constraints from user feedback and neighbourhood information were investigated. Among these strategies, the strategy which keeps only the most “difficult” constraints (must-links between the most distant objects and cannot-links between the closest objects) was shown to give the best trade-off between performance and processing time. Furthermore, the experimental results show that our model helps to improve the clustering results by involving the user, and that our semi-supervised clustering outperforms HMRF-kmeans in both performance and processing time. Note that our clustering structure can be used not only for facilitating further image retrieval, but also for helping navigation in large image databases. 
Moreover, in this thesis, we propose a 2D interface for visualizing the group structure of high-dimensional image databases." @default.
- W2168698906 created "2016-06-24" @default.
- W2168698906 creator A5051376143 @default.
- W2168698906 creator A5051777602 @default.
- W2168698906 creator A5067931703 @default.
- W2168698906 creator A5089473504 @default.
- W2168698906 date "2014-06-07" @default.
- W2168698906 modified "2023-10-03" @default.
- W2168698906 title "Towards an interactive index structuring system for content-based image retrieval in large image databases" @default.
- W2168698906 cites W121311982 @default.
- W2168698906 cites W1499599687 @default.
- W2168698906 cites W1519486656 @default.
- W2168698906 cites W1528775006 @default.
- W2168698906 cites W1546703457 @default.
- W2168698906 cites W1549660424 @default.
- W2168698906 cites W1566114229 @default.
- W2168698906 cites W1591394246 @default.
- W2168698906 cites W1673310716 @default.
- W2168698906 cites W1917380066 @default.
- W2168698906 cites W1946309384 @default.
- W2168698906 cites W1947400014 @default.
- W2168698906 cites W1965680834 @default.
- W2168698906 cites W1969294188 @default.
- W2168698906 cites W1978441807 @default.
- W2168698906 cites W1980420376 @default.
- W2168698906 cites W1992419399 @default.
- W2168698906 cites W2005838285 @default.
- W2168698906 cites W2007364060 @default.
- W2168698906 cites W2007995029 @default.
- W2168698906 cites W2012064098 @default.
- W2168698906 cites W2014932484 @default.
- W2168698906 cites W2016381774 @default.
- W2168698906 cites W2016405781 @default.
- W2168698906 cites W2017927472 @default.
- W2168698906 cites W2030644393 @default.
- W2168698906 cites W2033403400 @default.
- W2168698906 cites W2039051707 @default.
- W2168698906 cites W2040274959 @default.
- W2168698906 cites W2040786062 @default.
- W2168698906 cites W2050576295 @default.
- W2168698906 cites W2059432853 @default.
- W2168698906 cites W2066799613 @default.
- W2168698906 cites W2073308541 @default.
- W2168698906 cites W2073435644 @default.
- W2168698906 cites W2077371116 @default.
- W2168698906 cites W2086936614 @default.
- W2168698906 cites W2088698696 @default.
- W2168698906 cites W2090042335 @default.
- W2168698906 cites W2091503252 @default.
- W2168698906 cites W2091623617 @default.
- W2168698906 cites W2094360362 @default.
- W2168698906 cites W2099524725 @default.
- W2168698906 cites W2102662521 @default.
- W2168698906 cites W2106642566 @default.
- W2168698906 cites W2110867479 @default.
- W2168698906 cites W2111308925 @default.
- W2168698906 cites W2111679734 @default.
- W2168698906 cites W2115657355 @default.
- W2168698906 cites W2115729625 @default.
- W2168698906 cites W2118269922 @default.
- W2168698906 cites W2119100458 @default.
- W2168698906 cites W2119605622 @default.
- W2168698906 cites W2126626732 @default.
- W2168698906 cites W2127218421 @default.
- W2168698906 cites W2134089414 @default.
- W2168698906 cites W2138079527 @default.
- W2168698906 cites W2138584058 @default.
- W2168698906 cites W2138615112 @default.
- W2168698906 cites W2145087912 @default.
- W2168698906 cites W2145725688 @default.
- W2168698906 cites W2147232928 @default.
- W2168698906 cites W2151135734 @default.
- W2168698906 cites W2153233077 @default.
- W2168698906 cites W2155776210 @default.
- W2168698906 cites W2155906060 @default.
- W2168698906 cites W2160063258 @default.
- W2168698906 cites W2160642098 @default.
- W2168698906 cites W2164547069 @default.
- W2168698906 cites W2165558283 @default.
- W2168698906 cites W2167912153 @default.
- W2168698906 cites W2238624099 @default.
- W2168698906 cites W2374263241 @default.
- W2168698906 cites W2901608006 @default.
- W2168698906 cites W2943363290 @default.
- W2168698906 cites W3016960874 @default.
- W2168698906 cites W3099514962 @default.
- W2168698906 cites W65738273 @default.
- W2168698906 cites W87092222 @default.
- W2168698906 doi "https://doi.org/10.5565/rev/elcvia.618" @default.
- W2168698906 hasPublicationYear "2014" @default.
- W2168698906 type Work @default.
- W2168698906 sameAs 2168698906 @default.
- W2168698906 citedByCount "0" @default.
- W2168698906 crossrefType "journal-article" @default.
- W2168698906 hasAuthorship W2168698906A5051376143 @default.
- W2168698906 hasAuthorship W2168698906A5051777602 @default.
- W2168698906 hasAuthorship W2168698906A5067931703 @default.
- W2168698906 hasAuthorship W2168698906A5089473504 @default.
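The abstract of this work describes deducing must-link/cannot-link pairwise constraints from the user's per-cluster relevance clicks, and a strategy that keeps only the most "difficult" constraints (must-links between the most distant objects, cannot-links between the closest ones). As a rough illustration of those two ideas only (the function names, the feedback dictionary layout and the plain Euclidean distance are assumptions for this sketch, not the thesis code), this might look like:

```python
from itertools import combinations
import math

def deduce_constraints(feedback):
    """Turn per-cluster relevance feedback into pairwise constraints.

    feedback maps a cluster id to the images the user marked as
    relevant or non-relevant for that cluster.
    """
    must_link, cannot_link = [], []
    for marks in feedback.values():
        # images confirmed as belonging together -> must-link
        for a, b in combinations(marks["relevant"], 2):
            must_link.append((a, b))
        # a confirmed image and a rejected one -> cannot-link
        for a in marks["relevant"]:
            for b in marks["non_relevant"]:
                cannot_link.append((a, b))
    return must_link, cannot_link

def hardest_constraints(must_link, cannot_link, features, k):
    """Keep only the k most 'difficult' constraints: must-links between
    the most distant pairs, cannot-links between the closest pairs."""
    dist = lambda pair: math.dist(features[pair[0]], features[pair[1]])
    return (sorted(must_link, key=dist, reverse=True)[:k],
            sorted(cannot_link, key=dist)[:k])
```

In the thesis itself the constraints relate the leaf CF entries of the BIRCH tree rather than individual images; the same selection logic would apply with entry centroids in place of per-image feature vectors.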