The JavaScript `+` operator is used for both numeric addition and string concatenation. Numeric parameters were not correctly converted from strings to numbers, which caused a slight misconfiguration of the extractor. The problem was fixed in version 0.2.
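A minimal illustration of the pitfall (the function name and parameters are hypothetical, not the actual demo code): values read from an HTML form are strings, so `+` concatenates them instead of adding, and an explicit conversion is needed before arithmetic.

```javascript
// Buggy variant: if baseOffset arrives from a form as a string,
// `+` performs string concatenation instead of addition.
function gridOrigin(baseOffset, windowSize) {
  return baseOffset + windowSize / 2;
}

console.log(gridOrigin("10", 4)); // → "102" (concatenation, not 12)

// The kind of fix applied in version 0.2: convert parameters explicitly.
function gridOriginFixed(baseOffset, windowSize) {
  return Number(baseOffset) + Number(windowSize) / 2;
}

console.log(gridOriginFixed("10", 4)); // → 12
```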
This demo presents a feature extraction method that captures color and texture information from an image and produces adaptive signatures for similarity search models, where distances such as SQFD or EMD can be used. The method and its parallel implementation for GPUs are presented in our publication Kruliš M., Lokoč J., Skopal T.: Efficient Extraction of Clustering-Based Feature Signatures Using GPU Architectures, in Multimedia Tools and Applications, 2015. We are currently transforming the code so that it can be used as an OpenCV module.
The demo is also a proof of concept that goes against the current trends in web applications. We propose to offload computations from the servers (or the cloud) to end users by performing computationally demanding tasks in the browser. In this case, we claim that in a web application that collects images from its users, the feature extraction process can be performed by the browser while the image is being uploaded. More details can be found in the paper Kruliš M.: Is There a Free Lunch for Image Feature Extraction in Web Applications, accepted for publication in Similarity Search and Applications, Glasgow, Springer, pp. 283-294, 2015.
Martin Kruliš is an assistant professor at the Department of Software Engineering, Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic. He is a member of the Similarity Retrieval research group and the Parallel Architectures/Algorithms/Applications Research Group. His research interests cover mainly parallel programming, (multimedia) databases, similarity search, and web technologies.
This work was supported by the Czech Science Foundation (GAČR), grant no. P103-14-14292P.
The demo is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The extractor has many parameters which affect the sampling method and the subsequent k-means clustering. We review these parameters briefly; for more details, please refer to the publications mentioned above. The default values are designed to produce decent results for most images.
|image width and height||the size in pixels to which the image is resampled first|
|keep aspect ratio||if checked, the aspect ratio of the image width and height is maintained (the image is resampled so that it does not exceed the specified width and height values)|
|sampling points||the number of initial samples taken from the image (Gaussian sampling points are currently used); these sampled features become the input for the clustering|
|FS weights||weights (multiplicative constants) that linearly stretch the individual axes of the feature space (x, y = position; L, a, b = color in the CIE Lab space; c = contrast; e = entropy)|
|greyscale bits||the color resolution of the greyscale bitmap expressed in allocated bits (e.g., a value of 4 means that 16 shades of grey are used); the greyscale bitmap is used for computing the contrast and entropy values|
|sampling window||the size of the texture sampling window used to compute contrast and entropy (the center of the window is always at the pixel selected by the x, y coordinates of the corresponding feature sample)|
|iterations||the number of iterations of the k-means clustering; a fixed number of iterations is used, since the modified clustering prunes clusters (rather than iteratively refining k clusters)|
|seeds||the number of initial seeds (the initial number of clusters) for the k-means algorithm|
|join threshold||a threshold Euclidean distance between two centroids; if two cluster centers are closer than this distance, one of the centroids is dismissed and its points are reassigned|
|cluster min. size||this parameter multiplied by the iteration index gives the lower limit for the cluster size; clusters containing fewer points than this limit have their centroid dismissed and their points reassigned|
The signatures may be downloaded in a simple text format, which is specified as follows. Each signature is placed on one row as a sequence of comma-separated values. The first three values are the image name, the signature length (N), and the feature-space dimension (7 in our case). These three values are followed by N blocks of 8 numbers, where each octuplet represents one feature sample (i.e., one cluster). The first value of each octuplet is the weight of that cluster/feature and the subsequent seven numbers are the feature coordinates (x, y, L, a, b, c, e).
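A reader of this format can be sketched directly from the specification above (the function name and the returned object shape are illustrative, not part of the demo):

```javascript
// Parse one row of the signature text format described above:
// name, N, dim, then N blocks of (weight + dim coordinates).
function parseSignature(row) {
  const fields = row.split(",").map(f => f.trim());
  const name = fields[0];
  const length = parseInt(fields[1], 10);  // N = number of clusters
  const dim = parseInt(fields[2], 10);     // feature-space dimension (7 here)
  const clusters = [];
  for (let i = 0; i < length; i++) {
    const base = 3 + i * (dim + 1);        // each block has dim + 1 numbers
    clusters.push({
      weight: parseFloat(fields[base]),
      coords: fields.slice(base + 1, base + dim + 1).map(parseFloat),
    });
  }
  return { name, dim, clusters };
}
```

For example, a row with two clusters would carry 3 + 2 × 8 = 19 comma-separated values.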