Benjamin Bustos (University of Chile)
3D Object Retrieval: history and future challenges
3D object retrieval has been a very active research area during the last
decade, mainly because of the wide variety of practical application
domains that benefit from research in this area, for example cultural
heritage, archeology, medicine, entertainment, biology, chemistry,
industrial manufacturing, biometrics, etc. Early methods for 3D shape
matching and retrieval typically computed a descriptor (a so-called
feature vector) that described a 3D model globally. These feature vectors
could be used, for example, to implement a similarity search engine. Due
to their global nature, however, such descriptors were not well suited
for tasks like partial matching. In more recent years, a large amount of
research has concentrated on defining local descriptors for 3D models,
which take local characteristics into account to compute several feature
vectors per model. While most of the research in this area has aimed at
the effectiveness of the shape matching, efficiency remains an open
problem. In this talk, I will give an overview of historical and current
techniques for 3D Object Retrieval. I will also present the SHREC
datasets, the de facto standard for evaluating 3D Object Retrieval
algorithms. Finally, I will discuss the future challenges of this
fascinating research domain.
Jiri Matas (Czech Technical University in Prague)
Beyond Vanilla Visual Retrieval
The talk will start with a brief overview of the state of the art in visual
retrieval of specific objects. The core steps of the standard pipeline will be
introduced, and recent developments improving precision and recall, as well as
the memory footprint, will be reviewed.
Going off the beaten track, I will present a visual retrieval method applicable
when the query and reference images differ significantly in one or more
properties, such as illumination (day, night), the sensor (visible, infrared),
viewpoint, appearance (winter, summer), time of acquisition (historical,
current), or the medium (clear, hazy, smoky).
In the final part, I will argue that in image-based retrieval it is often more
interesting to look for the most *dissimilar* images of the same scene rather
than the most similar ones, as is conventionally done, since in large datasets
the most similar images are typically just near duplicates. As an example of
this problem formulation, I will present a method that efficiently searches for
images with the largest scale difference. A final demo will show, for instance,
that the method finds surprisingly fine details on landmarks, even ones that
are hardly noticeable to humans.
The presentation by Professor Jiri Matas is available here.