This demo is a simple example of how to search artworks either by textual description or by selecting a reference image, retrieving visually (and, thanks to the use of smart embeddings, also semantically) similar artworks.
In this demo we use the NoisyArt dataset, a challenging dataset designed to support research by the multimedia and computer vision communities on webly-supervised recognition of artworks. Thanks to its multi-modal nature, the dataset was also designed to support multi-modality learning and zero-shot learning. Our method has obtained state-of-the-art results on this dataset. Note that the system does not use any metadata for retrieval!
The dataset contains several real-world variations of the same artwork, including both professional and user-generated images, to evaluate the robustness of the system even in cases of strong differences in visual content. Since this is a search system, even when nothing relevant is available in the dataset, the most similar object will still be shown.
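The retrieval described above can be sketched as nearest-neighbor search over embedding vectors: the query (text or image) is mapped to an embedding and compared against the gallery by cosine similarity, so the closest item is always returned even when no truly relevant artwork exists. This is a minimal illustrative sketch, not the demo's actual code; the function name, embedding dimensionality, and toy data are all hypothetical.

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=3):
    # L2-normalize so that the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q
    # Indices of the k most similar gallery items, best first
    return np.argsort(-sims)[:k]

# Toy gallery of 5 artwork embeddings (4-D for illustration only)
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 4))
# A query that is a slightly perturbed copy of gallery item 2
query = gallery[2] + 0.01 * rng.normal(size=4)
print(retrieve(query, gallery, k=3))
```

Because the ranking is purely by similarity, the top result is whichever item is closest, regardless of whether it is actually relevant — matching the "most similar object will still be shown" behavior described above.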