VIVA: Visual Media Reasoning (VMR)


Project Info

Title: Building detection, content-based image retrieval, segmentation, feature voting
Goal: Develop image analysis techniques to extract the identities and classifications of objects.
Funding: Information Innovation Office (I2O), Defense Advanced Research Projects Agency (DARPA)
VIVA Team Members: Scott Acton, Kevin Skadron, Sedat Ozer, Rituparna Sarkar

Purpose

Automatically extract the identities of people, objects, and other entities from images or video using appropriate image analysis algorithms.

Discussion

This project, funded by the Defense Advanced Research Projects Agency (DARPA), aims to develop a prototype system that demonstrates the capability of content-based image retrieval (CBIR). The prototype will also demonstrate the ability to dynamically select the appropriate image analysis algorithms based on query type and image contents. The project is motivated by the risk that, in the unstructured environment the VMR program envisions, a query image will not meet the requirements of any available directed recognition algorithm, producing either a high rate of false negatives or a high volume of unusable, false-positive results that do not help the analyst. To address this, the prototype contains a meta-algorithm, combining segmentation and classification, that selects which segments to analyze and which algorithms to apply in the analysis.

Fig. 1: Segmentation example
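
As a concrete illustration of the segmentation stage shown in Fig. 1, here is a minimal Matlab sketch of a generic color-based segmentation step. It uses k-means clustering of pixel colors as a simple stand-in; the file name and number of regions are assumptions, and the project's distributed code may use different algorithms.

    % Minimal color-based segmentation sketch (illustrative stand-in, not the
    % project's distributed code).
    img = im2double(imread('example.jpg'));    % hypothetical input image
    [h, w, c] = size(img);
    pixels = reshape(img, h*w, c);             % one row of color values per pixel
    k = 4;                                     % assumed number of regions
    labels = kmeans(pixels, k);                % Statistics Toolbox k-means
    segmentation = reshape(labels, h, w);      % label map with values 1..k
    imagesc(segmentation); axis image off;     % display the segmented regions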

Furthermore, the prototype will demonstrate the ability to "plug in" new image analysis algorithms and image databases without requiring modifications to other software modules. The CBIR approach has the powerful advantage of employing multiple algorithms to maximize the likelihood of identifying different types of features, and it also benefits from a meta-algorithm that directs the set of analysis steps. This leads to a self-nomination paradigm in which, for any analysis task (such as segmenting an image, or classifying or recognizing an object), a large pool of algorithms self-score their suitability for the query image, and the highest-scoring candidates are selected to run. Identification of individual buildings forms another important part of the project. Specifically, we hypothesize that scale-invariant feature detection (i.e., SIFT and the new morphological alternatives proposed here), combined with a novel global-local feature detection algorithm, can accurately identify buildings even when the candidates are very similar.
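
The self-nomination paradigm can be sketched in Matlab as below. Each candidate algorithm exposes a score function that rates its own suitability for the query image, and the meta-algorithm runs the highest-scoring candidate. The candidate names, the scoring rules, and the file name are assumptions made for illustration; they are not the project's actual modules.

    % Self-nomination sketch (illustrative assumptions, not the project's code).
    % Each candidate carries a 'score' handle (self-assessed suitability for the
    % query image) and a 'run' handle (the analysis it performs if selected).
    candidates = { ...
        struct('name', 'edgeSeg', ...
               'score', @(I) mean2(edge(rgb2gray(I), 'canny')), ...
               'run',   @(I) edge(rgb2gray(I), 'canny')), ...
        struct('name', 'threshSeg', ...
               'score', @(I) graythresh(rgb2gray(I)), ...
               'run',   @(I) im2bw(rgb2gray(I), graythresh(rgb2gray(I)))) ...
        };
    I = imread('query.jpg');                        % hypothetical query image
    scores = cellfun(@(c) c.score(I), candidates);  % each algorithm self-scores
    [~, best] = max(scores);                        % winner-take-all nomination
    result = candidates{best}.run(I);               % apply the selected algorithm
    fprintf('Selected %s (score %.3f)\n', candidates{best}.name, scores(best));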

Publications

Links

  • Final Software [.zip]
    This compressed file includes all the files listed in the final report.
  • Segmentation [.zip]
    Matlab implementation of some segmentation algorithms. Example included.
  • Self-Nomination by MKL: the code with example [.zip]
    Matlab implementation of the feature-nomination example with MKL. The code uses three input files: (1) RenamedSiftHistograms1.mat, (2) RenamedHOGHistograms1.mat, and (3) RenamedRGBHistograms1.mat.
    These three input files contain the Bag-of-Features (Bag-of-Words) histograms for a subset of the ImageNet data set. Each file contains a 30-by-300 cell array, where 30 is the number of categories and 300 is the number of images in each category. Each histogram is 1000-dimensional. If you would like to add more images to a category, you will need to use the BoW centroids that are also included on this page.
    HOW TO RUN THE CODE: Download all three histogram files into the same folder as "FeatureSelectionwithMKL3.m" and run "getresults.m", which calls FeatureSelectionwithMKL3.m internally. The code has been tested on a Windows-based system; on other operating systems you may need to compile the .c files (from SimpleMKL) with your own system's compiler. In the MKL code, Gaussian kernels are used, with each kernel formed from a different kernel parameter for each feature type. A total of 10 kernel parameters are used per feature type, so for 3 feature types 3*10 = 30 kernel matrices are computed (see the sketch after this list). The code uses a modified version of SimpleMKL.
  • RenamedSiftHistograms1.mat
    The input file for the MKL example (includes the SIFT-based Bag-of-Features histograms).
  • RenamedHOGHistograms1.mat
    The input file for the MKL example (includes the HOG-based Bag-of-Features histograms).
  • RenamedRGBHistograms1.mat
    The input file for the MKL example (includes the color histograms).
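
The Matlab sketch below illustrates the kernel setup described for the MKL example above: the three histogram files are loaded, and for each feature type ten Gaussian kernel matrices with different bandwidths are formed, giving 3*10 = 30 kernels. The variable names inside the .mat files, the bandwidth grid, the histogram orientation (row vectors), and the subsampling step are assumptions; in the distributed code these matrices are handled by the modified SimpleMKL solver invoked by getresults.m.

    % Kernel-construction sketch for the MKL example (an illustration of the
    % setup, not the distributed code; assumptions are noted in the comments).
    S = load('RenamedSiftHistograms1.mat');
    H = load('RenamedHOGHistograms1.mat');
    C = load('RenamedRGBHistograms1.mat');
    feats = {S, H, C};                        % three feature types

    nGamma = 10;                              % 10 kernel parameters per feature type
    gammas = logspace(-3, 1, nGamma);         % assumed bandwidth grid
    K = cell(1, numel(feats) * nGamma);       % 3 * 10 = 30 kernel matrices
    nPerCat = 20;                             % subsample so the sketch stays small
    idx = 0;
    for f = 1:numel(feats)
        fn = fieldnames(feats{f});            % assumed: one variable per file,
        Hc = feats{f}.(fn{1});                % a 30-by-300 cell of 1-by-1000 histograms
        Hc = Hc(:, 1:nPerCat);                % keep the first nPerCat images per category
        X  = cell2mat(Hc(:));                 % stack into an (images x 1000) matrix
        D2 = pdist2(X, X).^2;                 % squared Euclidean distances
        for g = 1:nGamma
            idx = idx + 1;
            K{idx} = exp(-gammas(g) * D2);    % Gaussian kernel matrix
        end
    end
    % K{1..30} would then be passed to the modified SimpleMKL code.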