CENTRIST: Visual descriptor for indoor localization

Following Jianxin Wu’s work on the visual descriptor CENTRIST, I dug into his libHIK code to extract the interesting part: the CENTRIST descriptor computation. His code was a bit too hard-coded with parameters, tests and even database names!

In Computer Vision, one needs a way to describe the visual content of an image in numeric form. This numeric form should possess at least the properties of repeatability, robustness and comparability.

  • Repeatability – for similar image patches, the descriptor should be the same
  • Robustness – the descriptor should also stay the same for similar patches under some deformations
  • Comparability – we should be able to compute a similarity measure between any two descriptors

Wu showed in his PAMI article [HERE] that this CENTRIST-type descriptor is particularly well suited for indoor localization tasks. Other features like SURF, SIFT and some gist-like features are too sensitive to lighting changes … and do not capture the structural information!

This descriptor is very easy to compute, and the best metric to compare two descriptors is the Histogram Intersection kernel [LINK] (the exponential version of it is even better but requires a parameter). A rough sketch of both is given below.
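
Here is a minimal NumPy sketch of the idea: CENTRIST is the histogram of Census Transform values, and two such histograms can be compared with the Histogram Intersection kernel. The neighbor ordering, comparison direction and border handling below are my own illustrative choices and may differ from libHIK; treat it as a sketch, not Wu’s implementation.

    # A minimal sketch, assuming a grayscale image as a 2-D NumPy array.
    # Neighbor ordering, comparison direction and border handling are
    # illustrative choices and may differ from libHIK.
    import numpy as np

    def census_transform(gray):
        """Census Transform: each interior pixel becomes an 8-bit code,
        one bit per neighbor, set when the neighbor is <= the center."""
        g = gray.astype(np.int32)
        c = g[1:-1, 1:-1]                      # interior pixels, borders skipped
        ct = np.zeros_like(c)
        offsets = [(-1, -1), (-1, 0), (-1, 1),
                   ( 0, -1),          ( 0, 1),
                   ( 1, -1), ( 1, 0), ( 1, 1)]
        for bit, (dy, dx) in enumerate(offsets):
            nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            ct |= (nb <= c).astype(np.int32) << bit
        return ct

    def centrist(gray):
        """CENTRIST descriptor: 256-bin histogram of Census Transform values."""
        hist, _ = np.histogram(census_transform(gray), bins=256, range=(0, 256))
        return hist.astype(np.float64)

    def hik(h1, h2):
        """Histogram Intersection kernel: sum of element-wise minima."""
        return np.minimum(h1, h2).sum()

Comparing two images then reduces to hik(centrist(img1), centrist(img2)); a larger value means more similar histograms.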

Wu also showed that it is possible to use CENTRIST descriptors in a Bag of Features framework, and it gives very good performance. For example (a rough Python sketch follows the list):

  1. Compute CENTRIST descriptors for each image
  2. Create a visual vocabulary using k-means with the Histogram Intersection metric
  3. Obtain a BOF signature for each image by counting, for each visual word, how many descriptors are assigned to it
  4. Normalize each signature using TF-IDF scheme
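
To make the list above concrete, here is a rough sketch of that pipeline, reusing the centrist() helper from the previous snippet. It assumes each image contributes many descriptors (CENTRIST over dense sub-windows) and it substitutes standard Euclidean k-means (scikit-learn’s KMeans) for the Histogram-Intersection-metric k-means used by Wu, so treat it as an outline rather than a faithful reimplementation.

    import numpy as np
    from sklearn.cluster import KMeans

    def dense_patches(gray, size=16, stride=8):
        """Overlapping sub-windows, so each image yields many CENTRIST descriptors."""
        h, w = gray.shape
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                yield gray[y:y + size, x:x + size]

    def bof_signatures(images, n_words=200):
        # 1. CENTRIST descriptor for every patch of every image
        per_image = [[centrist(p) for p in dense_patches(img)] for img in images]
        all_desc = np.vstack([d for descs in per_image for d in descs])

        # 2. Visual vocabulary via k-means (Euclidean here; HIK metric in the paper)
        km = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

        # 3. BOF signature: count how many patches fall into each visual word
        counts = np.zeros((len(images), n_words))
        for i, descs in enumerate(per_image):
            for word in km.predict(np.asarray(descs)):
                counts[i, word] += 1

        # 4. TF-IDF normalization of the signatures
        tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
        idf = np.log(len(images) / np.maximum((counts > 0).sum(axis=0), 1))
        return tf * idf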

The CENTRIST descriptor extraction utility can be downloaded from [HERE].

I cannot guarantee that there are no bugs, but any comments or improvements are welcome.
