Yesterday: (Written up in yesterday's entry.)
Today: Added support for NaN values (i.e., for feature points that aren't always visible). Created test cases where feature points are visible in 80% of the frames (over a 5-frame sequence).
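A minimal sketch of the representation assumed here (names, shapes, and the motion model are illustrative, not the project's actual code): tracks stored as a frames-by-points-by-3 array, with NaN marking frames where a point isn't observed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tracks(n_frames=5, n_points=6, visibility=0.8):
    """Synthetic 3D feature tracks; each point is visible in roughly
    `visibility` of the frames, NaN elsewhere."""
    base = rng.uniform(-1.0, 1.0, size=(n_points, 3))  # initial positions
    tracks = np.empty((n_frames, n_points, 3))
    for f in range(n_frames):
        tracks[f] = base + 0.1 * f       # simple rigid translation per frame
        hidden = rng.random(n_points) > visibility      # hide ~20% per frame
        tracks[f, hidden] = np.nan
    return tracks

tracks = make_tracks()
visible = ~np.isnan(tracks[..., 0])      # (n_frames, n_points) mask
print(f"fraction visible: {visible.mean():.2f}")
```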
Notice how with SURF-only, we have only enough features to identify one book. Using more features extends our observations. By allowing features to go in and out of visibility (as we'll need later for occlusions), we have much more to work with, and can identify all 3 books. Unfortunately, the stationary wall/floor is not identified in this example.
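One way to see why intermittently visible features still help (a hedged sketch, not necessarily the method used here): two points on the same rigid object keep a near-constant pairwise distance, and that can be checked over whatever frames both points happen to be co-visible in. `rigidity_score` is a hypothetical helper.

```python
import numpy as np

def rigidity_score(tracks, i, j):
    """Variance of the i-j pairwise distance over frames where BOTH points
    are visible; low variance suggests they ride on the same rigid object."""
    vis = ~np.isnan(tracks[:, i, 0]) & ~np.isnan(tracks[:, j, 0])
    if vis.sum() < 2:
        return np.inf                    # too few co-visible frames to judge
    d = np.linalg.norm(tracks[vis, i] - tracks[vis, j], axis=1)
    return d.var()
```

Because the score only needs co-visible frames, points that never share a frame can still be linked transitively through a common neighbour, which is what gives the extra features their leverage.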
An asterisk indicates an unlabelled vertex. The resulting feature point labelling is overlaid on the original point cloud dataset here. Notice that the first and last frames have only 2 feature points for the Algorithms textbook, yet the labelling can still use the 4 points from the intermediary frames to label the book correctly.
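A small illustration of why the sparse end frames still come out labelled (assuming the data layout from the sketches above; `labels` is a hypothetical per-track assignment, e.g. from clustering on rigidity scores): the label attaches to the track rather than the frame, so it carries into any frame where the point reappears.

```python
import numpy as np

def labelled_points_per_frame(tracks, labels):
    """Per frame, list (label, xyz) for each visible point; tracks labelled
    from data-rich middle frames keep their labels in sparse frames too."""
    out = []
    for f in range(tracks.shape[0]):
        vis = np.flatnonzero(~np.isnan(tracks[f, :, 0]))
        out.append([(labels[p], tracks[f, p]) for p in vis])
    return out
```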
Roadblocks: How to identify (and generate) fundamental contributions for this work.
Where Does this Fit In: Adding support for points that are not always visible extends our ability to handle more involved scenes. Recall that the feature points themselves will help guide better object segmentation/reconstruction.