Advanced Topics in Computer Vision - Final Project
3D Reconstruction
Erika Harrison
University of Calgary, 2013
For the course project, I used 2D feature matching on the RGB portion of an RGB-D image to determine relative transformations between frames and iteratively construct a dense point cloud. Modern approaches, which employ normal comparison for improved fitting alongside other techniques (e.g. KinectFusion), generate a better resulting surface mesh. The objective of this approach, however, was to understand the techniques and pitfalls behind the core fundamentals of merging RGB-D photos. Understanding these fundamentals may help with implementing or improving how moving RGB-D frames can be used to build 3D models of dynamic objects (e.g. wildlife captured by RGB-D wildlife cameras).
The implementation drew on a number of libraries and techniques, including the Kinect SDK; SIFT, SURF, and other feature detectors; RANSAC; the Normal Distributions Transform (NDT); Iterative Closest Point (ICP) for alignment; and the Point Cloud Library (PCL), all in C++.