In the research world there is currently a lot of activity around ground-based LiDAR data capture, with a number of companies mounting rigs onto vans and driving around cities. We are not convinced by LiDAR per se: the data output is overwhelming, and it is limited to high-end hardware.
The University of North Carolina at Chapel Hill and the University of Kentucky have been working on techniques to create 3D models directly from video, negating the need for a LiDAR rig. Although it is early days (indeed, we are at the dawn of this automatic modelling technology), the results are encouraging.
Embedded below is a model of the Chapel Hill campus created using the technique:
In the words of Jan-Michael Frahm, Research Assistant Professor on the project, the research aims to develop a system for automatic, geo-registered, real-time 3D reconstruction from video of urban scenes.
From 2005 to 2007 the team developed a system that collects video streams, together with GPS and inertial measurements, in order to place the reconstructed models in geo-registered coordinates.
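To give a flavour of what geo-registration involves, here is a minimal sketch of one common approach: fitting a similarity transform (scale, rotation, translation) that maps reconstructed camera positions onto their GPS coordinates, then applying it to the whole model. This is a standard Procrustes/Umeyama-style alignment, not necessarily the team's exact method, and the function names are our own.

```python
import numpy as np

def align_to_gps(cam_centres, gps_positions):
    """Estimate the similarity transform (scale s, rotation R, translation t)
    that maps reconstructed camera centres onto GPS positions in the
    least-squares sense (Umeyama-style Procrustes alignment)."""
    X = np.asarray(cam_centres, float)
    Y = np.asarray(gps_positions, float)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # guard against a reflection solution
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Xc ** 2).sum()
    t = my - s * R @ mx
    return s, R, t

def apply_transform(points, s, R, t):
    """Map reconstructed model points into the geo-registered frame."""
    return s * (np.asarray(points, float) @ R.T) + t
```

In practice the GPS track is noisy, so the fit would be done robustly over many camera positions, with the inertial measurements helping to constrain orientation.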
It is designed around current state-of-the-art real-time modules for all processing steps, employing commodity graphics hardware and standard CPUs to achieve real-time performance.
The second video embedded below provides an overview of the process:
The system extends existing algorithms to meet the robustness and variability necessary to operate outside the lab. To account for the large dynamic range of outdoor video, the processing pipeline estimates global camera gain changes in the feature-tracking stage and efficiently compensates for them during stereo estimation, without impacting real-time performance.
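The gain-compensation idea can be sketched very simply: if the camera's global gain changes between frames, the intensities of tracked feature patches change by a common factor, which can be estimated by least squares and divided out before stereo matching. This is an illustrative simplification of the idea, not the project's actual implementation.

```python
import numpy as np

def estimate_gain(prev_patches, curr_patches):
    """Estimate the global gain change between two frames from the
    intensities of matched feature patches. Least-squares fit of
    curr ~ g * prev gives g = <prev, curr> / <prev, prev>."""
    p = np.asarray(prev_patches, float).ravel()
    c = np.asarray(curr_patches, float).ravel()
    return float((p @ c) / (p @ p))

def compensate(frame, gain):
    """Divide out the estimated gain so the stereo stage compares
    photometrically consistent intensities across frames."""
    return np.asarray(frame, float) / gain
```

Because the estimate is a single dot-product ratio over already-tracked features, it adds essentially no cost to the pipeline.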
The required accuracy for many applications is achieved with a two-step stereo reconstruction process exploiting the redundancy across frames.
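One way to exploit that cross-frame redundancy in a second pass is to fuse several fast per-frame depth estimates of the same view, keeping only depths that enough frames agree on. The sketch below uses a median with an agreement threshold; the `min_support` and `tol` parameters are hypothetical, and this is our own illustration of the general idea rather than the paper's exact scheme.

```python
import numpy as np

def fuse_depth_maps(depth_maps, min_support=3, tol=0.05):
    """Fuse per-frame depth maps of the same view: take the median depth
    across frames per pixel, and keep it only where at least min_support
    frames agree within a relative tolerance; otherwise mark the pixel
    invalid (NaN)."""
    stack = np.stack([np.asarray(d, float) for d in depth_maps])
    med = np.median(stack, axis=0)
    # Count how many frames agree with the median at each pixel.
    support = (np.abs(stack - med) <= tol * med).sum(axis=0)
    return np.where(support >= min_support, med, np.nan)
```

The median rejects the occasional gross outlier from a single noisy frame, which is how redundancy across frames buys accuracy without a more expensive single-frame stereo pass.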
Will we eventually see 3D modelling via consumer digital cameras with integrated machine vision? We think this could well happen within the next ten years…
See the UrbanScape website for more details.