Visual Deep Sea Mapping and Quantitative Imaging Using Cameras

Deep Quanticams
General information
On land and in space, visual mapping (quantitative imaging, photogrammetry, structure-from-motion) is a key technology applied from satellites, airplanes and by ground surveyors; more recently, vision technology has also been employed in drones, autonomous cars and robots. In contrast, visual mapping of the ocean (floor) is far less developed, because of limited visibility, attenuation and scattering of light, as well as the need for more complex observation models due to refraction at the interfaces of camera housings. Additionally, since GPS is unavailable underwater, localization in the deep sea is very challenging: acoustic techniques often suffer from systematic errors and absolute position uncertainties of tens or hundreds of meters in practice. Communication is limited, and the absence of natural light requires bringing artificial light sources, which, when mounted on a robot, induce challenging moving/dynamic illumination and introduce backscatter problems. Consequently, while there is ample visual coverage of the Moon or the surface of Mars, we lack systematic visual maps and detailed 3D models of the ocean floor. The goal of this proposal is to push forward the limits of computer vision for the deep sea and to enable cameras as quantitative underwater measurement instruments. This will enable many scientific applications in the ocean sciences and ultimately allow autonomous underwater vehicles to explore and map the largest uncharted territory on Earth: the deep sea.
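The refraction problem mentioned above can be illustrated with Snell's law at a flat camera port: a ray crosses air, glass and water, and the bending grows with the incidence angle, so the standard single-viewpoint pinhole model no longer holds underwater. The following is a minimal sketch; the two-interface flat-port geometry and the refractive indices are illustrative assumptions, not the project's actual calibration model:

```python
import math

def refract(theta_i, n1, n2):
    """Snell's law: angle of refraction (radians) when passing from a
    medium with index n1 into a medium with index n2, or None on
    total internal reflection."""
    s = n1 / n2 * math.sin(theta_i)
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)

# Approximate refractive indices (assumed for illustration):
N_AIR, N_GLASS, N_WATER = 1.000, 1.490, 1.333

def in_water_angle(theta_air):
    """Direction of a camera ray after crossing a flat air-glass-water port."""
    t_glass = refract(theta_air, N_AIR, N_GLASS)
    return refract(t_glass, N_GLASS, N_WATER)

# The deviation from the in-air direction grows with the viewing angle,
# which is why rays no longer meet in a single effective viewpoint:
for deg in (5, 20, 40):
    t = math.radians(deg)
    print(f"{deg}° in air -> {math.degrees(in_water_angle(t)):.2f}° in water")
```

Note that for a flat port with parallel interfaces the glass index cancels out, so the net bending is the same as a direct air-to-water transition; the glass only shifts the ray laterally, which is one reason dedicated underwater camera models are needed.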
In particular, the project will advance automated visual mapping of the ocean floor from photo or video, including camera calibration, finding corresponding points in underwater images, underwater multi-view geometry, image registration, visual localization, seafloor surface geometry estimation and faithful color correction, in order to facilitate the next generation of ocean mapping and measurement applications. The objectives are:
- to derive geometric calibration techniques for deep sea cameras and to investigate multi-view relations for underwater cameras,
- to derive photometric calibration techniques that enable distance-independent color monitoring (in particular for moving light sources),
- to assess and extend image feature matching in the context of backscatter, blur and dynamic illumination,
- to extend visual surface estimation to the case of moving light sources,
- to obtain the "true" surface colors from several attenuated underwater photos,
- to extend location recognition to the deep sea case,
and, finally, to apply the theoretical findings to current ocean sciences research questions such as the genesis of black smokers, habitat mapping, gas seepage from the seafloor, resource estimation or impact assessment for deep sea mining.
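The objective of recovering "true" surface colors can be sketched with the commonly used simplified underwater image formation model, in which the observed color is the true color attenuated exponentially with distance plus a backscatter ("veiling light") term. The per-channel coefficients below are assumed values for illustration only; the project's actual photometric model (moving light sources, deep sea conditions) is considerably more involved:

```python
import math

# Assumed per-channel attenuation coefficients [1/m] and veiling light;
# red is attenuated much more strongly than green or blue in water.
BETA = {"r": 0.40, "g": 0.10, "b": 0.07}
VEIL = {"r": 0.05, "g": 0.35, "b": 0.45}

def attenuate(true_color, d):
    """Forward model: observed = true*exp(-beta*d) + veil*(1 - exp(-beta*d))."""
    return {c: true_color[c] * math.exp(-BETA[c] * d)
               + VEIL[c] * (1 - math.exp(-BETA[c] * d))
            for c in true_color}

def restore(observed, d):
    """Invert the model when the camera-to-scene distance d is known,
    e.g. from a 3D reconstruction of the seafloor."""
    return {c: (observed[c] - VEIL[c] * (1 - math.exp(-BETA[c] * d)))
               * math.exp(BETA[c] * d)
            for c in observed}

true = {"r": 0.8, "g": 0.5, "b": 0.3}
obs = attenuate(true, 3.0)   # over 3 m of water the red channel fades strongly
rec = restore(obs, 3.0)      # recovers the original color given the distance
```

This inversion is only well-posed when distance, attenuation and backscatter are known, which is precisely why the project couples geometry estimation with photometric calibration.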
Duration: June 2019 – May 2022
Funding (total)
Funding (GEOMAR)
Funding body / Programme: DFG / Emmy Noether
Helmholtz-Zentrum für Ozeanforschung Kiel (GEOMAR), Germany