Abstract
This paper presents a depth estimation method that leverages rich representations learned by cascaded convolutional and fully connected neural networks operating on a patch-pooled set of feature maps. Our method is very fast and substantially improves depth accuracy over state-of-the-art alternatives; from the estimated depth, we computationally reconstruct an all-focus image and achieve synthetic re-focusing, all from a single image. Experiments on the Make3D and NYU-v2 benchmark datasets show superior performance: our method reduces the root-mean-squared error by 57% and 46% relative to other available depth estimation methods, and outperforms blur removal methods by 0.36 dB and 0.72 dB in PSNR, respectively. We further demonstrate this improvement on real defocus images.
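The abstract's pipeline of a convolutional stage, a patch-pooling stage, and a fully connected regression stage can be illustrated with a minimal sketch. This is not the authors' implementation; all layer sizes, the `PatchPooledDepthNet` name, and the choice of adaptive average pooling as the patch-pooling operator are hypothetical assumptions made only to show the overall structure described in the abstract.

```python
# Minimal sketch (not the paper's network): a generic cascade of convolutional
# layers, a patch-pooling stage, and fully connected layers that regress a
# coarse per-patch depth map. All layer sizes are hypothetical.
import torch
import torch.nn as nn


class PatchPooledDepthNet(nn.Module):
    def __init__(self, patch_grid=4):
        super().__init__()
        # Convolutional stage: learns feature maps from the input image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Patch pooling: summarize the feature maps over a fixed grid of patches.
        self.patch_pool = nn.AdaptiveAvgPool2d(patch_grid)
        # Fully connected stage: map pooled features to one depth value per patch.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * patch_grid * patch_grid, 256), nn.ReLU(inplace=True),
            nn.Linear(256, patch_grid * patch_grid),
        )
        self.patch_grid = patch_grid

    def forward(self, x):
        f = self.features(x)
        p = self.patch_pool(f)
        d = self.regressor(p)
        # Reshape to a coarse depth map, one value per patch.
        return d.view(-1, 1, self.patch_grid, self.patch_grid)


if __name__ == "__main__":
    net = PatchPooledDepthNet()
    image = torch.randn(1, 3, 224, 224)   # dummy RGB input
    coarse_depth = net(image)              # shape: (1, 1, 4, 4)
    print(coarse_depth.shape)
```

In practice, such a coarse per-patch estimate would be refined and upsampled to image resolution; those steps are omitted here.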
| Original language | English |
|---|---|
| Title of host publication | British Machine Vision Conference 2017, BMVC 2017 |
| Publisher | BMVA Press |
| ISBN (Electronic) | 190172560X, 9781901725605 |
| State | Published - 2017 |
| Externally published | Yes |
| Event | 28th British Machine Vision Conference, BMVC 2017 - London, United Kingdom. Duration: 4 Sep 2017 → 7 Sep 2017 |
Publication series
| Name | British Machine Vision Conference 2017, BMVC 2017 |
|---|---|
Conference
| Conference | 28th British Machine Vision Conference, BMVC 2017 |
|---|---|
| Country/Territory | United Kingdom |
| City | London |
| Period | 4/09/17 → 7/09/17 |
Bibliographical note
Publisher Copyright: © 2017. The copyright of this document resides with its authors.
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition