Diminished Reality Objects Based on Image Depth
DOI: https://doi.org/10.61841/m5h7qp63

Keywords: Augmented Reality, Diminished Reality, Depth Map, Planar Regions

Abstract
Augmented Reality (AR) applications are increasingly popular across industry, education, and marketing. By combining the real world with virtual objects, AR techniques enable applications to present product information more clearly and understandably. Diminished Reality Object (DRO) techniques visually remove real objects from AR scenes. While interest in diminished reality is growing, most research in the area focuses on real–virtual consistency and on texture synthesis over a marker region. This paper addresses the preservation of depth consistency at edges and across planar regions in order to build a depth map for DRO methods. The depth mask is constructed by two stages that run concurrently, the Planarity and Boundary Depth techniques, each paired with an error measure that immediately corrects that stage's results. The proposed method was evaluated on an RGB image dataset acquired with a high-quality digital camera, and the experimental results demonstrate its effectiveness under a variety of evaluation criteria.
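To make the two-stage idea concrete, the sketch below illustrates one plausible reading of it: a planarity check that fits a plane to each depth patch and uses the fit residual as the error measure, and a boundary check that compares depth gradients against image edges. This is a minimal illustration only, not the paper's implementation; all names (plane_fit_error, boundary_depth_error, build_depth_mask) and thresholds are hypothetical assumptions.

```python
# Minimal sketch of a planarity + boundary-depth mask, assuming a float
# depth map aligned with a BGR image. All names and tolerances here are
# hypothetical illustrations, not the authors' method.
import cv2
import numpy as np

def plane_fit_error(depth_patch):
    """Planarity stage: least-squares fit of z = ax + by + c to the
    patch; the RMS residual serves as the planarity error measure."""
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth_patch.ravel(), rcond=None)
    residual = depth_patch.ravel() - A @ coeffs
    return np.sqrt(np.mean(residual ** 2))

def boundary_depth_error(depth, edges):
    """Boundary stage: depth discontinuities should coincide with image
    edges, so the mean depth gradient away from edges is the error."""
    gx = cv2.Sobel(depth, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth, cv2.CV_64F, 0, 1, ksize=3)
    grad = np.hypot(gx, gy)
    return np.mean(grad[edges == 0])

def build_depth_mask(image, depth, patch=16, plane_tol=0.02):
    """Accept patches that pass the planarity check and report the
    boundary-depth error for the whole frame. plane_tol is in the
    depth map's units (an arbitrary choice for this sketch)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # assumes BGR input
    edges = cv2.Canny(gray, 50, 150)
    mask = np.zeros(depth.shape, dtype=np.uint8)
    for y in range(0, depth.shape[0] - patch, patch):
        for x in range(0, depth.shape[1] - patch, patch):
            d = depth[y:y + patch, x:x + patch]
            if plane_fit_error(d) < plane_tol:        # planar region
                mask[y:y + patch, x:x + patch] = 255  # keep this patch
    return mask, boundary_depth_error(depth, edges)
```

In the abstract's description the two stages run concurrently and their error measures correct each stage's results on the fly; the sketch runs them sequentially for clarity.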
License
Copyright (c) 2020 AUTHOR

This work is licensed under a Creative Commons Attribution 4.0 International License.