Diminution Reality Objects – Based on Image Depth

Authors

  • Sallama Resen, Vocational Education, Ministry of Education
  • Muthanna Ibrahim, AL-Iraqi University, College of Arts, History Department

DOI:

https://doi.org/10.61841/m5h7qp63

Keywords:

Augmented Reality, Diminished Reality, Depth Map, Planar Regions

Abstract

Augmented Reality (AR) applications are increasingly popular across industry, education, and marketing. By combining the real world with virtual objects, AR techniques allow applications to present and explain product information more effectively. Diminished Reality Objects (DRO) techniques visually remove real objects from AR scenes. Although interest in diminished reality is growing, most research in the area focuses on real–virtual consistency and on texture generation over a marker area. This paper instead addresses the preservation of depth consistency, using edges and planar regions to build a depth map as a basis for DRO methods. The depth mask is constructed in two stages that run concurrently, the Planarity and Boundary Depth techniques, and each stage is paired with an error measure that corrects its results immediately. The proposed method was evaluated on a dataset of RGB images acquired with a high-specification digital camera, and the experimental results demonstrate its validity under a variety of evaluation criteria.
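The abstract describes a depth mask built from two concurrent stages, one based on planarity and one on depth boundaries, each checked against an error measure. The sketch below is a minimal illustration of that idea, not the authors' implementation: the function names (`edge_map`, `plane_residual`, `depth_mask`) and thresholds are assumptions, the "Boundary Depth" stage is approximated with a simple depth-gradient test, and the "Planarity" stage with a least-squares plane fit whose per-pixel residual serves as the error measure.

```python
import numpy as np

def edge_map(depth, thresh=0.1):
    """Boundary-depth stage (sketch): flag pixels where depth changes sharply."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy) > thresh

def plane_residual(depth):
    """Planarity stage (sketch): fit one plane z = a*x + b*y + c by least
    squares and return the per-pixel absolute fitting error."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)], axis=1)
    coef, *_ = np.linalg.lstsq(A, depth.ravel().astype(float), rcond=None)
    fitted = (A @ coef).reshape(h, w)
    return np.abs(depth - fitted)

def depth_mask(depth, edge_thresh=0.1, plane_thresh=0.05):
    """Combine both stages: a pixel joins the planar mask only when it is not
    on a depth boundary and its plane-fit error is below the threshold."""
    return (~edge_map(depth, edge_thresh)) & (plane_residual(depth) < plane_thresh)
```

On a synthetic ramp (a single plane) the mask accepts every pixel, while a depth image containing a step edge is rejected along the boundary by both stages, mimicking how an error measure can veto each stage's output.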


References

[1] Keisuke Tateno, Federico Tombari, Iro Laina, and Nassir Navab. CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction. arXiv preprint arXiv:1704.03489, 2017.

[2] Steven M. Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, pages 519–528, 2006. doi:10.1109/CVPR.2006.19.

[3] Berthold K. P. Horn. Shape from shading: A method for obtaining the shape of a smooth, opaque object from one view. Technical report, MIT, Cambridge, MA, USA, 1970. URL: https://dspace.mit.edu/handle/1721.1/6885.

[4] Ruo Zhang, Ping-Sing Tsai, James Edwin Cryer, and Mubarak Shah. Shape from shading: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(8):690–706, 1999.

[5] Paolo Favaro and Stefano Soatto. A geometric approach to shape from defocus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3):406–417, 2005. doi:10.1109/TPAMI.2005.43.

[6] Supasorn Suwajanakorn, Carlos Hernandez, and Steven M. Seitz. Depth from focus with your mobile phone. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3497–3506, 2015.

[7] Andrea J. van Doorn, Jan J. Koenderink, and Johan Wagemans. Light fields and shape from shading. Journal of Vision, 11(3):21.1–21.21, 2011.

[8] Stefan Heber and Thomas Pock. Convolutional networks for shape from light field. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3746–3754, June 2016.

[9] Trung Thanh Ngo, Hajime Nagahara, and Rin-ichiro Taniguchi. Shape and light directions from shading and polarization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2310–2318, 2015.

[10] Achuta Kadambi, Vage Taamazyan, Boxin Shi, and Ramesh Raskar. Polarized 3D: High-quality depth sensing with polarization cues. In IEEE International Conference on Computer Vision, pages 3370–3378, 2015.

[11] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In IEEE International Conference on Computer Vision, pages 2650–2658, 2015.

[12] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In International Conference on Neural Information Processing Systems, volume 2, pages 2366–2374, 2014.

[13] Bo Li, Yuchao Dai, Huahui Chen, and Mingyi He. Single image depth estimation by dilated deep residual convolutional neural network and soft-weight-sum inference. arXiv preprint arXiv:1705.00534, 2017.

[14] Dan Xu, Elisa Ricci, Wanli Ouyang, Xiaogang Wang, and Nicu Sebe. Multi-scale continuous CRFs as sequential deep networks for monocular depth estimation. arXiv preprint arXiv:1704.02157, 2017.

[15] Christoph Strecha, Wolfgang Von Hansen, Luc Van Gool, Pascal Fua, and Ulrich Thoennessen. On benchmarking camera calibration and multi-view stereo for high-resolution imagery. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2008.


Published

31.05.2020

How to Cite

Resen, S., & Ibrahim, M. (2020). Diminution Reality Objects –based on Image Depth. International Journal of Psychosocial Rehabilitation, 24(3), 1215-1224. https://doi.org/10.61841/m5h7qp63