Dimensionality Reduction using Deep Learning Techniques
DOI: https://doi.org/10.61841/d3j4vc49

Keywords: Autoencoder, Dimensionality Reduction, Principal Component Analysis (PCA), Reconstruction of Data

Abstract
Dimensionality reduction is an effective way to scale down data. It is a technique that attempts to project a set of high-dimensional vectors onto a lower-dimensional space while preserving the distances among them. Machine learning and data mining methods can fail on high-dimensional data because of the curse of dimensionality, and query accuracy and efficiency degrade rapidly as the dimensionality increases. Dimensionality reduction is therefore a common preprocessing step for feature extraction, classification, and other tasks. Training a classifier on low-dimensional inputs is fast. More importantly, dimensionality reduction can help learn a better classifier, particularly when the data has a low-dimensional structure, and with small datasets, where it has a regularizing effect that helps avoid overfitting. This paper examines how a deep learning technique, the autoencoder, performs dimensionality reduction on various datasets, and compares it with the traditional machine learning method, Principal Component Analysis (PCA).
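The two methods compared in the paper can be sketched side by side. The snippet below is a minimal illustration (not the paper's actual implementation): it builds synthetic data that lies near a 2-D subspace of a 10-D space, reduces it to two dimensions with PCA computed via the SVD, and then with a small linear autoencoder trained by batch gradient descent. All names, sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points near a 2-D subspace of a 10-D space
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# --- PCA via the SVD of the centered data ---
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T           # project onto the top-k principal directions
X_pca = Z @ Vt[:k] + mu     # reconstruct back in the original space
pca_err = np.mean((X - X_pca) ** 2)

# --- Linear autoencoder (10 -> 2 -> 10) trained by gradient descent ---
d = X.shape[1]
W_enc = 0.1 * rng.normal(size=(d, k))   # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))   # decoder weights
lr = 0.02
for _ in range(3000):
    H = Xc @ W_enc                      # encode to k dimensions
    R = H @ W_dec - Xc                  # reconstruction residual
    W_dec -= lr * (H.T @ R) / len(Xc)   # gradient step on decoder
    W_enc -= lr * (Xc.T @ (R @ W_dec.T)) / len(Xc)  # and on encoder
ae_err = np.mean((Xc @ W_enc @ W_dec - Xc) ** 2)

print(f"PCA reconstruction MSE:         {pca_err:.6f}")
print(f"Autoencoder reconstruction MSE: {ae_err:.6f}")
```

With only linear activations the autoencoder's optimum spans the same subspace PCA finds, so the two reconstruction errors end up close; the advantage of deep autoencoders discussed in the paper comes from adding nonlinear activations and more layers, which can capture structure PCA cannot.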
License

This work is licensed under a Creative Commons Attribution 4.0 International License.