GENETICALLY OPTIMIZED GAIN RATIO FEED FORWARD NEURAL NETWORK ALGORITHM FOR REVIEW OPINION CLASSIFICATION

Authors

  • Dr. Helen Josephine V L, Associate Professor, Department of Computer Applications, CMR Institute of Technology, Bengaluru
  • Dr. V.S Prakash, Assistant Professor, Department of Computer Applications, Kristu Jayanthi College, Bengaluru
  • Dr. Aruna Devi M, Professor, Department of Computer Applications, Cambridge Institute of Technology, Bengaluru

DOI:

https://doi.org/10.61841/kv6dcw06

Keywords:

Genetic Algorithm, Classification, Machine Learning, Neural Network, Opinion Mining

Abstract

Opinion mining and sentiment analysis have become significant and vital areas of web content mining and text mining, attracting considerable research attention over the past decade. This paper identifies the importance of genetic optimization and applies it to a neural network classifier, the Gain Ratio Feedforward Neural Network algorithm (GR-FFNN). The proposed Genetically Optimized GR-FFNN algorithm tunes two vital training parameters, momentum and learning rate, using one of the leading soft computing approaches, the genetic algorithm. The genetically optimized GR-FFNN algorithm was evaluated on mobile learning app reviews, classifying each review into one of three classes based on the opinions extracted and their polarity. The methodology, architecture, and results of the proposed Genetically Optimized Gain Ratio Feedforward Neural Network (Genetically Optimized GR-FFNN) algorithm are discussed in detail.
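The parameter-tuning idea described above can be sketched in outline: a genetic algorithm evolves a population of (learning rate, momentum) pairs, scoring each pair by the resulting classifier's validation accuracy. The sketch below is illustrative only and does not reproduce the paper's GR-FFNN; the `fitness` function is a hypothetical stand-in for training the network and measuring accuracy, and all population sizes and rates are assumed values, not the authors'.

```python
import random

def fitness(learning_rate, momentum):
    # Hypothetical stand-in for "train GR-FFNN with these parameters and
    # return validation accuracy"; here a toy surface peaking near
    # lr = 0.1, momentum = 0.9 (illustrative only).
    return 1.0 - (learning_rate - 0.1) ** 2 - (momentum - 0.9) ** 2

def genetic_optimize(pop_size=20, generations=30, mutation_rate=0.2, seed=42):
    rng = random.Random(seed)
    # Each chromosome encodes the two parameters being tuned:
    # learning rate in (0, 1], momentum in [0, 0.99].
    pop = [(rng.uniform(0.001, 1.0), rng.uniform(0.0, 0.99))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(*c), reverse=True)
        elite = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a[0], b[1])                    # crossover: mix parents
            if rng.random() < mutation_rate:        # Gaussian mutation
                child = (min(max(child[0] + rng.gauss(0, 0.05), 0.001), 1.0),
                         min(max(child[1] + rng.gauss(0, 0.05), 0.0), 0.99))
            children.append(child)
        pop = elite + children                      # elitism: keep best half
    return max(pop, key=lambda c: fitness(*c))

best_lr, best_momentum = genetic_optimize()
```

Because the elite survive unchanged each generation, the best fitness is non-decreasing, which is the property that makes this style of search attractive for tuning momentum and learning rate instead of a manual grid search.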



Published

30.04.2020

How to Cite

Josephine V L, H., Prakash, V., & Devi M, A. (2020). GENETICALLY OPTIMIZED GAIN RATIO FEED FORWARD NEURAL NETWORK ALGORITHM FOR REVIEW OPINION CLASSIFICATION. International Journal of Psychosocial Rehabilitation, 24(2), 4429-4441. https://doi.org/10.61841/kv6dcw06