Performance Analysis of Different Classifiers for American Sign Language Recognition
Rajesh B. Mapari1, Govind Kharat2
1Prof. Rajesh B. Mapari, Department of Electronics & Telecommunication Engineering, Anuradha Engineering College, Chikhli, India.
2Prof. Govind Kharat, Principal, Sharadchandra Pawar College of Engineering, Otur, India
Manuscript received on February 09, 2016. | Revised Manuscript received on February 15, 2016. | Manuscript published on March 05, 2016. | PP: 90-95 | Volume-6 Issue-1, March 2016. | Retrieval Number: A2789036116/2016©BEIESP
© The Authors. Published By: Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Recognizing American Sign Language (ASL) alphanumeric characters without embedded sensors, colored gloves, or environmental constraints is a genuinely difficult task. This paper describes a novel method of static sign recognition using a Leap Motion sensor, with a feature set derived from hand position and from the distances and angles between different points of the hand. The feature set is then trained and tested with different classifiers: MLP (Multilayer Perceptron), GFFNN (Generalized Feedforward Neural Network), and SVM (Support Vector Machine). We collected data from 146 people, mostly students aged 20-22 years along with a few adults aged 28-38 years, each of whom performed 32 signs, giving a total dataset of 4672 signs. Of this, 90% is used for training and 10% for testing/cross-validation. The MLP neural network achieved the maximum classification accuracy of 90% on the testing/cross-validation dataset.
Keywords: ASL, MLP, GFFNN, SVM.
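The abstract describes a feature set built from hand position together with distances and angles between different points of the hand. The sketch below illustrates one plausible way such features could be computed from 3-D points reported by a Leap Motion device; the specific point choices (palm center plus five fingertips) and feature layout are assumptions for illustration, not the authors' exact feature set.

```python
import math

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle(p, vertex, q):
    """Angle in degrees at `vertex` formed by points p and q."""
    v1 = [a - b for a, b in zip(p, vertex)]
    v2 = [a - b for a, b in zip(q, vertex)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def hand_features(palm, fingertips):
    """Hypothetical feature vector: palm-to-tip distances,
    consecutive tip-to-tip distances, and angles at the palm
    between consecutive fingertips."""
    feats = [distance(palm, tip) for tip in fingertips]
    feats += [distance(fingertips[i], fingertips[i + 1])
              for i in range(len(fingertips) - 1)]
    feats += [angle(fingertips[i], palm, fingertips[i + 1])
              for i in range(len(fingertips) - 1)]
    return feats
```

With five fingertips this yields a 13-dimensional vector (5 palm distances, 4 inter-tip distances, 4 angles), which could then be fed to an MLP, GFFNN, or SVM classifier as in the paper's 90/10 train/test protocol.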