NEURAL NETWORK CLASSIFIER FOR CONTINUOUS SIGN LANGUAGE RECOGNITION WITH SELFIE VIDEO
This work brings sign language recognition closer to real-time implementation on mobile platforms, using a video database of Indian sign language captured with a mobile front camera in selfie mode. Pre-filtering, segmentation, and feature extraction on the video frames create a sign language feature space. A feed-forward artificial neural network (ANN) classifier is trained and tested on this feature space. An ASUS smartphone with a 5-megapixel front camera captures continuous sign videos of 18 single-handed signs at a frame rate of 30 fps, containing an average of 220 frames per video. The Sobel edge operator is strengthened with morphological operations and adaptive thresholding, giving near-perfect segmentation of the hand and head regions. Performance is measured by the word matching score (WMS): the proposed method achieves an average WMS of around 90% for the ANN, with an execution time of 0.5221 seconds during classification.
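The segmentation stage described above (Sobel edges, adaptive thresholding, morphology) can be sketched as follows. This is a minimal, dependency-free illustration, not the paper's implementation: the kernel sizes, the local-mean threshold offset, and the toy 10x10 "frame" are all illustrative assumptions.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

def adaptive_threshold(img, block=3, offset=2):
    """Mark a pixel foreground if it exceeds its local mean plus an offset."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = 1 if img[y][x] > sum(vals) / len(vals) + offset else 0
    return out

def dilate(mask):
    """3x3 morphological dilation: thicken edges toward solid regions."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if any(
                mask[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))) else 0
    return out

# Toy 10x10 grayscale frame: a bright 2x2 blob (standing in for the hand)
# on a dark background.
frame = [[0] * 10 for _ in range(10)]
for y in (4, 5):
    for x in (4, 5):
        frame[y][x] = 200

mask = dilate(adaptive_threshold(sobel_magnitude(frame)))
print(sum(map(sum, mask)))  # count of segmented foreground pixels
```

In a real pipeline the same three steps would typically run per frame via an image library (e.g. OpenCV's `Sobel`, `adaptiveThreshold`, and `morphologyEx`), with the resulting hand/head masks feeding the feature-extraction stage.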
Keywords: Indian sign language, Sobel adaptive threshold, morphological differencing, artificial neural networks, word matching score.