Development of a Speech Recognition System for the Hearing Impaired in Their Native Language



This paper presents the performance of a speech recognition system evaluated on children with normal hearing and children with hearing impairment. Although the nasal and oral cavities of the hearing impaired are typically intact, they cannot produce intelligible speech because they cannot hear: the ability to understand language and to produce speech is coordinated by the brain, so a person with damage to the ear, or with impaired brain function due to an accident, stroke, or birth defect, may have difficulty producing speech. Based on the degree of hearing ability, such persons are classified as profoundly deaf or hard of hearing. Early detection of deafness enables the hearing impaired to learn to produce sounds through speech therapy; if deafness is detected at a later stage, it is difficult to make their speech understandable. It is therefore necessary to develop a system for recognizing their speech, especially in the native language. In this paper, a system is developed for the Tamil language using Mel-frequency cepstral coefficient (MFCC) feature extraction at the front end and the Hidden Markov Model Toolkit (HTK) at the back end. The system is evaluated, and a comparison is made between the speech of normal speakers and that of the hearing impaired. Recognition accuracy is 92.4% for hearing-impaired speech and 98.4% for normal speech. Although it is difficult for unfamiliar listeners to understand hearing-impaired speech, this system can be used by others to recognize the speech of the hearing impaired.
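The front end described above is standard MFCC extraction. As a rough illustration of that pipeline (pre-emphasis, framing and windowing, power spectrum, triangular mel filterbank, log compression, DCT), a self-contained NumPy sketch follows. The frame length, hop, FFT size, and filterbank parameters below are illustrative assumptions, not the configuration actually used in the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_mels=26, n_ceps=13):
    # 1. Pre-emphasis boosts high frequencies
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Slice into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(emph) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emph[idx] * np.hamming(frame_len)
    # 3. Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 4. Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 5. Log filterbank energies
    log_e = np.log(power @ fbank.T + 1e-10)
    # 6. DCT-II decorrelates; keep the first n_ceps coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.arange(n_ceps)[:, None] * (2 * n + 1)
                 / (2.0 * n_mels))
    return log_e @ dct.T  # shape: (n_frames, n_ceps)
```

For a one-second signal at 16 kHz these defaults yield 98 frames of 13 coefficients each; toolkits such as HTK typically append energy plus delta and acceleration coefficients on top of these static features.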


Mel-frequency cepstral coefficients (MFCC), speech recognition (SR), deaf or hearing-impaired speech, hidden Markov model (HMM), perceptual linear prediction (PLP), Hidden Markov Model Toolkit (HTK), American Sign Language (ASL)





