A challenging task in recognizing the speech of the hearing impaired using normal hearing models in classical Tamil language

  • Revathi Dhanabal, SASTRA University
  • Jeyalakshmi, K. Ramakrishnan College of Engineering
Keywords: Mel frequency perceptual linear predictive coefficients (MFPLP), vector quantization, clustering, centroids, recursive least squares (RLS) filtering, speech recognition, hearing impaired (HI), normal hearing (NH).

Abstract

Objective: We develop a system to recognize the speech of hearing impaired children using models trained on normal hearing speech.

Background: Hearing impaired speakers normally use sign language to communicate with others even though their vocal structure is intact. Their speech is highly distorted and may not be understood even by their teachers and parents. It is therefore necessary to develop a system for recognizing their speech, especially in their native language; in this work we consider Tamil speaking hearing impaired persons.

Method: The performance of the system is analysed by applying the speech of hearing impaired speakers directly to models developed from the speech of normal hearing speakers. The speech of hearing impaired speakers is used only for testing, so there is no need to build a hearing impaired training database, which is difficult to collect.
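As an illustration of this testing scheme, the sketch below (not the authors' exact implementation) trains a per-word vector quantization codebook on normal hearing feature frames and classifies a hearing impaired utterance by minimum average quantization distortion. The feature extractor extract_features and the data variables are hypothetical placeholders for the MFPLP front end.

```python
# Minimal sketch: classify a hearing-impaired utterance against per-word
# VQ codebooks trained only on normal-hearing speech (cluster size e.g. 256).
# extract_features(), nh_frames and hi_utterance are hypothetical placeholders.
import numpy as np
from scipy.cluster.vq import kmeans2

def train_codebook(feature_frames, codebook_size=256):
    """Cluster normal-hearing feature frames (n_frames x n_dims) into centroids."""
    centroids, _ = kmeans2(feature_frames.astype(float), codebook_size, minit='++')
    return centroids

def avg_distortion(frames, centroids):
    """Mean distance from each test frame to its nearest centroid."""
    dist = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=2)
    return dist.min(axis=1).mean()

def recognize(test_frames, codebooks):
    """Return the word whose normal-hearing codebook gives minimum distortion."""
    return min(codebooks, key=lambda word: avg_distortion(test_frames, codebooks[word]))

# Usage (hypothetical data):
# codebooks = {word: train_codebook(nh_frames[word]) for word in nh_frames}
# predicted = recognize(extract_features(hi_utterance), codebooks)
```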

Results: The normal speech selected using RLS filtering is subsequently applied to the models and the performance is evaluated. Recognition accuracy is 100% for the clustering model with a cluster size of 256.
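The abstract does not detail how the RLS filtering stage is configured for selecting the matching normal speech; as an illustration only, the sketch below shows a standard recursive least squares adaptive filter whose residual error could serve as such a match score. The tap count, forgetting factor, and the scoring use are assumptions.

```python
# Illustrative sketch of a standard recursive least squares (RLS) adaptive
# filter; filter length, forgetting factor and the match-scoring idea are
# assumptions, not the authors' documented configuration.
import numpy as np

def rls_filter(x, d, n_taps=8, lam=0.99, delta=1e-2):
    """Adapt FIR weights so the filtered input x tracks the desired signal d.
    Returns (weights, error); the error can serve as a similarity score."""
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta            # inverse correlation matrix estimate
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]         # most recent input samples
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e[n] = d[n] - w @ u               # a priori error
        w = w + k * e[n]                  # weight update
        P = (P - np.outer(k, u @ P)) / lam
    return w, e

# Usage (hypothetical): a smaller residual suggests a closer match between
# a normal-hearing utterance x and a hearing-impaired utterance d.
# _, err = rls_filter(nh_signal, hi_signal)
# score = np.mean(err ** 2)
```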

Conclusion: The recognized speech can be heard and interpreted clearly by normal hearing listeners, so the system can assist hearing impaired people at large and help improve their standing in society.

Author Biographies

Revathi Dhanabal, SASTRA University

Professor

SASTRA University

Jeyalakshmi, K. Ramakrishnan College of Engineering

Associate Professor

Department of ECE

References

Brian C.J. Moore, 2003, 'Speech processing for the hearing impaired: successes, failures, and implications for speech mechanisms', Speech Communication, vol. 41, pp. 81-91.

Craig W. Newman and Sharon A. Sandridge, 'Hearing loss is often undiscovered but screening is easy', Audiology Research Laboratory, Department of Otolaryngology and Communicative Disorders, The Cleveland Clinic Foundation.

Jeyalakshmi, C., Revathi, A. & Krishnamurthi, V. 2014, 'Development of speech recognition system in native language for hearing impaired', Journal of Engineering Research (Kuwait Journal of Science and Engineering), vol. 2, no. 2, pp. 81-99.

Jeyalakshmi, C., Revathi, A. & Krishnamurthi, V. 2015, 'Investigation of voice disorders and recognising the speech of children with hearing impairment in classical Tamil language', International Journal of Biomedical Engineering and Technology, vol. 17, no. 4, pp. 356-370.

Harry Levitt, 1971, 'Acoustic analysis of deaf speech using digital processing techniques', IEEE Fall Electronics Conference, Chicago.

Hynek Hermansky, Kazuhiro Tsuga, Shozo Makino and Hisashi Wakita, 1986, 'Perceptually based processing in automatic speech recognition', Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Tokyo, vol. 11, pp. 1971-1974.

Hynek Hermansky, Nelson Morgan, Aruna Bayya and Phil Kohn, 1991, 'The challenge of inverse E: the RASTA PLP method', Proc. Twenty-fifth IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, vol. 2, pp. 800-804.

Hynek Hermansky and Nelson Morgan, 1994, 'RASTA processing of speech', IEEE Transactions on Speech and Audio Processing, vol. 2, no. 4, pp. 578-589.

J.M. Pickett, 1969, 'Some applications of speech analysis to communication aids for the deaf', IEEE Transactions on Audio and Electroacoustics, vol. AU-17, no. 4.

Lim Sin Chee, Ooi Chia Ai, M. Hariharan and Sazali Yaacob, 2009, 'MFCC based recognition of repetitions and prolongations in stuttered speech using k-NN and LDA', IEEE Student Conference on Research and Development, Malaysia, pp. 146-149.

Murty, K.S.R. & Yegnanarayana, B. 2006, 'Combining evidence from residual phase and MFCC features for speaker recognition', IEEE Signal Processing Letters, vol. 13, no. 1, pp. 52-55.

Nickerson, R.S., Stevens, K.N., Boothroyd, A. and Rollins, A. 1974, 'Some observations on timing in the speech of deaf and hearing speakers', Report No. 2905.

Prasad D. Polur & Gerald E. Miller, 2005, 'Effect of high frequency spectral components in computer recognition of dysarthric speech based on a Mel-cepstral stochastic model', Journal of Rehabilitation Research and Development, vol. 42, no. 3, pp. 363-372.

Prasad D. Polur & Gerald E. Miller, 2006, 'Investigation of an HMM/ANN hybrid structure in pattern recognition application using cepstral analysis of dysarthric (distorted) speech signals', Medical Engineering & Physics, vol. 28, pp. 741-748.

Pickett, J.M. and Costam, A. 1968, 'A visual speech trainer with simplified indication to vowel spectrum', American Annals of the Deaf, vol. 113, pp. 253-258.

Rabiner, L.R. and Juang, B.H. 1993, Fundamentals of Speech Recognition, Prentice Hall, New Jersey.

Revathi, A. & Venkataramani, Y. 2009, 'Perceptual features based isolated digit and continuous speech recognition using iterative clustering approach', First International Conference on Networks and Communications, IEEE, pp. 155-160.

Revathi, A. & Venkataramani, Y. 2011, 'Speaker independent continuous speech and isolated digit recognition using VQ and HMM', International Conference on Communications and Signal Processing (ICCSP), IEEE, pp. 198-202.

Revathi, A. & Venkataramani, Y. 2012, 'Evaluate multi-speaker isolated word recognition using concatenated perceptual feature', EE Times-India, pp. 1-10.

Ruscello, D.M., Sholtis, D.M. and Moreau, V.K. 1980, 'Adult's awareness of certain articulatory gestures', Perceptual and Motor Skills, vol. 50, no. 3(2), pp. 1156-1158.

Tin Lay Nwe, Say Wei Foo and Liyanage C. De Silva, 2003, 'Speech emotion recognition using hidden Markov models', Speech Communication, vol. 41, pp. 603-623.

Yoshinori Yamada, Hector Javkin and Karan Youdelman, 2000, 'Assistive speech technology for persons with speech impairments', Speech Communication, vol. 30, pp. 179-187.

Published
2017-08-01
Section
Electrical Engineering