Image Processing Based Language Converter for Deaf and Dumb
Authors: Dushyant Dhapte; Ishwar Bendre; Santosh Aher

 

In the present world it is very difficult for deaf and dumb people to talk with ordinary people. Communication with them becomes impossible unless ordinary people like us learn sign language. The sign language of the deaf and dumb is quite difficult, and not everybody can learn it, so not every person can share their thoughts with these physically impaired people. We have therefore developed a system that enables the deaf and dumb to communicate with everyone. In our system a webcam is placed in front of the physically impaired person, who wears colored rings on his fingers. When he makes the gesture for an alphabet, the webcam captures the exact positions of the rings, and image processing with color recognition determines the coordinates of the colors. The captured coordinates are matched against previously stored ones, and the corresponding alphabet is recognized. Continuing in this way, the physically impaired person can spell out the entire sentence he wants to communicate. The sentence is then translated into speech so that it is audible to everyone.
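The color-recognition and coordinate-matching steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the RGB tolerance threshold, and the nearest-template matching rule are all assumptions.

```python
import numpy as np

def find_ring_centroid(frame_rgb, target_rgb, tol=40):
    """Centroid (x, y) of pixels within `tol` of one ring colour (RGB frame assumed)."""
    diff = np.abs(frame_rgb.astype(int) - np.array(target_rgb, dtype=int)).max(axis=2)
    ys, xs = np.nonzero(diff <= tol)
    if xs.size == 0:
        return None  # this ring is not visible in the frame
    return int(xs.mean()), int(ys.mean())

def match_alphabet(coords, templates):
    """Map the observed ring coordinates to the stored alphabet with the closest layout."""
    best, best_dist = None, float("inf")
    for letter, ref in templates.items():
        # sum of squared distances between observed and stored ring positions
        d = sum((cx - rx) ** 2 + (cy - ry) ** 2
                for (cx, cy), (rx, ry) in zip(coords, ref))
        if d < best_dist:
            best, best_dist = letter, d
    return best
```

For example, a frame containing one red ring yields that ring's pixel centroid, and the set of centroids for all rings is compared against the stored per-alphabet coordinates to pick the recognized letter.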

 

Published In: IJCAT Journal, Volume 2, Issue 2

Date of Publication: March 2015

Pages: 47 - 51

Figures: 4

Tables: 2

Publication Link: Image Processing Based Language Converter for Deaf and Dumb


Dushyant Dhapte : Dept of Computer, Sandip Institute of Engineering & Management, Mahiravani, Nashik

Ishwar Bendre : Dept of Computer, Sandip Institute of Engineering & Management, Mahiravani, Nashik

Santosh Aher : Dept of Computer, Sandip Institute of Engineering & Management, Mahiravani, Nashik


Sign Language

Image Processing

Machine Learning

Our project aims to bridge this gap by introducing an inexpensive computer into the communication path, so that sign language can be automatically captured, recognized and translated to speech for the benefit of blind people. In the other direction, speech is analyzed and converted to either sign or a textual display on the screen for the benefit of the hearing impaired.
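The speech-output leg of that pipeline can be sketched as below: recognized letters are buffered into a sentence and handed to a text-to-speech engine. The word-break token and the choice of pyttsx3 as an offline TTS engine are illustrative assumptions, not details from the paper.

```python
def assemble_sentence(letters, space_token="SPACE"):
    """Join recognised letters into words; a dedicated gesture inserts a word break."""
    words, current = [], []
    for tok in letters:
        if tok == space_token:
            if current:
                words.append("".join(current))
                current = []
        else:
            current.append(tok)
    if current:
        words.append("".join(current))
    return " ".join(words)

def speak(sentence):
    """Read the assembled sentence aloud (pyttsx3 assumed as the TTS backend)."""
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(sentence)
    engine.runAndWait()
```

Once a sentence such as "HELLO WORLD" has been assembled letter by letter, `speak()` makes it audible to anyone nearby, closing the loop the system is built for.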

