American Standard Sign Language Representation Using Speech Recognition


1 Attar Shahebaj, 2 Deth Krupa, 3 Jadhav Madhavi
1,2,3 E&TC Dept., BVCOEW, Pune, India

Abstract: For many deaf people, sign language is the principal means of communication, which can increase their isolation from the hearing world. This paper presents a system prototype that automatically recognizes speech in order to support more effective communication with hearing- or speech-impaired people. The system recognizes the speech signal, and the recognized spoken words are represented in American standard sign language via a robotic arm and also on the computer using Visual Basic. A software package is provided to convert the speech signal (which carries no meaning for a deaf listener) into sign language. The main purpose of this project is to bridge the communication and expression gap between people who cannot understand sign language and deaf and mute people who cannot understand normal speech.

Keywords: Speech recognition, Mel frequency cepstral coefficients, Robotic arm.

I. INTRODUCTION

SIGN LANGUAGE: Sign language is the language used by deaf and mute people. It is a combination of shapes and movements of different parts of the body, chiefly the face and hands. The movements may be performed anywhere from well above the head down to belt level. Signs are used in a sign language to communicate words and sentences to an audience. A gesture in a sign language is a particular movement of the hands with a specific shape made out of them; facial expressions also count toward the gesture. A posture, on the other hand, is a static shape of the hand that indicates a sign. A sign language usually provides signs for whole words, and it also provides signs for letters, to spell out words that do not have a corresponding sign in that language.
So, although sentences can be made using the signs for letters, performing whole-word signs is faster. The sign language chosen for this project is American Sign Language.

AMERICAN STANDARD SIGN LANGUAGE: American Sign Language (ASL) is among the most well documented and most widely used sign languages in the world. It is a complex visual-spatial language used by the Deaf community in the United States and English-speaking parts of Canada. It is a linguistically complete, natural language, and it is the native language of many Deaf men and women, as well as of some hearing children born into Deaf families. ASL shares no grammatical similarities with English and should not be considered in any way a broken, mimed, or gestural form of English. ASL enables deaf and mute people to speak with their hands, which stand in for the tongue. The signs are shown below.

II. LITERATURE REVIEW

American standard sign language instruction was promoted around 1980 in the United States and many other countries to provide education for people who have difficulty speaking and hearing. ASL enables deaf and mute people to speak with their hands, which stand in for the tongue.

Chuan Xie, Xiaoli Cao, and Lingling He [1] observe that feature extraction has a great effect on audio training and recognition in an audio recognition system. MFCC is a typical feature extraction method with stable performance and a high recognition rate. Because MFCC requires a large amount of computation, they introduce an improved algorithm, MFCC_E, whose computation is reduced by 50% compared with standard MFCC, making hardware implementation easier.

Mohd Aliff, Shujiro Dohta, Tetsuya Akagi, and Hui Li [2] propose master-slave attitude control and trajectory control of a flexible robot arm. The arm has three degrees of freedom (bending, expanding, and contracting) and is intended for a rehabilitation device for the human wrist. The master-slave control system is needed when a physical therapist wants to give a rehabilitation motion to a patient, while the trajectory control system is needed when a sequential rehabilitation motion is applied to a patient.

Jordan Fenlon, Adam Schembri, Ramas Rentelis, and Kearsy Cormier [3] investigate phonological variation in British Sign Language (BSL) signs produced with the '1' hand configuration in citation form. Multivariate analyses of 2,084 tokens reveal that hand shape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, and lexical frequency).
The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important.

Dan Song, Xuan Wang, and Xinxin Xu [4] present a Chinese Sign Language (CSL) synthesis system for mobile devices, which can translate messages or other text information into Chinese sign language in real time. The system mainly includes two parts: construction of a motion database and text-driven sign language synthesis. Construction of the motion database is a prerequisite for the synthesis system; its motion quality, motion data structure, and motion data size together determine the final performance. The authors also introduce a method for motion smoothing in the synthesis module.

Daphne Bavelier, David P. Corina, and Helen J. Neville [5] note that the misconception that sign languages are a concatenation of universally understood pantomimes or concrete gestures has been hard to uproot. In fact, just as there are many spoken languages, there are many unique and different signed languages. Recent advances in linguistics have revealed that sign languages such as ASL encompass the same abstract capabilities as spoken languages and contain all the levels of linguistic representation found in spoken language, including phonology, morphology, syntax, semantics, and pragmatics.

III. SYSTEM BLOCK DIAGRAM

Fig 1 Block Diagram of System (Transmitter and Receiver)

The block diagram shows the overall system to be implemented. The system operates close to real time: it takes speech input from the microphone and converts it to synthesized output or finger spelling. Speech recognition is implemented for the considered languages, language models are used to resolve ambiguities, and finger spelling synthesis is implemented. A language model in MATLAB converts the recognized speech into text, which is passed to the ARM controller; an LCD on the controller displays the recognized words for verification. A robotic arm is connected to the ARM controller to represent the vocal language in sign language, and the arm uses servo motors for its movement. The speech recognition system reads the speech samples and passes them to the processing unit, which consists of a recognition unit with a symbol generator that analyzes the speech signal and produces an equivalent coded symbol for the recognized speech.

IV. SPEECH RECOGNITION BY MFCC

On the PC, speech taken from the microphone is recognized by a speech recognition algorithm in MATLAB. The recognized speech is then transferred serially to the ARM7 controller, to which the servo motors are interfaced; according to the recognized letter or word, the servos move the hand assembly to form the corresponding sign. Speech is one of the oldest and most natural means of information exchange between human beings: we speak and listen to each other in the human-to-human interface, and for centuries people have tried to develop machines that can understand speech as humans do so naturally. For feature extraction we have used MFCC coefficients, because MFCC resembles the human ear in many ways.
The response of the human ear is approximately linear below 1 kHz and logarithmic above 1 kHz, and the MFCC mel scale has the same behavior, so MFCC resembles the human auditory system.
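The perceptual scale just described (roughly linear below 1 kHz, logarithmic above) can be sketched in a few lines. This is an illustrative snippet using the standard mel formula, not part of the authors' MATLAB implementation:

```python
import math

def hz_to_mel(f_hz):
    """Map a linear frequency (Hz) to the mel scale:
    mel = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping, used when placing mel filter-bank edges in Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# Below 1 kHz the scale is close to linear; above 1 kHz it compresses.
print(round(hz_to_mel(500), 1))    # roughly 607 mel
print(round(hz_to_mel(1000), 1))   # roughly 1000 mel
print(round(hz_to_mel(8000), 1))   # roughly 2840 mel
```

Note that 1000 Hz maps to almost exactly 1000 mel, which is the calibration point of the scale; doubling the frequency above that point adds far less than 1000 mel, reflecting the logarithmic region.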

Mel-Frequency Transformation: According to psychophysical studies, human perception of frequency content is not linear: it is approximately linear up to 1000 Hz and logarithmic above 1000 Hz. This behavior is captured by the formula

Mel-frequency = 2595 * log10(1 + linear frequency / 700)

Mel frequency cepstral coefficients: Each region of the basilar membrane acts as a filter bank. There are more receptors for frequencies from 0 to 1 kHz, and their number decreases rapidly thereafter; mel filter banks are used to simulate this.

Fig 2 Block diagram of speech recognition by MFCC

V. SPECIFICATIONS
1] Input power supply: 12 V, 1 A
2] Processor: ARM7 LPC2138
3] LCD: 16x2
4] Serial communication: RS232
5] Servo motor: VTS050A
6] MATLAB for the MFCC algorithm
7] Embedded C for ARM7 coding

VI. APPLICATIONS
1] Public places such as airports, railway stations, and the counters of banks and hotels, where communication takes place between different people.
2] Assuming that spoken English can be converted into American standard sign language, this translating system could be built as a chip into a handy, portable hardware device.
3] With the help of this hardware device, which has a mic along with the robotic arm, a mute person can communicate with any hearing person anywhere.
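To make the receiver side of such a device concrete, the sketch below shows one possible way the PC could encode a recognized letter as servo angles for a five-servo hand assembly. The angle table, the '$'-framed ASCII packet format, and the function names are hypothetical assumptions for illustration; the actual firmware protocol is not specified in this paper:

```python
# Hypothetical letter-to-servo-angle table for a five-servo hand assembly.
# The angles and packet format below are illustrative assumptions only.
SIGN_ANGLES = {
    "a": (170, 10, 10, 10, 10),     # thumb out, four fingers closed
    "b": (10, 170, 170, 170, 170),  # thumb in, four fingers extended
    "c": (90, 120, 120, 120, 120),  # curved hand
}

def frame_command(letter):
    """Build one ASCII command packet, e.g. b'$170,010,010,010,010\n'."""
    angles = SIGN_ANGLES[letter.lower()]
    body = ",".join(f"{a:03d}" for a in angles)
    return ("$" + body + "\n").encode("ascii")

# On the PC, a packet like this would be written to the RS232 port that
# feeds the ARM7; here only the encoding step is shown.
packet = frame_command("A")
```

Fixed-width, delimiter-framed packets like this are easy to parse on a small microcontroller, since the firmware can resynchronize on the '$' byte after any serial glitch.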

VII. CONCLUSION

This project was meant as a prototype to check the feasibility of representing sign language using speech recognition. More words and sentences can be recognized by developing more sophisticated algorithms.

VIII. FUTURE SCOPE

Being a prototype, this design uses MATLAB, which is readily available in our institute; for practical applications, however, MATLAB is not readily available and is also costly. Hence a suitable DSP processor should be selected to carry out the speech recognition. In addition, motion could be detected to produce speech, enabling translation in the opposite direction.

REFERENCES
[1] Chuan Xie, Xiaoli Cao, Lingling He, "Algorithm of Abnormal Audio Recognition Based on Improved MFCC," Procedia Engineering 29 (2012) 731-737.
[2] Mohd Aliff, Shujiro Dohta, Tetsuya Akagi, Hui Li, "Development of a simple-structured pneumatic robot arm and its control using low-cost embedded controller," Procedia Engineering 41 (2012) 134-142.
[3] Jordan Fenlon, Adam Schembri, Ramas Rentelis, Kearsy Cormier, "Variation in hand shape and orientation in British Sign Language: The case of the '1' hand configuration," Language & Communication 33 (2013) 69-91.
[4] Dan Song, Xuan Wang, Xinxin Xu, "Chinese Sign Language Synthesis System on Mobile Device," 2012 International Workshop on Information and Electronics Engineering (IWIEE), Procedia Engineering 29 (2012) 986-992.
[5] Daphne Bavelier, David P. Corina, Helen J. Neville, "Brain and Language: a Perspective from Sign Language," Neuron, Vol. 21, 275-278, August 1998.