
ShowMeTheSign
Accessibility App

Dimitrios Papastogiannidis

Supervised by Elaine Farrow
Second reader: Helen Hastie

MSc in Software Engineering
Heriot-Watt University
August 2012

Declaration

I, Dimitrios Papastogiannidis, confirm that this work submitted for assessment is my own and is expressed in my own words. Any uses made within it of the works of other authors in any form (e.g., ideas, equations, figures, text, tables, programs) are properly acknowledged at the point of their use. A list of the references employed is included.

13/8/2012

Acknowledgments

I would like to thank my supervisor, Elaine Farrow, for the support, guidance and feedback she gave me, along with her vision for the app and her knowledge of languages. I would also like to thank my parents, my brother, my friends and my colleagues, especially Hyder, who helped and supported me until the end. I also want to thank Bill Nicol for his help in building the survey for my research.

Abstract

This research paper presents different speech recognition systems and how they are used to translate speech to text and then to sign language. My main concern was to understand how the signs of a sign language are connected to spoken and written languages, and to create a mobile application that performs this translation. The aim of this study is to review state-of-the-art applications that combine speech recognition with sign language interpretation, so that the application I develop can follow the standards established in this technology. I also discuss the methods used during development, define the requirements for the project, and describe the evaluation methods and development methodology. With organised planning and careful risk management I introduce how the design and implementation phases of the application were carried out. I discovered that implementing continuous and accurate speech-recognition-to-sign-language translation is a major research challenge, far beyond the scope of a Masters project, so I developed this application on a smaller scale. The implementation of the application and its usability testing and evaluation are presented: the app was given to users for testing and compared with another similar application. Finally, some ideas for future work on this app are outlined.

Keywords: British sign language, signing, mobile application, voice recognition, speech to text systems, speech to sign language systems, sign language animation.


Table of Contents

1. Introduction and Objectives
   1.1 Survey
       Discussion on the survey
   1.2 Android and What the Application will do
2. Literature Review
   2.1 A trip to Computer science and languages
   2.2 Implementation of Speech to text systems
       2.2.1 Implementing hidden Markov model
       2.2.2 Automatic Speech recognition
   2.3 Different Systems of Speech Recognition
       2.3.1 Embedded ASR systems
       2.3.2 Network Speech Recognition
       2.3.3 Distributed speech recognition systems
   2.4 Speech to text and Sign Language Systems
       2.4.1 Pyramid
       2.4.2 icommunicator
       2.4.3 Tessa
   2.5 The Architecture behind text to sign language
       2.5.1 Representation
       2.5.2 Description of the system
       2.5.3 Parsing Process
   2.6 Mobile Applications that use sign language
       2.6.1 Kuurojen Museo
       2.6.2 Mobile ASL
       2.6.3 Wisdom
   2.7 Text to sign translation systems
       2.7.1 The Atlas Project
       2.7.2 Dicta Sign
   2.8 Critical Analysis of literature review
3. Requirements analysis
   3.1 Functional Requirements
   Objects and Aims during the Development
   Non-Functional Requirements
   Use case diagrams
Professional, Legal and Ethical Issues
   Professional Issues
   Legal Issues
   Ethical Issues
Methodology
   Steps of methodology
   Advantages on using this methodology on this project
   Disadvantages on using this methodology on this project
Project Planning
   Gantt Chart
   Information about planning and Evaluation
   Evaluation
   Risk Management Table
Designing the application
   Designing the database
Implementation
Usability of the application and testing by users (Evaluation Part I)
   An Important Issue
   Usability interviews
Evaluation (Part II: Comparing Applications)
Future Work
Conclusion
References
Appendix A: Class Diagram
Appendix B: Questionnaires for usability and evaluation sections
Appendix C: Ethical approval and human consent

List of Figures

1. Image taken from: Studies on Market and Technologies [2]
2. Architecture of Android. Image taken from Android Developers [8]
3. A sequence diagram of this procedure
4. Image taken from The HTK Book [14]
5. Hidden Markov Model. Image taken from (S.J. Melnikoff, S.F. Quigley & M.J. Russell) [13]
6. Image taken from (S.J. Melnikoff, S.F. Quigley & M.J. Russell) [13]
7. Image taken from (Dmitry Zaykovskiy, 2006) [15]: recognition rates of state-of-the-art desktop ASR systems
8. Embedded ASR system. Figure taken from Dmitry Zaykovskiy [15]
9. ASR based on network transmission of data. Image taken from Dmitry Zaykovskiy [15]
10. Distributed ASR. Image taken from Dmitry Zaykovskiy [15]
11. Milestones in speech recognition technology. Image taken from B.H. Juang et al [17]
12. The post office communication system. Image taken from Cox et al [3]
13. A section from the recognition network. Image taken from Cox et al [3]
14. Static image showing the movement of the hand close to the mouth to indicate eating or food (ASL). Taken from: ASU ASL blog [22]
15. Avatar signing in space. Image taken from: Science Photo Library [23]
16. Stages of English text translation to sign language. Image taken from Eva Safar and Ian Marshall [4]
17. CMU parser. Image taken from Eva Safar and Ian Marshall [4]
18. The CR rule. Image taken from Semantics C [25]
19. The signer providing the information the visitor needs, with the artefact in the background. Image taken from [26]
20. (a) Using motion vectors in the video to (b) distinguish macro block level encoding. Image taken from: Mobile ASL [27]
21. Using skin detection algorithms to find important areas in the video. Image taken from Mobile ASL [27]
22. Sign language recognition system that Wisdom uses. Image taken from Deaf Studies Trust [30]
23. Wisdom architecture. Image taken from Deaf Studies Trust [30]
24. Use case representing the actors and their interaction with the system
25. Iterative Waterfall Model. Image taken from Evolutionary Model Development [34]
26. Gantt Chart
27. The display of the word "buss" compared with the word "apologise"
28. Class dependencies
29. Figure of the database
30. Text input screen
31. Main screen, no results
32. Signing of a word
33. List of words in the history screen
34. Option to delete a word

List of Tables

1. Survey Table
2. Risk Management
3. Usability results Table I
4. Usability results Table II
5. Evaluation results Table I
6. Evaluation results Table II
7. Evaluation results Table III

1. Introduction and Objectives

The 21st century has made technology a significant part of many people's lives. While some still argue about the consequences of technology for mankind and its misuse, others have benefited greatly from it. One group of people who may have been helped by the evolution of technology is people with disabilities. But that is not entirely true for deaf people. Often a person who is deaf does not see himself as a person with a disability, but as a person who speaks a different language. This is true, and the fact that some of these people can adapt quickly to communication between deaf and hearing people does not mean that everybody can. Learning a different language to help with communication can be difficult and usually has to be done at an early age.

Technology can help deaf people with their well-being and with a very significant part of their lives: communication. Communication plays one of the most important roles in allowing an individual to progress in life and coexist with other individuals. People with hearing loss very commonly experience various forms of communication breakdown, which is why technology can be helpful by all means. It is not always essential for deaf people to learn a sign language to communicate with other people and among themselves; some use hearing aids or cochlear implants, or rely on lip-reading. The problem is that even if these people learn sign language, communication between hearing and deaf people will still present difficulties in everyday life.

Interpreting speech into signs is a complex procedure because you need to consider the things that change between speech and signs. Rachel Sutton-Spence and Bencie Woll [1] point out that "It is only possible to have simultaneous production of signs because there is more than one articulator. In spoken language there is only one major articulator, the mouth. In BSL each hand can act as an independent major articulator", in two or more channels, each channel carrying meaning units. So we are talking about a language which has its own grammar and alphabet.

Now it is time for technology to step in. It is widely known that technology helps the learning process in many different ways. This research concerns the ability of technology to help and also teach people with disabilities: learning some words of British Sign Language (BSL) through a mobile phone application which is able to receive spoken words and translate them into basic BSL signs that convey meaning to the user. This application is addressed not only to people with disabilities but also to people who are trying to learn British Sign Language.

Individuals with hearing problems can use the application to understand others by interpreting their voices into signs; people interested in knowing BSL signs can use it to communicate with a deaf person at that moment. Of course, building an application like this is not an easy task. I have to consider the users' needs and the technical limitations that exist on the development side. The general aim of this application is to connect basic words from a speech-to-text engine to British Sign Language signs.

The idea of the whole project is to link sign language with an application on a portable device (smartphone). Today the number of people using smartphones is extremely large compared with the same figure a few years ago.

1. Image taken from: Studies on Market and Technologies [2].

But I am concentrating on deaf people. This application is also aimed at people who do not have a hearing problem but want to practise and learn BSL. There are many projects trying to capture the essence of translation from speech to sign language and some of them are well developed, but not for mobile phones. Some of the most important are projects that help with everyday transactions with public services, like the Tessa Project [3], or projects that can translate text to sign language in real time, like the system Safar, Eva and Marshall, Ian [4] implemented. The idea of a simple dictionary where someone can type a word and the sign pops up has also been implemented well, and there are many smartphone applications that give someone the opportunity to explore sign language using just their mobile: alphabet learning of American Sign Language is a well-known application (ASL American Sign Language [5]), and other applications act as tutorials to learn sign language, like First Steps [6]. This literature review will focus on two things: I will review some of the best-known speech-to-text systems and some text-to-sign-language systems developed so far. But before that I need to see, through a survey that I built, how future users feel about mobiles, speech recognition and sign languages.

1.1 Survey

This survey was carried out so I could determine the aims and goals of this project. The application that I am going to build is aimed at a small group of people, so I had to gather their opinions on questions regarding disabilities and the use of smartphones. The survey ran for around 20 days and 25 people took part in it. It was advertised by email to the students and staff of Heriot-Watt University; information about the participants' age, gender and country is not known, as this was an anonymous survey. The results were expected to some extent and I will discuss them in this section. The survey consisted of ten questions and was aimed at people experiencing some form of disability, and deaf people in particular.

Table 1: Survey results

Are you using a mobile phone or a smartphone?
  Yes 100% | No 0%

How often are you accessing the web from your mobile or smartphone?
  Daily 70.8% | Once a week 12.5% | Maybe two times a month 0% | Not at all 16.7%

What is your disability?
  Hearing loss 75% | Physical problems 50% | Vision impairment 50% | Learning disability 0% | Other 25%

Are you using any particular application that helps you with your disability?
  Yes 12.5% | No 87.5%

Are you using voice recognition software or any similar services in your mobile?
  Yes 4.2% | No 70.8% | Sometimes 25%

Do you think that an application can be built to help you with your disability? If no, can you specify why?
  Yes 56.5% | No 17.4% | It would be difficult 26.1%
  Free-text answers: "Not really sure. Whilst my physical disability is relatively ok to deal with and lets me use a phone normally, my hearing impairment is likely to never be adequately overcome with an app." / "As of now I don't have any disability. I use the spellchecker every day."

Have you ever used an application on your mobile for educational purposes?
  Yes 50% | No 37.5% | Sometimes 12.5%

What kind of application might help you cope with your disability?
  "I would like an app which could translate the spoken word to text so I can watch during meetings that I am understanding what is being said. I would also like to have such an app for private use to watch TV or cinema."

Discussion on the survey

We can see from the survey that 70.8 per cent of the people who use smartphones access the web daily. Given that my application requires an internet connection on the smartphone, this is encouraging. Only 12.5 per cent of the people completing this survey use an application that helps them with their disability; the remaining 87.5 per cent do not. The applications that the people who answered yes are using can be seen in the responses, but the percentage is quite disappointing. A reason for this may be that there is no appropriate application for the particular disability, or that the people the application targets do not find it useful. Only 4.2 per cent of the people use speech recognition services and 25 per cent use them occasionally. This shows that users who experience some kind of disability do not use voice recognition software often; in our case we need people to use speech recognition software. 56.5 per cent of the people think that software can be built to help them with their disability, which is also very encouraging. Some opinions state that no application could be built to help them, but the reason is not captured in this survey. Fifty per cent of the people use applications on their mobiles for educational reasons, and 12.5 per cent occasionally. My application can be used for educational reasons, especially if someone wants to explore British Sign Language words and learn them. The last question was answered by only one person, explaining what he needs from an application on his smartphone.

1.2 Android and What the Application will do

I will design this application to handle speech recognition by using Google's voice recognition API demos for Android. The source code that the Android Software Development Kit uses is open source and is used by many developers who want to create applications; they can easily upload them to the Android Market [7], a website that hosts a collection of Android applications made by different developers around the world. Android is an operating system for mobile devices and uses Java as its programming language. Developers have full access to the same framework APIs used by the core applications. "The application architecture is designed to simplify the reuse of components; any application can publish its capabilities and any other application may then make use of those capabilities (subject to security constraints enforced by the framework)" [8]. The image below shows the architecture of the operating system that is going to be used for the development of this application.

2. Architecture of Android. Image taken from Android Developers [8].

Now I am going to outline the basic components that the Android Software Development Kit provides for building my application. First of all, this application will use the Android 4.0 Platform SDK. Because this is the latest and most up-to-date SDK, it has some new features such as:

1. A better voice recognition engine: this is important for my application because I want to handle errors in speech both quickly and in an acceptable way.
2. Better accessibility APIs: since this application is aimed at people with disabilities, I will need something better and more accessible for our users.

Accessibility features that this application will use: this application is aimed at deaf people and at the interpretation of spoken words into sign language, so the only accessibility feature it will make use of is the speech recognizer, which can interpret spoken words as written text.

Let us briefly discuss the procedure by which my application will work, and afterwards I will review all of its basic features. The user, who can be either a person with a hearing disability or a person without any disability at all, activates the listening button in a friendly environment when someone else is speaking, pointing the mobile device at the speaker. When the interaction between the speaker and the mobile ends, the system will try to interpret the speaker's words as signs of the sign language. GIF images will present the result on the user's screen, together with the word written as text. Because errors may occur, the application may offer the user different text options to choose from for translation into signs (this is aimed mostly at people who can hear and are learning sign language for a purpose). Speech will be interpreted into Strings using Google's speech recognizer, which I will discuss later. There will be an SQLite database on the device (Android development uses SQLite) which will keep a record of each String and the corresponding place in the Android layout directory that contains the GIF image. When the speaker's input has been successfully matched with the String value in the database, the sign will be shown to the user. The sign is shown when the speaker stops speaking or the user presses the button which indicates the end of the input speech. Of course, because the interpretation of a spoken word into a String is not perfect, the voice input engine returns more than one result to the user*, and there may be enough errors to give the user a mistaken result. So in some cases the result may be split into two possible sign answers, and the user can choose which one is more appropriate based on the different text words that the system provides.

*The Android 4.0 platform will process the voice input that the speaker gives and afterwards connect to Google's servers to return the recognized Strings; if the input was not clear, the recognizer will return a list of choices close to the recognized word, and the user can then select the correct one.

3. A sequence diagram of this procedure
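To make the flow above concrete, the sketch below shows one way this could be wired up on Android: the standard RecognizerIntent is used to obtain candidate Strings, and a local SQLite lookup maps a recognised word to the file name of its sign GIF. This is a minimal sketch under stated assumptions, not the actual implementation; the table and column names (signs, word, gif_path) and the stub methods are hypothetical illustrations.

```java
import android.app.Activity;
import android.content.Intent;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

// Minimal sketch: launch Google's speech recognizer, then look the recognised
// word up in a local SQLite table mapping words to sign GIF files.
public class ListenActivity extends Activity {
    private static final int REQUEST_SPEECH = 1;

    // Opened elsewhere (e.g. via an SQLiteOpenHelper subclass); omitted for brevity.
    private SQLiteDatabase signDb;

    // Called when the user presses the listening button.
    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5); // keep alternatives for error handling
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            // The recognizer returns a ranked list of candidate Strings.
            ArrayList<String> candidates =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            for (String word : candidates) {
                String gifPath = lookUpSign(word.toLowerCase());
                if (gifPath != null) {
                    showSign(word, gifPath);   // display the GIF plus the written word
                    return;
                }
            }
            offerChoices(candidates);          // no match: let the user pick an alternative
        }
    }

    // Query a hypothetical table: signs(word TEXT PRIMARY KEY, gif_path TEXT).
    private String lookUpSign(String word) {
        Cursor c = signDb.query("signs", new String[]{"gif_path"},
                "word = ?", new String[]{word}, null, null, null);
        try {
            return c.moveToFirst() ? c.getString(0) : null;
        } finally {
            c.close();
        }
    }

    private void showSign(String word, String gifPath) { /* render GIF and text */ }
    private void offerChoices(ArrayList<String> candidates) { /* show alternatives list */ }
}
```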

2. Literature Review

2.1 A trip to Computer science and languages

Dr. Becky Sue Parton [10] is an Assistant Professor at the University of North Texas and has long experience in research areas such as educational software development for Deaf students, multimedia design and programming, virtual reality and signing avatars, mobile technology, and distance education. She states [9], "Technology is rapidly changing and improving the way the world operates. Barriers for people who are deaf are diminishing as projects of the past two decades have unfolded. Through the use of artificial intelligence, researchers are striving to develop hardware and software that will impact the way deaf individuals communicate and learn."

There has been a long technological investigation into the differences between sign languages and written or spoken languages. Computer scientists and experts on sign languages are trying to solve the problem of interpreting signs into text and vice versa. Some scientific projects have succeeded to the point where a computer can read text or speech and translate it into signs that are understandable by a signer. IBM is one of the companies that have developed a system considered very accurate at this procedure [11]. Also, with the help of motion detectors, signs can be translated into words. Large projects have been developed to connect spoken language with sign language, and a lot of work has been done by scientists in this area, such as Cox et al [3], who developed Tessa, a project that I will review later. There are many differences between these two kinds of languages, and computers require intelligent systems and faster algorithms to capture and interpret them.

Looking more closely at sign-to-text systems, one immediately understands the difficulties the system has to overcome to give the correct result. The peculiarities of the two languages mean the translation process must take a lot of details into consideration. Let's see some of them as Sáfár and Marshall described them [4], originally stated by Brien [12] and Sutton-Spence & Woll [1].

Sign order: the order in which the signs are expressed. This focuses on the direction of the verbs and what is signed first.

Signing space: defines the area of signing; the important point is what is signed in which space.

Directional or agreement verbs: the direction of the verb and the subject. The verb (signing) must begin at the position where the subject is and finish in the same position.

Now I can see that the number of details someone has to work through when creating a sign-to-text system is large. The same thing happens with speech-to-sign translation, because each verb and subject has to be analysed by an advanced speech analysis tool in order to be translated into the right sign. Consider that before technology evolved to faster computing machines, some very deep and time-consuming procedures for translating speech to signs, or the opposite, were carried out.

Now speech-to-sign-language projects deliver the best results in good time [4]. But in the mobile world some things are different: the limited computational resources of mobiles and smartphones do not allow that much processing power. It is difficult to run such large and accurate systems on a mobile whose memory and CPU are limited. That was the situation in the past; now, as mobile technology improves, new developers are trying to make the best use of a mobile system and actually build quite complex applications. There are a lot of application projects trying to connect spoken languages with sign languages through technology, and as speech-to-text technology evolves there will be even more. For mobile phones and smartphones there are so far no applications that can translate speech into any sign language, although there are many mobile applications that act as dictionaries and translate text words into signs. My aim is to develop a smartphone application that acts as a learning dictionary for deaf people, translating written words to signs but also spoken words to signs. I will not look at whole sentences and their translation into signs, so I will avoid looking at the grammar of the sentences, as this would become very complicated; but it will be helpful and interesting to review some of the most important processes that speech-to-sign-language developers use to create accurate and robust software that helps people with hearing loss. There are two main procedures in the development of such systems: first the translation of speech from spoken words to Strings, and then the translation of sentences of written Strings to signs.

2.2 Implementation of Speech to text systems

2.2.1 Implementing hidden Markov model

Let's look at one implementation of a speech-to-text system to understand the idea behind such systems. S.J. Melnikoff, S.F. Quigley & M.J. Russell (2001) [13] have implemented a speech-to-text model for the translation of spoken words into text. In their paper they explain that a decoder that can interpret a continuously spoken sentence as written words needs a lot of computational power. This led to a debate about whether an improvement of the existing algorithms is needed, or a new approach that will change the speech-to-text procedure for the best performance. The authors also mention that some existing systems work more accurately if the speaker speaks in a certain way (slowly, carefully) and there is no noise from the environment. To understand the translation process, let's see some of the theory as Melnikoff et al [13] described it in their paper. The writers agree that HMMs (Hidden Markov Models) are the most widely used technique for translating speech to text. HMMs were introduced by Markov and Shannon, but other scientists such as Baum, Petrie, Soules and Weiss helped with a more in-depth development of the theory [38]. To a computer, speech is a series of waveform signals, and with the help of Markov models one can isolate sequences and associate them with words.

4. Image taken from The HTK Book [14].

The actual problem, as the writers state it, is that we always have an observation sequence O = O(0), O(1), ..., O(T-1), where O(t) represents the speech data over small, fixed intervals of time. Given a number of models M, each of which represents a spoken utterance, our problem is to find the model M that best fits or best describes this sequence of spoken words [13]. Understanding the use of Hidden Markov Models in speech-to-text translation is of great importance for understanding the procedures the computer goes through when processing a number of spoken words. Looking more deeply into the Hidden Markov Model process, one can appreciate the computational power needed for that kind of processing. It is more appropriate to see the Hidden Markov Model process through a picture and discuss it.

5. Hidden Markov Model. Image taken from (S.J. Melnikoff, S.F. Quigley & M.J. Russell) [13].

At (1) we can see the observation sequence from a state sequence of length t. A state can emit only one observation at each time t. Routes are then built from the sets of sequences through the states (2), and (3) shows the finite state machine. Looking at the state machine, a node (j, t) inside a route asserts that observation O(t) was generated by state j.
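As a point of reference, the decoding problem and the recursion that most HMM recognisers use can be written in standard HMM notation (this formulation is textbook material rather than something taken verbatim from [13] or [14]):

```latex
% Model selection: pick the utterance model that best explains the observations
M^{*} = \arg\max_{M} P(O \mid M), \qquad O = O(0), O(1), \ldots, O(T-1)

% Viterbi recursion over the states j of a model, with transition probabilities a_{ij}
% and output densities b_j(\cdot); \psi_t(j) stores the best predecessor state
\delta_t(j) = \max_i \left[ \delta_{t-1}(i)\, a_{ij} \right] b_j\!\left(O(t)\right),
\qquad
\psi_t(j) = \arg\max_i \left[ \delta_{t-1}(i)\, a_{ij} \right]
```

These are exactly the δt(j) and Ψt(j) quantities computed by the hardware blocks described next.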

Now that we have seen the basic function of Hidden Markov Models, it is time to look at the structure of a speech-to-text system as Melnikoff et al [13] implemented it. It is easiest to describe the system by looking again at a figure of it.

6. Image taken from (S.J. Melnikoff, S.F. Quigley & M.J. Russell) [13].

The elements to be processed go into a Hidden Markov Model block (1); this block produces δt(j), which is the probability of the observation sequence at time t, and also computes the most likely predecessors Ψt(j). After this procedure the δt(j) values go to the Scaler, where the data is scaled so that the required accuracy can be achieved (3) [13]. In the current system, because its creators kept it simple, no language model is used, so the entry and exit probabilities are the same for all the HMMs. The probability of a transition is a value computed by a separate block (4), which then transfers this value to a Delta Delay block. The Delta Delay block (2) initialises the δt(j) values and routes them to the HMM block; it also checks the data streams and their correct synchronisation [13]. Of course, systems like the one above have to use a certain amount of hardware to perform these operations quickly and accurately. Application Specific Integrated Circuits (ASICs) are necessary in these kinds of applications [13]. There are a number of ASICs, but I am not going to discuss them here.

2.2.2 Automatic Speech recognition

I will now discuss further the architecture of an Automatic Speech Recognition (ASR) system and the different techniques such systems provide for mobiles with speech recognition technology. I am going to investigate the work of Dmitry Zaykovskiy (2006) [15], who made a survey on speech recognition techniques for mobile devices. First I will look at the architecture of ASR systems for mobile devices, then I am going to explore embedded speech recognition systems, network speech recognition and distributed speech recognition, all as the author describes them. Before I begin discussing details, let's see some results of desktop ASR systems and learn about the word error rate. Most of the new systems that perform speech-to-text translation report a measure called the word error rate (WER) [15]. WER is based on the edit distance between two word sequences (Strings): the minimum number of deletions, insertions or substitutions that one String needs in order to be changed into the other [36]. As stated by Iain McCowan et al [36]: "WER is the edit distance between a reference word sequence and its automatic transcription normalised by the length of the reference word sequence."
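To make that definition concrete, the following sketch computes WER exactly as defined above: a standard edit (Levenshtein) distance over words, normalised by the length of the reference sequence. This is a generic illustration of the metric, not code from any of the systems reviewed here.

```java
// Minimal word error rate computation: edit distance over words / reference length.
public final class WordErrorRate {

    public static double wer(String reference, String hypothesis) {
        String[] ref = reference.trim().split("\\s+");
        String[] hyp = hypothesis.trim().split("\\s+");

        // dp[i][j] = minimum number of substitutions, insertions and deletions
        // needed to turn the first i reference words into the first j hypothesis words.
        int[][] dp = new int[ref.length + 1][hyp.length + 1];
        for (int i = 0; i <= ref.length; i++) dp[i][0] = i;   // delete all reference words
        for (int j = 0; j <= hyp.length; j++) dp[0][j] = j;   // insert all hypothesis words

        for (int i = 1; i <= ref.length; i++) {
            for (int j = 1; j <= hyp.length; j++) {
                int substitution = dp[i - 1][j - 1] + (ref[i - 1].equalsIgnoreCase(hyp[j - 1]) ? 0 : 1);
                int deletion = dp[i - 1][j] + 1;
                int insertion = dp[i][j - 1] + 1;
                dp[i][j] = Math.min(substitution, Math.min(deletion, insertion));
            }
        }
        return (double) dp[ref.length][hyp.length] / ref.length;
    }

    public static void main(String[] args) {
        // "show" was misrecognised and "sign" was dropped: 2 errors / 4 reference words = 0.5
        System.out.println(wer("show me the sign", "snow me the"));
    }
}
```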

7. Image taken from (Dmitry Zaykovskiy, 2006) [15]: recognition rates of state-of-the-art desktop ASR systems.

We have already seen how Hidden Markov Models are used in speech-to-text technology; now let's see some of the basic operations used in speech recognition. As Zaykovskiy [15] explains, what an accurate ASR system has to do is find the most likely sequence of words W = (w1, w2, ...) belonging to a fixed vocabulary, given a set of acoustic observations O. Based on this, a speech recognizer has to perform the following operations in order to give the desired output [15]:

a) Extract acoustic observations from the spoken utterance.
b) Calculate P(W), the probability of an individual word sequence occurring, without looking at the acoustic observations.
c) Calculate P(O|W), the probability of a set of observations being produced by a certain sequence of words.
d) Find the word sequence that delivers the best result.

As Zaykovskiy [15] states, each word consists of different acoustic units, and each acoustic unit is modelled by a sequence of states S, each with a probability density function P associated with it. If we combine the state likelihoods and the state transitions, we obtain the final likelihood P, the probability of a set of observations being produced by a sequence of words, as we saw before. Of course, implementing a robust speech-to-text system for a mobile is a difficult undertaking. Let's see some of the problems that ASR systems face when being developed for mobile devices, as Zaykovskiy [15] sees them:

1. Limited available storage volume
2. Tiny caches of 8-32 KB and slow RAM memory, from 1 MB to 32 MB
3. Low processor clock frequency
4. No hardware floating-point arithmetic
5. No open access to the operating systems of mobile phones
6. Very challenging acoustic environments
7. High energy consumption during algorithm execution

Although all these statements were true about mobile phones at the time Zaykovskiy [15] investigated ASR on mobiles (2006), mobile technology has since evolved in great strides, and hardware capabilities have increased to the point where a robust speech-to-text system can be built for most of the mobile devices on the market today.

Nevertheless, considerations like those listed above were taken into account by mobile companies when they were building the next generation of smartphones.

2.3 Different Systems of Speech Recognition

Zaykovskiy [15] distinguishes ASR systems based on their two ends, the front end and the back end. The front end is where the speech features are collected, and the back end is where those features are processed by an algorithm based on their acoustic characteristics. He then presents three different structures of ASR systems: client-based, where both the front end and the back end are implemented on the terminal; server-based, where speech is sent to a server and the recognition happens there; and client-server, where the feature extraction is done on the terminal but the classification is done on the server. The application that I am going to build will use Google's speech recognition API, which mobiles with the Android operating system use, so the server-based procedure is the way the ASR will work. Google's API works as a server-based service for the translation of voice to text, but I will discuss this further in another section of the research paper. Now we are going to look at these three categories of ASR systems more analytically, and later I will focus on network speech recognition, which is the technology my application will use. I am still following the research of Zaykovskiy [15].

2.3.1 Embedded ASR systems

Starting with embedded ASR systems: in these systems the procedure of translating speech to text is done on the client system (meaning the mobile). No exchange of data with a server is required, but the system needs a large amount of memory and a fast execution speed. We can see in the next figure that the whole procedure takes place on the client.

8. Embedded ASR system. Figure taken from Dmitry Zaykovskiy [15].

2.3.2 Network Speech Recognition

Looking now at network speech recognition systems, we can see that a wireless connection has to be available to the client. As the figure below shows, the data is transmitted from the client over the network to the server, which decodes the speech, passes the ASR features to the appropriate language model and goes through the acoustic models trying to find the best available word (the ASR algorithm we have seen). The result is then returned to the user.

9. ASR based on network transmission of data. Image taken from Dmitry Zaykovskiy [15].

This model also allows a single server to be used by many clients, performing the translation tasks for more than one client at a time, as long as the server supports it.

2.3.3 Distributed speech recognition systems

In distributed speech recognition systems the actual recognition is done on the server rather than the client. This is similar to network speech recognition, but with some differences in the extraction of features and their compression. Before I continue discussing distributed ASR systems, let's see what we mean by feature extraction and compression. Feature extraction concerns the analysis of the speech signal; in general, feature extraction techniques involve temporal and spectral analysis. In temporal analysis the waveform of the speech itself is used for the analysis of the signal, while in spectral analysis a spectral representation of the signal is used [16]. Compression techniques are used to compress the voice data so that transferring it to the server does not take a lot of time.
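As a rough illustration of the temporal analysis mentioned above, the sketch below splits a speech waveform into short overlapping frames and computes one simple temporal feature, the log energy, per frame. This is a simplified toy front end under stated assumptions, not the feature extraction actually specified for distributed ASR; a real DSR front end would compute richer spectral features and compress them before transmission.

```java
// Toy front end: frame a waveform and compute per-frame log energy (a simple temporal feature).
public final class SimpleFrontEnd {

    /**
     * @param samples    speech signal, e.g. 16 kHz PCM scaled to [-1, 1]
     * @param frameSize  samples per frame (e.g. 400 = 25 ms at 16 kHz)
     * @param frameShift hop between frames (e.g. 160 = 10 ms at 16 kHz)
     * @return one log-energy value per frame
     */
    public static double[] logEnergy(double[] samples, int frameSize, int frameShift) {
        int frames = samples.length < frameSize
                ? 0
                : 1 + (samples.length - frameSize) / frameShift;
        double[] features = new double[frames];
        for (int f = 0; f < frames; f++) {
            double energy = 1e-10;                 // floor to avoid log(0)
            int start = f * frameShift;
            for (int i = 0; i < frameSize; i++) {
                double s = samples[start + i];
                energy += s * s;
            }
            features[f] = Math.log(energy);
        }
        return features;
    }
}
```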

10. Distributed ASR. Image taken from Dmitry Zaykovskiy [15].

All the smartphones and mobiles currently available fall into these three categories of ASR systems. It is known that mobile technology and its capabilities are evolving rapidly. During this evolution, speech recognition systems have become more robust and accurate thanks to new systems and cleverer algorithms. Let's see some milestones in speech recognition from the 1960s onwards.

11. Milestones in speech recognition technology. Image taken from B.H. Juang et al [17].

2.4 Speech to text and Sign Language Systems

2.4.1 Pyramid

Pyramid [18] (Personalized Interactions with Resources on AmI-enabled Mobile Dynamic Environments) is a large project funded by the Spanish government and the European Union. It is a huge project that started in 2008 with five SMEs, 3 large companies, 5 universities and 3 non-profit organizations (ABAMA, AECOC, AGOTEK, Research Industries Abocation Carnicas, Inixa, the Polytechnic University of Madrid and others). Its main aim is to evolve the mobiles we are using into cleverer devices, helping people use their mobiles for a lot of their everyday activities and to increase accessibility for deaf people.

The aim of the Pyramid project can be described as: "The Pyramid Project's mission is the transformation of our mobile devices in a 6th sense that we assist and mediate for us to facilitate and improving our daily interactions with everyday objects that surround us in our workplaces, homes or public facilities (hospitals, municipalities, etc.)" [18].

In recent years, mobiles and smartphones have gained direct access to the internet, and people can use them not only to obtain information but also to use critical resources. Accessibility is a major advantage that mobile technology allows us to exploit, and there are big projects that help and inform people with disabilities. The Global Accessibility Reporting Initiative (GARI) [19] is one of them; its creators' goal is to provide information about the accessibility features of mobiles. Since I am trying to develop an application which will increase the use of smartphones and help the deaf community, it is worth following some of the ideas that can transform a mobile, through various applications, into a very usable device for special purposes. "Almost one in five of the world's population lives with some kind of recognized disability. Sooner or later, everyone will develop at least some limitations in vision, hearing, dexterity or learning. To improve usability for those of us with sensory or physical limitations, phones have features for accessibility, which are continually improving and becoming more prevalent as technologies advance" [19].

2.4.2 icommunicator

icommunicator [20] is software for the translation of speech into sign language. It was developed by Interactive Solutions, Inc. (ISI), which in 1999 was approached by a 16-year-old boy who was deaf and wanted software to help him communicate with hearing people [20]. icommunicator can be used for education, daily communication and even emergency situations, as its developers suggest. It can translate speech to text, and speech or text to video clips of American Sign Language and to several other formats. The difference between this and the application that I am developing is that this software can translate long spoken sentences into sign language videos, while my software will translate single words. The sign language video is displayed on the screen from a database of 9000 available video clips that correspond to recognized words. When needed, it can use a dictionary to search for definitions or even connect to the internet to bring up useful information. It uses one of the top speech recognition packages on the market right now, Dragon NaturallySpeaking, which is among the most accurate at recognizing human speech [21].

2.4.3 Tessa

Cox et al [3] have developed a very interesting speech-to-sign-language system that can be used by deaf people in daily transactions at a post office. The system simulates a post office clerk who can communicate directly with a deaf person. The transactions the system supports consist of basic phrases used in everyday post office business; it is worth reviewing this system so we can understand the basics behind speech-to-text and text-to-sign-language systems. Because the system deals with only a small set of phrases, these phrases were pre-stored in British Sign Language using a 3D avatar to display them on the screen. But let's see a picture of the structure of the system so we can understand its components and its processes.

12. The post office communication system. Image taken from Cox et al [3].

As we can see from the image above, the post office clerk speaks; the speech recognizer translates the speech into written words and displays them on a screen that the clerk can look at; and the phrase lookup tool takes the words and moves on to assemble the sign sequence, which is then shown to the customer. The speech recognition used was Entropic HAPI, which uses Hidden Markov Models to find the sequence of words [3]. A very important element is the phrase network that the creators of the system implemented: a network of specified words and sentences that forces the recognizer to move along a defined path through these words. As the number of transactions is limited, the sentences and the communication can be limited to the words necessary for a transaction between a post office clerk and a customer [3]. To understand this better, let's see a section of the recognition network.

13. A section from the recognition network. Image taken from Cox et al [3].
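The idea of a recognition network can be illustrated with a toy data structure: each word points to the small set of words that may legally follow it, so the recognizer is only ever allowed to move along these paths. This is purely a sketch of the concept; the actual network in Tessa is defined in Entropic HAPI's own grammar format and is far richer, and the example words below are invented rather than taken from Tessa's phrase set.

```java
import java.util.*;

// Toy phrase network: a map from a word to the words allowed to follow it.
// A hypothesis is accepted only if every transition exists in the network.
public final class PhraseNetwork {

    private final Map<String, Set<String>> next = new HashMap<>();

    public PhraseNetwork addTransition(String from, String to) {
        next.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        return this;
    }

    /** Returns true if the word sequence follows a path that exists in the network. */
    public boolean accepts(List<String> words) {
        for (int i = 0; i + 1 < words.size(); i++) {
            Set<String> allowed = next.get(words.get(i));
            if (allowed == null || !allowed.contains(words.get(i + 1))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // A tiny slice of a post-office-style grammar (hypothetical phrases).
        PhraseNetwork grammar = new PhraseNetwork()
                .addTransition("<start>", "first")
                .addTransition("<start>", "second")
                .addTransition("first", "class")
                .addTransition("second", "class")
                .addTransition("class", "stamp");

        System.out.println(grammar.accepts(Arrays.asList("<start>", "first", "class", "stamp")));  // true
        System.out.println(grammar.accepts(Arrays.asList("<start>", "stamp", "first")));           // false
    }
}
```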

In order to present the signs to the customers, a 3D avatar named Tessa was used, which can sign the result of the speech recognition process from the clerk. Cybergloves that record the movements of the fingers and thumb were used to capture the signs so that Tessa could learn to sign. Polhemus magnetic sensors were also used for the movements of the wrist, upper arms and torso. Other sensors were placed on the head or at places of interest such as the mouth or eyebrows to achieve more accurate signing [3].

Various problems can occur when building speech recognition for sign language software, even if it aims to cover only a small part of a communication (post office transactions). The recording of signs using sensors and cybergloves is done only once per person and the avatar learns it; this procedure has to be done very carefully, and each sign must be captured to a very accurate degree. Sign language requires the movement of more than one part of the body: movement of hands, fingers and mouth, and, very importantly, the space where each signer starts and finishes signing. This has to be taken into consideration when an avatar is used to display the signs on a screen. Another important issue is the translation of natural spoken language into British Sign Language [3]. In tests made on the post office transaction system, most of the users said it would be more helpful for the system to support both text and sign language output, and even better if there were output in Sign Supported English (SSE), which is another way of signing that is much closer to the sentence structure and word order of English grammar. Let's see an example of two sentences as Cox et al [3] describe them; they also state that SSE is like "a system for encoding English":

The sentence "The man is standing on the bridge" in SSE is equivalent to: MAN + STAND + ON-BRIDGE
The sentence "The cat jumps on the ball" in SSE is: CAT + JUMP + ONTO + BALL

I have discussed some of the problems that a system like this can have, but let's now see the importance of accurate identification of phrases and their translation into signs. The goal is for the system to be used by its users (deaf people), and for this to happen it must deliver acceptable results. The system that Cox et al [3] built was assessed in two very important ways: 1. the accuracy of identification that the system achieved for whole phrases, and 2. the important role of semantic units within a phrase. To understand the second kind of assessment, I will focus on what the scientists stated: for example, the phrase "It should arrive by Tuesday but it's not guaranteed" requires five semantic units, so "should arrive Tuesday not guaranteed" will have an identification success of 100%, but the phrase "should arrive Tuesday" will have a success of 66% [3]. To conclude with the system that Cox et al [3] built, I will look at the measures they used to assess their results: the intelligibility of the system, which measures how clear and understandable the signs were to the user; the ease of identification, which shows how easily the signs and the speech inputs were identified; and the identification success, based on identification errors and how clearly or unclearly the signs were delivered to the user.

2.5 The Architecture behind text to sign language

2.5.1 Representation

Now that we have seen one system that uses speech-to-sign-language translation, I need to look in more detail at the process that text goes through in order to be translated into sign language images or the signing of an avatar. I will investigate the architecture behind a text-to-sign-language system as Eva Safar and Ian Marshall describe it in their paper "The Architecture of an English-Text-to-Sign-Languages Translation System" [4]. To begin with, the writers state that there are two main steps in text-to-sign-language translation. First, someone looking to translate text to signs needs to turn the English text into a "semantic-based representation" [4]; after this procedure he should be ready to translate this representation into a form of graphical representation. This graphical representation can be a video, an image, or, even better for the deaf user, a virtual avatar. Examples of these outputs can be found below.

14. Static image showing the movement of the hand close to the mouth to indicate eating or food (ASL). Taken from: ASU ASL blog [22].

15. Avatar signing in space. This is a moving avatar and the most preferred solution for presenting signs in a system that converts text to sign language. The movements of the head and the body are clearer than in static pictures. It also does not use as much bandwidth as a video, because the format is different and the size is smaller than a video of a human being signing. Image taken from: Science Photo Library [23].

2.5.2 Description of the system

Now I am ready to go through the phases of the processing architecture of text-to-sign-language systems and review the work of Eva Safar and Ian Marshall. I will look at the processing architecture and then at the syntactic parsing of words and the translation to a semantic representation. Before I start looking at text processing, it is useful to see the stages of an English-text-to-sign-language translation model [4].

16. Stages of English text translation to sign language. Image taken from Eva Safar and Ian Marshall [4].

I am now going to discuss the model as its creators describe it. In the first stage of the translation model the user has an active role, as in some of the following stages. This is because the automatic translation techniques may be insufficient or may fail at some point; the user can change the text to avoid phrases that are not supported by the system [4]. Next, in the syntactic stage, the CMU (Carnegie Mellon University) link grammar parser performs the parsing of the text. The CMU process is described by Davy Temperley et al [23] as: "Given a sentence, the system assigns to it a syntactic structure, which consists of a set of labelled links connecting pairs of words." In this procedure the user can correct the output of the CMU parser and, moreover, select between the different outputs that the software delivers. In the next stage the grammar parse becomes a semantic representation, a Discourse Representation Structure (DRS) [4]. Here too the user can intervene and change the alignment of words himself, or use a tool appropriate for this process. We are now at the stage where signs can be produced from the semantic representation of the previous steps [4]. With the help of a framework called Head Driven Phrase Structure Grammar (HPSG), signs can be generated by varying the movements of the hand shapes or other parts of the body that contribute to signing, choosing the appropriate sign grammar for the semantic representation. After that we are ready to link the linguistic analysis with software that generates the physical gestures that make up the signs [4]; the Signing Gesture Markup Language (SiGML) is used for that.

2.5.3 Parsing Process

Now let's look in more detail at the process of parsing the English text. Parsing can be described as a way of connecting a word within a sentence, or, put another way, the correct incorporation of the word into the sentence and the meaning the sentence has when it is linked with this word. A dictionary is used for the words to be parsed: the CMU parser dictionary links words in an appropriate way and handles the meaning of nouns and verbs inside a sentence. There is the possibility that the parser will produce more than one output; in this case the user has to select which output is the required one [4].

This is closely related to the error handling that I am going to face in the implementation of the system for a mobile device. Let's see an image of the CMU parser that I discussed above:

17. CMU parser. Image taken from Eva Safar and Ian Marshall [4].

We can see the sentence in the script area and the lexical units associated with the sentence. The CMU parser then produces a set of links for the sentence to show the different word linkings to the user. A very important aspect of the translation process is the semantic representation of the sentences and words. The different sentence structures of English and British Sign Language require a very accurate procedure so that the meaning stays the same and is not lost in translation between the two languages. In the project we are currently reviewing, the step from English to sign language is based on Discourse Representation Theory (DRT). A Discourse Representation Structure (DRS) is built from two parts: firstly a list of variables that states the nominal discourse referents, and secondly a collection of conditions that preserve the meaning and the semantics of the discourse [4]. Let's see a small example of Discourse Representation Theory from Kamp & Reyle 1993 [24] so we can understand how this theory works in practice.

18. The CR rule. Image taken from Semantics C [25].

Here we can see the sentence in a top-down representation, where S denotes the sentence "John runs", NP denotes the noun phrase and VP the verb phrase. In this example the verb phrase is "runs".

Now with the construction rule (CR) for Proper Names (PN) we get D, which gives the variable x to John and uses the same variable to declare that he is male. So the sentence becomes: "A male whose name is John runs" [25]. As for nouns, the approach taken has to be very strict and careful. Nouns are treated very differently in British Sign Language, because pronouns are translated by pointing to the location associated with the noun. In BSL the position of the noun in the signing space plays an important role for the phrase, so sentences that include nouns have to be handled carefully [4]. By now I understand that text-to-sign systems are somewhat analogous to text-to-speech systems. The basic principle of text-to-speech systems is that text detection is performed along with normalisation and linguistic analysis so the text can become sounding speech [48], while in text-to-sign the parsing process that translates the given sentence into a sign language sentence recognises the basic elements of the sentence that must become part of the sign language sentence. So in both cases we come across a form of interpretation of text.

2.6 Mobile Applications that use sign language

Since I am building a mobile application that uses sign language, it is interesting to see what other mobile applications have been developed that use sign language.

2.6.1 Kuurojen Museo

This is a very innovative application that can guide deaf people inside a museum in Finland [26]. The application is built for mobiles, for visitors to carry with them during their visit to the museum. Most museums offer a tour and an explanation from an expert about the things on display; for deaf people this is a difficult situation, but with this application it can really become something great. At the entrance of the museum, a visitor can take a mobile and select the room they are currently in, or the room for which they want descriptions and further details about the artefacts on display. When the visitor chooses a room (in a map view), the application informs them about the artefacts that exist in that room, and when they press on one, a description in sign language begins. This application uses pre-recorded videos of a real person to show the signs to the user. One benefit of this is that there is no need for the signs to be signed by an avatar, so it works as a database: when we select a place inside the museum, the corresponding video comes up. This is something that I will use in my application too: databases holding the information we need to provide to the user, in this case video explanations about the artefacts. A disadvantage of using pre-recorded video is that its size is bigger and there is no way to modify the content; e.g., to add more information about an artefact, you would need to record the video from the beginning [26].

19. The signer provides the information that the visitor needs, with the artefact visible in the background. Image taken from [26].

2.6.2 Mobile ASL

Mobile ASL [27] is a very innovative project that enables direct communication between two people who use sign language. This app does not try to understand the signs or translate them into another language (e.g., English); it just tries to optimise the video encoding to minimise information loss. The Mobile ASL [27] developers created this application for mobile phones and had to face the problem of the bandwidth available for video transmission over wireless networks when video calling. Video transmission over today's wireless networks is not reliable enough for communicating in sign language: because of the complexity of signs during the communication and the compression of the video, the output on the screen was not clear enough. So the developers used video encoders compatible with the H.264 compression standard [28], using the open source codec x264. They focused on the visual processes that take place in sign language conversation and successfully optimised the encoding of the video in real time. This led to quicker and more successful communication that can take place any time, anywhere, using just the mobile phone. The quick and effective video compression in the particular signing areas of a person communicating through a phone helped them deliver a very useful product for the deaf community.

20. (a) Using motion vectors in the video to (b) distinguish macro block level encoding. Image taken from: Mobile ASL [27].

21. Using skin detection algorithms to find important areas in the video. Image taken from Mobile ASL [27].

2.6.3 Wisdom

One of the most interesting applications in the world of mobiles and sign languages is Wisdom (Wireless Information Services for Deaf people On the Move) [29]. Many companies from the UK, Germany, Sweden and Spain, including major companies like Vodafone and Ericsson, took part in building this application. Wisdom takes advantage of wireless networks and provides a set of very useful services to users of sign language. It can provide services such as [29]:

1. Real-time conversation between two signers.
2. A video relay service that helps with communication between hearing and deaf persons.
3. Interpreting services from a distance.
4. Interworking with text services for emergency calls.
5. Online information in sign language.
6. Automatic sign language recognition.

These six services that Wisdom offers help the deaf community towards a better quality of life and make their communication with the hearing community more equal in terms of access to information. Looking at them separately: real-time conversation between signers is something necessary, and good quality in the transmission is also necessary so that sign language communication can be carried out successfully [29]. The video relay service is truly remarkable: real-time communication between hearing and deaf persons is made possible by it. The signer signs to the camera and the video relay service passes the input to an interpreter at a remote location; the interpreter's job is to connect with the deaf user and translate the signs to voice and vice versa. The hearing person can then communicate with the deaf person in real time [29]. Distance sign language interpreting is the procedure I described above: real-time communication between two signers, or between a hearing and a deaf person, is now a reality from anywhere at any time, using the advantages of wireless communication and video relay services [29]. The interworking of the network and text telephones is something that can help deaf people communicate when an emergency has occurred; text telephony and wireless IP conversation have been connected through a small gateway to make this happen [29].

Very interesting automatic sign language recognition was also developed for Wisdom. The following images illustrate how it works.

22. Sign language recognition system that Wisdom uses. Image taken from Deaf Studies Trust [30].

The parts of the system that the information passes through are shown below:

23. Wisdom architecture. Image taken from Deaf Studies Trust [30].

2.7 Text to sign translation systems

2.7.1 The Atlas Project

ATLAS (Automatic Translation into Sign Language) [31] is a large project that started in Italy in 2007. Its aim is to translate many of the everyday interactions that deaf people have with hearing people, such as real-time conversations and the translation of TV programmes or DVD movies. Another of the project's aims is to integrate deaf people into society by giving them the possibility to enjoy television, multimedia and entertainment. The whole project [31] is designed for Italian Sign Language, and its objective is a system that can translate Italian words and sentences into a sequence of Italian Sign Language signs. The linguistics of the language had to be studied, including its structure and semantics. The main services that ATLAS hopes to deliver to the user are [31]:

a) Translation of television programmes from Italian into LIS (Lingua dei Segni Italiana), performed simultaneously with the broadcast.
b) Translation into Italian Sign Language of a variety of multimedia content from DVD or the web.
c) Creation of services for easy access to public services.
d) Display of information for deaf individuals on mobile terminals.

Considering how big this project can become, the ATLAS creators also state that it can help many deaf people find work, given that a lot of translators will be needed. The project also aims at immediate translation, not only of DVD movies but of some television channels: the remote control will have an extra button, and when the user presses it the signer will appear and start translating in real time.

2.7.2 Dicta Sign

Dicta Sign [32] is a very interesting project on sign language translation and communication between deaf people using Web 2.0 technologies. The creators of this project looked beyond communication; they researched the way information on the web is distributed. The web is full of sources and addresses through which people interact, share information and contribute to each other's ideas and knowledge. For a deaf person, the written text these sources rely on may sometimes be unfriendly. The deaf community can use videos of signing to provide information on the web, but then the anonymity of the user becomes a problem. That is why the Dicta Sign designers came up with a procedure that records the user's signing and represents it with an avatar. This allows the user to act anonymously and to share or contribute information on the web. A sign-language-to-sign-language translator will also help web users who use different sign languages.

The great innovation of that system [32] is the creation of a sign language wiki. This wiki works exactly like the current Wikipedia, but with signs. Information on sites such as Wikipedia can be edited by users who wish to add their knowledge about a subject, but for the deaf community this cannot be done unless a video showing the signer providing the information is uploaded to the web. With the technology mentioned before, in which a person's signs are recorded from a webcam and displayed by an avatar, a wiki for deaf people can become reality, where everyone can contribute anonymously.

2.8 Critical Analysis of literature review

In this literature review I have reviewed systems that connect speech recognition software with sign language interpretation software, but not only those. I have seen enough applications that use sign language, and the way these applications work, to gain a good understanding of applications aimed at this sensitive group of people and of how they are used. I have also seen projects on a much larger scale than the one I am going to develop. A very important point that emerges from this review is that every system being developed aims at better communication between hearing people and deaf people; this will be my first concern when I start developing the application. Most of the systems perform real-time translation between spoken language and sign language, which is the ultimate aim of an application like this; however, this project is not on such a large scale, so single words will be translated into sign language images. From this review I understand that there is no available software for the Android operating system that makes smartphones act as a dictionary between spoken words and sign language images, apart from one application for real-time translation that is still under development by the University of Aberdeen and the Technabling company [33], which I cannot review yet. It will therefore be very useful to provide such an application to users who want to learn or practise British Sign Language. One of the most important issues in this kind of technology is the correct interpretation of speech into sign language. Even if the speech recogniser returns the required results, real-time translation requires large computational power. Multithreading would help, but with the CPUs that smartphones have nowadays this is challenging too. One issue common to all smartphones is battery use: the idea of a portable device is that the battery should last through the day and cover all our needs. The applications I reviewed (speech to sign language) demand a lot of computational power, and that causes a fast decrease in the phone's battery charge.

3 Requirements analysis

3.1 Functional Requirements

Note: For clarity, the requirements are highlighted in bold.

The application that I am developing is aimed at deaf people, and the fact that there is no similar application available for the Android operating system says a lot about the difficulty I may face during development. I am designing an application that will be used as a dictionary to help translate speech into text and into British Sign Language signs, and considering the people who are going to be its future users, I need to deliver a special design. The users of this application can be of two types: a) a deaf person who uses sign language as a means of communication, and b) a hearing person who is currently learning sign language and wants a quick way of translating English words into signs. The analysis has to address both kinds of user, but I will give more weight to the fact that the main users will be deaf people. I will discuss this further when I present a use case diagram that describes the use of the software and its dependencies. The content of the application will be a database of British Sign Language GIF files, each of which corresponds to an English word, providing the user with more than just basic words to communicate with or learn. Speech recognition will be implemented by the programmer using the Android Software Development Kit, and one of the most critical parts of that stage will be the error handling of the speech. The general idea is that the sign delivered on the screen should be a 100% match for the spoken word that the user gives as input to the phone. Because this application acts as a dictionary rather than a continuous speaking-and-translating process, the animated image files should be accurate in their meaning so that the user can understand immediately and move on to the next word. One very important requirement is that the output on the screen should include the spoken word as an English text string below the BSL sign, so the user can also act as a tester in case the correct sign does not appear on the screen. Very careful work is needed on speech recognition error handling, because the speaker may use a word that the system does not understand correctly, or the recogniser may return a different, wrong word. This can lead to a very bad situation, because the BSL sign displayed on the screen will then also be wrong. It can be avoided by a) very careful error handling of the speech and b) collaboration between the speaking user and the deaf user. The first case is the programmer's job; for the second case, the users involved must be able to interact with each other if the result is unexpected. This has a low probability of happening, but if the deaf user is not sure about the translation of the word, they can show the written word (String) to the speaking person to validate its correctness; again, this is a situation the programmer must predict and act upon. One of the most important requirements of this application is that the user needs access to the web to use the app. The voice recognition that Android uses (the Google API) is a distributed network speech

recognition process: the voice is transferred over the network to Google's servers, where the translation to text happens, and the text is returned to the user's IP address, so a wireless connection on the smartphone is absolutely necessary.

3.2 Objectives and Aims during the Development

1) Deliver an easy-to-handle application with a friendly environment that can help someone learn sign language, and that is also helpful at a moment when an immediate translation is needed.
2) Maintain regular communication with a deaf person in order to receive feedback about the application.
3) Keeping in mind that the application is aimed at people who may not have a good knowledge of English, complex procedures that involve the user in complicated actions on their mobile should be avoided.
4) Make the most of the usability features that the Android operating system provides.

This application will be developed in the Eclipse IDE, which supports the Android SDK.

3.3 Non-Functional Requirements

One of the most important things to consider is the environment of the application. The user may not have a good knowledge of English, so I need to offer a simple but very usable environment. I am focusing on the display (screen) environment of the application, where the interaction with the user will take place.

1) Speed of the database: one of the most important parts of this application. The database that holds the GIF files will start small, at around 100 images, but it is very likely to be extended during development. Android uses SQLite, and the speed at which the database is accessed and traversed to return an image must be taken into serious consideration.
2) Speed of the String result: the speed at which the String result is returned from Google's servers has to be taken into consideration too.
3) Size: the application should not take more than 10 MB. In general, the application package (apk) files that Android uses for installation do not take much space on the phone's storage (from 5 MB to 10 MB, depending on the application), but because of the large database of images more space needs to be available. If the database is stored on the smartphone itself, the app will be limited to handling a small number of signs.
4) Ease of use / Usability: one of the primary aims. The application is going to be used by deaf people, but there is the possibility that some users will not understand complicated written English, so a lot of effort will go into presenting a usable, easy-to-operate application.

5) Portability: no special portability; only smartphones running the Android operating system can run this application. Tablets that use Android can run it too.
6) Accessibility: a very important aspect. The application will provide assistance with education and communication for deaf people. It addresses the communication difficulties that a deaf person can have during a conversation, or in everyday life with a hearing person, by translating speech into BSL signs. It also supports education, for example for a hearing person who wants to learn BSL in order to communicate with a deaf person.

3.4 Use case diagrams

24. Use case representing the actors and their interaction with the system.

Main success scenario:
1. The speaker gives the word in an appropriate way.
2. The system returns the correct word.
3. The correct sign image is displayed on the screen.

Alternatives:
1a. The speaker does not give the word in an appropriate way, talks too fast, or there is noise in the environment.
2a. The system does not return the correct word.
2b. The system returns a variety of possible choices for the user.
2c. The word is correctly recognised but is not in the database of signs.

Regarding 2c: the system will still be useful if it shows the English word as a String on the screen even though it does not show the sign. Originally the application was to be designed for Android version 4, but because few devices support this new version of Android, and most of them support 2.3.3, I decided to develop the application for version 2.3.3.

4. Professional, Legal and Ethical Issues

4.1 Professional Issues

As the developer of this application I will try my best to reach a point where the application can be used by people and fulfil their needs. Moreover, appropriate citations and credit have been given to the work of other developers in this area.

4.2 Legal Issues

No legal issues are raised by the development of this application. The voice data that is sent to Google's servers will not be saved on any servers, and my supervisor and I obtained permission from [39] for the animated GIFs that I am going to use.

4.3 Ethical Issues

Ethical issues may arise for this project because the application will be used by people with disabilities. The application may be used by deaf people at Heriot-Watt University but also outside the University. A survey was also carried out to determine some requirements, with people volunteering to complete it anonymously. The fact that this application will be used by humans, and that it is going to be evaluated by humans, requires ethical approval along with a consent form, which can be found at the end of this dissertation in Appendix C. The deaf people who help the developer to build the app (by filling in questionnaires or doing testing) need to get some benefit from this work, so a free copy of the app will be given to them.

5. Methodology

There are many methodologies that can be followed in the development of a project. Every project that is developed needs to adopt a methodology and follow its steps, so that the developer knows what needs to be done next and how the project procedures will be carried out. For this particular project, and given that it is a personal project, the methodology that will be used is the Iterative Waterfall Model [37] with prototyping. By prototyping I mean the creation of a prototype of the basic functions of my application before I move on to develop the final, fully functioning version. This is done so that I can get feedback from the prototype; if the feedback is positive, I can continue with the implementation of the full system. The waterfall model consists of phases that the developer must go through, and the iterative nature of the model allows the developer to go back to an earlier phase or step.

5.2 Steps of the methodology

1. Requirements analysis
In this first step the requirements for the software to be delivered are defined. There are several ways for the developer to find out the requirements; the user is the most important person in this phase, because they are the one who defines what is to be built. Through interviews, questionnaires and surveys, the requirements have to be made clear from the beginning so that the design can start.

2. Systems Planning
In this phase the planning of how the system will be developed is carried out: defining dates and using Gantt charts to show the future work and how it is going to be done.

3. System Design
In this phase the design of the system is done. What will the parts of the system be, how will these parts interact with each other, how will the user operate the system? These and other questions, depending on the system to be developed, must be taken into consideration. The design is the phase where the requirements are taken into account: the system has to be designed following the analysis of the requirements.

4. Implementation of the system
This is the phase where implementation begins. By implementation I mean the coding of the application, based on the requirements of the user and, most importantly, on the design that has been produced. The implementation phase needs to be carried out as planned during systems planning.

5. Testing of the system

Here the developer must check whether the product meets the requirements and has been implemented within the specifications of the design, and also whether the product actually works and delivers the expected results.

6. Evaluation
In the evaluation phase the software is checked against the specification. Users can also evaluate the software before its release.

25. Iterative Waterfall Model. Image taken from Evolutionary Model Development [34].

5.3 Advantages of using this methodology on this project

An advantage of using the waterfall model in this project is that I will follow the phases top-down: I need the previous phase to be completed before I can move to the next one. Another advantage is that I can go back to a phase if I think I need to change something or reconsider some of the specifications. Likewise, if I find a minor difficulty or have forgotten something that needed to be done, the iterative nature of the model allows me to return and correct my mistake.

5.4 Disadvantages of using this methodology on this project

There are usually disadvantages when someone uses the iterative waterfall model, and one of the main ones is that it can be difficult in practice to go back to an earlier phase to change something: even though the model allows it, a lot of completed work may have to be changed. However, given that my application is a small application and not a huge project, I will not face this kind of difficulty.

6. Project Planning

6.1 Gantt Chart

26. Gantt Chart

6.2 Information about planning and evaluation

This project will take 78 days to complete, with no work on Sundays. The work will be done from Monday to Saturday, from 8:00 to 12:00 and from 15:00 to 19:00. The documentation will be carried out at the end of each phase of the development process. The application will be tested and evaluated by the developer in the first instance, and then by people who might use it when it is complete; these people may be deaf people or colleagues. Feedback will be given to the developer by his supervisor on the code structure during the development stage, and by the people who evaluate and test the application. The difference between the prototype and the final application is that the prototype will be developed for one BSL (British Sign Language) image, while the final application will be developed for more than one.

6.3 Evaluation

Evaluation process: this application will be evaluated by a group of deaf people, or people who are interested in learning British Sign Language. Questionnaires will be created at the end of this project so that the people who

test and evaluate the application can write down their opinion and say whether they found the app useful or not. Succeeding in implementing all the requirements (see the Requirements Analysis section of this report) will lead to a successful and positive evaluation. An ethical approval form will be signed by these people before they evaluate the software. The evaluation process will take 5 days, as stated in the Project Planning, and once the developer has gathered the feedback he will proceed with changes for the final version of the application. The fact that there is no other Android application similar to the one I am developing makes the evaluation-by-comparison process difficult, but there are some text to British Sign Language translators available in the Android Market [7] that will help the developer to compare his application against them.

7. Risk Management Table

Risk: Google voice recognition integration. Due to inexperience in working with Google's voice recognition system, and the lack of available documentation for it, problems may be faced.
Approach: If big problems are faced, I will consider using a different speech recognition system.

Risk: Sign language testing of the output takes more time than expected. The developer of the application does not know British Sign Language and will rely on some online dictionaries [35].
Approach: More time will be set aside for exhaustive testing of the output on the mobile, to confirm that the displayed image is the correct one.

Risk: The images that are going to be loaded onto the mobile phone may lower the performance of the device and the application. Because of the limited resources of a mobile, even with modern technology, this is a big risk.
Approach: Possibly change the way the images are obtained, e.g. connect to the web page and show the images from that source if it gives better performance.

Risk: Inexperience with managing files and folders and integrating them with the database.
Approach: This might lead to some latency in the implementation phase; some of the buffer time may be spent here.

Risk: Documentation takes more time than expected.
Approach: The documentation stage will be carried out at the end of each phase of the project. Documentation sometimes becomes more time-consuming than planned, so I will manage the documentation time better or use some of the buffer time to complete it.

8. Designing the application

8.1 Designing the database

The application I am developing needs to get its information from a database. The Android operating system supports SQLite, as I have discussed previously. The database was built by the developer as the first stage of the application. The developer and his supervisor obtained permission from the website [39] to use the displayed signs (GIF images) in the application being built. What needed to be done was to create a database that provides the user with a set of very useful words that can be translated into British Sign Language signs. The developer was constrained in the number of words he could use, and in their meanings, as he had to choose from the list of words that the website provided. Using SQLite Expert Personal 3 [40], a tool for creating SQLite databases, I created the database for the initial prototype of the application, which consists of 7 words. The GIF images that the website provided could not be stored in the database as GIFs (animated images that show movement), because SQLite does not support animated GIFs and Android does not display them either (it shows a static image), so I had to follow another approach.

Splitting the GIFs

I decided that the appropriate way to design my database was to split each GIF image into bitmap images and store them in the database next to a primary key column. The actual primary key is the word itself, so

the queries can be done quickly and accurately. For this purpose I used image splitter software [41] to split the GIFs. The problem at this stage was that each GIF image had a variable number of single frames. For example, the word "bus", as shown above, consists of two single images which, put together, form the sign for the word.

27. The above displays the word "bus".

Now let us take a look at the word "apologise": it takes five single images to form the GIF, so my database needs to have enough columns available for the images and one column for the word. Seven columns for images is enough, because there is no word that needs more than seven images to be displayed. With the database designed like this, queries on words that have fewer than seven images return a null value for the unused columns; this is discussed further in the implementation section of this report. The splitter software performed the operation of splitting the GIF files into frames automatically. Once the different frames of a GIF were available, I stored them in the database, adding the word and the frames to the corresponding columns. This was done manually, and the developer had to be careful to put the image frames in the right order. It was a long procedure that took time, because the final version of the database has 420 words that had to be stored along with their separate frames.
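For illustration only, here is a minimal sketch of how a table with this layout could be declared in code. The actual database was built with SQLite Expert rather than programmatically; the table name signs and the columns _id, sign and sign2 appear in the queries later in this chapter, while sign3 to sign7 are assumed names for the remaining frame columns.

// Sketch only: the real database was created with SQLite Expert, not in code.
// Table name "signs" and columns "_id", "sign", "sign2" appear in later queries;
// "sign3" to "sign7" are assumed names for the remaining frame columns.
import android.database.sqlite.SQLiteDatabase;

public class SignsSchema {
    public static void createSignsTable(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE signs ("
                + "_id TEXT PRIMARY KEY, "                  // the English word acts as the key
                + "sign BLOB, sign2 BLOB, sign3 BLOB, sign4 BLOB, "
                + "sign5 BLOB, sign6 BLOB, sign7 BLOB)");   // unused frame columns stay NULL
    }
}

Storing the word itself as the primary key means that a lookup for a typed or spoken word is a single indexed query.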

Now that I have split the GIF images, I had to find a way of putting them back together so that the meaning of the word can be displayed to the user. Using Android's animation library I can join different images together to make the animation available to the user. This is also explained in the implementation phase of this report.

9. Implementation

The whole application depends to a large extent on the database that holds the words and the images, so I had to develop a quick but fully functional prototype of the application in order to test the speed of the queries made on the database. This speed is a key point for the application, because my requirements stated that a fast interaction between the user and the application is necessary. The main difference between the prototype and the final application is the number of signs available in the database. The application should support input both by text, so the user can enter the desired word, and by voice, so the user can speak to the device and the correct sign will be displayed. One of the developer's first choices was to keep the interface of the application as simple as possible. Because there is no need for complex screen navigation, only the basic buttons and screens were developed, so the designer could focus on the functionality of the application. Through discussions with potential users, I as designer decided to create an interface built around simplicity and ease of use. The user wants to interact with the device in two ways. The first way is text input: the user should be able to give a word as text, and the system should show the sign corresponding to that word, or a message that the word is not available. The second way is to interact by voice using Google's voice recognition service: the user speaks the word to the device, the device returns the recognised word, and the user can select it and see the sign. For this I need the basic components that Android provides. I have discussed the Android operating system in a previous section of this report, so let us now see which features of Android I used to build the application. The Android libraries provide important tools that allow me to use the Java programming language to build applications for mobile phones.

Android Views. Views are the basic tool that helps us design and build the interface of the application: buttons, text fields, image views and the other basic components that the application's screens consist of. The View classes also give us methods to handle events when the user interacts with these components.

Android Activities. Activities are a very important part of an Android application. Activities are the classes that provide the different screens of the application; an activity can be started from another activity, for example by pressing a button or by interacting with the system in some other way.

Android Intents. Intents are also an important element in my application. An Intent is the mechanism that starts one activity from another. For example, if I want to show the user a new screen when a button is pressed, I can start the new screen's activity with an Intent. The snippet of code below uses an Intent to start the voice recognition activity when a button is pressed.

Intent myintent = new Intent(MainScreenActivity.this, VoiceRec.class);
MainScreenActivity.this.startActivity(myIntent);
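To show how a View click listener and an Intent work together, here is a small sketch of the kind of code that wires a button to the voice recognition screen. MainScreenActivity and VoiceRec are the activity names used in this dissertation, while the button id voice_input_button is an assumed name used only for this example.

import android.content.Intent;
import android.view.View;
import android.widget.Button;

// Inside MainScreenActivity.onCreate(), after setContentView(...):
Button voiceButton = (Button) findViewById(R.id.voice_input_button);  // assumed id
voiceButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Open the voice recognition screen when the user taps the button
        Intent myIntent = new Intent(MainScreenActivity.this, VoiceRec.class);
        startActivity(myIntent);
    }
});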

Android XML files. Every Android application contains XML files that are necessary for creating the screens and designing the elements on those screens. These files are placed in the layout folder of the application. The Android SDK provides both the XML code and a visual, graphical representation of the layout that the developer can use, and the two are connected: every change made in the graphical representation also happens in the XML file. In these files you can assign a large number of attributes to every element that the developer decides to include in the application. The snippet below shows the design of a button in XML code.

<Button
android:id="@+id/resume_signing"
android:layout_width="150dp"
android:layout_height="50dp"
android:layout_alignParentBottom="true"
android:layout_alignParentRight="true"
android:layout_marginBottom="34dp"
android:background="@drawable/square_button_11"
android:text="Resume Signing" />

I assign attributes such as the id of the button, which I will use in the activity to set the button's action, the width and height of the button, the background image (which exists in the drawable folder of the application), and the text displayed on the button.

Android Manifest. The Android Manifest is the most important XML file of an application built for Android. In this file I declare the name of the application and, most importantly, the activities that the application consists of and the order of these activities; by order I mean which activity is going to run first. I also declare the version of my app and the package it is built in. The snippet below shows how the activities are organised in the XML code of the Android Manifest.

<activity
android:name=".MainScreenActivity"
android:label="@string/app_name" >
</activity>
<activity
android:name=".VoiceRec"
android:label="@string/app_name"
android:theme="@style/CodeFont" >

</activity>

When one activity ends, the next activity starts. The MainScreenActivity is declared before the VoiceRec activity because, in order to reach the VoiceRec activity, I have to start the MainScreenActivity first. I declare the activities in the manifest by the names of their classes, together with the application's name, which is a String stored in the strings XML file of the app.

28. The following image shows the dependencies between the classes of the application.

TheSplashScreen class

This class is an activity: a class that extends the Android Activity class and is represented as a new screen on which a UI can be placed. The SplashScreen activity class is the first activity that runs in my application, so it needs to be declared first in the Android Manifest file. The purpose of this class is to display the first screen of the application, commonly known as a splash screen, which shows an image; this image includes the application's name, so when the user opens the application this screen appears, displaying the name of the application graphically. The screen stays active for five seconds and then the main screen of the application appears to the user. The activity uses a new layout to show the screen with the image, and then a Handler is introduced. The purpose of a Handler on Android is to schedule a Runnable to be executed later, and that is the reason I use it in this situation. I create the Handler and use its public postDelayed method with a new Runnable object that will be executed after five seconds. Inside the postDelayed call a new

Intent is created that will start the MainScreenActivity class. So, when this Handler is executed, it waits for five seconds and then the new Intent starts the new activity with the main screen. Below you can see a snippet of how the Handler's public method is used; this was taken from a book chapter that explains how to use threads to achieve this [46].

new Handler().postDelayed(new Runnable(){
    public void run() {
        Intent mainintent = new Intent(TheSplashScreen.this, MainScreenActivity.class);
        TheSplashScreen.this.startActivity(mainIntent);
        TheSplashScreen.this.finish();
    }
}, timedisplayed);

The Intent starts the MainScreenActivity after the timedisplayed interval, which is five seconds, has elapsed.

HelpMenu and AboutMenu classes

These two classes are also activities, started from a menu choice made by the user. Like every application, this one needs to provide information about itself and give the user quick help for going through the application. These two classes are responsible for two different screens, with different layouts, that display information about the application and provide a quick tutorial on how the user can use it. By pressing the back button the user can exit these screens and return to the main screen.

DatabaseCreation class

An early idea for this project was to store the data (the images showing the movement of the signs) directly in the assets folder of the application. This could have saved space, because the database file is already in the assets folder of the app and I can load the database at the start of the app anyway. I abandoned this idea because I believed that if I store the information in a database and load the database into the app, the programming part would be easier and the information better organised. I also had to think about the future of the application: a database is easier to change in the future, independently of the app itself. Another consideration was the size of the data: the 420 words with their images make a large database which could not easily be stored on some mobile phones. The application's database is 138 MB, and the user needs to have at least 140 MB available to install the app. I discuss the drawbacks and benefits of this further in the future work section of this document. For this application I have already created the database with a separate tool, adding the words and the image frames of the signs, so there is no need to create a database programmatically inside the application; instead I must find a way

to take the database that I have already created and copy it into the application so that the user can use it. This is the responsibility of the DatabaseCreation class, implemented with the help of an online tutorial [42]. Two main methods, which call a series of other methods, are implemented in this class. The createdb() method creates a database; with the DBexists() method it checks whether there is already a database inside the application and, if not (if the database is null), it calls the copydbfromre() method. The DBexists() method builds the database path from a path and name that I have already defined as private members of the class and opens the database so that I can read from it and write to it. The snippet below, which creates the database path and declares how the database will be opened, is taken from Juan-Manuel Fluxà's tutorial [42] on shipping databases that have already been built.

String databasepath = DATABASE_PATH + DATABASE_NAME;
db = SQLiteDatabase.openDatabase(databasepath, null, SQLiteDatabase.OPEN_READWRITE);

I have placed the database in the assets folder of the Android project, and the copydbfromre() method starts a process that copies the database from the assets folder into the new database that the createdb() method made. Here I use two streams: an input stream to open the database from the assets folder, and an output stream to copy the contents of the database into the new database by calling the write() method of the output stream. Below is a snippet of code showing how the streams copy the contents of the database, again with the help of Juan-Manuel Fluxà [42].

inputstream = mycontext.getAssets().open(DATABASE_NAME);
outputstream = new FileOutputStream(databasepath);
byte[] buffer = new byte[1024];
int length;
while ((length = inputstream.read(buffer)) > 0) {
    outputstream.write(buffer, 0, length);
}
outputstream.flush();
outputstream.close();
inputstream.close();

After the copying of the database is over, another method opens the database with read-only permission, because I do not need to make further changes to the database once it is loaded into the application. Below you can see how the method that opens the database works.

String mypath = DATABASE_PATH + DATABASE_NAME;
dbsqlite = SQLiteDatabase.openDatabase(mypath, null, SQLiteDatabase.OPEN_READONLY);
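As a rough sketch of how an activity might use this helper, assuming the class takes a Context in its constructor and that openDB() is the name of the read-only open method described above (the text does not give its exact name):

// Sketch only: createdb() and the class name DatabaseCreation follow the text above;
// the constructor argument, the IOException and openDB() are assumptions.
DatabaseCreation dbCreator = new DatabaseCreation(this);
try {
    dbCreator.createdb();   // copies the prepared database out of assets if it is not there yet
} catch (IOException e) {
    throw new Error("Unable to create the signs database", e);
}
dbCreator.openDB();         // open the copied database read-only for queries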

DatabaseMethods class

In this class I implemented all the methods I need in order to query the database. All the methods in this class are called by other classes in our application, so that I can search the database according to the user's choice. My database consists of one table with eight columns. The first column is the _id column, and it is the most important one because all the queries are based on it: this column holds the word that exists in the database (see fig. 29). There are seven other columns that correspond to the frames of the sign for that word. Each column holds one frame image; some words have more than one frame, as I discussed earlier. If a word has, for example, four image frames, the remaining three columns, which are empty, return a null value for any query made on this word.

29. Figure of the database.

The queries are done with the rawQuery method that the Android database library provides to the developer. Two methods are used most. The sendtheword(String) method takes a String as an argument, queries the database to find whether this String exists, and returns it. The snippet below shows how this method works.

public String sendtheword(String word){
    String result = "";
    String p_query = "select * from signs where _id = ?";
    Cursor c = database.rawQuery(p_query, new String[] { word.toString() });
    for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()){
        result = c.getString(c.getColumnIndex(KEY_ROWID));
    }
    return result;
}

I also need to search the database, given a String that corresponds to a word, to find the frame images of that word. The method sendallframes(String) takes a String as an argument and searches the database to return all the image frames that are stored in the database and match the search. The method returns a list of files of type blob (byte arrays). In the main activity, where I call the method, I de-serialise the bytes and decode them to get the images as bitmaps; this is discussed later, when we see how the method is called. For now, the snippet below shows how the query that retrieves the images is done in the DatabaseMethods class.

ArrayList<byte[]> data = new ArrayList<byte[]>();
String p_query = "select * from signs where _id = ?";
Cursor c = database.rawQuery(p_query, new String[] { sign1.toString() });
for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()){
    blob1 = c.getBlob(c.getColumnIndex(SIGN));
    blob2 = c.getBlob(c.getColumnIndex(SIGN2));
    // ... the remaining frame columns are read in the same way ...
    data.add(blob1);
    data.add(blob2);
}

The method searches the database with the help of the cursor to find the word, reads the blob files (images) from the columns of the corresponding entry, and stores them in an ArrayList (data).

MainScreenActivity class

This class is the main screen of the application, where all the buttons and the pop-up windows are defined and implemented. The signing of the word is displayed to the user through an animation of the frame images from the database. Three buttons handle the user's choices on this screen: the text input button, where the user can enter the word they want in a pop-up window; the voice input button, where the VoiceRec activity class is called to prompt the user to speak the word; and one button to stop and resume the signing of the word. There is also a menu with two options for the user: a Help option with a quick tutorial for the app, as discussed, and an About option where the user can find information about the application.

Text Input

This application is basically divided into two main parts: the text input of the word and the voice input of the word. In this section I discuss the text input, where the user can enter the word manually by typing it into an EditText provided by a pop-up window (see fig. 30). Besides the EditText, this pop-up window has two buttons to handle the user's action: an OK button and a Cancel button.

30. Text input screen.

The pop-up window uses a different layout from the main activity, but it is displayed on top of the main activity rather than on a new screen. Android provides a LayoutInflater service that helps the developer show a new layout, and most of the time it is used with a PopupWindow element that is displayed on top of the current activity [43]; both of these elements are part of the Android library. The snippet below shows how the pop-up window is created and shown to the user; the showAtLocation method specifies where the pop-up window will be displayed on the screen.

View layout = inflater.inflate(R.layout.popup, (ViewGroup) findViewById(R.layout.main));
pw = new PopupWindow(layout, 300, 300, true);
pw.showAtLocation(this.findViewById(R.id.img), Gravity.CENTER, 0, 0);

The user has two options when the text input is chosen: either write a word and press OK (or Enter on the keyboard), or press Cancel (or the back button); when the pop-up window appears, the keyboard also becomes visible so the user can type straight away. If the user chooses Cancel, the application returns to the main screen. If the user writes a word and presses OK, a series of methods begins in order to show the user the word displayed in British Sign Language: a method that searches the database for the given word is called and, if the word exists in the database, another method is called to display the sign; if not, a message appears in a TextView telling the user that the word is not available (see fig. 31).
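To make this flow concrete, a sketch of what the OK button's handler could look like is shown below. sendtheword() and displayanimatedsign() are the methods described in this chapter and pw is the PopupWindow from the snippet above; the widget variable names and the exact null/empty check are assumptions made only for this example.

// Sketch of the OK-button handling described above (widget variable names are assumed).
okButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        String typed = wordEditText.getText().toString().trim().toLowerCase();
        String found = dbhelper.sendtheword(typed);          // look the word up in the signs table
        pw.dismiss();                                        // close the pop-up window
        if (found != null && !found.equals("")) {
            displayanimatedsign(found);                      // build and start the sign animation
        } else {
            infoTextView.setText("The word \"" + typed + "\" is not available");
        }
    }
});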

31. Main screen with no results.

If the word is available, the String is passed as an argument to a new method called displayanimatedsign(String). This method is probably the most important method in this class. It calls methods from the DatabaseMethods class to make the queries and gets the bytes of the images that correspond to the word stored in the database. First I get the image as bytes by calling the sendimage(String) method that I have discussed. After that I need to read the contents of the byte array using an input stream and then decode this stream using Android's BitmapFactory library to get the bitmap as a single image. You can see this operation in the snippet below.

byte[] blob = dbhelper.sendimage(image_string);
inputstream = new ByteArrayInputStream(blob);
Bitmap bitmap = BitmapFactory.decodeStream(inputstream);

Now that I have the images, I need to re-create the animation of the sign. Some of the calls that return the images may give the value null, so I need to check for null values in order to know how many frames there are for the word before I start the animation.

Animation of the sign

To recreate the animation of the sign I use Android's AnimationDrawable library. Android animation works with drawable frame items, so I first need to create a drawable frame from each bitmap image. Each frame is created from a bitmap image, and then a new AnimationDrawable object is created and used to add the frames to the animation. Later on I start the animation with a new Handler and the class Starter, which implements the Runnable interface. The snippet below was taken from [47], a very helpful tutorial that explains how Handler works.

Handler startanimation = new Handler() {
    public void handleMessage(Message msg) {
        super.handleMessage(msg);
        animdrawable.setOneShot(false);
    }
};

class Starter implements Runnable {
    public void run() {
        animdrawable.start();
    }
}

When this method reaches the Starter class, the sign is displayed on the screen in an ImageView at the centre of the screen (see fig. 32). A TextView that displays the word being signed is also available, so the user can see the word too. The Stop the Signing button freezes the animation of the sign, while Resume Signing resumes the paused animation.

32. Signing of a word.
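For clarity, the frame-assembly step can be summarised in a condensed sketch: each decoded bitmap becomes one frame of an AnimationDrawable, which is then attached to the ImageView and started (in the real application the start happens through the Handler and Starter shown above). The 500 ms frame duration and the variable names are assumptions; R.id.img is the ImageView id used elsewhere in this chapter.

// Sketch of assembling the sign animation from the decoded bitmap frames.
AnimationDrawable animdrawable = new AnimationDrawable();
for (Bitmap frame : frames) {                      // bitmaps decoded from the BLOB columns
    if (frame == null) break;                      // stop at the first missing frame
    animdrawable.addFrame(new BitmapDrawable(getResources(), frame), 500);  // assumed duration
}
animdrawable.setOneShot(false);                    // loop the sign continuously
ImageView signView = (ImageView) findViewById(R.id.img);
signView.setImageDrawable(animdrawable);
animdrawable.start();                              // in the app this is triggered by the Starter runnable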

Voice Input

Besides the text input of a word, this application gives the user the option to speak the word to the device. The Google voice recognition API is used in this app to receive the user's voice input and return the possible results of the recognition. Voice recognition systems are supported by most of the smartphones available on the market today, and this technology is continuously improving. In this app I use Google's API, as it is supported on Android mobile phones and tablets. The voice input button starts a new activity class named VoiceRec and prompts the user to speak the word into the microphone of the device. The VoiceRec activity uses a different layout from the main activity, and when the results of the recognition are available to the system it stores them in an ArrayList of words (see the snippet of code below). This array of words is checked against an array of the words that exist in the database, and the words that appear in both arrays are saved in a new ArrayList, which is displayed to the user as a ListView. The remaining words that came back from the voice recognition activity are also displayed to the user, with the indication "not available". This is done because the application is addressed to people with hearing problems, and it is helpful for them to see in writing the word that someone else has spoken, even if there is no sign available. The words are displayed in order, with the available words shown first and then the unavailable ones, based on the recognition confidence score. The user can then choose a word from the list, and the main screen appears to show the user the sign for that word. The snippet below was created with the help of an online tutorial [45] that explains how voice recognition works on Android.

Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
i.putExtra(RecognizerIntent.EXTRA_PROMPT, "Give The Word");

XML layout files

This application consists of eight layout files (one of them added after the evaluation) and the Android Manifest file, all written in XML for Android. The layouts represent the different screens of this application, the components used on these screens, and their attributes.

1) The app_menu layout, where I declare a menu for the application to help us start the AboutMenu and HelpMenu activities.
2) The menu_about layout, where I give information about the application to the user; it can be accessed by pressing the About button on the app's menu.
3) The menu_options layout, where I give a short tutorial so the user can find their way through the application; it can be accessed by pressing the Help button on the app's menu.
4) The my_splash_screen layout, which consists of an ImageView where the intro screen is displayed to the user when the application is first started.
5) The main layout, where the main screen of the application is created. It consists of the buttons discussed earlier, a TextView with quick information for the user, and an ImageView where the signing of the words is displayed.
6) The popup layout, where a pop-up window is designed; it appears on the screen when the user chooses text input. It consists of an EditText where the user can enter the word and two buttons: the OK button, which starts the search, and the Cancel button, which returns to the main screen.
7) The voice layout, where the voice recognition activity starts when the user chooses voice input. It consists of one button, called Speak, that starts the voice recognition service, and a ListView for the available words that the service returns.
8) The displayhistory layout. On this layout screen a ListView helps us show the available words to the user (see the extra features added after the evaluation).
9) The Android Manifest file.
This file is written in XML code too, and provides information on the version of the application, its name, the package the code is written in, the icon of the application and, most importantly, the activities that take part in the app and the sequence in which these activities are executed.

Now let us see an example of how an EditText is written in an XML layout.

<EditText
android:id="@+id/take_word_from_pop_up"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true"
android:layout_marginTop="19dp"
android:ems="10" />

Attributes such as the id, width and height must be assigned, along with other optional attributes that define the position at which the EditText appears on the screen.

Drawable folder

Every Android application project has a folder in which the developer can store images that will be used in the application. The drawable folder contains images, icons and other kinds of graphical resources that are referenced in the XML layout files and can be loaded onto the screen. Here I have stored images such as the intro screen of the app, the icon of the app and other images that I load into the app's screens.

Extra features added after the evaluation

During the evaluation of the app, some users indicated that a History feature was necessary for the best use of this app. I therefore proceeded to develop a History service that stores the words that the user has previously entered, so that a word can be found again without the user having to remember it. Two more classes were introduced, along with another XML screen.

DatabaseHistoryCreation class

In this class I created a new database with the name historyofapp, so that I can store the words that the user types or speaks and for which images are available in the database. I have created three methods: one to add a word to the database, one to get all the words that are stored in the database, and one to delete a word from the database. A sketch of what these methods could look like is given below; I will not present the actual snippets of code for this class due to space limitations, but everything regarding the code can be found in the archived code.
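The database name historyofapp comes from the description above; the table name, column name and method names in this sketch are assumptions, since the dissertation does not list them.

// Sketch only: table "history", column "word" and the method names are assumed.
public void addWord(String word) {
    ContentValues values = new ContentValues();
    values.put("word", word);
    database.insert("history", null, values);
}

public ArrayList<String> getAllWords() {
    ArrayList<String> words = new ArrayList<String>();
    Cursor c = database.rawQuery("select word from history", null);
    for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()) {
        words.add(c.getString(0));
    }
    c.close();
    return words;
}

public void deleteWord(String word) {
    database.delete("history", "word = ?", new String[] { word });
}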

DisplayHistory class

This class is similar to the VoiceRec class in terms of appearance. A ListView helps us display the available words that are stored in the historyofapp database (see fig. 33).

33. List of words in the history screen.

The user has two options here: either press on a word, in which case the MainActivity screen appears and starts the signing of the selected word, or long-press on a word, in which case an option to delete the word becomes available (see fig. 34).

34. Option to delete a word.
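A sketch of the history list's click handling is shown below; the idea of passing the chosen word back to the main screen through an Intent extra (here called selected_word) is an assumption about how the hand-over is done, as the dissertation only describes the behaviour.

// Sketch only: the extra key and the variable names are assumptions.
historyListView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
        String chosenWord = (String) parent.getItemAtPosition(position);
        Intent intent = new Intent(DisplayHistory.this, MainScreenActivity.class);
        intent.putExtra("selected_word", chosenWord);  // assumed extra key
        startActivity(intent);
    }
});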

Here I have to discuss the long-press implementation. On Android, a long press is similar to the right click of other operating systems, in the sense that the long-pressed item gives the user more options to select from. The Context Menu service that Android supports helps us create this small context menu, with only one choice for the user: to delete a word from the history service of the app, if they wish. The method below is one of the Android library callbacks and creates a context menu with my choice of information.

public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) {
    menu.setHeaderTitle("Press Delete to delete the Word");
    menu.add(0, CONTEXTMENU_DELETEITEM, 0, "Delete this word");
}

Later, when the user selects one word from the ListView, I need to implement the delete action. Again, due to space limitations I will not present the snippet of code showing how I handle the user's action, but it is available in the archived code, in the DisplayHistory class.

Increase the view of the sign

During the evaluation process some users indicated that the signing takes place in a small image view. Because of a limitation of the image quality, which is not very high, I had to find a way of presenting the sign larger without changing the image resolution. I therefore changed the view by making the animated image a button, so that when the user clicks it the animated sign is enlarged, up to a point that does not stretch or distort the image.

10. Usability of the application and testing by users (Evaluation Part I)

10.1 An Important Issue

An important question I had to settle before starting any usability survey or testing was which people I was going to find and ask to test the usability aspects of this app. The interviews for the testing and usability sections and the evaluation section, and the questionnaires that the users would be asked to complete, had to be done face to face with the developer. In most cases, internet surveys on the usability and evaluation of applications are used by developers, and in this way many people can test or evaluate the software easily and without contacting users in person. In my case this was very hard to do. Firstly, I could not know whether a user contacted through the web possessed an Android device to test the application; and secondly, even if they did, I could ask the user to download my app by email, but I could not ask them also to download the app that mine will be evaluated against. Another issue was the voice recognition problem of the Android emulator: even if a user does not have a suitable device to test the app, I could have sent them a link to an emulator, but they could not test the app properly because the emulator does not support the voice recognition service.

For these reasons I decided to contact the users in person, which has pros and cons. The pros are that I have a direct, face-to-face conversation with the user and can better understand their view of how the app is working; in this way I can better understand what the user thinks of the app during the usability and testing stage and the evaluation. One of the cons is the number of users I can reach: as a single developer I could not contact a large number of users in person, which leads to a small number of users testing or evaluating the application, and therefore a small number of suggestions and limited feedback.

10.2 Usability interviews

First of all I must state the difference between the usability study and the actual evaluation of the app. The usability questionnaire is there to help the developer understand how the features of the app work for users and to help him identify what the users seek from this application. The usability survey also helps determine whether the interface of the app is easy to use, and lets me learn from my mistakes in the design. In the end, the usability and testing process is a kind of evaluation, but one focused on whether the user can manage to use the application without any drawbacks. The application I have created is aimed at people who want to use and learn British Sign Language. Deaf people are the main target audience, as stated in the requirements of the project, but the limited number of words and the unavailability of real-time translation of sentences will be a drawback for the use of this app by deaf people. This application does not provide a complete environment for learning BSL, but acts as a dictionary in which the user can find the signing of the words available for BSL. At this point the application's development is in its final stage, so the developer should give the product to users for testing and find out the usability level of the application. As the developer I will not aim the app only at people with hearing problems but at a variety of users who want to learn some basic English words in sign language. For that purpose, and for the purpose of testing the application, I created a usability questionnaire that was given to the users after they had used the application for long enough to become capable of understanding the app's interface and its features. In order for the users to become familiar with the features of this application, I guided them through the app, asking them to perform some basic actions:

1. Explain how the interface works to the users.
2. Ask the user to use the text input service.
3. Ask the user to enter more than one word, or words of their choice.
4. Ask the user to use the speech recognition service.
5. Ask the user to use the Pause Signing and Resume Signing buttons.
6. Ask the user to navigate through the app.

After asking the users to perform the above actions, I gave them an online questionnaire so they could provide the developer with information regarding the usability of the application. The questionnaire can be found in one of the appendices, and I am now ready to discuss its results. The number of users interviewed was eight: five male and three female participants, all aged between 22 and 28. The same participants took part in the evaluation of the app.

Results of the answers are available in the tables below. In the question "Did you find the application ShowMeTheSign useful? Please rate from 1 to 5 (1 being the lowest and 5 the highest point)", half of the users (50%) rated the application 5, indicating that it can be used without problems, while 25 per cent rated it 3 and 25 per cent rated it 4. In the next question the users had to say whether the animated signs were understandable on the screen, that is, whether the signs were clearly displayed. 37.5 per cent of the users agreed that the signs appeared clearly, while 62.5 per cent were neutral. When asked about this, they responded that the signs were clear to some degree, but that the low resolution of the images could be a problem.

The next question asked the users to rate a set of statements from 1 to 5, with 5 being the maximum. The answers are presented in the table below.

Statement                                                        1      2      3      4      5      Average
Did you find the speech input of the application useful?        0%     25%    25%    25%    25%    3.50 / 5
Did you find the text input of the application useful?          0%     0%     0%     25%    75%    4.75 / 5
Is the interface of the application easy to use and navigate?   0%     0%     0%     25%    75%    4.75 / 5
Does the animated sign appear quickly enough?                   0%     25%    0%     25%    50%    4.00 / 5

We can see the results more clearly in the chart below.

[Chart: Usability Findings First Chart, showing the ratings for the four statements above]

From the results we can say that the users liked the text input of the application more than the speech input, although some users said that the voice recognition system made it easier to discover which words the application supports. With an overall average of 4.25 / 5, most of the users agreed with the above statements.

When asked for suggestions to improve the application, the users gave the following responses:
- Voice input must be more reliable.
- Faster signing.
- Lower the time between image changes.
- Useful but not good-looking interfaces.
- The Help and About options are not good looking.
- The time the signing takes to reload is too long.

I also asked the users about specific components of the app to see how useful its design was. The following statements had to be rated from 1 to 5 as before, with 1 being the lowest and 5 the highest point.

Statement                                        1         2         3         4         5         Average
Image/animation resolution of the sign           12.50%    25%       37.50%    12.50%    12.50%    2.88 / 5
The button for pause/resume was useful           12.50%    12.50%    0%        50%       25%       3.63 / 5
Appearance of the application                    0%        0%        25%       25%       50%       4.25 / 5

[Chart: Usability Findings Second Chart, showing the ratings for the three statements above]

You can see from the above that the users liked the pause/resume button and the appearance of the application, but the signing resolution was a problem for some of them. This is something I cannot change, but I still needed to gauge the users' satisfaction with this important part of the application.

Another rating question followed so I could see the usability levels of the application. After the users had tested the app and used it to learn the signing of some words in BSL, they were asked to rate the statements below; here are the results.

Statement                                                         1         2         3         4         5         Average
The application's text input method is easy to use               37.50%    12.50%    0%        0%        50%       3.13 / 5
The application's voice input is easy to use                     0%        25%       50%       25%       0%        3.00 / 5
The number of signs available is satisfying                      25%       37.50%    25%       0%        12.50%    2.38 / 5
Someone can use this application to learn how to sign            25%       25%       25%       12.50%    12.50%    2.63 / 5

[Chart: Usability Findings Third Chart, showing the ratings for the four statements above]

After the usability survey finished I reviewed the results and began an immediate analysis. A large number of users did not find the voice recognition service useful enough; when asked about this, they said that the desired word did not come up when they used voice input. According to the users, this can happen for three reasons. Firstly, the voice recognition originally returned only words that are available in the database, so that the user could not select a word for which no sign exists and become frustrated. Secondly, the majority of the users were not native English speakers, and pronunciation plays an important role in voice recognition services. Thirdly, the voice recognition service requires a network connection so that the phone can reach the web, and this cannot always be guaranteed.

After this negative feedback I had to change the way voice recognition works in the application. In collaboration with my supervisor I reviewed my implementation and changed it so that all the words returned by the recognition service are added to the list shown to the user. The user can then see which of these words are available in the database by looking at the availability marker at the end of each word: if the word is not available, "not avail" follows the word in the list (a short illustrative sketch of this availability check is given below).

In other parts of the questionnaire/interviews the users found the interface of the application useful, but I did not receive good comments on the number of signs available in the application. This was something I had expected, given the limited set of words we obtained permission for. Many of the testers pointed out that the sign is displayed at a small scale and that, for some words, they could not make out the signer's gestures. This was very important, because the user must be able to understand the signing in every case. So I added an extra feature to the app: the user can press on the sign while it is displayed on the screen, and the signing of the word (the images) will increase in size so it can be viewed more clearly. Pressing the image again returns it to its original size.
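To make the reworked voice input concrete, here is a minimal illustrative sketch of how the recognised words could be checked against the word database and marked before being shown in the ListView. It is not the archived implementation: the fields listView and db, the request-code constant, the helper isWordAvailable() and the "signs" table and "word" column are assumptions made only for this example.

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == VOICE_REQUEST_CODE && resultCode == RESULT_OK) {
            // Words returned by the Android speech recogniser, best match first.
            ArrayList<String> spoken =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            ArrayList<String> display = new ArrayList<String>();
            for (String word : spoken) {
                // Show every recognised word, marking the ones that have no sign.
                display.add(isWordAvailable(word) ? word : word + " not avail");
            }
            listView.setAdapter(new ArrayAdapter<String>(
                    this, android.R.layout.simple_list_item_1, display));
        }
    }

    // True when the word has an animated sign stored in the SQLite database.
    private boolean isWordAvailable(String word) {
        Cursor c = db.rawQuery("SELECT 1 FROM signs WHERE word = ? COLLATE NOCASE",
                new String[] { word });
        boolean found = c.moveToFirst();
        c.close();
        return found;
    }

In this sketch the assumed "signs" table simply needs one row per word that has an animation; selecting an unmarked entry would then start the signing as before.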

Another section of the questionnaire/interviews collected opinions on improving the application. I took all of the users' opinions into consideration and implemented them: I changed the application interface to make it better looking, lowered the time between the displayed images, and fixed the About and Help menus. The usability survey and interviews were very helpful for the developer of the app. Having direct contact with the users and seeing their reactions while using the app was very interesting, as this was my first such experience, and their warnings and suggestions made me reconsider basic components and services of the app.

11. Evaluation (Part II: Comparing Applications)

In this chapter the evaluation of the application is introduced. I have created an application that meets the requirements discussed earlier, and now it is time to evaluate its usability against software with a similar purpose. There are many sign language and British Sign Language dictionaries available for mobile phones and smartphones on the web, but many of them contain very few words or only the alphabet. It makes sense to evaluate the application against one on the same operating system (Android). The Android Market contains similar applications that try to deliver the best result for learning words of sign language; nevertheless, none of them use a voice recognition system that lets the user enter a word by voice.

The purpose of this part of the study is to give the users two applications, the one I created and a similar one, so they can compare them. I need to learn which application they think delivers the best result, which one they find more usable, and which one fits their needs. I chose an Android application created by the University of Bristol Centre for Deaf Studies [44]. The application is called MobileSign and it is available for download from the Android Market. I will provide these two applications to a number of users and ask them, through a questionnaire, about basic points of interest, so I can learn where my app is not efficient enough, and where it is. I will focus on the following major points during this evaluation:

- Usability of the application.
- Which of the two applications the user finds friendlier to use.
- Speed of the signs delivered back to the user.
- Usability of the speech recognition service in the current app.
- Number of words/signs included.

Regarding the last point, I already know that MobileSign supports a very large number of words while my app is limited in that respect. The application I created uses the animated signs from the [39] website, which we obtained permission for, so I am limited to the words that exist on this website. However, the website does include the words used most in everyday interaction with people. I will therefore not focus on which app has the most words, but on whether the users found the words that do exist in the application helpful.

I created an evaluation questionnaire; I discuss the results here, and the questionnaire itself can be found in one of the appendixes of this paper. The users were asked to complete tasks in ShowMeTheSign and MobileSign, and after finishing these tasks I interviewed each user in person and asked them to complete the evaluation questionnaire. The results are the following.

In the question "Please rate the applications based on which app you found more usable (1 is the lowest and 5 the highest point)", the users rated the applications as follows:

ShowMeTheSign   4.50 / 5 (90%)
MobileSign      2.75 / 5 (55%)

[Chart: More Usable Application, pie chart comparing ShowMeTheSign and MobileSign]

This shows that the users preferred ShowMeTheSign to MobileSign. When asked why, and which features they liked or disliked in the two applications, their responses were clear. In the question "Which of the two applications is faster in the display of the signs?", the users gave ShowMeTheSign 100 per cent of the rating, leaving MobileSign with zero per cent.

The next question asked the users to rate some statements from 1 to 5, with 1 the lowest score and 5 the maximum, as in the usability section. Their responses are shown in the table that follows.

Statement                                                                              1      2      3      4      5      Average
Do the displayed signs appear clear to the user in the ShowMeTheSign app?             0%     0%     25%    75%    0%     3.75 / 5 (75%)
To what extent did the application help you learn words of British Sign Language?     0%     25%    25%    50%    0%     3.25 / 5 (65%)
Did you find the speech input option of the ShowMeTheSign application helpful?         0%     25%    25%    50%    0%     3.25 / 5 (65%)

[Chart: Evaluation Findings Second Chart, ShowMeTheSign versus MobileSign, showing the ratings above]

Another question of the same type followed, so the users could compare the two applications' services and how well they work.

Statement                                                                              1         2         3         4      5         Average
The application presents the sign with fairly good speed.                              37.50%    12.50%    0%        25%    25%       2.88 / 5 (57.60%)
I liked the way that ShowMeTheSign presents the results.                               12.50%    37.50%    12.50%    25%    12.50%    2.88 / 5 (57.60%)
I liked the way that MobileSign presents the results.                                  12.50%    25.00%    37.50%    25%    0%        2.75 / 5 (55%)
I would prefer it if ShowMeTheSign presented the signing of words with video.          37.50%    25%       0%        25%    12.50%    2.50 / 5 (50%)
ShowMeTheSign needs more words to be available to the user.                             50.00%    12.50%    12.50%    25%    0%        2.13 / 5 (42.60%)
I have spotted some errors in the functionality of ShowMeTheSign.                       62.50%    12.50%    25%       0%     0%        1.63 / 5 (32.60%)

The chart below presents the results in a more visual way.

[Chart: Evaluation Findings Third Chart, ShowMeTheSign versus MobileSign, showing the ratings above]

After the evaluation process ended I analysed the results and focused on the users' answers. Many users clearly want more words to be available in ShowMeTheSign, although they stated that the available words are a good start and are the ones most used in everyday social conversations. This limitation is something I have already discussed in a previous section. The way ShowMeTheSign presents the sign to the user was clearly more helpful than MobileSign, but the sign itself was not as clear on the screen. This is because MobileSign uses video rather than animated GIFs to display the signing of words, which makes the sign easier to understand. When the users were asked about this, half of them

responded that the video output of MobileSign gives a better result and more understandable signing; the video has better resolution than the animated images that ShowMeTheSign uses. The majority of the users stated that finding a word-sign in MobileSign was easy, but that displaying it was frustrating. When asked about this, they said that switching between two applications (MobileSign and the web browser) takes time and is not helpful at all. In addition, when they wanted to see the signing again they had to go back to the app, because the reload button in the browser does not cause the video to replay. Another thing that annoyed them was that MobileSign needs an internet connection at all times.

The majority of the users also liked the extra features that ShowMeTheSign has. The pause and resume buttons and the voice input were a nice idea, they stated, but they could not use the voice input heavily because, as non-native speakers, their English pronunciation often was not recognised. Many users found the history feature of MobileSign interesting and helpful, so I decided to include a second database in my application to provide a history feature. A second table in the existing database might have been a more appropriate implementation, but I followed another approach and built a new database. In the menu of the app I included a history screen where the user can see which words they previously used and also delete unwanted words.

In general the users liked the instant signing that ShowMeTheSign provides and the extra features such as the pause/resume button and the voice input service. They stated that the number of words is poor and that they would prefer a video output of the signing. They liked the fact that ShowMeTheSign can be used without an internet connection being available all the time.

12. Future Work

In this MSc project I have created a mobile application that someone can use to learn basic words in British Sign Language. I did the research and developed a simple application that translates these words into animated images so the signing can be understood by the user, but the developer does not always make the correct implementation choice the first time. The project will be released as open source, so additional work can be done after the MSc period is over.

Regarding the database of signs, future work could handle alternative spellings of words. A more advanced database could also indicate whether a word is a noun or a verb, and so on. For example, the word "box" could mean a container, or it could refer to the sport of boxing. Ideally, the database would have two tables, one linking the animations to a unique ID and the other mapping from the English words to the IDs (a sketch of such a schema is given at the end of this section). Another point is the way information is stored: the current number of words allows us to build the database and ship it in the assets folder of the application, but there are other ways to store information. More buttons could be introduced to slow down or speed up the signing (the image animation), or a browse button could let the user see which words are available. Also, [39] provided us with the animated GIFs, but the number of words was low, so in the future more words could be added to the database from other providers. One option is a server that the user could connect to in order to add new entries (words) to the application's database.
Size constraints also need to be taken into consideration, because newer versions of the app may have bigger databases.
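As a concrete illustration of the two-table design suggested above, the following sketch shows how such a schema could be created with Android's SQLiteOpenHelper. This is only a sketch of the proposed future layout; the database name, table names and column names are assumptions for illustration and do not describe the current implementation.

    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;

    public class FutureSignDbHelper extends SQLiteOpenHelper {

        public FutureSignDbHelper(Context context) {
            super(context, "showmethesign_v2.db", null, 1);
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
            // One row per animation, identified by a unique ID.
            db.execSQL("CREATE TABLE signs ("
                    + "sign_id INTEGER PRIMARY KEY, "
                    + "animation_file TEXT NOT NULL)");
            // English words (including alternative spellings) mapped to sign IDs;
            // word_class could record whether the entry is a noun, a verb, etc.
            db.execSQL("CREATE TABLE words ("
                    + "word TEXT NOT NULL, "
                    + "word_class TEXT, "
                    + "sign_id INTEGER NOT NULL, "
                    + "FOREIGN KEY(sign_id) REFERENCES signs(sign_id))");
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            // A real upgrade would migrate the data; the sketch simply rebuilds the tables.
            db.execSQL("DROP TABLE IF EXISTS words");
            db.execSQL("DROP TABLE IF EXISTS signs");
            onCreate(db);
        }
    }

With such a layout the word "box" could appear twice in the words table, once pointing to the sign for a container and once to the sign for the sport, and alternative spellings would simply become extra rows mapping to the same sign ID.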

13. Conclusion

In this research paper I have reviewed applications and projects that use speech recognition systems and written-word-to-sign-language systems. I introduced my aims and objectives for the application I built and learned a lot about the technologies these projects and applications use. When building an application aimed at a specific group of people, you need to understand their needs and their way of thinking. What do they need from the app? How can you deliver a product worth using? Which technology will give the best result to the user? The goal of this project is to present the different applications that combine spoken language systems and sign language and the way they were implemented, and also to present the way the developer went about developing his own application. After that, the implementation of such an application is introduced, along with the way it was implemented.

In this project I reached some findings. Given that this was the first time I was developing an Android application to be used by deaf people, I had to investigate how British Sign Language is used and what the differences are between signed and spoken languages. What I found changed the view I had of sign languages and how people use them; I mention this because it is one of the most important findings of this project. I now realise that sign languages are just as complex as spoken languages, and this was a difficulty I had to deal with. Another important finding was how difficult these speech-to-sign-language systems are to implement accurately. Many projects have been carried out, but speech recognition is still an area that is evolving year by year, and in my opinion this will lead to better and more robust speech-to-sign-language systems in the future.

The implementation phases, together with the usability testing and evaluation, took two and a half months, and in that time I learned a lot about Android, sign languages, databases and the voice recognition service. In fact, I gained knowledge of how projects are designed, implemented and reviewed or evaluated. This was the most important part, and I hope my work will be appreciated, that people will use this application according to their needs, and that other programmers will further develop its services.

14. References

[1] R. Sutton-Spence and B. Woll. The Linguistics of British Sign Language: An Introduction. Cambridge University Press, Cambridge, 1999.
[2] Studies on Market and Technologies for IMT in the Next Decade - CJK-IMT Working Group [online].
[3] Cox, S. J., Lincoln, M., Tryggvason, J., Nakisa, M., Wells, M., Tutt, M. and Abbott, S. (2003). The Development and Evaluation of a Speech to Sign Translation System to Assist Transactions. International Journal of Human-Computer Interaction, 16 (2).

[4] Sáfár, Éva and Marshall, Ian (2001). The Architecture of an English-Text-to-Sign-Languages Translation System. In: Recent Advances in Natural Language Processing (RANLP). Faculty of Science, School of Computing Sciences.
[5] Alphabet, Android Market (2011, Google) [online]. [Accessed 24/03/2012].
[6] First Steps, Android Market (2011, Google) [online]. [Accessed 24/03/2012].
[7] Android Market [online]. (2011, Google) [Accessed 24/03/2012].
[8] Android Developers. What is Android? [online]. [Accessed 14/03/2012].
[9] Becky Sue Parton (September 28, 2005). Sign Language Recognition and Translation: A Multidiscipline Approach From the Field of Artificial Intelligence. Journal of Deaf Studies and Deaf Education (Winter 2006) 11(1). [Accessed 11/02/2012].
[10] Dr Becky Sue Parton, research web page.
[11] Research Demonstrates Innovative 'Speech to Sign Language' Translation System [online]. [Accessed 19/03/2012].
[12] Brien, D. (Ed.). Dictionary of British Sign Language/English. Faber and Faber, London/Boston, 1992.
[13] S. J. Melnikoff, S. F. Quigley and M. J. Russell (2001). Implementing a Hidden Markov Model Speech Recognition System in Programmable Logic. Lecture Notes in Computer Science, vol. 2147. [Accessed 27/02/2012].
[14] Steve Young, Julian Odell, Dave Ollason, Valtcho Valtchev and Phil Woodland (1995). The HTK Book. Cambridge University / Entropic Cambridge Research Laboratory Ltd.
[15] Dmitry Zaykovskiy (June 2006). Survey of the Speech Recognition Techniques for Mobile Devices. SPECOM'2006, St. Petersburg. [Accessed 28/02/2012].
[16] Manish P. Kesarkar (2003). Feature Extraction for Speech Recognition. M.Tech. Credit Seminar Report, Electronic Systems Group, EE Dept, IIT Bombay [online]. [Accessed 28/02/2012].

[17] B. H. Juang and Lawrence R. Rabiner (10/08/2004). Automatic Speech Recognition: A Brief History of the Technology Development. Georgia Institute of Technology, Atlanta; Rutgers University; University of California, Santa Barbara.
[18] Pyramid (Personalized Interactions with Resources on AmI-enabled Mobile Dynamic Environments) [online]. (2008) [Accessed 18/2/2012].
[19] Global Accessibility Reporting Initiative (GARI), Mobile Accessibility [online]. [Accessed 18/2/2012].
[20] iCommunicator [online]. [Accessed 19/02/2012].
[21] Nuance [online]. (2002/2012) [Accessed 30/07/2012].
[22] ASU-ASL Blog [online]. [Accessed 24/03/2012].
[23] Science Photo Library [online]. [Accessed 24/03/2012].
[24] H. Kamp and U. Reyle. From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Kluwer Academic Publishers, Dordrecht, 1993.
[25] Lauren Winans (lwinans@ucsc.edu). Notes on Kamp and Reyle, Chapters 1 & 2 (Semantics C, Brasoveanu) [online]. [Accessed 1/3/2012].
[26] Signfuse [online]. (February 2010) [Accessed 10/03/2012].
[27] MobileASL [online]. (last updated 3/3/10) [Accessed 10/03/2012].
[28] Thomas Wiegand, Gary J. Sullivan, Gisle Bjøntegaard and Ajay Luthra (July 2003). Overview of the H.264/AVC Video Coding Standard. IEEE Transactions on Circuits and Systems for Video Technology, vol. 13. [Accessed 10/03/2012].
[29] WISDOM (1 January to 31 March 2004). University of Bristol, United Kingdom.

[30] Deaf Studies Trust [online]. [Accessed 5/03/2012].
[31] ATLAS - Automatic Translation into Sign Language [online]. (2007) [Accessed 24/03/2012].
[32] What is Dicta-Sign? [online]. [Accessed 20/03/2012].
[33] Real-time translator which is portable.
[34] Rob Abdul. Evolutionary Development Model - Developing the Foundations for a Data Management System: Software Development Cycle [online]. [Accessed 10/03/2012].
[35] Dictionary of words in sign language [online]. [Accessed 26/03/2012].
[36] Iain McCowan, Darren Moore, John Dines, Daniel Gatica-Perez, Mike Flynn, Pierre Wellner and Hervé Bourlard (March 2005). On the Use of Information Retrieval Measures for Speech Recognition Evaluation. IDIAP Research Report [online]. [Accessed 29/03/2012].
[37] FCLS - Software Development Productivity [online]. (2006) [Accessed 10/03/2012].
[38] Alan B. Poritz (1988). Hidden Markov Models: A Guided Tour. Institute for Defense Analyses, Princeton, NJ. [Accessed 30/07/2012].
[39] British Sign Language [online]. (2012) [Accessed 10/6/2012].
[40] SQLite Expert [online]. [Accessed 1/6/2012].
[41] GifSplitter World [online]. [Accessed 30/5/2012].
[42] Juan-Manuel Fluxà (March 3rd, 2009). Using your own SQLite database in Android applications [online]. [Accessed 20/5/2012].
[43] Android Developers [online]. (2012) [Accessed 12/07/2012].
[44] University of Bristol Centre for Deaf Studies [online]. [Accessed 20/07/2012].
[45] James Elsey (26/Feb/2011). Android: A really easy tutorial on how to use Text To Speech (TTS) and how you can enter text and have it spoken [online]. [Accessed 15/6/2012].

[46] J. F. DiMarzio (2011). Press Start: Making a Menu. In: James Markham (ed.), Practical Android 4 Games Development. New York: Paul Manning.
[47] Post from "vikashiran" (January 7th, 2012). Android Threads, Handlers and AsyncTask - Background Processing using Threads [online]. [Accessed 25/6/2012].
[48] Juergen Schroeter. Text-to-Speech (TTS) Synthesis. Chapter 16, AT&T Laboratories. [Accessed 5/8/2012].

Appendix A: Class Diagram

The classes of the application are shown here. The two classes HelpMenu and AboutMenu are not displayed because of their small content.

[Class diagram figures]


More information

Hardware Implementation of Probabilistic State Machine for Word Recognition

Hardware Implementation of Probabilistic State Machine for Word Recognition IJECT Vo l. 4, Is s u e Sp l - 5, Ju l y - Se p t 2013 ISSN : 2230-7109 (Online) ISSN : 2230-9543 (Print) Hardware Implementation of Probabilistic State Machine for Word Recognition 1 Soorya Asokan, 2

More information

The preliminary design of a wearable computer for supporting Construction Progress Monitoring

The preliminary design of a wearable computer for supporting Construction Progress Monitoring The preliminary design of a wearable computer for supporting Construction Progress Monitoring 1 Introduction Jan Reinhardt, TU - Dresden Prof. James H. Garrett,Jr., Carnegie Mellon University Prof. Raimar

More information

Masters in Information Technology

Masters in Information Technology Computer - Information Technology MSc & MPhil - 2015/6 - July 2015 Masters in Information Technology Programme Requirements Taught Element, and PG Diploma in Information Technology: 120 credits: IS5101

More information

One LAR Course Credits: 3. Page 4

One LAR Course Credits: 3. Page 4 Course Descriptions Year 1 30 credits Course Title: Calculus I Course Code: COS 101 This course introduces higher mathematics by examining the fundamental principles of calculus-- functions, graphs, limits,

More information

Semester Thesis Traffic Monitoring in Sensor Networks

Semester Thesis Traffic Monitoring in Sensor Networks Semester Thesis Traffic Monitoring in Sensor Networks Raphael Schmid Departments of Computer Science and Information Technology and Electrical Engineering, ETH Zurich Summer Term 2006 Supervisors: Nicolas

More information