US 20040012643 A1
(19) United States
(12) Patent Application Publication (10) Pub. No.: US 2004/0012643 A1
August et al. (43) Pub. Date: Jan. 22, 2004

(54) SYSTEMS AND METHODS FOR VISUALLY COMMUNICATING THE MEANING OF INFORMATION TO THE HEARING IMPAIRED
(76) Inventors: Katherine G. August, Matawan, NJ (US); Daniel D. Lee, Leonia, NJ (US); Michael Potmesil, Aberdeen, NJ (US)
Correspondence Address: John E. Curtin, Esq., Troutman Sanders LLP, Suite 1600, 1660 International Drive, McLean, VA 22102 (US)
(21) Appl. No.: 10/197,470
(22) Filed: Jul. 18, 2002

Publication Classification
(51) Int. Cl. ... G09G 5/00
(52) U.S. Cl. ... 345/865; 345/827

(57) ABSTRACT

The present invention provides systems and methods for visually communicating the meaning of information to the hearing impaired by associating written or spoken language to sign language animations. Such systems comprise associating textual or audio symbols with known sign language symbols. New sign language symbols can also be generated in response to information which does not have a known sign language symbol. The information can be treated as elements which can be weighted according to each element's contribution to the overall meaning of the information sought to be communicated. Such systems can graphically display representations of both known and new sign language symbols to a hearing impaired person.

[Front-page figure: system 100 with network interface 108, processing unit 102, graphical user interface 104, and association/translation unit 106.]

Patent Application Publication Jan. 22, 2004 Sheet 1 of 4 US 2004/0012643 A1
[FIG. 1: block diagram of system 100 showing network 108, processing unit 102, graphical user interface 104, and association/translation unit 106.]

Patent Application Publication Jan. 22, 2004 Sheet 2 of 4 US 2004/0012643 A1
[FIG. 2: block diagram of system 200 showing network 108, data sources 204 and 206, processing unit 102, graphical user interface 104, and association/translation unit 202.]

Patent Application Publication Jan. 22, 2004 Sheet 3 of 4 US 2004/0012643 A1
[FIG. 3: flow diagram 300 — analyze language information for meaning (302); associate language data with sign language data (304) or create new sign language data (306); display associated sign language data (308).]

Patent Application Publication Jan. 22, 2004 Sheet 4 of 4 US 2004/0012643 A1
[FIG. 4: flow diagram of step 306 — determine frequency of occurrence of unassociated data element (402); prompt user for input when unassociated data elements occur more than once (404); receive user input (406); create sign language data according to user input (408).]

US 2004/0012643 A1 Jan. 22, 2004

SYSTEMS AND METHODS FOR VISUALLY COMMUNICATING THE MEANING OF INFORMATION TO THE HEARING IMPAIRED

TECHNICAL FIELD

[0001] This invention relates to systems and methods for visually communicating the meaning of information to the hearing impaired.

BACKGROUND OF THE INVENTION

[0002] Hearing impaired individuals who communicate using sign language, such as American Sign Language (ASL), Signed English (SE), or another conventional language, must often rely on reading subtitles or other representations of spoken language during the performance of plays, while watching television, in theater productions, lectures, and during telephone conversations with hearing people. Conversely, hearing people, in general, are not familiar with sign language.

[0003] When it comes to communicating using a telephone, there exists technology to assist hearing impaired persons make telephone calls. For example, telecommunication devices for the deaf (TDD), text telephone (TT) or teletype (TTY) are just a few that come to mind. Modern TDDs permit the user to type characters into a keyboard. The character strings are then encoded and transmitted over a telephone line to a display of a remote TDD device.

[0004] Systems have been developed to facilitate the exchange of telephone communications between the hearing impaired and hearing users, including a voice-to-TDD system in which an operator, referred to as a call assistant, serves as a human intermediary between a hearing person and a hearing impaired person. The call assistant communicates by voice with the hearing person and also has access to a TDD device or the like for communicating textual translations to a hearing impaired person. After the assistant receives text via the TDD from the hearing impaired person, the assistant can read the text aloud to the hearing person.
Unfortunately, TDD devices and the like are not practical for watching television, attending theater or lectures, and impromptu meetings.

[0005] Therefore, there is a need for improved systems and methods for communicating the meaning of information to the hearing impaired.

SUMMARY OF THE INVENTION

[0006] The present invention provides techniques for visually communicating information to the hearing impaired, one of which comprises an association unit adapted to associate an information element with its known sign language symbol, and generate a new sign language symbol for each element not associated with a known sign language symbol. Additionally, each element not associated with a known sign language symbol may be weighted according to its contribution to the overall meaning of the information to be communicated. One aspect of the invention associates the meaning of a string of information as a whole, rather than associating each element individually. Thus, the present invention can convey meaning without being limited to a one-to-one, element-to-symbol translation.

[0007] Other features of the present invention will become apparent upon reading the following detailed description of the invention, taken in conjunction with the accompanying drawings and appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] FIG. 1 is a simplified block diagram of a system for visually communicating the meaning of information to the hearing impaired according to one embodiment of the present invention.

[0009] FIG. 2 is a simplified block diagram of a system for visually communicating the meaning of information to the hearing impaired according to an alternative embodiment of the present invention.

[0010] FIG. 3 is a simplified flow diagram depicting a method of visually communicating the meaning of information to the hearing impaired according to one embodiment of the present invention.

[0011] FIG.
4 is a simplified flow diagram depicting a method of creating new sign language symbols for information having no known sign language equivalent according to one embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

[0012] Referring now to the drawings, in which like numerals refer to like parts or actions throughout the several views, exemplary embodiments of the present invention are described.

[0013] It should be understood that when the terms "sign language" are used herein, these terms are intended to comprise visual, graphical, video and the like translations of information made according to the conventions of the American Sign Language (ASL), Signed English (SE), or other sign language system (e.g., finger spelling).

[0014] FIG. 1 shows a system 100 adapted to translate information into sign language. System 100 comprises processing unit 102, graphical user interface 104, association/translation unit 106 (collectively referred to as "association unit") and network interface unit 108. System 100 may comprise a computer, handheld device, personal data assistant, wireless device (e.g., cellular or satellite telephone) or other devices. The information translated may originally comprise textual, graphical, voice, audio, or visual information having a meaning intended to be conveyed. The information may comprise coded or encrypted elements and can be broken down into other types of elements, such as alphabetical characters, words, phrases, sentences, paragraphs or symbols. These elements in turn can be combined to convey information.

[0015] Processing unit 102 is adapted to process information using, for example, program code which is embedded in the unit or which has been downloaded locally or remotely. Processing unit 102 is operatively connected to graphical user interface 104.
[0016] Graphical user interface 104 may comprise windows, pull-down menus, scroll bars, iconic images, and the like, and can be adapted to output multimedia (e.g., some combination of sounds, video and/or motion). Graphical user interface 104 may also comprise input and output devices including, but not limited to, microphones, mice,

trackballs, light pens, keyboards, touch screens, display screens, printers, speakers and the like.

[0017] In one embodiment of the present invention, the association unit 106 is adapted to associate information sought to be communicated to the hearing impaired with known sign language symbols or representations (hereafter collectively referred to as "symbols") to convey the meaning of the information. The association unit 106 is further adapted to associate parts of any language including, but not limited to, English, Japanese, French, Spanish, etc., with the equivalent sign language symbols. Individual information elements or groups of elements can be associated with their equivalent sign language symbol. Elements not associated with a known sign language symbol can be animated (e.g., using finger spelling). The system 100 can associate each element with a sign language symbol or associate the meaning of a string of elements with at least one sign language symbol.

[0018] Network interface unit 108 is adapted to connect system 100 to one or more networks, such as the Internet, an intranet, local area network, or wide area network, allowing system 100 to be accessed via standard Web browsers. Network interface unit 108 may comprise a transceiver for receiving and transmitting electromagnetic signals (e.g., radio and microwave).

[0019] Referring now to FIG. 2, system 200 comprises components of system 100 as well as additional components. As shown, system 200 comprises association unit 202 adapted to translate textual information into sign language symbols by matching text to equivalent sign language graphical symbols. The text can be in any form including, but not limited to, electronic files, playscripts, Closed Captioning, TDD, or speech which has been converted to text. Known graphical symbols can be stored within a local database 204 or can be retrieved from a remote database 206.
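The association described in paragraph [0017] — match each element to a known symbol, fall back to finger spelling otherwise — can be sketched as follows. This is an illustrative assumption, not the patent's implementation; the symbol table and the `SIGN_`/`LETTER_` identifiers are invented for the example.

```python
# Hypothetical symbol table; a real system would back this with
# local database 204 or remote database 206.
SYMBOL_DB = {"hello": "SIGN_HELLO", "child": "SIGN_CHILD", "boy": "SIGN_BOY"}

def finger_spell(word):
    """Animate an element with no known symbol letter by letter."""
    return ["LETTER_" + ch.upper() for ch in word]

def associate(elements):
    """Map each element to its known sign symbol, else finger-spell it."""
    result = []
    for el in elements:
        sym = SYMBOL_DB.get(el.lower())
        result.append(sym if sym is not None else finger_spell(el))
    return result

print(associate(["Hello", "cat"]))
# → ['SIGN_HELLO', ['LETTER_C', 'LETTER_A', 'LETTER_T']]
```

A production system would of course return animation data rather than string tags; the lookup-then-fallback structure is the point.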
System 200, via interface 104, can be adapted to display such graphical sign language symbols as animation or video displays. Remote database 206 can be accessed using the network interface unit 108 which, for example, may be part of a cellular telephone or an Internet connection.

[0020] Alternatively, system 200 may be adapted to receive audio information (e.g., speech) and convert the audio information into text or into equivalent sign language symbols.

[0021] Exemplary systems 100 and 200 (collectively "the systems") can associate information with sign language symbols, preferably sign language animations, using the exemplary process 300 described in FIG. 3.

[0022] Systems 100, 200 can receive language information using any conventional communication means. Once the information is received, a system is adapted to analyze the information for its meaning in step 302. Such an analysis can make use of adaptive learning techniques. In particular, exemplary systems 100, 200 are adapted to determine the appropriate meaning of an element (including when an element has multiple meanings) depending on the context and use of the element. For example, the word "lead" can refer to a metal or to an active verb meaning to direct.

[0023] Processing unit 102, or association unit 202, can be adapted to analyze each element of information to determine the element's contribution to the overall meaning of the information. In one embodiment, processing unit 102, or association unit 202, is adapted to associate each element with a weight (i.e., a value or score) according to the element's contribution.

[0024] In another embodiment, the systems of the present invention monitor the frequency, presence, and use of each element.
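The weighting in paragraph [0023] can be sketched minimally: score each element by its contribution to meaning, so that low-value elements can later be skipped. The stop-word list and the numeric weights are illustrative assumptions, not values from the disclosure.

```python
# Articles and similar low-contribution elements get a small weight;
# everything else gets full weight. A real system would learn these scores.
LOW_CONTRIBUTION = {"a", "an", "the"}

def weight(element):
    return 0.1 if element.lower() in LOW_CONTRIBUTION else 1.0

def weighted_elements(text):
    """Pair each element of the input with its contribution weight."""
    return [(word, weight(word)) for word in text.split()]

print(weighted_elements("the boy runs"))
# → [('the', 0.1), ('boy', 1.0), ('runs', 1.0)]
```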
When an element that can have multiple meanings is encountered, systems envisioned by the present invention are adapted to perform a probability analysis which associates a probability with each meaning in order to indicate the likelihood that a specific meaning should be used. Such an analysis can analyze the elements used in context with ambiguous elements and determine the presence or frequency of particular elements. Those frequencies can influence whether a particular meaning should be used for the ambiguous element. For example, if the ambiguous element is "lead" and the system identifies words such as "gold", "silver", or "metal" in a string of characters near "lead", systems envisioned by the present invention are adapted to determine that the definition of lead as a metal should be used. Additionally, systems envisioned by the present invention are adapted to determine whether the ambiguous element is used as a noun, verb, adjective, adverb, etc., and factor the use of the ambiguous element into the probability analysis. Thus, if a system determines that "lead" is used as a noun, it can additionally be adapted to determine that it is more likely than not that the element refers to a metal rather than to the verb meaning to direct.

[0025] Another embodiment of the present invention provides a system adapted to determine the gender of a proper noun by determining the frequency of gender-specific pronouns in a string of characters near or relating to the proper noun. Thus, if pronouns such as "his", "him", or "he" are used near the proper noun, the system is adapted to determine that the proper noun is probably male.

[0026] Systems envisioned by the present invention can also be adapted to translate the overall tone or meaning of a string of elements. For example, a system can be adapted to generate animations comprising sign language wherein the position of the signing conveys a meaning in addition to the actual sign.
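The context analysis of paragraph [0024] amounts to counting sense-specific cue words in a window around the ambiguous element and choosing the sense with the most support. The cue lists, the window size, and the tie-breaking are illustrative assumptions; the patent's probability analysis is not specified at this level of detail.

```python
# Cue words for each sense of the ambiguous element "lead".
SENSE_CUES = {
    "metal": {"gold", "silver", "metal", "ore"},
    "direct": {"follow", "guide", "team", "way"},
}

def disambiguate(tokens, target="lead", window=5):
    """Pick the sense of `target` with the most cue words nearby."""
    if target not in tokens:
        return None
    i = tokens.index(target)
    context = set(tokens[max(0, i - window): i + window + 1])
    scores = {sense: len(cues & context) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get)

print(disambiguate("they mined gold silver and lead".split()))
# → metal
```

Normalizing the counts into probabilities, and weighting by part of speech as paragraph [0024] suggests, would be straightforward extensions of this scoring.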
Positioning of signing can be used to refer to multiple speakers in a conversation. For example, if the hearing impaired person is communicating with two other individuals, signs in the lower left quadrant can be intended for one individual, while signs in the upper right quadrant can be intended for another individual. Thus, symbol positioning can be used to convey meaning. The speed of the signing can also convey meaning, such as urgency or the like.

[0027] Referring back to FIG. 3, in step 304, an association unit 106 is adapted to associate elements which contribute to the meaning of the information with sign language symbols having a corresponding meaning. The sign language symbols and weights can be stored and accessed via database 204 or remote database 206. Elements having a contribution value below a set threshold value will not be associated with a symbol. For example, articles such as "the" or "a" may be assigned a low contribution value, depending how the articles are used, and will not be translated each time the system encounters them.

[0028] In the event an element cannot be associated with a known symbol, systems envisioned by the present invention are adapted to generate a new symbol in step 306 to convey the meaning of the element. Association unit 202 or processing unit 102 can be adapted to generate a new sign language symbol and corresponding animation by parsing the information element into language root elements having known meanings. Language root elements can include Latin, French, German, Greek, and the like.

[0029] For example, if a system encounters the word "puerile" and "puerile" does not have a known sign language symbol, a processing unit or association unit can be adapted to parse the word "puerile" into its Latin root "puer". Latin roots or other language roots can be stored in the system in conjunction with the meaning of the roots or sign language symbol associations linked to the roots. Once a system identifies the root and the root's meaning, it can be adapted to attempt to locate a sign language symbol having a similar or related meaning. In this case, "puer" means boy or child, and the system can be adapted to associate the word "puerile" with a sign language symbol associated with the word "child" or one which means "childlike". Using grammar algorithms or software, the system can be adapted to identify whether the information element is a noun, verb, adjective, adverb or the like and associate a sign language symbol accordingly.

[0030] Alternatively, systems envisioned by the present invention may comprise a directory of sign language symbol associations linked to multiple information elements, including known words or roots that have identical, similar, or related meanings. Each link can be structured in a hierarchy depending on how close the meaning of the symbol approximates the meaning of the associated element, word or root.
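The root-parsing fallback of paragraphs [0028]-[0029] can be sketched as a prefix search against a stored root table: if a word has no symbol of its own, try progressively shorter prefixes until a known root (and the symbol linked to it) is found. The root table, symbol table, and the prefix heuristic are illustrative assumptions; real morphological parsing is more involved.

```python
# Hypothetical stored roots and their linked symbols (paragraph [0029]):
# Latin "puer" means boy/child, so it maps to the symbol for "child".
ROOTS = {"puer": "SIGN_CHILD"}
SYMBOLS = {"child": "SIGN_CHILD"}

def symbol_for(word):
    """Return the known symbol for `word`, else one derived from its root."""
    sym = SYMBOLS.get(word)
    if sym is not None:
        return sym
    # Try progressively shorter prefixes as candidate language roots.
    for n in range(len(word), 2, -1):
        root = word[:n]
        if root in ROOTS:
            return ROOTS[root]
    return None

print(symbol_for("puerile"))
# → SIGN_CHILD
```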
Such systems can be adapted to provide users with a menu of symbol options that can be associated with an unknown word or information element, by first presenting the user with a symbol having the greatest similarity in meaning, followed by symbols with lower associated meaning. Systems envisioned by the present invention can also be adapted to present a group of symbols extrapolated from root elements of a string of information elements, the combination of symbols together representing the meaning of the string of information elements. Association units envisioned by the present invention can be adapted to generate a new sign language symbol for each element not associated with a known sign language symbol, wherein each element not associated with a known sign language symbol is weighted according to its contribution to the overall meaning of the information to be communicated.

[0031] FIG. 4 shows an exemplary flow diagram of step 306 broken down into steps 402-408.

[0032] When an association unit determines an element has no known associated sign language symbol that can be used to convey the meaning of the element, systems envisioned by the present invention are adapted to monitor the frequency at which such elements are received in step 402. Next, in step 404, systems envisioned by the present invention can prompt a user for instructions, such as whether or not to create a new symbol. Such systems can be adapted to receive user input in step 406 and, optionally, may create new sign language symbols in step 408 based on the input. The symbols created in step 408 can be graphical symbols such as pictures or diagrams. Alternatively, the symbols can be animations.
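Steps 402-408 above can be sketched as a frequency monitor that only prompts for a new symbol once an unassociated element recurs. The callback names (`prompt_user`, `create_symbol`) are stand-ins for the interactive input and symbol-creation machinery the patent describes, not actual APIs.

```python
from collections import Counter

unassociated_counts = Counter()

def handle_unassociated(element, prompt_user, create_symbol):
    unassociated_counts[element] += 1            # step 402: track frequency
    if unassociated_counts[element] > 1:         # step 404: prompt on repeat
        answer = prompt_user(element)            # step 406: receive user input
        if answer:
            return create_symbol(element, answer)  # step 408: create symbol
    return None

# Usage: the first occurrence is only counted; the second triggers creation.
created = []
def fake_prompt(el): return "draw-" + el
def fake_create(el, spec): created.append((el, spec)); return spec

handle_unassociated("zorp", fake_prompt, fake_create)
handle_unassociated("zorp", fake_prompt, fake_create)
print(created)
# → [('zorp', 'draw-zorp')]
```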
Systems envisioned by the present invention can be adapted to determine whether information elements input by a user should be grouped together because they are needed in combination to represent the correct meaning (e.g., a phrase), in which case one symbol may suffice, or whether each element or group of elements can stand alone, in which case a new symbol for each element or group of elements must be created.

[0033] It should be noted that individual information elements may be associated with more than one symbol because they may have more than one meaning. Said another way, a single element can be associated with different symbols depending on its meaning. It should also be understood that the meaning of an element can change depending on whether the element is grouped with other specific elements. For example, an element represented by the word "a" or "the" can have a different meaning than normal in limited circumstances. For instance, when an "a" follows "Mr." to represent someone's initials, it has a different meaning than normal and, therefore, will be associated with a new sign language equivalent symbol.

[0034] Some words (e.g., text) that are known to indicate the gender of the speaker of text have no existing sign language equivalent symbol. For example, in a play's script, each character's dialog is identified with a character's name. In one embodiment of the present invention, each character's dialog is processed in a different manner. In more detail, systems envisioned by the present invention can be adapted to receive user input in step 406 and then may proceed to step 408 to create new sign language symbols based on the input. Such symbols may comprise graphics such as pictures or diagrams. Alternatively, the symbols can be animations. Systems are adapted to display a male avatar or animation for text associated with a male character, and a female avatar or animation for text associated with a female character.
Likewise, children's voices can be represented by a display of an animated child of appropriate age, gender, etc.

[0035] In the case where systems envisioned by the present invention are connected to a computer or communications network, a user can provide a representation of himself or herself to be used in the avatar, or as the avatar. In such a case, the user is typically a hearing person who does not know sign language. In one embodiment, the user can speak into a system, and the system can translate the spoken information into an animation of the user signing the information.

[0036] In another embodiment, Closed Captioning can be used as text to drive a picture-in-picture representation of the avatar signing along with the play/program. In yet another embodiment, animations can be synchronized with simultaneously running video, audio, etc. Alternatively, the animations can be run asynchronously with other media, text, live presentations, or conversations.

[0037] Though the present invention has been described using the examples described above, it should be understood that variations and modifications can be made without departing from the spirit or scope of the present invention as defined by the claims which follow:

We claim:

1. A system for visually communicating information to the hearing impaired comprising:

an association unit adapted to:

associate each information element with its known sign language symbol; and

generate a new sign language symbol for each element not associated with a known sign language symbol, wherein each element not associated with a known sign language symbol is weighted according to its contribution to the overall meaning of the information to be communicated.

2. The system as in claim 1, further comprising a display adapted to depict the known and new sign language symbols.

3. The system of claim 1, wherein the known and new sign language symbols comprise animations.

4. The system of claim 1, wherein the information elements comprise textual data.

5. The system of claim 1, wherein the information elements comprise audio data.

6. The system of claim 1, wherein the information elements comprise video data.

7. The system of claim 1, wherein the information elements comprise English language characters.

8. The system of claim 1, wherein one of the elements comprises a word.

9. The system of claim 1, wherein one of the elements comprises a phrase.

10. The system of claim 1, wherein the system comprises a cellular telephone.

11. The system as in claim 1, wherein the association unit is further adapted to generate a new sign language symbol for an element received more than once and which is not associated with a known sign language symbol.

12. A method for visually communicating information to the hearing impaired comprising:

associating each information element with its known sign language symbol; and

generating a new sign language symbol for each element not associated with a known sign language symbol, wherein each element not associated with a known sign language symbol is weighted according to its contribution to the overall meaning of the information to be communicated.

13. The method as in claim 12, further comprising displaying the known and new sign language symbols.

14.
The method of claim 12, wherein the known and new sign language symbols comprise animations.

15. The method of claim 12, wherein the information elements comprise textual data.

16. The method of claim 12, wherein the information elements comprise audio data.

17. The method of claim 12, wherein the information elements comprise video data.

18. The method of claim 12, wherein the information elements comprise English language characters.

19. The method of claim 12, wherein one of the elements comprises a word.

20. The method of claim 12, wherein one of the elements comprises a phrase.

21. The method as in claim 12, further comprising generating a new sign language symbol for an element that is received more than once and is not associated with a known sign language symbol.

* * * * *