Effective Animation of Sign Language with Prosodic Elements for Annotation of Digital Educational Content
Nicoletta Adamo-Villani, Kyle Hayward, Jason Lestina, Ronnie Wilbur, Purdue University*

1. Introduction

Computer animation of American Sign Language (ASL) has the potential to remove many educational barriers for deaf students because it provides a low-cost, effective means of adding sign language translation to any type of digital content. Several research groups [1-3] have investigated the benefits of rendering ASL in 3D animations. Although the quality of animated ASL has improved in the past few years and shows strong potential for revolutionizing accessibility to digital media, its effectiveness and widespread use are still precluded by two main limitations: (a) low realism of the signing characters, which results in limited legibility of animated signs and low appeal of virtual signers, and (b) lack of easy-to-use public-domain authoring systems that allow educators to create educational materials annotated with animated ASL. The general goal of our research is to overcome both limitations. Specifically, the objective of the work reported in this paper was to research and develop a software system for annotating math/science digital educational content for grades 1-3 with expressive ASL animation that includes prosodic elements. The system provides educators of the Deaf with an effective means of creating and adding grammatically correct, life-like sign language translation to learning materials such as interactive activities, texts, images, slide presentations, and videos.

2. The ASL Authoring System

The system has been developed iteratively with continuous feedback from teachers and students at the Indiana School for the Deaf (ISD). It includes three components:

3D Model Support Component. This component allows importing 3D models of characters and background 3D scenes.

Animation Support Component.
This component enables the user to (a) import signs from a sign database, (b) create new signs, (c) create facial articulations, (d) smoothly link signs and facial articulations into continuous ASL discourse, and (e) type an ASL script in the script editor and automatically generate the corresponding ASL animation. (a) The system includes an initial database of animated mathematics signs for grades 1-2; more signs can be added to the library. (b) If a needed sign is not available in the database, it can be created by defining the character's hand, limb, and body poses. (c) Facial articulations are created by combining morph targets in a variety of ways and applying them to the character. (d) The animation support module computes realistic transitions between consecutive poses and signs. (e) The ASL system includes a tool that understands ASL script syntax (which is very similar to ASL gloss): the ASL Script Editor. The editor enables a user with knowledge of ASL gloss to type an ASL script including both ASL gloss and mathematical equations; the script is then automatically converted to the correct animations with prosodic elements.

Rendering Support Component. This component implements advanced rendering effects such as ambient occlusion, motion blur, and depth of field to enhance visual comprehension of signs. It exports the final ASL sequences to various movie formats.

* {nadamovi, khayward, jlestina, wilbur}@purdue.edu

3. ASL animation with prosodic elements

Although various attempts at animating ASL for deaf education and entertainment currently exist, they all fail to provide regular, linguistically appropriate grammatical markers made with the hands, face, head, and body, producing animation that is stilted and difficult to process (as an analogy, imagine someone speaking with no intonation). That is, they lack what linguists call prosody. Prosodic markers (e.g.
head nod, hand clasp, body lean, mouth gestures, shoulder raise) and prosodic modifiers (e.g. sign lengthening, jerk, pauses) are used in ASL to convey and clarify the syntactic structure of the signed discourse [4]. Research has identified over 20 complex prosodic markers/modifiers and has measured frequencies of up to 7 prosodic markers/modifiers in a two-second span [5]. Adding such a number and variety of prosodic elements by hand through a graphical user interface (GUI) is prohibitively slow. Our system includes a novel algorithm that automates the process of enhancing ASL animation with prosodic elements. The algorithm interprets the ASL script entered in the Script Editor (described in section 2) and identifies the signs and the prosodic markers/modifiers needed to animate the input sequence. The prosodic elements are added automatically based on ASL prosody rules. Example prosody rules automated by our algorithm are:

- High-level sentence structure. Appropriate prosodic modifiers are added to mark the beginning (e.g. a blink before the hands move) and end (e.g. a longer last sign) of the sentence. Periods and commas are automatically translated to their respective prosodic modifiers (longer and shorter pauses, respectively).
- Sentence type. Interrogative, imperative, and conditional sentences are detected based on punctuation marks (?, !) and key words (e.g. wh-words, "if", "whether"), and appropriate prosodic markers (e.g. raised eyebrows) are added.

The final animation is assembled by retrieving the required signs from the sign database and by translating the identified prosodic elements into corresponding animation markers/modifiers. A multitrack animation timeline is populated with the animated signs and animation markers/modifiers. Most prosody markers are layered on top of the animated signs. Some prosody markers, such as the hand clasp, are inserted between signs. Prosody modifiers are layered on top of the signs they modify.
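The rule-driven pass described above can be sketched in code. The following is an illustrative sketch only, assuming a whitespace-tokenized script; the function, marker, and modifier names are hypothetical and not taken from the authors' implementation.

```python
# Illustrative sketch of the rule-based prosody pass described above.
# All names are hypothetical, not the authors' actual code.

WH_WORDS = {"WHO", "WHAT", "WHEN", "WHERE", "WHY", "HOW"}

def annotate_prosody(script_tokens):
    """Group script tokens into sentences and attach prosodic markers/modifiers."""
    events = []
    sentence = []
    for tok in script_tokens:
        if tok in {".", "?", "!"}:
            if sentence:
                markers = ["blink_before_first_sign"]           # sentence onset
                modifiers = {sentence[-1]: ["lengthen_final"]}  # phrase-final lengthening
                if tok == "?" or sentence[0] in WH_WORDS:
                    markers.append("brow_change_over_clause")   # question marking
                pause = "long_pause" if tok == "." else "short_pause"
                events.append({"signs": sentence, "markers": markers,
                               "modifiers": modifiers, "pause_after": pause})
            sentence = []
        elif tok == ",":
            # Comma: clause boundary -> shorter pause and a head nod.
            if sentence:
                events.append({"signs": sentence, "markers": ["head_nod"],
                               "modifiers": {}, "pause_after": "short_pause"})
            sentence = []
        else:
            sentence.append(tok)
    return events
```

The returned events could then populate the multitrack timeline, with markers layered over (or inserted between) the retrieved sign animations.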
The supplementary video includes examples of ASL animated sequences enhanced with algorithmically generated prosodic elements.

4. Discussion and Conclusion

The system described in this paper is the first and only animation-based sign language program that produces fluid ASL animation enhanced with automatically generated prosodic elements. The problem of advancing Deaf education decisively can only be solved if the process of increasing ASL animation quality is automated. Scalability to all age groups and disciplines can only be achieved if educational content can be easily annotated with life-like, grammatically correct ASL animation by teachers with no computer animation expertise. Our system provides a solution to this problem because it enables users with no technical background to create high-quality ASL animation by simply typing an ASL script.

Copyright is held by the author / owner(s). SIGGRAPH 2010, Los Angeles, California, July 25-29.
Supplementary Document

References (for the 1-page abstract)

1. Adamo-Villani, N. and Wilbur, R. (2008). Two Novel Technologies for Accessible Math and Science Education. IEEE Multimedia, Special Issue on Accessibility, October-December 2008.
2. The DePaul University Sign Language Project. Available at:
3. VCOM3D, Inc. Available at:
4. Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
5. Nicodemus, B. The use of prosodic markers to indicate utterance boundaries in American Sign Language interpretation. Doctoral dissertation, University of New Mexico.

THE ASL SCRIPT EDITOR DETAILS

There is no generally accepted system of written ASL, and it is not possible to translate English into ASL word-by-word. Therefore, to write ASL signs and sentences, linguists and educators use glossing. In ASL gloss, every sign is written in CAPITAL LETTERS, e.g. PRO.1 LIKE APPLE, "I like apples." Gestures that are not signs are written in lower-case letters between quote marks, e.g. "go there". Proper names, technical concepts, and other items with no obvious translation into ASL may be fingerspelled, which is glossed either as fs-magnetic or m-a-g-n-e-t-i-c. Upper-body, head, and facial articulations associated with syntactic constituents are shown above the signs with which they co-occur, with a line indicating the start and end of the articulation:

   wh-q              wh-q
   YOUR NAME?   or   YOUR NAME WHAT?    "What is your name?"

where wh-q indicates a facial articulation with lowered eyebrows (Weast 2008). In our ASL system, to animate ASL sentences, the user types the corresponding ASL script in the script editor. The script is interpreted and automatically converted to the correct series of sign animations and facial expressions that include clearly identifiable ASL prosodic elements.
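The glossing conventions just described are regular enough to be recognized mechanically from the surface form of each token. The classifier below is a hypothetical sketch (not part of the system described here) illustrating that point:

```python
import re

# Hypothetical recognizer for the glossing conventions described above:
# CAPS = sign, quoted lower-case = non-sign gesture,
# fs- prefix or hyphenated letters = fingerspelling.
def classify_gloss_token(tok):
    if tok.startswith('"') and tok.endswith('"'):
        return "gesture"            # e.g. "go there"
    if tok.startswith("fs-") or re.fullmatch(r"[a-z](-[a-z])+", tok):
        return "fingerspelling"     # e.g. fs-magnetic or m-a-g-n-e-t-i-c
    if tok.isupper():
        return "sign"               # e.g. APPLE, PRO.1
    return "unknown"
```

A script editor could use such a pass to decide whether a token triggers a database lookup (sign), a gesture animation, or a letter-by-letter fingerspelling sequence.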
The ASL script is similar, but not exactly the same as, ASL gloss; Table 1 shows differences and similarities. We anticipate that ASL users familiar with ASL gloss will learn ASL script quickly and easily.

English:    See the bowl there? It has 5 apples. Now, you click and drag to remove 3 apples.

ASL gloss:   br             bl        hn   lf
            SEE BOWL IX-a  HAVE 5 APPLE.  NOW, YOU CLICK DRAG REMOVE 3 APPLE
            (br = brow raise; bl = blink; hn = head nod; lf = lean forward)

ASL script: SEE BOWL PT-h2? IX LOC:none? HAVE 5 APPLE. NOW, YOU CLICKDRAG CDREMOVE 3 APPLE.

Table 1. In this example, the ASL script differs from ASL gloss as follows: there are no lines above the sign names; the pointing to the signer's non-dominant hand after signing BOWL is indicated in the script as PT-h2 (point to hand 2); a question mark is used to trigger the brow raise and phrase-final lengthening of the final sign; a period is used to trigger a blink and phrase-final lengthening; the comma after NOW triggers the head nod; the sign YOU triggers the start of a lean forward; and the name of the sign CDREMOVE (computer term: click-and-drag remove) calls a different sign from the lexicon than REMOVE.

NOTES ON ASL PROSODY

What is prosody and why is it important

To understand prosody, we begin with a brief discussion of speech, because it is much better studied and can serve as a reference for ASL prosody studies. Spoken language has a hierarchical structure of sounds (the smallest units), syllables (groups of sounds), words (made from syllables), phrases (groups of words), and discourse utterances (groups of adjacent phrases). Prosodic markers indicate which units are grouped together and serve as cues for parsing the signal and aiding comprehension. As a general notion, prosody includes the relative prominence, rhythm, and timing of articulations in the signal. For speech, the manipulated variables are pitch (fundamental frequency), intensity (amplitude), and duration. For example, a word at the end of a phrase will have a longer duration than the same word in the middle of a phrase (Phrase Final Lengthening). While there is a good deal of correspondence between syntactic breaks and prosodic breaks, syntax by itself does not uniquely predict prosodic structure [1,2]. The Prosodic Hierarchy [3] (from smallest to largest) is: Syllable < Prosodic Word < Prosodic Phrase < Intonational Phrase.
How Prosodic Phrases and Intonational Phrases are constructed from Prosodic Words may depend on: information status in the phrase (old vs. new), stress assignment (often affected by information status), speaking rate, and situational formality/register (articulation distinctness), among other factors. From this brief introduction to prosody, we draw a number of lessons for application to ASL. First, the manipulated variables in the signed signal are displacement, time, and velocity (v = d/t), and derivatives thereof [4]. Second, like speech, ASL has Phrase Final Lengthening [5,6,7,8]. Third, like speech, there is a good deal of correspondence between syntactic breaks and prosodic breaks [9,10]. Fourth, like speech, syntax by itself does not predict all prosodic domains [11,12,13,14,15], with information structure [16,17] and signing rate being strong influences. Fifth, like speech, the Prosodic Hierarchy holds for ASL [18]. To date, Syllables, Prosodic Words, and Intonational Phrases are well understood [11,19,20,21,22,23]; Prosodic Phrases are just now being investigated. Speech is made with several articulators (vocal cords, velum, tongue, teeth, lips), of which only a few are visible (lips, teeth, occasionally tongue), and except for speechreading, what is visible is not relevant. In contrast, the signer's entire body is visible both while signing and while not signing. Thus, the signal must contain ways to indicate to the viewer that linguistic information is being transmitted. Also, signers can use one or both hands while signing. In addition, there are 14 potential articulators besides the hands (the nonmanuals): body (leans); head (turn, nod, shake); eyebrows (up, down, neutral); eyelids (open, droop, closed); eye gaze (up, down, left, right, each of four corners); nose (crinkle); cheeks (puffed); upper lip; lower lip; both lips; lip corners (up, down, stretched); tongue (out, touch teeth, in cheek, flap, wiggle); teeth (touch lip/tongue, clench); and chin (thrust) [23].
The visual signed signal may contain several simultaneous nonmanuals. Finally, the non-signing hand may be used as a phrasal prosodic marker. The ASL linguistic system has evolved so that articulations do not interfere with each other in either production or perception [24]. ASL nonmanuals are layered and can be subdivided into groups: those on the upper face and head [20,21,23] occur with syntactic constituents (clauses, sentences), while those on the lower face carry adverbial/adjectival information and mark Prosodic Words [11,23,8].
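One natural way to honor this layering in an animation system is a multitrack timeline with one track per articulator group, so that nonmanuals can overlap signs and each other freely. The data model below is a minimal sketch under that assumption; the class and track names are ours, not the authors'.

```python
from dataclasses import dataclass, field

# Minimal sketch of a layered multitrack timeline: hands carry signs, the
# upper face/head and body mark syntactic constituents, the lower face
# marks Prosodic Words. Names are illustrative only.

@dataclass
class TimelineEvent:
    start: float   # seconds
    end: float
    name: str      # e.g. "APPLE", "brow_raise", "head_nod"

@dataclass
class ProsodyTimeline:
    tracks: dict = field(default_factory=lambda: {
        "hands": [], "upper_face_head": [], "lower_face": [], "body": []})

    def add(self, track, start, end, name):
        self.tracks[track].append(TimelineEvent(start, end, name))

    def active_at(self, t):
        """All events (on any track) active at time t -- simultaneous layering."""
        return [e.name for evs in self.tracks.values()
                for e in evs if e.start <= t < e.end]
```

Because each articulator group has its own track, a brow raise can span several signs while blinks and head nods are placed independently on other tracks.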
From this summary of ASL prosody, it should now be clear why linguistic prosody is missing from existing animated signing. The state of the art is that animators do not have most of this information available to them. Worse, if they wanted to add it to their animations, they would have to modify the code for each sign individually, by hand, for each nonmanual articulator and each change in articulation movement. Thus, our project represents a major leap forward by combining known prosodic characteristics with an animation algorithm for easily adding predictable prosody to animated signing. Figure 1 shows the difference between a real signer and an avatar without prosody.

Figure 1. The top row shows a Deaf signer signing with prosody, and the bottom row shows an animated signing avatar that represents the state of the art in ASL animation. The frames were extracted from the ASL translation of "See the bowl right there? It has 5 apples. Now you click and drag to remove 3 apples." In neutral position before the start of signing, note that the signer blinks and the avatar does not. For SEE, the signer's mouth indicates that she is mouthing the English word "see", as does the avatar. However, the signer also leans slightly forward, including the viewer in the conversation, and raises her eyebrows as part of a questioning face. The signer signs BOWL and mouths "bowl", whereas the avatar only makes the sign. On the point to BOWL, the signer maintains her left hand in the shape of the bowl (non-dominant hand spread is a marker of signs being in the same phrase), continues to lean forward, keeps her brows raised, gazes at the camera to continue the contact with the viewer, and nods her head to emphasize "the bowl there" and to indicate the end of the syntactic clause. Producing YOU, the signer's head has straightened up, indicating a new clause; she leans slightly forward to include the viewer, gazes at the camera, has her brows in neutral position, and mouths the English word "you".
On REMOVE, she leans slightly back, moves her head in the direction that her hand moves, closes her eyes, and has a facial expression indicating dismissal, a form of negation. Note also that her non-signing hand has been kept at waist height, not dropped to neutral position (non-signing hand spread again). During this entire sequence, the avatar has not used the non-signing hand, has not changed eye gaze, has not shifted body position, has not turned or nodded his head, and has not blinked.

Prosody in animated speech generation

The importance of facial expression and gestures as prosodic elements for enhancing speech communication is well established. The first challenge in producing computer animation of a speaking character enhanced with rich prosody is to understand the various types of prosodic elements. Whereas the semantics of facial expressions accompanying speech are obvious, there is no clear taxonomy for gestures. McNeill identifies four types of gestures: iconics (concrete objects or actions), metaphorics (abstract ideas), deictics (object location), and beats (emphasis) [25], but later suggests that the original taxonomy is too rigid [26]. More recently, [27] suggests new ways of approaching gesture analysis. The second challenge is to determine when and where prosodic elements are needed. One approach is automatic processing of a variety of inputs. Examples include natural language processing of text [28], processing of video [29], and pitch, intensity, and duration analysis of speech to derive body language [30] or head-motion prosody [31]. A second approach is to rely on the user to encode the need for, and location of, prosodic elements. Several prosody annotation schemes have been proposed [32,33,34], but a standard is yet to emerge. The third challenge is the actual rendering of the desired prosodic elements through animation.
The approaches explored include rule-based methods [28], data-driven methods based on Hidden Markov models [31], hybrid rule-based and data-driven methods [35], video morphing methods [36], and physical muscle simulation methods [37].
ASL prosody animation shares some of these challenges: for example, the taxonomy and textual annotation methods for prosodic elements are not yet fully established. ASL animation does not benefit from input in audio form, so any automatic detection of prosodic elements has to rely on video. Our project targets ASL annotation of educational materials; thus, real-time interpretation of ASL is not needed. One advantage specific to ASL is that part of the prosody has linguistic function, and such prosodic elements can be added automatically to text based on simple rules. Finally, whereas for applications such as ASL story-telling, data-driven animation of prosodic elements should be preferred in order to capture and convey the talent of ASL artists, in the context of our project a more suitable choice is robust and low-cost rule-based animation.

ASL prosodic elements we predict with certainty

Prosodic constituency and phrasing. Grosjean & Collins [38] reported that for English, pauses longer than 445 ms occur at sentence boundaries; pauses between 245 and 445 ms occur between conjoined sentences, between noun phrases and verb phrases, and between a complement and the following noun phrase; and pauses shorter than 245 ms occur within phrasal constituents. Grosjean & Lane's [9] findings for ASL were that pauses between sentences had a mean duration of 229 ms, pauses between conjoined sentences 134 ms, pauses between NP and VP 106 ms, pauses within the NP 6 ms, and pauses within the VP 11 ms. These results support the relevance of the Prosodic Hierarchy to ASL. Recent research [8] shows that pause duration and other prosodic markers depend on signing rate. Phrase Final Lengthening (holding or lengthening the duration of signs) is also well documented [5,6,8].
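The measured means above translate directly into a rule table for inserting pauses at detected boundaries. A minimal sketch, assuming a simple linear scaling by signing rate (the actual rate dependence reported in [8] is more nuanced than this):

```python
# Mean ASL pause durations (ms) by boundary type, from Grosjean & Lane [9].
# The linear rate scaling below is a hypothetical simplification,
# not the authors' model.
ASL_PAUSE_MS = {
    "between_sentences": 229,
    "between_conjoined_sentences": 134,
    "between_NP_and_VP": 106,
    "within_NP": 6,
    "within_VP": 11,
}

def pause_duration(boundary, rate=1.0):
    """Pause length in ms for a boundary type at a given signing rate.

    rate > 1.0 means faster signing, which shortens pauses."""
    return ASL_PAUSE_MS[boundary] / rate
```

The point of the table is that pause insertion needs no per-sign hand-tuning: once a boundary is classified, its pause (and its adjustment under a new signing rate) follows from the rule.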
Our animation algorithm ensures that as signing rate is adjusted, prosodic indicators are also properly adjusted. The highest prosodic groupings, Intonational Phrases (IPs), are marked with changes in head and body position, facial expression, and periodic blinks [1,21]. IPs can be determined by two algorithms: Wilbur [21] used Selkirk's [14,15] derivation of IPs from syntactic constituents, whereas Sandler & Lillo-Martin [13] used Nespor and Vogel's [3] Prosodic Hierarchy. Both reflect word groupings, so the choice seems to make no difference. The one remaining prosodic level that has been well studied is the Prosodic Word. Brentari & Crossley [11] reported that changes in lower-face tension (cheeks, mouth) can separate Prosodic Words from each other.

Stress. As indicated, syllables are marked by hand movement. ASL stress is marked by modifying the hand movement. In particular, peak velocity (measured with 3D motion capture) is the primary kinematic marker of stress [22], along with signs being raised in the vertical signing space. Generally, every sentence has one stressed sign (most signs are single syllables), and our own research shows that stress in ASL falls predictably at the end of the clause [16,22]. For those few signs that have more than one syllable, we have worked out the stress system and can predict which syllable gets stress [22,39].

The notion of intonation for sign languages. Intonation in sign languages depends on which nonmanual articulators take which poses in connection with which signs. For example, in addition to, or even instead of, a negative sign in a sentence (NOT, NEVER, NONE, etc.), a negative headshake starts at the beginning of the sentence part that is negated and continues until the end. For us, this means that our algorithm need only find a negative sign in the input, and it can automatically generate the negative headshake starting and ending at the right time.
(We do, however, have to investigate the correct turning rate of the head to get the right natural look.) Another example is the brow lowering that occurs with content questions, those with wh-words (who, what, when, where, why, how). In this case, the brow lowers at the beginning of the clause containing the wh-sign and continues to the end, even if the wh-sign is itself at the end of the clause (in English, the wh-word always goes to the beginning, but this is not true in some languages). A third example is brow raising. This brow position occurs with a wide variety of structures in ASL: topic phrases, yes/no questions, conditional ("if") clauses, and relative clauses, among others. In each case, the brow raises at the beginning and lowers at the end of the clause. Unlike negative headshakes and brow lowering, it is possible to have more than one brow raise in a sentence; for this reason, the input to our computational algorithms will include commas to separate clauses. The comma serves other purposes as well: a sentence can start with a brow raise on a conditional clause ("if it rains tomorrow"), and then lower for a content question ("what would you like to do instead?") or return to neutral for a statement ("I think I'll stay home and read."). To predict the proper patterns, we have to signal the on and off of various poses for various articulators other than the hands. A fourth example is the use of eye blinks. Baker and Padden [40] first brought eye blinks to the attention of sign language researchers as one of four components contributing to the conditional nonmanual marker (the others were contraction of
the distance between the lips and nose, brow raise, and the initiation of head nodding). Stern and Dunham [41] distinguish three main types of blinks: startle reflex blinks, involuntary periodic blinks (for wetting the eye), and voluntary blinks. Both periodic blinks and voluntary blinks serve specific linguistic functions in ASL. Periodic blinks (short, quick) are a primary marker of the end of IPs in ASL [21]. In contrast, voluntary blinks (long, slow) occur with specific signs to show emphasis (not a predictable prosodic function, but a semantic/pragmatic one). One further study warrants attention. Weast [19] measured brow-height differences at the pixel level for ASL statements and questions produced with five different emotions. She observed that eyebrow height showed a clear declination across statements and, to a lesser extent, before sentence-final position in questions, parallel to intonation (pitch patterns) in languages like English. Weast also shows that eyebrow height differentiates questions from statements, and yes/no questions from wh-questions, performing a syntactic function. Furthermore, maximum eyebrow heights differed significantly by emotion (sad and angry lower than neutral, happy, and surprised). The syntactic uses of eyebrow height are constrained by emotional eyebrow height, illustrating the simultaneity of multiple messages on the face and the interaction between information channels [42]. To recap, we can predict where pausing and sign-duration lengthening should occur, along with changes in poses of the brows, head, and body, eye blinks, eye gaze, and cheek and general mouth pose (although we have not given examples for all). Now we can see the brilliance of the ASL prosodic system: syllables are marked by hand movement; Prosodic Words are marked by lower-face behaviors; and Intonational Phrases are marked by upper face (blinks, eye gaze, brows changing pose), head (nods), and body (leans) positions.
Emotions affect the range of movement of the articulators. Everything is visible.

REFERENCES (for Notes on ASL Prosody)

1. Sandler, W. & Lillo-Martin, D. (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
2. Wilbur, R. B. & Patschke, C. (1999). Syntactic correlates of brow raise in ASL. Sign Language & Linguistics 2.
3. Nespor, M. & Vogel, I. (1986). Prosodic Phonology. Dordrecht: Foris.
4. Wilbur, R. B. & Martínez, A. (2002). Physical correlates of prosodic structure in American Sign Language. In M. Andronis, E. Debenport, A. Pycha & K. Yoshimura (eds.), CLS 38.
5. Liddell, S. K. (1978). Nonmanual signals and relative clauses in ASL. In P. Siple (ed.), Understanding Language through Sign Language Research. New York: Academic Press.
6. Liddell, S. K. (1980). American Sign Language Syntax. The Hague: Mouton.
7. Wilbur, R. B. (2008). Success with deaf children: How to prevent educational failure. In D. J. Napoli, D. DeLuca & K. Lindgren (eds.), Signs and Voices. Washington, DC: Gallaudet University Press.
8. Wilbur, R. B. (2009). Effects of varying rate of signing on ASL manual signs and nonmanual markers. Language and Speech 52(2/3).
9. Grosjean, F. & Lane, H. (1977). Pauses and syntax in American Sign Language. Cognition 5.
10. Wilbur, R. B. (1994). Eyeblinks and ASL phrase structure. Sign Language Studies 84.
11. Brentari, D. & Crossley, L. (2002). Prosody on the hands and face: Evidence from American Sign Language. Sign Language & Linguistics 5(2).
12. Sandler, W. (1999). Prosody in Israeli Sign Language. Language and Speech 42(2-3).
13. Sandler, W. & Lillo-Martin, D. (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press.
14. Selkirk, E. (1986). On derived domains in sentence phonology. Phonology 3.
15. Selkirk, E. O. (1995). Sentence prosody: Intonation, stress, and phrasing. In J. Goldsmith (ed.), The Handbook of Phonological Theory. Cambridge, MA: Blackwell.
16. Wilbur, R. B. (1997). A prosodic/pragmatic explanation for word order variation in ASL with typological implications. In K. Lee, E. Sweetser & M. Verspoor (eds.), Lexical and Syntactic Constructions and the Construction of Meaning, Vol. 1. Philadelphia: John Benjamins.
17. Wilbur, R. B. (2006). Discourse and pragmatics in sign language. In The Encyclopedia of Language and Linguistics, 2nd edition, Vol. 11. Oxford: Elsevier.
18. Brentari, D. (1998). A Prosodic Model of Sign Language Phonology. Cambridge, MA: MIT Press.
19. Weast, T. (2008). Questions in American Sign Language: A quantitative analysis of raised and lowered eyebrows. Doctoral dissertation, University of Texas at Arlington.
20. Wilbur, R. B. (1991). Intonation and focus in American Sign Language. In Y. No & M. Libucha (eds.), ESCOL '90: Eastern States Conference on Linguistics. Columbus, OH: Ohio State University Press.
21. Wilbur, R. B. (1994). Eyeblinks and ASL phrase structure. Sign Language Studies 84.
22. Wilbur, R. B. (1999). Stress in ASL: Empirical evidence and linguistic issues. Language & Speech 42.
23. Wilbur, R. B. (2000). Phonological and prosodic layering of nonmanuals in American Sign Language. In H. Lane & K. Emmorey (eds.), The Signs of Language Revisited: Festschrift for Ursula Bellugi and Edward Klima. Hillsdale, NJ: Lawrence Erlbaum.
24. Siple, P. (1978). Visual constraints for sign language communication. Sign Language Studies 19.
25. McNeill, D. (1992). Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press.
26. McNeill, D. (2005). Gesture and Thought. University of Chicago Press.
27. Wilbur, R. B. & Malaia, E. (2008). Contributions of sign language research to gesture understanding: What can multimodal computational systems learn from sign language research. International Journal of Semantic Computing 2(1).
28. Cassell, J., Vilhjalmsson, H. H. & Bickmore, T. (2001). BEAT: The Behavior Expression Animation Toolkit. In Proc. ACM SIGGRAPH.
29. Neff, M., Kipp, M., Albrecht, I. & Seidel, H.-P. (2008). Gesture modeling and animation based on a probabilistic re-creation of speaker style. ACM Transactions on Graphics 27(1).
30. Levine, S., Theobalt, C. & Koltun, V. (2009). Real-time prosody-driven synthesis of body language. ACM Transactions on Graphics (SIGGRAPH Asia).
31. Busso, C., Deng, Z., Neumann, U. & Narayanan, S. (2005). Natural head motion synthesis driven by acoustic prosodic features. Computer Animation and Virtual Worlds 16(3-4).
32. Hartmann, B., Mancini, M. & Pelachaud, C. (2002). Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis. In Proc. Computer Animation. Washington, DC: IEEE Computer Society.
33. Kipp, M., Neff, M. & Albrecht, I. (2007). An annotation scheme for conversational gestures: How to economically capture timing and form. Language Resources and Evaluation 41(3/4).
34. Kopp, S. & Wachsmuth, I. (2004). Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds 15(1).
35. Beskow, J. (2003). Talking Heads: Models and Applications for Multimodal Speech Synthesis. PhD thesis, KTH Stockholm.
36. Ezzat, T., Geiger, G. & Poggio, T. (2002). Trainable videorealistic speech animation. In SIGGRAPH '02: ACM SIGGRAPH 2002 Papers. New York: ACM.
37. Sifakis, E., Selle, A., Robinson-Mosher, A. & Fedkiw, R. (2006). Simulating speech with a physics-based facial muscle model. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
38. Grosjean, F. & Collins, M. (1979). Breathing, pausing, and reading. Phonetica 36.
39. Wilbur, R. B. (in press). Sign syllables. In M. van Oostendorp (ed.), Companion to Phonology. New York/Oxford: Wiley-Blackwell.
40. Baker, C. & Padden, C. (1978). Focusing on the nonmanual components of ASL. In P. Siple (ed.), Understanding Language through Sign Language Research. New York: Academic Press.
41. Stern, J. & Dunham, D. (1990). The ocular system. In J. T. Cacioppo & L. G. Tassinary (eds.), Principles of Psychophysiology: Physical, Social, and Inferential Elements. Cambridge: Cambridge University Press.
42. Ladd, D. R. (1996). Intonational Phonology. Cambridge: Cambridge University Press.
More informationAn Avatar Based Translation System from Arabic Speech to Arabic Sign Language for Deaf People
International Journal of Information Science and Education. ISSN 2231-1262 Volume 2, Number 1 (2012) pp. 13-20 Research India Publications http://www. ripublication.com An Avatar Based Translation System
More informationInteractive Multimedia Courses-1
Interactive Multimedia Courses-1 IMM 110/Introduction to Digital Media An introduction to digital media for interactive multimedia through the study of state-of-the-art methods of creating digital media:
More informationTOOLS for DEVELOPING Communication PLANS
TOOLS for DEVELOPING Communication PLANS Students with disabilities, like all students, must have the opportunity to fully participate in all aspects of their education. Being able to effectively communicate
More informationGesture and ASL L2 Acquisition 1
Sign Languages: spinning and unraveling the past, present and future. TISLR9, forty five papers and three posters from the 9th. Theoretical Issues in Sign Language Research Conference, Florianopolis, Brazil,
More informationUnit 3. Effective Communication in Health and Social Care. Learning aims
Unit 3 Effective Communication in Health and Social Care Learning aims In this unit you will: investigate different forms of communication. investigate barriers to communication in health and social care.
More informationMaking Machines Understand Facial Motion & Expressions Like Humans Do
Making Machines Understand Facial Motion & Expressions Like Humans Do Ana C. Andrés del Valle & Jean-Luc Dugelay Multimedia Communications Dpt. Institut Eurécom 2229 route des Crêtes. BP 193. Sophia Antipolis.
More informationReporting of Interpreting as a Related Service. on the PEIMS 163 Student Data Record
Reporting of Interpreting as a Related Service on the PEIMS 163 Student Data Record 2008-2009 All Local Education Agencies and Charter Schools in Texas must report whether or not a student who is deaf
More informationEmotion Detection from Speech
Emotion Detection from Speech 1. Introduction Although emotion detection from speech is a relatively new field of research, it has many potential applications. In human-computer or human-human interaction
More informationStart ASL The Fun Way to Learn American Sign Language for free!
Start ASL The Fun Way to Learn American Sign Language for free! ASL 1 TEACHER GUIDE Table of Contents Table of Contents... 2 Introduction... 6 Why Start ASL?... 6 Class Materials... 6 Seating... 7 The
More informationProsodic Phrasing: Machine and Human Evaluation
Prosodic Phrasing: Machine and Human Evaluation M. Céu Viana*, Luís C. Oliveira**, Ana I. Mata***, *CLUL, **INESC-ID/IST, ***FLUL/CLUL Rua Alves Redol 9, 1000 Lisboa, Portugal mcv@clul.ul.pt, lco@inesc-id.pt,
More informationVideo-Based Eye Tracking
Video-Based Eye Tracking Our Experience with Advanced Stimuli Design for Eye Tracking Software A. RUFA, a G.L. MARIOTTINI, b D. PRATTICHIZZO, b D. ALESSANDRINI, b A. VICINO, b AND A. FEDERICO a a Department
More informationPTE Academic Preparation Course Outline
PTE Academic Preparation Course Outline August 2011 V2 Pearson Education Ltd 2011. No part of this publication may be reproduced without the prior permission of Pearson Education Ltd. Introduction The
More informationTalking Head: Synthetic Video Facial Animation in MPEG-4.
Talking Head: Synthetic Video Facial Animation in MPEG-4. A. Fedorov, T. Firsova, V. Kuriakin, E. Martinova, K. Rodyushkin and V. Zhislina Intel Russian Research Center, Nizhni Novgorod, Russia Abstract
More informationJulia Hirschberg. AT&T Bell Laboratories. Murray Hill, New Jersey 07974
Julia Hirschberg AT&T Bell Laboratories Murray Hill, New Jersey 07974 Comparing the questions -proposed for this discourse panel with those identified for the TINLAP-2 panel eight years ago, it becomes
More informationPronunciation in English
The Electronic Journal for English as a Second Language Pronunciation in English March 2013 Volume 16, Number 4 Title Level Publisher Type of product Minimum Hardware Requirements Software Requirements
More informationWinPitch LTL II, a Multimodal Pronunciation Software
WinPitch LTL II, a Multimodal Pronunciation Software Philippe MARTIN UFRL Université Paris 7 92, Ave. de France 75013 Paris, France philippe.martin@linguist.jussieu.fr Abstract We introduce a new version
More informationMobile Multimedia Application for Deaf Users
Mobile Multimedia Application for Deaf Users Attila Tihanyi Pázmány Péter Catholic University, Faculty of Information Technology 1083 Budapest, Práter u. 50/a. Hungary E-mail: tihanyia@itk.ppke.hu Abstract
More informationA System for Labeling Self-Repairs in Speech 1
A System for Labeling Self-Repairs in Speech 1 John Bear, John Dowding, Elizabeth Shriberg, Patti Price 1. Introduction This document outlines a system for labeling self-repairs in spontaneous speech.
More informationAutomatic Speech Recognition and Hybrid Machine Translation for High-Quality Closed-Captioning and Subtitling for Video Broadcast
Automatic Speech Recognition and Hybrid Machine Translation for High-Quality Closed-Captioning and Subtitling for Video Broadcast Hassan Sawaf Science Applications International Corporation (SAIC) 7990
More informationL2 EXPERIENCE MODULATES LEARNERS USE OF CUES IN THE PERCEPTION OF L3 TONES
L2 EXPERIENCE MODULATES LEARNERS USE OF CUES IN THE PERCEPTION OF L3 TONES Zhen Qin, Allard Jongman Department of Linguistics, University of Kansas, United States qinzhenquentin2@ku.edu, ajongman@ku.edu
More informationLesson Plan. Performance Objective: Upon completion of this assignment, the student will be able to identify the Twelve Principles of Animation.
Lesson Plan Course Title: Animation Session Title: The Twelve Principles of Animation Lesson Duration: Approximately two 90-minute class periods Day One View and discuss The Twelve Principles of Animation
More informationThirukkural - A Text-to-Speech Synthesis System
Thirukkural - A Text-to-Speech Synthesis System G. L. Jayavardhana Rama, A. G. Ramakrishnan, M Vijay Venkatesh, R. Murali Shankar Department of Electrical Engg, Indian Institute of Science, Bangalore 560012,
More informationLOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com
LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE 1 S.Manikandan, 2 S.Abirami, 2 R.Indumathi, 2 R.Nandhini, 2 T.Nanthini 1 Assistant Professor, VSA group of institution, Salem. 2 BE(ECE), VSA
More informationMATRIX OF STANDARDS AND COMPETENCIES FOR ENGLISH IN GRADES 7 10
PROCESSES CONVENTIONS MATRIX OF STANDARDS AND COMPETENCIES FOR ENGLISH IN GRADES 7 10 Determine how stress, Listen for important Determine intonation, phrasing, points signaled by appropriateness of pacing,
More informationProgram curriculum for graduate studies in Speech and Music Communication
Program curriculum for graduate studies in Speech and Music Communication School of Computer Science and Communication, KTH (Translated version, November 2009) Common guidelines for graduate-level studies
More informationSYNTHETIC SIGNING FOR THE DEAF: esign
SYNTHETIC SIGNING FOR THE DEAF: esign Inge Zwitserlood, Margriet Verlinden, Johan Ros, Sanny van der Schoot Department of Research and Development, Viataal, Theerestraat 42, NL-5272 GD Sint-Michielsgestel,
More informationAnimation Overview of the Industry Arts, AV, Technology, and Communication. Lesson Plan
Animation Overview of the Industry Arts, AV, Technology, and Communication Lesson Plan Performance Objective Upon completion of this assignment, the student will have a better understanding of career and
More informationInformation Technology Cluster
Web and Digital Communications Pathway Information Technology Cluster 3D Animator This major prepares students to utilize animation skills to develop products for the Web, mobile devices, computer games,
More informationEffects of Learning American Sign Language on Co-speech Gesture
Effects of Learning American Sign Language on Co-speech Gesture Shannon Casey Karen Emmorey Laboratory for Language and Cognitive Neuroscience San Diego State University Anecdotally, people report gesturing
More informationA dynamic environment for Greek Sign Language Synthesis using virtual characters
A dynamic environment for Greek Sign Language Synthesis using virtual characters G. Caridakis, K. Karpouzis Image, Video and Multimedia Systems Lab 9, Iroon Polytechniou Str. Athens, Greece +30 210 7724352
More informationCELTA. Syllabus and Assessment Guidelines. Fourth Edition. Certificate in Teaching English to Speakers of Other Languages
CELTA Certificate in Teaching English to Speakers of Other Languages Syllabus and Assessment Guidelines Fourth Edition CELTA (Certificate in Teaching English to Speakers of Other Languages) is regulated
More informationVirtual Patients: Assessment of Synthesized Versus Recorded Speech
Virtual Patients: Assessment of Synthesized Versus Recorded Speech Robert Dickerson 1, Kyle Johnsen 1, Andrew Raij 1, Benjamin Lok 1, Amy Stevens 2, Thomas Bernard 3, D. Scott Lind 3 1 Department of Computer
More informationThis week. CENG 732 Computer Animation. Challenges in Human Modeling. Basic Arm Model
CENG 732 Computer Animation Spring 2006-2007 Week 8 Modeling and Animating Articulated Figures: Modeling the Arm, Walking, Facial Animation This week Modeling the arm Different joint structures Walking
More informationMaster of Arts in Linguistics Syllabus
Master of Arts in Linguistics Syllabus Applicants shall hold a Bachelor s degree with Honours of this University or another qualification of equivalent standard from this University or from another university
More informationIntonation difficulties in non-native languages.
Intonation difficulties in non-native languages. Irma Rusadze Akaki Tsereteli State University, Assistant Professor, Kutaisi, Georgia Sopio Kipiani Akaki Tsereteli State University, Assistant Professor,
More informationProgram of Study. Animation (1288X) Level 1 (390 hrs)
Program of Study Level 1 (390 hrs) ANI1513 Life Drawing for Animation I (60 hrs) Life drawing is a fundamental skill for creating believability in our animated drawings of motion. Through the use of casts
More informationA static representation for ToonTalk programs
A static representation for ToonTalk programs Mikael Kindborg mikki@ida.liu.se www.ida.liu.se/~mikki Department of Computer and Information Science Linköping University Sweden Abstract Animated and static
More informationKNOWLEDGE-BASED IN MEDICAL DECISION SUPPORT SYSTEM BASED ON SUBJECTIVE INTELLIGENCE
JOURNAL OF MEDICAL INFORMATICS & TECHNOLOGIES Vol. 22/2013, ISSN 1642-6037 medical diagnosis, ontology, subjective intelligence, reasoning, fuzzy rules Hamido FUJITA 1 KNOWLEDGE-BASED IN MEDICAL DECISION
More informationOpen-Source, Cross-Platform Java Tools Working Together on a Dialogue System
Open-Source, Cross-Platform Java Tools Working Together on a Dialogue System Oana NICOLAE Faculty of Mathematics and Computer Science, Department of Computer Science, University of Craiova, Romania oananicolae1981@yahoo.com
More informationCamtasia: Importing, cutting, and captioning your Video Express movie Camtasia Studio: Windows
Camtasia: Importing, cutting, and captioning your Video Express movie Camtasia Studio: Windows Activity 1: Adding your Video Express output into Camtasia Studio Step 1: the footage you shot in the Video
More informationSpecialty Answering Service. All rights reserved.
0 Contents 1 Introduction... 2 1.1 Types of Dialog Systems... 2 2 Dialog Systems in Contact Centers... 4 2.1 Automated Call Centers... 4 3 History... 3 4 Designing Interactive Dialogs with Structured Data...
More informationThe main imovie window is divided into six major parts.
The main imovie window is divided into six major parts. 1. Project Drag clips to the project area to create a timeline 2. Preview Window Displays a preview of your video 3. Toolbar Contains a variety of
More informationThe Fundamental Principles of Animation
Tutorial #11 Prepared by Gustavo Carneiro This tutorial was based on the Notes by P. Coleman, on the web-page http://www.comet-cartoons.com/toons/3ddocs/charanim/, and on the paper Principles of Traditional
More informationProsodic focus marking in Bai
Prosodic focus marking in Bai Zenghui Liu 1, Aoju Chen 1,2 & Hans Van de Velde 1 Utrecht University 1, Max Planck Institute for Psycholinguistics 2 l.z.h.liu@uu.nl, aoju.chen@uu.nl, h.vandevelde@uu.nl
More informationTHE BACHELOR S DEGREE IN SPANISH
Academic regulations for THE BACHELOR S DEGREE IN SPANISH THE FACULTY OF HUMANITIES THE UNIVERSITY OF AARHUS 2007 1 Framework conditions Heading Title Prepared by Effective date Prescribed points Text
More informationAn Animation Definition Interface Rapid Design of MPEG-4 Compliant Animated Faces and Bodies
An Animation Definition Interface Rapid Design of MPEG-4 Compliant Animated Faces and Bodies Erich Haratsch, Technical University of Munich, erich@lis.e-tecknik.tu-muenchen.de Jörn Ostermann, AT&T Labs
More informationwww.icommunicatetherapy.com
icommuni cate SPEECH & COMMUNICATION THERAPY Milestones of speech, language and communication development 0-12 Months The rate of children's speech and language development can vary, depending on the child.
More informationPENNSYLVANIA COMMON CORE STANDARDS English Language Arts Grades 9-12
1.2 Reading Informational Text Students read, understand, and respond to informational text with emphasis on comprehension, making connections among ideas and between texts with focus on textual evidence.
More informationAdvanced Diploma of Professional Game Development - Game Art and Animation (10343NAT)
The Academy of Interactive Entertainment 201 Advanced Diploma of Professional Game Development - Game Art and Animation (10343NAT) Subject Listing Online Campus 0 Page Contents 3D Art Pipeline...2 Grasping
More informationAdvanced Diploma of Screen - 3D Animation and VFX (10343NAT)
The Academy of Interactive Entertainment 2013 Advanced Diploma of Screen - 3D Animation and VFX (10343NAT) Subject Listing Online Campus 0 Page Contents 3D Art Pipeline...2 Modelling, Texturing and Game
More informationVoice and Text Preparation Resource Pack Lyn Darnley, RSC Head of Text, Voice and Artist Development Teacher-led exercises created by RSC Education
Voice and Text Preparation Resource Pack Lyn Darnley, RSC Head of Text, Voice and Artist Development Teacher-led exercises created by RSC Education This pack has been created to give you and your students
More informationPrincipal Components of Expressive Speech Animation
Principal Components of Expressive Speech Animation Sumedha Kshirsagar, Tom Molet, Nadia Magnenat-Thalmann MIRALab CUI, University of Geneva 24 rue du General Dufour CH-1211 Geneva, Switzerland {sumedha,molet,thalmann}@miralab.unige.ch
More informationText-To-Speech Technologies for Mobile Telephony Services
Text-To-Speech Technologies for Mobile Telephony Services Paulseph-John Farrugia Department of Computer Science and AI, University of Malta Abstract. Text-To-Speech (TTS) systems aim to transform arbitrary
More informationWorlds Without Words
Worlds Without Words Ivan Bretan ivan@sics.se Jussi Karlgren jussi@sics.se Swedish Institute of Computer Science Box 1263, S 164 28 Kista, Stockholm, Sweden. Keywords: Natural Language Interaction, Virtual
More informationANIMATION a system for animation scene and contents creation, retrieval and display
ANIMATION a system for animation scene and contents creation, retrieval and display Peter L. Stanchev Kettering University ABSTRACT There is an increasing interest in the computer animation. The most of
More informationModern foreign languages
Modern foreign languages Programme of study for key stage 3 and attainment targets (This is an extract from The National Curriculum 2007) Crown copyright 2007 Qualifications and Curriculum Authority 2007
More informationSigning Physical Science Dictionary User s Guide
Signing Physical Science Dictionary User s Guide Welcome to the Mobile Signing Physical Science Dictionary (SPSD)! The Signing Physical Science Dictionary (SPSD) is an interactive 3D sign language dictionary
More informationVery Low Frame-Rate Video Streaming For Face-to-Face Teleconference
Very Low Frame-Rate Video Streaming For Face-to-Face Teleconference Jue Wang, Michael F. Cohen Department of Electrical Engineering, University of Washington Microsoft Research Abstract Providing the best
More informationDevelop Computer Animation
Name: Block: A. Introduction 1. Animation simulation of movement created by rapidly displaying images or frames. Relies on persistence of vision the way our eyes retain images for a split second longer
More informationelearning Guide: Instructional Design
elearning Guide: Instructional Design Produced by NHS Education for Scotland, 2013 Introduction This e-learning Guide provides the standards to be followed when developing content Web-based Training (WBT)
More informationThe English Language Learner CAN DO Booklet
WORLD-CLASS INSTRUCTIONAL DESIGN AND ASSESSMENT The English Language Learner CAN DO Booklet Grades 1-2 Includes: Performance Definitions CAN DO Descriptors For use in conjunction with the WIDA English
More informationIf there are any questions, students are encouraged to email or call the instructor for further clarification.
Course Outline 3D Maya Animation/2015 animcareerpro.com Course Description: 3D Maya Animation incorporates four sections Basics, Body Mechanics, Acting and Advanced Dialogue. Basic to advanced software
More informationCommon Core State Standards Grades 9-10 ELA/History/Social Studies
Common Core State Standards Grades 9-10 ELA/History/Social Studies ELA 9-10 1 Responsibility Requires Action. Responsibility is the active side of morality: doing what I should do, what I said I would
More informationTelecommunication (120 ЕCTS)
Study program Faculty Cycle Software Engineering and Telecommunication (120 ЕCTS) Contemporary Sciences and Technologies Postgraduate ECTS 120 Offered in Tetovo Description of the program This master study
More informationVisual Storytelling, Shot Styles and Composition
Pre-Production 1.2 Visual Storytelling, Shot Styles and Composition Objectives: Students will know/be able to >> Understand the role of shot styles, camera movement, and composition in telling a story
More informationStudy Plan for Master of Arts in Applied Linguistics
Study Plan for Master of Arts in Applied Linguistics Master of Arts in Applied Linguistics is awarded by the Faculty of Graduate Studies at Jordan University of Science and Technology (JUST) upon the fulfillment
More informationKINDGERGARTEN. Listen to a story for a particular reason
KINDGERGARTEN READING FOUNDATIONAL SKILLS Print Concepts Follow words from left to right in a text Follow words from top to bottom in a text Know when to turn the page in a book Show spaces between words
More informationGraphics. Computer Animation 고려대학교 컴퓨터 그래픽스 연구실. kucg.korea.ac.kr 1
Graphics Computer Animation 고려대학교 컴퓨터 그래픽스 연구실 kucg.korea.ac.kr 1 Computer Animation What is Animation? Make objects change over time according to scripted actions What is Simulation? Predict how objects
More informationSubmission guidelines for authors and editors
Submission guidelines for authors and editors For the benefit of production efficiency and the production of texts of the highest quality and consistency, we urge you to follow the enclosed submission
More informationM3039 MPEG 97/ January 1998
INTERNATIONAL ORGANISATION FOR STANDARDISATION ORGANISATION INTERNATIONALE DE NORMALISATION ISO/IEC JTC1/SC29/WG11 CODING OF MOVING PICTURES AND ASSOCIATED AUDIO INFORMATION ISO/IEC JTC1/SC29/WG11 M3039
More informationSWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne
SWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne Published in: Proceedings of Fonetik 2008 Published: 2008-01-01
More informationSimFonIA Animation Tools V1.0. SCA Extension SimFonIA Character Animator
SimFonIA Animation Tools V1.0 SCA Extension SimFonIA Character Animator Bring life to your lectures Move forward with industrial design Combine illustrations with your presentations Convey your ideas to
More informationPULP Scription: A DSL for Mobile HTML5 Game Applications
PULP Scription: A DSL for Mobile HTML5 Game Applications Mathias Funk and Matthias Rauterberg Department of Industrial Design, Eindhoven University of Technology, Den Dolech 2, 5600MB Eindhoven, The Netherlands
More informationThe Instructional Design Maturity Model Approach for Developing Online Courses
The Instructional Design Maturity Model Approach for Developing Online Courses Authored by: Brad Loiselle PMP, President ipal Interactive Learning Inc, Co Authored by: Scott Hunter PMP, CMA, President
More informationMonitoring Modality vs. Typology
Monitoring in Spoken German and German Sign Language: The Interaction of Typology and Modality NISL - Workshop on Nonmanuals in Sign Languages Frankfurt, 4. April 2009 Helen Leuninger & Eva Waleschkowski
More informationCourse Overview. CSCI 480 Computer Graphics Lecture 1. Administrative Issues Modeling Animation Rendering OpenGL Programming [Angel Ch.
CSCI 480 Computer Graphics Lecture 1 Course Overview January 14, 2013 Jernej Barbic University of Southern California http://www-bcf.usc.edu/~jbarbic/cs480-s13/ Administrative Issues Modeling Animation
More informationSocial Signal Processing Understanding Nonverbal Behavior in Human- Human Interactions
Social Signal Processing Understanding Nonverbal Behavior in Human- Human Interactions A.Vinciarelli University of Glasgow and Idiap Research Institute http://www.dcs.gla.ac.uk/~ vincia e- mail: vincia@dcs.gla.ac.uk
More informationAn accent-based approach to performance rendering: Music theory meets music psychology
International Symposium on Performance Science ISBN 978-94-90306-02-1 The Author 2011, Published by the AEC All rights reserved An accent-based approach to performance rendering: Music theory meets music
More informationFacial Expression Analysis and Synthesis
1. Research Team Facial Expression Analysis and Synthesis Project Leader: Other Faculty: Post Doc(s): Graduate Students: Undergraduate Students: Industrial Partner(s): Prof. Ulrich Neumann, IMSC and Computer
More informationIndiana Department of Education
GRADE 1 READING Guiding Principle: Students read a wide range of fiction, nonfiction, classic, and contemporary works, to build an understanding of texts, of themselves, and of the cultures of the United
More informationControl of affective content in music production
International Symposium on Performance Science ISBN 978-90-9022484-8 The Author 2007, Published by the AEC All rights reserved Control of affective content in music production António Pedro Oliveira and
More informationADVERBIAL MORPHEMES IN TACTILE AMERICAN SIGN LANGUAGE PROJECT DEMONSTRATING EXCELLENCE. Submitted to the
ADVERBIAL MORPHEMES IN TACTILE AMERICAN SIGN LANGUAGE A PROJECT DEMONSTRATING EXCELLENCE Submitted to the GRADUATE COLLEGE OF UNION INSTITUTE AND UNIVERSITY by Steven Douglas Collins In partial fulfillment
More informationHANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT
International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika
More informationCreating and Implementing Conversational Agents
Creating and Implementing Conversational Agents Kenneth J. Luterbach East Carolina University Abstract First, this paper discusses the use of the Artificial Intelligence Markup Language (AIML) for the
More informationTurkish Radiology Dictation System
Turkish Radiology Dictation System Ebru Arısoy, Levent M. Arslan Boaziçi University, Electrical and Electronic Engineering Department, 34342, Bebek, stanbul, Turkey arisoyeb@boun.edu.tr, arslanle@boun.edu.tr
More informationMy Family FREE SAMPLE. This unit focuses on sequencing. These extension
Unit 5 This unit focuses on sequencing. These extension Unit Objectives activities give the children practice with sequencing beginning, middle, and end. As the learn to name family members and rooms children
More informationPerspective taking strategies in Turkish Sign Language and Croatian Sign Language
Perspective taking strategies in Turkish Sign Language and Croatian Sign Language Engin Arik Purdue University Marina Milković University of Zagreb 1 Introduction Space is one of the basic domains of human
More informationVoice Driven Animation System
Voice Driven Animation System Zhijin Wang Department of Computer Science University of British Columbia Abstract The goal of this term project is to develop a voice driven animation system that could take
More informationChapter 1. Introduction. 1.1 The Challenge of Computer Generated Postures
Chapter 1 Introduction 1.1 The Challenge of Computer Generated Postures With advances in hardware technology, more powerful computers become available for the majority of users. A few years ago, computer
More informationDegree of highness or lowness of the voice caused by variation in the rate of vibration of the vocal cords.
PITCH Degree of highness or lowness of the voice caused by variation in the rate of vibration of the vocal cords. PITCH RANGE The scale of pitch between its lowest and highest levels. INTONATION The variations
More informationToward computational understanding of sign language
Technology and Disability 20 (2008) 109 119 109 IOS Press Toward computational understanding of sign language Christian Vogler a, and Siome Goldenstein b a Gallaudet Research Institute, Gallaudet University,
More information