TOOLS FOR RESEARCH AND EDUCATION IN SPEECH SCIENCE


Ronald A. Cole
Center for Spoken Language Understanding, Univ. of Colorado, Boulder

ABSTRACT

The Center for Spoken Language Understanding (CSLU) provides free language resources to researchers and educators in all areas of speech and hearing science. These resources are of great potential value to speech scientists for analyzing speech, for diagnosing and treating speech and language problems, for researching and evaluating language technologies, and for training students in the theory and practice of speech science. This article describes language resources from CSLU, and some of the ways in which they can be used.

1. ACCESSIBLE LANGUAGE RESOURCES

In 1991, the Center for Spoken Language Understanding received a grant from the National Science Foundation to develop the OGI Speech Tools, free software for analyzing, displaying and transcribing speech [1]. Since that time, CSLU has been developing and distributing free language resources to interested users in educational and other not-for-profit institutions. The original OGI Speech Tools have been transformed into the CSLU Toolkit, a comprehensive set of tools and technologies supporting research, development and education in speech and language technologies. In addition to software tools, CSLU has developed a large number of speech corpora to support basic research and development of language technologies and systems. These language resources (software tools and corpora) have been distributed to over 2000 sites in 65 countries, and have supported research reported in over 300 publications. In this article I describe these resources and how they might benefit educators, practitioners and researchers in speech and language science.

2. SPEECH CORPORA

In areas of speech and language technology, including speech recognition, speaker recognition, language identification and message understanding, progress has been measured for the past fifteen years in terms of performance on shared speech corpora. In national and international competitions sponsored by DARPA and other agencies in each of these areas, corpora have been used to train recognition systems on designated training data, and then to evaluate and compare systems on new (and unseen) test data using standardized scoring techniques. By increasing task difficulty (e.g., from transcription of stories read from the Wall Street Journal using a 5000-word vocabulary to transcription of unknown news broadcasts with unlimited vocabulary), the field has been able to demonstrate progress in a scientific manner.

While the needs and goals of speech scientists may differ from those of speech technologists, publicly available language resources are just as important to progress in speech science. Speech corpora enable researchers to study the acoustic and linguistic properties of speech, perform various analyses, and propose and test theories. When other researchers are able to share the same data and software, results can be replicated, extended or challenged. Moreover, because just about everyone today has access to powerful yet affordable computers (under $3000) that can store and process large amounts of data, speech from thousands of speakers can now be collected, annotated, analyzed and shared. Speech scientists can now study statistical patterns and trends as well as the behaviors of individual speakers. CSLU provides 17 annotated speech corpora with over 1000 hours of speech from children and adults in 22 languages [2].
These corpora have been designed to support research on acoustic-phonetic, lexical, syntactic and semantic manifestations of speech, and to reveal how these representations change over time within and across speakers in different acoustic environments. The range of acoustic environments includes recording studios, classrooms, laboratories, cars and public places, using high-quality microphones, telephones and cellular phones. Callers have produced a wide range of utterances, from isolated words and phrases to one-minute intervals of extemporaneous speech, and have called a single time or multiple times over a period of years.

The CSLU Web site describes all available corpora, including recording conditions, speaker population and the data collection protocol [2]. Recent corpora include Kids Speech, with utterances from 1100 children in grades K-10; Speaker Recognition, with 12 calls each from 90 speakers over a two-year period; and National Cellular, with 1500 calls on cellular phones from across the U.S.

3. THE CSLU TOOLKIT

Since 1993, the Center for Spoken Language Understanding has focused on developing, acquiring and incorporating spoken-language technology into a portable, comprehensive and easy-to-use software environment. The result of these efforts is the CSLU Toolkit, an affordable, accessible, portable, comprehensive and easy-to-use set of tools and technologies for learning about, researching and developing interactive language systems and their underlying technologies [3, 4, 5].

Development of the CSLU Toolkit is motivated by our belief that the best way to accelerate progress in language technology is to involve as many people as possible by making interactive language systems and technologies accessible, affordable and easy to use. This is precisely the role that the CSLU Toolkit fulfills. We have developed learning materials, core technologies, infrastructure, and software tools that are truly universal in that they benefit a range of users, from novice to expert, and are applicable to a wide range of tasks.

The toolkit provides a modular, open architecture supporting distributed, cross-platform, client/server-based networking. It includes interfaces for standard telephony and audio devices, and software interfaces for speech recognition, natural language understanding, text-to-speech synthesis, speech reading (video) and animation components. This flexible environment makes it possible to integrate new components easily and to develop scalable, portable speech-related applications. In the remainder of this section we describe the main components of the CSLU Toolkit.

Rapid Application Developer (RAD). RAD is the toolkit's graphical authoring environment. This software makes it possible for people with little or no knowledge of speech technology to learn to develop speech interfaces and applications. RAD's drag-and-drop interface is easy to use and easy to learn to use: naïve users can build simple dialogues in minutes for conversing with an animated talking face, while experienced users can develop and deploy sophisticated interactive media systems for a variety of useful tasks in a few hours.

RAD seamlessly integrates the core technologies of facial animation, speech recognition and understanding, and speech synthesis with other useful features such as word spotting, barge-in, dialogue repair, telephone and microphone interfaces, and open-microphone capability. RAD's graphical authoring environment enables users to design interactive dialogues by specifying prompts, recognition vocabularies and actions. Prompts can be either recorded or typed in as text, in which case they are produced as speech using the toolkit's text-to-speech system. Both recorded and synthesized prompts are produced automatically by Baldi, the animated talking face [6]. The words or phrases to be recognized at any dialogue state are simply typed in by the system builder, and RAD automatically creates a pronunciation model for each word. When running an application, the recognizer spots these words and phrases in the speaker's utterance and assigns them confidence scores. These scores are used to reject extraneous speech or poorly recognized words and to engage the user in conversational repair.

Arbitrary actions can be associated with recognized utterances, such as producing a new prompt, displaying an image, or retrieving and displaying information from a Web site. RAD contains many useful objects for retrieving, organizing and presenting information, and users can develop new objects using the Tcl/Tk programming language. By connecting RAD objects, dialogues of arbitrary complexity can be designed. Based on teachers' and students' requests, a large number of objects have been incorporated into RAD to make the design and evaluation of learning and language training activities easier and more powerful. For example, it is easy to incorporate images into applications, highlight parts of images, associate audio files with parts of images (e.g., the sound of a waterfall or monkeys playing), have Baldi ask questions about these objects, and have the user respond by naming or clicking on the object.

Figure 1. RAD application with Baldi.

Facial animation. Baldi is an animated three-dimensional talking head developed in the Perceptual Science Laboratory at UC Santa Cruz.
Baldi's synthesis program controls a wireframe model, with a control strategy for coarticulation, controls for paralinguistic information and affect in the face, text-to-speech synthesis, and synchronization of auditory and visual speech [6]. Most of the current parameters move vertices (and the polygons formed from these vertices) on the face by geometric functions such as rotation (e.g., jaw rotation) or translation of the vertices in one or more dimensions (e.g., lower and upper lip height, mouth widening). Other parameters work by interpolating between two different face subareas; e.g., interpolation is used to determine parameters for cheek, neck, and forehead shape. Some affect parameters, such as smiling, also use interpolation. The basic emotions of surprise, happiness, anger, sadness, disgust, and fear can be communicated through facial expressions. Baldi produces accurate visual speech that can be understood by skilled speech readers. Recently, a more complex and accurate tongue (consistent with electropalatography and

imaging data), a hard palate, and three-dimensional teeth have been added. The face can be made transparent during speech, revealing the movements of the teeth and tongue, and the orientation of the face can be changed while speaking so that it can be viewed from different perspectives. These features offer unique capabilities for language instruction: features that cannot be easily controlled in real faces.

Speech Recognition. The toolkit includes working speech recognition systems, and tools for researching, developing and testing new speech recognition systems. It comes complete with general-purpose speaker- and vocabulary-independent speech recognition engines for both word-spotting and continuous speech recognition applications [7, 8]. In addition, several vocabulary-specific recognizers (e.g., alpha-digits) are included, and Mexican Spanish recognizers are provided for vocabulary-independent word-spotting applications and continuous digit recognition [9]. The toolkit supports research and education for several approaches to computer speech recognition, including artificial neural network (ANN) classifiers, hidden Markov models (HMMs) and segmental systems. The toolkit also includes step-by-step tutorials for training and testing new ANN and HMM recognizers.

Natural Language Understanding. PROFER (Predictive RObust Finite-state parser) is a semantic parser modeled after Carnegie Mellon University's Phoenix system [10]. As a robust semantic parser, PROFER can be used to extract semantic patterns from the output of the toolkit's recognizers while tolerating many of the features of spontaneous speech, such as false starts, filled pauses and ungrammatical constructions. It enables users to design conversational systems combining continuous speech recognition and natural language understanding. A step-by-step tutorial has been developed for PROFER in which students learn to develop a conversational system for retrieving movie times and locations from a Web site.
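PROFER's own finite-state formalism is not reproduced here, but the general idea of robust slot extraction can be illustrated with a hypothetical pattern-spotting parser in the spirit of the movie-times tutorial. All slot names and patterns below are invented for illustration; a robust parser skips over filled pauses, false starts and ungrammatical material rather than failing on them:

```python
import re

# Hypothetical slot grammar: each slot is a pattern spotted anywhere
# in the utterance, so disfluent material between slots is ignored.
SLOTS = {
    "movie":   r"\b(?:see|watch)\s+(\w+)",
    "day":     r"\b(today|tomorrow|monday|tuesday|friday|saturday|sunday)\b",
    "showing": r"\b(matinee|evening|late)\b",
}

def parse(utterance):
    """Fill whatever slots can be found; tolerate everything else."""
    frame = {}
    text = utterance.lower()
    for slot, pattern in SLOTS.items():
        m = re.search(pattern, text)
        if m:
            frame[slot] = m.group(1)
    return frame

# Spontaneous speech with a filled pause ("uh", "um") and a false start:
print(parse("uh i want to i want to see titanic um tomorrow evening"))
```

A partial utterance simply yields a partial frame, which a dialogue system can use to prompt for the missing slots; this graceful degradation is what distinguishes robust parsing from strict grammatical parsing.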
Festival Speech Synthesis System. The toolkit integrates the Festival text-to-speech synthesis system, developed at the University of Edinburgh [11]. Festival provides a complete environment for learning, researching and developing synthetic speech, including modules for normalizing text (e.g., dealing with abbreviations), transforming text into a sequence of phonetic segments with appropriate durations, assigning prosodic contours (e.g., pitch, amplitude) to utterances, and generating speech using either diphone or unit-selection concatenative synthesis. CSLU has developed a waveform-synthesis plug-in component and seven voices, including male and female versions of American English and Mexican Spanish, and a German male voice. In addition, we have developed a graphical user interface that enables users to mark up a text string to control many features of the resulting synthesized speech (e.g., pitch, amplitude) and to insert pauses, filled pauses, coughs, sneezes, etc.

SpeechView. SpeechView is the toolkit's interactive analysis and display tool. It allows users to create new waveform and label files, display data associated with a waveform (such as spectrograms or pitch contours), and modify existing waveform and label files. It is used at CSLU for research and corpus development, and forms the basis for an interactive spectrogram reading class [12]. Several independent waveform windows, each with zero or more spectrogram and label windows, may be displayed simultaneously within SpeechView for comparison and manipulation. Several spectrogram formats with user-defined signal processing and display options are available. Waveform sections corresponding to a phoneme or word label can be played back in isolation from adjacent phonemes or words, and zooming in and out is accomplished by highlighting the segment of interest.
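The spectrograms a tool like SpeechView displays are derived from short-time Fourier analysis of the waveform. The following is a bare-bones sketch of that idea only (a naive DFT over fixed frames; SpeechView's actual internals are not shown, and real tools use FFTs, window functions and log-magnitude display):

```python
import cmath
import math

def spectrogram(samples, frame_len=128, hop=64):
    """Magnitude of a naive short-time DFT: one spectrum per frame,
    keeping only the non-negative frequency bins (0 .. Nyquist)."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        spectrum = []
        for k in range(frame_len // 2 + 1):
            s = sum(x * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n, x in enumerate(frame))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames

# A pure tone at 8 cycles per 128-sample frame should concentrate
# all of its energy in DFT bin 8 of every frame.
tone = [math.sin(2 * math.pi * 8 * n / 128) for n in range(256)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print("peak at bin", peak_bin)  # prints: peak at bin 8
```

Each row of the result corresponds to one time frame and each column to one frequency bin, which is exactly the time-frequency grid a spectrogram display renders as pixel intensities.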
BaldiSync enables users to synchronize any speech waveform with Baldi's facial movements by supplying the waveform and the text of the utterance as a sequence of typed words. This module integrates Baldi, the Festival TTS system, the toolkit's automatic phonetic alignment package, and SpeechView. BaldiSync assigns a phonetic sequence to each word using a pronunciation dictionary or TTS letter-to-sound rules, and uses a Viterbi search to force an alignment of the phonetic segments to the speech waveform. The user can then play the entire utterance, or any desired interval of labeled speech, and watch Baldi produce it.

The Perceptual Science Laboratory (PSL) provides tools to support research in perception and cognition [13]. PSL provides a user-friendly research environment for designing and conducting multimodal experiments in speech perception, psycholinguistics, and memory. It enables users to manipulate auditory and visual stimuli; design interactive protocols for multimedia data presentation and multimodal data capture; transcribe and analyze subjects' responses; perform statistical analyses; and summarize and display results. It supports a variety of experimental designs, including factorial designs and expanded factorial designs (see [6]), enabling researchers to investigate the manner in which perceivers combine information from different knowledge sources (e.g., speech sounds and facial movements). Since PSL tools can be used to teach students to conduct research using the scientific method, they offer students new ways to conceptualize problems and investigate the world.

Speech Performance Assessment and Measurement (SPAM) is a database program designed to capture and analyze all behaviors produced by the user and the system during an interactive dialogue. Since all user and system behaviors are indexed in a database, data can be retrieved and analyzed to explore and test various hypotheses.
In conjunction with PSL, SPAM provides an invaluable tool for designing and evaluating user interfaces for conversational interactions. SPAM benefited greatly from the experiences of its developer, Daniel Solcher, who has been profoundly deaf since birth and who overcame many obstacles to become a successful engineer in a hearing world.

Programming environment. The toolkit comes with complete programming environments for both C and Tcl, which incorporate a collection of software libraries and a set of APIs [14]. These libraries serve as basic building blocks for toolkit programming. They are portable across platforms and provide the speech, language, networking, input, output, and data transport capabilities of the toolkit. Natural language processing modules, developed in Prolog, interface with the toolkit through sockets.

4. OPPORTUNITIES

The language resources described here offer new opportunities for research, education and practice in all areas of speech science. In education, the toolkit can be used to help students learn about speech and speech technologies through hands-on experience with state-of-the-art software. The toolkit comes with several course modules, such as the computer-based spectrogram reading course described in [12]. Students in this course learn about the acoustic-phonetic structure of English by recording, displaying and transcribing utterances, and by playing utterances produced by Baldi with visible articulators. Toolkit courses have also been taught on building spoken dialogue systems [15, 16] and text-to-speech synthesis. Course modules in speech perception and production, recognition, natural language understanding, text-to-speech synthesis and multimodal dialogue systems are all available in the toolkit.

The toolkit also supports learning through interactive media systems that act like intelligent tutors. In the next three articles, my colleagues describe interactive language systems used to teach classroom subjects and speech, hearing and language skills.
I believe that near-term advances in language and interface technologies will result in interactive media systems becoming the most effective means to acquire and create new knowledge.

The CSLU Toolkit provides unlimited opportunities for new research. For example, phoneticians can use the toolkit to design protocols to record speech in different languages, transcribe the speech at different linguistic levels, and then compare the labeling performance of different transcribers automatically [17]. Using RAD, useful systems can be developed and deployed quickly to investigate how people interact with machines using speech, and to evaluate new language technologies in real-world applications. Educational researchers can use the toolkit to investigate learning using interactive language systems.

In clinics and laboratories, the potential of the toolkit has just begun to be explored. Today, most evaluation of speech production is done with paper-and-pencil instruments. A tool such as SpeechView could be of enormous benefit to speech therapists by recording, transcribing and analyzing speech behaviors, and by evaluating the effects of treatments over time.

ACKNOWLEDGMENTS

This work was supported in part by NSF grant ECS , NSF Challenge Grant CDA , a joint grant from the Office of Naval Research and DARPA, and Intel Corporation.

REFERENCES

1. Fanty, M., Pochmara, J., and Cole, R. An interactive environment for speech recognition research. Proceedings of the International Conference on Spoken Language Processing, Banff, Canada, October.
2. Corpus information available at
3. Sutton, S., Novick, D., Cole, R.A., and Fanty, M. Building 10,000 spoken-dialogue systems. Proceedings of the ICSLP, Philadelphia, PA, October.
4. Cole, R., Sutton, S., Yan, Y., Vermeulen, P., and Fanty, M. Accessible technology for interactive systems: A new approach to spoken language research. Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Seattle, WA, May.
5. Sutton, S., Cole, R.A., de Villiers, J., Schalkwyk, J., Vermeulen, P., Macon, M., Yan, Y., Kaiser, E., Rundle, B., Shobaki, K., Hosom, J.P., Kain, A., Wouters, J., Massaro, M., and Cohen, M. Universal Speech Tools: the CSLU Toolkit. Proceedings of the International Conference on Spoken Language Processing, Sydney, Australia, November.
6. Massaro, D. Perceiving Talking Faces: From Speech Perception to a Behavioral Principle. MIT Press, Cambridge.
7. Cosi, P., Hosom, J.P., Schalkwyk, J., Sutton, S., and Cole, R.A. Connected digit recognition experiments with the OGI toolkit's neural network and HMM-based recognizers. Proceedings of the 4th IEEE Workshop on Interactive Voice Technology for Telecommunications Applications, Turin, Italy, September.
8. Hosom, J.P., Cole, R.A., and Cosi, P. Evaluation and integration of neural-network training techniques for continuous digit recognition. Proceedings of the International Conference on Spoken Language Processing, Sydney, Australia, November.
9. Serridge, B., Cole, R.A., Barbosa, A., Munive, N., and Vargas, A. Creating a Mexican Spanish version of the CSLU toolkit. Proceedings of the International Conference on Spoken Language Processing, Sydney, Australia, November.
10. Kaiser, E.C., Johnston, M., and Heeman, P.A. PROFER: Predictive, Robust Finite-State Parsing for Spoken Language. Proceedings of ICASSP, Phoenix, Arizona, March.
11. Black, A., and Taylor, P. Festival Speech Synthesis System: System documentation (1.1.1). Human Communication Research Centre Technical Report HCRC/TR-83, Edinburgh.
12. Carmell, T., Hosom, J.P., and Cole, R. A computer-based course in spectrogram reading. Proceedings of the ESCA/SOCRATES Workshop on Method and Tool Innovations for Speech Science Education, London, UK, April.
13. PSL tools and tutorials:
14. Schalkwyk, J., de Villiers, J., van Vuuren, S., and Vermeulen, P. CSLUsh: an extendible research environment. Proceedings of EUROSPEECH 97, Rhodes, Greece.
15. Colton, D., Cole, R.A., Novick, D., and Sutton, S. A laboratory course for designing and testing spoken dialogue systems. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Atlanta, Georgia, May.
16. Sutton, S., Kaiser, E., Cronk, A., and Cole, R. Bringing Spoken Language Systems to the Classroom. Proceedings of EUROSPEECH, Rhodes, Greece.
17. Cole, R.A., Oshika, B.T., Noel, M., Lander, T., and Fanty, M. Labeler agreement in phonetic labeling of continuous speech. Proceedings of the International Conference on Spoken Language Processing, Yokohama, Japan, September 1994.


More information

GO Solve Word Problems Aligns to Enhancing Education Through Technology Criteria

GO Solve Word Problems Aligns to Enhancing Education Through Technology Criteria GO Solve Word Problems Aligns to Enhancing Education Through Technology Criteria The primary goal of the Enhancing Education through Technology (Ed Tech) program is to improve student academic achievement

More information

D2.4: Two trained semantic decoders for the Appointment Scheduling task

D2.4: Two trained semantic decoders for the Appointment Scheduling task D2.4: Two trained semantic decoders for the Appointment Scheduling task James Henderson, François Mairesse, Lonneke van der Plas, Paola Merlo Distribution: Public CLASSiC Computational Learning in Adaptive

More information

Using Words and Phonetic Strings for Efficient Information Retrieval from Imperfectly Transcribed Spoken Documents

Using Words and Phonetic Strings for Efficient Information Retrieval from Imperfectly Transcribed Spoken Documents Using Words and Phonetic Strings for Efficient Information Retrieval from Imperfectly Transcribed Spoken Documents Michael J. Witbrock and Alexander G. Hauptmann Carnegie Mellon University ABSTRACT Library

More information

Generating Training Data for Medical Dictations

Generating Training Data for Medical Dictations Generating Training Data for Medical Dictations Sergey Pakhomov University of Minnesota, MN pakhomov.sergey@mayo.edu Michael Schonwetter Linguistech Consortium, NJ MSchonwetter@qwest.net Joan Bachenko

More information

Teaching Methodology for 3D Animation

Teaching Methodology for 3D Animation Abstract The field of 3d animation has addressed design processes and work practices in the design disciplines for in recent years. There are good reasons for considering the development of systematic

More information

A System for Labeling Self-Repairs in Speech 1

A System for Labeling Self-Repairs in Speech 1 A System for Labeling Self-Repairs in Speech 1 John Bear, John Dowding, Elizabeth Shriberg, Patti Price 1. Introduction This document outlines a system for labeling self-repairs in spontaneous speech.

More information

Robustness of a Spoken Dialogue Interface for a Personal Assistant

Robustness of a Spoken Dialogue Interface for a Personal Assistant Robustness of a Spoken Dialogue Interface for a Personal Assistant Anna Wong, Anh Nguyen and Wayne Wobcke School of Computer Science and Engineering University of New South Wales Sydney NSW 22, Australia

More information

9RLFH$FWLYDWHG,QIRUPDWLRQ(QWU\7HFKQLFDO$VSHFWV

9RLFH$FWLYDWHG,QIRUPDWLRQ(QWU\7HFKQLFDO$VSHFWV Université de Technologie de Compiègne UTC +(8',$6

More information

Your personalised, online solution to meeting ICAO English language proficiency requirements

Your personalised, online solution to meeting ICAO English language proficiency requirements Your personalised, online solution to meeting ICAO English language proficiency requirements 1 Acknowledged as leaders in their respective fields of Aviation English training and Speech Recognition technology,

More information

Establishing the Uniqueness of the Human Voice for Security Applications

Establishing the Uniqueness of the Human Voice for Security Applications Proceedings of Student/Faculty Research Day, CSIS, Pace University, May 7th, 2004 Establishing the Uniqueness of the Human Voice for Security Applications Naresh P. Trilok, Sung-Hyuk Cha, and Charles C.

More information

Voice Driven Animation System

Voice Driven Animation System Voice Driven Animation System Zhijin Wang Department of Computer Science University of British Columbia Abstract The goal of this term project is to develop a voice driven animation system that could take

More information

Comparing Support Vector Machines, Recurrent Networks and Finite State Transducers for Classifying Spoken Utterances

Comparing Support Vector Machines, Recurrent Networks and Finite State Transducers for Classifying Spoken Utterances Comparing Support Vector Machines, Recurrent Networks and Finite State Transducers for Classifying Spoken Utterances Sheila Garfield and Stefan Wermter University of Sunderland, School of Computing and

More information

Specialty Answering Service. All rights reserved.

Specialty Answering Service. All rights reserved. 0 Contents 1 Introduction... 2 1.1 Types of Dialog Systems... 2 2 Dialog Systems in Contact Centers... 4 2.1 Automated Call Centers... 4 3 History... 3 4 Designing Interactive Dialogs with Structured Data...

More information

Program curriculum for graduate studies in Speech and Music Communication

Program curriculum for graduate studies in Speech and Music Communication Program curriculum for graduate studies in Speech and Music Communication School of Computer Science and Communication, KTH (Translated version, November 2009) Common guidelines for graduate-level studies

More information

Speech Processing Applications in Quaero

Speech Processing Applications in Quaero Speech Processing Applications in Quaero Sebastian Stüker www.kit.edu 04.08 Introduction! Quaero is an innovative, French program addressing multimedia content! Speech technologies are part of the Quaero

More information

Information Leakage in Encrypted Network Traffic

Information Leakage in Encrypted Network Traffic Information Leakage in Encrypted Network Traffic Attacks and Countermeasures Scott Coull RedJack Joint work with: Charles Wright (MIT LL) Lucas Ballard (Google) Fabian Monrose (UNC) Gerald Masson (JHU)

More information

Information Technology Career Field Pathways and Course Structure

Information Technology Career Field Pathways and Course Structure Information Technology Career Field Pathways and Course Structure Courses in Information Support and Services (N0) Computer Hardware 2 145025 Computer Software 145030 Networking 2 145035 Network Operating

More information

MICHIGAN TEST FOR TEACHER CERTIFICATION (MTTC) TEST OBJECTIVES FIELD 062: HEARING IMPAIRED

MICHIGAN TEST FOR TEACHER CERTIFICATION (MTTC) TEST OBJECTIVES FIELD 062: HEARING IMPAIRED MICHIGAN TEST FOR TEACHER CERTIFICATION (MTTC) TEST OBJECTIVES Subarea Human Development and Students with Special Educational Needs Hearing Impairments Assessment Program Development and Intervention

More information

C E D A T 8 5. Innovating services and technologies for speech content management

C E D A T 8 5. Innovating services and technologies for speech content management C E D A T 8 5 Innovating services and technologies for speech content management Company profile 25 years experience in the market of transcription/reporting services; Cedat 85 Group: Cedat 85 srl Subtitle

More information

A text document of what was said and any other relevant sound in the media (often posted with audio recordings).

A text document of what was said and any other relevant sound in the media (often posted with audio recordings). Video Captioning ADA compliant captions: Are one to three (preferably two) lines of text that appear on-screen all at once o Have no more than 32 characters to a line Appear on screen for 2 to 6 seconds

More information

Effects of Pronunciation Practice System Based on Personalized CG Animations of Mouth Movement Model

Effects of Pronunciation Practice System Based on Personalized CG Animations of Mouth Movement Model Effects of Pronunciation Practice System Based on Personalized CG Animations of Mouth Movement Model Kohei Arai 1 Graduate School of Science and Engineering Saga University Saga City, Japan Mariko Oda

More information

Christian Leibold CMU Communicator 12.07.2005. CMU Communicator. Overview. Vorlesung Spracherkennung und Dialogsysteme. LMU Institut für Informatik

Christian Leibold CMU Communicator 12.07.2005. CMU Communicator. Overview. Vorlesung Spracherkennung und Dialogsysteme. LMU Institut für Informatik CMU Communicator Overview Content Gentner/Gentner Emulator Sphinx/Listener Phoenix Helios Dialog Manager Datetime ABE Profile Rosetta Festival Gentner/Gentner Emulator Assistive Listening Systems (ALS)

More information

Spot me if you can: Uncovering spoken phrases in encrypted VoIP conversations

Spot me if you can: Uncovering spoken phrases in encrypted VoIP conversations Spot me if you can: Uncovering spoken phrases in encrypted VoIP conversations C. Wright, L. Ballard, S. Coull, F. Monrose, G. Masson Talk held by Goran Doychev Selected Topics in Information Security and

More information

Digital 3D Animation

Digital 3D Animation Elizabethtown Area School District Digital 3D Animation Course Number: 753 Length of Course: 1 semester 18 weeks Grade Level: 11-12 Elective Total Clock Hours: 120 hours Length of Period: 80 minutes Date

More information

Design and Data Collection for Spoken Polish Dialogs Database

Design and Data Collection for Spoken Polish Dialogs Database Design and Data Collection for Spoken Polish Dialogs Database Krzysztof Marasek, Ryszard Gubrynowicz Department of Multimedia Polish-Japanese Institute of Information Technology Koszykowa st., 86, 02-008

More information

VPAT for Apple MacBook Pro (Late 2013)

VPAT for Apple MacBook Pro (Late 2013) VPAT for Apple MacBook Pro (Late 2013) The following Voluntary Product Accessibility information refers to the Apple MacBook Pro (Late 2013). For more information on the accessibility features of this

More information

Video Affective Content Recognition Based on Genetic Algorithm Combined HMM

Video Affective Content Recognition Based on Genetic Algorithm Combined HMM Video Affective Content Recognition Based on Genetic Algorithm Combined HMM Kai Sun and Junqing Yu Computer College of Science & Technology, Huazhong University of Science & Technology, Wuhan 430074, China

More information

Not Just Another Voice Mail System

Not Just Another Voice Mail System Not Just Another Voice Mail System Lisa J. Stifelman MIT Media Laboratory 20 Ames Street, E15-350 Cambridge MA, 02139 L. Stifelman. Not Just Another Voice Mail System. Proceedings of 1991 Conference. American

More information

Comprehensive Reading Assessment Grades K-1

Comprehensive Reading Assessment Grades K-1 Comprehensive Reading Assessment Grades K-1 User Information Name: Doe, John Date of Birth: Jan 01, 1995 Current Grade in School: 3rd Grade in School at Evaluation: 1st Evaluation Date: May 17, 2006 Background

More information

Module Catalogue for the Bachelor Program in Computational Linguistics at the University of Heidelberg

Module Catalogue for the Bachelor Program in Computational Linguistics at the University of Heidelberg Module Catalogue for the Bachelor Program in Computational Linguistics at the University of Heidelberg March 1, 2007 The catalogue is organized into sections of (1) obligatory modules ( Basismodule ) that

More information

Selecting Research Based Instructional Programs

Selecting Research Based Instructional Programs Selecting Research Based Instructional Programs Marcia L. Grek, Ph.D. Florida Center for Reading Research Georgia March, 2004 1 Goals for Today 1. Learn about the purpose, content, and process, for reviews

More information

Voluntary Product Accessibility Template Blackboard Learn Release 9.1 April 2014 (Published April 30, 2014)

Voluntary Product Accessibility Template Blackboard Learn Release 9.1 April 2014 (Published April 30, 2014) Voluntary Product Accessibility Template Blackboard Learn Release 9.1 April 2014 (Published April 30, 2014) Contents: Introduction Key Improvements VPAT Section 1194.21: Software Applications and Operating

More information

Emotion Detection from Speech

Emotion Detection from Speech Emotion Detection from Speech 1. Introduction Although emotion detection from speech is a relatively new field of research, it has many potential applications. In human-computer or human-human interaction

More information

Learning Styles and Aptitudes

Learning Styles and Aptitudes Learning Styles and Aptitudes Learning style is the ability to learn and to develop in some ways better than others. Each person has a natural way of learning. We all learn from listening, watching something

More information

A Computer Program for Pronunciation Training and Assessment in Japanese Language Classrooms Experimental Use of Pronunciation Check

A Computer Program for Pronunciation Training and Assessment in Japanese Language Classrooms Experimental Use of Pronunciation Check A Computer Program for Pronunciation Training and Assessment in Japanese Language Classrooms Experimental Use of Pronunciation Check 5 CHIHARU TSURUTANI, Griffith University, Australia 10 E-learning is

More information

Reading Competencies

Reading Competencies Reading Competencies The Third Grade Reading Guarantee legislation within Senate Bill 21 requires reading competencies to be adopted by the State Board no later than January 31, 2014. Reading competencies

More information

Principal Components of Expressive Speech Animation

Principal Components of Expressive Speech Animation Principal Components of Expressive Speech Animation Sumedha Kshirsagar, Tom Molet, Nadia Magnenat-Thalmann MIRALab CUI, University of Geneva 24 rue du General Dufour CH-1211 Geneva, Switzerland {sumedha,molet,thalmann}@miralab.unige.ch

More information

SWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne

SWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne SWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne Published in: Proceedings of Fonetik 2008 Published: 2008-01-01

More information

Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human Computer Interaction

Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human Computer Interaction Page 1 of 22 Proceedings of the IEEE: Special Issue on Multimodal Human Computer Interface, Aug. 2003. Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human Computer Interaction Ronald

More information

INTEGRATING THE COMMON CORE STANDARDS INTO INTERACTIVE, ONLINE EARLY LITERACY PROGRAMS

INTEGRATING THE COMMON CORE STANDARDS INTO INTERACTIVE, ONLINE EARLY LITERACY PROGRAMS INTEGRATING THE COMMON CORE STANDARDS INTO INTERACTIVE, ONLINE EARLY LITERACY PROGRAMS By Dr. Kay MacPhee President/Founder Ooka Island, Inc. 1 Integrating the Common Core Standards into Interactive, Online

More information

Online Recruitment - An Intelligent Approach

Online Recruitment - An Intelligent Approach Online Recruitment - An Intelligent Approach Samah Rifai and Ramzi A. Haraty Department of Computer Science and Mathematics Lebanese American University Beirut, Lebanon Email: {samah.rifai, rharaty@lau.edu.lb}

More information

INTRODUCTION TO TRANSANA 2.2 FOR COMPUTER ASSISTED QUALITATIVE DATA ANALYSIS SOFTWARE (CAQDAS)

INTRODUCTION TO TRANSANA 2.2 FOR COMPUTER ASSISTED QUALITATIVE DATA ANALYSIS SOFTWARE (CAQDAS) INTRODUCTION TO TRANSANA 2.2 FOR COMPUTER ASSISTED QUALITATIVE DATA ANALYSIS SOFTWARE (CAQDAS) DR ABDUL RAHIM HJ SALAM LANGUAGE ACADEMY UNIVERSITY TECHNOLOGY MALAYSIA TRANSANA VERSION 2.2 MANAGINGYOUR

More information

How emotional should the icat robot be? A children s evaluation of multimodal emotional expressions of the icat robot.

How emotional should the icat robot be? A children s evaluation of multimodal emotional expressions of the icat robot. Utrecht University Faculty of Humanities TNO Human Factors Master of Science Thesis How emotional should the icat robot be? A children s evaluation of multimodal emotional expressions of the icat robot

More information

2011 Springer-Verlag Berlin Heidelberg

2011 Springer-Verlag Berlin Heidelberg This document is published in: Novais, P. et al. (eds.) (2011). Ambient Intelligence - Software and Applications: 2nd International Symposium on Ambient Intelligence (ISAmI 2011). (Advances in Intelligent

More information

A General Evaluation Framework to Assess Spoken Language Dialogue Systems: Experience with Call Center Agent Systems

A General Evaluation Framework to Assess Spoken Language Dialogue Systems: Experience with Call Center Agent Systems Conférence TALN 2000, Lausanne, 16-18 octobre 2000 A General Evaluation Framework to Assess Spoken Language Dialogue Systems: Experience with Call Center Agent Systems Marcela Charfuelán, Cristina Esteban

More information

Thai Language Self Assessment

Thai Language Self Assessment The following are can do statements in four skills: Listening, Speaking, Reading and Writing. Put a in front of each description that applies to your current Thai proficiency (.i.e. what you can do with

More information

The effect of mismatched recording conditions on human and automatic speaker recognition in forensic applications

The effect of mismatched recording conditions on human and automatic speaker recognition in forensic applications Forensic Science International 146S (2004) S95 S99 www.elsevier.com/locate/forsciint The effect of mismatched recording conditions on human and automatic speaker recognition in forensic applications A.

More information

CLOUD COMPUTING CONCEPTS FOR ACADEMIC COLLABORATION

CLOUD COMPUTING CONCEPTS FOR ACADEMIC COLLABORATION Bulgarian Journal of Science and Education Policy (BJSEP), Volume 7, Number 1, 2013 CLOUD COMPUTING CONCEPTS FOR ACADEMIC COLLABORATION Khayrazad Kari JABBOUR Lebanese University, LEBANON Abstract. The

More information

CHANWOO KIM (BIRTH: APR. 9, 1976) Language Technologies Institute School of Computer Science Aug. 8, 2005 present

CHANWOO KIM (BIRTH: APR. 9, 1976) Language Technologies Institute School of Computer Science Aug. 8, 2005 present CHANWOO KIM (BIRTH: APR. 9, 1976) 2602E NSH Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Phone: +1-412-726-3996 Email: chanwook@cs.cmu.edu RESEARCH INTERESTS Speech recognition system,

More information

Mobile Multimedia Application for Deaf Users

Mobile Multimedia Application for Deaf Users Mobile Multimedia Application for Deaf Users Attila Tihanyi Pázmány Péter Catholic University, Faculty of Information Technology 1083 Budapest, Práter u. 50/a. Hungary E-mail: tihanyia@itk.ppke.hu Abstract

More information

Learning Today Smart Tutor Supports English Language Learners

Learning Today Smart Tutor Supports English Language Learners Learning Today Smart Tutor Supports English Language Learners By Paolo Martin M.A. Ed Literacy Specialist UC Berkley 1 Introduction Across the nation, the numbers of students with limited English proficiency

More information

Cartooning and Animation MS. Middle School

Cartooning and Animation MS. Middle School Cartooning and Animation Middle School Course Title Cartooning and Animation MS Course Abbreviation CART/ANIM MS Course Code Number 200603 Special Notes General Art is a prerequisite, or department permission

More information

Lecture 12: An Overview of Speech Recognition

Lecture 12: An Overview of Speech Recognition Lecture : An Overview of peech Recognition. Introduction We can classify speech recognition tasks and systems along a set of dimensions that produce various tradeoffs in applicability and robustness. Isolated

More information

Analysis of Data Mining Concepts in Higher Education with Needs to Najran University

Analysis of Data Mining Concepts in Higher Education with Needs to Najran University 590 Analysis of Data Mining Concepts in Higher Education with Needs to Najran University Mohamed Hussain Tawarish 1, Farooqui Waseemuddin 2 Department of Computer Science, Najran Community College. Najran

More information

Experiments with Signal-Driven Symbolic Prosody for Statistical Parametric Speech Synthesis

Experiments with Signal-Driven Symbolic Prosody for Statistical Parametric Speech Synthesis Experiments with Signal-Driven Symbolic Prosody for Statistical Parametric Speech Synthesis Fabio Tesser, Giacomo Sommavilla, Giulio Paci, Piero Cosi Institute of Cognitive Sciences and Technologies, National

More information

Frequency, definition Modifiability, existence of multiple operations & strategies

Frequency, definition Modifiability, existence of multiple operations & strategies Human Computer Interaction Intro HCI 1 HCI's Goal Users Improve Productivity computer users Tasks software engineers Users System Cognitive models of people as information processing systems Knowledge

More information

Quick Start Guide: Read & Write 11.0 Gold for PC

Quick Start Guide: Read & Write 11.0 Gold for PC Quick Start Guide: Read & Write 11.0 Gold for PC Overview TextHelp's Read & Write Gold is a literacy support program designed to assist computer users with difficulty reading and/or writing. Read & Write

More information

21st Century Community Learning Center

21st Century Community Learning Center 21st Century Community Learning Center Grant Overview The purpose of the program is to establish 21st CCLC programs that provide students with academic enrichment opportunities along with activities designed

More information

Designing Effective Projects: Thinking Skills Frameworks Learning Styles

Designing Effective Projects: Thinking Skills Frameworks Learning Styles Designing Effective Projects: Thinking Skills Frameworks Learning Styles Differences in Learning Today s teacher knows that the ways in which students learn vary greatly. Individual students have particular

More information

Bachelors of Science Program in Communication Disorders and Sciences:

Bachelors of Science Program in Communication Disorders and Sciences: Bachelors of Science Program in Communication Disorders and Sciences: Mission: The SIUC CDS program is committed to multiple complimentary missions. We provide support for, and align with, the university,

More information

DESIGNING MPEG-4 FACIAL ANIMATION TABLES FOR WEB APPLICATIONS

DESIGNING MPEG-4 FACIAL ANIMATION TABLES FOR WEB APPLICATIONS DESIGNING MPEG-4 FACIAL ANIMATION TABLES FOR WEB APPLICATIONS STEPHANE GACHERY, NADIA MAGNENAT-THALMANN MIRALab - University of Geneva 22 Rue Général Dufour, CH-1211 GENEVA 4, SWITZERLAND Web: http://www.miralab.unige.ch

More information

01219211 Software Development Training Camp 1 (0-3) Prerequisite : 01204214 Program development skill enhancement camp, at least 48 person-hours.

01219211 Software Development Training Camp 1 (0-3) Prerequisite : 01204214 Program development skill enhancement camp, at least 48 person-hours. (International Program) 01219141 Object-Oriented Modeling and Programming 3 (3-0) Object concepts, object-oriented design and analysis, object-oriented analysis relating to developing conceptual models

More information

Towards the Italian CSLU Toolkit

Towards the Italian CSLU Toolkit Towards the Italian CSLU Toolkit Piero Cosi *, John-Paul Hosom and Fabio Tesser *** * Istituto di Fonetica e Dialettologia C.N.R. Via G. Anghinoni, 10-35121 Padova (ITALY), e-mail: cosi@csrf.pd.cnr.it

More information

A secure face tracking system

A secure face tracking system International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 10 (2014), pp. 959-964 International Research Publications House http://www. irphouse.com A secure face tracking

More information

Business Value Reporting and Analytics

Business Value Reporting and Analytics IP Telephony Contact Centers Mobility Services WHITE PAPER Business Value Reporting and Analytics Avaya Operational Analyst April 2005 avaya.com Table of Contents Section 1: Introduction... 1 Section 2:

More information

A Short Introduction to Transcribing with ELAN. Ingrid Rosenfelder Linguistics Lab University of Pennsylvania

A Short Introduction to Transcribing with ELAN. Ingrid Rosenfelder Linguistics Lab University of Pennsylvania A Short Introduction to Transcribing with ELAN Ingrid Rosenfelder Linguistics Lab University of Pennsylvania January 2011 Contents 1 Source 2 2 Opening files for annotation 2 2.1 Starting a new transcription.....................

More information

INCREASE YOUR PRODUCTIVITY WITH CELF 4 SOFTWARE! SAMPLE REPORTS. To order, call 1-800-211-8378, or visit our Web site at www.pearsonassess.

INCREASE YOUR PRODUCTIVITY WITH CELF 4 SOFTWARE! SAMPLE REPORTS. To order, call 1-800-211-8378, or visit our Web site at www.pearsonassess. INCREASE YOUR PRODUCTIVITY WITH CELF 4 SOFTWARE! Report Assistant SAMPLE REPORTS To order, call 1-800-211-8378, or visit our Web site at www.pearsonassess.com In Canada, call 1-800-387-7278 In United Kingdom,

More information