Computer Assisted Language Learning (CALL) Systems
Tutorial on CALL in INTERSPEECH 2012, by T. Kawahara and N. Minematsu

Computer Assisted Language Learning (CALL) Systems
Tatsuya Kawahara (Kyoto University, Japan)
Nobuaki Minematsu (University of Tokyo, Japan)

OUTLINE
- Introduction (TK)
- Segmental Aspect & Speech Recognition Tech. (TK)
- Pronunciation Structure Model (NM)
- Prosodic Aspect (NM)
- Speech Synthesis Tech. for CALL (NM)
- CALL Systems (TK)
- Database for CALL (NM)
(Traditional) LL vs. CALL
- Traditional LL: magnetic audio tape; single medium, sequential access
- Computer Assisted LL: multimedia, random access
  - Easier comparison of the learner's speech and model speech
  - Speech technology can be incorporated, partly replacing the rater's or teacher's job

Speech Technology for LL
- Automated assessment of proficiency: PhonePass/Versant, ETS TOEFL, PSC (Putonghua Shuiping Ceshi)
- Assisting LL
  - With light supervision: CALL classroom
  - Self-learning: need to keep learners motivated (edutainment); need to avoid reinforcing errors
Target Population of CALL
- Non-native speakers
  - Particular L1 (ex.) English LL for Japanese people; still diverse in proficiency level, but L1 knowledge is useful
  - Unlimited L1 (ex.) Japanese LL for people all over the world
- Children (native) [Russel 1996]
- Handicapped (hearing- or articulation-impaired) people [Bernstein 1977]
- Accented (dialect) speakers: Putonghua [Liu 2006], operators at call centers

Target Skill of CALL
- Reading / Writing / Listening / Speaking
- Speaking, including sentence composition: vocabulary, grammar, pragmatics
- Dialog (communication): travel, shopping, business negotiation
- Pronunciation of given sentences
  - Phone: segmental only
  - Word: + accent
  - Sentence: + intonation
  - Paragraph: + prominence
Importance of Pronunciation Training [Bernstein 2003]
- comm. = pron. * lex. * (1 + syn. + rhet. + prag. + soc.)
  - comm. = communication skills
  - pron. = pronunciation
  - lex. = lexical control and vocabulary
  - syn. = syntax
  - rhet. = rhetorical form
  - prag. = pragmatics
  - soc. = sociolinguistics
- Pronunciation skill affects the entire communicative performance
- Native-sounding pronunciation may not be needed, but acceptable (intelligible enough) pronunciation is desired for smooth communication

Articulation and Speech
- Students must learn how to control the articulators (vocal tract)
- But it is not easy to observe the movement of these organs
- Observation is feasible for the acoustic aspect of speech
Visual Presentation of Articulation
- Talking Head showing correct articulation [Massaro 2006]
- Acoustic-to-articulatory inversion to estimate the articulatory movements [Badin 2010]

Segmental and Prosodic Aspects
- Segmental pronunciation (cf. speech recognition)
  - Phonemes (sub-words)
  - Features: spectrum-envelope based
- Prosody (cf. speech synthesis)
  - Tones, lexical accents, intonation and rhythm patterns
  - Features: fundamental frequency, power, and duration
OUTLINE
- Introduction (TK)
- Segmental Aspect & Speech Recognition Tech. (TK)
- Pronunciation Structure Model (NM)
- Prosodic Aspect (NM)
- Speech Synthesis Tech. for CALL (NM)
- CALL Systems (TK)
- Database for CALL (NM)

Speech Technology used in CALL
- Speech analysis: spectrum, pitch, power
  - Feature normalization required for objective comparison with a model speaker
- (Constrained) speech recognition (ASR)
  - Speech segmentation (alignment), error detection, scoring
  - Need to model non-native speech and handle erroneous input
  - Not only the segmental aspect, but also prosodic aspects
- Speech synthesis (Minematsu)
Flowchart of Pronunciation Error Detection and Scoring
- Speech Analysis -> Speech Recognition (with Acoustic Model and Pronunciation Model) -> Segmentation -> Error Detection -> Scoring

Formant and Articulatory Features
- Potentially useful for effective diagnosis and feedback: direct relationship with articulation
- But not easy to estimate reliably and robustly; not used in ASR
Classification of Vowels
- Place of articulation: front / middle / back
- Openness of mouth: narrow -- wide (ex.) bat vs. but

Relationship between Articulation and Formants
- [Figure: articulation chart (place of articulation x openness) mapped onto the formant chart (F2 x F1)]
- This still requires expert knowledge of phonetics!
Classification of Consonants (Japanese)
- [Table: place of articulation (bilabial / alveolar / palatal / glottal) x manner (fricative / affricate / stop / semi-vowel / nasal), voiced vs. unvoiced] (ex.) sea vs. she

MFCC: Mel Frequency Cepstrum Coefficient
- Most widely used spectral feature
  - Mel bandwidth: matches human perception
  - Cepstrum: spectrum envelope; orthogonal and less correlated, appropriate for statistical models
- Computation:
  1. DFT (FFT) -> power spectrum
  2. Mel conversion (mel band filter bank)
  3. Logarithm + cosine transform (IDFT) -> cepstrum
  4. Extract the low-quefrency (12) coefficients
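The four computation steps above can be sketched in plain NumPy. This is a minimal single-frame illustration, not a production front end; the function name `mfcc_frame` and the choices `n_mels=24`, `n_ceps=12` are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

def mfcc_frame(frame, sr=16000, n_mels=24, n_ceps=12):
    """Compute MFCCs for one pre-windowed frame, following the four steps."""
    # 1. DFT -> power spectrum
    spec = np.abs(np.fft.rfft(frame)) ** 2
    # 2. Mel conversion: triangular filter bank spaced evenly on the mel scale
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((len(frame) + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros(n_mels)
    for m in range(n_mels):
        lo, c, hi = bins[m], bins[m + 1], bins[m + 2]
        for k in range(lo, hi):
            if k < len(spec):
                w = (k - lo) / max(c - lo, 1) if k < c else (hi - k) / max(hi - c, 1)
                fbank[m] += w * spec[k]
    # 3. Logarithm + cosine transform (DCT-II) -> cepstrum
    logfb = np.log(fbank + 1e-10)
    n = np.arange(n_mels)
    ceps = np.array([np.sum(logfb * np.cos(np.pi * q * (n + 0.5) / n_mels))
                     for q in range(n_mels)])
    # 4. Keep the low-quefrency (12) coefficients, skipping c0
    return ceps[1:n_ceps + 1]
```

The cosine transform decorrelates the log filter-bank energies, which is why diagonal-covariance statistical models work well on MFCCs.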
Feature Normalization in Speech Analysis
- Feature normalization for objective comparison with a model speaker, and for score calculation via speech recognition
  - Against speakers (native/non-native)
  - Against acoustic channels (database/users)
- Normalization methods for MFCC
  - Cepstrum Mean Normalization (CMN)
  - Cepstrum Variance Normalization (CVN)
  - Histogram Equalization

Speaker Normalization in Speech Analysis
- Vocal Tract Length Normalization (VTLN)
  - Warping the spectral dimension between a lower and an upper frequency (f_L, f_U)
  - Based on acoustic model likelihood
- Pronunciation Structure (by Minematsu): invariant feature (f-divergence)
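CMN and CVN from the list above amount to standardizing each cepstral coefficient over the utterance; a minimal sketch (the function name and the epsilon guard are illustrative):

```python
import numpy as np

def cmn_cvn(feats, do_variance=True):
    """Cepstral Mean (and Variance) Normalization.

    feats: (n_frames, n_coeffs) MFCC matrix for one utterance.
    CMN subtracts the per-utterance mean of each coefficient, which
    cancels stationary channel effects (a fixed channel is additive in
    the cepstral domain); CVN additionally scales each coefficient to
    unit variance.
    """
    out = feats - feats.mean(axis=0)            # CMN
    if do_variance:                             # CVN
        out = out / (feats.std(axis=0) + 1e-10)
    return out
```

Applied to both the database speakers and the learner's input, this makes the likelihoods comparable across recording channels.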
Speech Recognition for CALL
- Tasks: speech segmentation (alignment), error detection, scoring
- Challenges: modeling non-native speech, handling erroneous speech input
- Constraint: the target word or sentence is given

ASR vs. CALL
- X: speech input, W: phone-label (word) sequence (target)
- ASR: for given X, find the W that maximizes p(W|X)
  - Solved by maximizing p(W) * p(X|W)
  - Each phone model p(x|w) is trained
- CALL: W (oracle) and X (not reliable) are given
  - Segmentation: Viterbi forced alignment
  - Error detection: find W' such that p(X|W') > p(X|W)
  - Scoring: evaluate p(X|W)?? How to train the model??
Flowchart of Pronunciation Error Detection and Scoring
- Speech Analysis -> Speech Recognition (with Acoustic Model and Pronunciation Model) -> Segmentation -> Error Detection -> Scoring

Segmentation
- Pre-process for scoring
- Viterbi forced alignment with an HMM representing W
- In fact, there may be pronunciation errors in X
  - Insertions and deletions seriously affect the alignment
  - Error prediction/detection may be necessary
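The Viterbi forced alignment mentioned above can be sketched on a toy left-to-right model, assuming per-frame phone log-likelihoods have already been computed by the acoustic model; the function and its interface are illustrative, not the tutorial's implementation:

```python
import numpy as np

def forced_align(loglik, phone_seq):
    """Viterbi forced alignment of frames to a given phone sequence W.

    loglik: (T, P) array of per-frame log-likelihoods log p(x_t | phone).
    phone_seq: list of phone indices (the target W), traversed strictly
    left-to-right; each phone absorbs at least one frame.
    Returns the phone index assigned to each frame.
    """
    T, _ = loglik.shape
    S = len(phone_seq)
    NEG = -1e30
    delta = np.full((T, S), NEG)           # best log-score ending in state s at t
    back = np.zeros((T, S), dtype=int)     # 0 = stayed in s, 1 = advanced from s-1
    delta[0, 0] = loglik[0, phone_seq[0]]
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1, s]
            adv = delta[t - 1, s - 1] if s > 0 else NEG
            if adv > stay:
                delta[t, s], back[t, s] = adv, 1
            else:
                delta[t, s], back[t, s] = stay, 0
            delta[t, s] += loglik[t, phone_seq[s]]
    # Backtrace from the final state: all phones must be consumed
    path, s = [], S - 1
    for t in range(T - 1, -1, -1):
        path.append(phone_seq[s])
        if t > 0:
            s -= back[t, s]
    return path[::-1]
```

The phone boundaries recovered this way feed both the per-phone scoring and the error-detection stage; as the slide warns, an unmodeled insertion or deletion in X distorts all downstream boundaries.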
Segmentation (example)
- "slow": target /s l ou/ vs. realized /s u l ou/ (vowel insertion)

Flowchart of Pronunciation Error Detection and Scoring
- Speech Analysis -> Speech Recognition (with Acoustic Model and Pronunciation Model) -> Segmentation -> Error Detection -> Scoring
Error Detection
- Find W' such that p(X|W') > p(X|W)
  - Compute scores p(x|w') for alternative phones w' for each segmented region x
  - When we take insertions and deletions into account, we need to generate a network of possible errors
- Error prediction can be done with prior knowledge, such as L1
  - Alternative phones w' can be taken from L1

Error Prediction in Pronunciation Model
- No equivalent syllable in L1 (ex.) sea -> she
- No equivalent phoneme in L1 (ex.) l -> r, v -> b
- Vowel insertions (ex.) b r -> b uh r
- Rules for errors applied to the pronunciation dictionary (ex.) "breath": /b r eh th/ expanded into a network with /uh/ insertion after /b/, /r/ -> /l/, and /th/ -> /s/
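The rule-driven expansion above can be sketched as follows. The rule table and phone symbols are hypothetical stand-ins for the L1-derived rules the slides describe, and the output is shown as a set of alternative pronunciations rather than a compiled lattice:

```python
from itertools import product

# Hypothetical L1-based error rules (Japanese learners of English):
# each target phone maps to itself plus predicted substitution variants.
ERROR_RULES = {
    "l": ["l", "r"],       # l/r confusion
    "r": ["r", "l"],
    "v": ["v", "b"],       # v/b substitution
    "th": ["th", "s"],     # th/s substitution
}

def predict_error_network(baseform, vowel_insert="uh"):
    """Expand a baseform into the set of predicted pronunciations.

    Substitution variants come from ERROR_RULES; a vowel may optionally
    be inserted after a consonant that precedes another consonant
    (cluster breaking, e.g. 'b r' -> 'b uh r').
    """
    vowels = {"a", "e", "i", "o", "u", "uh", "eh", "ou"}
    slots = []
    for i, ph in enumerate(baseform):
        variants = [[v] for v in ERROR_RULES.get(ph, [ph])]
        nxt = baseform[i + 1] if i + 1 < len(baseform) else None
        if ph not in vowels and nxt is not None and nxt not in vowels:
            # optional vowel insertion to break the consonant cluster
            variants += [[v, vowel_insert] for v in ERROR_RULES.get(ph, [ph])]
        slots.append(variants)
    return {" ".join(sum(choice, [])) for choice in product(*slots)}
```

For the slide's example, `predict_error_network(["b", "r", "eh", "th"])` contains the correct form alongside the predicted errors (vowel insertion, r/l and th/s substitutions); a recognizer scoring all paths of this network implements the "find W' with p(X|W') > p(X|W)" test.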
Error Detection based on Classification Approach
- Not necessarily computing p(x|w'), but testing whether w' is more likely than w
- Explicit classifier (verifier) learning
  - Incorporates many features
  - Focuses on error detection, assuming the segmentation is given
  - Linear Discriminant Analysis (LDA), Support Vector Machines (SVM)

Other Issues in Error Detection
- Filter and prioritize the many possible individual phone errors
- Prefer misses to false alarms (error miss >> false alarm), so as not to discourage learners
- Feedback: how to correct errors
Flowchart of Pronunciation Error Detection and Scoring
- Speech Analysis -> Speech Recognition (with Acoustic Model and Pronunciation Model) -> Segmentation -> Error Detection -> Scoring

Scoring: Standpoints
- Native-likeness: how close to "golden" native speakers? p(X|W, λ_G)
  - What is the golden model? British? American? ...
  - Impossible to be free from L1 effects and speaker characteristics
- Intelligibility: how distinguishable (less confusable) from other phones? p(W|X)
  - Some pronunciations may not be recognized as anything
  - Need to consider L1 phones as well (assume L1)
Scoring based on Native-Likeness
- How close to "golden" native speakers?
  - Defined by p(X|W, λ_G), λ_G: golden model
  - Normalized by p(X|W, λ_N), λ_N: non-native model
- In summary, likelihood ratio scoring for phone w:
  p(X|W, λ_G) / p(X|W, λ_N) = mean over phones of p(x|w, λ_G) / p(x|w, λ_N)
                            = mean over time frames of p(x_t|w, λ_G) / p(x_t|w, λ_N)
  (the geometric mean over frames equals the arithmetic mean in the log domain)

Scoring based on Intelligibility
- How distinguishable (less confusable) from other phones? Measured by the posterior:
  p(w|X) = p(X|w) / Σ_w' p(X|w') ≈ Π_t p(x_t|w) / max_w' p(x_t|w')
- Often called GOP (Goodness Of Pronunciation)
  - Similar to a confidence measure in ASR; becomes 1 if the best w' equals w
  - Numerator: forced-alignment (segmentation) Viterbi score
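In log form, the GOP formula above reduces to a frame-averaged difference between the target phone's score and the best competing phone's score over the aligned segment; a minimal sketch over precomputed log-likelihoods (the function name and interface are illustrative):

```python
import numpy as np

def gop_score(loglik, target, seg):
    """Goodness Of Pronunciation for one phone segment.

    loglik: (T, P) per-frame log-likelihoods log p(x_t | phone).
    target: index of the intended phone w.
    seg: (start, end) frame range from forced alignment.
    GOP = (1/T) * [log p(x|w) - log max_w' p(x|w')]; it is 0 when the
    target phone is also the best-matching phone on every frame, and
    negative when some competitor fits better.
    """
    s, e = seg
    frames = loglik[s:e]
    num = frames[:, target].sum()          # log p(x | w): forced-alignment score
    den = frames.max(axis=1).sum()         # log max_w' p(x | w')
    return (num - den) / (e - s)
```

A threshold on this score (per phone, as the next slide suggests) turns it into an error detector.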
Extension of GOP
- Better to limit w' to confusing phones, rather than all possible phones [Wei 2009] (ex.) /v/ vs. /b/
- Better to set different thresholds depending on phones (or phone clusters) for error detection [Ito 2005]
- Any error can be detected if we use a phone-loop model
- Acoustic & pronunciation models need to be adapted to non-native speakers; better to consider L1 phones as well

Scoring to Assessment
- Other factors
  - Duration modeling & evaluation
  - Other prosodic aspects: accent, intonation
  - Speech rate
- Score mapping: linear regression to fit the human rater's evaluation
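The score-mapping step above can be sketched as ordinary least squares from machine-derived features (GOP, speech rate, duration score, ...) to human rater scores; function names are illustrative:

```python
import numpy as np

def fit_score_mapper(machine_feats, human_scores):
    """Fit a linear mapping from machine features to human rater scores.

    machine_feats: (N, d) matrix, one row of features per utterance.
    human_scores: (N,) vector of rater scores.
    Returns the weight vector (last element is the intercept).
    """
    X = np.column_stack([machine_feats, np.ones(len(machine_feats))])
    w, *_ = np.linalg.lstsq(X, human_scores, rcond=None)
    return w

def map_score(w, feats):
    """Map one utterance's machine features to a predicted rater score."""
    return float(np.dot(np.append(feats, 1.0), w))
```

The fitted mapping is what lets a raw likelihood-based score be reported on the same scale human raters use.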
Flowchart of Pronunciation Error Detection and Scoring
- Speech Analysis -> Speech Recognition (with Acoustic Model and Pronunciation Model) -> Segmentation -> Error Detection -> Scoring

Acoustic Modeling: Native vs. Non-native
- Native speech: the gold standard, but does not match the input
- Non-native speech: matched, but error-prone, and no large database is available
- Adaptation from native to non-native
- The phone model of L1 can be used for the same phone (in the IPA inventory)
Context-Independent Modeling
- Context-dependent (e.g. triphone) models are widely used in ASR
- Context-independent (monophone) models work well, even better, in CALL
  - Phonetic context is not reliable in non-native speech (insertion of vowels)
  - Better segmentation accuracy, even for native speech

Speaker Adaptation of the Acoustic Model to Non-native Speech
- Pronunciation of the adaptation data may not be correct
- Compare baseform labels (automatic but error-prone) and hand labels (correct but costly)
- Phone accuracy, measured against hand labels including errors [Tsubota 2004], compared for a native acoustic model with no adaptation, hand-label adaptation, and baseform-label adaptation
- Conclusion: the lexicon baseform label is sufficient for adaptation
Acoustic Model: Native Model vs. Non-native (L2) Model
- Non-native speech database (MEXT project): 13,129 utterances by 178 speakers
  - Pronunciation errors are not annotated (too costly)
  - Dictionary labels vs. automatic labels from ASR: both are error-prone [Tsubota 2004]
- Comparison (baseline and speaker-adapted): native English model vs. non-native model (baseform labels) vs. non-native model (ASR labels)
- The non-native model is more effective, even with dictionary labels; its superiority is reduced with speaker adaptation

Acoustic Model: Native/L2 Model + L1 Model
- Many phones are shared by the target L2 and L1 (ex.) most consonants are shared by Japanese and English
- Incorporate the L1 acoustic model for shared phones in parallel [Tsubota 2002]
- Comparison (baseline and speaker-adapted): English native model + L1 model vs. non-native model + L1 model
- Incorporating the L1 model in parallel is very effective
Flowchart of Pronunciation Error Detection and Scoring
- Speech Analysis -> Speech Recognition (with Acoustic Model and Pronunciation Model) -> Segmentation -> Error Detection -> Scoring

Pronunciation Model
- Standard baseform + possible errors
- The constraint of L1 is effective
  - Linguistic knowledge: /v/ -> /b/, /θ/ -> /s/
  - Substitution with a similar phone of L1; insertion of vowels
- For GOP score computation, a simple phone-loop model (= no pronunciation model) is used
  - Any errors can be detected, but with many false alarms
Error Prediction in Pronunciation Model
- No equivalent syllable in L1 (ex.) sea -> she
- No equivalent phoneme in L1 (ex.) l -> r, v -> b
- Vowel insertions (ex.) b r -> b uh r
- Rules for errors applied to the pronunciation dictionary (ex.) "breath": /b r eh th/ expanded into a network with /uh/ insertion after /b/, /r/ -> /l/, and /th/ -> /s/

Pronunciation Model Training
- Hand-crafted phonological rules
  - Expert knowledge needed
  - Too many rules cause false alarms, degrading recognition performance
  - Trade-off between coverage and perplexity
- Machine learning from annotated data
  - Statistical learning of rewriting rules [Meng 2011]
  - Decision tree to find the critical rule set [Wang 2009]
OUTLINE
- Introduction (TK)
- Segmental Aspect & Speech Recognition Tech. (TK)
- Pronunciation Structure Model (NM)
- Prosodic Aspect (NM)
- Speech Synthesis Tech. for CALL (NM)
- CALL Systems (TK)
- Database for CALL (NM)

English CALL System: Univ. [Tsubota, Imoto, Raux 2002]
- For Japanese college students, so that they can introduce Japanese culture
- Dedicated acoustic model & error prediction scheme for Japanese students
- Deployed and used in classrooms
English CALL System: Univ. (cont.)
- Goal: pinpointing the pronunciation errors which degrade intelligibility, and providing effective feedback
- Practice consists of two phases:
  1. Dialogue-based skit (for natural conversation)
  2. Training on specific errors detected in the first phase (using a phrase or a word)
- Pronunciation error detection
  - Segmental pronunciation: hand-crafted phonological rules
  - Accent (primary & secondary stress): multiple prosodic features
List of Pronunciation Errors
- W/Y deletion (would); SH/CH substitution (choose); R/L substitution (road); ER/A substitution (paper); non-reduction (student); V/B substitution (problem); final vowel insertion (let); CCV-cluster insertion (active); VCC-cluster insertion (study); H/F substitution (fire)
- Built from the ESL literature; error patterns with a low detection rate are removed

Intelligibility Assessment based on Error Statistics
- Estimate the learner's intelligibility level and find the critical errors
Priority of Training on Specific Errors according to Intelligibility Level
- /r/-/l/ confusion: middle to advanced
- Vowel insertion: beginner to middle
- [Figure: error types (WY, SH, ER, RL, VR, VB, FI, CCV, VCC, HF) ordered by level]

NativeAccent [Eskenazi 2007]
- Product of the Fluency Project at CMU
- English learning: error detection and feedback on articulation
- Up to 28 L1s (Japanese, Russian, French, ...); 800 exercises
English CALL at CUHK [Meng 2010]
- For Chinese learners of English
- Corpus: 100 Cantonese and 111 Mandarin L1 speakers; reading a paragraph and words
- Pronunciation error model: hand-crafted phonological rules + data-driven patterns
- GOP score, with pre-filtering based on duration models
- Synthesizing expressive speech to convey emphasis in feedback generation
- Synthesizing visual speech with articulator animation

Shadowing Exercise [Luo 2009]
- Listening to and repeating native utterances, online
- Simultaneous training of listening and speaking skills
- High correlation between GOP and TOEIC scores (corr. = 0.90), higher than simple reading
ETS SpeechRater for TOEFL [Zechner 2007]
- Assessment of unconstrained English speech: TOEFL iBT Practice Online (TPO), iBT field study
- Based on LVCSR methodology
  - Acoustic model: non-native speech (30 hours)
  - Language model: non-native speech + broadcast news
- Features: ASR results (word ID, confidence), speech rate, pause length; 40 in total
- Scoring: linear regression model
  - Correlation with human raters: 0.67 (inter-human correlation: 0.94)

Dialog-Based English Learning [Lee 2010]
- Situated dialog (ex.) shopping
- Based on SDS (Spoken Dialog System) methodology
  - Task-dependent ASR + SLU
  - Example-based dialog management
- Corrective feedback based on example selection
- Field trial at an elementary school
Japanese CALL System: Univ. [Wang 2009]
- Exercise of basic sentence production (text & speech), given an image of a scene
- Key features
  - Dynamic generation of questions & ASR grammar network with error prediction
  - Interactive hints
- H. Wang, C. J. Waple, and T. Kawahara. Computer assisted language learning system based on dynamic question generation and error prediction for automatic speech recognition. Speech Communication, Vol. 51, No. 10, 2009.

Japanese CALL System: How to Try
- Windows only
  0. (Unzip CALLJ1.5.zip)
  1. Move to the directory CALLJ
  2. Click StartCALLJ
  3. Create your account by clicking "New" in the login window the first time
- You need some knowledge of Japanese
References
1. M. Eskenazi. An overview of spoken language technology for education. Speech Communication, Vol. 51, 2009.
2. M. Russell, C. Brown, A. Skilling, R. Series, J. Wallace, B. Bonham, P. Barker. Applications of automatic speech recognition to speech and language development in young children. Proc. ICSLP, 1996.
3. J. Bernstein. Intelligibility and simulated deaf-like segmental and timing errors. Proc. ICASSP.
4. Q. Liu, S. Wei, Y. Hu, W. Guo, R. Wang. The Application of Phone Weight in Putonghua Pronunciation Quality Assessment. Proc. ISCSLP, 2006.
5. J. Bernstein. Objective Measurement of Intelligibility. Proc. ICPhS.
6. L. Neumeyer, H. Franco, V. Digalakis and M. Weintraub. Automatic Scoring of Pronunciation Quality. Speech Communication, Vol. 30, pp. 83-93.
7. S. M. Witt. Phone-level Pronunciation Scoring and Assessment for Interactive Language Learning. Speech Communication, Vol. 30.
8. J. Bernstein, M. Cohen, H. Murveit, D. Rtischev, M. Weintraub. Automatic evaluation and training in English pronunciation. Proc. ICSLP.

References
1. Y. Tsubota, T. Kawahara, and M. Dantsuji. An English pronunciation learning system for Japanese students based on diagnosis of critical pronunciation errors. ReCALL Journal, Vol. 16, No. 1, 2004.
2. Y. Tsubota, T. Kawahara, and M. Dantsuji. Recognition and verification of English by Japanese students for computer-assisted language learning system. Proc. ICSLP, 2002.
3. K. Imoto, Y. Tsubota, A. Raux, T. Kawahara, and M. Dantsuji. Modeling and automatic detection of English sentence stress for computer-assisted English prosody learning system. Proc. ICSLP, 2002.
4. A. Raux and T. Kawahara. Automatic intelligibility assessment and diagnosis of critical pronunciation errors for computer-assisted pronunciation learning. Proc. ICSLP, 2002.
5. H. Wang, C. J. Waple, and T. Kawahara. Computer assisted language learning system based on dynamic question generation and error prediction for automatic speech recognition. Speech Communication, Vol. 51, No. 10, 2009.

References
1. M. Eskenazi, A. Kennedy, C. Ketchum, R. Olszewski, and G. Pelton. The NativeAccent(TM) pronunciation tutor: measuring success in the real world. Proc. SLaTE, 2007.
2. H. Meng, W. K. Lo, A. M. Harrison, P. Lee, K. H. Wong, W. K. Leung and F. Meng. Development of Automatic Speech Recognition and Synthesis Technologies to Support Chinese Learners of English: The CUHK Experience. Proc. APSIPA, 2010.
3. D. Luo, N. Minematsu, Y. Yamauchi, K. Hirose. Analysis and comparison of automatic language proficiency assessment between shadowed sentences and read sentences. Proc. SLaTE, 2009.
4. F. Ehsani, J. Bernstein, A. Najmi, O. Todic. Subarashii: Japanese interactive spoken language education. Proc. Eurospeech.
5. K. Zechner, D. Higgins, X. Xi. SpeechRater(TM): a construct-driven approach to scoring spontaneous non-native speech. Proc. SLaTE, 2007.
6. S. Lee, H. Noh, J. Lee, K. Lee and G. Lee. POSTECH Approaches for Dialog-Based English Conversation Tutoring. Proc. APSIPA, 2010.
Speech: A Challenge to Digital Signal Processing Technology for Human-to-Computer Interaction
: A Challenge to Digital Signal Processing Technology for Human-to-Computer Interaction Urmila Shrawankar Dept. of Information Technology Govt. Polytechnic, Nagpur Institute Sadar, Nagpur 440001 (INDIA)
ACOUSTICAL CONSIDERATIONS FOR EFFECTIVE EMERGENCY ALARM SYSTEMS IN AN INDUSTRIAL SETTING
ACOUSTICAL CONSIDERATIONS FOR EFFECTIVE EMERGENCY ALARM SYSTEMS IN AN INDUSTRIAL SETTING Dennis P. Driscoll, P.E. and David C. Byrne, CCC-A Associates in Acoustics, Inc. Evergreen, Colorado Telephone (303)
PTE Academic. Score Guide. November 2012. Version 4
PTE Academic Score Guide November 2012 Version 4 PTE Academic Score Guide Copyright Pearson Education Ltd 2012. All rights reserved; no part of this publication may be reproduced without the prior written
3-2 Language Learning in the 21st Century
3-2 Language Learning in the 21st Century Advanced Telecommunications Research Institute International With the globalization of human activities, it has become increasingly important to improve one s
Ericsson T18s Voice Dialing Simulator
Ericsson T18s Voice Dialing Simulator Mauricio Aracena Kovacevic, Anna Dehlbom, Jakob Ekeberg, Guillaume Gariazzo, Eric Lästh and Vanessa Troncoso Dept. of Signals Sensors and Systems Royal Institute of
Touchstone Level 2. Common European Framework of Reference for Languages (CEFR)
Touchstone Level 2 Common European Framework of Reference for Languages (CEFR) Contents Introduction to CEFR 2 CEFR level 3 CEFR goals realized in this level of Touchstone 4 How each unit relates to the
Speech Production 2. Paper 9: Foundations of Speech Communication Lent Term: Week 4. Katharine Barden
Speech Production 2 Paper 9: Foundations of Speech Communication Lent Term: Week 4 Katharine Barden Today s lecture Prosodic-segmental interdependencies Models of speech production Articulatory phonology
Specialty Answering Service. All rights reserved.
0 Contents 1 Introduction... 2 1.1 Types of Dialog Systems... 2 2 Dialog Systems in Contact Centers... 4 2.1 Automated Call Centers... 4 3 History... 3 4 Designing Interactive Dialogs with Structured Data...
PTE Academic Preparation Course Outline
PTE Academic Preparation Course Outline August 2011 V2 Pearson Education Ltd 2011. No part of this publication may be reproduced without the prior permission of Pearson Education Ltd. Introduction The
Introduction to Pattern Recognition
Introduction to Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University [email protected] CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)
L9: Cepstral analysis
L9: Cepstral analysis The cepstrum Homomorphic filtering The cepstrum and voicing/pitch detection Linear prediction cepstral coefficients Mel frequency cepstral coefficients This lecture is based on [Taylor,
The primary goals of the M.A. TESOL Program are to impart in our students:
Quality of Academic Program Goals The primary goals of the M.A. TESOL Program are to impart in our students: (1) knowledge of language, i.e., knowledge of the major elements of language as a system consisting
Experiments with Signal-Driven Symbolic Prosody for Statistical Parametric Speech Synthesis
Experiments with Signal-Driven Symbolic Prosody for Statistical Parametric Speech Synthesis Fabio Tesser, Giacomo Sommavilla, Giulio Paci, Piero Cosi Institute of Cognitive Sciences and Technologies, National
The sound patterns of language
The sound patterns of language Phonology Chapter 5 Alaa Mohammadi- Fall 2009 1 This lecture There are systematic differences between: What speakers memorize about the sounds of words. The speech sounds
Guide to Pearson Test of English General
Guide to Pearson Test of English General Level 3 (Upper Intermediate) November 2011 Version 5 Pearson Education Ltd 2011. No part of this publication may be reproduced without the prior permission of Pearson
Study Plan for Master of Arts in Applied Linguistics
Study Plan for Master of Arts in Applied Linguistics Master of Arts in Applied Linguistics is awarded by the Faculty of Graduate Studies at Jordan University of Science and Technology (JUST) upon the fulfillment
Colaboradores: Contreras Terreros Diana Ivette Alumna LELI N de cuenta: 191351. Ramírez Gómez Roberto Egresado Programa Recuperación de pasantía.
Nombre del autor: Maestra Bertha Guadalupe Paredes Zepeda. [email protected] Colaboradores: Contreras Terreros Diana Ivette Alumna LELI N de cuenta: 191351. Ramírez Gómez Roberto Egresado Programa
SPEAKER IDENTIFICATION FROM YOUTUBE OBTAINED DATA
SPEAKER IDENTIFICATION FROM YOUTUBE OBTAINED DATA Nitesh Kumar Chaudhary 1 and Shraddha Srivastav 2 1 Department of Electronics & Communication Engineering, LNMIIT, Jaipur, India 2 Bharti School Of Telecommunication,
21st Century Community Learning Center
21st Century Community Learning Center Grant Overview The purpose of the program is to establish 21st CCLC programs that provide students with academic enrichment opportunities along with activities designed
Hardware Implementation of Probabilistic State Machine for Word Recognition
IJECT Vo l. 4, Is s u e Sp l - 5, Ju l y - Se p t 2013 ISSN : 2230-7109 (Online) ISSN : 2230-9543 (Print) Hardware Implementation of Probabilistic State Machine for Word Recognition 1 Soorya Asokan, 2
Separation and Classification of Harmonic Sounds for Singing Voice Detection
Separation and Classification of Harmonic Sounds for Singing Voice Detection Martín Rocamora and Alvaro Pardo Institute of Electrical Engineering - School of Engineering Universidad de la República, Uruguay
Program curriculum for graduate studies in Speech and Music Communication
Program curriculum for graduate studies in Speech and Music Communication School of Computer Science and Communication, KTH (Translated version, November 2009) Common guidelines for graduate-level studies
ETS Automated Scoring and NLP Technologies
ETS Automated Scoring and NLP Technologies Using natural language processing (NLP) and psychometric methods to develop innovative scoring technologies The growth of the use of constructed-response tasks
Developing an Isolated Word Recognition System in MATLAB
MATLAB Digest Developing an Isolated Word Recognition System in MATLAB By Daryl Ning Speech-recognition technology is embedded in voice-activated routing systems at customer call centres, voice dialling
Developmental Verbal Dyspraxia Nuffield Approach
Developmental Verbal Dyspraxia Nuffield Approach Pam Williams, Consultant Speech & Language Therapist Nuffield Hearing & Speech Centre RNTNE Hospital, London, Uk Outline of session Speech & language difficulties
Speech Signal Processing: An Overview
Speech Signal Processing: An Overview S. R. M. Prasanna Department of Electronics and Electrical Engineering Indian Institute of Technology Guwahati December, 2012 Prasanna (EMST Lab, EEE, IITG) Speech
LINGUISTIC DISSECTION OF SWITCHBOARD-CORPUS AUTOMATIC SPEECH RECOGNITION SYSTEMS
LINGUISTIC DISSECTION OF SWITCHBOARD-CORPUS AUTOMATIC SPEECH RECOGNITION SYSTEMS Steven Greenberg and Shuangyu Chang International Computer Science Institute 1947 Center Street, Berkeley, CA 94704, USA
Evaluating grapheme-to-phoneme converters in automatic speech recognition context
Evaluating grapheme-to-phoneme converters in automatic speech recognition context Denis Jouvet, Dominique Fohr, Irina Illina To cite this version: Denis Jouvet, Dominique Fohr, Irina Illina. Evaluating
SWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne
SWING: A tool for modelling intonational varieties of Swedish Beskow, Jonas; Bruce, Gösta; Enflo, Laura; Granström, Björn; Schötz, Susanne Published in: Proceedings of Fonetik 2008 Published: 2008-01-01
DynEd International, Inc.
General Description: Proficiency Level: Course Description: Computer-based Tools: Teacher Tools: Assessment: Teacher Materials: is a multimedia course for beginning through advanced-level students of spoken
Text-To-Speech Technologies for Mobile Telephony Services
Text-To-Speech Technologies for Mobile Telephony Services Paulseph-John Farrugia Department of Computer Science and AI, University of Malta Abstract. Text-To-Speech (TTS) systems aim to transform arbitrary
Useful classroom language for Elementary students. (Fluorescent) light
Useful classroom language for Elementary students Classroom objects it is useful to know the words for Stationery Board pens (= board markers) Rubber (= eraser) Automatic pencil Lever arch file Sellotape
Functional Auditory Performance Indicators (FAPI)
Functional Performance Indicators (FAPI) An Integrated Approach to Skill FAPI Overview The Functional (FAPI) assesses the functional auditory skills of children with hearing loss. It can be used by parents,
PERCENTAGE ARTICULATION LOSS OF CONSONANTS IN THE ELEMENTARY SCHOOL CLASSROOMS
The 21 st International Congress on Sound and Vibration 13-17 July, 2014, Beijing/China PERCENTAGE ARTICULATION LOSS OF CONSONANTS IN THE ELEMENTARY SCHOOL CLASSROOMS Dan Wang, Nanjie Yan and Jianxin Peng*
Develop Software that Speaks and Listens
Develop Software that Speaks and Listens Copyright 2011 Chant Inc. All rights reserved. Chant, SpeechKit, Getting the World Talking with Technology, talking man, and headset are trademarks or registered
School Class Monitoring System Based on Audio Signal Processing
C. R. Rashmi 1,,C.P.Shantala 2 andt.r.yashavanth 3 1 Department of CSE, PG Student, CIT, Gubbi, Tumkur, Karnataka, India. 2 Department of CSE, Vice Principal & HOD, CIT, Gubbi, Tumkur, Karnataka, India.
Version 1.0 Copyright 2009, DynEd International, Inc. August 2009 All rights reserved http://www.dyned.com
Teaching English: A Brain-based Approach Instructor s Guide Version 1.0 Copyright 2009, DynEd International, Inc. August 2009 All rights reserved http://www.dyned.com 1 Teaching English A Brain-based Approach
Secure-Access System via Fixed and Mobile Telephone Networks using Voice Biometrics
Secure-Access System via Fixed and Mobile Telephone Networks using Voice Biometrics Anastasis Kounoudes 1, Anixi Antonakoudi 1, Vasilis Kekatos 2 1 The Philips College, Computing and Information Systems
How To Read With A Book
Behaviors to Notice Teach Level A/B (Fountas and Pinnell) - DRA 1/2 - NYC ECLAS 2 Solving Words - Locates known word(s) in. Analyzes words from left to right, using knowledge of sound/letter relationships
(in Speak Out! 23: 19-25)
1 A SMALL-SCALE INVESTIGATION INTO THE INTELLIGIBILITY OF THE PRONUNCIATION OF BRAZILIAN INTERMEDIATE STUDENTS (in Speak Out! 23: 19-25) Introduction As a non-native teacher of English as a foreign language
Articulatory Phonetics. and the International Phonetic Alphabet. Readings and Other Materials. Review. IPA: The Vowels. Practice
Supplementary Readings Supplementary Readings Handouts Online Tutorials The following readings have been posted to the Moodle course site: Contemporary Linguistics: Chapter 2 (pp. 34-40) Handouts for This
An Analysis of the Eleventh Grade Students Monitor Use in Speaking Performance based on Krashen s (1982) Monitor Hypothesis at SMAN 4 Jember
1 An Analysis of the Eleventh Grade Students Monitor Use in Speaking Performance based on Krashen s (1982) Monitor Hypothesis at SMAN 4 Jember Moh. Rofid Fikroni, Musli Ariani, Sugeng Ariyanto Language
Dr. Wei Wei Royal Melbourne Institute of Technology, Vietnam Campus January 2013
Research Summary: Can integrated skills tasks change students use of learning strategies and materials? A case study using PTE Academic integrated skills items Dr. Wei Wei Royal Melbourne Institute of
BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION
BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION P. Vanroose Katholieke Universiteit Leuven, div. ESAT/PSI Kasteelpark Arenberg 10, B 3001 Heverlee, Belgium [email protected]
Working people requiring a practical knowledge of English for communicative purposes
Course Description This course is designed for learners at the pre-intermediate level of English proficiency. Learners will build upon all four language skills: listening, speaking, reading, and writing
TEFL Cert. Teaching English as a Foreign Language Certificate EFL MONITORING BOARD MALTA. January 2014
TEFL Cert. Teaching English as a Foreign Language Certificate EFL MONITORING BOARD MALTA January 2014 2014 EFL Monitoring Board 2 Table of Contents 1 The TEFL Certificate Course 3 2 Rationale 3 3 Target
AUDIMUS.media: A Broadcast News Speech Recognition System for the European Portuguese Language
AUDIMUS.media: A Broadcast News Speech Recognition System for the European Portuguese Language Hugo Meinedo, Diamantino Caseiro, João Neto, and Isabel Trancoso L 2 F Spoken Language Systems Lab INESC-ID
How To Teach Reading
Florida Reading Endorsement Alignment Matrix Competency 1 The * designates which of the reading endorsement competencies are specific to the competencies for English to Speakers of Languages (ESOL). The
Evaluation of speech technologies
CLARA Training course on evaluation of Human Language Technologies Evaluations and Language resources Distribution Agency November 27, 2012 Evaluation of speaker identification Speech technologies Outline
