HIDDEN MARKOV MODEL BASED SPEECH SYNTHESIS IN HUNGARIAN
Bálint Tóth, Géza Németh
Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics
{toth.b,

Abstract

This paper describes the first application of the Hidden Markov model based text-to-speech synthesis method to the Hungarian language. HMM synthesis has numerous favourable features: it can produce high quality speech from a small database, in theory it allows the characteristics and style of the speaker to be changed, and emotion expression may be trained with the system as well.

Keywords: speech synthesis, Text-To-Speech (TTS), Hidden Markov model (HMM)

1. Introduction

There are several speech synthesis methods: articulatory and formant synthesis (which try to model the speech production mechanism), diphone and triphone based concatenative synthesis, and corpus-based unit selection synthesis (based on recordings from a single speaker). Currently the best quality is produced by the unit selection method, although the corpus database is quite big (it may be in the gigabyte range) and the voice characteristics are defined by the speech corpus; these cannot be changed without additional recordings, labelling and processing, which increases the cost of generating several voices. Hidden Markov model (HMM) based text-to-speech (TTS) systems are categorized as a kind of unit selection speech synthesis, although in this case the units are not waveform samples but spectral and prosodic parameters extracted from the waveform. HMMs are responsible for selecting the parameters which most precisely represent the text to be read; a vocoder generates the synthesized voice from these parameters. HMM based text-to-speech systems are becoming quite popular because of their advantages: they are able to produce intelligible, natural sounding speech in good quality, and the size of the database is small (1.5-2 MBytes).
It is also possible to adapt the voice characteristics to different speakers with short recordings (5-8 minutes) [1, 2, 3, 4], and emotions can also be expressed with HMM TTS [5]. The current paper gives an overview of the architecture of HMM based speech synthesis, investigates the first adaptation of an open-source HMM-based TTS system to Hungarian and describes the steps of the adaptation process. The results of a MOS-like test are also introduced, and the authors' future plans are mentioned as well.

2. The basics of Hidden Markov models (HMMs)

In speech research Hidden Markov models are mainly used for speech recognition [6], although in the last decade or so they have also been applied to speech synthesis. The current section briefly introduces the basics of HMMs; a detailed description can be found in [7]. An HMM λ = (A, B, π) is defined by its parameters: A is the state transition probability, B is the output probability and π is the initial state probability. In the case of text-to-speech let us assume that λ is a group of HMMs which represent quintphones¹ in sequence (see Fig. 1).

¹ A quintphone is five phones in a sequence.
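A minimal sketch of such a model λ = (A, B, π) and of evaluating an observation sequence probability with it may make the definition concrete. All numbers below are invented for illustration; a real TTS model has many more states and continuous (Gaussian) outputs rather than this toy discrete alphabet.

```python
import numpy as np

# A toy discrete HMM lambda = (A, B, pi), as defined in the text.
A = np.array([[0.7, 0.3],    # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # output probabilities per state
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state probabilities

def forward_likelihood(obs):
    """P(obs | lambda) summed over all state sequences (forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_likelihood([0, 1, 0]))
```

The forward recursion implicitly performs the sum over all state sequences Q that appears in the likelihood formula of the next section.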
The series of quintphones defines the word we would like to generate. The goal is to find the most probable sequence of state feature vectors X, which will be used to generate the synthetic speech (see Section 3 for more details).

Figure 1. Concatenated HMM chain: in state q_i, at time i, the output is X_{q_i}.

The output X_{q_i} is an M-dimensional feature vector at state q_i of model λ:

X_{q_i} = (x_1^(q_i), x_2^(q_i), x_3^(q_i), ..., x_M^(q_i))

The aim is to find the output feature vector sequence x = (X_{q_1}, X_{q_2}, ..., X_{q_L}) (of length L, i.e. the number of sounds/phonemes in an utterance) from model λ which maximizes the overall likelihood P(x|λ):

x = argmax_x P(x|λ) = argmax_x Σ_Q P(x|q, λ) P(q|λ),

where Q = (q_1, q_2, ..., q_L) is the state sequence of model λ. The overall likelihood P(x|λ) can be computed by summing the product of the joint output probability P(x|q, λ) and the state sequence probability P(q|λ) over all possible state sequences Q. To be able to compute the result in moderate time the Viterbi approximation is used:

x ≈ argmax_x { P(x|q, λ, L) P(q|λ, L) }

The state sequence q of model λ can be maximized independently of x:

q = argmax_q { P(q|λ, L) }

Let us assume that the output probability distribution of each state q_i is a Gaussian density function with mean value μ_i and covariance matrix Σ_i. The model λ is then the set of all mean values and covariance matrices for all N states:

λ = (μ_1, Σ_1, μ_2, Σ_2, ..., μ_N, Σ_N)

Consequently the log-likelihood function is:

ln P(x|q, λ) = -(LM/2) ln(2π) - (1/2) Σ_{t=1}^{L} ln|Σ_{q_t}| - (1/2) Σ_{t=1}^{L} (x_t - μ_{q_t})^T Σ_{q_t}^{-1} (x_t - μ_{q_t})
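As a numerical sanity check of the log-likelihood above, the three terms can be evaluated directly for diagonal covariance matrices. The dimensions, means and variances below are made up; the sketch also illustrates that the likelihood is maximal when the observations equal the state means, which motivates the "trivial solution" discussed next.

```python
import numpy as np

M, L = 3, 4                                   # feature dimension, sequence length
rng = np.random.default_rng(0)
mu = rng.normal(size=(L, M))                  # mean mu_{q_t} of the state visited at time t
var = np.full((L, M), 0.5)                    # diagonal of Sigma_{q_t}
x = rng.normal(size=(L, M))                   # observed feature vectors x_t

def log_likelihood(x, mu, var):
    """ln P(x | q, lambda), term by term as in the text (diagonal Gaussians)."""
    const = -0.5 * x.size * np.log(2 * np.pi)     # -(LM/2) ln(2 pi)
    logdet = -0.5 * np.log(var).sum()             # -1/2 sum_t ln|Sigma_{q_t}|
    mahal = -0.5 * (((x - mu) ** 2) / var).sum()  # Mahalanobis term
    return const + logdet + mahal

# The likelihood peaks when every x_t equals its state mean:
print(log_likelihood(x, mu, var) <= log_likelihood(mu, mu, var))
```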
If we maximize x, the solution is trivial: the output feature vector equals the state mean values:

x = (μ_{q_1}, μ_{q_2}, ..., μ_{q_L})

This solution does not represent speech well because of discontinuities at the state boundaries. The feature vectors must therefore be extended with the delta and delta-delta coefficients (first and second derivatives):

x_{q_i} = (x_{q_i}^T, (Δx_{q_i})^T, (Δ²x_{q_i})^T)^T

3. HMM-based speech synthesis

HMM-based speech synthesis consists of two main phases: the training phase (Figure 2) and the speech generation phase (Figure 3). During training the HMMs learn the spectral and prosodic features of the speech corpus; during speech generation the most likely parameters of the text to be synthesized are extracted from the HMMs.

Figure 2. Training of the HMMs: excitation and spectral parameters are extracted from the audio signal of the speech database and, together with the phonetic transcription and context-dependent labels, used to train the HMM database.

For training a rather large speech corpus, its phonetic transcription and the precise positions of the phoneme boundaries are required. The mel cepstrum and the pitch, together with their first and second derivatives, are extracted from the waveform. Then the phonetic transcription is extended with context-dependent labelling (see subsection 4.2). When all these data are prepared, training can be started. During the training phase the HMMs learn the spectral and excitation parameters according to the context-dependent labels. To be able to model parameters with varying dimensions (e.g. log(f0) in unvoiced regions) multidimensional probability density functions are used. Each HMM has a state duration density function to model the rhythm of the speech. There are two standard ways to train the HMMs: (1) with a 2-4 hour long speech corpus from one speaker, or (2) with speech corpora from several speakers, adapted to a given speaker's voice characteristics with a 5-8 minute long speech corpus [1, 2].
This way new voice characteristics can easily be prepared from a rather small speech corpus. According to [1, 2] the adaptive training technique produces better quality than training from a single speech corpus. Furthermore, there are numerous methods to change the voice characteristics [3, 4].
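The delta and delta-delta extension of the feature vectors mentioned in Section 2 can be sketched as follows. The simple two-frame difference window used here is an illustrative assumption; in practice the regression windows are configurable.

```python
import numpy as np

def add_deltas(c):
    """Extend a static feature trajectory c (T x M) with delta and
    delta-delta rows, using delta_t = (c_{t+1} - c_{t-1}) / 2 with
    edge padding at the trajectory boundaries. Returns T x 3M."""
    padded = np.pad(c, ((1, 1), (0, 0)), mode="edge")
    delta = (padded[2:] - padded[:-2]) / 2.0
    padded_d = np.pad(delta, ((1, 1), (0, 0)), mode="edge")
    delta2 = (padded_d[2:] - padded_d[:-2]) / 2.0
    return np.hstack([c, delta, delta2])   # x_t = (x^T, (dx)^T, (d2x)^T)^T

c = np.arange(10, dtype=float).reshape(5, 2)   # toy 5-frame, 2-dim trajectory
o = add_deltas(c)
print(o.shape)  # (5, 6)
```

During synthesis the known relation between statics and deltas is what lets the parameter generation smooth out the discontinuities at state boundaries.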
To generate speech after training is done, the phonetic transcription and the context-dependent labelling (see subsection 4.2) of the text to be read must be created. Then the phone durations are extracted from the state duration density functions and the most likely spectral and excitation parameters are calculated by the HMMs. With these parameters the synthetic speech can be generated: from the pitch values the excitation signal of the voiced sections is generated and then filtered, typically with a mel log spectrum approximation (MLSA) filter [8]. Simple vocoders were used earlier; lately mixed excitation models are applied in order to achieve better quality [9].

Figure 3. Speech generation with HMMs: the text to be read is phonetically transcribed and labelled with context-dependent features, the spectral, excitation and duration parameters are generated from the HMM database, and the excitation (derived from the pitch) is passed through the mel-cepstrum based filter to produce the synthesized speech.

4. Adapting HMM-based TTS to Hungarian

The authors conducted the experiments with the HTS framework [10]. For the Hungarian adaptation a speech database, its phonetic transcription, the context-dependent labelling and language specific questions for the decision trees were necessary. In the following the most important steps are described.

4.1. Preparation of the speech corpus

The authors used 600 sentences to train the HMMs. All the sentences were recorded from professional speakers and resampled at Hz with 16-bit resolution. The content of the sentences is weather forecasts and the total length is about 2 hours. The authors prepared the phonetic transcription of the sentences; the letter and word boundaries were determined by automatic methods, which are described in [11].

4.2. Context-dependent labelling

To be able to select the most likely units a number of phonetic features must be defined. These features are calculated for every sound. Table 1 summarizes the most important features.
Sounds: the two previous and the two following sounds/phonemes (quintphones); pauses are also marked.
Syllables: whether the current / previous / next syllable is accented; the number of phonemes in the current / previous / next syllable; the number of syllables from/to the previous/next accented syllable; the vowel of the current syllable.
Word: the number of syllables in the current / previous / next word; the position of the current word in the current phrase (forward and backward).
Phrase: the number of syllables in the current / previous / next phrase; the position of the current phrase in the sentence (forward and backward).
Sentence: the number of syllables, words and phrases in the current sentence.

Table 1. The prosodic features used for context-dependent labelling.

Labelling is done automatically, which may introduce small errors (e.g. in defining the accented syllables). This does not influence the quality much, as the same algorithm is used during speech generation, so the parameters will be chosen by the HMMs consistently (even in the case of small labelling errors).

4.3. Decision trees

In subsection 4.2 context-dependent labelling was introduced. The number of combinations of all the context-dependent features is very large: if we take into account the possible variations of quintphones only, even that is over 160 million, and this number grows exponentially if other context-dependent features are included as well. Consequently it is impossible to design a natural speech corpus in which all combinations of the context-dependent features occur. To overcome this problem decision tree based clustering [12, 13] is used. As different features influence the spectral parameters, the pitch values and the state durations, separate decision trees must be built for each. Table 2 shows which features were used for the decision trees in the case of Hungarian [14].

Phonemes: vowel / consonant; short / long; stop / fricative / affricate / liquid / nasal; front / central / back vowel; high / medium / low vowel; rounded / unrounded vowel.
Syllable: stressed / not stressed; numeric parameters (see Table 1).
Word: numeric parameters (see Table 1).
Phrase: numeric parameters (see Table 1).
Sentence: numeric parameters (see Table 1).

Table 2. Features used for building the decision trees.
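The context-dependent features of Tables 1 and 2 are typically serialized into a per-phoneme label string, and the decision trees split on yes/no questions about such labels. The label layout, question and phoneme inventory below are illustrative assumptions, not the exact HTS label format or the Hungarian question set.

```python
# Build a (hypothetical) context-dependent label for one phoneme and
# answer one decision-tree question about it.
def make_label(ctx):
    """Serialize quintphone context and a syllable count into a label string."""
    return "{ll}^{l}-{c}+{r}={rr}/SYL:{nsyl}".format(**ctx)

ctx = {"ll": "sil", "l": "h", "c": "o", "r": "l", "rr": "n", "nsyl": 2}
label = make_label(ctx)   # 'sil^h-o+l=n/SYL:2'

LONG_VOWELS = {"a:", "e:", "i:", "o:", "u:"}  # assumed inventory subset

def question_is_long_vowel(ctx):
    """One yes/no question of the kind used to split a decision-tree node."""
    return ctx["c"] in LONG_VOWELS

print(label, question_is_long_vowel(ctx))
```

Clustering states by the answers to such questions is what lets quintphone contexts never seen in the corpus share parameters with acoustically similar seen contexts.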
For example, if the decision-tree question regarding the length of consonants is excluded, then the HMMs will mostly select short consonants even for long ones, as these are not clustered separately and there are many more short consonants in the database.

4.4. Results

In order to assess the quality of the Hungarian HMM-based text-to-speech synthesis objectively, the authors conducted a MOS (Mean Opinion Score) like listening test. Three synthesis engines were included in the test: a triphone based system, a unit selection system and an HMM based speech synthesis engine. At the beginning of the test three randomly generated sentences from each system were played; these sentences were not scored by the subjects. The reason for this is to let the subjects get used to synthetic voices and to show them what kind of quality they can expect. In the next step 29 sentences generated by each system were played randomly, in different sequences, in order to avoid memory effects [15]. The content of the test sentences was from the weather forecast domain. The triphone based system is the ProfiVox domain-independent speech synthesizer, the HMM based TTS was trained with weather forecast sentences (cca. 600 sentences) and the unit selection system had a large weather forecast speech corpus (including cca sentences). The same 29 sentences were generated by all systems, but none of these sentences were present in the speech corpora. The subjects scored the sentences from 1 to 5 (1 was the worst, 5 was the best). 12 subjects participated in the test. The results are shown in Figure 4.

Figure 4. The results of the MOS-like test.

The triphone based system scored 2.56, the HMM based TTS 3.63 and the unit selection system 3.9 on average (mean value).
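A MOS evaluation of this kind reduces to simple statistics over the subjects' scores. A minimal sketch, with made-up scores from 12 subjects for one system (not the actual test data):

```python
import statistics

# Hypothetical 1-5 scores from 12 subjects for one synthesis engine.
scores = [4, 3, 4, 5, 3, 4, 4, 2, 4, 3, 5, 4]
mos = statistics.mean(scores)          # the reported MOS is this mean
sd = statistics.stdev(scores)          # sample standard deviation
print(round(mos, 2), round(sd, 2))     # 3.75 0.87
```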
The standard deviations were, in the same order, 0.73, 1 and. Although the unit selection system was considered better than the HMM based TTS, it can read only domain-specific sentences with this quality, while the HMM based TTS can read any sentence with rather good quality. Furthermore, the database of the unit selection system includes more than 11 hours of recordings, while only 1.5 hours of recordings were enough to train the HMMs. The size of the HMM database is under 2 MBytes, while the unit selection system's database is over 1 GByte. The triphone based system was designed for arbitrary text; domain-specific information is not included in the engine. This can be
one reason for the lower score. The absolute values of the results are not so important; rather, their ratio carries the most relevant information.

5. Future plans

The current paper introduced the first version of the Hungarian HMM-based text-to-speech system. As the next step the authors would like to record additional speech corpora in order to test adaptive training, to achieve a more natural voice and to be able to create new voice characteristics and emotional speech from small (5-8 minute long) databases. Because of the small footprint and the good voice quality, the authors would also like to apply the system on mobile devices. To port the hts_engine to embedded devices, the core engine may have to be optimized to achieve real-time response on mobile hardware.

6. Summary

In this paper the basics of Hidden Markov models were introduced, the main steps of creating the first Hungarian HMM based text-to-speech synthesis system were described and a MOS-like test was presented. The main advantages of HMM-based TTS systems are that they can produce natural sounding speech from a small database, and that it is possible to change the voice characteristics and to express emotions.

Acknowledgements

We would like to thank the test subjects for their participation. We acknowledge the support of Mátyás Bartalis in creating the Web-based MOS-like test environment and the help of Péter Mihajlik in using the Hungarian speech recognition tools. The research was partly supported by the NKTH in the framework of the NAP project (OMFB-00736/2005).

References

[1] T. Masuko, K. Tokuda, T. Kobayashi, and S. Imai, "Voice characteristics conversion for HMM-based speech synthesis system," Proc. ICASSP, 1997.
[2] M. Tamura, T. Masuko, K. Tokuda, and T. Kobayashi, "Adaptation of pitch and spectrum for HMM-based speech synthesis using MLLR," Proc. ICASSP, 2001.
[3] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura, "Speaker interpolation in HMM-based speech synthesis system," Proc. Eurospeech, 1997.
[4] M. Tachibana, J. Yamagishi, T. Masuko, and T. Kobayashi, "Speech synthesis with various emotional expressions and speaking styles by style interpolation and morphing," IEICE Trans. Inf. & Syst., vol. E88-D, no. 11.
[5] S. Krstulovic, A. Hunecke, M. Schroeder, "An HMM-based speech synthesis system applied to German and its adaptation to a limited set of expressive football announcements," Proc. Interspeech.
[6] Mihajlik P., Fegyó T., Németh B., Tüske Z., Trón V., "Towards automatic transcription of large spoken archives in agglutinating languages: Hungarian ASR for the MALACH project," in Matousek V., Mautner P. (eds.), Text, Speech and Dialogue: 10th International Conference, TSD 2007, Pilsen, Czech Republic, September 2007, Proceedings, Berlin/Heidelberg: Springer (Lecture Notes in Artificial Intelligence; 4629).
[7] Lawrence R. Rabiner, "A tutorial on Hidden Markov models and selected applications in speech recognition," Proc. of the IEEE, 77 (2), February.
[8] S. Imai, "Cepstral analysis synthesis on the mel frequency scale," Proc. ICASSP, 1983.
[9] R. Maia, T. Toda, H. Zen, Y. Nankaku, K. Tokuda, "A trainable excitation model for HMM-based speech synthesis," Proc. Interspeech 2007, August 2007.
[10] H. Zen, T. Nose, J. Yamagishi, S. Sako, T. Masuko, A. W. Black, K. Tokuda, "The HMM-based speech synthesis system version 2.0," Proc. of ISCA SSW6, Bonn, Germany.
[11] Mihajlik, P., Revesz, T., Tatai, P., "Phonetic transcription in automatic speech recognition," Acta Linguistica Hungarica, 2003, vol. 49, no. 3/4.
[12] J. J. Odell, The Use of Context in Large Vocabulary Speech Recognition, PhD dissertation, Cambridge University.
[13] K. Shinoda and T. Watanabe, "MDL-based context-dependent subword modeling for speech recognition," J. Acoust. Soc. Jpn. (E), vol. 21, no. 2, pp. 79-86.
[14] Gósy M.: Fonetika, a beszéd tudománya. Budapest, Osiris Kiadó.
[15] Jan P. H. van Santen, "Perceptual experiments for diagnostic testing of text-to-speech systems," Computer Speech and Language, 1993.
From Concept to Production in Secure Voice Communications
From Concept to Production in Secure Voice Communications Earl E. Swartzlander, Jr. Electrical and Computer Engineering Department University of Texas at Austin Austin, TX 78712 Abstract In the 1970s secure
Gender Identification using MFCC for Telephone Applications A Comparative Study
Gender Identification using MFCC for Telephone Applications A Comparative Study Jamil Ahmad, Mustansar Fiaz, Soon-il Kwon, Maleerat Sodanil, Bay Vo, and * Sung Wook Baik Abstract Gender recognition is
ADAPTIVE AND DISCRIMINATIVE MODELING FOR IMPROVED MISPRONUNCIATION DETECTION. Horacio Franco, Luciana Ferrer, and Harry Bratt
ADAPTIVE AND DISCRIMINATIVE MODELING FOR IMPROVED MISPRONUNCIATION DETECTION Horacio Franco, Luciana Ferrer, and Harry Bratt Speech Technology and Research Laboratory, SRI International, Menlo Park, CA
Audio Engineering Society. Convention Paper. Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA
Audio Engineering Society Convention Paper Presented at the 129th Convention 2010 November 4 7 San Francisco, CA, USA The papers at this Convention have been selected on the basis of a submitted abstract
APPLYING MFCC-BASED AUTOMATIC SPEAKER RECOGNITION TO GSM AND FORENSIC DATA
APPLYING MFCC-BASED AUTOMATIC SPEAKER RECOGNITION TO GSM AND FORENSIC DATA Tuija Niemi-Laitinen*, Juhani Saastamoinen**, Tomi Kinnunen**, Pasi Fränti** *Crime Laboratory, NBI, Finland **Dept. of Computer
Effect of Captioning Lecture Videos For Learning in Foreign Language 外 国 語 ( 英 語 ) 講 義 映 像 に 対 する 字 幕 提 示 の 理 解 度 効 果
Effect of Captioning Lecture Videos For Learning in Foreign Language VERI FERDIANSYAH 1 SEIICHI NAKAGAWA 1 Toyohashi University of Technology, Tenpaku-cho, Toyohashi 441-858 Japan E-mail: {veri, nakagawa}@slp.cs.tut.ac.jp
Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29.
Broadband Networks Prof. Dr. Abhay Karandikar Electrical Engineering Department Indian Institute of Technology, Bombay Lecture - 29 Voice over IP So, today we will discuss about voice over IP and internet
Introduction to Algorithmic Trading Strategies Lecture 2
Introduction to Algorithmic Trading Strategies Lecture 2 Hidden Markov Trading Model Haksun Li [email protected] www.numericalmethod.com Outline Carry trade Momentum Valuation CAPM Markov chain
SASSC: A Standard Arabic Single Speaker Corpus
SASSC: A Standard Arabic Single Speaker Corpus Ibrahim Almosallam, Atheer AlKhalifa, Mansour Alghamdi, Mohamed Alkanhal, Ashraf Alkhairy The Computer Research Institute King Abdulaziz City for Science
THE MEASUREMENT OF SPEECH INTELLIGIBILITY
THE MEASUREMENT OF SPEECH INTELLIGIBILITY Herman J.M. Steeneken TNO Human Factors, Soesterberg, the Netherlands 1. INTRODUCTION The draft version of the new ISO 9921 standard on the Assessment of Speech
31 Case Studies: Java Natural Language Tools Available on the Web
31 Case Studies: Java Natural Language Tools Available on the Web Chapter Objectives Chapter Contents This chapter provides a number of sources for open source and free atural language understanding software
Computer Assisted Language Learning (CALL) Systems
Tutorial on CALL in INTERSPEECH2012 by T.Kawahara and N.Minematsu 1 Computer Assisted Language g Learning (CALL) Systems Tatsuya Kawahara (Kyoto University, Japan) Nobuaki Minematsu (University of Tokyo,
A STATE-SPACE METHOD FOR LANGUAGE MODELING. Vesa Siivola and Antti Honkela. Helsinki University of Technology Neural Networks Research Centre
A STATE-SPACE METHOD FOR LANGUAGE MODELING Vesa Siivola and Antti Honkela Helsinki University of Technology Neural Networks Research Centre [email protected], [email protected] ABSTRACT In this paper,
Improving Automatic Forced Alignment for Dysarthric Speech Transcription
Improving Automatic Forced Alignment for Dysarthric Speech Transcription Yu Ting Yeung 2, Ka Ho Wong 1, Helen Meng 1,2 1 Human-Computer Communications Laboratory, Department of Systems Engineering and
SPEECH SYNTHESIZER BASED ON THE PROJECT MBROLA
Rajs Arkadiusz, Banaszak-Piechowska Agnieszka, Drzycimski Paweł. Speech synthesizer based on the project MBROLA. Journal of Education, Health and Sport. 2015;5(12):160-164. ISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.35266
Advanced Signal Processing and Digital Noise Reduction
Advanced Signal Processing and Digital Noise Reduction Saeed V. Vaseghi Queen's University of Belfast UK WILEY HTEUBNER A Partnership between John Wiley & Sons and B. G. Teubner Publishers Chichester New
Information Leakage in Encrypted Network Traffic
Information Leakage in Encrypted Network Traffic Attacks and Countermeasures Scott Coull RedJack Joint work with: Charles Wright (MIT LL) Lucas Ballard (Google) Fabian Monrose (UNC) Gerald Masson (JHU)
