Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception


University of Iowa, Iowa Research Online: Theses and Dissertations, Summer 2014

Ruth Flaherty, University of Iowa
Copyright 2014 Ruth Flaherty
This thesis is available at Iowa Research Online.

Recommended citation: Flaherty, Ruth. "Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception." MA (Master of Arts) thesis, University of Iowa, 2014. Part of the Speech Pathology and Audiology Commons.

TALKER-IDENTIFICATION TRAINING USING SIMULATIONS OF HYBRID CI HEARING: GENERALIZATION TO SPEECH RECOGNITION AND MUSIC PERCEPTION

by Ruth Mei Flaherty

A thesis submitted in partial fulfillment of the requirements for the Master of Arts degree in Speech Pathology and Audiology in the Graduate College of The University of Iowa

August 2014

Thesis Supervisor: Professor Karen I. Kirk

Copyright by RUTH MEI FLAHERTY 2014. All Rights Reserved.

Graduate College
The University of Iowa
Iowa City, Iowa

CERTIFICATE OF APPROVAL
MASTER'S THESIS

This is to certify that the Master's thesis of Ruth Mei Flaherty has been approved by the Examining Committee for the thesis requirement for the Master of Arts degree in Speech Pathology and Audiology at the August 2014 graduation.

Thesis Committee: Karen I. Kirk, Thesis Supervisor; Carolyn J. Brown; Bruce J. Tomblin

To my parents: Amelia and Dennis Flaherty

ACKNOWLEDGMENTS

Foremost, I would like to express my sincerest gratitude to my advisor, Dr. Karen Kirk, for the continuous support of my Master's study and research, and for her patience and immense knowledge. Her guidance helped me throughout the research and writing of this thesis. Additionally, I would like to thank the rest of my thesis committee, Dr. Bruce Tomblin and Dr. Carolyn Brown, for their insights and encouragement. My sincerest thanks also go to Virginia Driscoll, for all of her quick problem solving, patience, and constant willingness to help. I also thank my fellow labmates Lauren Dowdy and Nora Prachar for their continual hard work, early mornings and late nights, and for all of the fun times we had the past couple of years. I am also extremely grateful for the support and patience of my friends, family, and loving boyfriend. Without any of the people I have mentioned, the completion of this thesis would not have been possible.

ABSTRACT

The speech signal carries two types of information: linguistic information (the message content) and indexical information (acoustic cues about the talker). In the traditional view of speech perception, the acoustic differences among talkers were considered noise. In this view, the listener's task was to strip away unwanted variability to uncover the idealized phonetic representation of the spoken message. A more recent view suggests that both talker information and linguistic information are stored in memory. Rather than being unwanted noise, talker information aids in speech recognition, especially under difficult listening conditions. For example, it has been shown that normal hearing listeners who completed voice recognition training were subsequently better at recognizing speech from familiar versus unfamiliar voices. For individuals with hearing loss, access to both types of information may be compromised. Some studies have shown that cochlear implant (CI) recipients are relatively poor at using indexical speech information because low-frequency speech cues are poorly conveyed in standard CIs. However, some CI users with preserved residual hearing can now combine acoustic amplification of low frequency information (via a hearing aid) with electrical stimulation in the high frequencies (via the CI). When a listener uses a CI in one ear and a hearing aid in the opposite ear, this is referred to as bimodal hearing. A second way electrical and acoustic stimulation is achieved is through a new CI system, the hybrid CI. This device combines electrical stimulation with acoustic hearing in the same ear, via a shortened electrode array that is intended to preserve residual low frequency hearing in the apical portion of the cochlea. It may be that hybrid CI users can learn to use voice information to enhance speech understanding.
This study will assess voice learning and its relationship to talker-discrimination, music perception, and spoken word recognition in simulations of hybrid CI or bimodal hearing. Specifically, our research questions are as follows: (1) Does training increase talker identification? (2) Does familiarity with the talker or linguistic message enhance spoken word recognition? (3) Does enhanced spectral processing (as demonstrated by improved talker recognition) generalize to non-linguistic tasks such as talker discrimination and music perception tasks? To address our research questions, we will recruit normal hearing adults to participate in eight talker identification training sessions. Prior to training, subjects will be administered the forward and backward digit span task to assess short-term memory and working memory abilities. We hypothesize that there will be a correlation between the ability to learn voices and memory. Subjects will also complete a talker-discrimination test and a music perception test that require the use of spectral cues. We predict that training will generalize to performance on these tasks. Lastly, a spoken word recognition (SWR) test will be administered before and after talker identification training. The subjects will listen to sentences produced by eight talkers (four male, four female) and verbally repeat what they heard. Half of the sentences will contain keywords repeated in training and half of the sentences will have keywords not repeated in training. Additionally, subjects will have only heard sentences from half of the talkers during training. We hypothesize that subjects will show an advantage for trained keywords over non-trained keywords and will perform better with familiar talkers than unfamiliar talkers.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER
I. REVIEW OF THE LITERATURE
    A More Recent View of Speech Perception
    The Effects of Hearing Loss on Speech Perception
    The Cochlear Implant
    The Role of Indexical Information
    Electrical and Acoustic Stimulation (EAS)
    The Hybrid Cochlear Implant
    Auditory Training to Improve Speech Perception Abilities
    Talker-Identification Training in Simulated Binaural Hearing
II. METHODS
    Participants
    Stimuli
        Training Stimuli
        Talker Training Sets
        Spoken Word Recognition Test Stimuli
    Cochlear Implant Simulation
        Hybrid CI Simulations
    Pre- and Post-Training Tests
        Hearing in Noise Test (HINT)
        Spoken Word Recognition Test (SWR)
        Test of Timbre Recognition (TTR)
        Talker Discrimination Test (TDT)
        Digit Span Test (Forward and Backward)
    Procedure
        Training Sessions
        Familiarization (Phase 1)
        Voice Identification Training (Phase 2)
        Voice Identification Testing (Phase 3)
III. RESULTS
    Experimental Group Participants
    Phase 1 Familiarization
    Phase 2 Voice Identification Training
    Phase 3 Voice Identification Testing
    Comparison of Pre Versus Post Tests
        Spoken Word Recognition
        Talker Discrimination Test
        Music Timbre Test of Recognition
        Digit Span Test (Forward and Backward)
    Relationship Between Voice Learning and Performance on Other Tasks
IV. DISCUSSION
    Research Questions
    Voice Learning
        Effect of Key Word Familiarity on Voice Learning
    Effect of Talker and Key Word Familiarity on Spoken Word Recognition
    Transfer of Learning to Other Tasks
        Spoken Word Recognition
        Talker Discrimination
        Music Test of Timbre Recognition
    Factors that Influence Learning
    Conclusion

REFERENCES

LIST OF TABLES

Table 1. Demographic Characteristics of the Participants
Table 2. Correlation between Changes in Pre-Post Task Performances and Change in Phase 3 Voice Identification Performance
Table 3. Correlation between Changes in Pre-Post Task Performances and Forward and Backward Digit Span Performances

LIST OF FIGURES

Figure 1. Schematic of the 4-session talker-identification training protocol for experimental subjects
Figure 2. Voice Identification During Training (Phase 2) across Key Word Types
Figure 3. Voice Identification During Training (Phase 2) for Novel Key Words
Figure 4. Voice Identification During Training (Phase 2) for Repeated Key Words
Figure 5. Voice Identification During Testing (Phase 3) across Key Word Types
Figure 6. Voice Identification During Testing (Phase 3) for Novel Key Words
Figure 7. Voice Identification During Testing (Phase 3) for Repeated Key Words
Figure 8. Average Pre-Post SWR Scores for Experimental and Control Groups
Figure 9. Individual Pre-Post SWR Scores for Experimental and Control Groups
Figure 10. Pre-Post SWR Scores as a Function of Talker Familiarity and Novel-Key-Word Type for Experimental Group Participants
Figure 11. Pre-Post SWR Scores as a Function of Talker and Novel-Key-Word Type for Control Group Participants
Figure 12. Pre-Post SWR Scores as a Function of Talker Familiarity and Repeated-Key-Word Type for Experimental Group Participants
Figure 13. Pre-Post SWR Scores as a Function of Talker and Repeated-Key-Word Type for Control Group Participants
Figure 14. Average Pre-Post Talker Discrimination Test for Experimental and Control Groups
Figure 15. Average Pre-Post Timbre Test of Recognition Scores for Experimental and Control Groups
Figure 16. Average Digit Span Pretest Scores (Forward and Backward) by Experimental and Control Groups

CHAPTER I
REVIEW OF THE LITERATURE

In the traditional view of speech perception, indexical cues were considered noise in the speech signal that obscured the linguistic message, making it more difficult to extract the meaning of the message (Nygaard & Pisoni, 1998). Talker identification was assumed to involve mechanisms distinct and separate from those underlying comprehension of the linguistic message; therefore, the two were viewed as independent processes of speech perception (Nygaard & Pisoni, 1998). Early studies in search of invariant speech cues important for speech perception hypothesized that linguistic information constituted the significant part of the acoustic signal, in contrast to background noise and linguistically irrelevant features (i.e., indexical properties) (Liberman et al., 1967). In fact, in an early investigation of the fundamental cues for the perception of speech, Liberman and colleagues proposed that each person possessed an auditory encoder with which to recover phonemes in the face of talker acoustic variability (i.e., stress, intonation, and tempo). It was believed that this auditory encoder was responsible for normalizing talker differences to uncover idealized representations of speech sound units. In keeping with this traditional perspective, early speech perception research sought to remove talker variability from experimental stimuli by specifically identifying the acoustic cues needed to perceive various phonemes and phoneme classes. Researchers analyzed the acoustic cues necessary for phoneme recognition and created synthetic speech to use in experiments of speech perception. However, the question of exactly how listeners are able to perceive speech amidst acoustic and contextual variability in everyday speech remained. Indeed, the shortcomings of the traditional approach were revealed as theoretical developments in cognitive neuroscience and new computational tools emerged (Pisoni and Levi, 2005).
Additionally, collaborative research developments between speech perception researchers and cognitive psychologists in the area of categorization (mental representations of categories used to group new ideas and objects) and linguistics in the study of frequency-based phonology (how frequency of use of particular phonemes may affect the recognition of certain phonemes and the phonology of a language) offered new insight into the aforementioned issues involving variability and invariance in the perception of speech (Pisoni and Levi, 2005). Up to this point, speech scientists had struggled to explain the successful interpretation of the speech signal in terms of two conditions inherent in the traditional view: linearity and invariance. Linearity is defined as the condition in which each phoneme has a corresponding stretch of sound and phonemes are adjoined, whereas invariance captures the idea that each speech sound has a consistent set of acoustic features across contexts (Pisoni and Levi, 2005). Coarticulatory effects clearly prevent both of these conditions from being met. Coarticulation is the presence of multiple phonemes and their corresponding acoustic features at one point in time in the speech signal. Acoustic features also vary based on the given phonetic environment in which a speech sound is produced. These revelations led to a reconceptualization of the theory underlying speech perception, and new approaches for studying speech perception and spoken word recognition emerged.

A More Recent View of Speech Perception

In contrast with the traditional abstractionist view of speech perception, new theoretical approaches focused on the interaction among speech perception, speech production, and memory in language processing. Such approaches are referred to as episodic approaches, rooted in the assumption that spoken words are represented in lexical memory as a collection of specific individual perceptual episodes. This view is supported by studies of the processing of stimulus variability (Pisoni & Levi, 2005).
Goldinger, Pisoni, and Logan (1991) investigated the effect of talker variability on memory and recall to examine the relationship between perception of a linguistic message and processing of talker information. Either a single talker or multiple talkers presented lists of easily recognizable words and more difficult words at varying rates. The ease or difficulty with which words could be recognized was determined by the word's lexical properties, including word frequency (how often a word occurs), lexical neighborhood density (the number of phonemically similar words, or neighbors, to the target word), and neighborhood frequency (the average frequency of all lexical neighbors to a target). At faster presentation rates, words from single talker lists were recalled more accurately than words from multiple talker lists, with the strongest recall at the beginning of the list. However, when the presentation rate was slowed, words in the early list position produced by multiple talkers were recalled more accurately than those from single talker lists. It was hypothesized that listeners may be attending more intently to pairings of the words with the talkers when allowed more time for word rehearsal and transfer into long-term memory (Goldinger, Pisoni, and Logan, 1991). This stored distinctive information can then be used to make lists more discriminable, improving the recall of the words. Although effects of presentation rate demonstrated that talker variability affects perceptual encoding and rehearsal processes, the advantage in recall for easy words versus hard words did not reach significance. The results of this study suggest that the processing of talker information is an integral part of speech perception, requiring perceptual, attentional, and memory processes. Such results provide evidence against the abstractionist theory and evidence for the joint processing of indexical and lexical information. With the acknowledgement that indexical cues can assist the listener in recovering an intended message, more recent research has investigated how training on these specific factors may enhance speech perception.
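The lexical neighborhood measures above are straightforward to make concrete. As an illustration (not code from this thesis), neighborhood density is often operationalized as the number of lexicon entries that differ from the target by a single phoneme substitution, addition, or deletion; the toy phonemic lexicon and all names below are hypothetical:

```python
# Illustrative sketch (not from this thesis): lexical neighborhood density
# as the count of lexicon entries within one phoneme edit of a target.

def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, addition, or deletion (phoneme edit distance == 1)."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # one substitution allowed
        return sum(x != y for x, y in zip(a, b)) == 1
    if la > lb:   # make `a` the shorter sequence
        a, b = b, a
    i = 0
    while i < len(a) and a[i] == b[i]:
        i += 1
    return a[i:] == b[i + 1:]  # rest must match after skipping one phoneme

def neighborhood_density(target, lexicon):
    return sum(is_neighbor(target, w) for w in lexicon)

# Toy lexicon: words as tuples of phoneme symbols (ARPAbet-like, hypothetical)
lexicon = [
    ("K", "AE", "T"),       # cat (the target itself; not its own neighbor)
    ("B", "AE", "T"),       # bat  (substitution)
    ("K", "AA", "T"),       # cot  (substitution)
    ("K", "AE", "B"),       # cab  (substitution)
    ("K", "AE"),            # deletion neighbor
    ("S", "K", "AE", "T"),  # scat (addition)
    ("D", "AO", "G"),       # dog  (not a neighbor)
]

print(neighborhood_density(("K", "AE", "T"), lexicon))  # counts 5 of the 7
```

A "hard" word in the sense used above would have many such neighbors (a dense neighborhood) and high-frequency neighbors; an "easy" word would have few.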
Nygaard and Pisoni (1998) conducted three experiments to study the recognition of spoken words in isolation and in sentence contexts after talker-recognition training. In all three experiments, normal-hearing participants were trained to learn a set of 10 talkers' voices and then given a test of spoken word recognition to assess the influence of learning the voices on the processing of the linguistic message. Word recognition tests were presented at different signal-to-noise ratios: +10 dB, +5 dB, 0 dB, and -5 dB. In the first experiment, listeners learned the voices from isolated words and were then tested with novel isolated words to assess spoken word recognition. Listeners demonstrated better word recognition performance in noise when the words were produced by a familiar talker rather than an unfamiliar talker. In the second experiment, listeners learned voices from sentence-length utterances and were then tested with isolated words. However, improved performance for familiar voices from sentences to novel words in isolation was not found. The authors hypothesized that listeners learned a different set of acoustic properties available in the sentence-length utterances from those present in isolated words, affecting the listeners' word recognition performance (Nygaard and Pisoni, 1998). Although listeners who heard familiar voices were better at identifying isolated words than those who heard unfamiliar voices in the spoken word recognition task, the results did not reach statistical significance. In the final experiment, listeners learned voices from sentence-length utterances and speech recognition was then tested with sentences. Recognition of words in sentences was significantly better for sentences produced by familiar talkers, demonstrating that perceptual learning of voices from sentence-length utterances can facilitate word recognition (Nygaard and Pisoni, 1998). Ultimately, this study showed that learning to identify talkers' voices increases sensitivity to phonetic information in the speech signal and enhances the perception of the linguistic properties of speech in both isolated words and sentences.
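Presenting speech at a fixed signal-to-noise ratio, as in the word recognition tests above, amounts to scaling a masker so that the speech-to-noise power ratio hits the target value. A minimal sketch (not from this thesis; the signals are random placeholders standing in for recorded speech and noise):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in dB), then return the mixture speech + scaled noise."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power: p_speech / 10^(snr_db / 10)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # placeholder for a speech waveform
noise = rng.standard_normal(16000)   # placeholder for masking noise

for snr in (10, 5, 0, -5):
    mixed = mix_at_snr(speech, noise, snr)
    achieved = 10 * np.log10(np.mean(speech ** 2) /
                             np.mean((mixed - speech) ** 2))
    print(f"target {snr:+} dB, achieved {achieved:+.1f} dB")
```

Each 5 dB step roughly triples the noise power relative to the speech, which is why recognition degrades sharply from +10 dB down to -5 dB.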

The Effects of Hearing Loss on Speech Perception

As demonstrated by the previous studies, manipulating indexical cues such as talker number and speaking rate can either enhance or reduce spoken word recognition in listeners with normal hearing. However, the influence of talker information on speech perception for individuals with hearing loss, in whom access to both types of information may be compromised, has not been as widely researched. In order to better understand how hearing loss may impact speech perception, the perceptual processes supporting word recognition must be clearly identified for these individuals. Until recently, traditional spoken word recognition tests typically used phonetically balanced word lists produced by one talker at one speaking rate (Hirsh, Davis, Silverman, Reynolds, Eldert, & Benson, 1952; Causey, Hood, Hermanson, & Bowling, 1984; Nilsson, Soli, & Sullivan, 1994). Although such tests are useful in documenting word-recognition performance, they may not adequately assess speech perception under more natural listening conditions in which there is much stimulus variability. Kirk, Pisoni, and Miyamoto (1997) compared spoken word recognition performance by hearing impaired listeners as a function of talker number, speaking rate, and lexical complexity (based on frequency of occurrence and number of phonemically similar words). Additionally, each participant answered a 20-item questionnaire to assess his or her communication abilities in daily listening situations. The questionnaire asked participants to rate statements from various subscales of the Abbreviated Profile of Hearing Aid Benefit (APHAB) by Robyn Cox (1997), including Familiar Talkers, Reduced Cues, Background Noise, and Distortion of Sound. Kirk et al. also added two other subscales: Gender and Speaking Rate. All subscales were rated on a 7-point scale where A indicated that the statement was always true (99%) and G indicated that the statement was never true (1%) (Kirk et al., 1997). The authors hypothesized that word recognition performance on lists involving stimulus variability would correlate better with self-reports of listening ability than performance under conditions in which variability was constrained. Results showed that identification scores were poorer in the multiple-talker condition than in the single-talker condition and that word recognition scores decreased as speaking rate increased (Kirk et al., 1997). In contrast with the data from the study by Goldinger, Pisoni, and Logan (1991) presented above, this study observed significantly higher word recognition performance for lexically easy words than for lexically hard words. As predicted, the subjects' reported communication abilities in daily activities from the questionnaire were more highly correlated with performance under conditions involving stimulus variability than under those with minimal variability. For example, there were moderately high correlations (Total Score, .49) between the multi-talker and mixed-speaking rate conditions and the items from the APHAB subscales (i.e., Distortion of Sound, Familiar Talkers, Reduced Cues, and Background Noise) (Kirk et al., 1997). In sum, all three variables (talker variability, speaking rate, and lexical difficulty) significantly influenced speech perception abilities in individuals with hearing loss. It is important to note that all of the participants in this study scored highly on traditional tests of spoken word recognition performance (Kirk, Pisoni, and Miyamoto, 1997). It was not until the participants were tested with perceptually robust speech materials containing different sources of stimulus variability that the effects of indexical factors on the linguistic coding of speech were revealed.
It appears that these new tests measure several underlying aspects of speech perception in the research laboratory that capture the conditions encountered in everyday listening situations. A test such as the one used in this study enables an examiner to more accurately predict how a person with hearing loss may perform in natural communication situations and when using a sensory aid such as a hearing aid or cochlear implant (CI).

The Cochlear Implant

Persons with mild to moderate degrees of hearing loss typically use hearing aids, while those with more severe to profound loss typically use a CI. This thesis project is focused on adults with severe to profound hearing loss and the opportunity to improve speech perception abilities with the latest CI technology and rehabilitation strategies. A CI's function is to bypass missing or damaged sensory hair cells by directly stimulating surviving neurons in the auditory nerve (Wilson & Dorman, 2009). The cochlea is tonotopically organized: the basal part of the cochlea conveys high frequency sounds to the brain and the apical part conveys low frequencies. Implant systems attempt to reproduce this tonotopic organization by stimulating electrodes toward either the basal or apical portions of the cochlea to represent the corresponding frequencies. The components of a CI include: (1) a microphone to sense the sound in the environment; (2) a speech processor to transform the microphone input into stimuli for the implanted electrodes; (3) a transcutaneous link for the transmission of power and stimulus information across the skin; (4) an implanted receiver/stimulator to decode the information and generate stimuli for the electrodes; (5) a cable to connect the outputs of the receiver/stimulator to the electrodes; and (6) the electrode array (Wilson & Dorman, 2009). In addition to the mechanical components of a CI, there is a biological component that includes an individual's auditory nerve, auditory

pathways in the brainstem, and auditory cortex (Wilson & Dorman, 2009). This biological component varies widely in its functional capabilities across individuals with hearing loss. Early designs of the CI, dating back nearly 30 years, conveyed little more than a sensation of sound and temporal patterns (Wilson & Dorman, 2009). Many developments in processing strategies, electrode positioning, and finer frequency representations have since led to relatively great success for CI users. Mean present-day speech recognition scores for a unilateral CI user range from 50-85%, depending on the conditions (i.e., in noise or in quiet, respectively) (Dorman et al., 2009; Wilson & Dorman, 2009; Gifford, Shallop, & Peterson, 2008). For example, Gifford, Shallop, and Peterson (2008) reported 85% sentence recognition accuracy and CNC monosyllabic word accuracy in quiet. Although average scores indicate good recognition of speech in quiet, word recognition is much more difficult in noise, and a wide range of performance is noted across individual CI users. In spite of remarkable progress in CI design and performance over the last three decades, there remains great variability in the outcomes for individuals with CIs. One reason for this variability can be attributed to the biological component unique to each individual, as mentioned above. Other reasons may include age of implantation, limitations imposed by present electrode designs and placements, a mismatch between the number of discriminable sites and the number of effective channels, a deficit in fine structure representation (fine frequency information related to frequency variations within band-pass channels), and/or poor representation of the fundamental frequency (F0) needed for complex sounds (Wilson & Dorman, 2009).
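The tonotopic place-frequency relationship described above is commonly modeled with the Greenwood function (not used in this thesis); a sketch, using the widely cited constants for the human cochlea:

```python
# Sketch (not from this thesis): the Greenwood place-frequency function,
# a standard model of the cochlea's tonotopic map. Constants are the
# commonly cited values for the human cochlea (Greenwood, 1990).
A, a, K = 165.4, 2.1, 0.88

def greenwood_hz(x):
    """Characteristic frequency (Hz) at relative distance x along the
    basilar membrane (x = 0 is apical/low frequency, x = 1 is
    basal/high frequency)."""
    return A * (10 ** (a * x) - K)

# Apical sites code low frequencies, basal sites high frequencies:
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_hz(x):8.0f} Hz")
```

On this model, a shortened hybrid electrode array inserted only into the basal end of the cochlea would stimulate only the high-frequency region, leaving the apical low-frequency map available to residual acoustic hearing, which is the design rationale discussed later in this chapter.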

The Role of Indexical Information

It is evident from the literature that CI users can perform well on standard speech recognition tasks. However, there are relatively few studies investigating how CI users process indexical information; the limited studies available suggest that they perform poorly (Gantz et al., 2009; Brown & Bacon, 2009; Zhang, Dorman, & Spahr, 2010; Buchner et al., 2009; Turner et al., 2004). Indexical cues provide important information such as gender, age, dialect, and fundamental frequency (F0). Wilson and Dorman (2009) list five reasons why accurate perception of F0 alone can be important: (1) to separate auditory streams from different sources (such as a competing voice or background noise); (2) to identify a speaker's gender; (3) to discriminate emotional content and declarative versus inquisitive content; (4) to perceive tone languages; and (5) to perceive melody. Poor recognition of indexical cues can contribute to the noted difficulties in speech recognition tasks, particularly those under variable or noisy conditions. Vongphoe and Zeng (2005) investigated whether the temporal cues provided by a traditional CI are sufficient to support both speech recognition (linguistic information) and speaker recognition (indexical information). Ten CI subjects and six normal-hearing subjects were recruited for this study. In one condition, the subjects were asked to recognize the vowel produced by the speaker. In the second condition, subjects were asked to identify the speaker. All subjects completed intensive training for the speaker recognition task. Normal-hearing subjects achieved nearly perfect scores in both conditions. CI subjects achieved good scores in vowel recognition (65%) but poor scores in speaker identification (23%) (Vongphoe & Zeng, 2005). The results suggest that the brain may use different strategies to process speaker and speech information depending on which acoustic cues are available. The authors suggested that speaker recognition relies more on low frequency cues and is highly related to F0, while vowel recognition relies more on high-frequency cues and formant frequencies (Vongphoe & Zeng, 2005). This study highlighted the limitations of traditional CI processing strategies in effectively conveying low frequency and F0 cues for speaker recognition. Vongphoe and Zeng (2005) proposed that either a slowly varying form of frequency modulation or explicit F0 information should be encoded in future cochlear implants.

Electrical and Acoustic Stimulation (EAS)

A recent advance in cochlear implantation that may enhance talker identification by CI users is combined electrical and acoustic stimulation (EAS) of the auditory system. EAS can be used for persons with residual hearing in the low frequencies. In one configuration of EAS, high frequency information is accessed via the CI in one ear while low frequency information is provided by a hearing aid on the opposite ear, or by residual low-frequency hearing. This configuration has also been referred to as the bimodal approach. This acoustic information is thought to complement the higher frequency information provided by the CI and electrical stimulation. In comparison to the weak representation of F0 with a unilateral traditional CI, F0 representations appear to be highly robust with EAS (Wilson & Dorman, 2009). EAS has demonstrated substantial benefit for listening to speech in quiet, in noise, and in competition with another talker or multitalker babble, compared to either electrical stimulation or acoustic stimulation alone (Wilson & Dorman, 2009). Brown and Bacon (2009) evaluated the importance of F0, the acoustic amplitude envelope, and the combination of F0 and the amplitude envelope for EAS in competing backgrounds. Low frequency speech was replaced with a tone that was modulated in

frequency to track the F0 of the speech, in amplitude with the envelope of the low-frequency speech, or both. A four-channel vocoder simulated electric hearing. This was presented alone, combined with 500 Hz low-pass target speech, or combined with a tone that was either unmodulated, modulated in frequency by the dynamic change in F0, modulated in amplitude by the envelope of the low-pass speech, or modulated in both frequency and amplitude. Additionally, the 500 Hz low-pass target speech and each of the tonal cues were presented without the vocoder output. The participants repeated as much of a target sentence as they could without feedback. Results indicated a significant benefit of additional F0 or envelope cues over the simulated electric hearing alone (23-57 percentage points). Furthermore, the combination of F0 and envelope cues provided significantly more benefit than either cue alone, suggesting a synergistic effect (Brown & Bacon, 2009). The authors hypothesized that large improvements in competing backgrounds were observed due to several linguistic cues provided by F0 such as consonant voicing, lexical boundaries, contextual emphasis, and manner (Brown & Bacon, 2009). Ultimately, this study demonstrated the usefulness of both F0 and amplitude cues in simulated EAS for speech intelligibility. Recently, research on EAS has moved from testing normal hearing listeners with simulations to assessing performance of patients using a CI and low-frequency residual hearing. Zhang, Dorman, and Spahr (2010) investigated the minimum amount of low-frequency information needed to achieve speech perception benefit while using bimodal listening. Participants were presented with monosyllabic words in quiet and sentences in noise in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined EAS. The acoustic stimuli presented to the nonimplanted ear were low-pass filtered at 125, 250, 500, or 750 Hz, or unfiltered.
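Vocoder simulations of electric hearing, like the four-channel vocoder mentioned above, typically split the signal into a few frequency bands, extract each band's slowly varying amplitude envelope, and use the envelope to modulate a carrier in the same band. A minimal numpy-only sketch of one common variant, a noise-excited channel vocoder (band edges and all parameters here are illustrative assumptions, not Brown and Bacon's):

```python
import numpy as np

def noise_vocoder(signal, fs, band_edges_hz, seed=0):
    """Crude noise-excited channel vocoder: band-pass each channel via
    FFT masking, extract the envelope (rectify + low-pass at 50 Hz),
    and modulate band-limited noise with it. Illustrative only."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    out = np.zeros(n)
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n)
        # Envelope: full-wave rectification + brick-wall low-pass at 50 Hz
        env = np.fft.irfft(np.fft.rfft(np.abs(band)) * (freqs < 50), n)
        env = np.clip(env, 0, None)
        # Band-limited noise carrier in the same channel
        carrier = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * mask, n)
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# Placeholder input: an amplitude-modulated tone standing in for speech
signal = np.sin(2 * np.pi * 1000 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
# Four channels spanning 200 Hz - 7 kHz (hypothetical band edges)
vocoded = noise_vocoder(signal, fs, [200, 700, 1500, 3500, 7000])
print(vocoded.shape)
```

The output preserves the temporal envelope in each band but discards fine spectral structure, which is why F0 and talker cues survive so poorly in such simulations; in an EAS or hybrid simulation, output like this for the high-frequency channels would be combined with low-pass filtered speech representing residual acoustic hearing.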
The level of performance for word

24 12 recognition in quiet in the EAS condition was significantly higher than the performance in either the electric only or acoustic only conditions. The same was true for recognizing sentences in noise at a +10 db signal to noise ratio (SNR). The authors determined that improvement in the EAS condition is present even when the acoustic information was limited to the 125-Hz-low-passed signal. Zhang et al. reported that F0 accounted for the majority of speech perception benefit when using EAS. They suggested that F0 information improves voicing recognition, which allows for a reduced number of word candidates in the lexicon in quiet and marks syllable structures and word boundaries in noise (Zhang et al., 2010). This ultimately resulted in significantly improved word recognition in quiet and sentence recognition in noise. A similar study by Dorman, Gifford, Spahr, and McKarns (2008) compared speech recognition, melody recognition, and gender voice discrimination performance between EAS patients with a CI in one ear and low-frequency hearing in the opposite ear and patients with only unilateral CIs. Recognition tests were presented in the same three conditions: electric stimulation alone, acoustic stimulation alone, and combined EAS. For melody recognition, acoustic stimulation alone and EAS were not significantly different from each other but both produced significantly better results than the electric stimulation only. For gender voice discrimination, there was not a significant difference in performance among the conditions possibly due to ceiling effects (Dorman et al., 2008). For speech recognition performance, the EAS condition produced the highest scores out of the three conditions, demonstrating a percentage point increase in performance on tests of word and sentence recognition in quiet and sentence recognition in noise. EAS proved especially advantageous for sentence recognition in noise. At a + 10 db SNR, 6% of conventional CI patients scored 85% correct

or better on sentences, while 33% of EAS patients achieved scores at this level (Dorman et al., 2008). The authors suggested that fine-grained information about voice pitch allowed the listeners to segregate speech from noise. This study further supports the advantage of adding low frequency acoustic information to electric stimulation for speech recognition in both quiet and noisy conditions. Most, Harel, Shpak, and Luntz (2011) assessed the perception of suprasegmental features by adults who use a CI and a HA in opposite ears. Intonation, syllable stress, and word emphasis were assessed in a CI-only condition and in a CI + HA (bimodal) condition. Participants listened to recorded speech materials and were given three closed-format tests requiring identification of the correct stimulus among printed alternatives. In the intonation subtest, for example, participants listened to a recorded sentence and reported whether it was a statement or a question. Results indicated a significant advantage for the bimodal listening condition in the perception of all three suprasegmental features. Suprasegmental features require perception of the time-intensity envelope and F0 information, which was most likely conveyed by the low-frequency information provided by the HA (Most et al., 2011). These results further support the findings of Brown and Bacon (2009) discussed above. However, inspection of the individual data revealed great variability in performance across individuals. The authors hypothesized that this could be attributed to the amount of residual hearing or to the individual's CI type and coding strategy (Most et al., 2011). Ultimately, the results demonstrate the overall advantages of bimodal stimulation, along with individual differences.
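Several of the studies above present speech in noise at a fixed signal-to-noise ratio (e.g., +10 dB SNR). As a minimal sketch of how such stimuli can be prepared (an illustration only, not the stimulus-generation code used in any of the studies reviewed), the noise can be scaled so that the speech-to-noise power ratio matches the target SNR before mixing:

```python
import math
import random

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db`, then return the sample-wise mixture of the two signals."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Required noise power: P_speech / 10^(SNR/10)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]

# Example: a 1 kHz "speech" tone mixed with white noise at +10 dB SNR.
random.seed(0)
fs = 16000
speech = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(fs)]
noise = [random.uniform(-1.0, 1.0) for _ in range(fs)]
mixture = mix_at_snr(speech, noise, snr_db=10)
```

Because SNR is a ratio of average signal powers expressed in decibels, the required noise gain is the square root of the power ratio rather than the ratio itself.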

The Hybrid Cochlear Implant

The literature discussed thus far demonstrates improved speech recognition associated with EAS in individuals with a CI in one ear and acoustic hearing in the opposite ear. A recent advancement in CI design, the Hybrid CI, combines electric stimulation for high frequency sound with acoustic hearing for low frequency information in the same ear (Gantz & Turner, 2004). The Hybrid CI consists of a shortened electrode array that is intended to preserve residual low frequency hearing in the apical portion of the cochlea. This option is best for individuals with a high frequency sensorineural hearing loss, resulting from damage to the basal portion of the cochlea, who retain functional hair cells in the apical portion. High frequency hearing loss is the most common form of adult hearing loss and is usually caused by noise exposure, presbycusis, or ototoxic medications (Gantz & Turner, 2004). Therefore, the development of the Hybrid CI has great potential for improving speech recognition in many individuals with hearing loss. Gantz and Turner (2004) conducted a study using the Iowa/Nucleus Hybrid Implant in nine adults with severe high frequency hearing loss. The researchers sought to determine whether inserting an electrode array up to 10 mm into the inner ear would preserve or damage residual low frequency hearing. Traditional CI electrode arrays extend much deeper into the cochlea and usually damage any residual hearing. Both 6-mm and 10-mm electrode arrays were used in this study. Residual low frequency hearing was preserved in all subjects following implantation of the Iowa/Nucleus Hybrid Implant, and preoperative monosyllabic word and sentence scores were unchanged (Gantz & Turner, 2004). Consonant recognition performance was assessed prior to implantation in two conditions: HA in the ear to receive the implant and HAs bilaterally. Post-implantation measures were taken in three conditions:

CI only, CI + ipsilateral HA, and CI + bilateral HAs. A general trend showed increased performance for most patients as additional devices provided multiple sources of information. Before implantation, consonant recognition scores in the bilateral HA condition ranged from 18-43%. Scores more than doubled with the addition of the 10-mm electrode to bilateral HAs, resulting in monosyllabic word recognition of 83-90% (Gantz & Turner, 2004). The results of this study indicate that a 10-mm-electrode Hybrid CI can provide high frequency speech information, without damaging low frequency acoustic hearing, for the improvement of speech recognition. Based on these positive findings, Gantz et al. (2009) conducted a study involving implantation of the Iowa/Nucleus 10-mm Hybrid implant in a much larger group of 87 participants. Again, comparisons were made between patients' preoperative word recognition scores in quiet using bilateral HAs and postoperative scores using the implant and bilateral HAs. Additionally, the change in the mean low frequency pure tone threshold was measured pre- and post-implantation. Improvements in either word recognition or speech reception threshold occurred in 74% of the participants; improvements in both measures occurred in 48% (Gantz et al., 2009). A real-world comparison in which patients were tested using both ears showed significant benefit from the Hybrid CI. In the acoustic-only preoperative condition, participants identified 35% of CNC words. Performance increased to 73% 12 months after Hybrid implantation and was maintained at 24 months (Gantz et al., 2009). Additionally, a subgroup of 27 Hybrid users was tested on spondee recognition in multitalker babble, and their scores were compared to those of standard CI users. On average, the Hybrid users showed an advantage for speech recognition in noise unless their low-frequency postoperative hearing

levels approached profound levels (Gantz et al., 2009). This study further supports the benefit of preserving low frequency hearing in Hybrid CI users for speech perception in quiet and in noise. Results also demonstrated an advantage in speech perception when using a Hybrid CI in comparison to a traditional CI with a standard electrode array. Turner, Gantz, and Reiss (2008) also compared speech perception abilities in competing backgrounds for individuals using Iowa/Nucleus Hybrid CIs with those using traditional CIs. Additionally, the researchers conducted a second experiment investigating the possible effects of assigning shifted-from-normal speech frequencies to the Hybrid implant's electrodes. Information presented via the implanted electrode is shifted in relation to the normal place-frequency map of the cochlea (Turner et al., 2008). Traditional CIs shift speech frequencies to a more basal region of the cochlea that is responsible for high frequency hearing. In a Hybrid CI, the electrode array is even shorter, resulting in an even greater shift than in a traditional CI. The stimulation for Hybrid CI users is therefore concentrated even further toward the basal end of the cochlea. Information is also shifted in relation to the acoustically presented information at lower frequencies, with a potential gap in the middle frequencies. The study's second experiment investigated the possible effects of these shifted speech frequencies. In the first experiment, 19 adult Hybrid users and 20 adult traditional CI users identified spondee words presented in a two-talker background of sentences; the SNR yielding 50% correct served as the speech recognition threshold. Responses were made in a 12-alternative closed set. When the words were presented in quiet, all listeners recognized more than 90% of them (Turner et al., 2008). The purpose of the task was thus to assess the listeners' word recognition abilities in background noise.
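The place-frequency shift described above can be made concrete with Greenwood's (1990) place-frequency function for the human cochlea, F(x) = A(10^(ax) − k). The sketch below uses the commonly cited human constants (A = 165.4, a = 0.06 per mm, k = 0.88, cochlear length ≈ 35 mm) and round insertion depths chosen purely for illustration; it does not represent the frequency allocation of any particular device:

```python
def greenwood_hz(mm_from_apex, A=165.4, a=0.06, k=0.88):
    """Greenwood place-frequency map: characteristic frequency (Hz)
    at a point `mm_from_apex` along the ~35 mm human cochlea."""
    return A * (10 ** (a * mm_from_apex) - k)

COCHLEA_MM = 35.0  # approximate human cochlear duct length

# Deepest cochlear place reached by a short hybrid-style array vs. a
# longer traditional array (insertion depth measured from the base).
for depth_mm in (10.0, 24.0):
    place = COCHLEA_MM - depth_mm  # distance of that place from the apex
    print(f"{depth_mm:.0f} mm insertion reaches the ~{greenwood_hz(place):.0f} Hz place")
```

Under these assumptions, a 10-mm array only reaches cochlear places tuned near 5 kHz, leaving the low-frequency apical region mechanically untouched, while its electric channels must still carry speech information from well below that frequency, hence the basal shift Turner et al. describe.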
First, it should be noted that speech recognition thresholds were considerably lower in some Hybrid users than those of any of the

traditional CI users (Turner et al., 2008). This indicates that the Hybrid users were able to recognize speech in noisier conditions than the traditional CI users. The results indicated a 9 dB advantage of the Hybrid CI over the traditional CI for speech recognition in noise. Additionally, the researchers plotted the Hybrid users' speech recognition thresholds as a function of their pure tone thresholds to determine how much low frequency hearing must be preserved to achieve this benefit. The data suggested that the Hybrid CI advantage persists unless the hearing loss approaches profound levels. The second experiment addressed the shifting of speech information in Hybrid users. Here, Hybrid users' speech recognition abilities were measured under several different types of extreme distortions of the normal place-frequency mapping of the cochlea. No consistent differences were found in speech recognition ability among the different maps, suggesting flexibility of the auditory system to integrate acoustic and electric information under distorted conditions (Turner et al., 2008). Turner et al. thus demonstrated a significant advantage in speech recognition in noise when using a Hybrid CI compared to a traditional CI. Additionally, the second experiment showed that Hybrid CI users can integrate low and high frequency information effectively despite the shifting of speech information; listeners adapt to the distorted speech input and improve speech perception. Both sets of data illustrate the success of the Iowa/Nucleus Hybrid CI.

Auditory Training to Improve Speech Perception Abilities

The literature discussed thus far demonstrates the benefit of additional low-frequency hearing for speech recognition in both quiet and noisy conditions for hearing-impaired individuals using a CI.
Furthermore, Hybrid CI studies have demonstrated an even greater advantage in speech perception ability when compared to performance with a traditional CI. However, the issue of variability in outcomes remains. Learning electrically stimulated

speech patterns can be a new and difficult experience for many CI recipients. Active auditory rehabilitation may be one way to maximize the benefit of implantation for CI users. There has been some research on the effectiveness of auditory training on speech perception for HA users and traditional CI users. However, auditory training studies have generally focused on lexical information rather than indexical cues. Additionally, the data on auditory training for listeners using EAS are very limited. The goal of this section is to briefly present the available research on auditory training intended to improve speech perception in HA users and traditional CI users. Burk, Humes, Amos, and Strauser (2006) trained hearing-impaired listeners on an isolated word list produced by a single talker. The authors evaluated the training's effect on recognition of speech from novel talkers in background noise. Two groups of subjects were recruited: noise-masked young normal-hearing adults, used to model the performance of elderly hearing-impaired listeners, and elderly hearing-impaired listeners who used HAs. Over a span of nine to 14 days, listeners were trained on a set of 75 monosyllabic words spoken by a single female talker. Word recognition was tested prior to auditory training and again upon completion of training. Test stimuli consisted of both trained words (i.e., words used in training) and novel words (i.e., words not used in training), produced by the talker used in training and also by three unfamiliar talkers. Word recognition of both trained and untrained words was measured before and after training in both open- and closed-set response conditions. When trained words were produced by the training talker, both participant groups were better at recognizing the familiar words than the novel words.
A small, non-significant improvement was seen on untrained words produced by the same talker, indicating some generalization to novel words (Burk et al., 2006). Similarly, when the test words were presented by unfamiliar talkers, the subjects were better at recognizing the trained words than the novel words. The significant performance increase on trained words was maintained across novel talkers, suggesting that the listeners relied on word memorization rather than talker-specific cues. Older hearing-impaired listeners were

able to improve word recognition to the same degree as the young normal-hearing listeners. However, training on isolated words was not sufficient to transfer to fluent speech involving sentences. Maintenance measures were taken six months after treatment for the hearing-impaired group. Subjects' accuracy had decreased from 83.5% to 62.9% but remained significantly better than the pretreatment score of 37.6%. Moreover, only one hour of training was needed for listeners to return to their previous post-training performance levels. This study demonstrated that hearing-impaired listeners who use HAs were able to significantly improve word recognition with training, in noise, and with both familiar and novel talkers. However, generalization to conversational speech was not observed; it may therefore be better to train with sentences rather than isolated words. Fu and Galvin (2007) developed a computer-assisted speech-training (CAST) program for adult CI users. The software targeted progressively more difficult acoustic contrasts among vowels and consonants using monosyllabic words. Visual feedback indicated whether each response was correct, and auditory feedback repeatedly contrasted the subject's incorrect response with the correct one. The authors tested the program's effectiveness in improving CI recipients' speech perception, along with possible generalization to music perception, after each week of training. Subjects were asked to use the CAST program at home for one hour each day, five days a week, for one month. Both vowel and consonant recognition significantly improved for all participants (by 15.8% and 13.5%, respectively). Because vowel and consonant contrasts were trained using only monosyllabic words, the accompanying improvement in sentence recognition indicated some generalization (Fu & Galvin, 2007).
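The review above does not spell out CAST's advancement criteria, so the following is a hypothetical sketch of one way progressive difficulty could be driven; the function name, window size, and criterion are all invented for illustration:

```python
def next_level(level, recent_correct, n_recent=10, advance_at=8, max_level=5):
    """Advance to a harder contrast level once the trainee answers at
    least `advance_at` of the last `n_recent` trials correctly."""
    window = recent_correct[-n_recent:]
    if len(window) == n_recent and sum(window) >= advance_at:
        return min(level + 1, max_level)
    return level

# A trainee with 9/10 recent trials correct moves up; one with 4/10 stays.
print(next_level(2, [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))  # 3
print(next_level(2, [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]))  # 2
```

A sliding-window rule of this kind keeps the task near the edge of the trainee's ability, which is the general idea behind contrast-by-contrast progression in adaptive training software.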
Additionally, there was some generalization from the music-training task involving melodic contour identification to an untrained task of familiar melody identification. However, there was large variability in the amount of training time it took for individuals to improve in speech perception. Some individuals made significant improvements after one day of training, while others needed the

full five days to reach significant improvement in speech recognition. Still, this study demonstrated training effectiveness for enhancing speech and music recognition in CI users. Both studies demonstrated the effectiveness of auditory training in improving speech perception for hearing-impaired adults. However, the training focused on improving perception of linguistic information. Given the importance of low-frequency information for speech recognition in CI users, it is reasonable to ask whether training listeners on the indexical properties of speech would also be effective. Some studies have examined the effect of training on both linguistic and indexical speech perception; however, research specifically training indexical information has been limited, mostly involving normal-hearing listeners using traditional CI simulations. Fu, Galvin, Wang, and Nogaki (2005) trained CI users in multitalker vowel and consonant recognition and voice gender recognition. Participants completed auditory training with speech stimuli at home for one hour each day, five days a week, for one month or longer. Training used multiple talkers (i.e., two female and two male) and targeted minimal speech contrasts presented in monosyllabic words and nonsense words. Training progressed in difficulty from phoneme discrimination (i.e., a same/different response) to phoneme identification. Stimuli and talkers used in training were not reused in the test stimulus set. Open-set word recognition scores significantly improved after four weeks of training (from 27.9% to 55.8%). Nevertheless, many participants' phoneme recognition scores remained in the poor-to-fair range. Improvement was highly variable, which could have been a result of the at-home implementation of the training protocol (Fu et al., 2005). Results showed significant improvement in consonant and vowel recognition after training but not in voice gender recognition (Fu et al., 2005).
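Voice gender recognition in tasks like this rests largely on the average F0 difference between typical male (~120 Hz) and female (~210 Hz) voices. As a self-contained illustration of that principle, and not the method used by Fu et al., the sketch below estimates F0 by autocorrelation on a synthetic tone and classifies it against a single, assumed 165 Hz boundary:

```python
import math

def estimate_f0(signal, fs, fmin=60.0, fmax=400.0):
    """Crude autocorrelation F0 estimate: pick the lag with maximal
    autocorrelation within the plausible voice-pitch range."""
    lo, hi = int(fs / fmax), int(fs / fmin)
    best_lag = max(
        range(lo, hi + 1),
        key=lambda lag: sum(signal[i] * signal[i + lag]
                            for i in range(len(signal) - lag)),
    )
    return fs / best_lag

def guess_gender(f0_hz, boundary_hz=165.0):
    """Toy classifier: one F0 boundary between typical male (~120 Hz)
    and female (~210 Hz) speaking ranges. The boundary is an assumption."""
    return "female" if f0_hz > boundary_hz else "male"

# Quarter-second synthetic "voices" at typical male and female F0s.
fs = 8000
male_like = [math.sin(2 * math.pi * 120 * t / fs) for t in range(fs // 4)]
female_like = [math.sin(2 * math.pi * 210 * t / fs) for t in range(fs // 4)]
print(guess_gender(estimate_f0(male_like, fs)),
      guess_gender(estimate_f0(female_like, fs)))  # male female
```

Real voices carry many other gender cues (formant frequencies, breathiness), which is one reason gender identification can survive even the coarse F0 coding of a CI.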
Although the materials included female and male speakers, the training focused on contrastive phoneme recognition and not on talker information. Furthermore, although phoneme recognition improved, most participants still performed in the poor-to-fair range after training. This suggests that training improvements would not


More information

L2 EXPERIENCE MODULATES LEARNERS USE OF CUES IN THE PERCEPTION OF L3 TONES

L2 EXPERIENCE MODULATES LEARNERS USE OF CUES IN THE PERCEPTION OF L3 TONES L2 EXPERIENCE MODULATES LEARNERS USE OF CUES IN THE PERCEPTION OF L3 TONES Zhen Qin, Allard Jongman Department of Linguistics, University of Kansas, United States qinzhenquentin2@ku.edu, ajongman@ku.edu

More information

Content and Procedural Learning in Repeated Sentence Tests of Speech Perception

Content and Procedural Learning in Repeated Sentence Tests of Speech Perception Content and Procedural Learning in Repeated Sentence Tests of Speech Perception E. William Yund 1,2 and David L. Woods 2,3 Objectives: Repeated testing of speech perception is unavoidable in evaluating

More information

The Audiogram: Explanation and Significance

The Audiogram: Explanation and Significance R Recently, I ve been trying to organize some of the columns and articles I ve written over the past ten years. As I was looking through them, it became apparent that I ve neglected to discuss what is

More information

Learning to Listen with Hearing Technologies: An interdisciplinary perspective on aural rehabilitation

Learning to Listen with Hearing Technologies: An interdisciplinary perspective on aural rehabilitation Learning to Listen with Hearing Technologies: An interdisciplinary perspective on aural rehabilitation Melissa Ferrello, Au.D., CCC-A, FAAA John Henry, M.S., CCC-SLP Erika King, M.A., CCC-SLP April 13,

More information

Preservation of Hearing in Cochlear Implant Surgery: Advantages of Combined Electrical and Acoustical Speech Processing

Preservation of Hearing in Cochlear Implant Surgery: Advantages of Combined Electrical and Acoustical Speech Processing The Laryngoscope Lippincott Williams & Wilkins, Inc. 2005 The American Laryngological, Rhinological and Otological Society, Inc. Preservation of Hearing in Cochlear Implant Surgery: Advantages of Combined

More information

SO YOUR PARENT NEEDS HEARING AIDS (The Adult Child s Guide to Hearing Loss and Hearing Aids) By Patty Earl, Ph.D., CCC-A

SO YOUR PARENT NEEDS HEARING AIDS (The Adult Child s Guide to Hearing Loss and Hearing Aids) By Patty Earl, Ph.D., CCC-A SO YOUR PARENT NEEDS HEARING AIDS (The Adult Child s Guide to Hearing Loss and Hearing Aids) By Patty Earl, Ph.D., CCC-A Welcome to world of hearing loss and hearing aids. Your mother or father may have

More information

The Role of the Efferent System in Auditory Performance in Background Noise

The Role of the Efferent System in Auditory Performance in Background Noise The Role of the Efferent System in Auditory Performance in Background Noise Utah Speech-Language Hearing Association, 2015 Skyler G. Jennings Ph.D., Au.D. CCC-A Outline Hearing in a noisy background Normal

More information

Cognition and Hearing Loss

Cognition and Hearing Loss Cognition and Hearing Loss Sophia E. Kramer, Adriana A. Zekveld Dept. of Otolaryngology/Audiology EMGO Institute for Health and Care Research VU University Medical Center Amsterdam, The Netherlands Hearing

More information

Physiology of Hearing Dr. Hesham Kozou Is hearing important? Communica i ti ttion Language Localization sound sources

Physiology of Hearing Dr. Hesham Kozou Is hearing important? Communica i ti ttion Language Localization sound sources Physiology of Hearing Dr. Hesham Kozou Undergraduate Round Courses 2008-2009 2009 Is hearing important? Communication Hearing is essential to Language Localization Determining the location of unseen sound

More information

CLASSIFICATION OF STOP CONSONANT PLACE OF ARTICULATION: COMBINING ACOUSTIC ATTRIBUTES

CLASSIFICATION OF STOP CONSONANT PLACE OF ARTICULATION: COMBINING ACOUSTIC ATTRIBUTES CLASSIFICATION OF STOP CONSONANT PLACE OF ARTICULATION: COMBINING ACOUSTIC ATTRIBUTES Atiwong Suchato Speech Communication Group, Laboratory of Electronics, MIT, Cambridge, MA, USA atiwong@mit.edu ABSTRACT

More information

Stress and Accent in Tunisian Arabic

Stress and Accent in Tunisian Arabic Stress and Accent in Tunisian Arabic By Nadia Bouchhioua University of Carthage, Tunis Outline of the Presentation 1. Rationale for the study 2. Defining stress and accent 3. Parameters Explored 4. Methodology

More information

Lecture 1-8: Audio Recording Systems

Lecture 1-8: Audio Recording Systems Lecture 1-8: Audio Recording Systems Overview 1. Why do we need to record speech? We need audio recordings of speech for a number of reasons: for off-line analysis, so that we can listen to and transcribe

More information

MICHIGAN TEST FOR TEACHER CERTIFICATION (MTTC) TEST OBJECTIVES FIELD 062: HEARING IMPAIRED

MICHIGAN TEST FOR TEACHER CERTIFICATION (MTTC) TEST OBJECTIVES FIELD 062: HEARING IMPAIRED MICHIGAN TEST FOR TEACHER CERTIFICATION (MTTC) TEST OBJECTIVES Subarea Human Development and Students with Special Educational Needs Hearing Impairments Assessment Program Development and Intervention

More information

Maximizing Performance in CI Recipients: Programming Concepts. December 4-5, 2015

Maximizing Performance in CI Recipients: Programming Concepts. December 4-5, 2015 Maximizing Performance in CI Recipients: Programming Concepts December 4-5, 2015 NYU Faculty David Landsberger, PhD Laboratory for Translational Research Department of Otolaryngology William H. Shapiro,

More information

Dr. Abdel Aziz Hussein Lecturer of Physiology Mansoura Faculty of Medicine

Dr. Abdel Aziz Hussein Lecturer of Physiology Mansoura Faculty of Medicine Physiological Basis of Hearing Tests By Dr. Abdel Aziz Hussein Lecturer of Physiology Mansoura Faculty of Medicine Introduction Def: Hearing is the ability to perceive certain pressure vibrations in the

More information

Areas of Processing Deficit and Their Link to Areas of Academic Achievement

Areas of Processing Deficit and Their Link to Areas of Academic Achievement Areas of Processing Deficit and Their Link to Areas of Academic Achievement Phonological Processing Model Wagner, R.K., Torgesen, J.K., & Rashotte, C.A. (1999). Comprehensive Test of Phonological Processing.

More information

SmartFocus Article 1 - Technical approach

SmartFocus Article 1 - Technical approach SmartFocus Article 1 - Technical approach Effective strategies for addressing listening in noisy environments The difficulty of determining the desired amplification for listening in noise is well documented.

More information

Talker-specific Influences on Phonetic Boundaries and Internal Category Structure

Talker-specific Influences on Phonetic Boundaries and Internal Category Structure University of Connecticut DigitalCommons@UConn Master's Theses University of Connecticut Graduate School 6-13-2013 Talker-specific Influences on Phonetic Boundaries and Internal Category Structure Janice

More information

Adult SNHL: Hearing Aids and Assistive Devices

Adult SNHL: Hearing Aids and Assistive Devices Adult SNHL: Hearing Aids and Assistive Devices Gordon Shields, M.D. Faculty Advisor: Arun Gadre, M.D. The University of Texas Medical Branch Department of Otolaryngology Grand Rounds Presentation March

More information

Anatomy and Physiology of Hearing (added 09/06)

Anatomy and Physiology of Hearing (added 09/06) Anatomy and Physiology of Hearing (added 09/06) 1. Briefly review the anatomy of the cochlea. What is the cochlear blood supply? SW 2. Discuss the effects of the pinna, head and ear canal on the transmission

More information

Early vs. Late Onset Hearing Loss: How Children Differ from Adults. Andrea Pittman, PhD Arizona State University

Early vs. Late Onset Hearing Loss: How Children Differ from Adults. Andrea Pittman, PhD Arizona State University Early vs. Late Onset Hearing Loss: How Children Differ from Adults Andrea Pittman, PhD Arizona State University Heterogeneity of Children with Hearing Loss Chronological age Age at onset Age at identification

More information

Relationship of Hearing Loss to Listening and Learning eeds

Relationship of Hearing Loss to Listening and Learning eeds Child s ame: 16-25 db HEARI G LOSS Accommodations and Services Impact of a hearing loss that is approximately 20 db can be compared to ability to hear when index fingers are placed in your ears. Child

More information

Speech recognition in normal hearing and sensorineural hearing loss as a function of the number of spectral channels

Speech recognition in normal hearing and sensorineural hearing loss as a function of the number of spectral channels Speech recognition in normal hearing and sensorineural hearing loss as a function of the number of spectral channels Deniz Başkent a House Ear Institute, Department of Auditory Implants, 2100 West Third

More information

The Effects of Hearing Impairment and Aging on Spatial Processing

The Effects of Hearing Impairment and Aging on Spatial Processing The Effects of Hearing Impairment and Aging on Spatial Processing Helen Glyde, 1 3 Sharon Cameron, 1,2 Harvey Dillon, 1,2 Louise Hickson, 1,3 and Mark Seeto 1,2 Objectives: Difficulty in understanding

More information

HEARING IMPAIRMENT. (Including Deafness) I. DEFINITION

HEARING IMPAIRMENT. (Including Deafness) I. DEFINITION (Including Deafness) I. DEFINITION A. "Deafness" means a hearing impairment that is so severe that the child is impaired in processing linguistic information through hearing, with or without amplification,

More information

icommunicate SPEECH & COMMUNICATION THERAPY Hearing Aids and Cochlear Implants

icommunicate SPEECH & COMMUNICATION THERAPY Hearing Aids and Cochlear Implants icommunicate SPEECH & COMMUNICATION THERAPY Hearing Aids and Cochlear Implants Hearing Assessment To determine an individual s level of hearing or investigate hearing loss, an assessment needs to take

More information

Recognition of Synthetic Speech by HearingImpaired Elderly Listeners

Recognition of Synthetic Speech by HearingImpaired Elderly Listeners Journal of Speech and Hearing Research, Volume 34, 118-1184, October 1991 Recognition of Synthetic Speech by HearingImpaired Elderly Listeners Larry E. Humes Kathleen J. Nelson David B. Pisoni Indiana

More information

PERSONAL COMPUTER SOFTWARE VOWEL TRAINING AID FOR THE HEARING IMPAIRED

PERSONAL COMPUTER SOFTWARE VOWEL TRAINING AID FOR THE HEARING IMPAIRED PERSONAL COMPUTER SOFTWARE VOWEL TRAINING AID FOR THE HEARING IMPAIRED A. Matthew Zimmer, Bingjun Dai, Stephen A. Zahorian Department of Electrical and Computer Engineering Old Dominion University Norfolk,

More information

Lecture 2, Human cognition

Lecture 2, Human cognition Human Cognition An important foundation for the design of interfaces is a basic theory of human cognition The information processing paradigm (in its most simple form). Human Information Processing The

More information

PERCEPTION OF VOWEL QUANTITY BY ENGLISH LEARNERS

PERCEPTION OF VOWEL QUANTITY BY ENGLISH LEARNERS PERCEPTION OF VOWEL QUANTITY BY ENGLISH LEARNERS OF CZECH AND NATIVE LISTENERS Kateřina Chládková Václav Jonáš Podlipský Karel Janíček Eva Boudová 1 INTRODUCTION Vowels in both English and Czech are realized

More information

- Assessment of hearing by PTA provides only partial pictures of the patient's auditory status

- Assessment of hearing by PTA provides only partial pictures of the patient's auditory status - Assessment of hearing by PTA provides only partial pictures of the patient's auditory status Because it does not give any direct information regarding to the patient`s ability to hear and understand

More information

The loudness war is fought with (and over) compression

The loudness war is fought with (and over) compression The loudness war is fought with (and over) compression Susan E. Rogers, PhD Berklee College of Music Dept. of Music Production & Engineering 131st AES Convention New York, 2011 A summary of the loudness

More information

Chapter 12: Sound Localization and the Auditory Scene

Chapter 12: Sound Localization and the Auditory Scene Chapter 12: Sound Localization and the Auditory Scene What makes it possible to tell where a sound is coming from in space? When we are listening to a number of musical instruments playing at the same

More information

Speech Perception in Noisy Environments. Speech Perception in Noisy Environments. Overview. Stimuli and Task. Attention in Multi-talker Environments

Speech Perception in Noisy Environments. Speech Perception in Noisy Environments. Overview. Stimuli and Task. Attention in Multi-talker Environments NEURAL BASES OF SPEECH RECOGNITION IN NOISE: INTEGRATING WHAT WE HEAR, SEE, AND KNOW Speech Perception in Noisy Environments Speech is the most important everyday stimulus for humans Prone to corruption

More information

Directional Microphone Strategies for Adults with Dual Sensory Impairments

Directional Microphone Strategies for Adults with Dual Sensory Impairments Directional Microphone Strategies for Adults with Dual Sensory Impairments Cyndi E. Trueheart, M.S. Connecticut Veteran s Administration Medical Center West Haven, CT Dual Sensory Impairment Defined Program

More information

Speech Understanding in the Elderly

Speech Understanding in the Elderly J Am Acad Audiol 7 : 161-167 (1996) Speech Understanding in the Elderly Larry E. Humes* Abstract Three basic hypotheses regarding the speech-understanding difficulties of the elderly are reviewed : the

More information

Cochlear implants for children and adults with severe to profound deafness

Cochlear implants for children and adults with severe to profound deafness Issue date: January 2009 Review date: February 2011 Cochlear implants for children and adults with severe to profound deafness National Institute for Health and Clinical Excellence Page 1 of 41 Final appraisal

More information

Auditory Cohesion Problems. What is central auditory processing? What is a disorder of auditory processing?

Auditory Cohesion Problems. What is central auditory processing? What is a disorder of auditory processing? 1 Auditory Cohesion Problems Auditory cohesion skills - drawing inferences from conversations, understanding riddles, or comprehending verbal math problems - require heightened auditory processing and

More information

Progress Report Spring 20XX

Progress Report Spring 20XX Progress Report Spring 20XX Client: XX C.A.: 7 years Date of Birth: January 1, 19XX Address: Somewhere Phone 555-555-5555 Referral Source: UUUU Graduate Clinician: XX, B.A. Clinical Faculty: XX, M.S.,

More information

Position Paper on Cochlear Implants in Children

Position Paper on Cochlear Implants in Children Position Paper on Cochlear Implants in Children Position: The Canadian Association of Speech-Language Pathologists and Audiologists (CASLPA) supports cochlear implantation in children where appropriate

More information

Introducing Voice Analysis Software into the Classroom: how Praat Can Help French Students Improve their Acquisition of English Prosody.

Introducing Voice Analysis Software into the Classroom: how Praat Can Help French Students Improve their Acquisition of English Prosody. Introducing Voice Analysis Software into the Classroom: how Praat Can Help French Students Improve their Acquisition of English Prosody. Laurence Delrue Université de Lille 3 (France) laurence.delrue@univ-lille3.fr

More information

T here are approximately 3 million people in the US with

T here are approximately 3 million people in the US with International Journal of Scientific & Engineering Research Volume 3, Issue 12, December-2012 1 Continuous Interleaved Sampled (CIS) Signal Processing Strategy for Cochlear Implants MATLAB Simulation Program

More information

SPEECH AUDIOMETRY. @ Biswajeet Sarangi, B.Sc.(Audiology & speech Language pathology)

SPEECH AUDIOMETRY. @ Biswajeet Sarangi, B.Sc.(Audiology & speech Language pathology) 1 SPEECH AUDIOMETRY Pure tone Audiometry provides only a partial picture of the patient s auditory sensitivity. Because it doesn t give any information about it s ability to hear and understand speech.

More information

Infancy: Cognitive Development

Infancy: Cognitive Development Infancy: Cognitive Development Chapter 6 Child Psychology Make sure you understand these concepts : Piaget s Stage Theory Schemas: assimilation & accommodation Developments in Sensorimotor Stage Sub-stages

More information

Technical Report. Overview. Revisions in this Edition. Four-Level Assessment Process

Technical Report. Overview. Revisions in this Edition. Four-Level Assessment Process Technical Report Overview The Clinical Evaluation of Language Fundamentals Fourth Edition (CELF 4) is an individually administered test for determining if a student (ages 5 through 21 years) has a language

More information

Hearing Tests And Your Child

Hearing Tests And Your Child HOW EARLY CAN A CHILD S HEARING BE TESTED? Most parents can remember the moment they first realized that their child could not hear. Louise Tracy has often told other parents of the time she went onto

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ ICA 213 Montreal Montreal, Canada 2-7 June 213 Communication Session 3aSCb: Components of Informational Masking 3aSCb4.

More information

Cochlear Implants: A Communication Choice. Cochlear Implants: A Communication Tool. www.cochlear.com

Cochlear Implants: A Communication Choice. Cochlear Implants: A Communication Tool. www.cochlear.com Cochlear Ltd ABN 96 002 618 073 14 Mars Road, PO Box 629 Lane Cove NSW 2066 Australia Tel: 61 2 9428 6555 Fax: 61 2 9428 6353 Cochlear Americas 400 Inverness Parkway Suite 400 Englewood CO 80112 USA Tel:

More information

CONVENTIONAL AND DIGITAL HEARING AIDS

CONVENTIONAL AND DIGITAL HEARING AIDS CONVENTIONAL AND DIGITAL HEARING AIDS Coverage for services, procedures, medical devices and drugs are dependent upon benefit eligibility as outlined in the member's specific benefit plan. This Medical

More information

icommunicate SPEECH & COMMUNICATION THERAPY Hearing Assessment

icommunicate SPEECH & COMMUNICATION THERAPY Hearing Assessment icommunicate SPEECH & COMMUNICATION THERAPY Hearing Assessment Hearing Assessment To determine an individual s level of hearing or investigate hearing loss, an assessment needs to take place. There are

More information

Workshop Perceptual Effects of Filtering and Masking Introduction to Filtering and Masking

Workshop Perceptual Effects of Filtering and Masking Introduction to Filtering and Masking Workshop Perceptual Effects of Filtering and Masking Introduction to Filtering and Masking The perception and correct identification of speech sounds as phonemes depends on the listener extracting various

More information

Functional Communication for Soft or Inaudible Voices: A New Paradigm

Functional Communication for Soft or Inaudible Voices: A New Paradigm The following technical paper has been accepted for presentation at the 2005 annual conference of the Rehabilitation Engineering and Assistive Technology Society of North America. RESNA is an interdisciplinary

More information

EVALUATION OF BONE-CONDUCTION HEADSETS FOR USE IN MULTITALKER COMMUNICATION ENVIRONMENTS

EVALUATION OF BONE-CONDUCTION HEADSETS FOR USE IN MULTITALKER COMMUNICATION ENVIRONMENTS PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 49th ANNUAL MEETING 2005 1615 EVALUATION OF BONE-CONDUCTION HEADSETS FOR USE IN MULTITALKER COMMUNICATION ENVIRONMENTS Bruce N. Walker and Raymond

More information

Speech sounds. Room acoustics

Speech sounds. Room acoustics Modeling the effects of room-acoustics on speech reception and perception. Arthur Boothroyd, 2003. 1 Introduction Communication by spoken language involves a complex chain of events, as illustrated in

More information

Audio Examination. Place of Exam:

Audio Examination. Place of Exam: Audio Examination Name: Date of Exam: SSN: C-number: Place of Exam: The Handbook of Standard Procedures and Best Practices for Audiology Compensation and Pension Exams is available online. ( This is a

More information

Understanding Hearing Loss 404.591.1884. www.childrensent.com

Understanding Hearing Loss 404.591.1884. www.childrensent.com Understanding Hearing Loss 404.591.1884 www.childrensent.com You just found out your child has a hearing loss. You know what the Audiologist explained to you, but it is hard to keep track of all the new

More information