Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception


University of Iowa, Iowa Research Online: Theses and Dissertations, Summer 2014

Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception

Ruth Flaherty, University of Iowa. Copyright 2014 Ruth Flaherty. This thesis is available at Iowa Research Online.

Recommended Citation: Flaherty, Ruth. "Talker-identification training using simulations of hybrid CI hearing: generalization to speech recognition and music perception." MA (Master of Arts) thesis, University of Iowa, 2014. Part of the Speech Pathology and Audiology Commons.

TALKER-IDENTIFICATION TRAINING USING SIMULATIONS OF HYBRID CI HEARING: GENERALIZATION TO SPEECH RECOGNITION AND MUSIC PERCEPTION

by Ruth Mei Flaherty

A thesis submitted in partial fulfillment of the requirements for the Master of Arts degree in Speech Pathology and Audiology in the Graduate College of The University of Iowa, August 2014. Thesis Supervisor: Professor Karen I. Kirk

Copyright by RUTH MEI FLAHERTY 2014. All Rights Reserved.

Graduate College, The University of Iowa, Iowa City, Iowa

CERTIFICATE OF APPROVAL: MASTER'S THESIS

This is to certify that the Master's thesis of Ruth Mei Flaherty has been approved by the Examining Committee for the thesis requirement for the Master of Arts degree in Speech Pathology and Audiology at the August 2014 graduation.

Thesis Committee: Karen I. Kirk, Thesis Supervisor; Carolyn J. Brown; Bruce J. Tomblin

To my parents: Amelia and Dennis Flaherty

ACKNOWLEDGMENTS

Foremost, I would like to express my sincerest gratitude to my advisor, Dr. Karen Kirk, for the continuous support of my Master's study and research, and for her patience and immense knowledge. Her guidance helped me throughout the research and writing of this thesis. Additionally, I would like to thank the rest of my thesis committee, Dr. Bruce Tomblin and Dr. Carolyn Brown, for their insights and encouragement. My sincerest thanks also go to Virginia Driscoll, for all of her quick problem solving, patience, and constant willingness to help. I also thank my fellow labmates Lauren Dowdy and Nora Prachar for their continual hard work, early mornings and late nights, and for all of the fun times we had over the past couple of years. I am also extremely grateful for the support and patience of my friends, family, and loving boyfriend. Without any of the people I have mentioned, the completion of this thesis would not have been possible.

ABSTRACT

The speech signal carries two types of information: linguistic information (the message content) and indexical information (acoustic cues about the talker). In the traditional view of speech perception, the acoustic differences among talkers were considered noise. In this view, the listener's task was to strip away unwanted variability to uncover the idealized phonetic representation of the spoken message. A more recent view suggests that both talker information and linguistic information are stored in memory. Rather than being unwanted noise, talker information aids in speech recognition, especially under difficult listening conditions. For example, it has been shown that normal-hearing listeners who completed voice recognition training were subsequently better at recognizing speech from familiar versus unfamiliar voices. For individuals with hearing loss, access to both types of information may be compromised. Some studies have shown that cochlear implant (CI) recipients are relatively poor at using indexical speech information because low-frequency speech cues are poorly conveyed in standard CIs. However, some CI users with preserved residual hearing can now combine acoustic amplification of low-frequency information (via a hearing aid) with electrical stimulation in the high frequencies (via the CI). When a listener uses a CI in one ear and a hearing aid in the opposite ear, this is referred to as bimodal hearing. A second way electrical and acoustic stimulation is achieved is through a new CI system, the hybrid CI. This device combines electrical stimulation with acoustic hearing in the same ear, via a shortened electrode array that is intended to preserve residual low-frequency hearing in the apical portion of the cochlea. It may be that hybrid CI users can learn to use voice information to enhance speech understanding.
This study will assess voice learning and its relationship to talker discrimination, music perception, and spoken word recognition in simulations of hybrid CI or bimodal hearing. Specifically, our research questions are as follows: (1) Does training increase talker identification? (2) Does familiarity with the talker or linguistic message enhance spoken

word recognition? (3) Does enhanced spectral processing (as demonstrated by improved talker recognition) generalize to non-linguistic tasks such as talker discrimination and music perception tasks? To address our research questions, we will recruit normal-hearing adults to participate in eight talker-identification training sessions. Prior to training, subjects will be administered the forward and backward digit span task to assess short-term memory and working memory abilities. We hypothesize that there will be a correlation between the ability to learn voices and memory. Subjects will also complete a talker-discrimination test and a music perception test that require the use of spectral cues. We predict that training will generalize to performance on these tasks. Lastly, a spoken word recognition (SWR) test will be administered before and after talker-identification training. The subjects will listen to sentences produced by eight talkers (four male, four female) and verbally repeat what they heard. Half of the sentences will contain keywords repeated in training and half will contain keywords not repeated in training. Additionally, subjects will have heard sentences from only half of the talkers during training. We hypothesize that subjects will show an advantage for trained keywords over non-trained keywords and will perform better with familiar talkers than with unfamiliar talkers.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER
I. REVIEW OF THE LITERATURE
   A More Recent View of Speech Perception
   The Effects of Hearing Loss on Speech Perception
   The Cochlear Implant
   The Role of Indexical Information
   Electrical and Acoustic Stimulation (EAS)
   The Hybrid Cochlear Implant
   Auditory Training to Improve Speech Perception Abilities
   Talker-Identification Training in Simulated Binaural Hearing
II. METHODS
   Participants
   Stimuli
      Training Stimuli
      Talker Training Sets
      Spoken Word Recognition Test Stimuli
   Cochlear Implant Simulation
      Hybrid CI Simulations
   Pre- and Post-Training Tests
      Hearing in Noise Test (HINT)
      Spoken Word Recognition Test (SWR)
      Test of Timbre Recognition (TTR)
      Talker Discrimination Test (TDT)
      Digit Span Test (Forward and Backward)
   Procedure
      Training Sessions
      Familiarization (Phase 1)
      Voice Identification Training (Phase 2)
      Voice Identification Testing (Phase 3)
III. RESULTS
   Experimental Group Participants
   Phase 1 Familiarization
   Phase 2 Voice Identification Training

   Phase 3 Voice Identification Testing
   Comparison of Pre Versus Post Tests
      Spoken Word Recognition
      Talker Discrimination Test
      Music Timbre Test of Recognition
      Digit Span Test (Forward and Backward)
   Relationship Between Voice Learning and Performance on Other Tasks
IV. DISCUSSION
   Research Questions
   Voice Learning
   Effect of Key Word Familiarity on Voice Learning
   Effect of Talker and Key Word Familiarity on Spoken Word Recognition
   Transfer of Learning to Other Tasks
      Spoken Word Recognition
      Talker Discrimination
      Music Test of Timbre Recognition
   Factors that Influence Learning
   Conclusion
REFERENCES

LIST OF TABLES

Table
1. Demographic Characteristics of the Participants
2. Correlation between Changes in Pre-Post Task Performances and Change in Phase 3 Voice Identification Performance
3. Correlation between Changes in Pre-Post Task Performances and Forward and Backward Digit Span Performances

LIST OF FIGURES

Figure
1. Schematic of the 4-session talker-identification training protocol for experimental subjects
2. Voice Identification During Training (Phase 2) Across Key Word Types
3. Voice Identification During Training (Phase 2) for Novel Key Words
4. Voice Identification During Training (Phase 2) for Repeated Key Words
5. Voice Identification During Testing (Phase 3) Across Key Word Types
6. Voice Identification During Testing (Phase 3) for Novel Key Words
7. Voice Identification During Testing (Phase 3) for Repeated Key Words
8. Average Pre-Post SWR Scores for Experimental and Control Groups
9. Individual Pre-Post SWR Scores for Experimental and Control Groups
10. Pre-Post SWR Scores as a Function of Talker Familiarity and Novel-Key-Word Type for Experimental Group Participants
11. Pre-Post SWR Scores as a Function of Talker and Novel-Key-Word Type for Control Group Participants
12. Pre-Post SWR Scores as a Function of Talker Familiarity and Repeated-Key-Word Type for Experimental Group Participants
13. Pre-Post SWR Scores as a Function of Talker and Repeated-Key-Word Type for Control Group Participants
14. Average Pre-Post Talker Discrimination Test Scores for Experimental and Control Groups
15. Average Pre-Post Timbre Test of Recognition Scores for Experimental and Control Groups
16. Average Digit Span Pretest Scores (Forward and Backward) by Experimental and Control Groups

CHAPTER I
REVIEW OF THE LITERATURE

In the traditional view of speech perception, indexical cues were considered noise in the speech signal that obscured the linguistic message, making it more difficult to extract the meaning of the message (Nygaard & Pisoni, 1998). Talker identification was assumed to involve mechanisms distinct and separate from those underlying comprehension of the linguistic message; therefore, the two were viewed as independent processes of speech perception (Nygaard & Pisoni, 1998). Early studies in search of invariant speech cues important for speech perception hypothesized that linguistic information constituted the significant part of the acoustic signal, in contrast to background noise and linguistically irrelevant features (i.e., indexical properties) (Liberman et al., 1967). In fact, in an early investigation of the fundamental cues for the perception of speech, Liberman and colleagues proposed that each person possesses an auditory encoder with which to recover phonemes in the face of talker acoustic variability (i.e., stress, intonation, and tempo). It was believed that this auditory encoder was responsible for normalizing talker differences to uncover idealized representations of speech sound units. In keeping with this traditional perspective, early speech perception studies sought to remove talker variability from experimental stimuli by specifically identifying the acoustic cues needed to perceive various phonemes and phoneme classes. Researchers analyzed the acoustic cues necessary for phoneme recognition and created synthetic speech to use in experiments of speech perception. However, the question of exactly how listeners are able to perceive speech amidst acoustic and contextual variability in everyday speech remained. Indeed, the shortcomings of the traditional approach were revealed as theoretical developments in cognitive neuroscience and new computational tools emerged (Pisoni & Levi, 2005).
Additionally, collaborative research developments between speech perception researchers and cognitive psychologists in the area of categorization (mental representations

of categories used to group new ideas and objects) and linguistics in the study of frequency-based phonology (how the frequency of use of particular phonemes may affect the recognition of those phonemes and the phonology of a language) offered new insight into the aforementioned issues involving variability and invariance in the perception of speech (Pisoni & Levi, 2005). Up to this point, speech scientists had struggled to explain the successful interpretation of the speech signal in terms of two conditions inherent in the traditional view: linearity and invariance. Linearity is the condition in which each phoneme has a corresponding stretch of sound and these stretches are joined in sequence, whereas invariance captures the idea that each speech sound has a fixed set of acoustic features across contexts (Pisoni & Levi, 2005). Coarticulatory effects clearly prevent both of these conditions from being met. Coarticulation is the presence of multiple phonemes, and their corresponding acoustic features, at a single point in time in the speech signal. Acoustic features also vary with the phonetic environment in which a speech sound is produced. These revelations led to a reconceptualization of the theory underlying speech perception, and new approaches for studying speech perception and spoken word recognition emerged.

A More Recent View of Speech Perception

In contrast with the traditional abstractionist view of speech perception, new theoretical approaches focused on the interaction among speech perception, speech production, and memory in language processing. Such approaches are referred to as episodic approaches, rooted in the assumption that spoken words are represented in lexical memory as a collection of specific individual perceptual episodes. This view is supported by studies of the processing of stimulus variability (Pisoni & Levi, 2005).
Goldinger, Pisoni, and Logan (1991) investigated the effect of talker variability on memory and recall to examine the relationship between perception of a linguistic message and processing of talker information. Either a single talker or multiple talkers presented lists of easily recognizable words and

more difficult words at varying rates. The ease or difficulty with which words could be recognized was determined by the word's lexical properties, including word frequency (how often a word occurs), lexical neighborhood density (the number of phonemically similar words, or neighbors, to the target word), and neighborhood frequency (the average frequency of all lexical neighbors of a target). At faster presentation rates, words from single-talker lists were recalled more accurately than words from multiple-talker lists, with the strongest recall at the beginning of the list. However, when the presentation rate was slowed, words in the early list positions produced by multiple talkers were recalled more accurately than those from single-talker lists. It was hypothesized that listeners may attend more intently to pairings of words with talkers when allowed more time for word rehearsal and transfer into long-term memory (Goldinger, Pisoni, & Logan, 1991). This stored distinctive information can then be used to make lists more discriminable, improving recall of the words. Although the effects of presentation rate demonstrated that talker variability affects perceptual encoding and rehearsal processes, the advantage in recall for easy words versus hard words did not reach significance. The results of this study suggest that the processing of talker information is an integral part of speech perception, requiring perceptual, attentional, and memory processes. Such results provide evidence against the abstractionist theory and for the joint processing of indexical and lexical information. With the acknowledgement that indexical cues can assist the listener in recovering an intended message, more recent research has investigated how training with these specific factors may enhance speech perception.
Nygaard and Pisoni (1998) conducted three experiments to study the recognition of spoken words in isolation and in sentence contexts after talker-recognition training. In all three experiments, normal-hearing participants were

trained to learn a set of 10 talkers' voices and then given a test of spoken word recognition to assess the influence of learning the voices on the processing of the linguistic message. Word recognition tests were presented at different signal-to-noise ratios: +10 dB, +5 dB, 0 dB, and -5 dB. In the first experiment, listeners learned the voices from isolated words and were then tested with novel isolated words to assess spoken word recognition. Listeners demonstrated better word recognition performance in noise when the words were produced by a familiar talker rather than an unfamiliar talker. In the second experiment, listeners learned voices from sentence-length utterances and were then tested with isolated words. However, improved performance for familiar voices did not transfer from sentences to novel words in isolation. The authors hypothesized that listeners learned a different set of acoustic properties from the sentence-length utterances than those present in isolated words, affecting the listeners' word recognition performance (Nygaard & Pisoni, 1998). Although listeners who heard familiar voices were better at identifying isolated words than those who heard unfamiliar voices in the spoken word recognition task, the results did not reach statistical significance. In the final experiment, listeners learned voices from sentence-length utterances, and speech recognition was then tested with sentences. Recognition of words in sentences was significantly better for sentences produced by familiar talkers, demonstrating that perceptual learning of voices from sentence-length utterances can facilitate word recognition (Nygaard & Pisoni, 1998). Ultimately, this study showed that learning to identify talkers' voices increases sensitivity to phonetic information in the speech signal and enhances the perception of the linguistic properties of speech in both isolated words and sentences.
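Signal-to-noise ratios like those used in such word-recognition tests are typically set by scaling the noise relative to the speech power. The sketch below is an illustrative Python computation, not code from any of the studies discussed; the synthetic "speech" tone and all parameter values are assumptions for demonstration only.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio is `snr_db` dB,
    then return the mixture speech + scaled noise."""
    p_speech = np.mean(speech ** 2)                      # speech power
    p_noise = np.mean(noise ** 2)                        # noise power before scaling
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    gain = np.sqrt(target_p_noise / p_noise)
    return speech + gain * noise

# Toy stimuli: a 440 Hz "speech" tone and white noise (illustrative assumptions)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
speech = 0.1 * np.sin(2.0 * np.pi * 440.0 * t)
noise = rng.standard_normal(t.size)

# Mixtures at the four SNRs used by Nygaard and Pisoni (1998)
mixtures = {snr: mix_at_snr(speech, noise, snr) for snr in (10, 5, 0, -5)}
```

Because the noise gain is derived from the measured powers, the achieved SNR matches the requested value exactly, regardless of the input levels.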

The Effects of Hearing Loss on Speech Perception

As demonstrated by the previous studies, manipulating indexical factors such as the number of talkers and speaking rate can either enhance or reduce spoken word recognition in listeners with normal hearing. However, the influence of talker information on speech perception for individuals with hearing loss, for whom access to both types of information may be compromised, has not been as widely researched. To better understand how hearing loss may impact speech perception, the perceptual processes supporting word recognition must be clearly identified for these individuals. Until recently, traditional spoken word recognition tests typically used phonetically balanced word lists produced by one talker at one speaking rate (Hirsh, Davis, Silverman, Reynolds, Eldert, & Benson, 1952; Causey, Hood, Hermanson, & Bowling, 1984; Nilsson, Soli, & Sullivan, 1994). Although such tests are useful in documenting word-recognition performance, they may not adequately assess speech perception under more natural listening conditions, in which there is much stimulus variability. Kirk, Pisoni, and Miyamoto (1997) compared spoken word recognition performance by hearing-impaired listeners as a function of talker number, speaking rate, and lexical complexity (based on frequency of occurrence and number of phonemically similar words). Additionally, each participant answered a 20-item questionnaire to assess his or her communication abilities in daily listening situations. The questionnaire asked participants to rate statements from several subscales of the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox, 1997), including Familiar Talkers, Reduced Cues, Background Noise, and Distortion of Sound. Kirk et al. also added two other subscales: Gender and Speaking Rate. All subscales were rated on a 7-point scale, where A indicated that the

statement was always true (99%) and G indicated that the statement was never true (1%) (Kirk et al., 1997). The authors hypothesized that word recognition performance on lists involving stimulus variability would correlate better with self-reports of listening ability than performance under conditions in which variability was constrained. Results showed that identification scores were poorer in the multiple-talker condition than in the single-talker condition, and that word recognition scores decreased as speaking rate increased (Kirk et al., 1997). In contrast with the data from the study by Goldinger, Pisoni, and Logan (1991) presented above, this study observed significantly higher word recognition performance for lexically easy words than for lexically hard words. As predicted, the subjects' reported communication abilities in daily activities from the questionnaire were more highly correlated with performance under conditions involving stimulus variability than under those with minimal variability. For example, there were moderately high correlations (Total Score, .49) between the multi-talker and mixed-speaking-rate conditions and the items from the APHAB subscales (i.e., Distortion of Sound, Familiar Talkers, Reduced Cues, and Background Noise) (Kirk et al., 1997). In sum, all three variables (talker variability, speaking rate, and lexical difficulty) significantly influenced speech perception abilities in individuals with hearing loss. It is important to note that all of the participants in this study scored highly on traditional tests of spoken word recognition (Kirk, Pisoni, & Miyamoto, 1997). It was not until the participants were tested with perceptually robust speech materials containing different sources of stimulus variability that the effects of indexical factors on the linguistic coding of speech were revealed.
These new tests appear to measure underlying aspects of speech perception in the research laboratory that capture the conditions

encountered in everyday listening situations. A test such as the one used in this study enables an examiner to more accurately predict how a person with hearing loss may perform in natural communication situations and when using a sensory aid such as a hearing aid or cochlear implant (CI).

The Cochlear Implant

Persons with mild-to-moderate degrees of hearing loss typically use hearing aids, while those with more severe-to-profound loss typically use a CI. This thesis project focuses on adults with severe-to-profound hearing loss and the opportunity to improve speech perception abilities with the latest CI technology and rehabilitation strategies. A CI's function is to bypass missing or damaged sensory hair cells by directly stimulating surviving neurons in the auditory nerve (Wilson & Dorman, 2009). The cochlea is tonotopically organized: the basal part of the cochlea conveys high-frequency sounds to the brain, and the apical part conveys low frequencies. Implant systems attempt to reproduce this tonotopic organization by stimulating electrodes toward either the basal or apical portions of the cochlea to represent the corresponding frequencies. The components of a CI include: (1) a microphone to sense sound in the environment; (2) a speech processor to transform the microphone input into stimuli for the implanted electrodes; (3) a transcutaneous link for the transmission of power and stimulus information across the skin; (4) an implanted receiver/stimulator to decode the information and generate stimuli for the electrodes; (5) a cable to connect the outputs of the receiver/stimulator to the electrodes; and (6) the electrode array (Wilson & Dorman, 2009). In addition to the mechanical components of a CI, there is a biological component that includes an individual's auditory nerve, auditory

pathways in the brainstem, and auditory cortex (Wilson & Dorman, 2009). This biological component varies widely in its functional capabilities across individuals with hearing loss. Early designs of the CI, dating back nearly 30 years, conveyed little more than a sensation of sound and temporal patterns (Wilson & Dorman, 2009). Many developments in processing strategies, electrode positioning, and finer frequency representations have since led to considerable success for CI users. Mean present-day speech recognition scores for a unilateral CI user range from 50-85%, depending on the conditions (i.e., in noise or in quiet, respectively) (Dorman et al., 2009; Wilson & Dorman, 2009; Gifford, Shallop, & Peterson, 2008). For example, Gifford, Shallop, and Peterson (2008) reported 85% sentence recognition accuracy and CNC monosyllabic word accuracy in quiet. Although average scores indicate good recognition of speech in quiet, word recognition is much more difficult in noise, and a wide range of performance is noted across individual CI users. In spite of remarkable progress in CI design and performance over the last three decades, there remains great variability in outcomes for individuals with CIs. One reason for this variability can be attributed to the biological component unique to each individual, as mentioned above. Other reasons may include age at implantation, limitations imposed by present electrode designs and placements, a mismatch between the number of discriminable sites and the number of effective channels, a deficit in fine-structure representation (fine frequency information related to frequency variations within band-pass channels), and/or poor representation of the fundamental frequency (F0) needed for complex sounds (Wilson & Dorman, 2009).
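The tonotopic frequency-to-place relationship that implant systems try to reproduce is often approximated by the Greenwood (1990) frequency-place function. The Python sketch below uses it to assign illustrative center frequencies to electrode positions; the electrode count and insertion-depth range are assumptions for demonstration, not the band allocation of any actual device.

```python
import numpy as np

def greenwood_hz(x):
    """Greenwood frequency-place map for the human cochlea.
    `x` is distance from the apex as a fraction of cochlear length
    (0 = apex, lowest frequencies; 1 = base, highest frequencies)."""
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

# Hypothetical 22-electrode array; the insertion range is an assumption.
places = np.linspace(0.25, 0.95, 22)
center_freqs_hz = greenwood_hz(places)
# Apical electrodes receive low-frequency bands, basal electrodes high-frequency bands.
```

A shortened hybrid-style array would correspond to restricting `places` to the basal (high-frequency) end, leaving the apical low-frequency region to residual acoustic hearing.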

The Role of Indexical Information

It is evident from the literature that many CI users perform well on standard speech recognition tasks. However, there are relatively few studies investigating how CI users process indexical information, and the limited studies available suggest that they perform poorly (Gantz et al., 2009; Brown & Bacon, 2009; Zhang, Dorman, & Spahr, 2010; Buchner et al., 2009; Turner et al., 2004). Indexical cues provide important information such as gender, age, dialect, and fundamental frequency (F0). Wilson and Dorman (2009) list five reasons why accurate perception of F0 alone can be important: (1) to separate auditory streams from different sources (such as a competing voice or background noise); (2) to identify a speaker's gender; (3) to discriminate emotional content and declarative versus inquisitive content; (4) to perceive tone languages; and (5) to perceive melody. Poor recognition of indexical cues can contribute to the noted difficulties in speech recognition tasks, particularly under variable or noisy conditions. Vongphoe and Zeng (2005) investigated whether the temporal cues provided by a traditional CI are sufficient to support both speech recognition (linguistic information) and speaker recognition (indexical information). Ten CI subjects and six normal-hearing subjects were recruited. In one condition, the subjects were asked to recognize the vowel produced by the speaker. In the second condition, subjects were asked to identify the speaker. All subjects completed intensive training for the speaker recognition task. Normal-hearing subjects achieved nearly perfect scores in both conditions. CI subjects achieved good scores in vowel recognition (65%) but poor scores in speaker identification (23%) (Vongphoe & Zeng, 2005). The results suggest that the brain may use different strategies to process speaker and speech information based on which acoustic cues are

available. The authors suggested that speaker recognition relies more on low-frequency cues and is highly related to F0, while vowel recognition relies more on high-frequency cues and formant frequencies (Vongphoe & Zeng, 2005). This study highlighted the limitations of traditional CI processing strategies in effectively conveying the low-frequency and F0 cues needed for speaker recognition. Vongphoe and Zeng (2005) proposed that either a slowly varying form of frequency modulation or explicit F0 information should be encoded in future cochlear implants.

Electrical and Acoustic Stimulation (EAS)

A recent advance in cochlear implantation that may enhance talker identification by CI users is combined electrical and acoustic stimulation (EAS) of the auditory system. EAS can be used for persons with residual hearing in the low frequencies. In one configuration of EAS, high-frequency information is accessed via the CI in one ear while low-frequency information is provided by a hearing aid in the opposite ear, or by residual low-frequency hearing. This configuration has also been referred to as the bimodal approach. The acoustic information is thought to complement the higher-frequency information provided by the CI and electrical stimulation. In comparison to the weak representation of F0 with a unilateral traditional CI, F0 representations appear to be highly robust with EAS (Wilson & Dorman, 2009). EAS has demonstrated substantial benefit for listening to speech in quiet, in noise, and in competition with another talker or multitalker babble, compared to either electrical or acoustic stimulation alone (Wilson & Dorman, 2009). Brown and Bacon (2009) evaluated the importance of F0, the acoustic amplitude envelope, and the combination of the two for EAS in competing backgrounds. Low-frequency speech was replaced with a tone that was modulated in

frequency to track the F0 of the speech, in amplitude with the envelope of the low-frequency speech, or both. A four-channel vocoder simulated electric hearing. This was presented alone; combined with 500-Hz low-pass target speech; or combined with a tone that was either unmodulated, modulated in frequency by the dynamic change in F0, modulated in amplitude by the envelope of the low-pass speech, or modulated in both frequency and amplitude. Additionally, the 500-Hz low-pass target speech and each of the tonal cues were presented without the vocoder output. The participants repeated as much of a target sentence as they could, without feedback. Results indicated a significant benefit of additional F0 or envelope cues over simulated electric hearing alone (23-57 percentage points). Furthermore, the combination of F0 and envelope cues provided significantly more benefit than either cue alone, suggesting a synergistic effect (Brown & Bacon, 2009). The authors hypothesized that the large improvements in competing backgrounds were due to several linguistic cues provided by F0, such as consonant voicing, lexical boundaries, contextual emphasis, and manner (Brown & Bacon, 2009). Ultimately, this study demonstrated the usefulness of both F0 and amplitude cues in simulated EAS for speech intelligibility.

Recently, research on EAS has moved from testing normal-hearing listeners with simulations to assessing the performance of patients using a CI and low-frequency residual hearing. Zhang, Dorman, and Spahr (2010) investigated the minimum amount of low-frequency information needed to achieve a speech perception benefit with bimodal listening. Participants were presented with monosyllabic words in quiet and sentences in noise in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined EAS. The acoustic stimuli presented to the non-implanted ear were low-pass filtered at 125, 250, 500, or 750 Hz, or unfiltered.
The level of performance for word

recognition in quiet in the EAS condition was significantly higher than the performance in either the electric-only or acoustic-only condition. The same was true for recognizing sentences in noise at a +10 dB signal-to-noise ratio (SNR). The authors determined that the improvement in the EAS condition was present even when the acoustic information was limited to the 125-Hz low-passed signal. Zhang et al. reported that F0 accounted for the majority of the speech perception benefit when using EAS. They suggested that F0 information improves voicing recognition, which reduces the number of word candidates in the lexicon in quiet and marks syllable structures and word boundaries in noise (Zhang et al., 2010). This ultimately resulted in significantly improved word recognition in quiet and sentence recognition in noise. A similar study by Dorman, Gifford, Spahr, and McKarns (2008) compared speech recognition, melody recognition, and gender voice discrimination between EAS patients with a CI in one ear and low-frequency hearing in the opposite ear and patients with only unilateral CIs. Recognition tests were presented in the same three conditions: electric stimulation alone, acoustic stimulation alone, and combined EAS. For melody recognition, acoustic stimulation alone and EAS were not significantly different from each other, but both produced significantly better results than electric stimulation alone. For gender voice discrimination, there was not a significant difference in performance among the conditions, possibly due to ceiling effects (Dorman et al., 2008). For speech recognition, the EAS condition produced the highest scores of the three conditions, with increased performance on tests of word and sentence recognition in quiet and sentence recognition in noise. EAS proved especially advantageous for sentence recognition in noise: at a +10 dB SNR, 6% of conventional CI patients scored 85% correct

or better on sentences, while 33% of EAS patients achieved scores at this level (Dorman et al., 2008). The authors suggested that fine-grained information about voice pitch allowed the listeners to segregate speech from noise. This study further supports the advantage of adding low-frequency acoustic information to electric stimulation for speech recognition in both quiet and noisy conditions. Most, Harel, Shpak, and Luntz (2011) assessed the perception of suprasegmental features by adults who use a CI and a HA in opposite ears. Intonation, syllable stress, and word emphasis were assessed in a CI-only condition and in a CI + HA (bimodal) condition. Participants listened to recorded speech materials and were given three closed-format tests requiring identification of the correct stimulus among printed alternatives. For example, in the intonation subtest, participants listened to a recorded sentence and reported whether it was a statement or a question. Results indicated a significant advantage for the bimodal listening condition in the perception of all three suprasegmental features. Suprasegmental features require perception of the time-intensity envelope and F0 information, which was most likely conveyed in the low-frequency information provided by the HA (Most et al., 2011). These results further support the findings of Brown and Bacon (2009) discussed above. However, inspection of the individual data revealed great variability in performance across individuals. The authors hypothesized that this could be attributed to the amount of residual hearing or to the CI type and coding strategy of the individual (Most et al., 2011). Ultimately, the results demonstrate the overall advantages of bimodal stimulation, along with individual differences.
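Findings like these are often explored in normal-hearing listeners through acoustic simulations of combined electric-acoustic hearing, in which a low-pass-filtered band stands in for residual acoustic hearing and a noise-vocoded band stands in for electric stimulation. Below is a minimal sketch of such a simulation; the cutoff frequency, channel count, and band edges are illustrative assumptions, not parameters taken from any of the studies cited here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def simulate_eas(signal, fs, lp_cutoff=500.0, n_channels=4,
                 vocode_lo=750.0, vocode_hi=7000.0):
    """Sketch of an electric-acoustic hearing simulation:
    low-pass acoustic portion + noise-vocoded high-frequency portion."""
    # Acoustic portion: low-pass filtering mimics residual low-frequency hearing
    b, a = butter(4, lp_cutoff / (fs / 2), btype="low")
    acoustic = filtfilt(b, a, signal)

    # "Electric" portion: split the high-frequency range into channels,
    # extract each channel's temporal envelope, and use it to modulate
    # band-limited noise (a standard noise-vocoder construction)
    edges = np.geomspace(vocode_lo, vocode_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    electric = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        env = np.abs(hilbert(band))                        # channel envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(signal)))
        electric += env * carrier
    return acoustic + electric
```

Lowering `lp_cutoff` simulates a smaller amount of preserved low-frequency hearing, while `n_channels` controls the spectral resolution of the simulated electric stimulation.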

The Hybrid Cochlear Implant

Thus far the literature has discussed improved speech recognition associated with EAS in individuals with a CI in one ear and acoustic hearing in the opposite ear. A recent advancement in CI design, the Hybrid CI, combines electrical stimulation for high frequency sound with acoustic hearing for low frequency information in the same ear (Gantz & Turner, 2004). The Hybrid CI consists of a shortened electrode array that is intended to preserve residual low frequency hearing in the apical portion of the cochlea. This option is best for individuals with a high frequency sensorineural hearing loss, resulting from damage to the basal portion of the cochlea, but with functional hair cells in the apical portion. High frequency hearing loss is the most common form of adult hearing loss and is usually caused by noise exposure, presbycusis, or ototoxic medications (Gantz & Turner, 2004). Therefore, the development of the Hybrid CI has great potential for improving speech recognition in many individuals with hearing loss. Gantz and Turner (2004) conducted a study using the Iowa/Nucleus Hybrid Implant in nine adults with severe high frequency hearing loss. The researchers sought to determine whether placing an electrode array up to 10 mm into the inner ear would preserve or damage residual low frequency hearing. Traditional CI electrode arrays are substantially longer and usually damage any residual hearing. Both 6-mm and 10-mm electrode arrays were used in this study. Residual low frequency hearing was preserved in all subjects following implantation of the Iowa/Nucleus Hybrid Implant, and preoperative monosyllabic word and sentence scores were unchanged (Gantz & Turner, 2004). Consonant recognition performance was assessed prior to implantation in two conditions: HA in the ear to receive the implant and HAs bilaterally. Post-implantation measures were taken in three conditions:

CI only, CI + ipsilateral HA, and CI + bilateral HA. A general trend showed an increase in performance for most patients as additional devices provided multiple sources of information. Preimplantation, consonant recognition scores in the bilateral HA condition ranged from 18-43%. Scores more than doubled with the addition of the 10-mm electrode to bilateral HAs, resulting in monosyllabic word recognition of 83-90% (Gantz & Turner, 2004). The results of this study indicate that a 10-mm electrode Hybrid CI can provide high frequency speech information without damaging low frequency acoustic hearing, thereby improving speech recognition. Based on these positive findings, Gantz et al. (2009) conducted a study involving implantation of the Iowa/Nucleus 10-mm Hybrid implant in a much larger group of 87 participants. Again, comparisons were made between patients' preoperative word recognition scores in quiet using bilateral HAs and postoperative scores using the implant and bilateral HAs. Additionally, the change in the mean low frequency pure tone acoustic threshold was measured pre- and post-implantation. Improvements in either word recognition or speech reception threshold occurred in 74% of the participants, and improvements in both measures occurred in 48% (Gantz et al., 2009). A real-world comparison in which patients were tested with both ears showed significant benefit from the Hybrid CI. In the acoustic-only preoperative condition, participants identified 35% of CNC words. Performance increased to 73% 12 months after Hybrid implantation and was maintained at 24 months (Gantz et al., 2009). Additionally, a subgroup of 27 Hybrid users was tested using spondee recognition in multitalker babble, and their scores were compared to those of standard CI users. On average, the Hybrid users showed an advantage for speech recognition in noise unless their low-frequency postoperative hearing

levels approached profound levels (Gantz et al., 2009). This study further supports the benefit of preserving low frequency hearing in Hybrid CI users for speech perception in quiet and in noise. Results also demonstrated an advantage in speech perception when using a Hybrid CI in comparison to a traditional CI with a standard electrode array. Turner, Gantz, and Reiss (2008) also compared speech perception abilities in competing backgrounds for individuals using Iowa/Nucleus Hybrid CIs with those using traditional CIs. Additionally, the researchers conducted a second experiment investigating the possible effects of assigning shifted-from-normal speech frequencies to the Hybrid implant's electrodes. Information presented via the implanted electrode is shifted in relation to the normal place-frequency map of the cochlea (Turner et al., 2008). Traditional CIs shift speech frequencies to a more basal region of the cochlea that is responsible for high frequency hearing. In a Hybrid CI, the electrode array is even shorter, resulting in an even greater shift compared to a traditional CI. Therefore, stimulation for Hybrid CI users is concentrated further toward the basal end of the cochlea than for traditional CI users. Information is also shifted in relation to the acoustically presented information at lower frequencies, with a potential gap for middle frequencies. In the first experiment, 19 adult Hybrid users and 20 adult traditional CI users identified spondee words presented in a two-talker background of sentences; the SNR yielding 50% correct identification was taken as the speech recognition threshold. Listeners responded using a 12-alternative closed set. When the words were presented in quiet, all listeners recognized more than 90% of them (Turner et al., 2008). The purpose of this task was therefore to assess the listeners' word recognition abilities despite background noise.
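Testing at a specified SNR, whether fixed (as in the +10 dB conditions above) or varied adaptively to find a threshold, comes down to scaling a masker relative to the speech before mixing. A minimal sketch of that step, assuming an RMS-power definition of signal and noise level (the function and parameter names are illustrative, not from the cited studies):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the masker so that 10*log10(P_speech / P_noise) equals snr_db,
    then add it to the speech. A speech recognition threshold is the snr_db
    value at which a listener scores 50% correct."""
    p_speech = np.mean(np.square(speech))
    p_noise = np.mean(np.square(noise))
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```

A lower threshold means the listener tolerates proportionally more masker energy, which is why a 9 dB threshold difference represents a substantial advantage in noise.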
First, it should be noted that speech recognition thresholds for some Hybrid users were considerably lower than those of any of the

traditional CI users (Turner et al., 2008). This indicates that the Hybrid users were able to recognize speech in noisier conditions than the traditional CI users. The results indicated a 9 dB advantage of the Hybrid CI over the traditional CI for speech recognition in noise. Additionally, the researchers plotted the Hybrid users' speech recognition thresholds as a function of their pure tone thresholds to determine how much low frequency hearing must be preserved to achieve this benefit. The data suggested that the advantage of Hybrid CIs exists unless the hearing loss approaches profound levels. The second experiment addressed the shifting of speech information in Hybrid users. Here, Hybrid users' speech recognition abilities were measured under several different types of extreme distortion of the normal place-frequency mapping of the cochlea. No consistent differences in speech recognition ability were found among the different maps, suggesting flexibility of the auditory system to integrate acoustic and electrical information under distorted conditions (Turner et al., 2008). Turner et al. demonstrated a significant advantage in speech recognition in noise when using a Hybrid CI in comparison to a traditional CI. Additionally, the second experiment showed that Hybrid CI users are able to integrate low and high frequency information effectively despite the shifting of speech information; listeners adapt to the distorted speech input and improve their speech perception. Both sets of data illustrate the success of the Iowa/Nucleus Hybrid CI.

Auditory Training to Improve Speech Perception Abilities

The literature discussed thus far demonstrates the benefit of additional low-frequency hearing for speech recognition in both quiet and noisy conditions for hearing-impaired individuals using a CI.
Furthermore, Hybrid CI studies have demonstrated an even greater advantage in speech perception ability when compared to performance with a traditional CI. However, the issue of variability in outcomes remains. Learning electrically stimulated

speech patterns can be a new and difficult experience for many CI recipients. Active auditory rehabilitation may be one way to maximize the benefit of implantation for CI users. There has been some research on the effectiveness of auditory training on speech perception for HA users and traditional CI users. However, auditory training studies have generally focused on lexical information rather than indexical cues. Additionally, data on auditory training for listeners using EAS are very limited. The goal of this section is to briefly present the available research on auditory training intended to improve speech perception abilities in HA users and traditional CI users. Burk, Humes, Amos, and Strauser (2006) trained hearing-impaired listeners on an isolated word list produced by a single talker. The authors evaluated the training's effectiveness on speech recognition of novel talkers in background noise. Two groups of subjects were recruited: noise-masked young normal-hearing adults, used to model the performance of elderly hearing-impaired listeners, and elderly hearing-impaired listeners who used HAs. Over a span of nine to 14 days, listeners were trained on a set of 75 monosyllabic words spoken by a single female talker. Word recognition was tested prior to auditory training and again upon completion of training. Test stimuli consisted of both trained words (i.e., words used in training) and novel words (i.e., words not used in training), produced by the training talker and also by three unfamiliar talkers. Word-recognition performance on both trained and untrained words was measured before and after training in both open- and closed-set response conditions. When trained words were produced by the training talker, both participant groups were better at recognizing familiar words than novel words.
A small, non-significant improvement was seen on untrained words produced by the same talker, indicating some generalization to novel words (Burk et al., 2006). Similarly, when the test words were presented by unfamiliar talkers, the subjects were better at recognizing the trained words than the novel words. The significant performance increase on trained words was maintained across novel talkers, suggesting that the listeners focused on word memorization rather than on talker-specific cues. Older hearing-impaired listeners were

able to improve word recognition abilities to the same degree as the young normal-hearing listeners. However, training on isolated words was not sufficient to transfer to fluent speech involving sentences. Maintenance measures were taken six months after treatment for the hearing-impaired group. Subjects' accuracy had decreased from 83.5% to 62.9%, but they still performed significantly better than their pretreatment score of 37.6%. Moreover, only one hour of retraining was needed for listeners to return to previous post-training performance levels. This study demonstrated that hearing-impaired listeners who use HAs were able to significantly improve word recognition performance with training, in noise and with both familiar and novel talkers. However, generalization to conversational speech was not observed; therefore, it may be best to use sentences in training rather than isolated words. Fu and Galvin (2007) developed a computer-assisted speech-training (CAST) program for adult CI users. The software targeted progressively more difficult acoustic contrasts among vowels and consonants using monosyllabic words. Visual feedback indicated the correctness of each response, and auditory feedback repeatedly compared the subject's incorrect response to the correct response. The authors tested the program's effectiveness in improving CI recipients' speech perception, and its possible generalization to music perception, after each week of training. Subjects were asked to use the CAST program at home for one hour each day, five days a week, for one month. Both vowel and consonant recognition significantly improved for all participants (by 15.8% and 13.5%, respectively). Because vowel and consonant contrasts were trained using only monosyllabic words, the improvement observed in sentence recognition indicated some generalization of training (Fu & Galvin, 2007).
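The progressive structure of such programs, in which the learner advances to harder contrasts only after meeting a performance criterion, can be sketched as follows. The criterion, window size, and number of levels here are illustrative assumptions, not the actual CAST parameters.

```python
def update_level(level, trial_results, criterion=0.8, window=10, max_level=5):
    """Advance to a harder contrast set once accuracy over the most recent
    `window` trials meets `criterion`; otherwise stay at the current level.
    `trial_results` is a list of 1 (correct) / 0 (incorrect) outcomes."""
    recent = trial_results[-window:]
    if len(recent) == window and sum(recent) / window >= criterion:
        return min(level + 1, max_level)
    return level
```

Gating advancement on a run of recent trials, rather than a single response, keeps one lucky guess from pushing a listener past material they have not yet mastered.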
Additionally, there was some generalization from the music-training task involving melodic contour identification to an untrained task of familiar melody identification. However, there was large variability in the amount of training time it took for individuals to improve in speech perception. Some individuals made significant improvements after one day of training, while others needed the

full five days to reach significant improvement in speech recognition. Still, this study demonstrated the effectiveness of training for enhancing speech and music recognition in CI users. Both studies demonstrated the effectiveness of auditory training in improving speech perception for hearing-impaired adults. However, in both cases the training focused on improving the perception of linguistic information. Given the importance of low-frequency information for speech recognition in CI users, it is reasonable to ask whether training listeners on the indexical properties of speech would also be effective. Some studies have examined the effect of training on both linguistic and indexical speech perception. However, research specifically training indexical information has been limited, mostly involving normal-hearing listeners presented with traditional CI simulations. Fu, Galvin, Wang, and Nogaki (2005) trained CI users in multitalker vowel and consonant recognition and in voice gender recognition. Participants completed auditory training with speech stimuli at home for one hour each day, five days a week, for one month or longer. Training used multiple talkers (i.e., two female and two male) and targeted minimal speech contrasts presented in monosyllabic words and nonsense words. Training progressed in difficulty from phoneme discrimination (i.e., a same/different response) to phoneme identification. Stimuli and specific talkers used in training were not reused in the test stimulus set. Open-set word recognition scores significantly improved after four weeks of training (27.9% to 55.8%). Nevertheless, phoneme recognition for many participants remained in the poor-to-fair range, and improvement was highly variable, which could have been a result of the at-home implementation of the training protocol (Fu et al., 2005). Results showed significant improvement in consonant and vowel recognition after training, but not in voice gender recognition (Fu et al., 2005).
Although the materials included female and male speakers, the training focused on contrastive phoneme recognition and not on talker information. Furthermore, although phoneme recognition improved, most participants still performed in the poor-to-fair range after training. This suggests that training improvements would not


8.Audiological Evaluation 8. A U D I O L O G I C A L E V A L U A T I O N 8.Audiological Evaluation The external ear of the child with Progeria Behavioral testing for assessing hearing thresholds Objective electrophysiologic tests

More information

The NAL Percentage Loss of Hearing Scale

The NAL Percentage Loss of Hearing Scale The NAL Percentage Loss of Hearing Scale Anne Greville Audiology Adviser, ACC February, 2010 The NAL Percentage Loss of Hearing (PLH) Scale was developed by John Macrae of the Australian National Acoustic

More information

ELPS TELPAS. Proficiency Level Descriptors

ELPS TELPAS. Proficiency Level Descriptors ELPS TELPAS Proficiency Level Descriptors Permission to copy the ELPS TELPAS Proficiency Level Descriptors is hereby extended to Texas school officials and their agents for their exclusive use in determining

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 213 http://acousticalsociety.org/ ICA 213 Montreal Montreal, Canada 2-7 June 213 Communication Session 3aSCb: Components of Informational Masking 3aSCb4.

More information

Audio Examination. Place of Exam:

Audio Examination. Place of Exam: Audio Examination Name: Date of Exam: SSN: C-number: Place of Exam: The Handbook of Standard Procedures and Best Practices for Audiology Compensation and Pension Exams is available online. ( This is a

More information

Macroaudiology a Working Model of Hearing Presented at XXI International Congress of Audiology Morioka, Japan, 1992 R.

Macroaudiology a Working Model of Hearing Presented at XXI International Congress of Audiology Morioka, Japan, 1992 R. Macroaudiology a Working Model of Hearing Presented at XXI International Congress of Audiology Morioka, Japan, 1992 R. Bishop MNZAS I would like to present a model of hearing which gives a theoretical

More information

Cochlear Implants: A Communication Choice. Cochlear Implants: A Communication Tool. www.cochlear.com

Cochlear Implants: A Communication Choice. Cochlear Implants: A Communication Tool. www.cochlear.com Cochlear Ltd ABN 96 002 618 073 14 Mars Road, PO Box 629 Lane Cove NSW 2066 Australia Tel: 61 2 9428 6555 Fax: 61 2 9428 6353 Cochlear Americas 400 Inverness Parkway Suite 400 Englewood CO 80112 USA Tel:

More information

Cochlear Implants. Policy Number: 7.01.05 Last Review: 10/2015 Origination: 10/1988 Next Review: 10/2016

Cochlear Implants. Policy Number: 7.01.05 Last Review: 10/2015 Origination: 10/1988 Next Review: 10/2016 Cochlear Implants Policy Number: 7.01.05 Last Review: 10/2015 Origination: 10/1988 Next Review: 10/2016 Policy Blue Cross and Blue Shield of Kansas City (Blue KC) will provide coverage for cochlear implants

More information

Position Paper on Cochlear Implants in Children

Position Paper on Cochlear Implants in Children Position Paper on Cochlear Implants in Children Position: The Canadian Association of Speech-Language Pathologists and Audiologists (CASLPA) supports cochlear implantation in children where appropriate

More information

A Guide to Cambridge English: Preliminary

A Guide to Cambridge English: Preliminary Cambridge English: Preliminary, also known as the Preliminary English Test (PET), is part of a comprehensive range of exams developed by Cambridge English Language Assessment. Cambridge English exams have

More information

Room Acoustics. Boothroyd, 2002. Page 1 of 18

Room Acoustics. Boothroyd, 2002. Page 1 of 18 Room Acoustics. Boothroyd, 2002. Page 1 of 18 Room acoustics and speech perception Prepared for Seminars in Hearing Arthur Boothroyd, Ph.D. Distinguished Professor Emeritus, City University of New York

More information

Learners Who are Deaf or Hard of Hearing Kalie Carlisle, Lauren Nash, and Allison Gallahan

Learners Who are Deaf or Hard of Hearing Kalie Carlisle, Lauren Nash, and Allison Gallahan Learners Who are Deaf or Hard of Hearing Kalie Carlisle, Lauren Nash, and Allison Gallahan Definition Deaf A deaf person is one whose hearing disability precludes successful processing of linguistic information

More information

SEMI-IMPLANTABLE AND FULLY IMPLANTABLE MIDDLE EAR HEARING AIDS

SEMI-IMPLANTABLE AND FULLY IMPLANTABLE MIDDLE EAR HEARING AIDS Coverage for services, procedures, medical devices and drugs are dependent upon benefit eligibility as outlined in the member's specific benefit plan. This Medical Coverage Guideline must be read in its

More information

MEASURING BRAIN CHANGES IN HEARING LOSS AND ITS REMEDIATION

MEASURING BRAIN CHANGES IN HEARING LOSS AND ITS REMEDIATION MEASURING BRAIN CHANGES IN HEARING LOSS AND ITS REMEDIATION Blake W Johnson 1,3, Stephen Crain 2,3 1 Department of Cognitive Science, Macquarie University 2 Department of Linguistics, Macquarie University

More information

Does premium listening require premium hearing aids?

Does premium listening require premium hearing aids? Does premium listening require premium hearing aids? Effectiveness of basic and premium hearing aids on speech understanding and listening effort outcomes. Jani Johnson, Jingjing Xu, Robyn Cox Presented

More information

Introduction Bone Anchored Implants (BAI), Candidacy and Pre-Operative Testing for Adult Patients

Introduction Bone Anchored Implants (BAI), Candidacy and Pre-Operative Testing for Adult Patients Introduction Bone Anchored Implants (BAI), Candidacy and Pre-Operative Testing for Adult Patients Developed by Hakanssonand his colleagues in Sweden in the late 1970s 3 Components Sound Processor (#1)

More information

Speech sounds. Room acoustics

Speech sounds. Room acoustics Modeling the effects of room-acoustics on speech reception and perception. Arthur Boothroyd, 2003. 1 Introduction Communication by spoken language involves a complex chain of events, as illustrated in

More information

Audiology as a School Based Service. Purpose. Audiology (IDEA 2004) Arkansas SPED Regulations. IDEA 2004 Part B

Audiology as a School Based Service. Purpose. Audiology (IDEA 2004) Arkansas SPED Regulations. IDEA 2004 Part B Audiology as a School Based Service 2008 Medicaid in the Schools (MITS) Summit January 24, 2008 Donna Fisher Smiley, Ph.D., CCC-A Audiologist Arkansas Children s Hospital and Conway Public Schools Purpose

More information

Central Auditory Processing Disorder (CAPD)

Central Auditory Processing Disorder (CAPD) Central Auditory Processing Disorder (CAPD) What is CAPD? Central Auditory Processing Disorder (CAPD) - also known as Auditory Processing Disorder (APD) - is an umbrella term for a variety of disorders

More information

Hearing Loss and Aging

Hearing Loss and Aging Hearing Loss and Aging Over 25 million Americans have some degree of hearing loss and, as the average age of the population increases, this number will rise. - V. M. Bloedel Hearing Research Center Home

More information

Guide for families of infants and children with hearing loss

Guide for families of infants and children with hearing loss With early detection, Early Intervention can begin! Guide for families of infants and children with hearing loss Birth to 3 2008 Cover photograph Geneva Marie Durgin was born January 20, 2007. She lives

More information

The Clinical Evaluation of Language Fundamentals, fourth edition (CELF-4;

The Clinical Evaluation of Language Fundamentals, fourth edition (CELF-4; The Clinical Evaluation of Language Fundamentals, Fourth Edition (CELF-4) A Review Teresa Paslawski University of Saskatchewan Canadian Journal of School Psychology Volume 20 Number 1/2 December 2005 129-134

More information

Improvement of Visual Attention and Working Memory through a Web-based Cognitive Training Program

Improvement of Visual Attention and Working Memory through a Web-based Cognitive Training Program . Improvement of Visual Attention and Working Memory through a Web-based Cognitive Training Program Michael Scanlon David Drescher Kunal Sarkar Context: Prior work has revealed that cognitive ability is

More information

Vision: Receptors. Modes of Perception. Vision: Summary 9/28/2012. How do we perceive our environment? Sensation and Perception Terminology

Vision: Receptors. Modes of Perception. Vision: Summary 9/28/2012. How do we perceive our environment? Sensation and Perception Terminology How do we perceive our environment? Complex stimuli are broken into individual features, relayed to the CNS, then reassembled as our perception Sensation and Perception Terminology Stimulus: physical agent

More information

Audiology (0340) Test at a Glance. About this test. Test Guide Available. See Inside Back Cover. Test Code 0340

Audiology (0340) Test at a Glance. About this test. Test Guide Available. See Inside Back Cover. Test Code 0340 Audiology (0340) Test Guide Available See Inside Back Cover Test at a Glance Test Name Audiology Test Code 0340 Time 2 hours Number of Questions 150 Format Multiple-choice questions Approximate Approximate

More information

C HAPTER T HIRTEEN. Diagnosis and Treatment of Severe High Frequency Hearing Loss. Susan Scollie and Danielle Glista

C HAPTER T HIRTEEN. Diagnosis and Treatment of Severe High Frequency Hearing Loss. Susan Scollie and Danielle Glista C HAPTER T HIRTEEN Diagnosis and Treatment of Severe High Frequency Hearing Loss Susan Scollie and Danielle Glista Providing audible amplified signals for listeners with severe high frequency hearing loss

More information

Evaluating Real-World Hearing Aid Performance in a Laboratory

Evaluating Real-World Hearing Aid Performance in a Laboratory Evaluating Real-World Hearing Aid Performance in a Laboratory Mead C. Killion principal investigator* Lawrence J. Revit research assistant and presenter** Ruth A. Bentler research audiologist Mary Meskan

More information

EARLY INTERVENTION: COMMUNICATION AND LANGUAGE SERVICES FOR FAMILIES OF DEAF AND HARD-OF-HEARING CHILDREN

EARLY INTERVENTION: COMMUNICATION AND LANGUAGE SERVICES FOR FAMILIES OF DEAF AND HARD-OF-HEARING CHILDREN EARLY INTERVENTION: COMMUNICATION AND LANGUAGE SERVICES FOR FAMILIES OF DEAF AND HARD-OF-HEARING CHILDREN Our child has a hearing loss. What happens next? What is early intervention? What can we do to

More information

IV. Publications 381

IV. Publications 381 IV. Publications 381 382 IV. Publications ARTICLES PUBLISHED: Carter, A. (2002). A phonetic and phonological analysis of weak syllable omissions: Comparing children with normally developing language and

More information

A Microphone Array for Hearing Aids

A Microphone Array for Hearing Aids A Microphone Array for Hearing Aids by Bernard Widrow 1531-636X/06/$10.00 2001IEEE 0.00 26 Abstract A directional acoustic receiving system is constructed in the form of a necklace including an array of

More information

Hearing Tests for Children with Multiple or Developmental Disabilities by Susan Agrawal

Hearing Tests for Children with Multiple or Developmental Disabilities by Susan Agrawal www.complexchild.com Hearing Tests for Children with Multiple or Developmental Disabilities by Susan Agrawal Hearing impairment is a common problem in children with developmental disabilities or who have

More information

5th Congress of Alps-Adria Acoustics Association NOISE-INDUCED HEARING LOSS

5th Congress of Alps-Adria Acoustics Association NOISE-INDUCED HEARING LOSS 5th Congress of Alps-Adria Acoustics Association 12-14 September 2012, Petrčane, Croatia NOISE-INDUCED HEARING LOSS Davor Šušković, mag. ing. el. techn. inf. davor.suskovic@microton.hr Abstract: One of

More information

Formant Bandwidth and Resilience of Speech to Noise

Formant Bandwidth and Resilience of Speech to Noise Formant Bandwidth and Resilience of Speech to Noise Master Thesis Leny Vinceslas August 5, 211 Internship for the ATIAM Master s degree ENS - Laboratoire Psychologie de la Perception - Hearing Group Supervised

More information

SLIDE 9: SLIDE 10: SLIDE 11: SLIDE 12: SLIDE 13-17:

SLIDE 9: SLIDE 10: SLIDE 11: SLIDE 12: SLIDE 13-17: Cochlear Implants: When Hearing Aids Aren t Enough Recorded: October 8, 2013 Presenter: Howard Francis, M.D., Director of the Johns Hopkins Listening Center SLIDE 1: Good evening, thanks for joining us.

More information

Written Example for Research Question: How is caffeine consumption associated with memory?

Written Example for Research Question: How is caffeine consumption associated with memory? Guide to Writing Your Primary Research Paper Your Research Report should be divided into sections with these headings: Abstract, Introduction, Methods, Results, Discussion, and References. Introduction:

More information

Voice Communication Package v7.0 of front-end voice processing software technologies General description and technical specification

Voice Communication Package v7.0 of front-end voice processing software technologies General description and technical specification Voice Communication Package v7.0 of front-end voice processing software technologies General description and technical specification (Revision 1.0, May 2012) General VCP information Voice Communication

More information

Tracking translation process: The impact of experience and training

Tracking translation process: The impact of experience and training Tracking translation process: The impact of experience and training PINAR ARTAR Izmir University, Turkey Universitat Rovira i Virgili, Spain The translation process can be described through eye tracking.

More information

Samuel R. Atcherson, Ph.D.

Samuel R. Atcherson, Ph.D. Beyond Hearing Aids and Cochlear Implants: Helping Families Make the Most of Assistive Technology Samuel R. Atcherson, Ph.D. Assistant Professor, Clinical Audiologist, Person w/ Hearing Loss University

More information

Effects of hearing words, imaging hearing words, and reading on auditory implicit and explicit memory tests

Effects of hearing words, imaging hearing words, and reading on auditory implicit and explicit memory tests Memory & Cognition 2000, 28 (8), 1406-1418 Effects of hearing words, imaging hearing words, and reading on auditory implicit and explicit memory tests MAURA PILOTTI, DAVID A. GALLO, and HENRY L. ROEDIGER

More information

GONCA SENNAROĞLU PhD LEVENT SENNAROĞLU MD. Department of Otolaryngology Hacettepe University ANKARA, TURKEY

GONCA SENNAROĞLU PhD LEVENT SENNAROĞLU MD. Department of Otolaryngology Hacettepe University ANKARA, TURKEY GONCA SENNAROĞLU PhD LEVENT SENNAROĞLU MD Department of Otolaryngology Hacettepe University ANKARA, TURKEY To present the audiological findings and rehabilitative outcomes of CI in children with cochlear

More information

Questions and Answers for Parents

Questions and Answers for Parents Questions and Answers for Parents There are simple, inexpensive tests available to detect hearing impairment in infants during the first days of life. In the past, most hearing deficits in children were

More information

The Disability Tax Credit Certificate Tip sheet for Audiologists

The Disability Tax Credit Certificate Tip sheet for Audiologists The Disability Tax Credit Certificate Tip sheet for Audiologists Developed by: The Canadian Academy of Audiology (CAA) & Speech- Language and Audiology Canada (SAC) Purpose of This Document The Canada

More information

Thirukkural - A Text-to-Speech Synthesis System

Thirukkural - A Text-to-Speech Synthesis System Thirukkural - A Text-to-Speech Synthesis System G. L. Jayavardhana Rama, A. G. Ramakrishnan, M Vijay Venkatesh, R. Murali Shankar Department of Electrical Engg, Indian Institute of Science, Bangalore 560012,

More information

Cochlear Implant and Aural Rehabilitation Corporate Medical Policy

Cochlear Implant and Aural Rehabilitation Corporate Medical Policy Cochlear Implant and Aural Rehabilitation Corporate Medical Policy File name: Cochlear Implant & Aural Rehabilitation File code: UM.REHAB.06 Origination: Adoption of BCBSA policy 2015 Last Review: 03/2015

More information

Auditory measures of selective and divided attention in young and older adults using single-talker competition

Auditory measures of selective and divided attention in young and older adults using single-talker competition Auditory measures of selective and divided attention in young and older adults using single-talker competition Larry E. Humes, Jae Hee Lee, and Maureen P. Coughlin Department of Speech and Hearing Sciences,

More information

CHAPTER 6 PRINCIPLES OF NEURAL CIRCUITS.

CHAPTER 6 PRINCIPLES OF NEURAL CIRCUITS. CHAPTER 6 PRINCIPLES OF NEURAL CIRCUITS. 6.1. CONNECTIONS AMONG NEURONS Neurons are interconnected with one another to form circuits, much as electronic components are wired together to form a functional

More information

PURE TONE AUDIOMETRY Andrew P. McGrath, AuD

PURE TONE AUDIOMETRY Andrew P. McGrath, AuD PURE TONE AUDIOMETRY Andrew P. McGrath, AuD Pure tone audiometry is the standard behavioral assessment of an individual s hearing. The results of pure tone audiometry are recorded on a chart or form called

More information

Audiology and Hearing Science

Audiology and Hearing Science Audiology and Hearing Science Thursday, December 11, 2014 Session 1: Music Perception, Appreciation, and Therapy: Pediatric Applications 14 th Symposium on Cochlear Implants in Children December 11-13,

More information

So, how do we hear? outer middle ear inner ear

So, how do we hear? outer middle ear inner ear The ability to hear is critical to understanding the world around us. The human ear is a fully developed part of our bodies at birth and responds to sounds that are very faint as well as sounds that are

More information

TECHNICAL LISTENING TRAINING: IMPROVEMENT OF SOUND SENSITIVITY FOR ACOUSTIC ENGINEERS AND SOUND DESIGNERS

TECHNICAL LISTENING TRAINING: IMPROVEMENT OF SOUND SENSITIVITY FOR ACOUSTIC ENGINEERS AND SOUND DESIGNERS TECHNICAL LISTENING TRAINING: IMPROVEMENT OF SOUND SENSITIVITY FOR ACOUSTIC ENGINEERS AND SOUND DESIGNERS PACS: 43.10.Sv Shin-ichiro Iwamiya, Yoshitaka Nakajima, Kazuo Ueda, Kazuhiko Kawahara and Masayuki

More information

The effect of mismatched recording conditions on human and automatic speaker recognition in forensic applications

The effect of mismatched recording conditions on human and automatic speaker recognition in forensic applications Forensic Science International 146S (2004) S95 S99 www.elsevier.com/locate/forsciint The effect of mismatched recording conditions on human and automatic speaker recognition in forensic applications A.

More information

Cochlear Implant, Bone Anchored Hearing Aids, and Auditory Brainstem Implant

Cochlear Implant, Bone Anchored Hearing Aids, and Auditory Brainstem Implant Origination: 06/23/08 Revised: 10/13/14 Annual Review: 11/12/15 Purpose: To provide cochlear implant, bone anchored hearing aids, and auditory brainstem implant guidelines for the Medical Department staff

More information

How To Test For Deafness

How To Test For Deafness Technology Assessment Technology Assessment Program Effectiveness of Cochlear Implants in Adults with Sensorineural Hearing Loss Prepared for: Agency for Healthcare Research and Quality 540 Gaither Road

More information

Training the adult brain to listen

Training the adult brain to listen The Hearing Journal, Volume 58, Num. 6, 2005 Page Ten Training the adult brain to listen By Robert R. Sweetow 1 I know that Page Ten is in The Hearing Journal. But you re talking about listening. What

More information