<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurol.</journal-id>
<journal-title>Frontiers in Neurology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurol.</abbrev-journal-title>
<issn pub-type="epub">1664-2295</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fneur.2020.00161</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neurology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Changes in Speech-Related Brain Activity During Adaptation to Electro-Acoustic Hearing</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Balkenhol</surname> <given-names>Tobias</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Wallh&#x000E4;usser-Franke</surname> <given-names>Elisabeth</given-names></name>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/338285/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Rotter</surname> <given-names>Nicole</given-names></name>
</contrib>
<contrib contrib-type="author">
<name><surname>Servais</surname> <given-names>J&#x000E9;r&#x000F4;me J.</given-names></name>
</contrib>
</contrib-group>
<aff><institution>Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University</institution>, <addr-line>Mannheim</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Agnieszka J. Szczepek, Charit&#x000E9; &#x02013; Universit&#x000E4;tsmedizin Berlin, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Dayse Tavora-Vieira, Fiona Stanley Hospital, Australia; Norbert Dillier, University of Zurich, Switzerland</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Tobias Balkenhol <email>tobias.balkenhol&#x00040;medma.uni-heidelberg.de</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Neuro-Otology, a section of the journal Frontiers in Neurology</p></fn>
<fn fn-type="other" id="fn002"><p>&#x02020;These authors share first authorship</p></fn></author-notes>
<pub-date pub-type="epub">
<day>31</day>
<month>03</month>
<year>2020</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>11</volume>
<elocation-id>161</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>12</month>
<year>2019</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>02</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2020 Balkenhol, Wallh&#x000E4;usser-Franke, Rotter and Servais.</copyright-statement>
<copyright-year>2020</copyright-year>
<copyright-holder>Balkenhol, Wallh&#x000E4;usser-Franke, Rotter and Servais</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p><bold>Objectives:</bold> Hearing improves significantly with bimodal provision, i.e., a cochlear implant (CI) at one ear and a hearing aid (HA) at the other, but performance shows a high degree of variability, resulting in substantial uncertainty about the performance an individual CI user can expect. The objective of this study was to explore how auditory event-related potentials (AERPs) of bimodal listeners in response to spoken words approximate the electrophysiological response of normal hearing (NH) listeners.</p>
<p><bold>Study Design:</bold> Explorative prospective analysis during the first 6 months of bimodal listening using a within-subject repeated measures design.</p>
<p><bold>Setting:</bold> Academic tertiary care center.</p>
<p><bold>Participants:</bold> Twenty-seven adult participants with bilateral sensorineural hearing loss who received a HiRes 90K CI and continued to use a HA at the non-implanted ear. Age-matched NH listeners served as controls.</p>
<p><bold>Intervention:</bold> Cochlear implantation.</p>
<p><bold>Main Outcome Measures:</bold> Obligatory auditory evoked potentials N1 and P2, and the event-related N2 potential in response to monosyllabic words and their reversed sound traces before, as well as 3 and 6 months post-implantation. The task required word/non-word classification. Stimuli were presented within speech-modulated noise. Loudness of word/non-word signals was adjusted individually to achieve the same intelligibility across groups and assessments.</p>
<p><bold>Results:</bold> Intelligibility improved significantly with bimodal hearing, and the N1&#x02013;P2 response approximated the morphology seen in NH with enhanced and earlier responses to the words compared to their reversals. For bimodal listeners, a prominent negative deflection was present between 370 and 570 ms post stimulus onset (N2), irrespective of stimulus type. This was absent for NH controls; hence, this response did not approximate the NH response during the study interval. N2 source localization evidenced extended activation of general cognitive areas in frontal and prefrontal brain areas in the CI group.</p>
<p><bold>Conclusions:</bold> Prolonged and spatially extended processing in bimodal CI users suggests employment of additional auditory&#x02013;cognitive mechanisms during speech processing. This does not reduce within 6 months of bimodal experience and may be a correlate of the enhanced listening effort described by CI listeners.</p></abstract>
<kwd-group>
<kwd>cochlear implant</kwd>
<kwd>auditory event-related potentials</kwd>
<kwd>speech intelligibility</kwd>
<kwd>electroencephalography</kwd>
<kwd>source localization</kwd>
<kwd>auditory rehabilitation</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="3"/>
<equation-count count="0"/>
<ref-count count="109"/>
<page-count count="20"/>
<word-count count="16676"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Cochlear implant (CI) technology has experienced remarkable progress within its 40 years of use and, today, supports a high level of auditory performance for many CI users. However, CI performance still drops substantially in background noise, and even the best performers hear significantly worse than listeners with normal hearing (NH). Moreover, CI performance shows a high degree of variability, resulting in substantial uncertainty about the performance any individual CI user can expect. Insufficient knowledge about the time interval needed to reach an individual&#x00027;s maximum speech understanding adds to this uncertainty.</p>
<p>While it is largely unexplored whether and how alterations following hearing impairment can be reversed, or how they are compensated with CI use, it is of major interest to identify brain processes related to successful hearing rehabilitation, as well as their time course. Behavioral analysis is not sufficient in this regard. Instead, tools that allow repeated exploration of central auditory processing during the course of auditory rehabilitation are needed. Electroencephalography (EEG) is a useful tool with which to investigate these processes: EEG is compatible with the CI device, its non-invasiveness allows repeated assessments, and its high temporal resolution is appropriate for outlining the dynamics of the brain&#x00027;s response to verbally presented speech.</p>
<p>Hearing with an implant, in particular intelligibility in challenging conditions, needs time to develop, indicating that brain plasticity plays a role (<xref ref-type="bibr" rid="B1">1</xref>). In animal models, auditory deprivation is associated with a reduction in connections and a coarsening of the refined connectivity patterns seen with normal hearing (<xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B3">3</xref>). In humans, even post-lingual auditory impairment affects processing within the central auditory system as evidenced by loss of lateralization, recruitment of additional brain areas, and cross-modal reorganization (<xref ref-type="bibr" rid="B4">4</xref>). As the most obvious improvements in speech understanding are seen within the first 6 months of CI use (<xref ref-type="bibr" rid="B5">5</xref>), it is of interest to explore whether brain activity in response to spoken speech changes within this time interval and whether the auditory event-related potentials (AERPs) of CI users approximate the responses seen in NH listeners.</p>
<p>Binaural hearing is essential for intelligibility in challenging environments, such as in noisy surroundings (<xref ref-type="bibr" rid="B6">6</xref>). Because of extensive binaural interactions in the brain&#x00027;s auditory system, disturbance of input from either ear interferes with central processing of auditory signals (<xref ref-type="bibr" rid="B7">7</xref>). Therefore, restoration of binaural input is expected to improve intelligibility, especially in challenging listening conditions. It is unclear, however, whether this can be achieved by current bimodal provision, i.e., electric hearing via CI on one ear and aided acoustic hearing with a hearing aid (HA) on the other ear. Currently, bimodal provision is a common, if not the most common, form of CI provision, but listening remains particularly challenging for this group (<xref ref-type="bibr" rid="B8">8</xref>). This may be due to the functional anatomy of the cochlea and the processing characteristics of CI and HA devices, meaning that the electrically and acoustically transmitted signals match poorly regarding frequency representation and timing (<xref ref-type="bibr" rid="B9">9</xref>, <xref ref-type="bibr" rid="B10">10</xref>). With bimodal provision, the brain has to combine the divergent signals from both ears and match their neural trace with stored representations of language elements. Extensive auditory training is necessary to achieve this and to adapt to the new set of acoustic&#x02013;phonetic cues. While there is evidence for a bimodal benefit in speech perception tests (<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B12">12</xref>), neurophysiological alterations associated with successful bimodal comprehension remain to be explored. It is likely that cognitive (top&#x02013;down) processing compensates for some of the binaural discrepancies in sensory (bottom&#x02013;up) processing. 
However, this probably extends and prolongs the brain&#x00027;s occupation with a stimulus (<xref ref-type="bibr" rid="B13">13</xref>), which may be disadvantageous for speech understanding. If present, such extensions can be directly evidenced by AERP recordings.</p>
<p>While listeners in typical ecological scenarios are exposed to supra-threshold stimuli, clinical evaluation and much of auditory research are concerned with threshold evaluation, whereas testing of supra-threshold abilities is only beginning. Even with supra-threshold stimuli, problems with intelligibility arise, specifically in challenging listening conditions such as background noise. Increasing amplification does not always result in better intelligibility. Therefore, it remains to be explored which processes besides binaural hearing promote supra-threshold intelligibility in noisy environments for CI users (<xref ref-type="bibr" rid="B14">14</xref>, <xref ref-type="bibr" rid="B15">15</xref>). Here, again, AERPs may prove to be a valuable tool with which to investigate processes that deviate between NH and CI users and to explore how CI experience changes the brain&#x00027;s response over time.</p>
<p>Bimodal CI users report persistent problems when listening to speech in noisy environments, despite ample listening experience. When listening to spoken speech, listeners have to integrate brief and transient acoustic cues, deal with talker variability, and map the auditory input to their mental lexicon, which contains a multitude of partially overlapping word representations. In addition, processing of single words must be rapid in order to follow everyday speech. The processing of spoken speech, from acoustic signal perception to comprehension of meaning, was shown to comprise multiple dissociable steps involving bottom&#x02013;up sensory and top&#x02013;down cognitive processing (<xref ref-type="bibr" rid="B16">16</xref>&#x02013;<xref ref-type="bibr" rid="B19">19</xref>). It is generally assumed that the mental lexicon of speech elements is retained even during long periods of severe hearing impairment and is still accessible with electric hearing, as evidenced by open set speech understanding in CI users (<xref ref-type="bibr" rid="B20">20</xref>, <xref ref-type="bibr" rid="B21">21</xref>). While behavioral measures evaluate the endpoint of this process, AERPs allow continuous recording of the brain&#x00027;s response to speech stimuli and, therefore, are a means to disentangle these processes. AERPs make it possible to explore changes in the temporal dynamics of the response during the course of auditory rehabilitation and to characterize and quantify remaining difficulties. Although natural speech is acoustically complex, AERPs can be recorded in response to natural speech tokens. Responses are stable within an individual, suggesting that they are suitable for detecting changes over time (<xref ref-type="bibr" rid="B22">22</xref>). 
Single steps of language processing have been closely studied by electrophysiological measures in NH and hearing-impaired listeners (<xref ref-type="bibr" rid="B16">16</xref>, <xref ref-type="bibr" rid="B19">19</xref>), and they are beginning to be studied in CI users (<xref ref-type="bibr" rid="B23">23</xref>&#x02013;<xref ref-type="bibr" rid="B28">28</xref>). Importantly, AERPs of NH listeners provide a template against which to compare the responses obtained from bimodal CI users.</p>
<p>Besides the time course of bimodal rehabilitation, mapping of an auditory signal to word/non-word categories is a focus of the present study. This is important for the rapid processing of speech elements, and it is learned early in development (<xref ref-type="bibr" rid="B29">29</xref>, <xref ref-type="bibr" rid="B30">30</xref>). During classification of an auditory stimulus as a word, an early N1&#x02013;P2 response is expected, indicating perception of the stimulus; it may be followed by a late N400 response related to lexical access (<xref ref-type="bibr" rid="B31">31</xref>). The early N1&#x02013;P2 response is typically elicited by spectrally complex acoustic signals, including words. It can be recorded from CI users and has been shown to be modulated by background noise in NH and CI listeners (<xref ref-type="bibr" rid="B22">22</xref>&#x02013;<xref ref-type="bibr" rid="B24">24</xref>, <xref ref-type="bibr" rid="B32">32</xref>). The N1&#x02013;P2 response consists of a negative deflection peaking about 100 ms after stimulus onset (N1) and a positive deflection at around 200 ms (P2). Larger N1&#x02013;P2 amplitudes and shorter latencies of the N1 peak are associated with rising sound intensity (<xref ref-type="bibr" rid="B33">33</xref>). After implantation, the N1 elicited in an auditory discrimination task improves rapidly and stabilizes over the first 8&#x02013;15 weeks of CI experience (<xref ref-type="bibr" rid="B26">26</xref>). Furthermore, it has been suggested that the N1&#x02013;P2 complex can be used to monitor neurophysiological changes during auditory training in CI users (<xref ref-type="bibr" rid="B32">32</xref>, <xref ref-type="bibr" rid="B34">34</xref>). 
In addition, auditory cortex activation is dependent on the learned subjective quality of sounds, evidenced by enhanced and faster early processing of speech sounds compared to their non-speech counterparts, and by the stronger cortical response to familiar than to unfamiliar phonemes (<xref ref-type="bibr" rid="B29">29</xref>, <xref ref-type="bibr" rid="B30">30</xref>). Thus, a match between the sensory stimulus and a stored representation should lead to a stronger and faster N1&#x02013;P2 response, and with increasing CI experience, N1 and P2 are expected to approximate the response seen for NH listeners.</p>
<p>Beyond sensory processing, speech tokens are subjected to higher-order processing for lexical mapping. Starting at about 200&#x02013;300 ms and peaking at about 400 ms following word onset, a broad negative deflection is typically seen, termed the N400 (<xref ref-type="bibr" rid="B31">31</xref>). This late response is observed in response to all meaningful, or even potentially meaningful, stimuli, including written, spoken, and signed words, images, and environmental sounds. The N400 reflects activity within a widespread multimodal semantic memory network, and its amplitude is thought to represent the amount of new semantic information becoming available in response to the auditory input (<xref ref-type="bibr" rid="B31">31</xref>, <xref ref-type="bibr" rid="B35">35</xref>). As ease of lexical access reduces this response (<xref ref-type="bibr" rid="B31">31</xref>), difficulty in matching the incoming signal with stored representations, such as during effortful listening, may be evidenced by an increase, as has been shown previously (<xref ref-type="bibr" rid="B36">36</xref>). Therefore, this late negative deflection is expected to be enhanced before CI provision as well as with little experience in bimodal hearing, while with ample CI experience it is expected to decrease to the magnitude seen in NH listeners. The N400 is of long duration and does not always appear as a single clearly defined peak in individual-subject averages (<xref ref-type="bibr" rid="B37">37</xref>). Some studies were able to differentiate two separate speech-related negativities, termed N200 and N400, whereas such a distinction was not evident in other studies (<xref ref-type="bibr" rid="B19">19</xref>). Because of discrepancies between studies, we, in accordance with Finke et al. 
(<xref ref-type="bibr" rid="B23">23</xref>, <xref ref-type="bibr" rid="B24">24</xref>), use the term N2 following the recommendation in Luck (<xref ref-type="bibr" rid="B38">38</xref>), which indicates that the N2 is a negative deflection following the N1 response.</p>
<p>Neuroimaging studies indicate that increases in listening effort are associated with increased activation in general cognitive regions such as prefrontal cortex (<xref ref-type="bibr" rid="B4">4</xref>, <xref ref-type="bibr" rid="B39">39</xref>&#x02013;<xref ref-type="bibr" rid="B41">41</xref>). This is reminiscent of developments seen in healthily aging high-performing individuals, where reduction of perceptual and cognitive abilities is compensated for by increased engagement of general cognitive brain areas, such as regions of the attention and salience networks of the brain. This is evidenced by greater or more widespread activity, as seen in hyper-frontality and loss of lateralization (<xref ref-type="bibr" rid="B42">42</xref>, <xref ref-type="bibr" rid="B43">43</xref>). Perceptual auditory abilities are limited in CI users, who also report increased levels of listening effort. Bimodal listening appears to be particularly demanding in this respect (<xref ref-type="bibr" rid="B8">8</xref>). Therefore, recruitment of additional brain areas during word/non-word classification is expected for the CI group. It is expected to persist despite CI experience and similar intelligibility across CI and NH groups.</p>
<p>The aim of this study was to characterize the unfolding of lexical access in bimodal CI users and to explore whether it approximates the characteristics seen in NH. Our main interest was to explore whether neural efficacy, indicated by shorter latency and more spatially focused neural activation of the late N2 response, increases with CI experience in difficult listening conditions. The focus was on an early stage of language processing, namely, classification of words, as opposed to acoustically similar complex non-word stimuli. To minimize a confounding influence of age-related central alterations, the age of each NH listener was matched to that of a corresponding CI user. The hypotheses were: (i) the magnitude of the N1 response is related to audibility; as loudness is individually adjusted to achieve a set intelligibility criterion, N1 amplitude and latency are expected to be similar across NH and CI listeners and to be stable between pre- and post-CI assessments; (ii) later potentials such as P2 and N2 are expected to deviate between the CI and NH groups and to approximate the NH pattern with increasing CI experience; (iii) based on the familiarity of the words as opposed to the non-words, differences between responses to words and non-words will exist in NH; they may be absent early after implantation but are expected to increase with CI experience in the bimodal group; (iv) as the task remains effortful for the bimodal CI users, additional cognitive resources are expected to be active to compensate for the distorted signals. This should lead to extended processing of the signals, evidenced by prolonged activation in the AERP trace and by increased engagement of the attention and salience networks of the brain. As listening effort remains high in the CI group, this type of activation is expected to remain higher than in NH despite extended CI experience.</p>
</sec>
<sec id="s2">
<title>Participants and Methods</title>
<sec>
<title>Participants</title>
<p>Before initiation, the study protocol was approved by the Institutional Review Board of the Medical Faculty of Mannheim at Heidelberg University (approval no. 2014-527N-MA). Prior to inclusion, each participant provided written consent for participation in the study. Consent was acquired in accordance with the Declaration of Helsinki. CI participants were compensated for their time on test days T3 and T4. NH listeners were compensated at their single visit.</p>
<p>Between 2014 and 2017, study participants were recruited from the patients at the CI Center of the University Medical Center Mannheim. Prospective participants were adults with previous acoustic auditory experience. Inclusion criteria comprised first-time unilateral CI provision, a HiRes 90K implant as chosen by the patient, continued HA use at the other ear, and age between 18 and 90 years. All patients who fulfilled these criteria were approached for inclusion. Exclusion criteria were assessed during an initial interview (T1) and included the presence of an internal stimulator besides the CI, insufficient knowledge of the German language, and a more than mild cognitive deficit, as assessed by the DemTect Test (<xref ref-type="bibr" rid="B44">44</xref>). The initial interview, study inclusion (T1), and pre-surgery examination (T2) took place on the same day, usually the day before surgery.</p>
<p>Patients received a CI on their poorer ear, while HA use was continued on the other ear. They left the hospital, on average, 3 days post-surgery. Two to three weeks later, they participated in a week-long in-patient program with first fitting of the speech processor, several fitting sessions, and technical instruction on CI use. Post-implantation assessments T3 and T4 were scheduled for 3 and 6 months post-implantation, respectively. At each assessment, study participants underwent audiometric testing, filled out standardized questionnaires [see Wallh&#x000E4;usser-Franke et al. (<xref ref-type="bibr" rid="B11">11</xref>) and below], and underwent AERP recordings. Aspects of hearing and tinnitus in this group apart from AERP recordings were reported previously (<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B45">45</xref>). Independent of the study, between T3 and T4, nine of the participants took part in an in-patient program at a specialized CI rehabilitation clinic, whereas the others used regular out-patient CI rehabilitation services.</p>
<p>Control participants were recruited by word of mouth and from the employees of the University Medical Center Mannheim. Inclusion criteria were German as native language, age-adequate normal hearing, no past or present neurological or psychological problems, and right-handedness. Participants underwent the same screening and performed the same tests as the CI group.</p>
<p>Twenty-seven patients with hearing loss at both ears who planned to undergo unilateral cochlear implant provision were screened. One was excluded because of an exclusion criterion, and the remaining 26 were included in the study. Two discontinued the study following sequential bilateral implantation, one decided that study participation after T2 was too much effort, two discontinued for reasons they did not disclose, and one was excluded because of an exclusion criterion that had not been disclosed before. Reasons for exclusion from the AERP analysis were a missing AERP recording at one of the assessments for one participant and left-handedness in another, leaving AERP data for 18 participants. Another three participants were excluded because of an incidence of sudden hearing loss in the non-implanted ear associated with Meniere&#x00027;s disease, not using the HA at the non-implanted ear at T4, or substantial changes in loudness tuning of the HA between T3 and T4. This resulted in 15 participants who contributed data toward the AERP analysis. For demographic details of this group, see <xref ref-type="table" rid="T1">Table 1</xref>. All study participants were native German speakers and used the NAIDA Q70 speech processor. At T2, 80% used a HA at both ears (<xref ref-type="table" rid="T1">Table 1</xref>), and at T3 and T4, all non-implanted ears were aided by auditory amplification. HA devices were of different brands and were used with participants&#x00027; typical daily settings during the course of testing.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Participant characteristics and stimulation level.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center"><bold>CI group (<italic>N</italic><sub><bold>CI</bold></sub> &#x0003D; 15)</bold></th>
<th valign="top" align="center"><bold>NH group (<italic>N</italic><sub><bold>NH</bold></sub> &#x0003D; 14)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><bold>Age</bold> Mean &#x000B1; SD (range) in years</td>
<td valign="top" align="center">57.67 &#x000B1; 14.95 (27&#x02013;78)</td>
<td valign="top" align="center">57.21 &#x000B1; 13.69 (24&#x02013;76)</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Sex</bold> female/male</td>
<td valign="top" align="center">12/3</td>
<td valign="top" align="center">12/2</td>
</tr>
<tr>
<td valign="top" align="left"><bold>CI ear</bold> left/right</td>
<td valign="top" align="center">8/7</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>Years with hearing impairment</bold> Mean &#x000B1; SD (range)</td>
<td valign="top" align="center">CI ear: 27.20 &#x000B1; 18.14 (2&#x02013;56) HA ear: 24.21 &#x000B1; 19.01 (2&#x02013;56)</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>Days between implantation and assessment</bold> Mean &#x000B1; SD (range)</td>
<td valign="top" align="center">T2: 2.87 &#x000B1; 7.24 (1&#x02013;29) T3: 99.47 &#x000B1; 18.17 (75&#x02013;145) T4: 235.47 &#x000B1; 76.96 (170&#x02013;427)</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>Lifetime with hearing impairment</bold> Mean &#x000B1; SD in %</td>
<td valign="top" align="center">CI ear: 53.72 &#x000B1; 39.01 HA ear: 24.21 &#x000B1; 19.01</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>HA use at future CI ear</bold> yes/no</td>
<td valign="top" align="center">12/3</td>
<td/>
</tr>
<tr>
<td valign="top" align="left" colspan="3"><bold>PTA-4</bold> Mean &#x000B1; SD in dB HL</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Pre-implantation</bold></td>
<td valign="top" align="center">CI ear: 96.03 &#x000B1; 16.81 HA ear: 68.10 &#x000B1; 17.99</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>Post-implantation</bold></td>
<td valign="top" align="center">CI ear: 46.13 &#x000B1; 12.37 HA ear: 68.18 &#x000B1; 18.00</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>T2 SNR</bold> Mean &#x000B1; SD (range) in dB</td>
<td valign="top" align="center">15.87 &#x000B1; 6.90 (7&#x02013;30)</td>
<td valign="top" align="center">&#x02212;2.00 &#x000B1; 2.39 (&#x02212;6 to 2)</td>
</tr>
<tr>
<td valign="top" align="left"><bold>T3, T4 SNR</bold> Mean &#x000B1; SD (range) in dB</td>
<td valign="top" align="center">10.07 &#x000B1; 5.51<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;&#x0002A;</sup></xref> (1&#x02013;20)</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>T2 words detected</bold> Mean &#x000B1; SD in %</td>
<td valign="top" align="center">69.72 &#x000B1; 11.46</td>
<td valign="top" align="center">69.72 &#x000B1; 13.19</td>
</tr>
<tr>
<td valign="top" align="left"><bold>T3 words detected</bold> Mean &#x000B1; SD in %</td>
<td valign="top" align="center">61.06 &#x000B1; 20.83</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>T4 words detected</bold> Mean &#x000B1; SD in %</td>
<td valign="top" align="center">68.00 &#x000B1; 9.47</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>HADS&#x02014;Anxiety</bold></td>
<td valign="top" align="center">T2: 5.80 &#x000B1; 4.18; T4: 4.20 &#x000B1; 3.08</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>HADS&#x02014;Depression</bold></td>
<td valign="top" align="center">T2: 4.53 &#x000B1; 4.56; T4: 3.80 &#x000B1; 3.95</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>General health rating</bold></td>
<td valign="top" align="center">T2: 2.40 &#x000B1; 0.99; T4: 2.67 &#x000B1; 0.98</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>Relevant other health conditions</bold></td>
<td valign="top" align="center">9</td>
<td/>
</tr>
<tr>
<td valign="top" align="left"><bold>Tinnitus</bold> yes/no</td>
<td valign="top" align="center">11</td>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Aided thresholds at 0.5, 1, 2, and 4 kHz were recorded separately for each ear in free sound field using standard audiometric procedures (<xref ref-type="bibr" rid="B11">11</xref>), and results were averaged (PTA-4). In the CI group, the signal-to-noise ratio (SNR) needed to detect 70% of the words in a stimulation block during AERP recordings decreased significantly (</italic></p>
<fn id="TN2">
<label>&#x0002A;&#x0002A;</label>
<p><italic>p = 0.001) between T2 and T3</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>For each participant who completed the AERP measurement, a right-handed, age-, and sex-matched control with age-adequate normal hearing was recruited. Data from one NH participant were excluded because of a poor AERP recording. Average hearing thresholds between 0.25 and 10 kHz across both ears of the 14 control participants were 17.93 &#x000B1; 10.32 dB. Demographics of the 14 NH participants are also presented in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
</sec>
<sec>
<title>History of Hearing Loss</title>
<p>At inclusion, all CI participants could communicate verbally when using their HA. Six participants reported hearing problems since early childhood, while nine had post-lingual onset of profound hearing impairment. On average, severe hearing impairment of the CI ear had existed for half of the lifetime, while hearing impairment at the HA ear was of shorter duration (<xref ref-type="table" rid="T1">Table 1</xref>). The cause of hearing loss was unknown in 73% of participants; two had experienced sudden hearing loss, one had Meniere&#x00027;s disease, and one suffered from Stickler syndrome.</p>
</sec>
<sec>
<title>Acceptance of Bimodal Hearing</title>
<p>By the first formal appointment at the CI Center of the University Medical Center Mannheim, 4 weeks after surgery, participants&#x00027; mean daily processor use was 11 h. At the end of the study (T4), all but one participant reported combined daily use of CI and HA for more than 8 h per day. Eight participants always used CI and HA in combination, while the others reported situations during which use of the HA was inconvenient, most commonly during conversations in quiet. On a scale from 0 (no change) to &#x0002B;5 (more content) or &#x02212;5 (less content), satisfaction with the CI was higher (2.67 &#x000B1; 1.23) than with the HA (0.80 &#x000B1; 1.97) or with the combination of both devices (1.73 &#x000B1; 1.91). Quality of life had improved for 10 participants and remained unchanged for the others.</p>
<p>All but one participant had performed CI training with different materials during the week preceding T4, with most (10 participants) training 2&#x02013;4 h per week. All but two participants cohabitated with at least one other person. The peers&#x00027; reception of CI use was perceived as positive by nine participants, as interested or curious by two, as normal by one, and as mixed by three.</p>
</sec>
<sec>
<title>Health-Related Factors</title>
<p>In addition to hearing status, participants rated their general health at T2 and T4 (poor: 0, moderate: 1, okay: 2, good: 3, very good: 4). Mental health was assessed with the Hospital Anxiety and Depression Scale (HADS) (<xref ref-type="bibr" rid="B46">46</xref>) at the same assessments (<xref ref-type="bibr" rid="B45">45</xref>).</p>
</sec>
<sec>
<title>Setup</title>
<p>Audiometric testing and AERP recordings were performed within a dimly lit sound booth shielded against electromagnetic interference (IAC Acoustics, North Aurora, IL, USA). The booth was connected with the experimenter&#x00027;s room via a glass window, which, together with a camera in the recording booth, allowed constant surveillance of the participant. During testing, participants sat in a comfortable armchair.</p>
<p>During AERP recording and audiometry, auditory stimuli were presented in sound field via an M-Audio Fast Track Ultra USB Audio Interface and BX5 near-field monitor loudspeaker (inMusic Brand, Cumberland, RI, USA) located 1 m in front of the participant (0&#x000B0; azimuth: S0). For noise delivery, two additional loudspeakers of the same type as above were placed at &#x000B1;90&#x000B0; azimuth at a distance of 1 m to the participant&#x00027;s head (<xref ref-type="fig" rid="F1">Figure 1</xref>). Before each test session, sound pressure level was calibrated by a type 2250 sound level meter (Br&#x000FC;el &#x00026; Kj&#x000E6;r, N&#x000E6;rum, Denmark) with &#x000B1;0.5 dB accuracy at the center of the participant&#x00027;s head during testing (<xref ref-type="bibr" rid="B47">47</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Localization of sound sources and electrode positions. Electrode positions on the scalp (black), ear lobes (red), and eyes (green) are indicated. Ground at Fpz is shown in blue. In the example shown here, a cochlear implant (CI) aids the right ear, whereas a hearing aid (HA) is worn on the left ear. While speech signals were always presented from the front (S0), noise was presented from one of three loudspeakers, here the one facing the HA ear (NHA), whereas the third loudspeaker, here facing the CI ear, was inactive.</p></caption>
<graphic xlink:href="fneur-11-00161-g0001.tif"/>
</fig>
<sec>
<title>Speech Audiometry</title>
<p>Audiometry performed for this study and self-assessment of the improvement of auditory communication in daily life were described in more detail in a previous report (<xref ref-type="bibr" rid="B11">11</xref>). In short, perceived improvements in auditory communication following bimodal provision were assessed with the benefit version of the Speech, Spatial, and Qualities of Hearing Questionnaire (SSQ-B) (<xref ref-type="bibr" rid="B48">48</xref>, <xref ref-type="bibr" rid="B49">49</xref>). The questionnaire focuses on speech comprehension (SSQ-B1), localization of sound sources (SSQ-B2), and sound quality (SSQ-B3) in a variety of ecological situations. The respondent is asked whether the situation has changed compared to pre-CI hearing. Responses are indicated on a rating scale from &#x02212;5 to &#x0002B;5. Positive scores indicate improvement, negative scores indicate worsening, and 0 represents no change. For all questions, there is the option to tick &#x0201C;not applicable&#x0201D;. Means and their standard deviations were calculated for each of the SSQ-B1&#x02013;3 scales.</p>
<p>During all audiometric tests, speech signals were presented by male talkers, and speech was always presented in sound field from a loudspeaker in front of the participant (S0). Speech comprehension in quiet was tested with the Freiburger Monosyllable Test (FBE) (<xref ref-type="bibr" rid="B50">50</xref>, <xref ref-type="bibr" rid="B51">51</xref>) and the Oldenburg matrix sentence test (OlSa) (<xref ref-type="bibr" rid="B52">52</xref>&#x02013;<xref ref-type="bibr" rid="B54">54</xref>). For testing intelligibility in background noise, speech-modulated OlSa noise was presented at a constant level of 60 dB SPL from the front (N0), from the side of the CI (NCI), or from the side of the HA ear (NHA) together with the OlSa sentences. Listeners verbally repeated the word (FBE) or each word in a sentence (OlSa) as understood, with the experimenter entering the keywords identified correctly. No feedback was given, and lists were not repeated within sessions. Two lists of 20 words each, presented at 70 dB SPL, contributed to FBE results, with higher percentages indicating better intelligibility. OlSa stimuli are five-word sentences of identical syntactic structure with 10 possible words per position. The level of the OlSa speech signal was adapted, starting either at 70 dB in quiet or from a signal-to-noise ratio (SNR) of &#x0002B;10 dB in noise. Twenty sentences were presented per condition, with the last 10 contributing to the measure of the 50% speech reception threshold in quiet (SRT 50%) or the SNR needed for 50% correct comprehension in noise (SNR 50%). If the adaptive track showed no turning points, SRT 50% or SNR 50% for that condition was determined with a second, different OlSa list. In all OlSa tests, lower values for SRT or SNR indicate better intelligibility.</p>
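<p>The adaptive tracking toward a 50% threshold described above can be sketched in code. The following is a deliberately simplified Python illustration, not the actual OlSa adaptation rule (which uses word-score-dependent step sizes); the function name <monospace>adaptive_track</monospace>, the fixed step rule, and the callback <monospace>respond</monospace> are assumptions for the sketch.</p>

```python
import numpy as np

def adaptive_track(respond, start_level=10.0, n_sentences=20, step=2.0):
    """Adapt the presentation level toward 50% intelligibility.

    respond(level) must return the fraction (0..1) of the five words
    of one sentence repeated correctly at the given level or SNR.
    Simplified sketch, not the exact OlSa adaptation rule.
    """
    level = start_level
    levels = []
    for _ in range(n_sentences):
        levels.append(level)
        frac = respond(level)
        # step down after good intelligibility, up after poor intelligibility
        level += 2.0 * step * (0.5 - frac)
    # as in the procedure above, only the last 10 sentences enter the estimate
    return float(np.mean(levels[-10:]))
```

<p>With 20 sentences per condition, the mean level over the last 10 sentences serves as the SRT 50% estimate in quiet or the SNR 50% estimate in noise.</p>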
<p>The impact of CI provision on audiometric results was assessed for each audiometric test with a general linear model for repeated measurements (GLM) with Bonferroni correction, as provided by SPSS 24 (SPSS/IBM, Chicago, IL, USA). Values of <italic>p</italic> &#x0003C; 0.05 were considered statistically significant, and values of <italic>p</italic> &#x0003C; 0.01 highly significant. Group means together with their standard deviations (SD), and an indication of whether the change between T2 and T4 was significant, are shown in <xref ref-type="table" rid="T2">Table 2</xref>.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Development of speech comprehension.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center"><bold>T2</bold></th>
<th valign="top" align="center"><bold>T3</bold></th>
<th valign="top" align="center"><bold>T4</bold></th>
<th valign="top" align="center"><bold>Significance of change</bold></th>
<th valign="top" align="center"><bold>NH</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><bold>FBE</bold> correct in %</td>
<td valign="top" align="center">57.83 &#x000B1; 31.45</td>
<td valign="top" align="center">65.50 &#x000B1; 25.46</td>
<td valign="top" align="center">68.17 &#x000B1; 25.83</td>
<td valign="top" align="center"><italic>F</italic> = 1.727, <italic>p</italic> = 0.200</td>
<td valign="top" align="center">98.93 &#x000B1; 1.62</td>
</tr>
<tr>
<td valign="top" align="left"><bold>OlSa S0</bold> SRT 50% in dB</td>
<td valign="top" align="center">54.27 &#x000B1; 17.04</td>
<td valign="top" align="center">45.77 &#x000B1; 7.83</td>
<td valign="top" align="center">43.78 &#x000B1; 7.18</td>
<td valign="top" align="center"><italic><bold>F</bold></italic> <bold>&#x0003D; 9.448<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;&#x0002A;</sup></xref></bold>, <italic><bold>p</bold></italic> <bold>&#x0003D; 0.006</bold></td>
<td valign="top" align="center">21.25 &#x000B1; 5.40</td>
</tr>
<tr>
<td valign="top" align="left"><bold>OlSa S0N0</bold> SNR 50% in dB</td>
<td valign="top" align="center">3.78 &#x000B1; 5.88</td>
<td valign="top" align="center">1.17 &#x000B1; 5.89</td>
<td valign="top" align="center">0.19 &#x000B1; 4.56</td>
<td valign="top" align="center"><italic><bold>F</bold></italic> <bold>&#x0003D; 6.622<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;&#x0002A;</sup></xref></bold>, <italic><bold>p</bold></italic> <bold>&#x0003D; 0.007</bold></td>
<td valign="top" align="center">&#x02212;6.21 &#x000B1; 2.73</td>
</tr>
<tr>
<td valign="top" align="left"><bold>OlSa S0NCI</bold> SNR 50% in dB</td>
<td valign="top" align="center">1.91 &#x000B1; 5.28</td>
<td valign="top" align="center">0.24 &#x000B1; 7.49</td>
<td valign="top" align="center">&#x02212;0.91 &#x000B1; 7.47</td>
<td valign="top" align="center"><italic>F</italic> = 3.132, <italic>p</italic> = 0.059</td>
<td valign="top" align="center">&#x02212;12.24 &#x000B1; 2.21</td>
</tr>
<tr>
<td valign="top" align="left"><bold>OlSa S0NHA</bold> SNR 50% in dB</td>
<td valign="top" align="center">3.73 &#x000B1; 5.16</td>
<td valign="top" align="center">0.92 &#x000B1; 4.72</td>
<td valign="top" align="center">&#x02212;0.67 &#x000B1; 4.22</td>
<td valign="top" align="center"><italic><bold>F</bold></italic> <bold>&#x0003D; 9.066<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;&#x0002A;</sup></xref></bold>, <italic><bold>p</bold></italic> <bold>&#x0003D; 0.001</bold></td>
<td valign="top" align="center">&#x02212;12.03 &#x000B1; 2.72</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Intelligibility in the binaural listening condition was assessed before (T2), as well as 3 (T3) and 6 (T4) months post-implantation in bimodal CI users, and for the normal hearing (NH) group. Intelligibility in quiet (S0) was assessed with the Freiburg Monosyllable Test (FBE) (<xref ref-type="bibr" rid="B50">50</xref>, <xref ref-type="bibr" rid="B51">51</xref>) at 70 dB SPL and with the adaptive version of the Oldenburg matrix sentence test (OlSa) (<xref ref-type="bibr" rid="B52">52</xref>&#x02013;<xref ref-type="bibr" rid="B54">54</xref>) determining the presentation level of the 50% speech reception threshold (SRT 50%). To assess intelligibility in noise, speech-shaped noise was presented from the same source (S0N0), from the side of the CI (S0NCI) or the HA ear (S0NHA) again using the adaptive OlSa method, and the signal-to-noise-ratio (SNR) was determined for 50% understanding (SNR 50%). With bimodal provision, significant improvements (</italic></p>
<fn id="TN1">
<label>&#x0002A;&#x0002A;</label>
<p><italic>p &#x0003C; 0.01) were seen for sentence understanding in quiet (S0), and with noise presented from the same direction (S0N0) or on the side of the HA ear (S0NHA)</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>Data Acquisition</title>
<sec>
<title>AERP</title>
<p>AERPs were recorded from 62 active sintered Ag/AgCl surface electrodes arranged in an elastic cap (g.LADYbird/g.GAMMAcap<sup>2</sup>; g.tec Medical Engineering GmbH, Austria) according to the 10/10 system (<xref ref-type="bibr" rid="B55">55</xref>). The electrode at Fpz served as ground (<xref ref-type="fig" rid="F1">Figure 1</xref>). Two additional active sintered Ag/AgCl clip electrodes (g.GAMMAearclip; g.tec) were attached to the left and right earlobes. The electrooculogram (EOG) was monitored with four passive sintered Ag/AgCl surface electrodes (Natus Europe GmbH, Germany) placed below (IO1, IO2) and at the outer canthus (LO1, LO2) of each eye. To protect CI and HA devices, electrodes located above or close to the devices were not filled with gel (mean number of unfilled electrodes: CI: 3, SD: 1.1, range: 1&#x02013;5; HA: 1, SD: 0.5, range: 0&#x02013;2) and were interpolated during post-processing. Impedances were confirmed to be below 5 kOhm for passive electrodes and below 30 kOhm for active electrodes. AERP signals were acquired at a sampling frequency of 512 Hz by a biosignal amplifier (g.HIamp; g.tec) with 24-bit resolution. Amplifier data acquisition and playback of stimuli were controlled using MATLAB/Simulink R2010a (Mathworks, Natick, MA, USA) with custom MATLAB scripts in combination with g.tec&#x00027;s g.HIsys toolbox. Real-time access to the soundcard was realized with the playrec toolbox (<ext-link ext-link-type="uri" xlink:href="http://www.playrec.co.uk">http://www.playrec.co.uk</ext-link>). A trigger box (g.TRIGbox; g.tec) was used to mark stimulus onsets and offsets and to record push button activity (see section Task and Procedure below) in the continuously recorded EEG data.</p>
<p>Stimuli consisted of German monosyllabic words from the Freiburg Monosyllable Test spoken by a male speaker (FBE) (<xref ref-type="bibr" rid="B50">50</xref>), which is the clinical standard for speech audiometry in Germany (<xref ref-type="bibr" rid="B51">51</xref>). Non-words were generated from the time-reversed audio tracks of these monosyllables (reversals). Only reversals that did not resemble a German word, as judged by the lab members, were taken as reversal stimuli. Overall, a set of 269 monosyllabic words and 216 reversed words with a mean length of 770 ms (SD: 98 ms, range: 484&#x02013;1,035 ms) was used for stimulation. Lists of 75 stimuli, of which 30% were words and 70% were reversals, were generated randomly from the whole set for each stimulation block. Lists were not repeated during an assessment. In addition, speech-shaped noise from the OlSa sentence test (<xref ref-type="bibr" rid="B52">52</xref>&#x02013;<xref ref-type="bibr" rid="B54">54</xref>) was presented at 60 dB SPL from a loudspeaker at the participant&#x00027;s HA ear, or the designated HA ear in NH controls (azimuth &#x000B1;90&#x000B0;: NHA). Loudspeaker distance to the participant&#x00027;s head was 1 m (<xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
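<p>The random generation of stimulation lists can be illustrated as follows. This Python sketch implements the stated 75-stimulus blocks with a 30/70 word/reversal split; the study used custom MATLAB scripts, and the function name <monospace>make_block</monospace> is hypothetical.</p>

```python
import random

def make_block(words, reversals, n=75, word_frac=0.30, seed=None):
    """Draw one presentation block of n stimuli without repetition:
    about word_frac of them words, the rest reversals, in random order.
    Illustrative sketch of the list generation described above."""
    rng = random.Random(seed)
    n_words = round(n * word_frac)
    block = rng.sample(words, n_words) + rng.sample(reversals, n - n_words)
    rng.shuffle(block)
    return block
```

<p>Sampling without replacement within a block mirrors the constraint that lists were not repeated during an assessment; enforcing non-repetition across blocks would additionally require removing drawn items from the pools.</p>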
</sec>
<sec>
<title>Task and procedure</title>
<p>Participants were instructed to face the loudspeaker in front of them and to keep their eyes closed during recording. In addition, they were instructed to respond only to words by pressing a button after a burst of white noise was played at 75 dB SPL 1,000 ms after offset of each word or reversal stimulus (<xref ref-type="fig" rid="F2">Figure 2</xref>). The test stimuli and white noise bursts were played from the same loudspeaker, similar to the paradigm in Senkowski et al. (<xref ref-type="bibr" rid="B56">56</xref>). Inter-stimulus interval between the end of the noise burst and the start of the next stimulus was 1,900 &#x000B1; 200 ms yielding 75 stimuli per 5 min presentation block (<xref ref-type="fig" rid="F2">Figure 2</xref>). Each presentation block was followed by a short break before the start of the next block. Overall, 4.04 (SD: 0.81, range: 3&#x02013;7) blocks were recorded per individual assessment.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Stimulus presentation during auditory event-related potential (AERP) recording. Blue striped area: speech stimulus; black striped areas: noise burst prompting a button press if a word was heard before it; gray area: background noise at 60 dB SPL.</p></caption>
<graphic xlink:href="fneur-11-00161-g0002.tif"/>
</fig>
<p>To avoid ceiling and floor effects, and because intermediate difficulty levels provide the best opportunity for compensatory operation of top&#x02013;down processes (<xref ref-type="bibr" rid="B57">57</xref>), the SNR was set to achieve 70% detection of words. This SNR was determined at T2 and T3 in two training blocks, which also served to familiarize participants with the task. If rates deviated substantially from 70% correct classification, the procedure was repeated with an adjusted presentation level. If the button was pressed before the noise burst, that particular AERP was excluded from analysis. At T4, two familiarization blocks were administered using the same SNR as at T3.</p>
</sec>
<sec>
<title>Data pre-processing</title>
<p>EEG data were pre-processed offline with MATLAB R2018a (Mathworks, Natick, MA, USA), the EEGLAB toolbox (version 13.3.2b) (<xref ref-type="bibr" rid="B58">58</xref>), and custom MATLAB scripts. Raw data were (1) re-referenced to linked earlobes, (2) low-pass filtered with a cut-off frequency of 64 Hz, (3) high-pass filtered with a cut-off frequency of 0.5 Hz using finite impulse response (FIR) filters, and (4) segmented into epochs from &#x02212;300 to 2,200 ms relative to stimulus onset. Epochs with amplitudes in single channels outside the range from &#x02212;150 to 150 &#x003BC;V were highlighted during visual inspection, together with epochs containing non-stereotyped artifacts, classified by kurtosis and joint probability (threshold: 3 SD). The final rejection of epochs and identification of poor electrode channels (CI group: mean: 0.9, SD: 1.5, range: 0&#x02013;7; NH group: mean: 0.9, SD: 1.1, range: 0&#x02013;3) were performed by experienced lab members.</p>
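<p>The amplitude and kurtosis screening of epochs can be sketched as follows. This minimal Python/NumPy illustration assumes epochs stored as an (epochs &#x000D7; channels &#x000D7; samples) array in &#x003BC;V; it is a simplified stand-in for EEGLAB&#x00027;s kurtosis and joint-probability routines, not the pipeline&#x00027;s actual code, and flagged epochs would still undergo visual inspection.</p>

```python
import numpy as np

def flag_epochs(epochs, amp_limit=150.0, z_thresh=3.0):
    """Flag epochs for visual review: amplitude beyond +/-amp_limit uV
    in any channel, or channel-averaged excess kurtosis deviating more
    than z_thresh SDs from the across-epoch mean (simplified)."""
    amp_bad = np.any(np.abs(epochs) > amp_limit, axis=(1, 2))
    # excess kurtosis per epoch, averaged over channels
    x = epochs - epochs.mean(axis=2, keepdims=True)
    kurt = (np.mean(x**4, axis=2) / x.var(axis=2) ** 2 - 3.0).mean(axis=1)
    s = kurt.std()
    z = (kurt - kurt.mean()) / s if s > 0 else np.zeros_like(kurt)
    return amp_bad | (np.abs(z) > z_thresh)
```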
<p>Next, EOG artifacts were removed automatically with a second-order blind identification (SOBI) and independent component analysis (ICA) technique (<xref ref-type="bibr" rid="B59">59</xref>&#x02013;<xref ref-type="bibr" rid="B61">61</xref>), as described in Balkenhol et al. (<xref ref-type="bibr" rid="B62">62</xref>). SOBI ICA was also performed to remove electrical artifacts caused by the implant. For this purpose, an automated artifact removal algorithm that identifies artifacts by their power distribution was developed for the present study. Power spectra were determined for all independent components. In response to the acoustic stimuli employed in the present study, implants induced narrow- and wide-band components in the frequency range above 25 Hz. Narrow-band artifacts were detected automatically by a spectral peak search algorithm. Wide-band artifacts were identified by their average power in the frequency range from 40 to 256 Hz relative to the power in the frequency band from 3 to 25 Hz: if spectral power in the interval from 40 to 256 Hz exceeded the power in the low-frequency interval from 3 to 25 Hz, the component was labeled as artifact and removed.</p>
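<p>The wide-band criterion translates directly into code. The following NumPy sketch assumes a single independent component sampled at the study&#x00027;s 512 Hz (so that 256 Hz is the Nyquist frequency) and omits the narrow-band peak search; it illustrates the stated power-ratio rule and is not the original implementation.</p>

```python
import numpy as np

def is_wideband_artifact(component, fs=512):
    """Apply the power-ratio rule described above: label a component
    as a wide-band implant artifact if its mean spectral power between
    40 and 256 Hz exceeds its mean power between 3 and 25 Hz."""
    freqs = np.fft.rfftfreq(component.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(component)) ** 2
    high = power[(freqs >= 40) & (freqs <= 256)].mean()
    low = power[(freqs >= 3) & (freqs <= 25)].mean()
    return high > low
```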
<p>Muscle artifacts, electrical heartbeat activity, and other sources of non-cerebral activity were identified by visual inspection of independent component scalp maps and their power spectra (<xref ref-type="bibr" rid="B38">38</xref>) and were removed by back-projecting all but these components. Finally, unfilled channels and channels of poor quality were interpolated by spherical splines. On average, 253 &#x000B1; 58 responses per participant and assessment remained for data analysis, i.e., 17% of the recorded responses were removed due to artifacts.</p>
</sec>
<sec>
<title>Data analysis</title>
<p>Data analysis was performed in MATLAB R2018a (Mathworks, Natick, MA, USA) with the fieldtrip toolbox (version 20170925; <ext-link ext-link-type="uri" xlink:href="http://www.ru.nl/fcdonders/fieldtrip">http://www.ru.nl/fcdonders/fieldtrip</ext-link>) (<xref ref-type="bibr" rid="B63">63</xref>) and custom MATLAB scripts. Because optimal ROIs differ between potentials and are uncertain for N2, single-subject averages of all 62 scalp electrodes for the categories &#x0201C;words&#x0201D; (all responses to word stimuli), &#x0201C;reversals&#x0201D; (all responses to reversal stimuli), and the combination of word and reversal stimuli (&#x0201C;all&#x0201D;) were used for N1, P2, and N2 evaluation. AERPs with button press before onset of the white noise burst (<xref ref-type="fig" rid="F2">Figure 2</xref>) were not included in the analysis. For baseline correction, the mean of the pre-stimulus interval from &#x02212;150 to &#x02212;50 ms was subtracted from each epoch. The level corresponding to 50% intensity of the stimuli was reached with different delays relative to the stimulus onset. For the analysis of N1, P2, and N2 amplitudes, this delay was corrected by shifting the trigger signal for onset to the first time point the corresponding stimulus reached 50% of its absolute maximal peak amplitude (<xref ref-type="fig" rid="F2">Figure 2</xref>). Mean values in time intervals from 80 to 180 ms, 180 to 330 ms, and from 370 to 570 ms were used as amplitude measures for N1, P2, and N2 (<xref ref-type="bibr" rid="B38">38</xref>), while latencies were quantified by the 50% area latency measure according to Liesefeld (<xref ref-type="bibr" rid="B64">64</xref>). With this approach, the baseline between two consecutive peaks is calculated by dividing the amplitude difference between these peaks into half. Latency of the later peak is determined by the time point that splits the area below (N1 and N2) or above (P2) this baseline into half. 
This procedure was also used to estimate the area under the curve.</p>
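<p>The 50% area latency measure can be sketched in code. This Python illustration simplifies the baseline to zero within the analysis window, whereas the procedure above derives it from the amplitude halfway between consecutive peaks; the function and parameter names are ours.</p>

```python
import numpy as np

def area_latency(t, x, lo, hi, polarity=-1):
    """50% area latency within the window [lo, hi] ms: the time point
    that splits the rectified deflection area in half (after Liesefeld).
    polarity=-1 for negative deflections (N1, N2), +1 for P2.
    Simplified: zero baseline instead of the inter-peak half-amplitude."""
    m = (t >= lo) & (t <= hi)
    w = np.clip(polarity * x[m], 0.0, None)  # rectified deflection
    csum = np.cumsum(w)
    if csum[-1] == 0:
        return np.nan  # no deflection of the requested polarity
    idx = np.searchsorted(csum, csum[-1] / 2.0)
    return t[m][idx]
```

<p>For a deflection that is symmetric about its peak, this measure returns the center of the deflection, which makes it more robust to noise than a simple peak-picking latency.</p>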
<p>Statistical analysis was performed with MATLAB&#x00027;s Statistics and Machine Learning Toolbox (R2018a) and custom scripts. Parametric tests were applied to normally distributed data; otherwise, non-parametric tests were used. Mean amplitudes, area latencies, and areas under the curve of N1, P2, and N2 responses for the categories &#x0201C;words&#x0201D;, &#x0201C;reversals&#x0201D;, and &#x0201C;all&#x0201D; were subjected to separate Dunnett&#x00027;s multiple comparison procedures to compare CI group results at T2, T3, and T4 with the NH group (<xref ref-type="bibr" rid="B65">65</xref>, <xref ref-type="bibr" rid="B66">66</xref>). Statistical significance of differences between &#x0201C;words&#x0201D; and &#x0201C;reversals&#x0201D; was explored with <italic>t</italic> or Wilcoxon tests. Values of <italic>p</italic> &#x0003C; 0.05 were considered statistically significant, while <italic>p</italic> &#x0003C; 0.1 was considered to indicate a trend.</p>
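<p>As a minimal illustration of the parametric branch of this comparison, a paired <italic>t</italic> statistic for a words-vs-reversals contrast can be computed as follows (Python, standard library only; the function name and example data are hypothetical, and non-normal data would instead be handled by a Wilcoxon test).</p>

```python
import math
from statistics import mean, stdev

def paired_t(words, reversals):
    """Paired t statistic over per-participant (words - reversals)
    differences; illustrative sketch of the parametric comparison."""
    d = [w - r for w, r in zip(words, reversals)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))
```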
</sec>
<sec>
<title>Source localization</title>
<p>Source localization analysis for the N2 interval was performed with the fieldtrip toolbox and the time domain-based eLORETA algorithm (<xref ref-type="bibr" rid="B67">67</xref>, <xref ref-type="bibr" rid="B68">68</xref>). The head model was the standard anatomical magnetic resonance imaging (MRI) data set known as &#x0201C;colin27&#x0201D; (<xref ref-type="bibr" rid="B69">69</xref>). Monte Carlo estimates were derived by a non-parametric randomization test (<italic>N</italic><sub>rand</sub> = 1,000, two-sided) performed with 5 mm lead field resolution on averaged absolute dipole moments. A false discovery rate (FDR) procedure was applied to correct for multiple comparisons.</p>
</sec>
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Behavioral Results</title>
<p>Data from 15 bimodal participants contributed to the final analysis (<xref ref-type="table" rid="T1">Table 1</xref>). Self-assessed improvements of bimodal hearing compared to pre-CI HA-assisted hearing recorded by the SSQ-B questionnaire were largest for speech comprehension (SSQ-B1: 1.42 &#x000B1; 1.08), lowest for the localization of sound sources (SSQ-B2: 0.91 &#x000B1; 0.86), and intermediate for sound quality (SSQ-B3: 1.19 &#x000B1; 1.53). All improvements attained statistical significance (SSQ-B1: <italic>t</italic> = 5.117, <italic>p</italic> &#x0003C; 0.0001; SSQ-B2: <italic>t</italic> = 4.061, <italic>p</italic> = 0.001; SSQ-B3: <italic>t</italic> = 3.023, <italic>p</italic> = 0.009).</p>
<p>Intelligibility in audiometric tests improved with bimodal provision (<xref ref-type="table" rid="T2">Table 2</xref>), as reported in Wallh&#x000E4;usser-Franke et al. (<xref ref-type="bibr" rid="B11">11</xref>) and Servais et al. (<xref ref-type="bibr" rid="B45">45</xref>). Statistically significant improvements were found for OlSa sentences presented both in quiet and in background noise (<xref ref-type="table" rid="T2">Table 2</xref>). Likewise, the SNR needed to correctly classify 70% of the monosyllabic words in the AERP experiment decreased significantly between T2 and T3 (<italic>T</italic> = 2.758, <italic>p</italic> = 0.001), from 15.87 &#x000B1; 6.90 to 10.07 &#x000B1; 5.51 dB (<xref ref-type="table" rid="T1">Table 1</xref>). For NH, the SNR was &#x02212;2.00 &#x000B1; 2.39 dB (<xref ref-type="table" rid="T1">Table 1</xref>).</p>
<p>With the T3 presentation level retained for T4, the percentage of word identification, as opposed to reversals, was &#x0007E;70% at T2 and T4, as planned, while the average success rate at T3 was 61% (<xref ref-type="table" rid="T1">Table 1</xref>). Note that the standard deviation was about twice as large at T3 as at T2 and T4, indicating increased variability with short bimodal experience. Moreover, SD was much lower in the NH group for all audiometric evaluations (<xref ref-type="table" rid="T1">Tables 1</xref>, <xref ref-type="table" rid="T2">2</xref>).</p>
</sec>
<sec>
<title>AERP</title>
<p>AERPs of the CI group were analyzed regarding changes with CI experience (from T2 to T4) and similarity to NH. The two obligatory evoked potentials N1 and P2 and the event-related N2 potential were analyzed separately, in terms of amplitude, latency, and area, for the categories &#x0201C;words&#x0201D;, &#x0201C;reversals&#x0201D;, and the combination of word and reversal stimuli (&#x0201C;all&#x0201D;). Statistical significance of differences was calculated using Dunnett&#x00027;s test and <italic>post hoc</italic> comparisons.</p>
<sec>
<title>N1 Response</title>
<p>N1 amplitude averaged across all stimuli did not differ significantly between groups or across the T2 to T4 assessments (<xref ref-type="fig" rid="F3">Figure 3A</xref>), which together with the behavioral results suggests that similar intelligibility across conditions had been achieved as planned. However, in NH, N1 amplitude toward words was significantly larger compared to reversals (<italic>t</italic> = &#x02212;3.159, <italic>p</italic> = 0.008), whereas this difference, which was largest at T4 (<italic>t</italic> = &#x02212;1.221, <italic>p</italic> = 0.242), did not attain statistical significance in CI listeners. In addition, N1 area depended significantly on stimulus category at T3 (<italic>t</italic> = &#x02212;2.719, <italic>p</italic> = 0.017) and in NH (<italic>t</italic> = &#x02212;4.180, <italic>p</italic> = 0.001), whereas a trend was evident at T4 (<italic>t</italic> = &#x02212;1.956, <italic>p</italic> = 0.071) (<xref ref-type="fig" rid="F3">Figures 3C&#x02013;E</xref>, <xref ref-type="fig" rid="F4">4A</xref>). Furthermore, the data revealed significant differences in N1 latency depending on group, assessment, and stimulus category. A significant main effect was found for N1 latencies in response to words (Dunnett&#x00027;s test: <italic>F</italic> = 5.550, <italic>p</italic> = 0.002), with significantly shorter latency at T2 compared to NH (CI: 119.53 &#x000B1; 15.543 ms; NH: 143.97 &#x000B1; 14.44 ms; <italic>p</italic> = 0.0005) (<xref ref-type="fig" rid="F4">Figure 4B</xref>). For the category &#x0201C;all&#x0201D;, <italic>post hoc</italic> testing revealed significantly shorter N1 latency at T2 compared to NH (<italic>p</italic> = 0.02), but the main effect only approached significance (Dunnett&#x00027;s test: <italic>F</italic> = 2.611, <italic>p</italic> = 0.061). 
Moreover, whereas no latency difference was seen for NH, N1 latencies were significantly shorter in response to words compared to reversals at T2 (<italic>t</italic> = &#x02212;3.493, <italic>p</italic> = 0.004) and T3 (<italic>t</italic> = &#x02212;2.201, <italic>p</italic> = 0.045), while a trend was evident at T4 (<italic>t</italic> = &#x02212;2.080, <italic>p</italic> = 0.056), but significance was lost for T3 after correction for multiple comparisons (<xref ref-type="fig" rid="F3">Figures 3B&#x02013;E</xref>, <xref ref-type="fig" rid="F4">4B</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>(A)</bold> Grand averages for all stimuli (&#x0201C;all&#x0201D;) and <bold>(B&#x02013;E)</bold> for the categories &#x0201C;words&#x0201D; and &#x0201C;reversals&#x0201D;. <bold>(A&#x02013;E)</bold> Time intervals with N1, P2, and N2 responses are shaded in different grays.</p></caption>
<graphic xlink:href="fneur-11-00161-g0003.tif"/>
</fig>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Quantitative AERP results: <bold>(A)</bold> area and <bold>(B)</bold> latency of the N1, <bold>(C)</bold> area and <bold>(D)</bold> latency of the P2, and <bold>(E)</bold> N2 amplitude for the categories &#x0201C;words&#x0201D;, &#x0201C;reversals&#x0201D;, and &#x0201C;all&#x0201D;. <bold>(A&#x02013;E)</bold> Means with their standard errors; significant differences between conditions are indicated (&#x0002A;<italic>p</italic> &#x0003C; 0.05 and trends <sup>&#x0002B;</sup><italic>p</italic> &#x0003C; 0.1).</p></caption>
<graphic xlink:href="fneur-11-00161-g0004.tif"/>
</fig>
<p>When comparing differences between responses toward words and reversals, several significant effects appeared. Differences were obtained by subtracting &#x0201C;reversals&#x0201D; latencies/areas from the corresponding &#x0201C;words&#x0201D; latencies/areas for each single participant and averaging these for the groups and assessments (<xref ref-type="fig" rid="F5">Figures 5A,B</xref>). Dunnett&#x00027;s test revealed a marginally significant main effect for N1 latency (<italic>F</italic> = 2.190, <italic>p</italic> = 0.0996), and <italic>post hoc</italic> testing showed significant differences between the CI and NH groups at T2 (<italic>p</italic> = 0.037) (<xref ref-type="fig" rid="F5">Figure 5A</xref>). From T2 to T4, the area differences of the CI group aligned with those of the NH group for the N1 response (<xref ref-type="fig" rid="F5">Figure 5B</xref>). However, Dunnett&#x00027;s tests revealed no significant main effect.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Area latencies <bold>(A)</bold> and areas <bold>(B,C)</bold> of the &#x0201C;reversals&#x0201D; category were subtracted subject-wise from the &#x0201C;words&#x0201D; category area latency and area results. Dunnett&#x00027;s test revealed significant differences between T2 and normal hearing (NH) for the N1 area latency difference between &#x0201C;words&#x0201D; and &#x0201C;reversals&#x0201D; <bold>(A)</bold>. <bold>(D)</bold> Grand averages of N2 amplitudes for the &#x0201C;words&#x0201D; and &#x0201C;reversals&#x0201D; categories of the NH group were subtracted from N2 mean amplitudes of individual CI users. Multiple <italic>t</italic> tests revealed significant differences from zero (Bonferroni corrected, &#x0002A;<italic>p</italic> &#x0003C; 0.0167 and trends <sup>&#x0002B;</sup><italic>p</italic> &#x0003C; 0.033). <bold>(A&#x02013;D)</bold> Mean values and their standard errors are shown.</p></caption>
<graphic xlink:href="fneur-11-00161-g0005.tif"/>
</fig>
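The subject-wise difference computation described above (per-participant "words" minus "reversals" values, then averaged per group and assessment) can be sketched as follows. This is a minimal illustration with hypothetical latency values, not the study's data or analysis code.

```python
import numpy as np

def condition_differences(words, reversals):
    """Subject-wise 'words' minus 'reversals' differences,
    plus the group mean and standard error of those differences."""
    words = np.asarray(words, dtype=float)
    reversals = np.asarray(reversals, dtype=float)
    diffs = words - reversals          # one difference per participant
    mean = diffs.mean()
    sem = diffs.std(ddof=1) / np.sqrt(len(diffs))
    return diffs, mean, sem

# Hypothetical N1 latencies (ms) for five participants
diffs, mean, sem = condition_differences([112, 118, 109, 121, 115],
                                         [120, 125, 118, 126, 122])
```

Computing the difference within each participant before averaging removes between-subject offsets, which is why the group comparison is run on these difference scores rather than on the raw latencies.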
</sec>
<sec>
<title>P2 Response</title>
<p>In the P2 interval, only responses to words at T2 and T3 had a positive peak, while peak responses at T4, in NH, and toward reversals were negative (<xref ref-type="fig" rid="F3">Figures 3B&#x02013;E</xref>). At T4, the P2 response to reversals was delayed in comparison to words (<italic>t</italic> = &#x02212;3.674, <italic>p</italic> = 0.003), and a trend for such a delay was present in NH (<italic>t</italic> = &#x02212;1.794, <italic>p</italic> = 0.096) (<xref ref-type="fig" rid="F4">Figure 4D</xref>). In addition, P2 areas were larger for responses to words than to reversals (T2: <italic>p</italic> = 0.049, <italic>t</italic> = 2.154; T3: <italic>p</italic> = 0.046, <italic>t</italic> = 2.188; NH: <italic>p</italic> = 0.023, <italic>t</italic> = 2.568), with a trend in the same direction at T4 (<italic>p</italic> = 0.054, <italic>t</italic> = 2.101) (<xref ref-type="fig" rid="F4">Figure 4C</xref>).</p>
<p>Area differences were computed as described above. From T2 to T4, the area differences of the CI group aligned with those of the NH group (<xref ref-type="fig" rid="F5">Figure 5C</xref>), and accordingly, Dunnett&#x00027;s test showed no significant main effect.</p>
</sec>
<sec>
<title>N2 Response</title>
<p>The most obvious differences between CI and NH listeners concerned the N2 deflection between 370 and 570 ms after stimulus onset. Whereas a prominent deflection was seen in the CI group at all assessments for both word and reversal stimuli, it was consistently absent in NH listeners. Therefore, responses to words and reversals were combined into the category &#x0201C;all&#x0201D;. N2 amplitudes were more negative in the CI group than in NH, and this difference reached significance as a main effect (Dunnett&#x00027;s test: <italic>F</italic> = 3.018, <italic>p</italic> = 0.037). <italic>Post hoc</italic> testing revealed significant differences from NH at all assessments (<xref ref-type="fig" rid="F4">Figure 4E</xref>).</p>
<p>Grand averages of N2 amplitudes for the &#x0201C;words&#x0201D; and &#x0201C;reversals&#x0201D; categories of the NH group were subtracted from the corresponding N2 mean amplitudes of individual CI users, and multiple <italic>t</italic> tests showed significant differences from zero for both categories at all assessments (<xref ref-type="fig" rid="F5">Figure 5D</xref>).</p>
</sec>
</sec>
<sec>
<title>Source Localization</title>
<p>Cortical source localization for the N2 interval was performed with the time domain-based eLORETA algorithm in the FieldTrip toolbox, computing a difference analysis between the CI group at T4 and the NH group. Since the AERP response did not show major differences between responses to words and reversals in either group, the analysis was conducted for the combined word and reversal stimuli (&#x0201C;all&#x0201D;). Increased activation in CI listeners was bilateral but more pronounced in the left hemisphere. The most extensive activation differences were seen in the frontal lobe (<xref ref-type="fig" rid="F6">Figure 6</xref>). Cortical regions with enhanced activation in the bimodal CI listeners localized to the inferior frontal gyrus (IFG), including Brodmann areas BA44, 45, 46, and 47, to the orbital gyrus (OrG), and to the middle frontal gyrus (MFG). In addition, extended areas of the superior frontal gyrus (SFG), comprising areas BA6, 8, 9, and 10, were more active in CI listeners. The focus of differential activation in SFG was more dorsal in the left than in the right hemisphere. Beyond that, enhanced activity in CI listeners was observed in the anterior insula and anterior basal ganglia of the left hemisphere, and bilaterally in the anterior cingulate cortex (ACC: BA24, 32). Finally, a small region in the left inferior temporal and fusiform gyri (ITG, FuG: BA37) of the temporal lobe showed increased activation. For a complete list of brain regions with enhanced activity in the N2 time window in CI listeners, see <xref ref-type="table" rid="T3">Table 3</xref>.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Spatial spread of enhanced cortical activation in CI listeners during the N2 interval. Auditory&#x02013;cognitive processing is prolonged in CI users in comparison to NH bilaterally in frontal areas [inferior frontal gyrus (IFG), middle frontal gyrus (MFG), superior frontal gyrus (SFG)] and in the anterior cingulate gyrus. In addition, in the left hemisphere, significantly enhanced activation is present in the inferior temporal gyrus (ITG), at the anterior pole of the superior temporal gyrus (STG), and in the rostral basal ganglia (BG). Activity differences between CI and NH listeners are more widespread in the left hemisphere. <bold>(A,B)</bold> Lateral views of the left and right hemispheres. <bold>(C,D)</bold> Medial views of the left and right hemispheres. For a complete list of CI listeners&#x00027; brain areas with significantly increased activation, see <xref ref-type="table" rid="T3">Table 3</xref>. Darker shades of the red color scale indicate decreasing <italic>p</italic> values (see scale).</p></caption>
<graphic xlink:href="fneur-11-00161-g0006.tif"/>
</fig>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Localization results.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Left hemisphere</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Right hemisphere</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>Frontal lobe</bold></th>
<th valign="top" align="center"><bold>Voxel in ROI</bold></th>
<th valign="top" align="center"><bold>% significant</bold></th>
<th valign="top" align="center"><bold>Voxel in ROI</bold></th>
<th valign="top" align="center"><bold>% significant</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, medial area BA8</td>
<td valign="top" align="center">6,770</td>
<td valign="top" align="center" style="background-color:#818181">98</td>
<td valign="top" align="center">5,961</td>
<td valign="top" align="center" style="background-color:#818181">99</td>
</tr>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, dorsolateral area BA8</td>
<td valign="top" align="center">5,700</td>
<td valign="top" align="center" style="background-color:#a6a8a7">55</td>
<td valign="top" align="center">7,048</td>
<td valign="top" align="center" style="background-color:#d9dad9">43</td>
</tr>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, lateral area BA9</td>
<td valign="top" align="center">7,025</td>
<td valign="top" align="center" style="background-color:&#x00023;F4F7F7">20</td>
<td valign="top" align="center">6,074</td>
<td valign="top" align="center">7</td>
</tr>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, dorsolateral area BA6</td>
<td valign="top" align="center">5,314</td>
<td valign="top" align="center" style="background-color:#818181">81</td>
<td valign="top" align="center">5,394</td>
<td valign="top" align="center" style="background-color:#d9dad9">25</td>
</tr>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, medial area BA6</td>
<td valign="top" align="center">5,970</td>
<td valign="top" align="center" style="background-color:#d9dad9">48</td>
<td valign="top" align="center">6,191</td>
<td valign="top" align="center" style="background-color:#d9dad9">41</td>
</tr>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, medial area BA9</td>
<td valign="top" align="center">6,895</td>
<td valign="top" align="center" style="background-color:#a6a8a7">59</td>
<td valign="top" align="center">5,589</td>
<td valign="top" align="center" style="background-color:#d9dad9">48</td>
</tr>
<tr>
<td valign="top" align="left">SFG, superior frontal gyrus, medial area BA10</td>
<td valign="top" align="center">7,535</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
<td valign="top" align="center">8,193</td>
<td valign="top" align="center" style="background-color:#818181">79</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, dorsal area BA9/46</td>
<td valign="top" align="center">8,040</td>
<td valign="top" align="center" style="background-color:&#x00023;F4F7F7">20</td>
<td valign="top" align="center">8,444</td>
<td valign="top" align="center" style="background-color:#d9dad9">42</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, inferior frontal junction</td>
<td valign="top" align="center">4,609</td>
<td valign="top" align="center" style="background-color:#818181">98</td>
<td valign="top" align="center">6,362</td>
<td valign="top" align="center" style="background-color:#a6a8a7">50</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, area BA46</td>
<td valign="top" align="center">8,347</td>
<td valign="top" align="center" style="background-color:#818181">83</td>
<td valign="top" align="center">6,299</td>
<td valign="top" align="center">7</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, ventral area BA9/46</td>
<td valign="top" align="center">7,361</td>
<td valign="top" align="center" style="background-color:#a6a8a7">67</td>
<td valign="top" align="center">8,140</td>
<td valign="top" align="center" style="background-color:#818181">92</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, ventrolateral area BA8</td>
<td valign="top" align="center">6,557</td>
<td valign="top" align="center" style="background-color:#a6a8a7">53</td>
<td valign="top" align="center">7,867</td>
<td valign="top" align="center" style="background-color:#a6a8a7">70</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, ventrolateral area BA6</td>
<td valign="top" align="center">4,982</td>
<td valign="top" align="center" style="background-color:#818181">94</td>
<td valign="top" align="center">5,010</td>
<td valign="top" align="center" style="background-color:#d9dad9">35</td>
</tr>
<tr>
<td valign="top" align="left">MFG, middle frontal gyrus, lateral area BA10</td>
<td valign="top" align="center">8,071</td>
<td valign="top" align="center" style="background-color:#818181">94</td>
<td valign="top" align="center">6,643</td>
<td valign="top" align="center" style="background-color:#d9dad9">46</td>
</tr>
<tr>
<td valign="top" align="left">IFG, inferior frontal gyrus, dorsal area BA44</td>
<td valign="top" align="center">2,804</td>
<td valign="top" align="center" style="background-color:#818181">92</td>
<td valign="top" align="center">2,590</td>
<td valign="top" align="center" style="background-color:#d9dad9">32</td>
</tr>
<tr>
<td valign="top" align="left">IFG, inferior frontal gyrus, inferior frontal sulcus</td>
<td valign="top" align="center">2,666</td>
<td valign="top" align="center" style="background-color:#a6a8a7">64</td>
<td valign="top" align="center">2,980</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
</tr>
<tr>
<td valign="top" align="left">IFG, inferior frontal gyrus, caudal area BA45</td>
<td valign="top" align="center">2,938</td>
<td valign="top" align="center" style="background-color:#a6a8a7">60</td>
<td valign="top" align="center">2,482</td>
<td valign="top" align="center" style="background-color:#d9dad9">41</td>
</tr>
<tr>
<td valign="top" align="left">IFG, inferior frontal gyrus, rostral area BA45</td>
<td valign="top" align="center">3,310</td>
<td valign="top" align="center" style="background-color:#818181">93</td>
<td valign="top" align="center">2,971</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
</tr>
<tr>
<td valign="top" align="left">IFG, inferior frontal gyrus, opercular area BA44</td>
<td valign="top" align="center">4,501</td>
<td valign="top" align="center" style="background-color:#818181">99</td>
<td valign="top" align="center">3,790</td>
<td valign="top" align="center" style="background-color:#a6a8a7">62</td>
</tr>
<tr>
<td valign="top" align="left">IFG, inferior frontal gyrus, ventral area BA44</td>
<td valign="top" align="center">2,305</td>
<td valign="top" align="center" style="background-color:#d9dad9">37</td>
<td valign="top" align="center">2,328</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">OrG, orbital gyrus, medial area BA14</td>
<td valign="top" align="center">5,044</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
<td valign="top" align="center">4,001</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
</tr>
<tr>
<td valign="top" align="left">OrG, orbital gyrus, orbital area BA12/47</td>
<td valign="top" align="center">3,726</td>
<td valign="top" align="center" style="background-color:#818181">94</td>
<td valign="top" align="center">3,920</td>
<td valign="top" align="center" style="background-color:#818181">90</td>
</tr>
<tr>
<td valign="top" align="left">OrG, orbital gyrus, lateral area BA11</td>
<td valign="top" align="center">9,471</td>
<td valign="top" align="center" style="background-color:#818181">96</td>
<td valign="top" align="center">7,518</td>
<td valign="top" align="center" style="background-color:#818181">94</td>
</tr>
<tr>
<td valign="top" align="left">OrG, orbital gyrus, medial area BA11</td>
<td valign="top" align="center">5,650</td>
<td valign="top" align="center" style="background-color:#818181">93</td>
<td valign="top" align="center">5,076</td>
<td valign="top" align="center" style="background-color:#818181">98</td>
</tr>
<tr>
<td valign="top" align="left">OrG, orbital gyrus, area BA13</td>
<td valign="top" align="center">6,243</td>
<td valign="top" align="center" style="background-color:#a6a8a7">74</td>
<td valign="top" align="center">7,364</td>
<td valign="top" align="center" style="background-color:#a6a8a7">56</td>
</tr>
<tr>
<td valign="top" align="left">OrG, orbital gyrus, lateral area BA12/47</td>
<td valign="top" align="center">4,059</td>
<td valign="top" align="center" style="background-color:#818181">97</td>
<td valign="top" align="center">4,714</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
</tr>
<tr>
<td valign="top" align="left">PrG, precentral gyrus, caudal ventrolateral area BA6</td>
<td valign="top" align="center">5,556</td>
<td valign="top" align="center" style="background-color:#a6a8a7">74</td>
<td valign="top" align="center">5,832</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left" colspan="5"><bold>Temporal lobe</bold></td>
</tr>
<tr>
<td valign="top" align="left">STG, superior temporal gyrus, medial area BA38</td>
<td valign="top" align="center">5,294</td>
<td valign="top" align="center" style="background-color:#d9dad9">46</td>
<td valign="top" align="center">5,731</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">STG, superior temporal gyrus TE1.0 and TE1.2</td>
<td valign="top" align="center">5,789</td>
<td valign="top" align="center">15</td>
<td valign="top" align="center">6,459</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">STG, superior temporal gyrus, lateral area BA38</td>
<td valign="top" align="center">5,167</td>
<td valign="top" align="center" style="background-color:#a6a8a7">50</td>
<td valign="top" align="center">3,988</td>
<td valign="top" align="center">7</td>
</tr>
<tr>
<td valign="top" align="left">ITG, inferior temporal gyrus, extreme lateroventral area BA37</td>
<td valign="top" align="center">1,773</td>
<td valign="top" align="center" style="background-color:#818181">82</td>
<td valign="top" align="center">2,514</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">ITG, inferior temporal gyrus, ventrolateral area BA37</td>
<td valign="top" align="center">2,683</td>
<td valign="top" align="center" style="background-color:#a6a8a7">59</td>
<td/>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">FuG, fusiform gyrus, medioventral area BA37</td>
<td valign="top" align="center">6,142</td>
<td valign="top" align="center" style="background-color:#a6a8a7">52</td>
<td valign="top" align="center">6,869</td>
<td valign="top" align="center">6</td>
</tr>
<tr>
<td valign="top" align="left">FuG, fusiform gyrus, lateroventral area BA37</td>
<td valign="top" align="center">6,989</td>
<td valign="top" align="center" style="background-color:#a6a8a7">74</td>
<td valign="top" align="center">7,926</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left" colspan="5"><bold>Occipital lobe</bold></td>
</tr>
<tr>
<td valign="top" align="left">MVOcC, medioventral occipital cortex, rostral lingual gyrus</td>
<td valign="top" align="center">6,954</td>
<td valign="top" align="center">4</td>
<td valign="top" align="center">5,975</td>
<td valign="top" align="center">18</td>
</tr>
<tr>
<td valign="top" align="left">LOcC, lateral occipital cortex, area V5/MT&#x0002B;</td>
<td valign="top" align="center">6,484</td>
<td valign="top" align="center" style="background-color:#d9dad9">27</td>
<td valign="top" align="center">5,931</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left" colspan="5"><bold>Insula</bold></td>
</tr>
<tr>
<td valign="top" align="left">INS, insular gyrus, ventral agranular insula</td>
<td valign="top" align="center">1,698</td>
<td valign="top" align="center" style="background-color:#818181">98</td>
<td valign="top" align="center">1,818</td>
<td valign="top" align="center">17</td>
</tr>
<tr>
<td valign="top" align="left">INS, insular gyrus, dorsal agranular insula</td>
<td valign="top" align="center">1,968</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
<td valign="top" align="center">2,109</td>
<td valign="top" align="center" style="background-color:#d9dad9">33</td>
</tr>
<tr>
<td valign="top" align="left">INS, insular gyrus, ventral dysgranular and granular insula</td>
<td valign="top" align="center">2,174</td>
<td valign="top" align="center">16</td>
<td valign="top" align="center">2,188</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">INS, insular gyrus, dorsal dysgranular insula</td>
<td valign="top" align="center">2,360</td>
<td valign="top" align="center" style="background-color:#a6a8a7">52</td>
<td valign="top" align="center">2,965</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left" colspan="5"><bold>Cingulate gyrus</bold></td>
</tr>
<tr>
<td valign="top" align="left">ACC, anterior cingulate gyrus, rostroventral area BA24</td>
<td valign="top" align="center">2,217</td>
<td valign="top" align="center" style="background-color:#818181">91</td>
<td valign="top" align="center">1,509</td>
<td valign="top" align="center" style="background-color:&#x00023;BFCCCC">73</td>
</tr>
<tr>
<td valign="top" align="left">ACC, anterior cingulate gyrus, pregenual area BA32</td>
<td valign="top" align="center">3,096</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
<td valign="top" align="center">3,979</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
</tr>
<tr>
<td valign="top" align="left">ACC, anterior cingulate gyrus, caudodorsal area BA24</td>
<td valign="top" align="center">2,088</td>
<td valign="top" align="center" style="background-color:#818181">99</td>
<td valign="top" align="center">3,044</td>
<td valign="top" align="center" style="background-color:#818181">92</td>
</tr>
<tr>
<td valign="top" align="left">ACC, anterior cingulate gyrus, subgenual area BA32</td>
<td valign="top" align="center">3,250</td>
<td valign="top" align="center" style="background-color:#818181">100</td>
<td valign="top" align="center">5,063</td>
<td valign="top" align="center" style="background-color:#818181">99</td>
</tr>
<tr>
<td valign="top" align="left" colspan="5"><bold>Basal ganglia</bold></td>
</tr>
<tr>
<td valign="top" align="left">BG, basal ganglia, ventral caudate</td>
<td valign="top" align="center">2,577</td>
<td valign="top" align="center" style="background-color:#a6a8a7">73</td>
<td valign="top" align="center">3,489</td>
<td valign="top" align="center">15</td>
</tr>
<tr>
<td valign="top" align="left">BG, basal ganglia, globus pallidus</td>
<td valign="top" align="center">2,558</td>
<td valign="top" align="center">16</td>
<td valign="top" align="center">2,571</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
<tr>
<td valign="top" align="left">BG, basal ganglia, nucleus accumbens</td>
<td valign="top" align="center">3,161</td>
<td valign="top" align="center" style="background-color:#d9dad9">30</td>
<td valign="top" align="center">2,599</td>
<td valign="top" align="center">2</td>
</tr>
<tr>
<td valign="top" align="left">BG, basal ganglia, ventromedial putamen</td>
<td valign="top" align="center">2,073</td>
<td valign="top" align="center" style="background-color:#a6a8a7">54</td>
<td valign="top" align="center">2,682</td>
<td valign="top" align="center">1</td>
</tr>
<tr>
<td valign="top" align="left">BG, basal ganglia, dorsal caudate</td>
<td valign="top" align="center">5,314</td>
<td valign="top" align="center" style="background-color:#a6a8a7">51</td>
<td valign="top" align="center">4,090</td>
<td valign="top" align="center">1</td>
</tr>
<tr>
<td valign="top" align="left">BG, basal ganglia, dorsolateral putamen</td>
<td valign="top" align="center">3,541</td>
<td valign="top" align="center">16</td>
<td valign="top" align="center">3,495</td>
<td valign="top" align="center">&#x02013;</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>List of brain areas with significantly increased activation in the N2 time interval from 370 to 570 ms in CI listeners relative to NH at T4. Only regions in which at least 100 voxels showed significantly increased activity are included. The percentage of voxels with significantly increased activity (% significant) in each region is shown separately for the left and right hemispheres. Dark gray shading indicates significantly increased activity in at least 75% of the voxels, white shading indicates increased activation in less than 25% of the voxels, and the shades in between represent the categories 50&#x02013;74% and 25&#x02013;49%. Note that the spatial extent of increased activity is larger in the left hemisphere. Where available, Brodmann areas (BA) are indicated</italic>.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>Speech Perception and Brain&#x02013;Behavior Correlations</title>
<p>Although speech perception improved with bimodal hearing, and this improvement attained statistical significance for three of the five tested conditions (<xref ref-type="table" rid="T2">Table 2</xref>), speech perception remained worse than in NH after 6 months of bimodal hearing. In quiet, average comprehension in the monosyllabic FBE test was 30% lower. Also, at T4, CI listeners required a 20 dB higher sound pressure level to understand 50% of the OlSa sentences presented in quiet. With noise presented from the same source (S0N0), the CI listeners&#x00027; SNR for 50% comprehension at T4 was 6 dB higher with bimodal hearing. This difference increased to 12 dB for lateral noise because, in contrast to NH, CI listeners did not benefit from spatial release from masking. Despite the small sample size, the large variability in audiometric performance among the bimodal participants allowed us to examine brain&#x02013;behavior correlations. Correlation analyses were performed between the results of the FBE and OlSa tests and all AERP measures at T3 and T4. As variability was low in the NH group, correlations were not computed for this group.</p>
<p>Most of the significant correlations between the OlSa tests and AERP characteristics were seen at T3. These included the latency of N1 for reversals (S0: <italic>r</italic> = 0.518, <italic>p</italic> = 0.048; S0NCI: <italic>r</italic> = 0.564, <italic>p</italic> = 0.028), the latency of P2 for words (S0N0: <italic>r</italic> = 0.728, <italic>p</italic> = 0.002; S0NCI: <italic>r</italic> = 0.644, <italic>p</italic> = 0.007; S0NHA: <italic>r</italic> = 0.600, <italic>p</italic> = 0.018), and the latency of N2 in response to words (S0NHA: <italic>r</italic> = 0.711, <italic>p</italic> = 0.003). In addition, a significant correlation existed between the N1 area for words and S0N0 (<italic>r</italic> = 0.537, <italic>p</italic> = 0.039). At T4, the only correlation with <italic>p</italic> &#x0003C; 0.05 was found between the OlSa test with noise presented to the CI ear (S0NCI) and the N2 latency in response to words (<italic>r</italic> = 0.529, <italic>p</italic> = 0.042). However, given the large number of correlations computed, none of these comparisons would survive a Bonferroni correction.</p>
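The multiple-comparison caveat above can be illustrated with a short sketch: each additional test shrinks the Bonferroni-adjusted alpha, so a nominally significant correlation (e.g., p of about 0.04) fails the corrected threshold. The data and measure pairs below are hypothetical.

```python
import numpy as np
from scipy import stats

def bonferroni_screen(pairs, alpha=0.05):
    """Pearson correlation for each (x, y) pair of measures, flagging
    which p values survive a Bonferroni correction for the number of tests."""
    adj_alpha = alpha / len(pairs)
    results = []
    for x, y in pairs:
        r, p = stats.pearsonr(x, y)
        results.append((r, p, p < adj_alpha))
    return adj_alpha, results

# Two hypothetical behavior/AERP measure pairs
pairs = [
    ([1, 2, 3, 4, 5, 6], [2, 4, 6, 8, 10, 12]),   # perfectly correlated
    ([1, 2, 3, 4, 5, 6], [2, 1, 4, 3, 6, 5]),     # r ~ 0.83, nominal p ~ 0.04
]
adj_alpha, results = bonferroni_screen(pairs)
# Even with only two tests, the corrected threshold drops to 0.025,
# so the second, nominally significant correlation no longer passes.
```

With the dozens of test-by-measure combinations examined in the study, the corrected threshold would be far stricter still, which is why none of the reported correlations survives the correction.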
<p>Bivariate correlations of N2 amplitude at T4, and of the change in N2 amplitude between T2 and T4, with self-perceived improvement in everyday auditory communication (SSQ-B1&#x02013;3) did not reach statistical significance. However, there was a trend toward a moderate negative correlation between N2 amplitude at T4 and the improvement in speech comprehension (SSQ-B1: <italic>r</italic> = &#x02212;0.471, <italic>p</italic> = 0.076) and localization (SSQ-B2: <italic>r</italic> = &#x02212;0.494, <italic>p</italic> = 0.061) reported at the T4 assessment.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The study objective was to characterize the temporal dynamics of speech processing in bimodal CI users, to explore whether they change during the first months of CI experience, and whether they approximate the characteristics seen in NH. Moreover, it was of interest to explore at which stage of processing differences emerge depending on the familiarity of the stimuli, and whether this differs between bimodal and NH listeners. The assumption was that neural efficiency, indicated by earlier classification of the stimuli together with shorter and more spatially focused neural activation, increases with CI experience. The task required monosyllabic word/non-word classification. Intelligibility was impeded by adding speech-modulated noise on the non-CI side, and the loudness of the stimuli was adjusted individually to achieve similar intelligibility across groups and assessments. To control for age-related changes in central processing, NH listeners were age-matched to individual CI users. The presence of the AERP components N1 and P2 at all assessments and under all listening conditions indicates that sound reached the auditory cortex of our hearing-impaired participants, suggesting successful amplification and functional integrity of the central auditory structures and pathways. This is in line with literature reporting sensory components with morphologies similar to those in NH in response to acoustic stimulation, even after extended periods of auditory deprivation (<xref ref-type="bibr" rid="B23">23</xref>, <xref ref-type="bibr" rid="B24">24</xref>, <xref ref-type="bibr" rid="B26">26</xref>, <xref ref-type="bibr" rid="B70">70</xref>).</p>
<p>Bimodal listeners showed the following developments between T2 and T4 and relative to NH: (1) No difference in N1 amplitude between stimulus types at T2, with a difference developing with bimodal experience, although to a lesser degree than in NH. In addition, N1 latencies in response to words were shorter than to reversed words (&#x0201C;reversals&#x0201D;) at T2, while no difference existed for NH; this latency difference in CI users diminished by T4. (2) An increase in the P2 amplitude in response to words between T2 and T3, followed by a reduction until T4, together with the development of a stimulus-dependent latency difference similar to that in NH. (3) A sustained N2 deflection irrespective of stimulus type, which did not diminish with bimodal experience and which was absent in NH. (4) Enhanced activity at T4 during the N2 interval, localized to extended areas of the frontal and prefrontal cortex, all of which have been implicated in speech processing.</p>
<sec>
<title>Importance of Longitudinal Studies</title>
<p>Longitudinal AERP studies following CI provision are important for a better understanding of the magnitude and time course of potential reorganization in auditory and speech-relevant brain systems. The insights obtained shed light on the prospects of auditory rehabilitation and on how to make the best possible use of them. A related reason for repeated measurements is the observed heterogeneity among hearing-impaired individuals in the etiology and time course of hearing impairment, the associated deficits, and CI outcome. To date, only a few studies have investigated changes in sensory processing associated with CI experience (<xref ref-type="bibr" rid="B26">26</xref>, <xref ref-type="bibr" rid="B70">70</xref>&#x02013;<xref ref-type="bibr" rid="B72">72</xref>), while longitudinal studies of later potentials are missing altogether.</p>
<p>Longitudinal observations exist for the N1, but they allow only limited comparison with our findings because of differing stimulus types, task requirements, and listening conditions. Only one study (<xref ref-type="bibr" rid="B71">71</xref>) also used sound field acoustic presentation and binaural listening conditions, although its participants suffered from single-sided deafness (SSD). Legris et al. (<xref ref-type="bibr" rid="B71">71</xref>) probed binaural hearing before and up to 12 months post-implantation with stimuli presented at a constant sound pressure level at all assessments. An increase in N1 amplitude was seen, although it was not statistically significant and occurred only for CIs implanted on the left side. In contrast, Sandmann et al. (<xref ref-type="bibr" rid="B26">26</xref>) and Purdy and Kelly (<xref ref-type="bibr" rid="B72">72</xref>) adjusted loudness individually and investigated monaural perception via the CI ear. Whereas N1 amplitude and latency in response to pure tones did not change significantly within the first 9 months of CI use (<xref ref-type="bibr" rid="B72">72</xref>), a significant reduction of N1 latency, together with a significant increase in N1 amplitude, was found in response to complex tones within 4 months of the implant being switched on (<xref ref-type="bibr" rid="B26">26</xref>). Finally, because their two participants used a magnet-free CI, Pantev et al. (<xref ref-type="bibr" rid="B70">70</xref>) were able to perform repeated MEG recordings during the first 2 years following implantation. Sounds were passed directly to the speech processor of the CI, and loudness was set to a comfortably loud level, which was apparently kept constant across all measurements. The N1m and P2m amplitudes of the two CI users increased with CI experience. Hence, the results of those studies do not contradict the present findings, but methodological differences preclude a direct comparison.</p>
</sec>
<sec>
<title>Adaptation to Bimodal Hearing</title>
<sec>
<title>Early Auditory-Evoked Potentials N1&#x02013;P2</title>
<p>N1 amplitude and latency after CI provision, averaged over the combined responses to words and reversed words (&#x0201C;all&#x0201D;), did not differ between the CI and NH groups, or between the T2 to T4 assessments in the CI group. This suggests similar audibility across groups and assessments, although only after substantial adjustments to the SNR. This finding is in line with a recent study reporting similar N1 amplitudes and latencies across NH and hearing-impaired listeners (<xref ref-type="bibr" rid="B73">73</xref>), and likewise between the CI and NH ears of SSD participants for words presented in background noise (<xref ref-type="bibr" rid="B23">23</xref>), when the sensation level was adjusted to achieve similar audibility. A closer look at our NH data revealed significant differences in N1 amplitude between responses to the familiar sounds of words and their unfamiliar reversals, with larger N1 amplitudes for words. While a stimulus-dependent difference in N1 amplitude was absent in the CI group at T2, i.e., with acoustic amplification only and the worst hearing ability, the responses of the bimodal listeners approximated the difference seen in NH by T4.</p>
<p>In addition, whereas N1 latencies in NH did not differ between stimulus categories, N1 latencies in the CI group were significantly shorter for words than for reversals, although this difference diminished with CI experience. It is known that focusing attention on stimuli that are behaviorally relevant, e.g., requiring a response such as a button press, influences the N1 response (<xref ref-type="bibr" rid="B74">74</xref>). A shorter latency of the magnetic field response M100 to an attended auditory stimulus, compared to the unattended condition, was observed in NH, although this difference failed to reach statistical significance (<xref ref-type="bibr" rid="B75">75</xref>). Also, the processing of degraded speech was shown to depend critically on attention (<xref ref-type="bibr" rid="B76">76</xref>). In addition, a previous study (<xref ref-type="bibr" rid="B77">77</xref>) found that N1 latencies in response to stimuli with different voice onset times were longest in good CI performers, while they were shorter in poor performers and in NH. Thus, shorter N1 latency does not necessarily indicate better sensory processing in CI users.</p>
<p>Further relevant findings regarding sensory processing pertained to P2 amplitude and latency. The most positive peak in the P2 interval reached a positive value only during the pre-implantation T2 assessment, i.e., with worst hearing, while it remained negative for bimodal hearing and in NH. This is in line with the assumption that P2 amplitude is larger in the hearing impaired, provided the task can be accomplished. Others reported a larger P2 amplitude in moderately hearing-impaired listeners than in NH, which was in line with previous studies cited therein and was interpreted as an indication of effortful listening (<xref ref-type="bibr" rid="B78">78</xref>). Furthermore, the auditory P2m response to intelligible speech is stronger than that to unintelligible speech (<xref ref-type="bibr" rid="B79">79</xref>). In contrast, a study comparing monaural electric listening in bilaterally hearing-impaired individuals to monaural performance in NH (<xref ref-type="bibr" rid="B23">23</xref>) reported significantly larger P2 areas in NH listeners in response to target words in a word classification task. Thus, P2 amplitude may be influenced by several brain processes, or several components may superimpose, leading to divergent results.</p>
<p>Negativity of the P2 response in the current study is interpreted as an indication that it may be overlapped by a contingent negative variation (CNV) potential, a negative deflection commencing in this time window, which is present if participants prepare for an action in response to a signal (<xref ref-type="bibr" rid="B80">80</xref>). Note that participants were required to press a button if the stimulus was classified as a word, but only after an alarm signal, which sounded 1,000 ms after each stimulus. The delayed motor response was necessary to keep the participants alert during the recording, to control for intelligibility, and to avoid interference of auditory and motor responses. A CNV can be expected in this setting, although such a superposition was not reported in a previous investigation that used a similar delay of the motor response to an auditory stimulus in a group of CI listeners (<xref ref-type="bibr" rid="B56">56</xref>). Alternatively or additionally, the P2 potential may be overlapped by an early onset auditory evoked negativity, which, supposedly, reflects acoustic&#x02013;phonological word processing and is observed as early as 150 ms over parietal sites (<xref ref-type="bibr" rid="B19">19</xref>). Thus, it appears that only a large P2 peak may show as a positive deflection, while a negative P2 may be due to lower P2 amplitude, a larger CNV, or an acoustic&#x02013;phonological negativity. This ambiguity cannot be resolved in the present results.</p>
<p>The second finding in this time window concerned P2 latencies, which were longer in response to reversed words than to words. This latency difference was significant for CI users at T4. Additionally, a trend toward longer latencies for reversals existed in NH listeners, suggesting that better hearing is associated with faster processing of the familiar sounds of words compared to the unfamiliar reversals. The reversed monosyllabic words used in the present study differed clearly from regular words in that they mostly contained sound combinations that are not present in the participants&#x00027; mother tongue. Experience with one&#x00027;s own language has been shown to support more efficient processing of phonemes that belong to the native language (<xref ref-type="bibr" rid="B30">30</xref>). Latency differences related to stimulus familiarity have been reported in phoneme and word detection tasks (<xref ref-type="bibr" rid="B29">29</xref>, <xref ref-type="bibr" rid="B81">81</xref>) and, in addition, depend on CI performance (<xref ref-type="bibr" rid="B28">28</xref>). Importantly, at T3, with little bimodal experience, P2 latency to words correlated significantly with sentence understanding in the presence of noise (S0N0, S0NCI, S0NHA), with shorter P2 latencies being associated with better intelligibility. Similarly, data by Han et al. (<xref ref-type="bibr" rid="B77">77</xref>), who investigated N1&#x02013;P2 amplitude and latency changes depending on voice onset time, suggested that the P2 response is a more sensitive index of speech perception ability in CI users than the N1 potential. Thus, decreased P2 amplitude and shorter P2 latency to familiar sounds may be associated with better hearing, whereas a stronger response may be a marker of inefficient encoding.</p>
<p>Taken together, findings in this early time window suggest that differences in the processing of speech-relevant auditory stimuli between bimodal and NH listeners already start at the subcortical level. In support, more efficient processing of elements of one&#x00027;s native tongue has been evidenced physiologically as early as the brainstem level (<xref ref-type="bibr" rid="B82">82</xref>, <xref ref-type="bibr" rid="B83">83</xref>), and Cheng et al. (<xref ref-type="bibr" rid="B84">84</xref>) interpret this as an indication that long-term lexical knowledge exerts its effect via sub-lexical processing. The present findings therefore indicate that efficient processing of familiar speech elements may be weakened by prolonged hearing impairment, despite pre-implantation acoustic amplification. Consequently, the approximation of the response in bimodal listeners to the N1&#x02013;P2 morphology in NH suggests that the processing of speech elements regains efficacy within the first months of CI use.</p>
</sec>
<sec>
<title>Lexical&#x02013;Semantic Processing: Late Event-Related Negativity</title>
<p>In bimodal CI listeners, a prominent negative deflection was present between 370 and 570 ms after stimulus onset irrespective of stimulus type, while it was absent in NH. This response did not approximate the NH response over the course of the study.</p>
<p>Bimodal CI users report increased effort when listening in noise. Understanding requires more time and is improved if context is known. In their extended Ease of Language Understanding model, R&#x000F6;nnberg et al. (<xref ref-type="bibr" rid="B57">57</xref>) postulate that whereas speech is largely processed automatically in NH and in favorable listening situations, top&#x02013;down processing takes on a larger role in challenging listening conditions, such as bimodal hearing in background noise. It has been reasoned that the perceptual organization of acoustic cues takes place subsequent to the obligatory N1&#x02013;P2 response (<xref ref-type="bibr" rid="B20">20</xref>). Further, an MEG study showed differences between responses to acoustic monosyllabic words and pseudowords occurring around 350 ms after stimulus onset (<xref ref-type="bibr" rid="B85">85</xref>). Categorical perception of speech appears to rely heavily on top&#x02013;down processes (<xref ref-type="bibr" rid="B86">86</xref>), and many aspects of cognitive control manifest in event-related negativities, typically recorded when the task requires active participation (<xref ref-type="bibr" rid="B34">34</xref>). Because we assumed extended top&#x02013;down cognitive processing to compensate for the distorted auditory signals, prolonged negativity in the AERP trace in the time window following the N1&#x02013;P2 response was expected. Our results are in line with this assumption. The bimodal CI users show a prominent N2 irrespective of stimulus category, and neither the amplitude nor the duration of this response decreased with CI experience over the study interval. In contrast, negativity in this time window was absent in NH listeners, again irrespective of stimulus category. This finding suggests prolonged processing of auditory stimuli by CI users when matching with the mental lexicon is required. This is in line with previous reports evidencing prolonged negativity in this time window for listening with the CI ear (<xref ref-type="bibr" rid="B23">23</xref>, <xref ref-type="bibr" rid="B24">24</xref>). Existing literature further shows stronger adaptation of late AERPs toward the activation pattern seen in NH for good CI performers (<xref ref-type="bibr" rid="B4">4</xref>). The absence of this late negativity in NH listeners in the current study may be due to the less demanding word categorization task and to the binaural listening condition.</p>
<p>The late negative-going N2 deflection observed in the current results is largely similar to the N400. In general, the N400 response is elicited by meaningful stimuli, including isolated words, pronounceable non-words, or pseudowords (<xref ref-type="bibr" rid="B31">31</xref>, <xref ref-type="bibr" rid="B35">35</xref>), and any factor that facilitates lexical access reduces its amplitude (<xref ref-type="bibr" rid="B31">31</xref>). In keeping with this, the N400 is larger for meaningless pseudowords than for matched common words (<xref ref-type="bibr" rid="B87">87</xref>), and, as shown in MEG recordings (<xref ref-type="bibr" rid="B79">79</xref>), increased intelligibility reduces it. Finally, Finke et al. (<xref ref-type="bibr" rid="B24">24</xref>) related prolonged N2 negativity to subjective listening effort, lower behavioral performance, and prolonged reaction times. In agreement with this literature, we interpret the prolonged N2 activity in our bimodal CI users as an indication of effortful and attentive processing of speech, suggesting slower lexical access or increased uncertainty in lexical matching, which did not resolve within the first 6 months of CI use.</p>
<p>During acclimatization, CI listeners must adapt to a new set of acoustic&#x02013;phonetic cues and relate them to their mental lexicon. Words are identified on the basis of lexical neighborhood, i.e., the confusability of individual phonemes and the relations of the stimulus word to other words that are phonetically similar (<xref ref-type="bibr" rid="B88">88</xref>). It is assumed that NH listeners encode acoustic cues accurately and compare them to a discrete boundary to obtain sharp categories (<xref ref-type="bibr" rid="B13">13</xref>). A study in CI users (<xref ref-type="bibr" rid="B89">89</xref>) suggests that categories are less discrete and more overlapping, but may sharpen with experience. When hearing spoken words, NH listeners rapidly activate multiple candidates that match the input, and as more information on the correct word accumulates, competitors are rejected. In contrast, eye-tracking experiments have shown that CI users, who experience greater uncertainty during the processing of spoken language, delay their commitment to lexical items (<xref ref-type="bibr" rid="B90">90</xref>).</p>
<p>Taken together, the present AERP results suggest that the processing of speech information by CI users is prolonged and possibly requires more cognitive resources to achieve behavioral intelligibility similar to that of NH listeners. Moreover, the results suggest that while early sensory processing approximates the situation in NH, later lexically related processing does not approximate the NH response during the first months of CI use. It remains to be seen whether this late negative response is a correlate of listening effort that reduces with additional CI experience, or a correlate of a fundamentally different processing strategy adopted by CI listeners.</p>
</sec>
</sec>
<sec>
<title>Extended Spatial Activation With Bimodal Hearing</title>
<p>Since the AERP response in the N2 time window differed between groups but not between stimulus categories, activation contrasts were calculated by subtracting the activity in response to all stimuli in NH listeners from that in CI listeners at T4. With this approach, activity in brain areas that are active to the same extent in both groups is subtracted out. Several brain areas exhibited increased activation in the bimodal listeners. Differences were mostly bilateral, although more pronounced in the left hemisphere. Increased activation was present in extended regions of the IFG, including the opercular and triangular parts or Broca&#x00027;s region, and in the MFG in the medial as well as in the SFG in the dorsolateral frontal lobe. Beyond that, the ACC in the medial frontal cortex, the left insula, the left basal ganglia, and a circumscribed area in the left caudo-ventral portion of the ITG all exhibited increased activation in CI listeners. All regions with increased activation in the CI group have previously been shown to be involved in speech processing (<xref ref-type="bibr" rid="B4">4</xref>, <xref ref-type="bibr" rid="B16">16</xref>&#x02013;<xref ref-type="bibr" rid="B18">18</xref>, <xref ref-type="bibr" rid="B31">31</xref>, <xref ref-type="bibr" rid="B76">76</xref>, <xref ref-type="bibr" rid="B86">86</xref>, <xref ref-type="bibr" rid="B91">91</xref>&#x02013;<xref ref-type="bibr" rid="B93">93</xref>).</p>
<p>BA44 and 45 in the left IFG are regarded as the core Broca areas (<xref ref-type="bibr" rid="B17">17</xref>, <xref ref-type="bibr" rid="B92">92</xref>). The IFG contributes to processes involved in accessing and combining word meanings, particularly in demanding contexts (<xref ref-type="bibr" rid="B16">16</xref>), and activity in this region is consistently affected by the contextual semantic fit (<xref ref-type="bibr" rid="B31">31</xref>). IFG responses are elevated for distorted-yet-intelligible speech compared to both clear speech and unintelligible noise, while the IFG is inactive during effortless comprehension (<xref ref-type="bibr" rid="B94">94</xref>). The elevated response to distorted speech in the left IFG was insensitive to the form of distortion, indicating supra-auditory compensatory processes (<xref ref-type="bibr" rid="B93">93</xref>). Several studies suggest a functional partition of the IFG, with an anterior part driving controlled retrieval based on context, a posterior part selecting between representations (<xref ref-type="bibr" rid="B31">31</xref>), and a dorsal part being active during effortful auditory search processes (<xref ref-type="bibr" rid="B95">95</xref>). In addition to the IFG, older adults rely on MFG and BA6 activation, which also correlates with comprehension (<xref ref-type="bibr" rid="B15">15</xref>, <xref ref-type="bibr" rid="B39">39</xref>).</p>
<p>The distribution of increased SFG activity in CI participants differed between hemispheres, with increased activation in BA6, 8, 9, and 10 on the left and in BA8, 9, and 10 on the right. BA6 is a pre-motor area connected to Broca&#x00027;s area (<xref ref-type="bibr" rid="B92">92</xref>), the anteriorly adjacent BA8 is involved in the management of uncertainty (<xref ref-type="bibr" rid="B96">96</xref>), and BA9 participates in a number of complex language processes (<xref ref-type="bibr" rid="B92">92</xref>), while BA10 is implicated in memory recall and executive functions, as well as in language processing that lacks automaticity (<xref ref-type="bibr" rid="B97">97</xref>). In the left hemisphere, a connection exists between the Broca region and the SFG (<xref ref-type="bibr" rid="B98">98</xref>), and lesions to the left lateral prefrontal cortex impaired decision threshold adjustment for lexical selection (<xref ref-type="bibr" rid="B99">99</xref>). In combination with the left IFG, the SFG has been shown to be involved in word selection (<xref ref-type="bibr" rid="B100">100</xref>) and in conceptually driven word retrieval (<xref ref-type="bibr" rid="B101">101</xref>). Moreover, increased predictability was associated with activation in medial and left lateral prefrontal cortices (<xref ref-type="bibr" rid="B94">94</xref>). Beyond that, the left dorsolateral prefrontal cortex is associated with task switching and, together with the anterior insula/frontal operculum and ACC, forms part of the cortical attention systems (<xref ref-type="bibr" rid="B102">102</xref>).</p>
<p>Left BA37 in ITG has been implicated in categorical perception of speech (<xref ref-type="bibr" rid="B86">86</xref>), and dysfunction of this area leads to word-finding difficulties (<xref ref-type="bibr" rid="B92">92</xref>).</p>
<p>The insula is another core region of the language system, related to both language understanding and production (<xref ref-type="bibr" rid="B92">92</xref>). The processing of degraded speech is associated with higher activation of the left anterior insula (<xref ref-type="bibr" rid="B39">39</xref>), and together with Broca&#x00027;s area, the anterior insula was shown to be involved in verbal rehearsal (<xref ref-type="bibr" rid="B92">92</xref>, <xref ref-type="bibr" rid="B103">103</xref>). The ACC, in turn, is highly connected with the auditory and frontal cortices and the insula (<xref ref-type="bibr" rid="B104">104</xref>), and older adults with impaired hearing exhibited higher ACC activity (<xref ref-type="bibr" rid="B39">39</xref>). Moreover, AERP measurements with eLORETA source localization indicated greater ACC and MFG activation in the N2 interval during visual presentation of non-words whose similarity to word representations caused increased conflict (<xref ref-type="bibr" rid="B105">105</xref>). The ACC and insula are also key nodes of the attention and salience networks (<xref ref-type="bibr" rid="B102">102</xref>, <xref ref-type="bibr" rid="B106">106</xref>), and there is evidence for decreased usage of the attentional network in association with successful performance (<xref ref-type="bibr" rid="B107">107</xref>). Whereas processing of degraded speech is associated with higher activation of the left anterior insula, older adults with impaired hearing exhibited higher ACC activity independent of task difficulty, consistent with a persistent upregulation of cognitive control (<xref ref-type="bibr" rid="B39">39</xref>, <xref ref-type="bibr" rid="B94">94</xref>, <xref ref-type="bibr" rid="B108">108</xref>). Thus, activation of the anterior insula and ACC is interpreted as another indicator of compensation for degraded auditory input.</p>
<p>In the current study, the two groups under investigation differed with regard to their hearing, but the experimental conditions were chosen to allow the same intelligibility for all. Therefore, the findings are interpreted in the sense that, despite hearing provision and supra-threshold stimulation, more brain resources are required in CI users to achieve the same intelligibility. As extended brain activation has been associated with increased listening effort (<xref ref-type="bibr" rid="B24">24</xref>, <xref ref-type="bibr" rid="B39">39</xref>), the results suggest that speech understanding remains more effortful for the bimodal CI users despite intensive auditory training. Similarly, increased frontal activation suggests successful compensation for the reduced sensory input in CI users, as similar performance is achieved despite better (NH) or worse (CI) hearing. Such effects are in accord with the <italic>decline&#x02013;compensation hypothesis</italic> (<xref ref-type="bibr" rid="B42">42</xref>, <xref ref-type="bibr" rid="B43">43</xref>), which postulates a decline in sensory processing and cognitive abilities during aging, accompanied by increased recruitment of more general cognitive areas in the frontal cortex as a means of compensation.</p>
</sec>
<sec>
<title>Bilateral Activation</title>
<p>While increased activation in CI users during the word/non-word classification task was left-lateralized for the insula and ITG, activation differences in the frontal lobe were mostly bilateral, despite the right-handedness of all participants. This may be due to one or more of the following reasons. First, source localization based on AERP recordings is not as precise as localization with other imaging techniques, and, paradoxically, activation of the contralateral hemisphere has been attributed to this circumstance (<xref ref-type="bibr" rid="B31">31</xref>). Second, although language is clearly left-lateralized in right-handed individuals, several aspects associated with speech activate the right hemisphere in a number of tasks (<xref ref-type="bibr" rid="B101">101</xref>). Third, the areas with increased activation in the CI group are not those concerned with primary phonological analysis but are rather of a domain-general nature (<xref ref-type="bibr" rid="B16">16</xref>, <xref ref-type="bibr" rid="B31">31</xref>). Finally, a loss of lateralization has been observed as a compensatory mechanism associated with sensory and cognitive decline (<xref ref-type="bibr" rid="B42">42</xref>, <xref ref-type="bibr" rid="B43">43</xref>).</p>
</sec>
<sec>
<title>Limitations</title>
<p>A potential limitation, but also an advantage, of our study is that all of our CI users received the same CI provision in terms of both implant and speech processor model.</p>
<p>Complex speech signals, but also relatively simple phonemes, evoke multiple overlapping neural response patterns, which differ between speech tokens and phonemes [e.g., (<xref ref-type="bibr" rid="B22">22</xref>, <xref ref-type="bibr" rid="B109">109</xref>)]. We chose to use a large set of monosyllabic words and their reversals to avoid habituation and to create a more naturalistic situation, and we could show that this approach succeeds in producing several separable potentials in NH and CI listeners. In support of our study design, the present findings are consistent with those of several other studies investigating speech perception in NH and CI listeners using natural speech tokens (<xref ref-type="bibr" rid="B23">23</xref>, <xref ref-type="bibr" rid="B24">24</xref>, <xref ref-type="bibr" rid="B105">105</xref>).</p>
<p>EEG data offer high temporal resolution, which is mandatory for describing the evolution of the brain&#x00027;s response to speech stimuli. They are also remarkably stable within an individual over time (<xref ref-type="bibr" rid="B22">22</xref>), which justifies assessing changes in the response following CI provision. However, because of the inverse problem and the need to employ source localization techniques, the underlying sources cannot be localized unambiguously. Therefore, localization data should be interpreted with caution (<xref ref-type="bibr" rid="B31">31</xref>, <xref ref-type="bibr" rid="B38">38</xref>).</p>
<p>Finally, as in other studies investigating speech perception in CI users with AERPs, the sample size is small and the etiology of hearing loss is heterogeneous. Therefore, our results cannot be generalized to all bimodal CI users, and it would be worthwhile to replicate this study with a larger sample.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusions</title>
<p>In sum, there are four main findings from the present study:</p>
<list list-type="simple">
<list-item><p>(1) With bimodal hearing, intelligibility in background noise improves significantly, indicated by a significant reduction in SNR in the AERP experiment and by reduced intensities for 50% thresholds in sentence comprehension tests.</p></list-item>
<list-item><p>(2) Differences depending on familiarity of the stimuli occur early, at the level of the N1, with an amplitude difference in NH and a latency difference in CI listeners depending on the stimulus category. Differences are also apparent for the P2 potential, with shorter latencies in response to words for NH listeners. With bimodal experience, morphology of the N1&#x02013;P2 response in CI users approximates the response seen in NH.</p></list-item>
<list-item><p>(3) A prominent negative deflection from 370 to 570 ms after stimulus onset (N2/N400) is evident for CI users irrespective of stimulus category, while it is absent in NH, indicating that central processing of speech is enhanced and prolonged in CI users.</p></list-item>
<list-item><p>(4) For the N2/N400 time window, extended activation in CI users is shown in frontal brain areas, suggesting an increased need for cognitive processing to compensate for the degraded auditory speech signal.</p></list-item>
</list>
</sec>
<sec sec-type="data-availability-statement" id="s6">
<title>Data Availability Statement</title>
<p>The datasets generated for this study will not be made publicly available for ethical or legal reasons. Requests to access the dataset can be directed to the corresponding author.</p>
</sec>
<sec id="s7">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by the Institutional Review Board of the Medical Faculty Mannheim of Heidelberg University. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>TB designed the computational framework, collected and analyzed the data, and wrote the manuscript. EW-F designed the study, collected and analyzed the data, and wrote the manuscript. NR was responsible for the critical review. JS was responsible for the recruitment and critical review.</p>
<sec>
<title>Conflict of Interest</title>
<p>This study was partly funded by Advanced Bionics AG, Staefa, Switzerland. Advanced Bionics AG manufactures the device under investigation in this study. This does not alter the authors&#x00027; adherence to all Frontiers policies as detailed online in the guide for authors.</p>
</sec>
</sec>
</body>
<back>
<ack><p>The authors thank Tanja Sutter for helping recruit the study participants and also thank the subjects who participated in this research. This study was supported by Advanced Bionics AG, Staefa, Switzerland. Moreover, the authors acknowledge financial support by Deutsche Forschungsgemeinschaft within the funding programme Open Access Publishing, by the Baden-W&#x000FC;rttemberg Ministry of Science, Research and the Arts, and by Ruprecht-Karls-Universit&#x000E4;t Heidelberg.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Butler</surname> <given-names>BE</given-names></name> <name><surname>Meredith</surname> <given-names>MA</given-names></name> <name><surname>Lomber</surname> <given-names>SG</given-names></name></person-group>. <article-title>Editorial introduction: special issue on plasticity following hearing loss and deafness</article-title>. <source>Hear Res.</source> (<year>2017</year>) <volume>343</volume>:<fpage>1</fpage>&#x02013;<lpage>3</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2016.10.014</pub-id><pub-id pub-id-type="pmid">27771426</pub-id></citation></ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heid</surname> <given-names>S</given-names></name> <name><surname>J&#x000E4;hn-Siebert</surname> <given-names>TK</given-names></name> <name><surname>Klinke</surname> <given-names>R</given-names></name> <name><surname>Hartmann</surname> <given-names>R</given-names></name> <name><surname>Langner</surname> <given-names>G</given-names></name></person-group>. <article-title>Afferent projection patterns in the auditory brainstem in normal and congenitally deaf white cats</article-title>. <source>Hear Res.</source> (<year>1997</year>) <volume>110</volume>:<fpage>191</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/S0378-5955(97)00074-9</pub-id><pub-id pub-id-type="pmid">9282901</pub-id></citation></ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bose</surname> <given-names>M</given-names></name> <name><surname>Mu&#x000F1;oz-Llancao</surname> <given-names>P</given-names></name> <name><surname>Roychowdhury</surname> <given-names>S</given-names></name> <name><surname>Nichols</surname> <given-names>JA</given-names></name> <name><surname>Jakkamsetti</surname> <given-names>V</given-names></name> <name><surname>Porter</surname> <given-names>B</given-names></name> <etal/></person-group>. <article-title>Effect of the environment on the dendritic morphology of the rat auditory cortex</article-title>. <source>Synapse.</source> (<year>2010</year>) <volume>64</volume>:<fpage>97</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1002/syn.20710</pub-id><pub-id pub-id-type="pmid">19771593</pub-id></citation></ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stropahl</surname> <given-names>M</given-names></name> <name><surname>Chen</surname> <given-names>L-C</given-names></name> <name><surname>Debener</surname> <given-names>S</given-names></name></person-group>. <article-title>Cortical reorganization in postlingually deaf cochlear implant users: intra-modal and cross-modal considerations</article-title>. <source>Hear Res.</source> (<year>2017</year>) <volume>343</volume>:<fpage>128</fpage>&#x02013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2016.07.005</pub-id><pub-id pub-id-type="pmid">27473503</pub-id></citation></ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lenarz</surname> <given-names>M</given-names></name> <name><surname>S&#x000F6;nmez</surname> <given-names>H</given-names></name> <name><surname>Joseph</surname> <given-names>G</given-names></name> <name><surname>B&#x000FC;chner</surname> <given-names>A</given-names></name> <name><surname>Lenarz</surname> <given-names>T</given-names></name></person-group>. <article-title>Long-term performance of cochlear implants in postlingually deafened adults</article-title>. <source>Otolaryngol Head Neck Surg.</source> (<year>2012</year>) <volume>147</volume>:<fpage>112</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1177/0194599812438041</pub-id><pub-id pub-id-type="pmid">22344289</pub-id></citation></ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akeroyd</surname> <given-names>MA</given-names></name></person-group>. <article-title>The psychoacoustics of binaural hearing</article-title>. <source>Int J Audiol.</source> (<year>2006</year>) <volume>45</volume>:<fpage>25</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1080/14992020600782626</pub-id><pub-id pub-id-type="pmid">16938772</pub-id></citation></ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>T&#x000E1;vora-Vieira</surname> <given-names>D</given-names></name> <name><surname>Rajan</surname> <given-names>GP</given-names></name> <name><surname>Van de Heyning</surname> <given-names>P</given-names></name> <name><surname>Mertens</surname> <given-names>G</given-names></name></person-group>. <article-title>Evaluating the long-term hearing outcomes of cochlear implant users with single-sided deafness</article-title>. <source>Otol Neurotol.</source> (<year>2019</year>) <volume>40</volume>:<fpage>e575</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1097/MAO.0000000000002235</pub-id><pub-id pub-id-type="pmid">31135665</pub-id></citation></ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noble</surname> <given-names>W</given-names></name> <name><surname>Tyler</surname> <given-names>R</given-names></name> <name><surname>Dunn</surname> <given-names>C</given-names></name> <name><surname>Bhullar</surname> <given-names>N</given-names></name></person-group>. <article-title>Hearing handicap ratings among different profiles of adult cochlear implant users</article-title>. <source>Ear Hear.</source> (<year>2007</year>) <volume>29</volume>:<fpage>112</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1097/AUD.0b013e31815d6da8</pub-id><pub-id pub-id-type="pmid">18091100</pub-id></citation></ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zirn</surname> <given-names>S</given-names></name> <name><surname>Angermeier</surname> <given-names>J</given-names></name> <name><surname>Arndt</surname> <given-names>S</given-names></name> <name><surname>Aschendorff</surname> <given-names>A</given-names></name> <name><surname>Wesarg</surname> <given-names>T</given-names></name></person-group>. <article-title>Reducing the device delay mismatch can improve sound localization in bimodal cochlear implant/hearing-aid users</article-title>. <source>Trends Hear.</source> (<year>2019</year>) <volume>23</volume>:<fpage>2331216519843876</fpage>. <pub-id pub-id-type="doi">10.1177/2331216519843876</pub-id><pub-id pub-id-type="pmid">31018790</pub-id></citation></ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu&#x000E9;rit</surname> <given-names>F</given-names></name> <name><surname>Santurette</surname> <given-names>S</given-names></name> <name><surname>Chalupper</surname> <given-names>J</given-names></name> <name><surname>Dau</surname> <given-names>T</given-names></name></person-group>. <article-title>Investigating interaural frequency-place mismatches via bimodal vowel integration</article-title>. <source>Trends Hear.</source> (<year>2014</year>) <volume>18</volume>:<fpage>2331216514560590</fpage>. <pub-id pub-id-type="doi">10.1177/2331216514560590</pub-id><pub-id pub-id-type="pmid">25421087</pub-id></citation></ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallh&#x000E4;usser-Franke</surname> <given-names>E</given-names></name> <name><surname>Balkenhol</surname> <given-names>T</given-names></name> <name><surname>Hetjens</surname> <given-names>S</given-names></name> <name><surname>Rotter</surname> <given-names>N</given-names></name> <name><surname>Servais</surname> <given-names>JJ</given-names></name></person-group>. <article-title>Patient benefit following bimodal CI-provision: self-reported abilities vs. hearing status</article-title>. <source>Front Neurol.</source> (<year>2018</year>) <volume>9</volume>:<fpage>753</fpage>. <pub-id pub-id-type="doi">10.3389/fneur.2018.00753</pub-id><pub-id pub-id-type="pmid">30250450</pub-id></citation></ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Warren</surname> <given-names>S</given-names></name> <name><surname>Dunbar</surname> <given-names>M</given-names></name></person-group>. <article-title>Bimodal hearing in individuals with severe-to-profound hearing loss: benefits, challenges, and management</article-title>. <source>Semin Hear.</source> (<year>2018</year>) <volume>39</volume>:<fpage>405</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1055/s-0038-1670706</pub-id><pub-id pub-id-type="pmid">30374211</pub-id></citation></ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McMurray</surname> <given-names>B</given-names></name> <name><surname>Farris-Trimble</surname> <given-names>A</given-names></name> <name><surname>Seedorff</surname> <given-names>M</given-names></name> <name><surname>Rigler</surname> <given-names>H</given-names></name></person-group>. <article-title>The effect of residual acoustic hearing and adaptation to uncertainty on speech perception in cochlear implant users</article-title>. <source>Ear Hear.</source> (<year>2016</year>) <volume>37</volume>:<fpage>e37</fpage>&#x02013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1097/AUD.0000000000000207</pub-id><pub-id pub-id-type="pmid">26317298</pub-id></citation></ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Presacco</surname> <given-names>A</given-names></name> <name><surname>Simon</surname> <given-names>JZ</given-names></name> <name><surname>Anderson</surname> <given-names>S</given-names></name></person-group>. <article-title>Speech-in-noise representation in the aging midbrain and cortex: effects of hearing loss</article-title>. <source>PLoS ONE.</source> (<year>2019</year>) <volume>14</volume>:<fpage>e0213899</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0213899</pub-id><pub-id pub-id-type="pmid">30865718</pub-id></citation></ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wong</surname> <given-names>PCM</given-names></name> <name><surname>Uppunda</surname> <given-names>AK</given-names></name> <name><surname>Parrish</surname> <given-names>TB</given-names></name> <name><surname>Dhar</surname> <given-names>S</given-names></name></person-group>. <article-title>Cortical mechanisms of speech perception in noise</article-title>. <source>J Speech Lang Hear Res.</source> (<year>2008</year>) <volume>51</volume>:<fpage>1026</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2008/075)</pub-id><pub-id pub-id-type="pmid">18658069</pub-id></citation></ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friederici</surname> <given-names>AD</given-names></name> <name><surname>Chomsky</surname> <given-names>N</given-names></name> <name><surname>Berwick</surname> <given-names>RC</given-names></name> <name><surname>Moro</surname> <given-names>A</given-names></name> <name><surname>Bolhuis</surname> <given-names>JJ</given-names></name></person-group>. <article-title>Language, mind and brain</article-title>. <source>Nat Hum Behav.</source> (<year>2017</year>) <volume>1</volume>:<fpage>713</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1038/s41562-017-0184-4</pub-id><pub-id pub-id-type="pmid">31024099</pub-id></citation></ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hickok</surname> <given-names>G</given-names></name> <name><surname>Poeppel</surname> <given-names>D</given-names></name></person-group>. <article-title>Neural basis of speech perception</article-title>. In: <source>The Human Auditory System - Fundamental Organization and Clinical Disorders</source>. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Elsevier</publisher-name> (<year>2015</year>). p. <fpage>149</fpage>&#x02013;<lpage>60</lpage>.</citation></ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peelle</surname> <given-names>JE</given-names></name></person-group>. <article-title>Hierarchical processing for speech in human auditory cortex and beyond</article-title>. <source>Front Hum Neurosci.</source> (<year>2010</year>) <volume>4</volume>:<fpage>51</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2010.00051</pub-id><pub-id pub-id-type="pmid">20661456</pub-id></citation></ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hagoort</surname> <given-names>P</given-names></name></person-group>. <article-title>The fractionation of spoken language understanding by measuring electrical and magnetic brain signals</article-title>. <source>Philos Trans R Soc Lond B Biol Sci.</source> (<year>2007</year>) <volume>363</volume>:<fpage>1055</fpage>&#x02013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2007.2159</pub-id><pub-id pub-id-type="pmid">17890190</pub-id></citation></ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moberly</surname> <given-names>AC</given-names></name> <name><surname>Lowenstein</surname> <given-names>JH</given-names></name> <name><surname>Tarr</surname> <given-names>E</given-names></name> <name><surname>Caldwell-Tarr</surname> <given-names>A</given-names></name> <name><surname>Welling</surname> <given-names>DB</given-names></name> <name><surname>Shahin</surname> <given-names>AJ</given-names></name> <etal/></person-group>. <article-title>Do adults with cochlear implants rely on different acoustic cues for phoneme perception than adults with normal hearing?</article-title> <source>J Speech Lang Hear Res.</source> (<year>2014</year>) <volume>57</volume>:<fpage>566</fpage>&#x02013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1044/2014_JSLHR-H-12-0323</pub-id><pub-id pub-id-type="pmid">24686722</pub-id></citation></ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marschark</surname> <given-names>M</given-names></name> <name><surname>Convertino</surname> <given-names>C</given-names></name> <name><surname>McEvoy</surname> <given-names>C</given-names></name> <name><surname>Masteller</surname> <given-names>A</given-names></name></person-group>. <article-title>Organization and use of the mental lexicon by deaf and hearing individuals</article-title>. <source>Am Ann Deaf.</source> (<year>2004</year>) <volume>149</volume>:<fpage>51</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1353/aad.2004.0013</pub-id><pub-id pub-id-type="pmid">15332467</pub-id></citation></ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tremblay</surname> <given-names>KL</given-names></name> <name><surname>Friesen</surname> <given-names>L</given-names></name> <name><surname>Martin</surname> <given-names>BA</given-names></name> <name><surname>Wright</surname> <given-names>R</given-names></name></person-group>. <article-title>Test-retest reliability of cortical evoked potentials using naturally produced speech sounds</article-title>. <source>Ear Hear.</source> (<year>2003</year>) <volume>24</volume>:<fpage>225</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1097/01.AUD.0000069229.84883.03</pub-id><pub-id pub-id-type="pmid">12799544</pub-id></citation></ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Finke</surname> <given-names>M</given-names></name> <name><surname>B&#x000FC;chner</surname> <given-names>A</given-names></name> <name><surname>Ruigendijk</surname> <given-names>E</given-names></name> <name><surname>Meyer</surname> <given-names>M</given-names></name> <name><surname>Sandmann</surname> <given-names>P</given-names></name></person-group>. <article-title>On the relationship between auditory cognition and speech intelligibility in cochlear implant users: An ERP study</article-title>. <source>Neuropsychologia.</source> (<year>2016</year>) <volume>87</volume>:<fpage>169</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2016.05.019</pub-id><pub-id pub-id-type="pmid">27212057</pub-id></citation></ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Finke</surname> <given-names>M</given-names></name> <name><surname>Sandmann</surname> <given-names>P</given-names></name> <name><surname>B&#x000F6;nitz</surname> <given-names>H</given-names></name> <name><surname>Kral</surname> <given-names>A</given-names></name> <name><surname>B&#x000FC;chner</surname> <given-names>A</given-names></name></person-group>. <article-title>Consequences of stimulus type on higher-order processing in single-sided deaf cochlear implant users</article-title>. <source>Audiol Neurootol.</source> (<year>2016</year>) <volume>21</volume>:<fpage>305</fpage>&#x02013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1159/000452123</pub-id><pub-id pub-id-type="pmid">27866186</pub-id></citation></ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hahne</surname> <given-names>A</given-names></name> <name><surname>Mainka</surname> <given-names>A</given-names></name> <name><surname>Leuner</surname> <given-names>A</given-names></name> <name><surname>M&#x000FC;rbe</surname> <given-names>D</given-names></name></person-group>. <article-title>Adult cochlear implant users are able to discriminate basic tonal features in musical patterns</article-title>. <source>Otol Neurotol.</source> (<year>2016</year>) <volume>37</volume>:<fpage>e360</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1097/MAO.0000000000001067</pub-id><pub-id pub-id-type="pmid">27631660</pub-id></citation></ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sandmann</surname> <given-names>P</given-names></name> <name><surname>Plotz</surname> <given-names>K</given-names></name> <name><surname>Hauthal</surname> <given-names>N</given-names></name> <name><surname>de Vos</surname> <given-names>M</given-names></name> <name><surname>Sch&#x000F6;nfeld</surname> <given-names>R</given-names></name> <name><surname>Debener</surname> <given-names>S</given-names></name></person-group>. <article-title>Rapid bilateral improvement in auditory cortex activity in postlingually deafened adults following cochlear implantation</article-title>. <source>Clin Neurophysiol.</source> (<year>2015</year>) <volume>126</volume>:<fpage>594</fpage>&#x02013;<lpage>607</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2014.06.029</pub-id><pub-id pub-id-type="pmid">25065298</pub-id></citation></ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Henkin</surname> <given-names>Y</given-names></name> <name><surname>Yaar-Soffer</surname> <given-names>Y</given-names></name> <name><surname>Steinberg</surname> <given-names>M</given-names></name> <name><surname>Muchnik</surname> <given-names>C</given-names></name></person-group>. <article-title>Neural correlates of auditory-cognitive processing in older adult cochlear implant recipients</article-title>. <source>Audiol Neurootol.</source> (<year>2014</year>) <volume>19</volume>:<fpage>21</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1159/000371602</pub-id><pub-id pub-id-type="pmid">25733362</pub-id></citation></ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maurer</surname> <given-names>J</given-names></name> <name><surname>Collet</surname> <given-names>L</given-names></name> <name><surname>Pelster</surname> <given-names>H</given-names></name> <name><surname>Truy</surname> <given-names>E</given-names></name> <name><surname>Gall&#x000E9;go</surname> <given-names>S</given-names></name></person-group>. <article-title>Auditory late cortical response and speech recognition in digisonic cochlear implant users</article-title>. <source>Laryngoscope.</source> (<year>2002</year>) <volume>112</volume>:<fpage>2220</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1097/00005537-200212000-00017</pub-id><pub-id pub-id-type="pmid">12461344</pub-id></citation></ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuuluvainen</surname> <given-names>S</given-names></name> <name><surname>Nevalainen</surname> <given-names>P</given-names></name> <name><surname>Sorokin</surname> <given-names>A</given-names></name> <name><surname>Mittag</surname> <given-names>M</given-names></name> <name><surname>Partanen</surname> <given-names>E</given-names></name> <name><surname>Putkinen</surname> <given-names>V</given-names></name> <etal/></person-group>. <article-title>The neural basis of sublexical speech and corresponding nonspeech processing: a combined EEG-MEG study</article-title>. <source>Brain Lang.</source> (<year>2014</year>) <volume>130</volume>:<fpage>19</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2014.01.008</pub-id><pub-id pub-id-type="pmid">24576806</pub-id></citation></ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>PK</given-names></name> <name><surname>Conboy</surname> <given-names>BT</given-names></name> <name><surname>Coffey-Corina</surname> <given-names>S</given-names></name> <name><surname>Padden</surname> <given-names>D</given-names></name> <name><surname>Rivera-Gaxiola</surname> <given-names>M</given-names></name> <name><surname>Nelson</surname> <given-names>T</given-names></name></person-group>. <article-title>Phonetic learning as a pathway to language: new data and native language magnet theory expanded (NLM-e)</article-title>. <source>Philos Trans R Soc Lond B Biol Sci.</source> (<year>2007</year>) <volume>363</volume>:<fpage>979</fpage>&#x02013;<lpage>1000</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2007.2154</pub-id><pub-id pub-id-type="pmid">17846016</pub-id></citation></ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lau</surname> <given-names>EF</given-names></name> <name><surname>Phillips</surname> <given-names>C</given-names></name> <name><surname>Poeppel</surname> <given-names>D</given-names></name></person-group>. <article-title>A cortical network for semantics: (de)constructing the N400</article-title>. <source>Nat Rev Neurosci.</source> (<year>2008</year>) <volume>9</volume>:<fpage>920</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2532</pub-id><pub-id pub-id-type="pmid">19020511</pub-id></citation></ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>CJ</given-names></name> <name><surname>Jeon</surname> <given-names>E-K</given-names></name> <name><surname>Driscoll</surname> <given-names>V</given-names></name> <name><surname>Mussoi</surname> <given-names>B</given-names></name> <name><surname>Deshpande</surname> <given-names>SB</given-names></name> <name><surname>Gfeller</surname> <given-names>K</given-names></name> <etal/></person-group>. <article-title>Effects of long-term musical training on cortical auditory evoked potentials</article-title>. <source>Ear Hear.</source> (<year>2017</year>) <volume>38</volume>:<fpage>e74</fpage>&#x02013;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1097/AUD.0000000000000375</pub-id><pub-id pub-id-type="pmid">28225736</pub-id></citation></ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thaerig</surname> <given-names>S</given-names></name> <name><surname>Behne</surname> <given-names>N</given-names></name> <name><surname>Schadow</surname> <given-names>J</given-names></name> <name><surname>Lenz</surname> <given-names>D</given-names></name> <name><surname>Scheich</surname> <given-names>H</given-names></name> <name><surname>Brechmann</surname> <given-names>A</given-names></name> <etal/></person-group>. <article-title>Sound level dependence of auditory evoked potentials: simultaneous EEG recording and low-noise fMRI</article-title>. <source>Int J Psychophysiol.</source> (<year>2008</year>) <volume>67</volume>:<fpage>235</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2007.06.007</pub-id><pub-id pub-id-type="pmid">17707939</pub-id></citation></ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tremblay</surname> <given-names>KL</given-names></name> <name><surname>Ross</surname> <given-names>B</given-names></name> <name><surname>Inoue</surname> <given-names>K</given-names></name> <name><surname>McClannahan</surname> <given-names>K</given-names></name> <name><surname>Collet</surname> <given-names>G</given-names></name></person-group>. <article-title>Is the auditory evoked P2 response a biomarker of learning?</article-title> <source>Front Syst Neurosci.</source> (<year>2014</year>) <volume>8</volume>:<fpage>28</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2014.00028</pub-id><pub-id pub-id-type="pmid">24600358</pub-id></citation></ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kutas</surname> <given-names>M</given-names></name> <name><surname>Federmeier</surname> <given-names>KD</given-names></name></person-group>. <article-title>Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP)</article-title>. <source>Annu Rev Psychol.</source> (<year>2011</year>) <volume>62</volume>:<fpage>621</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.093008.131123</pub-id><pub-id pub-id-type="pmid">20809790</pub-id></citation></ref>
<ref id="B36">
<label>36.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>J</given-names></name> <name><surname>Iverson</surname> <given-names>P</given-names></name></person-group>. <article-title>Listening effort during speech perception enhances auditory and lexical processing for non-native listeners and accents</article-title>. <source>Cognition.</source> (<year>2018</year>) <volume>179</volume>:<fpage>163</fpage>&#x02013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2018.06.001</pub-id><pub-id pub-id-type="pmid">29957515</pub-id></citation></ref>
<ref id="B37">
<label>37.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duncan</surname> <given-names>CC</given-names></name> <name><surname>Barry</surname> <given-names>RJ</given-names></name> <name><surname>Connolly</surname> <given-names>JF</given-names></name> <name><surname>Fischer</surname> <given-names>C</given-names></name> <name><surname>Michie</surname> <given-names>PT</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R</given-names></name> <etal/></person-group>. <article-title>Event-related potentials in clinical research: guidelines for eliciting, recording, and quantifying mismatch negativity, P300, and N400</article-title>. <source>Clin Neurophysiol.</source> (<year>2009</year>) <volume>120</volume>:<fpage>1883</fpage>&#x02013;<lpage>908</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.07.045</pub-id><pub-id pub-id-type="pmid">19796989</pub-id></citation></ref>
<ref id="B38">
<label>38.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Luck</surname> <given-names>SJ</given-names></name></person-group>. <source>An Introduction to the Event-Related Potential Technique, 2nd Edn.</source> <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>The MIT Press</publisher-name> (<year>2014</year>).</citation></ref>
<ref id="B39">
<label>39.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Erb</surname> <given-names>J</given-names></name> <name><surname>Obleser</surname> <given-names>J</given-names></name></person-group>. <article-title>Upregulation of cognitive control networks in older adults&#x00027; speech comprehension</article-title>. <source>Front Syst Neurosci.</source> (<year>2013</year>) <volume>7</volume>:<fpage>116</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2013.00116</pub-id><pub-id pub-id-type="pmid">24399939</pub-id></citation></ref>
<ref id="B40">
<label>40.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Henry</surname> <given-names>MJ</given-names></name> <name><surname>Herrmann</surname> <given-names>B</given-names></name> <name><surname>Kunke</surname> <given-names>D</given-names></name> <name><surname>Obleser</surname> <given-names>J</given-names></name></person-group>. <article-title>Aging affects the balance of neural entrainment and top-down neural modulation in the listening brain</article-title>. <source>Nat Commun.</source> (<year>2017</year>) <volume>8</volume>:<fpage>15801</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms15801</pub-id><pub-id pub-id-type="pmid">28654081</pub-id></citation></ref>
<ref id="B41">
<label>41.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wong</surname> <given-names>PCM</given-names></name> <name><surname>Jin</surname> <given-names>JX</given-names></name> <name><surname>Gunasekera</surname> <given-names>GM</given-names></name> <name><surname>Abel</surname> <given-names>R</given-names></name> <name><surname>Lee</surname> <given-names>ER</given-names></name> <name><surname>Dhar</surname> <given-names>S</given-names></name></person-group>. <article-title>Aging and cortical mechanisms of speech perception in noise</article-title>. <source>Neuropsychologia.</source> (<year>2009</year>) <volume>47</volume>:<fpage>693</fpage>&#x02013;<lpage>703</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2008.11.032</pub-id><pub-id pub-id-type="pmid">19124032</pub-id></citation></ref>
<ref id="B42">
<label>42.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cabeza</surname> <given-names>R</given-names></name> <name><surname>Albert</surname> <given-names>M</given-names></name> <name><surname>Belleville</surname> <given-names>S</given-names></name> <name><surname>Craik</surname> <given-names>FIM</given-names></name> <name><surname>Duarte</surname> <given-names>A</given-names></name> <name><surname>Grady</surname> <given-names>CL</given-names></name> <etal/></person-group>. <article-title>Maintenance, reserve and compensation: the cognitive neuroscience of healthy ageing</article-title>. <source>Nat Rev Neurosci.</source> (<year>2018</year>) <volume>19</volume>:<fpage>701</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1038/s41583-018-0068-2</pub-id></citation></ref>
<ref id="B43">
<label>43.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cabeza</surname> <given-names>R</given-names></name> <name><surname>Anderson</surname> <given-names>ND</given-names></name> <name><surname>Locantore</surname> <given-names>JK</given-names></name> <name><surname>McIntosh</surname> <given-names>AR</given-names></name></person-group>. <article-title>Aging gracefully: compensatory brain activity in high-performing older adults</article-title>. <source>Neuroimage.</source> (<year>2002</year>) <volume>17</volume>:<fpage>1394</fpage>&#x02013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.2002.1280</pub-id><pub-id pub-id-type="pmid">12414279</pub-id></citation></ref>
<ref id="B44">
<label>44.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kalbe</surname> <given-names>E</given-names></name> <name><surname>Kessler</surname> <given-names>J</given-names></name> <name><surname>Calabrese</surname> <given-names>P</given-names></name> <name><surname>Smith</surname> <given-names>R</given-names></name> <name><surname>Passmore</surname> <given-names>AP</given-names></name> <name><surname>Brand</surname> <given-names>M</given-names></name> <etal/></person-group>. <article-title>DemTect: a new, sensitive cognitive screening test to support the diagnosis of mild cognitive impairment and early dementia</article-title>. <source>Int J Geriatr Psychiatry.</source> (<year>2004</year>) <volume>19</volume>:<fpage>136</fpage>&#x02013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1002/gps.1042</pub-id><pub-id pub-id-type="pmid">14758579</pub-id></citation></ref>
<ref id="B45">
<label>45.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Servais</surname> <given-names>JJ</given-names></name> <name><surname>H&#x000F6;rmann</surname> <given-names>K</given-names></name> <name><surname>Wallh&#x000E4;usser-Franke</surname> <given-names>E</given-names></name></person-group>. <article-title>Unilateral cochlear implantation reduces tinnitus loudness in bimodal hearing: a prospective study</article-title>. <source>Front Neurol.</source> (<year>2017</year>) <volume>8</volume>:<fpage>60</fpage>. <pub-id pub-id-type="doi">10.3389/fneur.2017.00060</pub-id><pub-id pub-id-type="pmid">28326059</pub-id></citation></ref>
<ref id="B46">
<label>46.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zigmond</surname> <given-names>AS</given-names></name> <name><surname>Snaith</surname> <given-names>RP</given-names></name></person-group>. <article-title>The hospital anxiety and depression scale</article-title>. <source>Acta Psychiatr Scand.</source> (<year>1983</year>) <volume>67</volume>:<fpage>361</fpage>&#x02013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1111/j.1600-0447.1983.tb09716.x</pub-id><pub-id pub-id-type="pmid">6880820</pub-id></citation></ref>
<ref id="B47">
<label>47.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Letowski</surname> <given-names>T</given-names></name> <name><surname>Champlin</surname> <given-names>C</given-names></name></person-group>. <article-title>Audiometric calibration: air conduction</article-title>. <source>Semin Hear.</source> (<year>2014</year>) <volume>35</volume>:<fpage>312</fpage>&#x02013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1055/s-0034-1390161</pub-id></citation></ref>
<ref id="B48">
<label>48.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jensen</surname> <given-names>NS</given-names></name> <name><surname>Akeroyd</surname> <given-names>MA</given-names></name> <name><surname>Noble</surname> <given-names>W</given-names></name> <name><surname>Naylor</surname> <given-names>G</given-names></name></person-group>. <article-title>The speech, spatial and qualities of hearing scale (SSQ) as a benefit measure</article-title>. In: <source>Paper Presented at the NCRAR Conference: The Ear-Brain System: Approaches to the Study and Treatment of Hearing Loss</source>. <publisher-loc>Portland, OR</publisher-loc> (<year>2009</year>).</citation></ref>
<ref id="B49">
<label>49.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Noble</surname> <given-names>W</given-names></name></person-group>. <article-title>Assessing binaural hearing: results using the speech, spatial and qualities of hearing scale</article-title>. <source>J Am Acad Audiol.</source> (<year>2010</year>) <volume>21</volume>:<fpage>568</fpage>&#x02013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.3766/jaaa.21.9.2</pub-id><pub-id pub-id-type="pmid">21241644</pub-id></citation></ref>
<ref id="B50">
<label>50.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hahlbrock</surname> <given-names>KH</given-names></name></person-group>. <source>Sprachaudiometrie: Grundlagen und praktische Anwendung einer Sprachaudiometrie f&#x000FC;r das deutsche Sprachgebiet (German Edition)</source>. <edition>2nd ed</edition>. <publisher-loc>Stuttgart</publisher-loc>: <publisher-name>Thieme</publisher-name> (<year>1970</year>).</citation></ref>
<ref id="B51">
<label>51.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>L&#x000F6;hler</surname> <given-names>J</given-names></name> <name><surname>Akcicek</surname> <given-names>B</given-names></name> <name><surname>Wollenberg</surname> <given-names>B</given-names></name> <name><surname>Sch&#x000F6;nweiler</surname> <given-names>R</given-names></name> <name><surname>Verges</surname> <given-names>L</given-names></name> <name><surname>Langer</surname> <given-names>C</given-names></name> <etal/></person-group>. <article-title>Results in using the Freiburger monosyllabic speech test in noise without and with hearing aids</article-title>. <source>Eur Arch Otorhinolaryngol.</source> (<year>2014</year>) <volume>272</volume>:<fpage>2135</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1007/s00405-014-3039-x</pub-id><pub-id pub-id-type="pmid">24740734</pub-id></citation></ref>
<ref id="B52">
<label>52.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wagener</surname> <given-names>K</given-names></name> <name><surname>Kollmeier</surname> <given-names>B</given-names></name> <name><surname>K&#x000FC;hnel</surname> <given-names>V</given-names></name></person-group>. <article-title>Entwicklung und Evaluation eines Satztests f&#x000FC;r die deutsche Sprache I: Design des Oldenburger Satztests</article-title>. <source>Z Audiol.</source> (<year>1999</year>) <volume>38</volume>:<fpage>4</fpage>&#x02013;<lpage>15</lpage>.</citation></ref>
<ref id="B53">
<label>53.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wagener</surname> <given-names>K</given-names></name> <name><surname>Kollmeier</surname> <given-names>B</given-names></name> <name><surname>K&#x000FC;hnel</surname> <given-names>V</given-names></name></person-group>. <article-title>Entwicklung und Evaluation eines Satztests f&#x000FC;r die deutsche Sprache Teil II: Optimierung des Oldenburger Satztests</article-title>. <source>Z Audiol.</source> (<year>1999</year>) <volume>38</volume>:<fpage>44</fpage>&#x02013;<lpage>56</lpage>.</citation></ref>
<ref id="B54">
<label>54.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wagener</surname> <given-names>K</given-names></name> <name><surname>Kollmeier</surname> <given-names>B</given-names></name> <name><surname>K&#x000FC;hnel</surname> <given-names>V</given-names></name></person-group>. <article-title>Entwicklung und Evaluation eines Satztests f&#x000FC;r die deutsche Sprache Teil III: Evaluation des Oldenburger Satztests</article-title>. <source>Z Audiol.</source> (<year>1999</year>) <volume>38</volume>:<fpage>86</fpage>&#x02013;<lpage>95</lpage>.</citation></ref>
<ref id="B55">
<label>55.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oostenveld</surname> <given-names>R</given-names></name> <name><surname>Praamstra</surname> <given-names>P</given-names></name></person-group>. <article-title>The five percent electrode system for high-resolution EEG and ERP measurements</article-title>. <source>Clin Neurophysiol.</source> (<year>2001</year>) <volume>112</volume>:<fpage>713</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/S1388-2457(00)00527-7</pub-id><pub-id pub-id-type="pmid">11275545</pub-id></citation></ref>
<ref id="B56">
<label>56.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Senkowski</surname> <given-names>D</given-names></name> <name><surname>Pomper</surname> <given-names>U</given-names></name> <name><surname>Fitzner</surname> <given-names>I</given-names></name> <name><surname>Engel</surname> <given-names>AK</given-names></name> <name><surname>Kral</surname> <given-names>A</given-names></name></person-group>. <article-title>Beta-band activity in auditory pathways reflects speech localization and recognition in bilateral cochlear implant users</article-title>. <source>Hum Brain Mapp.</source> (<year>2013</year>) <volume>35</volume>:<fpage>3107</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.22388</pub-id><pub-id pub-id-type="pmid">24123535</pub-id></citation></ref>
<ref id="B57">
<label>57.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x000F6;nnberg</surname> <given-names>J</given-names></name> <name><surname>Lunner</surname> <given-names>T</given-names></name> <name><surname>Zekveld</surname> <given-names>A</given-names></name> <name><surname>S&#x000F6;rqvist</surname> <given-names>P</given-names></name> <name><surname>Danielsson</surname> <given-names>H</given-names></name> <name><surname>Lyxell</surname> <given-names>B</given-names></name> <etal/></person-group>. <article-title>The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances</article-title>. <source>Front Syst Neurosci.</source> (<year>2013</year>) <volume>7</volume>:<fpage>31</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2013.00031</pub-id><pub-id pub-id-type="pmid">23874273</pub-id></citation></ref>
<ref id="B58">
<label>58.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A</given-names></name> <name><surname>Makeig</surname> <given-names>S</given-names></name></person-group>. <article-title>EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis</article-title>. <source>J Neurosci Methods.</source> (<year>2004</year>) <volume>134</volume>:<fpage>9</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id><pub-id pub-id-type="pmid">15102499</pub-id></citation></ref>
<ref id="B59">
<label>59.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A</given-names></name> <name><surname>Sejnowski</surname> <given-names>T</given-names></name> <name><surname>Makeig</surname> <given-names>S</given-names></name></person-group>. <article-title>Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis</article-title>. <source>Neuroimage.</source> (<year>2007</year>) <volume>34</volume>:<fpage>1443</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.11.004</pub-id><pub-id pub-id-type="pmid">17188898</pub-id></citation></ref>
<ref id="B60">
<label>60.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Onton</surname> <given-names>J</given-names></name> <name><surname>Westerfield</surname> <given-names>M</given-names></name> <name><surname>Townsend</surname> <given-names>J</given-names></name> <name><surname>Makeig</surname> <given-names>S</given-names></name></person-group>. <article-title>Imaging human EEG dynamics using independent component analysis</article-title>. <source>Neurosci Biobehav Rev.</source> (<year>2006</year>) <volume>30</volume>:<fpage>808</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2006.06.007</pub-id><pub-id pub-id-type="pmid">16904745</pub-id></citation></ref>
<ref id="B61">
<label>61.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Molgedey</surname> <given-names>L</given-names></name> <name><surname>Schuster</surname> <given-names>HG</given-names></name></person-group>. <article-title>Separation of a mixture of independent signals using time delayed correlations</article-title>. <source>Phys Rev Lett.</source> (<year>1994</year>) <volume>72</volume>:<fpage>3634</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.72.3634</pub-id><pub-id pub-id-type="pmid">10056251</pub-id></citation></ref>
<ref id="B62">
<label>62.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Balkenhol</surname> <given-names>T</given-names></name> <name><surname>Wallh&#x000E4;usser-Franke</surname> <given-names>E</given-names></name> <name><surname>Delb</surname> <given-names>W</given-names></name></person-group>. <article-title>Psychoacoustic tinnitus loudness and tinnitus-related distress show different associations with oscillatory brain activity</article-title>. <source>PLoS ONE.</source> (<year>2013</year>) <volume>8</volume>:<fpage>e53180</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0053180</pub-id><pub-id pub-id-type="pmid">23326394</pub-id></citation></ref>
<ref id="B63">
<label>63.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oostenveld</surname> <given-names>R</given-names></name> <name><surname>Fries</surname> <given-names>P</given-names></name> <name><surname>Maris</surname> <given-names>E</given-names></name> <name><surname>Schoffelen</surname> <given-names>J-M</given-names></name></person-group>. <article-title>FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data</article-title>. <source>Comput Intell Neurosci.</source> (<year>2011</year>) <volume>2011</volume>:<fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1155/2011/156869</pub-id><pub-id pub-id-type="pmid">21253357</pub-id></citation></ref>
<ref id="B64">
<label>64.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liesefeld</surname> <given-names>HR</given-names></name></person-group>. <article-title>Estimating the timing of cognitive operations with MEG/EEG latency measures: a primer, a brief tutorial, and an implementation of various methods</article-title>. <source>Front Neurosci.</source> (<year>2018</year>) <volume>12</volume>:<fpage>765</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00765</pub-id><pub-id pub-id-type="pmid">30410431</pub-id></citation></ref>
<ref id="B65">
<label>65.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dunlap</surname> <given-names>WP</given-names></name> <name><surname>Marx</surname> <given-names>MS</given-names></name> <name><surname>Agamy</surname> <given-names>GJ</given-names></name></person-group>. <article-title>FORTRAN IV functions for calculating probabilities associated with Dunnett&#x00027;s test</article-title>. <source>Behav Res Meth Instr.</source> (<year>1981</year>) <volume>13</volume>:<fpage>363</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.3758/BF03202031</pub-id></citation></ref>
<ref id="B66">
<label>66.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dunnett</surname> <given-names>CW</given-names></name></person-group>. <article-title>A multiple comparison procedure for comparing several treatments with a control</article-title>. <source>J Am Stat Assoc.</source> (<year>1955</year>) <volume>50</volume>:<fpage>1096</fpage>&#x02013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.1955.10501294</pub-id></citation></ref>
<ref id="B67">
<label>67.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pascual-Marqui</surname> <given-names>RD</given-names></name></person-group>. <article-title>Theory of the EEG inverse problem</article-title>. In: <person-group person-group-type="editor"><name><surname>Tong</surname> <given-names>S</given-names></name> <name><surname>Thakor</surname> <given-names>NV</given-names></name></person-group>, editors. <source>Quantitative EEG Analysis: Methods and Clinical Applications</source>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Artech House</publisher-name> (<year>2009</year>). p. <fpage>121</fpage>&#x02013;<lpage>40</lpage>.</citation></ref>
<ref id="B68">
<label>68.</label>
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Pascual-Marqui</surname> <given-names>RD</given-names></name></person-group>. <source>Discrete, 3D Distributed, Linear Imaging Methods of Electric Neuronal Activity. Part 1: Exact, Zero Error Localization</source>. (<year>2007</year>). Available online at: <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/pdf/0710.3341">http://arxiv.org/pdf/0710.3341</ext-link></citation></ref>
<ref id="B69">
<label>69.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holmes</surname> <given-names>CJ</given-names></name> <name><surname>Hoge</surname> <given-names>R</given-names></name> <name><surname>Collins</surname> <given-names>L</given-names></name> <name><surname>Woods</surname> <given-names>R</given-names></name> <name><surname>Toga</surname> <given-names>AW</given-names></name> <name><surname>Evans</surname> <given-names>AC</given-names></name></person-group>. <article-title>Enhancement of MR images using registration for signal averaging</article-title>. <source>J Comput Assist Tomogr.</source> (<year>1998</year>) <volume>22</volume>:<fpage>324</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1097/00004728-199803000-00032</pub-id><pub-id pub-id-type="pmid">9530404</pub-id></citation></ref>
<ref id="B70">
<label>70.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pantev</surname> <given-names>C</given-names></name> <name><surname>Dinnesen</surname> <given-names>A</given-names></name> <name><surname>Ross</surname> <given-names>B</given-names></name> <name><surname>Wollbrink</surname> <given-names>A</given-names></name> <name><surname>Knief</surname> <given-names>A</given-names></name></person-group>. <article-title>Dynamics of auditory plasticity after cochlear implantation: a longitudinal study</article-title>. <source>Cereb Cortex.</source> (<year>2005</year>) <volume>16</volume>:<fpage>31</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhi081</pub-id><pub-id pub-id-type="pmid">15843632</pub-id></citation></ref>
<ref id="B71">
<label>71.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Legris</surname> <given-names>E</given-names></name> <name><surname>Galvin</surname> <given-names>J</given-names></name> <name><surname>Roux</surname> <given-names>S</given-names></name> <name><surname>Gomot</surname> <given-names>M</given-names></name> <name><surname>Aoustin</surname> <given-names>JM</given-names></name> <name><surname>Marx</surname> <given-names>M</given-names></name> <etal/></person-group>. <article-title>Cortical reorganization after cochlear implantation for adults with single-sided deafness</article-title>. <source>PLoS ONE.</source> (<year>2018</year>) <volume>13</volume>:<fpage>e0204402</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0204402</pub-id><pub-id pub-id-type="pmid">30248131</pub-id></citation></ref>
<ref id="B72">
<label>72.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Purdy</surname> <given-names>S</given-names></name> <name><surname>Kelly</surname> <given-names>A</given-names></name></person-group>. <article-title>Change in speech perception and auditory evoked potentials over time after unilateral cochlear implantation in postlingually deaf adults</article-title>. <source>Semin Hear.</source> (<year>2016</year>) <volume>37</volume>:<fpage>62</fpage>&#x02013;<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1055/s-0035-1570329</pub-id><pub-id pub-id-type="pmid">27587923</pub-id></citation></ref>
<ref id="B73">
<label>73.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McClannahan</surname> <given-names>KS</given-names></name> <name><surname>Backer</surname> <given-names>KC</given-names></name> <name><surname>Tremblay</surname> <given-names>KL</given-names></name></person-group>. <article-title>Auditory evoked responses in older adults with normal hearing, untreated, and treated age-related hearing loss</article-title>. <source>Ear Hear.</source> (<year>2019</year>) <volume>40</volume>:<fpage>1106</fpage>&#x02013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1097/AUD.0000000000000698</pub-id><pub-id pub-id-type="pmid">30762601</pub-id></citation></ref>
<ref id="B74">
<label>74.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lange</surname> <given-names>K</given-names></name></person-group>. <article-title>The ups and downs of temporal orienting: a review of auditory temporal orienting studies and a model associating the heterogeneous findings on the auditory N1 with opposite effects of attention and prediction</article-title>. <source>Front Hum Neurosci.</source> (<year>2013</year>) <volume>7</volume>:<fpage>263</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2013.00263</pub-id><pub-id pub-id-type="pmid">23781186</pub-id></citation></ref>
<ref id="B75">
<label>75.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chait</surname> <given-names>M</given-names></name> <name><surname>de Cheveign&#x000E9;</surname> <given-names>A</given-names></name> <name><surname>Poeppel</surname> <given-names>D</given-names></name> <name><surname>Simon</surname> <given-names>JZ</given-names></name></person-group>. <article-title>Neural dynamics of attending and ignoring in human auditory cortex</article-title>. <source>Neuropsychologia.</source> (<year>2010</year>) <volume>48</volume>:<fpage>3262</fpage>&#x02013;<lpage>71</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.07.007</pub-id><pub-id pub-id-type="pmid">20633569</pub-id></citation></ref>
<ref id="B76">
<label>76.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wild</surname> <given-names>CJ</given-names></name> <name><surname>Yusuf</surname> <given-names>A</given-names></name> <name><surname>Wilson</surname> <given-names>DE</given-names></name> <name><surname>Peelle</surname> <given-names>JE</given-names></name> <name><surname>Davis</surname> <given-names>MH</given-names></name> <name><surname>Johnsrude</surname> <given-names>IS</given-names></name></person-group>. <article-title>Effortful listening: the processing of degraded speech depends critically on attention</article-title>. <source>J Neurosci.</source> (<year>2012</year>) <volume>32</volume>:<fpage>14010</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1528-12.2012</pub-id><pub-id pub-id-type="pmid">23035108</pub-id></citation></ref>
<ref id="B77">
<label>77.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Han</surname> <given-names>J-H</given-names></name> <name><surname>Zhang</surname> <given-names>F</given-names></name> <name><surname>Kadis</surname> <given-names>DS</given-names></name> <name><surname>Houston</surname> <given-names>LM</given-names></name> <name><surname>Samy</surname> <given-names>RN</given-names></name> <name><surname>Smith</surname> <given-names>ML</given-names></name> <etal/></person-group>. <article-title>Auditory cortical activity to different voice onset times in cochlear implant users</article-title>. <source>Clin Neurophysiol.</source> (<year>2016</year>) <volume>127</volume>:<fpage>1603</fpage>&#x02013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2015.10.049</pub-id><pub-id pub-id-type="pmid">26616545</pub-id></citation></ref>
<ref id="B78">
<label>78.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campbell</surname> <given-names>J</given-names></name> <name><surname>Sharma</surname> <given-names>A</given-names></name></person-group>. <article-title>Compensatory changes in cortical resource allocation in adults with hearing loss</article-title>. <source>Front Syst Neurosci.</source> (<year>2013</year>) <volume>7</volume>:<fpage>71</fpage>. <pub-id pub-id-type="doi">10.3389/fnsys.2013.00071</pub-id><pub-id pub-id-type="pmid">24478637</pub-id></citation></ref>
<ref id="B79">
<label>79.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tiitinen</surname> <given-names>H</given-names></name> <name><surname>Miettinen</surname> <given-names>I</given-names></name> <name><surname>Alku</surname> <given-names>P</given-names></name> <name><surname>May</surname> <given-names>PJC</given-names></name></person-group>. <article-title>Transient and sustained cortical activity elicited by connected speech of varying intelligibility</article-title>. <source>BMC Neurosci.</source> (<year>2012</year>) <volume>13</volume>:<fpage>157</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-13-157</pub-id><pub-id pub-id-type="pmid">23276297</pub-id></citation></ref>
<ref id="B80">
<label>80.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steinmetzger</surname> <given-names>K</given-names></name> <name><surname>Rosen</surname> <given-names>S</given-names></name></person-group>. <article-title>Effects of acoustic periodicity, intelligibility, and pre-stimulus alpha power on the event-related potentials in response to speech</article-title>. <source>Brain Lang.</source> (<year>2017</year>) <volume>164</volume>:<fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2016.09.008</pub-id><pub-id pub-id-type="pmid">27690124</pub-id></citation></ref>
<ref id="B81">
<label>81.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rufener</surname> <given-names>KS</given-names></name> <name><surname>Liem</surname> <given-names>F</given-names></name> <name><surname>Meyer</surname> <given-names>M</given-names></name></person-group>. <article-title>Age-related differences in auditory evoked potentials as a function of task modulation during speech-nonspeech processing</article-title>. <source>Brain Behav.</source> (<year>2013</year>) <volume>4</volume>:<fpage>21</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1002/brb3.188</pub-id><pub-id pub-id-type="pmid">24653951</pub-id></citation></ref>
<ref id="B82">
<label>82.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>TC</given-names></name> <name><surname>Kuhl</surname> <given-names>PK</given-names></name></person-group>. <article-title>Linguistic effect on speech perception observed at the brainstem</article-title>. <source>Proc Natl Acad Sci USA.</source> (<year>2018</year>) <volume>115</volume>:<fpage>8716</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1800186115</pub-id><pub-id pub-id-type="pmid">30104356</pub-id></citation></ref>
<ref id="B83">
<label>83.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Intartaglia</surname> <given-names>B</given-names></name> <name><surname>White-Schwoch</surname> <given-names>T</given-names></name> <name><surname>Meunier</surname> <given-names>C</given-names></name> <name><surname>Roman</surname> <given-names>S</given-names></name> <name><surname>Kraus</surname> <given-names>N</given-names></name> <name><surname>Sch&#x000F6;n</surname> <given-names>D</given-names></name></person-group>. <article-title>Native language shapes automatic neural processing of speech</article-title>. <source>Neuropsychologia.</source> (<year>2016</year>) <volume>89</volume>:<fpage>57</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2016.05.033</pub-id><pub-id pub-id-type="pmid">27263123</pub-id></citation></ref>
<ref id="B84">
<label>84.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheng</surname> <given-names>X</given-names></name> <name><surname>Schafer</surname> <given-names>G</given-names></name> <name><surname>Riddell</surname> <given-names>PM</given-names></name></person-group>. <article-title>Immediate auditory repetition of words and nonwords: an ERP study of lexical and sublexical processing</article-title>. <source>PLoS ONE.</source> (<year>2014</year>) <volume>9</volume>:<fpage>e91988</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0091988</pub-id><pub-id pub-id-type="pmid">24642662</pub-id></citation></ref>
<ref id="B85">
<label>85.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>MacGregor</surname> <given-names>LJ</given-names></name> <name><surname>Pulverm&#x000FC;ller</surname> <given-names>F</given-names></name> <name><surname>van Casteren</surname> <given-names>M</given-names></name> <name><surname>Shtyrov</surname> <given-names>Y</given-names></name></person-group>. <article-title>Ultra-rapid access to words in the brain</article-title>. <source>Nat Commun.</source> (<year>2012</year>) <volume>3</volume>:<fpage>711</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms1715</pub-id><pub-id pub-id-type="pmid">22426232</pub-id></citation></ref>
<ref id="B86">
<label>86.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davis</surname> <given-names>MH</given-names></name> <name><surname>Johnsrude</surname> <given-names>IS</given-names></name></person-group>. <article-title>Hearing speech sounds: top-down influences on the interface between audition and speech perception</article-title>. <source>Hear Res.</source> (<year>2007</year>) <volume>229</volume>:<fpage>132</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1016/j.heares.2007.01.014</pub-id><pub-id pub-id-type="pmid">17317056</pub-id></citation></ref>
<ref id="B87">
<label>87.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garagnani</surname> <given-names>M</given-names></name></person-group>. <article-title>Effects of attention on what is known and what is not: MEG evidence for functionally discrete memory circuits</article-title>. <source>Front Hum Neurosci.</source> (<year>2009</year>) <volume>3</volume>:<fpage>10</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.09.010.2009</pub-id></citation></ref>
<ref id="B88">
<label>88.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meyer</surname> <given-names>TA</given-names></name> <name><surname>Frisch</surname> <given-names>SA</given-names></name> <name><surname>Pisoni</surname> <given-names>DB</given-names></name> <name><surname>Miyamoto</surname> <given-names>RT</given-names></name> <name><surname>Svirsky</surname> <given-names>MA</given-names></name></person-group>. <article-title>Modeling open-set spoken word recognition in postlingually deafened adults after cochlear implantation: some preliminary results with the neighborhood activation model</article-title>. <source>Otol Neurotol.</source> (<year>2003</year>) <volume>24</volume>:<fpage>612</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1097/00129492-200307000-00014</pub-id><pub-id pub-id-type="pmid">12851554</pub-id></citation></ref>
<ref id="B89">
<label>89.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lane</surname> <given-names>H</given-names></name> <name><surname>Denny</surname> <given-names>M</given-names></name> <name><surname>Guenther</surname> <given-names>FH</given-names></name> <name><surname>Hanson</surname> <given-names>HM</given-names></name> <name><surname>Marrone</surname> <given-names>N</given-names></name> <name><surname>Matthies</surname> <given-names>ML</given-names></name> <etal/></person-group>. <article-title>On the structure of phoneme categories in listeners with cochlear implants</article-title>. <source>J Speech Lang Hear Res.</source> (<year>2007</year>) <volume>50</volume>:<fpage>2</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2007/001)</pub-id><pub-id pub-id-type="pmid">17344544</pub-id></citation></ref>
<ref id="B90">
<label>90.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farris-Trimble</surname> <given-names>A</given-names></name> <name><surname>McMurray</surname> <given-names>B</given-names></name> <name><surname>Cigrand</surname> <given-names>N</given-names></name> <name><surname>Tomblin</surname> <given-names>JB</given-names></name></person-group>. <article-title>The process of spoken word recognition in the face of signal degradation</article-title>. <source>J Exp Psychol Hum Percept Perform.</source> (<year>2014</year>) <volume>40</volume>:<fpage>308</fpage>&#x02013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1037/a0034353</pub-id><pub-id pub-id-type="pmid">24041330</pub-id></citation></ref>
<ref id="B91">
<label>91.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kroczek</surname> <given-names>LOH</given-names></name> <name><surname>Gunter</surname> <given-names>TC</given-names></name> <name><surname>Rysop</surname> <given-names>AU</given-names></name> <name><surname>Friederici</surname> <given-names>AD</given-names></name> <name><surname>Hartwigsen</surname> <given-names>G</given-names></name></person-group>. <article-title>Contributions of left frontal and temporal cortex to sentence comprehension: evidence from simultaneous TMS-EEG</article-title>. <source>Cortex.</source> (<year>2019</year>) <volume>115</volume>:<fpage>86</fpage>&#x02013;<lpage>98</lpage>. <pub-id pub-id-type="doi">10.1016/j.cortex.2019.01.010</pub-id><pub-id pub-id-type="pmid">30776735</pub-id></citation></ref>
<ref id="B92">
<label>92.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ardila</surname> <given-names>A</given-names></name> <name><surname>Bernal</surname> <given-names>B</given-names></name> <name><surname>Rosselli</surname> <given-names>M</given-names></name></person-group>. <article-title>How localized are language brain areas? A review of Brodmann areas involvement in oral language</article-title>. <source>Arch Clin Neuropsychol.</source> (<year>2015</year>) <volume>31</volume>:<fpage>112</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1093/arclin/acv081</pub-id><pub-id pub-id-type="pmid">26663825</pub-id></citation></ref>
<ref id="B93">
<label>93.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davis</surname> <given-names>MH</given-names></name> <name><surname>Johnsrude</surname> <given-names>IS</given-names></name></person-group>. <article-title>Hierarchical processing in spoken language comprehension</article-title>. <source>J Neurosci.</source> (<year>2003</year>) <volume>23</volume>:<fpage>3423</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.23-08-03423.2003</pub-id><pub-id pub-id-type="pmid">12716950</pub-id></citation></ref>
<ref id="B94">
<label>94.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Obleser</surname> <given-names>J</given-names></name> <name><surname>Wise</surname> <given-names>RJS</given-names></name> <name><surname>Dresner</surname> <given-names>MA</given-names></name> <name><surname>Scott</surname> <given-names>SK</given-names></name></person-group>. <article-title>Functional integration across brain regions improves speech perception under adverse listening conditions</article-title>. <source>J Neurosci.</source> (<year>2007</year>) <volume>27</volume>:<fpage>2283</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4663-06.2007</pub-id><pub-id pub-id-type="pmid">17329425</pub-id></citation></ref>
<ref id="B95">
<label>95.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giraud</surname> <given-names>AL</given-names></name></person-group>. <article-title>Contributions of sensory input, auditory search and verbal comprehension to cortical activity during speech processing</article-title>. <source>Cereb Cortex.</source> (<year>2004</year>) <volume>14</volume>:<fpage>247</fpage>&#x02013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhg124</pub-id><pub-id pub-id-type="pmid">14754865</pub-id></citation></ref>
<ref id="B96">
<label>96.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Volz</surname> <given-names>KG</given-names></name> <name><surname>Schubotz</surname> <given-names>RI</given-names></name> <name><surname>von Cramon</surname> <given-names>DY</given-names></name></person-group>. <article-title>Variants of uncertainty in decision-making and their neural correlates</article-title>. <source>Brain Res Bull.</source> (<year>2005</year>) <volume>67</volume>:<fpage>403</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainresbull.2005.06.011</pub-id><pub-id pub-id-type="pmid">16216687</pub-id></citation></ref>
<ref id="B97">
<label>97.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jeon</surname> <given-names>H-A</given-names></name> <name><surname>Friederici</surname> <given-names>AD</given-names></name></person-group>. <article-title>Degree of automaticity and the prefrontal cortex</article-title>. <source>Trends Cogn Sci.</source> (<year>2015</year>) <volume>19</volume>:<fpage>244</fpage>&#x02013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2015.03.003</pub-id><pub-id pub-id-type="pmid">25843542</pub-id></citation></ref>
<ref id="B98">
<label>98.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ookawa</surname> <given-names>S</given-names></name> <name><surname>Enatsu</surname> <given-names>R</given-names></name> <name><surname>Kanno</surname> <given-names>A</given-names></name> <name><surname>Ochi</surname> <given-names>S</given-names></name> <name><surname>Akiyama</surname> <given-names>Y</given-names></name> <name><surname>Kobayashi</surname> <given-names>T</given-names></name> <etal/></person-group>. <article-title>Frontal fibers connecting the superior frontal gyrus to Broca area: a corticocortical evoked potential study</article-title>. <source>World Neurosurg.</source> (<year>2017</year>) <volume>107</volume>:<fpage>239</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1016/j.wneu.2017.07.166</pub-id><pub-id pub-id-type="pmid">28797973</pub-id></citation></ref>
<ref id="B99">
<label>99.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anders</surname> <given-names>R</given-names></name> <name><surname>Ri&#x000E8;s</surname> <given-names>S</given-names></name> <name><surname>Van Maanen</surname> <given-names>L</given-names></name> <name><surname>Alario</surname> <given-names>FX</given-names></name></person-group>. <article-title>Lesions to the left lateral prefrontal cortex impair decision threshold adjustment for lexical selection</article-title>. <source>Cogn Neuropsychol.</source> (<year>2017</year>) <volume>34</volume>:<fpage>1</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1080/02643294.2017.1282447</pub-id><pub-id pub-id-type="pmid">28632042</pub-id></citation></ref>
<ref id="B100">
<label>100.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schnur</surname> <given-names>TT</given-names></name></person-group>. <article-title>Word selection deficits and multiword speech</article-title>. <source>Cogn Neuropsychol.</source> (<year>2017</year>) <volume>34</volume>:<fpage>21</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1080/02643294.2017.1313215</pub-id><pub-id pub-id-type="pmid">28691606</pub-id></citation></ref>
<ref id="B101">
<label>101.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ries</surname> <given-names>SK</given-names></name> <name><surname>Dronkers</surname> <given-names>NF</given-names></name> <name><surname>Knight</surname> <given-names>RT</given-names></name></person-group>. <article-title>Choosing words: left hemisphere, right hemisphere, or both? Perspective on the lateralization of word retrieval</article-title>. <source>Ann NY Acad Sci.</source> (<year>2016</year>) <volume>1369</volume>:<fpage>111</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1111/nyas.12993</pub-id><pub-id pub-id-type="pmid">26766393</pub-id></citation></ref>
<ref id="B102">
<label>102.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dosenbach</surname> <given-names>NUF</given-names></name> <name><surname>Fair</surname> <given-names>DA</given-names></name> <name><surname>Miezin</surname> <given-names>FM</given-names></name> <name><surname>Cohen</surname> <given-names>AL</given-names></name> <name><surname>Wenger</surname> <given-names>KK</given-names></name> <name><surname>Dosenbach</surname> <given-names>RAT</given-names></name> <etal/></person-group>. <article-title>Distinct brain networks for adaptive and stable task control in humans</article-title>. <source>Proc Natl Acad Sci USA.</source> (<year>2007</year>) <volume>104</volume>:<fpage>11073</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0704320104</pub-id><pub-id pub-id-type="pmid">17576922</pub-id></citation></ref>
<ref id="B103">
<label>103.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000FC;ller</surname> <given-names>NG</given-names></name> <name><surname>Knight</surname> <given-names>RT</given-names></name></person-group>. <article-title>The functional neuroanatomy of working memory: contributions of human brain lesion studies</article-title>. <source>Neuroscience.</source> (<year>2006</year>) <volume>139</volume>:<fpage>51</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroscience.2005.09.018</pub-id><pub-id pub-id-type="pmid">16352402</pub-id></citation></ref>
<ref id="B104">
<label>104.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bubb</surname> <given-names>EJ</given-names></name> <name><surname>Metzler-Baddeley</surname> <given-names>C</given-names></name> <name><surname>Aggleton</surname> <given-names>JP</given-names></name></person-group>. <article-title>The cingulum bundle: anatomy, function, and dysfunction</article-title>. <source>Neurosci Biobehav Rev.</source> (<year>2018</year>) <volume>92</volume>:<fpage>104</fpage>&#x02013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2018.05.008</pub-id><pub-id pub-id-type="pmid">29753752</pub-id></citation></ref>
<ref id="B105">
<label>105.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hofmann</surname> <given-names>MJ</given-names></name> <name><surname>Tamm</surname> <given-names>S</given-names></name> <name><surname>Braun</surname> <given-names>MM</given-names></name> <name><surname>Dambacher</surname> <given-names>M</given-names></name> <name><surname>Hahne</surname> <given-names>A</given-names></name> <name><surname>Jacobs</surname> <given-names>AM</given-names></name></person-group>. <article-title>Conflict monitoring engages the mediofrontal cortex during nonword processing</article-title>. <source>Neuroreport.</source> (<year>2008</year>) <volume>19</volume>:<fpage>25</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1097/WNR.0b013e3282f3b134</pub-id><pub-id pub-id-type="pmid">18281887</pub-id></citation></ref>
<ref id="B106">
<label>106.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>XM</given-names></name> <name><surname>Jiao</surname> <given-names>Y</given-names></name> <name><surname>Tang</surname> <given-names>T-</given-names></name> <name><surname>Lu</surname> <given-names>CQ</given-names></name> <name><surname>Zhang</surname> <given-names>J</given-names></name> <name><surname>Salvi</surname> <given-names>R</given-names></name> <etal/></person-group>. <article-title>Altered spatial and temporal brain connectivity in the salience network of sensorineural hearing loss and tinnitus</article-title>. <source>Front Neurosci.</source> (<year>2019</year>) <volume>13</volume>:<fpage>246</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2019.00246</pub-id><pub-id pub-id-type="pmid">30941010</pub-id></citation></ref>
<ref id="B107">
<label>107.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kelly</surname> <given-names>AMC</given-names></name> <name><surname>Garavan</surname> <given-names>H</given-names></name></person-group>. <article-title>Human functional neuroimaging of brain changes associated with practice</article-title>. <source>Cereb Cortex.</source> (<year>2004</year>) <volume>15</volume>:<fpage>1089</fpage>&#x02013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhi005</pub-id><pub-id pub-id-type="pmid">15616134</pub-id></citation></ref>
<ref id="B108">
<label>108.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Obleser</surname> <given-names>J</given-names></name> <name><surname>Kotz</surname> <given-names>SA</given-names></name></person-group>. <article-title>Multiple brain signatures of integration in the comprehension of degraded speech</article-title>. <source>Neuroimage.</source> (<year>2011</year>) <volume>55</volume>:<fpage>713</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.12.020</pub-id><pub-id pub-id-type="pmid">21172443</pub-id></citation></ref>
<ref id="B109">
<label>109.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Christmann</surname> <given-names>CA</given-names></name> <name><surname>Berti</surname> <given-names>S</given-names></name> <name><surname>Steinbrink</surname> <given-names>C</given-names></name> <name><surname>Lachmann</surname> <given-names>T</given-names></name></person-group>. <article-title>Differences in sensory processing of German vowels and physically matched non-speech sounds as revealed by the mismatch negativity (MMN) of the human event-related brain potential (ERP)</article-title>. <source>Brain Lang.</source> (<year>2014</year>) <volume>136</volume>:<fpage>8</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2014.07.004</pub-id><pub-id pub-id-type="pmid">25108306</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This study was partly funded by Advanced Bionics AG, Staefa, Switzerland.</p>
</fn>
</fn-group>
</back>
</article>
