Poor speech intelligibility. Hearing impairment

In central hearing impairment caused by a previous infection or by ototoxic antibiotics, as well as in age-related hearing loss, speech intelligibility does not reach 100% even at considerable volume, and may even decrease as the volume increases. In the literature this is described as the phenomenon of accelerated loudness growth (recruitment) and is a sign of impaired sound perception.

The longer this process is left unaddressed, the more difficult and more expensive effective hearing aid fitting becomes. The hearing centres of the brain "forget" sounds, and the sounds no longer "add up" into understood speech. A longer adaptation to the hearing aid and to the new sound sensations is then required. Even with full correction of the hearing thresholds, the hearing aid does not always restore speech intelligibility completely: the patient continues to have problems perceiving speech even though he has begun to hear sounds better. More profound disorders require not only a high-quality hearing aid but also additional means of compensation: lip reading, television subtitles, choosing a convenient position when communicating, paying closer attention to the interlocutor, and reducing background sounds.

Modern hearing aids can be adjusted to limit the amplification of loud sounds, keep medium-level sounds at a comfortable volume, and give quiet sounds sufficient amplification. This signal-processing strategy is called WDRC (Wide Dynamic Range Compression); the compression ratio can be changed to obtain a more comfortable sound, and in this way high speech intelligibility is achieved.
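To illustrate the idea behind WDRC, here is a minimal Python sketch of an input/output compression rule; the kneepoint, compression ratio and gain values are arbitrary illustration numbers, not the settings of any particular hearing aid.

```python
def wdrc_gain_db(input_level_db, kneepoint_db=45.0, compression_ratio=2.0, base_gain_db=25.0):
    """Toy WDRC rule: below the kneepoint the gain is constant; above it,
    each extra dB of input adds only 1/compression_ratio dB of output,
    so quiet sounds are amplified strongly and loud sounds only a little."""
    if input_level_db <= kneepoint_db:
        return base_gain_db
    excess = input_level_db - kneepoint_db
    return base_gain_db - excess * (1.0 - 1.0 / compression_ratio)

for level in (30, 45, 60, 80):  # quiet, kneepoint, medium and loud inputs, dB SPL
    print(f"input {level} dB SPL -> output {level + wdrc_gain_db(level):.1f} dB SPL")
```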

Modern hearing aids have also introduced nonlinear frequency compression technology (SoundRecover), which makes audible sounds in the extended high-frequency range that would otherwise remain inaccessible. The method gradually compresses and shifts high frequencies downward, providing audibility while keeping the sound natural. During fitting it is also possible to set a nonlinear frequency compression ratio sufficient for high-quality, comfortable sound.
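The following sketch shows only the general idea of nonlinear frequency compression (frequencies above a cutoff are squeezed downward into the audible range); the cutoff and compression ratio are invented values and do not reproduce the actual SoundRecover algorithm.

```python
def compress_frequency(f_hz, cutoff_hz=2000.0, ratio=2.5):
    """Frequencies below the cutoff are left untouched; above the cutoff the
    distance from the cutoff is divided by the ratio, shifting high-frequency
    content down into the region where hearing is still usable."""
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz + (f_hz - cutoff_hz) / ratio

for f in (1000, 2000, 4000, 6000, 8000):
    print(f"{f} Hz -> {compress_frequency(f):.0f} Hz")
```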

Multi-channel devices also help to improve speech intelligibility by allowing different gains in different channels, so that the gain matches the different sensitivity losses at different frequencies. This gives higher speech intelligibility at a lower overall volume and therefore greater comfort with the hearing aid.
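As a rough sketch of per-channel gain, the example below applies the classical half-gain rule to a hypothetical audiogram; real fitting formulas (NAL, DSL and the like) are considerably more elaborate, so this is purely illustrative.

```python
# Hypothetical audiogram: hearing loss in dB HL at the centre frequency of each channel.
audiogram = {500: 20, 1000: 30, 2000: 45, 4000: 60}

def channel_gain_db(loss_db):
    """Half-gain rule, used here purely for illustration: each channel is
    amplified by roughly half the hearing loss at its centre frequency."""
    return 0.5 * loss_db

for freq_hz, loss_db in audiogram.items():
    print(f"{freq_hz} Hz: loss {loss_db} dB HL -> channel gain {channel_gain_db(loss_db):.0f} dB")
```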

Constant training is required: increasing the time the device is worn, getting used to new sounds, and improving sound selectivity and the emphasis on speech. Frequency selectivity is very important for understanding speech: as differentiation improves, so does speech understanding.

The ability to distinguish sounds in disorders of sound perception is also affected by temporal resolution. With perceptual (sensorineural) impairments the ability to separate sounds in time is reduced, which is why a person with such impairments asks the interlocutor to speak more slowly rather than more loudly. With time the speed of word processing increases; this, too, is an element of training.

The earlier high-quality hearing aid fitting is carried out, the greater the effect that can be achieved, the shorter the adaptation period, and the better speech intelligibility is restored.

Speech clarity and intelligibility

Speech comprehensibility is the main characteristic that determines the suitability of a path for speech transmission. It can be determined directly by a statistical method involving a large number of listeners and speakers. Its quantitative measure is speech intelligibility.

Speech intelligibility is the relative or percentage number of correctly received speech elements out of the total number transmitted over the path. Speech elements include syllables, sounds, words, phrases and digits; accordingly, one distinguishes syllabic, sound, word, semantic and digit intelligibility. There is a statistical relationship between them. In practice, syllabic, word and semantic intelligibility are mainly used.

To measure intelligibility, special tables of syllables have been compiled that take into account the frequency of their occurrence in Russian speech; they are called articulation tables. Intelligibility is measured in subjective statistical tests by a trained team of listeners with normal hearing and speech. The measurements can be carried out by various methods, for example by scoring or by determining the percentage of correctly received words.

The relationship between speech intelligibility and comprehensibility is shown in Table 16.1. Comprehensibility is assessed in four gradations:

1) excellent, if comprehension is complete, without any requests to repeat;

2) good, if listeners occasionally need to ask about rarely used words or individual names;

3) satisfactory, if listeners report that it is difficult to understand and requests to repeat are frequent;

4) maximum permissible, if the same material has to be repeated, individual words have to be spelled out letter by letter, and listening requires the full strain of attention.

Table 16.1

The reasons for the decrease in intelligibility are acoustic noise in the room, interference from reverberation and diffuse sound, and insufficient amplification of the signals from the primary sound source.

Sound distribution and sound reinforcement systems must provide the required speech intelligibility. For transmitting information programmes and for holding rallies and meetings, excellent speech intelligibility is required, which is ensured with 80% syllabic and 98% word intelligibility. For dispatcher communication, full comprehensibility of speech (satisfactory intelligibility) is obtained with 40-50% syllabic and 87-93% word intelligibility. Dispatcher systems are therefore designed for lower intelligibility values than general-purpose systems.

There is a relationship between speech intelligibility, the reception conditions and the characteristics of the transmission path; it was established using the formant theory developed by Fletcher and Collard.

Regions of energy concentration in particular parts of the frequency range are called formants. Their position depends both on the position of the sound within a word or phrase and on the individual characteristics of the speaker's articulatory apparatus. Each sound has several formants. The formants of speech sounds occupy the frequency range from 150 to 7000 Hz.

It has been agreed to divide this frequency range into 20 bands in each of which the probability of occurrence of formants is the same. These frequency bands are called bands of equal intelligibility; they have been defined for a number of languages, including Russian. It has been established that the probability of occurrence of formants obeys the rule of additivity. For a sufficiently large amount of speech material, the probability of a formant appearing in each band is 0.05.

Formants differ in intensity level: in voiced sounds they are higher than in voiceless ones. As the level of acoustic noise rises, the low-level formants are masked first, then those with higher levels. As a result of masking, the probability of perceiving formants decreases. The coefficient describing this reduction in the k-th band is called the perception, or intelligibility, coefficient k_f. In other words, in the k-th band the probability of receiving formants is

ΔA_k = 0.05 k_f. (16.1)

The formant perception coefficient k_f depends on the sensation level, which in turn is determined by the difference between the average spectral level of speech B_s in a band of equal intelligibility and the spectral level of noise and interference B_n in the same band:

E_f = B_s - B_n. (16.2)

The perception (intelligibility) coefficient can be determined from the graph in Fig. 16.1, which shows sensation levels E_f and the corresponding perception coefficients k_f.

For sensation levels of 0-18 dB, k_f can be determined approximately by the formula k_f = (E_f + 6)/30.

Figure 16.1. Integral distribution of speech levels.

In general, the perception coefficient is different for each band of equal intelligibility. The overall formant intelligibility in the speech frequency range is determined from

A = 0.05 (k_f1 + k_f2 + ... + k_f20). (16.3)
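A small sketch of formula (16.3): each of the 20 bands contributes 0.05 k_f. The k_f values are obtained here from the 0-18 dB approximation given above (in practice Fig. 16.1 would be used), and the sensation levels are invented example values.

```python
def kf_from_sensation_level(E_db):
    """Approximation k_f = (E_f + 6) / 30, valid for 0-18 dB sensation levels;
    clamped to [0, 1] here so the sketch also tolerates values outside that range."""
    return max(0.0, min(1.0, (E_db + 6.0) / 30.0))

# One invented sensation level E_f (dB) per band of equal intelligibility.
sensation_levels = [12, 14, 15, 16, 18, 18, 17, 16, 15, 14,
                    13, 12, 11, 10, 9, 8, 7, 6, 5, 4]

A = sum(0.05 * kf_from_sensation_level(E) for E in sensation_levels)  # formula (16.3)
print(f"Formant intelligibility A = {A:.2f}")
```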

Figure 16.2. Dependence of syllabic intelligibility on formant intelligibility.

The relationship between formant intelligibility and the other kinds of intelligibility has been found experimentally. The dependence for syllabic intelligibility is shown in Fig. 16.2. It can be seen that almost complete speech intelligibility (syllabic intelligibility of 80%) is obtained when only half of all formants are received (formant intelligibility of 0.5), which reflects the redundancy of speech and the brain's ability to reconstruct the missing elements.

Speech intelligibility in rooms provided with sound reinforcement is determined first of all for the points of the covered area with the minimum level of direct sound and the maximum level of acoustic noise. The spectral level of direct sound for a listener at such a point is

B_s = B_sm + Q, (16.4)

where B_sm is the spectral level of speech at the microphone (determined from reference tables),

B_sm = B'_s - 20 lg r_m,

where r_m is the distance from the speaker to the microphone; B'_s is the spectral level of speech at a distance of 1 m (determined from reference tables); Q is the gain index (path index), i.e. the difference between the sound level created at the listener's ear by the loudspeakers of the sound reinforcement system and the level created at the microphone input by the primary sound source.

These quantities are determined for each band of equal intelligibility. For the same bands, the spectral level of noise and interference at the listening position is

B_n = 10 lg(10^(0.1 B_a) + 10^(0.1 B_i)), (16.5)

where B_a is the spectral level of acoustic noise (determined from reference tables) and B_i is the spectral level of the speech interference (self-masking of speech),

B_i = B_s + ΔL_R + N_d + ΔL_T, (16.6)

where ΔL_R is the correction for interference from diffuse sound (R is the acoustic ratio at the design point); N_d is the diffraction correction, i.e. the correction for reflection from the listener's head (determined from reference tables); ΔL_T is the correction for reverberation interference (T is the reverberation time).

The level of acoustic noise does not depend on the path index, whereas the level of the speech interference increases with it, as follows from (16.4) and (16.6). Therefore, to raise the sensation level it is advisable to increase the path index. However, once the condition

B_i = B_a + 6 (16.7)

is reached, a further increase of the path index is not worthwhile, since the sensation level can then grow by no more than about 1 dB in the limit. This condition, taken together with (16.4), (16.6) and (16.7), determines the path index, which is called rational. It is determined mainly by the maximum value of the acoustic ratio R at the design point and by the reverberation time.

With rational amplification it follows from (16.5) that

B_n = B_i + 1, (16.9)

i.e. the contribution of the acoustic noise B_a to the overall level of noise and interference is insignificant.
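A quick numerical check of the "+1 dB" statement: when the speech-interference level exceeds the acoustic-noise level by 6 dB (condition (16.7)), summing the two on an energy basis raises the total by only about 1 dB.

```python
import math

B_i = 60.0        # speech-interference level, dB (example value)
B_a = B_i - 6.0   # acoustic noise 6 dB lower, i.e. condition (16.7)

# Energy summation of the two levels expressed in dB
B_n = 10 * math.log10(10 ** (0.1 * B_i) + 10 ** (0.1 * B_a))
print(f"B_n = {B_n:.2f} dB, i.e. about B_i + {B_n - B_i:.2f} dB")
```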

The resulting expressions make it possible to determine speech intelligibility and comprehensibility. Using formulas (16.4), (16.6) and (16.9), the levels of speech, noise and interference are found, and then formula (16.2) gives the formant sensation level E_f for each band of equal intelligibility. The graph in Fig. 16.1 gives the perception coefficients k_f corresponding to the obtained values of E_f. The overall formant intelligibility A in the speech frequency range is found from expression (16.3), the corresponding syllabic intelligibility is read from Fig. 16.2, and speech comprehensibility is determined from Table 16.1.

Methods of improving speech intelligibility

    Reducing the interference level (in practice this cannot always be achieved; instead, one tries to raise the speech level at the listener).

    Increasing the sound pressure level of direct sound at the listener: bringing the microphone closer to the sound source, raising the level of the speaker's voice, increasing the path index.

    Compressing the dynamic range D of the speech signal: raising the sound pressure levels of weak sounds while keeping the maximum levels unchanged.

The limiting case of compression of D is amplitude limiting, or clipping. The speech signal then turns into a sequence of pulses of constant amplitude with varying intervals between zero crossings (a "telegraph" mode of operation). As a result, all speech sounds are received at the same (maximum) level. Sound quality deteriorates, but intelligibility rises sharply, because the weak sounds of unclipped speech, which would otherwise be masked by interference, now lie above the interference level.
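A small sketch contrasting dynamic-range compression with hard (infinite) clipping on a synthetic speech-like signal; the thresholds and the test waveform are arbitrary.

```python
import numpy as np

def compress(signal, threshold=0.3, ratio=4.0):
    """Soft compression: samples above the threshold are pulled down,
    weak samples keep their level, so the dynamic range D shrinks."""
    out = signal.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
    return out

def clip(signal, amplitude=0.1):
    """Limiting case: every sample is replaced by +/- amplitude, leaving only
    the zero-crossing pattern (the 'telegraph' mode described in the text)."""
    return np.sign(signal) * amplitude

t = np.linspace(0.0, 0.02, 960, endpoint=False)   # 20 ms at 48 kHz
speech_like = 0.8 * np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 2500 * t)
print("peak after compression:", round(float(np.max(np.abs(compress(speech_like)))), 3))
print("peak after clipping:   ", round(float(np.max(np.abs(clip(speech_like)))), 3))
```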

    The use of vocoders.

A vocoder is a device whose transmitting part extracts from the speech signal the parameters that carry the information content of speech: the spectral envelopes of the speech sounds and the parameters of the fundamental tone, i.e. features of the speech sounds that change slowly in time.

The receiving part of the vocoder contains a complex filter that simulates the acoustic system of the vocal tract for voiced and for unvoiced speech sounds. The level of the synthesized sounds and the filter parameters are controlled by the signals extracted at the transmitting end of the vocoder, so that the spectral envelope of the speech signal is restored. The quality and intelligibility of the reconstructed signal are quite high.
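As a toy illustration of the analysis stage of a channel vocoder, the sketch below splits a signal into a few bands and extracts the slowly varying envelope of each; the band edges and the test signal are arbitrary, and the synthesis stage (excitation by a fundamental-tone or noise source) is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocoder_analysis(signal, fs, bands=((300, 600), (600, 1200), (1200, 2400), (2400, 4800))):
    """Band-pass filterbank plus envelope extraction: the per-band envelopes are
    the slowly changing 'spectral envelope' parameters the text refers to."""
    envelopes = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelopes.append(np.abs(hilbert(band)))   # envelope of this band
    return np.stack(envelopes)

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
test = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 5 * t))  # amplitude-modulated tone
print(vocoder_analysis(test, fs).shape)           # (number of bands, number of samples)
```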

    Increasing the average signal power, and hence intelligibility, by splitting the signal into an envelope and an instantaneous phase and processing them separately.
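A sketch of the decomposition mentioned in the last item: the Hilbert transform splits a signal into an envelope and an instantaneous phase. The "special processing" is not specified in the text, so the example merely boosts the envelope while leaving the phase untouched, which raises the average power without raising the peak.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) * (0.3 + 0.7 * np.abs(np.sin(2 * np.pi * 10 * t)))

analytic = hilbert(x)
envelope = np.abs(analytic)             # slowly varying amplitude
phase = np.unwrap(np.angle(analytic))   # instantaneous phase

# Illustrative processing: raise the envelope but cap it at its former maximum,
# then recombine with the untouched phase.
processed = np.minimum(1.5 * envelope, envelope.max()) * np.cos(phase)
print("mean power before:", round(float(np.mean(x ** 2)), 4),
      "after:", round(float(np.mean(processed ** 2)), 4))
```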

Speech intelligibility calculation

    1. We calculate the spectral level of speech corrected for the distance to the microphone,

B_sm = B'_s - 20 lg r_m, (16.10)

where B'_s is the spectral level of speech at a distance of 1 m (determined from reference tables).

2. Using the given spectrum and level of acoustic noise, we find its spectral levels B_a (determined from reference tables).

3. We determine the total correction ΣΔL.

4. We determine the actual path index Q.

5. All data are entered into a table.

6. We calculate the spectral level of speech at the listener

B_s = B_sm + Q. (16.11)

7. We calculate the spectral level of the speech interference

B_i = B_s + ΣΔL. (16.12)

8. We sum the spectral level of the interference with the spectral level of the acoustic noise, as in (16.5), and obtain B_n.

9. We subtract the spectral level of the total noise and interference from the spectral level of speech and obtain the formant sensation level

E_f = B_s - B_n. (16.14)

10. From the sensation level found, we determine the intelligibility coefficient k_f:

k_f = (E_f + 6)/30 for 0 ≤ E_f ≤ 18 dB, (16.15)

or we take its exact values from the table. All calculated values are entered into the summary table.

11. We sum the obtained intelligibility coefficients and find the formant intelligibility

A = 0.05 (k_f1 + k_f2 + ... + k_f20). (16.16)

From the formant intelligibility we determine the syllabic intelligibility S, the word intelligibility W and the comprehensibility of speech.
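To show how steps 1-11 fit together numerically, here is a compact sketch for five bands; all spectral levels, corrections and the stand-in for the k_f curve are invented, since the real values come from the reference tables and Fig. 16.1.

```python
import math

def kf(E_db):
    """Stand-in for the k_f(E_f) curve of Fig. 16.1: the 0-18 dB approximation
    from (16.15), clamped to [0, 1] outside that range."""
    return max(0.0, min(1.0, (E_db + 6.0) / 30.0))

# Invented inputs for five bands of equal intelligibility.
B_speech_1m = [68, 66, 63, 60, 57]      # B'_s: speech spectral level at 1 m, dB
B_acoustic  = [45, 42, 40, 38, 36]      # B_a: acoustic-noise spectral level, dB
total_corr  = [-8, -10, -12, -14, -16]  # step 3: total correction (diffuse sound, diffraction, reverberation)
r_m, Q = 0.5, 10.0                      # microphone distance, m; path index, dB

A = 0.0
for B1, Ba, dL in zip(B_speech_1m, B_acoustic, total_corr):
    B_mic      = B1 - 20 * math.log10(r_m)          # step 1, (16.10)
    B_listener = B_mic + Q                          # step 6, (16.11)
    B_interf   = B_listener + dL                    # step 7, (16.12)
    B_noise    = 10 * math.log10(10 ** (0.1 * Ba) + 10 ** (0.1 * B_interf))  # step 8
    E_f        = B_listener - B_noise               # step 9, (16.14)
    A         += 0.05 * kf(E_f)                     # steps 10-11
print(f"Formant intelligibility over these five bands: A = {A:.2f}")
```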

Analysis of the intelligibility coefficients shows that the lower frequencies are transmitted much worse than the upper ones. Since there is a margin in the limiting path index at these frequencies, the gain there can be raised by approximately 4 dB. This will hardly change intelligibility, but it will improve sound quality.

For a rough determination of speech intelligibility, a shortened calculation method can be used. If the spectra of speech and noise do not vary sharply with frequency, there is no need to perform the calculation for all bands of equal intelligibility; it is sufficient to calculate at the octave frequencies.

The octave 173-350 Hz corresponds to one band of equal intelligibility (200-350 Hz);
the octave 350-700 Hz covers three bands (330-465 Hz);
the octave 700-1400 Hz covers four bands (750-900 Hz);
the octave 1400-2800 Hz covers six bands (1410-2840 Hz);
the octave 2800-5600 Hz covers five bands (2840-5640 Hz);
the range 5600-7000 Hz corresponds to the last band of equal intelligibility (5640-7000 Hz).

Taking this into account, the formant intelligibility is determined by the formula

A = 0.05 (k_f1 + 3 k_f2 + 4 k_f3 + 6 k_f4 + 5 k_f5 + k_f6),

where k_f1 ... k_f6 are the intelligibility coefficients at the octave frequencies.
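As a quick check of the shortened method, the six octave coefficients can be weighted by the number of bands of equal intelligibility each octave covers (1, 3, 4, 6, 5 and 1, which add up to 20); the k_f values below are placeholders.

```python
band_counts = [1, 3, 4, 6, 5, 1]              # bands of equal intelligibility per octave
kf_octaves  = [0.3, 0.5, 0.7, 0.8, 0.6, 0.4]  # k_f1 ... k_f6, example values

A = 0.05 * sum(n * k for n, k in zip(band_counts, kf_octaves))
print(f"A = {A:.2f}")
```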

According to data obtained by Hess, speech audiometry shows that in some people with hearing loss, speech discrimination (speech intelligibility) is impaired considerably more than tonal hearing. He called this disorder of phonemic hearing "phonemic regression". It occurs more often in old age, with neuritis or senile hearing loss. The pure-tone audiogram shows a slight decrease in hearing that gradually increases towards the high frequencies, while phonemic hearing is disproportionately sharply reduced.

Such patients often also show symptoms of vascular disorders. According to the author, phonemic regression precedes more serious disturbances of mental activity and is caused by a partial disruption of the blood supply to the brain. According to Carhart, phonemic regression is a sign of central deafness.

An early sign of cortical hearing loss is impaired perception of intelligible speech under conditions of mental stress and tension, as well as in the presence of noise interference or minor defects in the speaker's diction. We have established that in its time of appearance it often precedes a noticeable decrease in the perception of pure tones. In many of these patients the only change was a decrease of 10-15 dB at C4096.

Later the disorder is noted in ordinary situations as well. Hearing examination reveals a dissociation between speech and tonal hearing and an increased fatiguability of the organ of hearing. Finally, at a late stage, as inhibition spreads through the auditory analyzer to the subcortical nuclei, tonal hearing deteriorates as well.

Articulate, clear perception and understanding of speech is the highest function of the cortical end of the auditory analyzer. It is carried out on the basis of temporary connections developed as a person masters speech, by singling out the signal features of speech and inhibiting other, unimportant features. Even a small additional distortion introduced into speech sounds by radio equipment or the telephone increases the demands on analysis and synthesis, which are difficult to meet when the functional activity of the cerebral cortex is disturbed. This explains the early appearance of difficulties when listening to the radio, talking on the telephone, and so on.

Deterioration of speech intelligibility is detected first of all for monosyllabic words, while good intelligibility of two-syllable words may still be preserved. Thus, if a patient understands monosyllabic words at a distance of no more than 1 m, he may hear two-syllable words of roughly the same sound composition at a distance of 5-6 m. Patients complain to the doctor that they have difficulty hearing speech and often ask for repetition, which is confirmed by testing with spoken and whispered speech, while the tone audiogram may be completely normal. We have observed such a sharp dissociation between tonal and speech hearing in several patients with hypertension.

It should be emphasized that hypertension does not usually produce the deep disturbance of the analysis and synthesis of speech seen in patients with damage to the cortex of the left temporal lobe of the brain. If the words are spoken loudly enough, analysis proceeds normally. According to some reports, hypertension is accompanied by hyperacusis, an increased excitability of the hearing organ to high-pitched sounds. In particular, this shows in the fact that patients react to loud sounds with a rise in blood pressure, whereas when they are placed in a quiet, soundproofed room the pressure falls.

Impaired speech intelligibility, especially against background noise, is one of the main complaints of hearing aid users. Developers and manufacturers of modern hearing aids are well aware of this and make every effort to solve this difficult problem. Almost all modern digital hearing aids are equipped with special so-called noise reduction systems, which reduce the effect of extraneous sounds on the speech signal. Unfortunately, however, most of these systems work by reducing the gain of the hearing aid in certain frequency regions; in other words, along with suppressing the noise the device partly suppresses the speech as well. As a result, speech intelligibility remains unsatisfactory.

Widex audiologists once again offer a non-standard solution to this problem: the Speech Enhancer system fitted in the modern Widex devices of the Mind, Inteo and Passion series. The system uses a complex, fully automatic algorithm based on analysis of the user's hearing impairment, the nature of the interlocutor's speech and the characteristics of the background noise. Thanks to this, the hearing aid optimizes the operation of all its systems so that in any sound environment, even the noisiest, the gain for speech is always significantly higher than the gain for the noise signal. In this way Widex provides the highest possible speech intelligibility in every listening situation.

Another unique Widex development is a system based on so-called linear frequency transposition, called the Audibility Range Extender. Sometimes even the most advanced and powerful hearing aid cannot fully compensate for the user's hearing impairment; this concerns cases of severe hearing loss in the high-frequency region. A few years ago we would have told such patients: "Unfortunately, you will not be able to hear high-frequency sounds." High-frequency sounds are not only birdsong, the ringing of an alarm clock or the melody of a flute; they also include the high-frequency components of human speech. Without these sounds adequate speech intelligibility cannot be achieved, and, what is especially important, the correct and complete development of a child's speech is impossible.

The Audibility Range Extender transfers some of the most significant sound signals from the high-frequency region to a lower-lying region, precisely the region in which sound sensitivity is preserved. Thus even a person with a complete lack of hearing in the high-frequency region begins to hear these sounds again. Of course they sound different from the original, but they remain similar to the original signal. The system has undergone long-term clinical trials with children and adults in many audiological laboratories, including in the USA and Australia, and the results indicate its high effectiveness.

It should be remembered, however, that each use of the Audibility Range Extender requires an individual approach suited to the patient's specific hearing impairment, as well as a long period of user adaptation (in some cases 2-3 months). Until recently this system was available only in the Widex Inteo series. We are pleased to announce that with the release of the new series of hearing aids, the Mind-440 and Mind-330, the system has become more accessible to our hearing aid users.

Audiologist of the Russian representative office of Widex
Bronyakin Stanislav Yurievich

Do you like going to the cinema and the theatre? Do you visit restaurants and cafes? Do you often find yourself in noisy company? In which situations is it hardest for you to hear your interlocutor? With a hearing aid your life will regain its former brightness, and you will again be able to communicate and enjoy the sounds that hearing loss had taken away. It is amazing!

With reduced or lost hearing, a person ceases to hear many previously familiar sounds: the ticking of a clock, the rustle of clothes, one's own footsteps, the noise of appliances and cars, birdsong. Difficulties arise in conversations with two or more interlocutors; there is a feeling that others are speaking quietly or whispering, and with it irritation and a desire to withdraw and avoid communicating with relatives and friends. There is an urge to turn up the volume of the radio or TV...

A hearing aid will bring back this perception of the world around you; you will begin to hear these sounds again.

It should be noted that you need to get used to the device. The adaptation period takes on average 6-8 weeks with regular wearing of more than 4 hours a day. Remember, the device must be worn every day! Start with a few hours a day and then gradually increase the wearing time.

The shape and type of hearing aid depend on your individual characteristics: the shape of the auricle, the ear canal, and so on.

If hearing loss is diagnosed in both ears, two hearing aids are indicated. Speech intelligibility with two devices is about 30% better, less amplification is required, and the load on the auditory analyzer is therefore lower.

What does an "automatic" hearing aid mean?

Automatic program switching (gain adjustment for different environments), setting of the directionality parameters, noise reduction and other necessary functions take place without pressing a program button. The device itself identifies the acoustic conditions and adapts to them.

Device with increased speech intelligibility

To make speech easier to perceive, manufacturers have developed directional microphone technologies. Depending on where the noise source is located, the device changes the sensitivity pattern of its microphones, which removes excess ambient noise and lets you hear your interlocutor clearly.

Two devices are better than one

We have two ears, two eyes, two hands... Nature did not arrange this without reason. First, speech intelligibility with two devices is about a third better than with one. Second, two devices make it possible to determine the location of a sound source accurately. Third, with binaural fitting the load is distributed evenly, as with normal hearing, and, most importantly, over time the natural ability to perceive an interlocutor against very loud background noise, at a railway station, in a shop, in a crowd, and so on, can be restored.

Hearing loss is much more noticeable than having a hearing aid

Modern hearing aids are small and ergonomic. They are either behind the ear or in-ear. You can choose a device with any appearance and body color.

Modern technologies have made it possible to achieve significant results. Modern hearing aids are attractive in appearance, almost invisible and very precisely adjusted to each hearing loss.

Hearing aid lifespan

The average service life of behind-the-ear hearing aid models is 5-7 years. To extend the life of the device, we offer some useful care tips.

How to handle the device

The most important thing is to keep the device away from water, other liquids and aerosols (except for devices with moisture protection). Handle it carefully and do not drop it on the floor or other hard surfaces. The device should not be exposed to heat, direct sunlight, or the strong electromagnetic fields created by MRI, X-ray or fluorographic equipment. Keep your hearing aid away from small children and pets.

How to care for the device

It is very important to keep your device clean. Wipe it with a dry, soft cloth every day after use. Remove earwax and other contamination with the special cleaning filament or brush. To reduce the risk of moisture damage, place the device in a container with a desiccant overnight. You can also use a special care kit.

How to replace the battery in a hearing aid

Hearing aids are powered by zinc-air batteries of various sizes (10, 13, 312 and 675) with a voltage of 1.4 V. The batteries are usually supplied in a blister pack of 6, each covered with a protective film that extends shelf life. It is important to know that after removing a battery from the pack you should peel off the protective film and leave the battery in the open air for 3-4 minutes; this is quite enough time for the chemical processes in the battery to activate.