
Advanced Signal Processing Part 2 mp4

  • 0:00 - 2:57
    Advanced Signal Processing Part 2 Lecture 3/25/20
  • 2:57 - 3:07
    Dr. Jodi Baxter >>Okay, so what is the purpose of directional microphones or directional technology?
  • 3:07 - 3:16
    So directional technology was developed in the 20s and 30s and what I remember of this -
  • 3:16 - 3:17
    - unfortunately I did not write it down -
  • 3:17 - 3:21
    -was that it was actually initially developed for places like stadiums,
  • 3:21 - 3:28
    for football games, basketball games in order to have audibility
  • 3:28 - 3:31
    for what was going on on the court or on the field,
  • 3:31 - 3:35
    and not have what was happening in the crowd
  • 3:35 - 3:40
    or the stadium interfering as much with that sound.
  • 3:40 - 3:48
    The first hearing aid that was directional came out from Oticon in 1972, and for -
  • 3:48 - 3:52
    - that hearing aid in the 80s,
  • 3:52 - 3:57
    it was that you were purchasing a directional hearing aid.
  • 3:57 - 4:02
    And so that was implemented in the way of having that single microphone with two ports,
  • 4:02 - 4:08
    which we had talked about back in the hearing aid components lecture.
  • 4:08 - 4:12
    So it was a very small number of hearing aids
  • 4:12 - 4:14
    that were sold that were directional hearing
  • 4:14 - 4:19
    aids, and it really stayed that way and even declined throughout the 80s.
  • 4:19 - 4:25
    So there wasn't a lot of availability or options in analog hearing aids,
  • 4:25 - 4:31
    and it wasn't until the early 90s that the option became available to switch
  • 4:31 - 4:41
    manually from an omnidirectional hearing aid or hearing aid mode program to a directional mode.
  • 4:41 - 4:43
    And this was again still pretty limited in terms
  • 4:43 - 4:48
    of what the size of that directional array was,
  • 4:48 - 4:50
    and it was fixed and it was using,
  • 4:50 - 4:54
    again, a manual toggle between the two modes.
  • 4:54 - 4:59
    So Phonak actually was the first manufacturer to come out with the
  • 4:59 - 5:05
    concept of a twin microphone or having two omnidirectional microphones and
  • 5:05 - 5:10
    since then, we've been really able to run with that technology.
  • 5:10 - 5:18
    So I would like to encourage you to take a couple of minutes to go back
  • 5:18 - 5:21
    to the hearing aid components lecture and review
  • 5:21 - 5:25
    the two types of directional microphone technology that we discussed,
  • 5:25 - 5:29
    which was the single microphone that had two different ports,
  • 5:29 - 5:33
    which I believe is not used at this point anymore.
  • 5:33 - 5:35
    And the multi microphone system,
  • 5:35 - 5:37
    meaning that there are multiple,
  • 5:37 - 5:41
    typically two omnidirectional microphones per hearing aid.
  • 5:41 - 5:46
    And then now, because we have this binaural communication,
  • 5:46 - 5:49
    they're really calling it a microphone array or a dual
  • 5:49 - 5:56
    microphone, so meaning the communication between those microphones
  • 5:56 - 6:01
    provides the hearing aid with information on where
  • 6:01 - 6:07
    it should be having maximal sensitivity to sound.
  • 6:07 - 6:20
    So when we're talking about a polar plot or a polar sensitivity pattern,
  • 6:20 - 6:23
    we're really looking at the three-dimensional -
  • 6:23 - 6:25
    space surrounding the
  • 6:25 - 6:31
    microphone and where it is most sensitive to sound.
  • 6:31 - 6:36
    Again, if you take a few minutes to go back to the hearing aid components lecture and review that,
  • 6:36 - 6:41
    that's really kind of determined by those two
  • 6:41 - 6:43
    design parameters that we initially talked about.
  • 6:43 - 6:49
    So the external delay, meaning the port spacing between the two microphones,
  • 6:49 - 6:56
    which is typically between 4 and 12 millimeters,
  • 6:56 - 6:59
    and then that internal delay which,
  • 6:59 - 7:04
    at this point, is typically done mathematically or electrically.
  • 7:04 - 7:13
    So a polar plot is a plot of the hearing aid output
  • 7:13 - 7:22
    as a function of the angle of the sound source in the horizontal plane.
  • 7:22 - 7:26
    So typically the way polar plots are developed
  • 7:26 - 7:37
    is by having a KEMAR or an acoustic mannequin in the center of a speaker array,
  • 7:37 - 7:40
    a 360-degree array of speakers,
  • 7:40 - 7:44
    and then plotting the output of the hearing aid,
  • 7:44 - 7:49
    depending on where the signal of interest or sound source is,
  • 7:49 - 7:52
    what speaker it's coming from.
  • 7:52 - 7:56
    It's also often referred to as the directional sensitivity pattern.
  • 7:56 - 8:02
    A vocabulary word you need to know is the null
  • 8:02 - 8:08
    point which is the point of greatest attenuation or the least amount of sensitivity.
  • 8:08 - 8:12
    So if we look at Figure 10.27 here,
  • 8:12 - 8:19
    we're looking at - - this kind of center area is where the acoustic mannequin is,
  • 8:19 - 8:24
    and then this is the speaker array,
  • 8:24 - 8:29
    and you can see the dotted is the omnidirectional BTE.
  • 8:29 - 8:33
    Remember we talked about, it's really not truly omnidirectional if it's
  • 8:33 - 8:38
    mounted on someone's head because of the head shadow effect.
  • 8:38 - 8:45
    So my guess by looking at this is that the hearing aid is mounted on the right ear,
  • 8:45 - 8:50
    because we have greatest sensitivity here and a little bit less on this side.
  • 8:50 - 8:56
    And then we have the directional sensitivity pattern here,
  • 8:56 - 9:00
    so greatest sensitivity in this area,
  • 9:00 - 9:05
    and then a null point here and less sensitivity to this side of the head.
  • 9:05 - 9:13
    And then what's important to also understand is that this varies by frequency as well.
  • 9:13 - 9:26
    So remember that directional microphones operate by sampling at multiple locations,
  • 9:26 - 9:32
    and comparing the sound sample at those locations.
  • 9:32 - 9:38
    So remember the external delay is simply the spacing between the two microphones,
  • 9:38 - 9:48
    or the microphone ports. We calculate the external delay by dividing the spacing itself -
  • 9:48 - 9:52
    - so the physical distance - - by the speed of sound.
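The calculation just described can be sketched in a couple of lines. This is my own illustration: the 4-12 mm spacings are the typical range mentioned above, and the speed-of-sound value is the standard room-temperature figure, not manufacturer data.

```python
# External delay = port spacing / speed of sound (a sketch; spacings are the
# lecture's typical 4-12 mm range, speed of sound is ~343 m/s at 20 C).
SPEED_OF_SOUND_M_S = 343.0

def external_delay_us(port_spacing_mm: float) -> float:
    """External delay in microseconds for a given port spacing."""
    return (port_spacing_mm / 1000.0) / SPEED_OF_SOUND_M_S * 1e6

for spacing_mm in (4, 8, 12):
    print(f"{spacing_mm} mm -> {external_delay_us(spacing_mm):.1f} us")
```

So the external delay for real port spacings is only on the order of tens of microseconds, which is part of why the matching internal delay is now generated mathematically or electrically rather than acoustically.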
  • 9:52 - 9:57
    The internal delay, back in these original directional microphones,
  • 9:57 - 9:59
    where there was one microphone,
  • 9:59 - 10:09
    two ports- - a damper was put into the port of the rear microphone or the back microphone.
  • 10:09 - 10:13
    So this was typically like a wire mesh acoustic damper.
  • 10:13 - 10:16
    And the purpose was to introduce a time delay
  • 10:16 - 10:24
    by slowing down the sound as it arrived to the rear microphone.
  • 10:24 - 10:27
    So if you think about it, sound coming from behind the hearing aid
  • 10:27 - 10:32
    user would arrive to this microphone first and then travel to the front microphone.
  • 10:32 - 10:39
    By introducing the damper here and slowing down that sound,
  • 10:39 - 10:44
    the goal is that the sound that arrives to the
  • 10:44 - 10:47
    rear port and goes to the underside or bottom of the diaphragm,
  • 10:47 - 10:51
    would arrive at the same time as the sound arriving to the front microphone,
  • 10:51 - 10:57
    which is sent to the top or the upper side of the diaphragm.
  • 10:57 - 11:00
    And if they are arriving at the same time,
  • 11:00 - 11:06
    then there's essentially no net pressure on the diaphragm and that sound is canceling.
  • 11:06 - 11:12
    It's canceled out. So the more exactly matched the phase is,
  • 11:12 - 11:16
    the greater the attenuation is.
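The cancellation described here - - delaying the rear signal so a rear-arriving sound reaches both sides of the diaphragm at the same time - - can be simulated with a toy delay-and-subtract model. This is my own sketch with assumed values (12 mm spacing, 1 kHz tone), not any manufacturer's algorithm.

```python
import math

# Toy delay-and-subtract sketch: sound from behind reaches the rear port
# first; delaying the rear signal by the external delay before subtracting
# lines the two copies up so they cancel. All values are assumptions.
SPEED_OF_SOUND = 343.0                  # m/s
PORT_SPACING = 0.012                    # 12 mm port spacing
T_EXT = PORT_SPACING / SPEED_OF_SOUND   # external (acoustic) delay
FREQ = 1000.0                           # 1 kHz test tone

def mic_output(t: float, internal_delay: float) -> float:
    front = math.sin(2 * math.pi * FREQ * (t - T_EXT))  # arrives T_EXT later
    rear = math.sin(2 * math.pi * FREQ * (t - internal_delay))
    return front - rear

# Matched internal delay: the rear-arriving tone cancels (residual near zero).
residual = max(abs(mic_output(i / 48000, T_EXT)) for i in range(480))
# No internal delay: the copies are out of phase and do not cancel.
mismatch = max(abs(mic_output(i / 48000, 0.0)) for i in range(480))
print(f"matched: {residual:.2e}, unmatched: {mismatch:.2f}")
```

The closer the internal delay matches the external delay, the more exactly the phases line up and the greater the attenuation, just as the lecture states.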
  • 11:16 - 11:21
    Now that manipulation or that relationship between the internal delay
  • 11:21 - 11:29
    and the external delay is primarily done mathematically or electronically.
  • 11:29 - 11:34
    And that is what determines what a polar plot looks like.
  • 11:34 - 11:41
    So again, omnidirectional means that the microphone is equally sensitive to
  • 11:41 - 11:45
    sounds from all directions.
  • 11:45 - 11:50
    Again when we're looking at one that's a perfect circle like this,
  • 11:50 - 11:52
    it's suspended in free space,
  • 11:52 - 11:55
    so it's not taking into account the hearing aid user wearing the
  • 11:55 - 12:01
    hearing aid. So again, our head shadow effect.
  • 12:01 - 12:05
    This is good when it's a
  • 12:05 - 12:10
    quiet environment, because it gives the patient access to all sounds in the environment.
  • 12:10 - 12:17
    And then this is an example of a directionality pattern,
  • 12:17 - 12:21
    a cardioid pattern is what it's called because it looks like a little heart.
  • 12:21 - 12:24
    When you're looking at this type of polar plot,
  • 12:24 - 12:33
    the hearing aid microphone is sensitive to sounds in front of the hearing aid user,
  • 12:33 - 12:36
    so that's gonna be the zero degree here,
  • 12:36 - 12:41
    and there's a large null point or a point where it's not as sensitive to
  • 12:41 - 12:44
    sounds behind the hearing aid user.
  • 12:44 - 12:45
    The other thing that's important to
  • 12:45 - 12:50
    understand and we'll see here in a minute is that these polar plots are
  • 12:50 - 12:59
    also frequency-dependent. So this is a series of other versions of cardioid -
  • 12:59 - 13:04
    - or polar plot - - you can ignore this one.
  • 13:04 - 13:07
    I actually pulled this from something that was not hearing aid
  • 13:07 - 13:11
    related just so that - - I thought it was the best image I could find.
  • 13:11 - 13:17
    So we also have a - - let's see,
  • 13:17 - 13:28
    supercardioid here, so again more sensitive to sounds behind the user than a plain cardioid,
  • 13:28 - 13:39
    but narrower sensitivity to sounds in front compared to a regular cardioid.
  • 13:39 - 13:43
    Same thing with a hyper-cardioid,
  • 13:43 - 13:46
    so even narrower or more directional in the front,
  • 13:46 - 13:49
    as you can see - - although it's not as obvious.
  • 13:49 - 13:55
    But again, greater sensitivity to sounds in the rear and these kind of null points
  • 13:55 - 14:04
    on the sides. A figure-8, or bidirectional, pattern is equally sensitive to the front and back,
  • 14:04 - 14:07
    but not as sensitive to sounds to the side.
  • 14:07 - 14:17
    So these are just options of where the sensitivity and null points are.
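For reference, the classic first-order patterns just listed can all be generated from one textbook formula, sensitivity(theta) = a + (1 - a)cos(theta). This parameterization is something I'm adding for illustration; it is not from the lecture slides.

```python
import math

# First-order polar patterns: sensitivity(theta) = a + (1 - a) * cos(theta).
# The parameter a below is the standard textbook value for each pattern.
PATTERNS = {
    "omnidirectional": 1.0,   # equal sensitivity in all directions
    "cardioid": 0.5,          # single null at 180 degrees (behind)
    "supercardioid": 0.366,   # narrower front lobe, small rear lobe
    "hypercardioid": 0.25,    # even narrower front, larger rear lobe
    "figure-8": 0.0,          # nulls at 90 and 270 degrees (the sides)
}

def sensitivity(pattern: str, theta_deg: float) -> float:
    a = PATTERNS[pattern]
    return a + (1 - a) * math.cos(math.radians(theta_deg))

print(sensitivity("cardioid", 180.0))   # null behind the user
print(sensitivity("figure-8", 90.0))    # null to the side (effectively zero)
```

As a shrinks from the cardioid value of 0.5 toward the figure-8 value of 0, the front lobe narrows and the nulls migrate from directly behind around toward the sides, matching the patterns described above.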
  • 14:17 - 14:23
    Okay, so directivity index is -
  • 14:23 - 14:27
    - (break in audio) - - - What makes this a little bit less straightforward
  • 14:27 - 14:30
    and a little bit more confusing is at this point,
  • 14:30 - 14:35
    directionality in all digital systems is adaptive;
  • 14:35 - 14:38
    they switch automatically and they're multiband.
  • 14:38 - 14:42
    So each frequency is often doing something different.
  • 14:42 - 14:45
    It's not as global as it used to be.
  • 14:45 - 14:52
    But here's some more terminology that's useful to understand.
  • 14:52 - 14:59
    Fixed directionality is when the sensitivity pattern does not adapt or does not change.
  • 14:59 - 15:05
    Fixed directionality does not actually have to be what I have written
  • 15:05 - 15:10
    on this slide here which is one null point behind the patient or a cardioid.
  • 15:10 - 15:15
    Fixed just really means that it's not adaptive and at this point,
  • 15:15 - 15:24
    we do have the ability to set up fixed directionality in most hearing aids as a
  • 15:24 - 15:32
    separate program. So a time that I may choose to do this is if,
  • 15:32 - 15:37
    for example, a patient is still continuing to struggle,
  • 15:37 - 15:43
    and it doesn't seem like the adaptive
  • 15:43 - 15:47
    algorithms of the hearing aid are as sensitive as the person needs it to be
  • 15:47 - 15:53
    or they're describing something of it kind of coming in and out of directionality,
  • 15:53 - 16:00
    I could put a fixed, tight directional channel - - a beamformer,
  • 16:00 - 16:02
    which we'll talk about here in a minute,
  • 16:02 - 16:05
    directly in front of them, so that they
  • 16:05 - 16:10
    can manually override those decision rules that are in the hearing aid
  • 16:10 - 16:17
    programming and the patient can use their phone app or their program button
  • 16:17 - 16:22
    on the hearing aid to go into that directional program.
  • 16:22 - 16:25
    That tends to be a little bit more effective
  • 16:25 - 16:29
    in terms of providing a better signal-to-noise ratio,
  • 16:29 - 16:31
    but there are some challenges too;
  • 16:31 - 16:36
    for example, the patient has to know how to use it,
  • 16:36 - 16:38
    so how to actually change into it,
  • 16:38 - 16:42
    but they also have to remember when to change into it.
  • 16:42 - 16:44
    They have to physically be able to do that;
  • 16:44 - 16:47
    again either the push button on the hearing aid,
  • 16:47 - 16:55
    or through their app. And they also have to remember to get out of it which,
  • 16:55 - 16:58
    anecdotally, is often a challenge of,
  • 16:58 - 17:01
    okay, now I'm leaving this restaurant and I chose to
  • 17:01 - 17:05
    put my hearing aid into the directional program and now I'm leaving and I
  • 17:05 - 17:09
    completely forgot to move it out of that.
  • 17:09 - 17:11
    Which then, say those two individuals get in the car,
  • 17:11 - 17:18
    they're really going to struggle if they're in a fixed directional program and the array is
  • 17:18 - 17:21
    focused on sounds directly in front of them.
  • 17:21 - 17:27
    If they're driving, they're not going to hear the person to the right of them in the car at all.
  • 17:27 - 17:32
    So it's important that if you're gonna set up a fixed program,
  • 17:32 - 17:35
    you've got to choose the patient wisely,
  • 17:35 - 17:40
    and you really have to counsel on when to use it and when not to use it.
  • 17:40 - 17:42
    Another challenge again of that is,
  • 17:42 - 17:47
    say, two people are at a restaurant and then the server comes up and starts talking,
  • 17:47 - 17:49
    the person in that fixed program -
  • 17:49 - 17:55
    - again, of course, depending on their hearing loss and other features in the hearing aid,
  • 17:55 - 17:59
    they may not have any awareness that the server has come up and started talking,
  • 17:59 - 18:02
    because they have a null point in that spot,
  • 18:02 - 18:08
    you know, to the side, and really only have good sensitivity
  • 18:08 - 18:13
    or ability to pick up sounds from directly in front of them.
  • 18:13 - 18:18
    Other scenarios where I've set up fixed directional programs,
  • 18:18 - 18:22
    if you think of someone in a wheelchair or someone who's driving children,
  • 18:22 - 18:27
    maybe, you know, parent bus driver -
  • 18:27 - 18:31
    - I actually have set up a fixed directional program for a bus driver.
  • 18:31 - 18:36
    Think about it. Where that person needs to hear is actually
  • 18:36 - 18:39
    not in front of them but sounds behind them.
  • 18:39 - 18:44
    So we often have the ability to make a fixed directivity pattern,
  • 18:44 - 18:46
    you know, in front, behind to the side,
  • 18:46 - 18:49
    depending on what the need of your patient is,
  • 18:49 - 18:51
    but that - you need to have, you know,
  • 18:51 - 18:54
    lengthy conversations and counseling sessions
  • 18:54 - 18:57
    on what their need is and how to use it effectively;
  • 18:57 - 19:01
    otherwise they could potentially get into trouble.
  • 19:01 - 19:03
    In a fixed sensitivity pattern,
  • 19:03 - 19:08
    that time delay is constant. In an adaptive pattern,
  • 19:08 - 19:12
    that directivity pattern typically is changing
  • 19:12 - 19:17
    to maximize the audibility from a target direction or more often,
  • 19:17 - 19:21
    the most dominant sound source.
  • 19:21 - 19:28
    So, in general, these are usually more effective than the fixed arrays,
  • 19:28 - 19:32
    because it's rare that an individual only wants to hear from
  • 19:32 - 19:39
    one direction. So in this, that time delay is constantly varied,
  • 19:39 - 19:48
    which then changes the polar plot constantly to provide the most attenuation for,
  • 19:48 - 19:51
    you know, depending on what the sound level is to the side,
  • 19:51 - 19:55
    from behind. Again, there are typically,
  • 19:55 - 19:58
    just like what we talked about in the last lecture,
  • 19:58 - 20:01
    digital - -the digital noise reduction lecture -
  • 20:01 - 20:03
    -there are usually rules or criterion in
  • 20:03 - 20:06
    place for the hearing aid to decide,
  • 20:06 - 20:10
    okay, at this point, so for example,
  • 20:10 - 20:15
    when sound exceeds 55 or 65 dB SPL,
  • 20:15 - 20:22
    which I would say is typically really conservative or more often,
  • 20:22 - 20:26
    it's like 70 dB SPL, and the SNR is of a certain range in this
  • 20:26 - 20:31
    frequency band - - the hearing aid will go into a directionality mode.
  • 20:31 - 20:34
    But when that criterion has not been met,
  • 20:34 - 20:36
    the hearing aid is functioning omnidirectionally.
  • 20:36 - 20:40
    Again, this is multi-channel,
  • 20:40 - 20:45
    so this is happening at multiple frequency channels at any given time.
  • 20:45 - 20:49
    So within any frequency band,
  • 20:49 - 20:52
    the hearing aid is looking for a noise source,
  • 20:52 - 20:55
    detecting the overall dB SPL,
  • 20:55 - 20:58
    detecting the signal-to-noise ratio,
  • 20:58 - 21:01
    and then applying those different time delays in each channel.
  • 21:01 - 21:11
    And so the goal is to reduce multiple noise sources that are spatially separated,
  • 21:11 - 21:15
    if they're occurring in different frequency regions.
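A hypothetical per-band decision rule along these lines might look like the sketch below. The 70 dB SPL threshold is the lecture's example; the SNR range and the band levels are placeholder values I made up, since actual criteria are manufacturer-specific and proprietary.

```python
# Hypothetical per-band switching rule (illustrative only): go directional
# when the band level exceeds a threshold AND the SNR is in a target range;
# otherwise stay omnidirectional. Thresholds below are assumptions.
LEVEL_THRESHOLD_DB_SPL = 70.0   # the lecture's example criterion
SNR_RANGE_DB = (-10.0, 15.0)    # placeholder SNR range

def band_mode(level_db_spl: float, snr_db: float) -> str:
    noisy = level_db_spl >= LEVEL_THRESHOLD_DB_SPL
    snr_ok = SNR_RANGE_DB[0] <= snr_db <= SNR_RANGE_DB[1]
    return "directional" if noisy and snr_ok else "omnidirectional"

# Each band decides independently, so the aid can be directional in one
# frequency region and omnidirectional in another at the same moment.
bands = {"500 Hz": (62.0, 4.0), "2 kHz": (74.0, 4.0)}
for name, (level, snr) in bands.items():
    print(name, "->", band_mode(level, snr))
```

This is the sense in which multiple spatially separated noise sources can be attenuated at once, as long as they dominate different frequency regions.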
  • 21:15 - 21:24
    Okay, so beamforming directionality.
  • 21:24 - 21:33
    This really came to be when we had the
  • 21:33 - 21:36
    development of that, again, interaural or ear-to-ear wireless communication
  • 21:36 - 21:39
    because that essentially allowed us to then have four microphones because,
  • 21:39 - 21:45
    again, we have two on each hearing aid that communicate with one another.
  • 21:45 - 21:51
    So this allowed us to move towards even more narrow directionality.
  • 21:51 - 21:53
    When we're talking about beamforming,
  • 21:53 - 21:58
    though, it's exactly that. So you can think of it as a beam and I really like this graphic here,
  • 21:58 - 22:05
    where the purple is what is within that beam or that very narrow directionality,
  • 22:05 - 22:12
    that is getting good emphasis or good sensitivity and everything outside of
  • 22:12 - 22:18
    that beam is attenuated. So a traditional dual mic system
  • 22:18 - 22:22
    has about a 60-degree angle,
  • 22:22 - 22:24
    whereas beamforming technology can be as
  • 22:24 - 22:28
    tight as about 45 degrees. The limitations,
  • 22:28 - 22:30
    I've kind of already talked about so,
  • 22:30 - 22:33
    for example, if you're using
  • 22:33 - 22:39
    this beam in a restaurant setting and then the server comes up to the side,
  • 22:39 - 22:42
    they're not going to have probably even awareness -
  • 22:42 - 22:48
    - again, of course, depending on degree of hearing loss in the speaker and a number of other things,
  • 22:48 - 22:51
    but much less awareness of sound coming from other directions.
  • 22:51 - 22:54
    So I have some people, for example,
  • 22:54 - 22:57
    who may use this in church, but again,
  • 22:57 - 23:00
    they're not gonna be able to communicate at that point with the
  • 23:00 - 23:02
    person sitting next to them.
  • 23:02 - 23:04
    So counseling is really, really important
  • 23:04 - 23:11
    when you are choosing to use this type of directionality.
  • 23:11 - 23:14
    And you can see here, I'm pretty sure this is Signia.
  • 23:14 - 23:18
    I do like this app, although I don't use that manufacturer
  • 23:18 - 23:23
    as often, but this is a really nice way for the patient to visually understand and
  • 23:23 - 23:31
    control what they - - where their directional beams are pointing to and then you can see down here at the bottom,
  • 23:31 - 23:38
    they also have the ability to control how wide or narrow that beam is.
  • 23:38 - 23:41
    In general, the smaller that angle is,
  • 23:41 - 23:45
    the more favorable the signal-to-noise ratio is.
  • 23:45 - 23:50
    So evidence has shown that when,
  • 23:50 - 23:55
    in the lab setting, the angle goes from 60 degrees,
  • 23:55 - 24:01
    which is a traditional directional mic system to about 40 degrees -
  • 24:01 - 24:10
    - 45 degrees - - that can make an improvement as great as 1 dB in the signal-to-noise ratio.
  • 24:10 - 24:16
    So there are a number of manufacturers
  • 24:16 - 24:20
    that are implementing this beamforming technology,
  • 24:20 - 24:27
    although not all of them. What you're looking at here on these polar plots -
  • 24:27 - 24:32
    - these were Signia, although not the latest platform now,
  • 24:32 - 24:36
    and we're looking at, you know,
  • 24:36 - 24:41
    them bragging about their beamforming technology
  • 24:41 - 24:47
    in their hearing aids versus other manufacturers,
  • 24:47 - 24:53
    but mostly what - - I just liked the ability to look at how those polar plots may vary,
  • 24:53 - 24:56
    again, for different manufacturers,
  • 24:56 - 25:07
    different technology. Some manufacturers use split band or split channel directionality.
  • 25:07 - 25:11
    So what this is, is the hearing aid is directional in the
  • 25:11 - 25:15
    high frequencies, but omnidirectional in the low frequencies.
  • 25:15 - 25:19
    This cutoff point of where it becomes directional
  • 25:19 - 25:23
    versus omnidirectional may be fixed by the manufacturer;
  • 25:23 - 25:29
    it may be something that's able to be adjusted by the clinician or audiologist.
  • 25:29 - 25:34
    Or it may vary depending on the patient's hearing loss.
  • 25:34 - 25:40
    Typically that cutoff point is around 2000 Hz or so.
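A minimal sketch of split-band behavior, assuming the roughly 2000 Hz cutoff mentioned above (remember that real cutoffs may be manufacturer-fixed, clinician-adjustable, or dependent on the hearing loss):

```python
# Split-band directionality sketch (my own illustration): omnidirectional
# below an assumed cutoff frequency, directional at and above it.
CUTOFF_HZ = 2000.0  # typical cutoff per the lecture; often adjustable

def channel_mode(center_freq_hz: float) -> str:
    return "omnidirectional" if center_freq_hz < CUTOFF_HZ else "directional"

for f in (500, 1000, 2000, 4000):
    print(f, "Hz ->", channel_mode(f))
```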
  • 25:40 - 25:44
    The purpose of this is that -
  • 25:44 - 25:51
    - directional systems typically have a low frequency roll-off naturally.
  • 25:51 - 25:57
    That is then compensated for by applying an extra
  • 25:57 - 26:00
    boost of amplification in the low frequencies.
  • 26:00 - 26:03
    This is called equalization.
  • 26:03 - 26:09
    But when that low frequency boost is provided,
  • 26:09 - 26:14
    it then affects the effectiveness of the directionality of the system.
  • 26:14 - 26:17
    At the same time, if you don't do it,
  • 26:17 - 26:24
    the patient may complain that the hearing aid sounds too tinny or too unnatural.
  • 26:24 - 26:30
    The advantages of having this concept of directional
  • 26:30 - 26:35
    in the higher frequencies and omnidirectional in the low frequencies
  • 26:35 - 26:39
    is that it does more closely mimic the directivity of the human ear.
  • 26:39 - 26:44
    So we take advantage of the directionality,
  • 26:44 - 26:52
    but also we preserve the environmental awareness and low frequency cues that happen,
  • 26:52 - 26:54
    such as the interaural timing difference.
  • 26:54 - 26:58
    If the equalization concept is applied -
  • 26:58 - 27:02
    - so again, that idea that the low frequencies are boosted
  • 27:02 - 27:05
    in order to compensate for that low frequency
  • 27:05 - 27:09
    roll-off that happens naturally with directional systems,
  • 27:09 - 27:17
    then the individual's own voice,
  • 27:17 - 27:20
    wind noise, other low frequency sounds,
  • 27:20 - 27:30
    may be overamplified. But this is gonna sound more natural to the hearing aid user.
  • 27:30 - 27:34
    The disadvantage of having this concept of split
  • 27:34 - 27:37
    band or split channel directionality is that
  • 27:37 - 27:41
    there will be no signal-to-noise ratio improvement in the
  • 27:41 - 27:47
    low frequencies, which is where noise often occurs.
  • 27:47 - 27:54
    So sometimes that may not be great for hearing in noise.
  • 27:54 - 27:59
    The other concept to keep in mind is that all
  • 27:59 - 28:04
    open fit hearing aids really have split directionality.
  • 28:04 - 28:07
    And the reason for that is, remember in the low frequencies,
  • 28:07 - 28:09
    if you have something that's truly open,
  • 28:09 - 28:12
    like open dome, we're really not providing
  • 28:12 - 28:18
    gain in those frequencies at all and the sound is able to enter the ear canal
  • 28:18 - 28:22
    naturally, the way it normally does.
  • 28:22 - 28:25
    A hearing aid isn't going to be acting on
  • 28:25 - 28:29
    any sound that is entering the ear canal naturally.
  • 28:29 - 28:32
    So if it's not being processed by the hearing aid,
  • 28:32 - 28:35
    then it's not getting any directionality.
  • 28:35 - 28:38
    There have definitely been times working
  • 28:38 - 28:44
    with patients where I may move to more occluding or closed canal,
  • 28:44 - 28:53
    by way of dome or earmold sooner than I thought I would based on their hearing loss.
  • 28:53 - 28:59
    So they may have, you know, decent low frequency hearing,
  • 28:59 - 29:04
    but I may choose to go with something a little bit more occluding because I really need
  • 29:04 - 29:08
    that directionality, noise reduction,
  • 29:08 - 29:10
    any of that to take place in the low frequencies.
  • 29:10 - 29:14
    This is not common but it's something that I definitely think about
  • 29:14 - 29:17
    and have to balance as I'm making decisions.
  • 29:17 - 29:22
    Again, this is typically not the way I'll go right off the bat,
  • 29:22 - 29:27
    because most of the time, if you occlude the ear more than it should be,
  • 29:27 - 29:31
    the patient is not going to be satisfied with the sound quality or the
  • 29:31 - 29:34
    naturalness of their own voice,
  • 29:34 - 29:38
    particularly, because of the occlusion effect.
  • 29:38 - 29:44
    But if I'm continuing to have a patient struggle or complain about hearing in noise or,
  • 29:44 - 29:48
    you know, things that make it sound like the noise reduction
  • 29:48 - 29:53
    isn't doing its job, and the patient is in a completely open dome,
  • 29:53 - 29:58
    that may be something that I consider changing.
  • 29:58 - 30:02
    So again, we're looking at polar plots here.
  • 30:02 - 30:10
    But what we're looking at now is the polar plot by frequency.
  • 30:10 - 30:14
    So you can see there's different colors to demarcate the different frequencies,
  • 30:14 - 30:20
    so 500, 1000, 2000, 4000. And this graph is really meant to show you
  • 30:20 - 30:24
    both in a completely open ear and also with a
  • 30:24 - 30:29
    hearing aid that is intentionally doing split band directionality,
  • 30:29 - 30:36
    the low frequencies, so 500 and 1000 Hz, are essentially omnidirectional versus 2000 and 4000 Hz,
  • 30:36 - 30:41
    which are getting some level of directionality.
  • 30:41 - 30:47
    So again, these, you know, anything that falls outside of this red
  • 30:47 - 30:52
    or this gold region for the 2 and 4 kHz,
  • 30:52 - 30:54
    the hearing aid is really not picking up.
  • 30:54 - 30:58
    It's not sensitive to it, so you can see the difference in sensitivity patterns
  • 30:58 - 31:00
    of the microphone at those different frequencies.
  • 31:00 - 31:05
    One manufacturer, GN ReSound,
  • 31:05 - 31:13
    uses asymmetric directivity and to be completely honest with you,
  • 31:13 - 31:16
    the first time I heard about this type of directionality,
  • 31:16 - 31:19
    and even now, as I read about it,
  • 31:19 - 31:22
    it's hard for me to understand that it's effective.
  • 31:22 - 31:24
    But the evidence shows that it is.
  • 31:24 - 31:27
    What this is is, one hearing aid is always in
  • 31:27 - 31:34
    omnidirectional mode and the other is always in directional mode.
  • 31:34 - 31:36
    The theory behind this or the goal,
  • 31:36 - 31:40
    is to maximize auditory awareness from any direction.
  • 31:40 - 31:45
    I say that in quotes because we still have
  • 31:45 - 31:51
    to remember that we have the concept of the head shadow coming into play here.
  • 31:51 - 31:58
    But also having directionality at all times as well.
  • 31:58 - 32:05
    They reported that this was developed initially to overcome a patient's inability to use the manual
  • 32:05 - 32:12
    directional microphone, and the limitations that do exist of automatic
  • 32:12 - 32:14
    environmental classification.
  • 32:14 - 32:16
    The last I heard, this was still being used
  • 32:16 - 32:21
    in ReSound hearing aids, but hopefully the group that has been assigned to look
  • 32:21 - 32:25
    at ReSound can tell us more about it.
  • 32:25 - 32:27
    Again, there is research to show that
  • 32:27 - 32:33
    it's successful, so two different articles showed just as much directional
  • 32:33 - 32:39
    benefit - - and then two articles that showed less directional benefit.
  • 32:39 - 32:45
    So 1.5 to 2 dB in adults and children with mild to moderate hearing loss.
  • 32:45 - 32:52
    Okay so let's talk a little bit about reverberation.
  • 32:52 - 32:57
    Reverberation is when a sound source is reflected
  • 32:57 - 33:02
    off room surfaces and so in this scenario we
  • 33:02 - 33:04
    have a direct sound pathway,
  • 33:04 - 33:09
    and multiple reflected sound pathways.
  • 33:09 - 33:13
    As the distance from the speaker,
  • 33:13 - 33:16
    or the sound source increases,
  • 33:16 - 33:19
    the direct sound level will decrease,
  • 33:19 - 33:23
    but the reverberant sound remains constant depending on the
  • 33:23 - 33:29
    location of the reflective surfaces.
  • 33:29 - 33:33
    So, for example, if you are near a wall or a
  • 33:33 - 33:38
    hard surface that may be a reflective surface,
  • 33:38 - 33:44
    the reflective sound or the reverberant sound may even be higher than the direct sound.
  • 33:44 - 33:52
    Critical distance is an important concept to understand.
  • 33:52 - 33:58
    Critical distance is the point at which the direct sound is equal to the reverberant sound.
  • 33:58 - 34:03
    So basically once you're outside of that critical distance,
  • 34:03 - 34:10
    the direct sound level is drastically reduced compared to the reverberant sound.
  • 34:10 - 34:15
    And once we're outside of that critical distance point,
  • 34:15 - 34:17
    the directional microphone and the digital noise
  • 34:17 - 34:22
    reduction effectiveness are also reduced.
  • 34:22 - 34:25
    So this is why counseling is very important,
  • 34:25 - 34:28
    even with these great features
  • 34:28 - 34:32
    that we have of directional microphones and digital noise reduction,
  • 34:32 - 34:36
    the patient still has to be close to the speaker.
  • 34:36 - 34:39
    This will improve the signal-to-noise
  • 34:39 - 34:45
    ratio, this reduces the consequential effects of reverberation.
  • 34:45 - 34:49
    This reduces the effects of distance,
  • 34:49 - 34:52
    so think about your inverse square law.
  • 34:52 - 34:55
    The further you move away from the sound,
  • 34:55 - 34:58
    the greater the reduction of sound,
  • 34:58 - 35:04
    and this also does allow the directional microphones to be more effective.
  • 35:04 - 35:09
    Finally, it gives the patient access to visual cues from the speaker.
  • 35:09 - 35:15
    Typically, critical distance is around six feet.
  • 35:15 - 35:22
    Some of the manufacturers say that directionality really is most effective within 10 feet.
  • 35:22 - 35:26
    So again, this is important for you to remember as the clinician.
  • 35:26 - 35:30
    A perfect example of this is church.
  • 35:30 - 35:33
    You know, even if the patient is in the very front row,
  • 35:33 - 35:37
    they are rarely within 10 feet of the person talking.
  • 35:37 - 35:42
    Now, again, this is a different type of environment because they do have speakers
  • 35:42 - 35:49
    too - - but you can set up the most beautiful fixed directional program,
  • 35:49 - 35:52
    but if the speaker is still 30 feet away,
  • 35:52 - 35:56
    it's not going to be as effective.
  • 35:56 - 36:00
    So this is why we need to have a constant conversation about,
  • 36:00 - 36:09
    again, being close to the speaker for a variety of different reasons.
  • 36:09 - 36:13
    Reverberation is a pretty significant challenge:
  • 36:13 - 36:16
    the biggest challenge for directional microphones.
  • 36:16 - 36:21
    If we look at our waveform graphs or figures on the
  • 36:21 - 36:23
    right side of the slide here,
  • 36:23 - 36:27
    what you can see happens is reverberation
  • 36:27 - 36:33
    essentially fills in the modulation depth of the speech signal.
  • 36:33 - 36:39
    So this is the same signal as in Figures 4-8 and 4-15,
  • 36:39 - 36:43
    but this signal is sampled in a reverberant room.
  • 36:43 - 36:47
    So you can see - - we have these really big peaks and valleys and
  • 36:47 - 36:55
    modulation depths here that are essentially wiped away or gone in Figure 4-15.
  • 36:55 - 36:58
    This isn't noise; this is just reverberation.
  • 36:58 - 37:03
    Average reverberation time is 4 seconds.
  • 37:03 - 37:08
    The longer the time is, the more that modulation is filled in.
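The "filling in" of modulation depth can be illustrated with a toy envelope simulation. This is a minimal sketch: the one-pole decay is a crude stand-in for a real reverberant tail, and the 4 Hz modulation rate and RT60 value are illustrative assumptions.

```python
import math

def modulation_index(env):
    # Depth of the envelope fluctuations: 1.0 = fully modulated, 0.0 = flat
    return (max(env) - min(env)) / (max(env) + min(env))

fs = 100  # envelope samples per second
# Clean speech-like envelope: 4 Hz amplitude modulation, 2 seconds long
clean = [1.0 + 0.9 * math.sin(2 * math.pi * 4 * t / fs) for t in range(2 * fs)]

def add_reverb(env, rt60, fs):
    # One-pole decaying tail that reaches -60 dB after rt60 seconds --
    # a crude stand-in for reverberant energy filling the envelope valleys
    alpha = 10 ** (-3 / (rt60 * fs))
    out, tail = [], 0.0
    for x in env:
        tail = max(x, tail * alpha)  # the tail persists after each peak
        out.append(tail)
    return out

print(modulation_index(clean))                       # deep modulation, ~0.9
print(modulation_index(add_reverb(clean, 1.0, fs)))  # much shallower
```

The reverberant version keeps the peaks but loses the valleys, which is exactly the masking of later-occurring speech by earlier speech described in the lecture.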
  • 37:08 - 37:14
    So basically what's happening here is earlier parts of speech that are then
  • 37:14 - 37:18
    reverberating are serving to mask -
  • 37:18 - 37:22
    - later-occurring parts of speech.
  • 37:22 - 37:26
    Another challenge with reverberation in terms of directional microphones is that
  • 37:26 - 37:31
    essentially it's reaching the microphone evenly from all directions.
  • 37:31 - 37:36
    So then those concepts of making a polar plot and
  • 37:36 - 37:40
    putting in a null point really work only minimally well when the noise is coming
  • 37:40 - 37:43
    from every direction. So again,
  • 37:43 - 37:46
    an adaptive algorithm tries to choose
  • 37:46 - 37:53
    the pattern that, averaged over all directions, attenuates the most noise.
  • 37:53 - 37:58
    But that's going to be challenging when the noise is really coming from everywhere.
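The diffuse-noise point can be quantified with a toy 2-D polar-pattern calculation. This is a simplified sketch: real directivity indexes are computed over a 3-D sound field, so these numbers differ somewhat from published DI values.

```python
import math

def pattern_gain_db(response, n=3600):
    # 2-D "directivity": on-axis power vs. power averaged over all
    # arrival angles -- a simplified stand-in for the 3-D directivity index
    mean_power = sum(response(2 * math.pi * k / n) ** 2 for k in range(n)) / n
    return 10 * math.log10(response(0.0) ** 2 / mean_power)

omni     = lambda theta: 1.0
cardioid = lambda theta: (1 + math.cos(theta)) / 2  # null at 180 degrees

print(pattern_gain_db(omni))      # 0.0 dB: no benefit over omnidirectional
print(pattern_gain_db(cardioid))  # ~4.3 dB benefit in a fully diffuse field
```

The cardioid's null rejects a single noise source from behind almost completely, but when noise arrives evenly from every direction the average benefit collapses to only a few dB, which is the limitation described above.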
  • 37:58 - 38:08
    So clinical application of directional microphones.
  • 38:08 - 38:14
    At this point they're primarily implemented automatically and adaptively,
  • 38:14 - 38:16
    but there are scenarios, as I
  • 38:16 - 38:21
    described earlier in the lecture that they can be implemented as a manual program.
  • 38:21 - 38:27
    You just have to make sure the patient knows how and when to
  • 38:27 - 38:30
    successfully use the program.
  • 38:30 - 38:34
    The mic port location and orientation have to be
  • 38:34 - 38:38
    correct and in the horizontal plane in order for them to be effective.
  • 38:38 - 38:48
    The range of benefit of directional microphones is really highly dependent on the environment.
  • 38:48 - 38:52
    So hopefully we've covered this in depth throughout this slide show,
  • 38:52 - 38:56
    but factors, of course, are going to include the number,
  • 38:56 - 39:00
    location and type of competing noise sources,
  • 39:00 - 39:04
    reverberation time in the environment,
  • 39:04 - 39:08
    the distance between the hearing aid user and sound of interest,
  • 39:08 - 39:15
    and the fact that the hearing aids are operating on the principle that the speech is the signal
  • 39:15 - 39:22
    of interest. And typically operating on the principle that the loudest speech is
  • 39:22 - 39:25
    the signal of interest,
  • 39:25 - 39:29
    which may not always be the case.
  • 39:29 - 39:34
    Another thing to keep in mind is the effect of venting on directionality,
  • 39:34 - 39:39
    which we talked about briefly a couple of slides ago.
  • 39:39 - 39:41
    So the hearing aid style
  • 39:41 - 39:45
    has not been found in the research to have a significant impact on directionality.
  • 39:45 - 39:49
    But venting or openness of the canal does.
  • 39:49 - 39:53
    Again, because sound is leaking out of the ear canal,
  • 39:53 - 39:59
    but also because sound is entering the ear canal unprocessed by the hearing aid.
  • 39:59 - 40:02
    So if you look at Figure 11-16 here,
  • 40:02 - 40:06
    we're looking at changes in low frequency DI,
  • 40:06 - 40:11
    or directivity index, and in AI-DI, which is the articulation index-weighted directivity
  • 40:11 - 40:16
    index, for a no-vent versus a 1-millimeter vent,
  • 40:16 - 40:20
    which is quite small, so that's gonna be a little bit larger than a pressure vent.
  • 40:20 - 40:22
    A 2-millimeter vent, which is,
  • 40:22 - 40:27
    then, a pretty large vent and then a completely open canal.
  • 40:27 - 40:38
    But, you know, venting is valuable and omnidirectional is valuable,
  • 40:38 - 40:46
    in that it does provide us with sound awareness from multiple directions.
  • 40:46 - 40:50
    So what does the evidence tell us on directionality
  • 40:50 - 40:54
    and speech recognition and noise?
  • 40:54 - 40:59
    This is wide-ranging. Improvements of up to 11 dB,
  • 40:59 - 41:04
    or up to 70 percent in word recognition in noise, have been reported in a lab setting
  • 41:04 - 41:10
    when using directionality. A 2017 article by Todd Ricketts at Vanderbilt
  • 41:10 - 41:13
    reported that directional microphones provide
  • 41:13 - 41:19
    an advantage or improvement in signal-to-noise ratio about 42 percent of the time in school.
  • 41:19 - 41:25
    This is important because for a number of years,
  • 41:25 - 41:28
    it was said that we shouldn't be using directionality
  • 41:28 - 41:31
    or noise reduction in pediatric fittings because we were
  • 41:31 - 41:40
    potentially not able to take advantage of the concept of incidental learning.
  • 41:40 - 41:43
    At this point it seems like enough evidence has
  • 41:43 - 41:51
    disproven that concern, and that directional microphones and noise reduction can be valuable for kids in schools,
  • 41:51 - 41:59
    but we will talk more about that in Hearing Aids II when we have a pediatric-specific lecture.
  • 41:59 - 42:05
    Adaptive directionality and beamforming directionality
  • 42:05 - 42:08
    have been shown to add an additional 1 to 2 dB
  • 42:08 - 42:14
    of benefit over that concept of traditional fixed directionality.
  • 42:14 - 42:21
    Listening effort - - I love to see the research coming out about listening effort.
  • 42:21 - 42:24
    Both directional and beamforming technology have been shown to
  • 42:24 - 42:31
    reduce listening effort, both behaviorally and subjectively compared
  • 42:31 - 42:35
    to omnidirectional implementation.
  • 42:35 - 42:41
    So, again, this is where we have to think
  • 42:41 - 42:43
    about how we use this for patients.
  • 42:43 - 42:46
    One study, Cord et al., found that 30% of
  • 42:46 - 42:49
    users did not switch between settings, and users often did not know when to switch.
  • 42:49 - 42:53
    Anecdotally I will tell you that for the most part that seems to be true.
  • 42:53 - 42:59
    I really try to avoid setting up specific programs for patients because
  • 42:59 - 43:05
    even the most savvy patients seem to stop using them after a period of time,
  • 43:05 - 43:08
    or still don't quite understand how to use them,
  • 43:08 - 43:11
    when to use them, when to get out of them.
  • 43:11 - 43:15
    I do set them up sometimes, when you just absolutely have to,
  • 43:15 - 43:20
    but I would be conservative with doing that and making sure you're counseling and also
  • 43:20 - 43:26
    checking back in routinely, because I have troubleshot a lot of problems
  • 43:26 - 43:33
    caused by excessive program use and lack of understanding of programs.
  • 43:33 - 43:38
    One study by Therese Walden suggested that some
  • 43:38 - 43:41
    of the switching algorithms may be too conservative,
  • 43:41 - 43:45
    so the study suggested that 33% of the time,
  • 43:45 - 43:50
    hearing aid users were in environments where
  • 43:50 - 43:54
    the directional microphone may be beneficial;
  • 43:54 - 43:59
    however, they only switched 5 to 17 percent of the time.
  • 43:59 - 44:01
    I will tell you, also anecdotally,
  • 44:01 - 44:04
    this has been my experience clinically;
  • 44:04 - 44:11
    with the manufacturers that do have really precise data logging systems,
  • 44:11 - 44:19
    I am shocked by how rarely they're actually going
  • 44:19 - 44:22
    into their speech in noise program or programs
  • 44:22 - 44:24
    where directionality and noise reduction
  • 44:22 - 44:24
    are used, so that information can be valuable,
  • 44:29 - 44:34
    especially when you have the ability to change how quickly the hearing aid is doing this.
  • 44:34 - 44:41
    So again, looking at directional advantage,
  • 44:41 - 44:46
    compared to omni, it's a wide range,
  • 44:46 - 44:53
    and this has been replicated by a number of studies.
  • 44:53 - 44:56
    There does not seem to be a significant effect
  • 44:56 - 45:05
    based on hearing loss in terms of omnidirectional versus directional.