The Augmented Xperience (AX) hearing aid platform with Augmented Focus intelligently and automatically processes sound to help ensure that listeners always hear clearly – regardless of the listening environment.
Rather than simply amplifying all sounds, as most of today’s hearing aids do, Signia Augmented Xperience hearing aids intelligently determine which sounds should be pulled to the foreground and prioritized, and which should remain in the background.
The net result of this world-first split-processing technology is a fully immersive and intelligent hearing experience. Sounds shift into the foreground and background naturally and seamlessly depending on the environment, creating an augmented hearing experience that’s better than normal hearing in certain situations*.
“Hearing isn’t always easy – a big group of people talking at the same time, softly spoken talkers in a bustling environment, too much background noise,” stated Samy Lauriette, Signia’s head of brand. “Augmented Xperience changes the game by understanding which sounds should be brought into focus and which remain in the background – creating an almost superhuman level of hearing that optimizes one’s human performance through enhanced hearing in any situation.”
The Augmented Xperience platform will debut in the all-new Signia Pure Charge&Go AX hearing aids, which are compatible with iOS and Android devices and deliver up to 36 hours of run-time per charge.
Augmented Xperience: A Platform Built On World-First Technologies
The Augmented Xperience platform is rooted in the world’s first Augmented Focus™ technology that processes speech and background noise separately to create a clear contrast between the two. It then recombines them to deliver outstanding speech clarity even in a fully immersive soundscape – like a crowded cafe or an open office environment.
Augmented Focus leverages two independent processors – the first of which addresses ‘focus’ sounds like the speech of a conversation partner, while the second addresses ‘surrounding’ sounds like background music or ambient laughter, which create situational awareness and excitement. The two processors capture focus and surrounding sounds independently to create greater contrast between the two – pulling focus sounds closer and placing surrounding sounds further away.
Understanding Signia's Augmented Focus Technology
Understanding this new technology is easier when the effects of sound inputs on the brain are considered.
The brain is constantly looking for changes in its perceived environment. For vision, this is “edge detection” – the brain’s ability to enhance contrast around edges and silhouettes.
For hearing, the equivalent occurs when the brain gives more focus to sounds with greater dynamics. When a person enters a new environment, the initial focus falls on the largest contrasts, in both the visual and auditory domains.
A familiar example of this phenomenon is watching a movie in a theater. When sound engineers mix a film’s audio, their goal is to steer the audience’s attention toward the dialogue, so they add more contrast to speech to make it stand out from the background noise.
A critical part of this mixing process involves controlling the background sounds so they don’t mask the dialogue. This engineering sleight of hand “moves” the less important sounds further away from the moviegoer, while simultaneously bringing the talker of interest closer.
The stream containing the talker’s voice and the stream containing less important sounds are processed independently, each on its own timescale and with its own adjustments.
The result is what Signia calls AX.
Augmented Focus Analysis
Beamforming forms the core of this new technology. The incoming sound to the hearing aid is split into two separate streams.
One stream contains sound from the front of the wearer, while the second stream contains sound from the back. Both streams are processed independently.
This means that for each stream, a dedicated processor is used to analyze the characteristics of sound from every direction.
Two separate processors independently analyze the sound streams across 48 distinct channels. Each determines whether the incoming sound contains information the wearer needs to focus on, background sound that keeps the wearer attuned to the surroundings, or distracting background noise that should be suppressed. The algorithm estimates the probability that each of these categories is present in the sound stream.
This classification relies on the strength and speed of the input signal’s amplitude modulation. Steady-state noises with slow, soft modulation (fan noise, humming, etc.) and fast, strongly modulated sounds such as dishes clattering are both distracting and/or annoying; both are recognized and suppressed by the two processors within Augmented Focus.
In contrast, the relevant information is mostly contained in sound inputs with moderately fast modulations. The system therefore distinguishes between different modulation rates of the input signal:
- slow – mostly unwanted sounds, like fan noise
- med/low – most of the surrounding sounds
- med/fast – what the human brain usually wants to focus on
- fast – typically distracting noise in the surroundings
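The four categories above amount to a threshold classifier on envelope-modulation rate. The cutoff values in this sketch are illustrative assumptions (speech envelopes are known to modulate at roughly 2–8 Hz, around the syllable rate); Signia’s actual boundaries are not published:

```python
def classify_by_modulation(rate_hz):
    """Map an envelope-modulation rate (Hz) to the four AX categories.
    Threshold values are illustrative, not published specifications."""
    if rate_hz < 2:
        return "slow (unwanted steady noise, e.g. fan)"
    elif rate_hz < 4:
        return "med/low (surrounding sounds)"
    elif rate_hz < 10:
        return "med/fast (focus sounds, e.g. speech)"
    else:
        return "fast (distracting transients)"
```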
In addition to the separate stream analysis, Augmented Focus maintains the dynamics of the soundscape around the wearer.
To do this, AX uses a powerful soundscape processing unit – a carryover from the successful Xperience platform. This two-part analysis system (stream analysis and soundscape analysis), operating in tandem, enables AX to know exactly what is happening around the hearing aid wearer at all times, regardless of the listening situation.
Figure 3 illustrates the essential components of soundscape analysis.
The most remarkable wearer benefit results from this processing strategy: By shaping the two signal streams independently and without compromise and by maintaining a resolution of 48 channels in each stream, Augmented Focus creates a totally unique sound experience for the wearer.
Knowledge of the content of both streams allows Augmented Focus to process the sound in the hearing aid in the same way as a movie sound engineer, as previously described. Similar to the processing in Augmented Focus, the movie sound engineer has access to the different sound streams of the movie (the dialogue, atmosphere and music) and applies different sound design philosophies by acting on each stream independently and combining them in varying ways.
From serial to orchestrated processing
A common problem associated with traditional serial processing is audible artifacts. For example, a noise reduction algorithm may reduce overall gain, but a compression algorithm then analyzes the resulting signal and increases gain again. In extreme cases, this can amplify small artifacts and result in smeared, or unstable sound.
The processing architecture of Augmented Focus is radically different, as outlined in Figure 5. All algorithms receive the same clean input signal, and all processing is performed in parallel. Calculated gain changes are combined in the central gain unit and applied only once. In this way, Augmented Focus avoids the artifacts generated by interactions between several algorithms processed in series.
Signia Augmented Focus is a groundbreaking change in the hearing aid’s sound processing architecture. Previously, all hearing aid algorithms were processed one after another in a series, as shown in Figure 4. Now, with Augmented Focus, hearing aid processing is achieved in a much different way.
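The difference between the two architectures can be sketched in a few lines. Here the “stages” are toy gain algorithms (a fixed noise reducer and a crude level-triggered compressor, both invented for illustration); the point is how their gain decisions interact in series but not in parallel:

```python
def serial_chain(signal, stages):
    """Traditional serial processing: each algorithm re-analyzes the output
    of the previous one, so their gain decisions can fight each other."""
    for stage in stages:
        gains_db = stage(signal)
        signal = [x * 10 ** (g / 20) for x, g in zip(signal, gains_db)]
    return signal

def orchestrated(signal, stages):
    """Parallel processing: every algorithm analyzes the same clean input;
    per-sample gain proposals (in dB) are summed in a central gain unit
    and applied to the signal exactly once."""
    proposals = [stage(signal) for stage in stages]
    total_db = [sum(g) for g in zip(*proposals)]
    return [x * 10 ** (g / 20) for x, g in zip(signal, total_db)]
```

In the serial chain, a compressor downstream of a noise reducer judges an already-attenuated signal and may make the opposite decision it would have made on the clean input; in the orchestrated version every stage judges the same clean signal, and the combined gain is applied once.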
New way of directional processing
The combination of the two streams, each using independent gain processing, creates a directional system. If a sudden loud sound appears from behind the wearer, the compression in the back-processing stream attenuates the gain (like any normal compressor). The resulting attenuation creates a directional amplification pattern. The same is true for the noise reduction – if noise is detected in one of the streams it can be attenuated separately, also resulting in a directional amplification pattern.
At its heart, then, Augmented Focus is a directional compression and noise reduction system: compression and noise reduction can be applied separately to sounds coming from the front and to inputs coming from the back of the wearer.
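How independent per-stream compression produces a directional pattern can be shown with a minimal static compressor. The threshold and ratio below are arbitrary illustration values, and the recombination is a simple sum; Signia’s actual compressor design is not published:

```python
def compress(stream, threshold=0.5, ratio=4.0):
    """Minimal static compressor: magnitude above the threshold grows
    at 1/ratio of its input rate."""
    out = []
    for x in stream:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

def split_compress(front_stream, back_stream):
    """Compress each stream independently, then recombine: a sudden loud
    sound from behind is turned down without touching frontal speech."""
    return [f + b for f, b in zip(compress(front_stream), compress(back_stream))]
```

Because only the rear stream crosses the threshold, only the rear stream is attenuated – which is exactly a directional amplification pattern, even though each compressor by itself is ordinary.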
Most hearing aids on the market today claim to have directional noise reduction, as did Signia’s former platforms. However, this claim relates to the analysis of the noise’s direction of arrival, not to the application of the noise reduction itself.
It is well-documented that noise reduction algorithms improve a wearer’s listening comfort but offer minimal speech intelligibility benefit. Because noise reduction algorithms change overall gain, they tend to reduce the gain for speech within each channel.
In real-world listening conditions, speech signals and background noise often arrive from different directions. Augmented Focus splits the noise and speech signals into two separate streams and can therefore detect noise more efficiently in the stream without speech. Noise reduction is then applied only in that stream. Speech inputs are thus unaffected by the noise reduction algorithm, and speech intelligibility tends to be markedly improved.