
Building (sound) character into cars

Tina Jeffrey
Modern engines are overachievers when it comes to fuel efficiency — but they often score a C minus in the sound department. Introducing a solution that can make a subtle but effective difference.

Car engines don’t sound like they used to. Correction: They don’t sound as good as they used to. And for that, you can blame modern fuel-saving techniques, such as the practice of deactivating cylinders when engine load is light. Still, if you’re an automaker, delivering an optimal engine sound is critical to ensuring a satisfying user experience. To address this need, we’ve released QNX Acoustics for Engine Sound Enhancement (ESE), a complementary technology to our solution for active noise control.

The why
We first demonstrated our ESE technology at CES 2014 in the QNX technology concept car for acoustics.
Many people assume, erroneously, that ESE is about giving cars an outsized sonic personality — such as making a Smart ForTwo snarl like an SRT Hellcat. While that is certainly possible, most automakers will use ESE to augment engine sounds in subtle but effective ways that bolster the emotional connection between car and driver — just like engine sounds did in the past. It boils down to creating a compelling acoustic experience for drivers and passengers alike.

ESE isn’t new. Traditionally, automakers have used mechanical solutions that modify the design of the exhaust system or intake pipes to differentiate the sound of their vehicles. Today, automakers are shifting to software-based ESE, which costs less and does a better job at augmenting engine sounds that have been degraded by new, efficient engine designs. With QNX Acoustics for Engine Sound Enhancement, automakers can accurately preserve an existing engine sound for use in a new model, craft a unique sound to market a new brand, or offer distinct sounds associated with different transmission modes, such as sport or economy.

The how
QNX Acoustics for Engine Sound Enhancement is entirely software based. It comprises a runtime library that augments naturally transmitted engine sounds as well as a design tool that provides several advanced features for defining and tuning engine-sound profiles. The library runs on the infotainment system or on the audio system DSP and plays synthesized sound synchronized to the engine’s real-time data: RPM, speed, throttle position, transmission mode, etc.
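To make the synthesis step concrete, here is a minimal sketch of additive engine-sound synthesis driven by RPM. It is purely illustrative: the function name, harmonic gains, and cylinder count are assumptions, not part of the QNX API.

```python
import math

def engine_tone_samples(rpm, duration_s=0.1, sample_rate=48000,
                        harmonics=(1.0, 0.5, 0.25), cylinders=4):
    """Sum a few harmonics of the engine's firing frequency.
    For a four-stroke engine, firing frequency (Hz) is
    rpm / 60 * cylinders / 2."""
    firing_hz = rpm / 60.0 * cylinders / 2.0
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Additive synthesis: one sine per harmonic, with a fixed gain.
        s = sum(gain * math.sin(2 * math.pi * firing_hz * (k + 1) * t)
                for k, gain in enumerate(harmonics))
        samples.append(s)
    return samples
```

A real ESE runtime would regenerate the waveform continuously as RPM and throttle change, and shape it with the gain, filter, and equalization settings defined in the tuning tool.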




The ESE designer tool enables sound designers to create, refashion, and audition sounds directly on their desktops by graphically defining the mapping between a synthesized engine-sound profile and real-time engine parameters. The tool supports both granular and additive synthesis, along with a variety of digital signal processing techniques to configure the audio path, including gain, filter, and static equalization control.



The value
QNX Acoustics for Engine Sound Enhancement offers automakers numerous benefits in the design of sound experiences that best reflect their brand:

  • Ability to design consistent powertrain sounds across the full engine operating range
     
  • Small footprint runtime library that can be ported to virtually any DSP or CPU running Linux or the QNX OS, making it easy to customize all vehicle models and to leverage work done in existing models
     
  • Tight integration with other QNX acoustics middleware libraries, including QNX Acoustics for Active Noise Control, enabling automakers to holistically shape their interior vehicle soundscape
     
  • Dedicated acoustic engineers that can support development and pre-production activities, including porting to customer-specific hardware, system audio path verification, and platform and vehicle acoustic tuning
     
If you’re with an automaker or Tier One and would like to discuss how QNX Acoustics for ESE can address your project requirements, I invite you to contact us at anc@qnx.com.

In the meantime, learn more about this solution on the QNX website.

Top 10 challenges facing the ADAS industry

Tina Jeffrey
It didn’t take long. Just months after the release of the ISO 26262 automotive functional safety standard in 2011, the auto industry began to grasp its importance and adopt it in a big way. Safety certification is gaining traction in the industry as automakers introduce advanced driver assistance systems (ADAS), digital instrument clusters, heads-up displays, and other new technologies in their vehicles.

Governments around the world, in particular those of the United States and the European Union, are calling for the standardization of ADAS features. Meanwhile, consumers are demonstrating a readiness to adopt these systems to make their driving experience safer. In fact, vehicle safety rating systems are becoming a vital ‘go to’ information resource for new car buyers. Take, for example, the European New Car Assessment Programme Advanced (Euro NCAP Advanced). This organization publishes safety ratings on cars that employ technologies with scientifically proven safety benefits for drivers. The emergence of these ratings encourages automakers to exceed minimum statutory requirements for new cars.

Sizing the ADAS market
ABI Research estimates that the global ADAS market, worth US$16.6 billion at the end of 2012, will grow to more than US$260 billion by the end of 2020, a CAGR of 41%. This growth means that cars will ship with more of the following types of safety-certified systems:
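The quoted growth rate can be sanity-checked from the endpoints with the standard CAGR formula:

```python
# CAGR = (end / start) ** (1 / years) - 1
# US$16.6B (end of 2012) to US$260B (end of 2020): 8 years of growth.
start, end, years = 16.6, 260.0, 8
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 41.0%
```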



The 10 challenges
So what are the challenges that ADAS suppliers face when bringing systems to market? Here, in my opinion, are the top 10:
  1. Safety must be embedded in the culture of every organization in the supply chain. ADAS suppliers can't treat safety as an afterthought that is tacked on at the end of development; rather, they must embed it into their development practices, processes, and corporate culture. To comply with ISO 26262, an ADAS supplier must establish procedures associated with safety standards, such as design guidelines, coding standards and reviews, and impact analysis procedures. It must also implement processes to assure accountability and traceability for decisions. These processes provide appropriate checks and balances and allow for safety and quality issues to be addressed as early as possible in the development cycle.
     
  2. ADAS systems are a collaborative effort. Most ADAS systems must integrate intellectual property from a number of technology partners; they are too complex to be developed in isolation by a single supplier. Also, in a safety-certified ADAS system, every component must be certified — from the underlying hardware (be it a multi-core processor, GPU, FPGA, or DSP) to the OS, middleware, algorithms, and application code. As for the application code, it must be certified to the appropriate automotive safety integrity level; the level for the ADAS applications listed above is typically ASIL D, the highest level of ISO 26262 certification.
     
  3. Systems may need to comply with multiple industry guidelines or specifications. Besides ISO 26262, ADAS systems may need to comply with additional criteria, as dictated by the Tier One supplier or automaker. On the software side, these criteria may include AUTOSAR or MISRA. On the hardware side, they will include AEC-Q100 qualification, which involves reliability testing of auto-grade ICs at various temperature grades. ICs must function reliably over temperature ranges that span -40 degrees C to 150 degrees C, depending on the system.
     
  4. ADAS development costs are high. These systems are expensive to build. To achieve economies of scale, they must be targeted at mid- and low-end vehicle segments. Prices will then decline as volume grows and development costs are amortized, enabling more widespread adoption.
     
  5. The industry lacks interoperability specifications for radar, laser, and video data in the car network. For audio-video data alone, automakers use multiple data communication standards, including MOST (Media Oriented Systems Transport), Ethernet AVB, and LVDS. As such, systems must support a multitude of interfaces to ensure adoption across a broad range of vehicle architectures. Systems may also need additional interfaces to support radar or lidar data.
     
  6. The industry lacks standards for embedded vision-processing algorithms. Ask five different developers to develop a lane departure warning system and you’ll get five different solutions. Each solution will likely start with a MATLAB implementation that is ported to run on the selected hardware. If the developer is fortunate, the silicon will support image processing primitives (a library of functions designed for use with the hardware) to accelerate development. TI, for instance, has a set of image and video processing libraries (IMGLIB and VLIB) optimized for their silicon. These libraries serve as building blocks for embedded vision processing applications. For instance, IMGLIB has edge detection functions that could be used in a lane departure warning application.
     
  7. Data acquisition and data processing for vision-based systems are high-bandwidth and computationally intensive. Vision-based ADAS systems present their own set of technical challenges. Different systems require different image sensors operating at different resolutions, frame rates, and lighting conditions. A system that performs high-speed forward-facing driver assistance functions such as road sign detection, lane departure warning, and autonomous emergency braking must support a higher frame rate and resolution than a rear-view camera that performs obstacle detection. (A rear-view camera typically operates at low speeds, and obstacles in the field of view are in close proximity to the vehicle.) Compared to the rear-view camera, an LDW, AEB, or RSD system must acquire and process more incoming data at a faster frame rate, before signaling the driver of an unintentional lane drift or warning the driver that the vehicle is exceeding the posted speed limit.
     
  8. ADAS cannot add to driver distraction. In-vehicle tasks and displays are growing more complex, and systems are becoming more integrated and presenting more data to the driver — a combination that can result in information overload. Information overload can increase cognitive workload, reducing situational awareness and countering the efficacy of ADAS. Systems must therefore be easy to use, should employ the most appropriate modalities (visual, auditory, haptic, etc.), and should be designed to encourage driver adoption. Development teams must establish a clear specification of the driver-vehicle interface early in development to ensure that user and system requirements are aligned.
     
  9. Environmental factors affect ADAS. ADAS systems must function under a variety of weather and lighting conditions. Ideally, vision-based systems should be smart enough to understand when they are operating in poor visibility scenarios such as heavy fog or snow, or when direct sunlight shines into the lens. If the system detects that the lens is occluded or that the lighting conditions are unfavorable, it can disable itself and warn the driver that it is non-operational. Another example is an ultrasonic parking sensor that becomes prone to false positives when encrusted with mud. Combining the results of different sensors or different sensor technologies (sensor fusion) can often provide a more effective solution than using a single technology in isolation.
     
  10. Testing and validating is an enormous undertaking. Arguably, testing and validation is the most challenging aspect of ADAS development, especially when it comes to vision systems. Prior to deploying a commercial vision system, an ADAS development team must amass hundreds if not thousands of hours of video clips in a regression test database, in an effort to test all scenarios. The ultimate goal is to achieve 100% accuracy and zero false positives under all possible conditions: traffic, weather, number of obstacles or pedestrians in the scene, etc. But how can the team be sure that the test database comprises all test cases? The reality is that they cannot — which is why suppliers spend years testing and validating systems, and performing extensive real-world field-trials in various geographies, prior to commercial deployment.
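The edge-detection building block mentioned in challenge 6 can be sketched in a few lines. This is a plain-Python Sobel filter for illustration only, not TI’s IMGLIB API; a production system would use optimized, hardware-accelerated primitives.

```python
def sobel_vertical_edges(img):
    """Respond to vertical edges (e.g. lane markings in a rectified
    road image). img is a 2-D list of grayscale values; returns a
    same-sized map of |horizontal gradient|, with zeroed borders."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]  # Sobel kernel for the x (horizontal) gradient
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx)
    return out
```

An LDW pipeline would typically threshold this gradient map and fit lines (for example, with a Hough transform) to track lane-marking position from frame to frame.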
     
There are many hurdles to bringing ADAS to mainstream vehicles, but clearly, they are surmountable. ADAS systems are commercially available today, consumer demand is high, and the path towards widespread adoption is paved. If consumer acceptance of ADAS provides any indication of societal acceptance of autonomous drive, we’re well on our way.

A sound approach to creating a quieter ride

Tina Jeffrey
Add sound to reduce noise levels inside the car. Yup, you read that right. And while it may seem counterintuitive, it’s precisely what automakers are doing to provide a better in-car experience. Let’s be clear: I’m not talking about playing a video of SpongeBob SquarePants on the rear-seat entertainment system to keep noisy kids quiet — although I can personally attest to the effectiveness of this method. Rather, I’m referring to deliberately synthesized sound played over a vehicle’s speakers to cancel unwanted low-frequency engine tones in the passenger compartment, yielding a quieter and more pleasant ride.

So why is this even needed? It comes down to fuel economy. Automakers are continually looking at ways to reduce fuel consumption through techniques such as variable cylinder management (reducing the number of cylinders in operation under light engine load) and operating the engine at lower RPM. Some automakers are even cutting back on passive damping materials to decrease vehicle weight. These approaches do indeed reduce consumption, but they also result in more engine noise permeating the vehicle cabin, creating a noisier ride for occupants. To address the problem, noise, vibration, and harshness (NVH) engineers — the OEM engineers responsible for characterizing and improving sound quality in vehicles — are using innovative sound technologies such as active noise control (ANC).

Automotive ANC technology is analogous to the technology used in noise-cancelling headphones but is more difficult to implement, as developers must optimize the system based on the unique acoustic characteristics of the cabin interior. An ANC system must be able to function alongside a variety of other audio processing tasks such as audio playback, voice recognition, and hands-free communication.


The QNX Acoustics for Active Noise Control solution uses realtime engine data and sampled microphone data from the cabin to construct the “anti-noise” signal played over the car speakers.

So how does ANC work?
According to the principle of superposition, sound waves will travel and reflect off glass, the dash, and other surfaces inside the car; interfere with each other; and yield a resultant wave of greater or lower amplitude than the original waves. The result varies according to where in the passenger compartment the signal is measured. At some locations, the waves will “add” (constructive interference); at other locations, the waves will “subtract” or cancel each other (destructive interference). Systems must be tuned and calibrated to ensure optimal performance at driver and passenger listening positions (aka “sweet spots”).

To reduce offending low-frequency engine tones (typically below 150 Hz), an ANC system requires real-time engine data (including RPM) in addition to signals from the cabin microphones. The system then synthesizes “anti-noise” signals — equal in amplitude but opposite in phase to the offending engine tones — and emits them through the car’s speakers. The net effect is a reduction of the offending tones.


According to the superposition principle of sound waves, a noise signal and an anti-noise signal will cancel each other if the signals are 180 degrees out of phase. Image adapted from Wikipedia.
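The cancellation principle shown above is easy to demonstrate numerically. This sketch assumes an ideal anti-noise signal; a real ANC system must estimate the required amplitude and phase adaptively, per speaker-microphone path, from the engine data and cabin microphones.

```python
import math

sample_rate = 48000
tone_hz = 100  # a low-frequency engine order, well under 150 Hz

# One 10 ms burst of the offending tone...
noise = [math.sin(2 * math.pi * tone_hz * i / sample_rate)
         for i in range(480)]
# ...and an ideal anti-noise signal: equal amplitude, 180 degrees
# out of phase (i.e., sample-by-sample negation).
anti = [-s for s in noise]

# The cabin air sums the two waves; the residual is silence.
residual = [n + a for n, a in zip(noise, anti)]
print(max(abs(r) for r in residual))  # 0.0 in this ideal case
```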

Achieving optimal performance for these in-vehicle systems is complex, and here’s why. First off, there are multiple sources of sound inside a car — some desirable and some not. These include the infotainment system, conversation between vehicle occupants, the engine, road, wind, and structural vibrations from air intake valves or the exhaust. Also, every car interior has unique acoustic characteristics. The location and position of seats; the position, number, and type of speakers and microphones; and the materials used inside the cabin all play a role in how an ANC system performs.

To be truly effective, an ANC solution must adapt quickly to changes in vehicle cabin acoustics that result from changes in acceleration and deceleration, windows opening and closing, changes in passenger seat positions, and temperature changes. The solution must also be robust; it shouldn’t become unstable or degrade the audio quality inside the cabin should, for example, a microphone stop working.

The solution for every vehicle model must be calibrated and tuned to achieve optimal performance. Besides the vehicle model, engine noise characteristics, and number and arrangement of speakers and microphones, the embedded platform being used also plays a role when tuning the system. System tuning can, with conventional solutions, take months to reach optimal performance levels. Consequently, solutions that ease and accelerate the tuning process, and that integrate seamlessly into a customer’s application, are highly desirable.

Automotive ANC solutions — then and now
Most existing ANC systems for engine noise require a dedicated hardware control module. But automakers are beginning to realize that it’s more cost effective to integrate ANC into existing vehicle hardware systems, such as the infotainment head unit. This level of integration facilitates cooperation between different audio processing tasks, such as managing a hands-free call and reducing noise in the cabin.

Earlier today, QNX announced the availability of a brand new software product that targets ANC for engine tone reduction in passenger vehicles. It’s a flexible, software-based solution that can be ported to floating- or fixed-point DSPs or application processors, including ARM, SHARC, and x86, and it supports systems with or without an OS. A host application that executes on the vehicle’s head unit or audio amplifier manages ANC through the library’s API calls. As a result, the host application can fully integrate ANC functionality with its other audio tasks and control the entire acoustic processing chain.

Eliminating BOM costs
The upshot is that the QNX ANC solution can match or surpass the performance of a dedicated hardware module — and we have the benchmarks to show it. Let me leave you with some of the highlights of the QNX Acoustics for Active Noise Control solution:

  • Significantly better performance than dedicated hardware solutions — The QNX solution can provide up to 9 dB of reduction at the driver’s head position, compared to 5 dB for a comparable hardware solution in the same vehicle under the same conditions.
     
  • Significant BOM cost savings — Eliminates the cost of a dedicated hardware module.
     
  • Flexible and configurable — Can be integrated into the application processor or DSP of an existing infotainment system or audio amplifier, and can run on systems with or without an OS, giving automakers implementation choices. Also supports configurations of up to six microphones and six speaker channels.
     
  • Faster time to market — Speeds development by shortening tuning efforts from many months to weeks. Also, a specialized team of QNX acoustic engineers can provide software support, consulting, calibration, and system tuning.

For the full skinny on QNX Acoustics for Active Noise Control, visit the QNX website.

QNX Acoustics for Voice — a new name and a new benchmark in acoustic processing


Tina Jeffrey
Earlier this month, QNX Software Systems officially released QNX Acoustics for Voice 3.0 — the company’s latest generation of acoustic processing software for automotive hands-free voice communications. The solution sets a new benchmark in hands-free quality and supports the rigorous requirements of smartphone connectivity specifications.

Designed as a complete software solution, the product includes both the QNX Acoustics for Voice signal-processing library and the QWALive tool for tuning and configuration.

The signal-processing library manages the flow of audio during a hands-free voice call. It defines two paths: the send path, which handles audio flowing from the microphones to the far end of the call, and the receive path, which handles audio flowing from the far end to the loudspeakers in the car:
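As a toy illustration of this two-path split, the following sketch (hypothetical class and method names, not the actual QNX Acoustics for Voice API) shows where the major processing stages sit:

```python
class HandsFreeAudio:
    """Toy sketch of the send/receive split; hypothetical names,
    not the actual QNX Acoustics for Voice API."""

    def send(self, mic_frame, speaker_frame):
        # Mic audio headed to the far end: remove the loudspeaker
        # echo first, then suppress cabin noise.
        frame = self.echo_cancel(mic_frame, speaker_frame)
        return self.noise_reduce(frame)

    def receive(self, far_end_frame):
        # Far-end audio headed to the loudspeakers.
        return self.gain_control(far_end_frame)

    # Placeholder stages; the real stages are adaptive algorithms.
    def echo_cancel(self, mic, ref):
        return [m - r for m, r in zip(mic, ref)]

    def noise_reduce(self, frame):
        return [s if abs(s) > 0.01 else 0.0 for s in frame]

    def gain_control(self, frame):
        return [0.5 * s for s in frame]
```

Note how the echo canceller in the send path uses the receive-path signal as its reference — the two paths cooperate rather than run in isolation.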





QWALive, used throughout development and pre-production phases, gives developers realtime control over all library parameters to accelerate tuning and diagnosis of audio issues:



A look under the hood
QNX Acoustics for Voice 3.0 builds on QNX Software Systems’ best-in-class acoustic echo cancellation and noise reduction algorithms, road-proven in tens of millions of cars, and offers breakthrough advancements over existing solutions.

Let me run through some of the innovative features that are already making waves (sorry, couldn’t resist) among automotive developers.

Perhaps the most significant innovation is our high efficiency technology. Why? Well, simply put, it saves up to 30% both in CPU load and in memory requirements for wideband (16 kHz sample rate for HD Voice) and Wideband Plus (24 kHz sample rate). This translates into the ability to do more processing on existing hardware, and with less memory. For instance, automakers can enable new smartphone connectivity capabilities on current hardware, without compromising performance:



Another feature that premieres with this release is intelligent voice optimization technology, designed to accelerate and increase the robustness of send-path tuning. This technology implements an automated frequency response correction model that dynamically adjusts the frequency response of the send path to compensate for variations in the acoustic path and vehicle cabin conditions.

Dynamic noise shaping, which is exclusive to QNX Acoustics for Voice, also debuts in this release. It enhances speech quality in the send path by reducing broadband noise from fans, defrost vents, and HVAC systems — a welcome feature, as broadband noise can be particularly difficult for hands-free systems to contend with.

Flexibility and portability — check and check
Like its predecessor (QNX Aviage Acoustic Processing 2.0), QNX Acoustics for Voice 3.0 continues to offer maximum flexibility to automakers. The modular software library comes with a comprehensive API, easing integration efforts into infotainment, telematics, and audio amplifier modules. Developers can choose from fixed- and floating-point versions that can be ported to a variety of operating systems and deployed on a wide range of processors or DSPs.

We’re excited about this release as it’s the most sophisticated acoustic voice processing solution available to date, and it allows automakers to build and hone systems for a variety of speech requirements, across all their vehicle platforms.

Check out the QNX Acoustics for Voice product page to learn more.

DevCon5 recap: building apps for cars

Tina Jeffrey
Last week I had the pleasure of presenting at the DevCon5 HTML5 & Mobile App Developers Conference, held at New York University in the heart of NYC. The conference was abuzz with the latest and greatest web technologies for a variety of markets, including gaming, TV, enterprise, mobile, retail, and automotive.

The recurring theme throughout the event was that HTML5 is mainstream. Even though HTML5 still requires some ripening as a technology, it is definitely the burgeoning choice for app developers who wish to get their apps onto as many platforms as possible, quickly and cost effectively. And when a developer is confronted with a situation where HTML5 falls short (perhaps a feature that isn’t yet available), then hybrid is always an option. At the end of the day, user experience is king, and developers need to design and ship apps that offer a great experience and keep users engaged, regardless of the technology used.

Mainstream mobile device platforms all have web browsers to support HTML5, CSS3, and JavaScript. And there’s definitely no shortage of mobile web development frameworks to build consumer and enterprise apps that look and perform like native programs. Many of these frameworks were discussed at the conference, including jQuery Mobile, Dojo Mobile, Sencha Touch, and AngularJS. Terry Ryan of Adobe walked through building a PhoneGap app and discussed how the PhoneGap Build tool lets programmers upload their code to a cloud compiler and automatically generate apps for every supported platform — very cool.

My colleague Rich Balsewich, senior enterprise developer at BlackBerry, hit a home run with his presentation on the multiple paths to building apps. He walked us through developing an HTML5 app from end to end, and covered future features and platforms, including the automobile. A special shout-out to Rich for plugging my session “The Power of HTML5 in the Automobile” held later that afternoon.

My talk provided app developers with some insight into creating apps for the car, and discussed the success factors that will enable automakers to leverage mobile development — key to achieving a rich, personalized, connected user experience. Let me summarize with the salient points:

What’s needed, and what we’re doing about it

  • The automotive community wants apps, and HTML5 provides a common app platform for infotainment systems. In response, we’ve implemented an HTML5 application framework in the QNX CAR Platform for Infotainment.

  • Automotive companies must leverage the broad mobile developer ecosystem to bring differentiated automotive apps and services to the car. We’re helping by getting the word out and by building a cloud-based app repository that will enable qualified app partners to get their apps in front of automotive companies. We plan to roll out this repository with the release of the QNX CAR Platform 2.1 in the fall.

  • The developer community needs standardized automotive APIs. We’re co-chairing the W3C Automotive and Web Platform Business Group, which has a mandate to create a draft specification of a vehicle data API. We’re also designing the QNX CAR Platform APIs to be Apache Cordova-compliant.

  • Automotive platform vendors must supply tools that enable app developers to build and test their apps. We plan to release the QNX CAR Platform 2.1 with open, accessible tooling to make it easy for developers to test their apps in a software-only environment.

OTA software: not just building castles in the air

Tina Jeffrey
After attending Telematics Detroit earlier this month, I realized more than ever that M2M will become the key competitive differentiator for automakers. With M2M, automakers can stay connected with their vehicles and perhaps more importantly, vehicle owners, long after the cars have been driven off dealer lots. Over-the-air (OTA) technology provides true connectivity between automakers and their vehicles, making it possible to upgrade multiple systems, including electronic control unit (ECU) software, infotainment systems that provide navigation and smartphone connectivity, and an ever-increasing number of apps and services.

Taken together, the various systems in a vehicle contain up to 100 million lines of code — which makes the 6.5 million lines of code in the Boeing 787 Dreamliner seem like a drop in the proverbial bucket. Software in cars will only continue to grow in both amount and complexity, and the model automakers currently use to maintain and upgrade vehicle software isn’t scalable.

Vehicle owners want to keep current with apps, services, and vehicle system upgrades, without always having to visit the dealer. Already, vehicle owners update many infotainment applications by accepting software pushed over the air, just like they update applications on their smartphones. But this isn’t currently the case for ECUs, which require either a complete module replacement or module re-flashing at a dealership.

Pushing for updates
Automakers know that updates must be delivered to vehicle owners in a secure, seamless, and transparent fashion, similar to how OTA updates are delivered to mobile phones. Vehicle software updates must be even more reliable, given that they are far more critical.


BlackBerry’s OTA solution: Software Update Management for Automotive service

With OTA technology, automakers will use wireless networks to push software updates to vehicles automatically. The OTA service will need to notify end users of updates as they become available and allow them to schedule the upgrade process at a convenient time. Large software updates that may take a while to download and install could be scheduled to run overnight while the car is parked in the garage, making use of the home Wi-Fi connection. Smaller updates could be delivered over a cellular connection through a tethered smartphone while on a road trip. In this latter scenario, an update could be interrupted if, for instance, the car travels into a tunnel or out of network coverage.
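A delivery policy along the lines described above might look like the following sketch. Everything here (the function name, the 100 MB threshold, the return codes) is hypothetical, for illustration only:

```python
def choose_delivery(update_bytes, wifi_available,
                    large_threshold=100 * 1024 * 1024):
    """Pick a delivery strategy for an OTA update. All names and
    the 100 MB threshold are hypothetical, for illustration only."""
    if update_bytes >= large_threshold:
        # Big images (e.g. a full infotainment load) wait for Wi-Fi
        # and an overnight window while the car is parked.
        return "wifi_overnight" if wifi_available else "defer"
    # Small updates can ride a cellular or tethered-phone link now.
    return "cellular_now"
```

The tunnel scenario also implies that downloads should be chunked and resumable, with integrity verification before any install begins.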

A win-win-win
Deployment of OTA software updates is a winning proposition for automakers, dealers, and vehicle owners. Automakers could manage the OTA software updates themselves, or extend the capability to their dealer networks. Either way, drivers will benefit from the convenience of up-to-date software loads, content, and apps with less frequent trips to the dealer. Dealership appointments would be limited to mechanical work, and could be scheduled automatically according to the vehicle’s diagnostic state, which could be transmitted over the air, routinely, to the dealer. With this sharing of diagnostic data, vehicle owners would better know how much they need to shell out for repairs in advance of the appointment, with less chance of a shocking repair-cost phone call.

OTA technology also provides vehicle owners and automakers with the ability to personalize the vehicle. Automaker-pushed content can be carefully controlled to target the driver’s needs, reflect the automaker's brand, and avoid distraction — rather than the unrestricted open content found on the internet, which could be unsafe for consumption while driving. Overall, OTA software updates will help automakers maintain the customers they care about, engender brand loyalty, and provide the best possible customer experience.

Poised to lead
Thinking back to Telematics Detroit, if the number of demos my BlackBerry colleagues gave of their Software Update Management for Automotive service is any indication, OTA will transform the auto industry. According to a study from Gartner (“U.S. Consumer Vehicle ICT Study: Web-Based Features Continue to Rise” by Thilo Koslowski), 40 percent of all U.S. vehicle owners either “definitely want to get” or at least are “likely to get” the ability for wireless software updates in their next new vehicle — making it the third most demanded automotive-centric Web application and function.

BlackBerry is poised to lead in this space, given its expertise in infrastructure, security, and software management, and its close ties to automotive. The company was a leader in building an OTA solution for the smartphone market, and is now among the first to enable a solution that is network, hardware, firmware, OS, software, and application agnostic.

Crisper, clearer in-car communication — Roger that

Tina Jeffrey
Over the years, Telematics Detroit has become a premier venue for showing off advancements in automotive infotainment, telematics, apps, cloud connectivity, silicon, and more. If the breadth of QNX technology being demonstrated at the show this week is any indication, the event won’t disappoint. Among the highlights is our next-generation acoustics processing middleware — QNX Acoustics for Voice 3.0 — which has been architected to deliver the highest-quality audio for hands-free and speech recognition systems, enabling the ultimate acoustics experience in the car.

What is QNX Acoustics for Voice?
QNX Acoustics for Voice 3.0 is the successor to the QNX Aviage Acoustics Processing Suite 2.0. The new product includes a set of libraries — standard and premium — that offer automakers ultimate flexibility for voice processing in the harsh audio environment of the car.

The standard library provides a full-featured solution for implementing narrowband and wideband hands-free communications, operating at 8 kHz and 16 kHz sample rates, respectively. It also includes new features for echo cancellation, noise reduction, adaptive equalization, and automatic gain control. Perhaps the most valuable feature, especially for systems constrained by limited CPU cycles, is high efficiency mode, which can process wideband and higher-bandwidth speech with substantially less CPU load. The net result: more processing headroom for other tasks.

The premium library includes all the standard library functionality, plus support for Wideband Plus, which expands the frequency range of transmitted speech to 50 Hz to 11 kHz, at a 24 kHz sample rate. Wideband Plus meets the higher voice quality and lower noise requirements of the latest smartphone connectivity protocols for telephony, VoIP services, and speech recognition. Let me recap with a table:

| Supported capabilities | Standard library | Premium library |
| --- | --- | --- |
| Narrowband audio: 300 Hz to 3400 Hz (8 kHz sample rate) | ✓ | ✓ |
| Wideband audio: 50 Hz to 7000 Hz (16 kHz sample rate) | ✓ | ✓ |
| Wideband Plus audio: 50 Hz to 11 kHz (24 kHz sample rate) | | ✓ |
| High efficiency mode | ✓ (wideband only) | ✓ |
| VoIP requirements for new smartphone connectivity protocols | | ✓ |
| Cloud-based speech recognition requirements for new smartphone connectivity protocols | | ✓ |

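As an aside, the sample rates above follow directly from the Nyquist limit: each band's upper frequency edge must sit below half its sample rate. A quick illustrative sketch (the names and structure here are mine, not part of any QNX API):

```python
# Illustrative only: the voice bands described above, checked against the
# Nyquist limit (a signal component is representable only below fs / 2).
BANDS = {
    "narrowband":    {"sample_rate_hz": 8000,  "speech_band_hz": (300, 3400)},
    "wideband":      {"sample_rate_hz": 16000, "speech_band_hz": (50, 7000)},
    "wideband_plus": {"sample_rate_hz": 24000, "speech_band_hz": (50, 11000)},
}

def nyquist_ok(band):
    """True if the band's upper edge lies below half the sample rate."""
    return band["speech_band_hz"][1] < band["sample_rate_hz"] / 2

for name, band in BANDS.items():
    print(name, nyquist_ok(band))  # all three bands satisfy the limit
```

Note how Wideband Plus leaves headroom between 11 kHz and the 12 kHz Nyquist frequency, which is typical to allow for anti-aliasing filter roll-off.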
Why is high-quality speech important in the car?

Simply put, it improves the user experience and can benefit passenger safety. Also, new smartphone connectivity protocols require it. Let’s examine two use cases: hands-free voice calling, and speech recognition.

In a voice call, processing a larger bandwidth of speech and eliminating echo and noise from various sources, including wind, road, vents, fans, and tires, dramatically increases speech intelligibility — and the more intelligible the speech, the more natural the flow of conversation. Also, clearer speech has less impact on the driver’s cognitive load, enabling the driver to pay more attention to the task at hand: driving.
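QNX's echo cancellation is proprietary, but the general idea behind acoustic echo cancellation can be sketched with a classic technique: a normalized LMS (NLMS) adaptive filter that learns the loudspeaker-to-microphone echo path and subtracts the estimated echo from the microphone signal. Everything below (function name, parameters, toy signals) is illustrative, not the QNX implementation:

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, num_taps=64, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter: estimates the echo path from the
    far-end (loudspeaker) signal and subtracts the estimate from the mic."""
    w = np.zeros(num_taps)          # adaptive FIR estimate of the echo path
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        # Most recent num_taps far-end samples, newest first, zero-padded
        x = far_end[max(0, n - num_taps + 1): n + 1][::-1]
        x = np.pad(x, (0, num_taps - len(x)))
        y_hat = w @ x               # estimated echo at this sample
        e = mic[n] - y_hat          # error = echo-free signal estimate
        w += (mu / (x @ x + eps)) * e * x   # NLMS weight update
        out[n] = e
    return out

# Toy check: mic picks up a delayed, attenuated copy of the far-end signal.
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
echo = 0.6 * np.concatenate([np.zeros(10), far[:-10]])
residual = nlms_echo_cancel(far, echo)
# After convergence the residual energy falls far below the echo energy.
```

A production canceller layers much more on top of this (double-talk detection, nonlinear processing, noise reduction), which is where commercial solutions differentiate themselves.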

Speech recognition systems are becoming a primary way to manage apps and services in the car. Voice commands can initiate phone calls, select media for playback, search for points of interest (POI), and choose a destination.

Technological advancements in pre-processing voice input to remove noise and disturbances help speech recognizers detect commands more reliably, thereby achieving higher recognition accuracy. Early speech recognition systems, by comparison, were unintuitive and performed poorly. Drivers became so frustrated that they stopped using these systems and picked up their smartphones instead, eliminating the safety benefits of speech recognition.
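The pre-processing in a commercial product is proprietary, but one classic noise-reduction technique used ahead of speech recognizers is spectral subtraction: subtract an estimated noise magnitude spectrum from each frame while keeping the noisy phase. A minimal sketch (function name and parameters are illustrative):

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.05):
    """One-frame spectral subtraction: subtract an estimated noise
    magnitude spectrum, keep the noisy phase, and floor the result
    to limit 'musical noise' artifacts."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    phase = np.angle(spec)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

# Toy usage: a 440 Hz tone buried in white noise, with the noise spectrum
# estimated from a separate noise-only segment.
fs, n = 8000, 512
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.standard_normal(n)
noise_mag = np.abs(np.fft.rfft(0.3 * rng.standard_normal(n)))
denoised = spectral_subtract(noisy, noise_mag)
```

Real automotive front ends use far more sophisticated, adaptive estimators, but the principle of cleaning the signal before it reaches the recognizer is the same.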

QNX Acoustics for Voice 3.0 is a comprehensive automotive voice solution that includes industry-leading echo cancellation, noise reduction, adaptive equalization and automatic gain control.

If you happen to be at Telematics Update in Novi, Michigan this week, be sure to drop by our booth to sit in our latest concept car, a specially modified Mercedes-Benz CLA45 AMG, and experience our acoustics technologies firsthand.