Years after voice recognition was introduced in vehicles, why is this technology still beset by glitches?
Google Now, Apple's Siri and Microsoft's Cortana serve millions of smartphone owners, but automakers are still struggling to develop comparably reliable and effective in-car systems.
According to the J.D. Power 2016 U.S. Initial Quality Study, 23 percent of all problems reported by car buyers involved infotainment, and voice-recognition systems remain a huge part of the problem.
“Voice recognition is still the No. 1 problem that we see,” says Renee Stephens, J.D. Power’s vice president of U.S. automotive quality. “This year, we started to see some improvement, but it’s been slow.”
It’s not just a problem for older motorists. According to Stephens, voice-based commands are among the top five “difficult-to-use” cockpit technologies for Generation X, Generation Y and baby boomers.
Nuance Communications Inc. of Burlington, Mass., the auto industry’s top supplier of in-vehicle voice systems through its Dragon Drive technology, says voice-recognition technology is improving.
Arnd Weil, general manager of Nuance Automotive, points to the BMW 7-series sedan as an indicator of improvements to come. Last year, Nuance rolled out a major voice software upgrade for the redesigned 7 series.
“We are showing the 7 series as a benchmark,” Weil says. “Other vehicles are coming that will advance beyond that.”
The new software lets motorists issue conversational commands, dictate text messages and even interrupt the system with follow-up instructions. Reliability is improving, although progress has been uneven, Weil says.
For example, voice systems do pretty well with phone numbers, according to Weil. Accuracy rates for voice dialing typically range from 90 to 95 percent.
Voice systems also are getting better with addresses for route guidance. But it’s trickier to handle less structured commands such as requests relating to places of interest.
Early voice systems relied on databases that were limited to major retail chains such as Starbucks shops, Exxon stations or Hilton hotels. But newer in-car computers connect to the cloud to identify just about any place of business — as long as the system understands the motorist's voice.
That’s where things get problematic. For example, restaurants often have difficult-to-understand foreign names, Weil says.
Another problem is noisy passenger cabins. For example, the computer may have trouble understanding the motorist if other passengers are talking.
To solve this problem, automakers have begun using two microphones instead of one. Acting like a pair of human ears, the two microphones can pinpoint each talker's position and thus single out the driver's voice. This system, dubbed "beam-forming," was first adopted by Audi, Mercedes-Benz and BMW, Weil says. Some mass-market brands are now adopting it.
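The idea behind two-microphone beam-forming can be sketched in a few lines: sound from the driver reaches each microphone at a slightly different time, so cross-correlating the two signals reveals the offset, and shifting one signal to cancel that offset reinforces the driver's voice. This is a minimal delay-and-sum illustration, not Nuance's or any automaker's actual implementation; all names here are assumptions.

```python
import numpy as np

def alignment_shift(ref, sig):
    """Return the circular shift (in samples) that best aligns sig with ref,
    found by peaking the full cross-correlation."""
    corr = np.correlate(ref, sig, mode="full")
    return int(np.argmax(corr)) - (len(sig) - 1)

def delay_and_sum(ref, sig, shift):
    """Shift the second microphone's signal into alignment and average,
    reinforcing the target talker relative to uncorrelated noise."""
    return (ref + np.roll(sig, shift)) / 2.0

# Simulate a talker whose sound reaches microphone B three samples after A.
rng = np.random.default_rng(0)
voice = rng.standard_normal(1000)
mic_a = voice
mic_b = np.roll(voice, 3)  # delayed copy

shift = alignment_shift(mic_a, mic_b)   # → -3 (advance B by three samples)
beam = delay_and_sum(mic_a, mic_b, shift)
```

Summing the aligned channels boosts the driver's voice, while sounds arriving from other directions (a talking passenger, road noise) stay misaligned and partially cancel.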
More powerful computer chips have allowed yet another improvement. Because cloud connectivity isn’t always available, automakers provide backup voice recognition via a computer chip installed in the vehicle.
These “embedded” systems are less sophisticated than cloud-based voice recognition, but they’re getting better, Weil says.
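The cloud-first, embedded-fallback arrangement described above amounts to a simple routing decision: try the more capable cloud recognizer when connectivity allows, and hand the audio to the on-board chip otherwise. The following is a hypothetical sketch; the function names are illustrative, not a real automotive API.

```python
def recognize(audio, cloud_available, cloud_engine, embedded_engine):
    """Prefer the cloud recognizer; fall back to the embedded (on-chip)
    recognizer when the vehicle is offline or the request fails."""
    if cloud_available:
        try:
            return cloud_engine(audio)
        except ConnectionError:
            pass  # connectivity dropped mid-request; fall through
    return embedded_engine(audio)

# Illustrative stand-ins for the two recognizers.
def cloud_ok(audio):
    return "cloud:" + audio

def cloud_down(audio):
    raise ConnectionError

def embedded(audio):
    return "embedded:" + audio
```

The embedded path handles a smaller vocabulary, which is why, as Weil notes, it lags the cloud in sophistication even as on-board chips improve.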
Despite the improvements, vehicles still are unlikely to match the performance of a $200 cellphone. Passenger cabins are noisy, and motorists must keep an eye on traffic.
Despite Nuance’s efforts, it will be a slow grind to improve those J.D. Power ratings.