
Talking Business: The future for voice technology in the Enterprise

Written by John Harris, Global Research & Development Director at Panasonic Mobile Solutions Business Division


In our day-to-day lives, most of us have used our voice to interact with technology, but you may be surprised at just how fast this technology is being adopted. In Europe, we are never too far behind the US in adopting technology trends. Recent reports show that, as more consumers turn to smart speakers to make their lives more convenient, Amazon’s voice-activated devices now account for around 70 per cent of the US market, with close to 100 million units in American homes. Digital assistants are expected to reach around 8 billion units by 2023 – more than the entire world population today. This rapid rise in uptake means that people are becoming increasingly familiar with such technology.

That’s amazing when you consider that, not so long ago, the closest most people had come to voice technology in business was contacting those frustrating call centres, where the system seemed almost designed to prevent you from speaking to a real person. But the technology is now light years ahead. It’s much more like my childhood hero, the talking computer from Star Trek: accurate voice recognition, no misunderstandings, no repetition. The technology has become what voice interaction really should be, and the opportunities to save time and effort using natural voice now seem boundless.

Roots in the ’70s
So where did it all begin? The history of speech recognition and digital assistants really began in 1971. There were some developments before that, but for me, Harpy, created at Carnegie Mellon University, was the first significant step. It could comprehend over a thousand words and some phrases – the first real working version.

In 1986, IBM launched Tangora. It used the Hidden Markov Model, which uses statistics to predict upcoming phonemes in speech, and as a result it took a giant leap forward, recognising over 20,000 words.
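To make the Hidden Markov Model idea concrete, here is a minimal sketch: hidden states are phonemes, observations are acoustic symbols, and the forward algorithm scores how likely a sequence of observations is under the model. The two-phoneme model and all probabilities below are illustrative assumptions, not the parameters of any real system.

```python
# Toy HMM: hidden phoneme states emit acoustic symbols ("lo"/"hi").
states = ["ae", "t"]                      # hypothetical phoneme states
start = {"ae": 0.6, "t": 0.4}             # initial state probabilities
trans = {                                 # P(next phoneme | current phoneme)
    "ae": {"ae": 0.3, "t": 0.7},
    "t":  {"ae": 0.6, "t": 0.4},
}
emit = {                                  # P(acoustic symbol | phoneme)
    "ae": {"lo": 0.8, "hi": 0.2},
    "t":  {"lo": 0.1, "hi": 0.9},
}

def sequence_probability(obs):
    """Forward algorithm: total probability of an observation sequence."""
    # Initialise with the first observation.
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    # Fold in each subsequent observation.
    for symbol in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][symbol]
            for s in states
        }
    return sum(alpha.values())

print(round(sequence_probability(["lo", "hi"]), 4))  # prints 0.3504
```

A real recogniser works the same way at vastly larger scale, comparing these scores across competing word hypotheses to pick the most probable transcription.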

In 1997, the first continuous dictation product came from Dragon Systems: NaturallySpeaking 1.0. Then in 2007, PAL (Personalized Assistant that Learns) was born out of a US defence research programme run by DARPA, and artificial intelligence began to play a major role. Siri Inc. was spun out of this programme and later acquired by Apple.

In 2008, Google unveiled its voice search application for mobile phones, bringing cloud-powered voice recognition to the masses. In 2011, it was Apple again, with the launch of Siri, and in 2014 Amazon launched the Echo, powered by Alexa, its digital voice assistant. Although late to the party, as the statistics earlier show, Amazon has taken voice applications into millions of homes.

[Header image by Jason Rosewell on Unsplash]


So what are the opportunities for these advanced voice applications in business? I think we are just beginning to scratch the surface. A number of simple applications transfer readily to the work environment, such as voice-activated remote controls, or online digital assistants and bots that perform simple tasks like guiding you to the correct department in a larger organisation. For a mobile workforce in the Enterprise, this type of application could be used in a variety of areas. Imagine walking into a remote utilities facility and asking your mobile computing device when a particular component was last serviced, or to identify its last 10 faults – perhaps even to talk you through a particular maintenance or repair task. These types of solutions could all be possible, alongside the more obvious questions about details for your next location visit, or even where you’ve misplaced your keys!

Like all developments, these potential new applications bring challenges. If you are underground or out of reach of a Wi-Fi connection, your mobile computing device will need the processing power to analyse the task and enough storage for the natural language processing library. The application will also need to be trained to handle different accents and pronunciations. However, working offline has benefits too: processing remains on the device, giving added security, and responses should be faster because nothing needs to travel to the cloud and back.
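The offline-first trade-off described above can be sketched as a simple routing decision: prefer the on-device result, and reach for the cloud only when a connection exists and the local result is not confident enough. The recogniser functions below are hypothetical stand-ins, not a real speech API.

```python
def recognise_on_device(audio):
    # Stand-in for an embedded speech engine with a local language model.
    return {"text": "check pump status", "confidence": 0.72}

def recognise_in_cloud(audio):
    # Stand-in for a cloud speech service; only reachable when online.
    return {"text": "check pump status", "confidence": 0.95}

def recognise(audio, online, min_confidence=0.8):
    """Prefer the on-device result; consult the cloud only to improve a
    low-confidence result when a connection is available."""
    local = recognise_on_device(audio)
    if local["confidence"] >= min_confidence or not online:
        return local                      # audio never leaves the device
    return recognise_in_cloud(audio)
```

Underground or out of Wi-Fi range, the device falls back gracefully to the local engine; when connected, it can trade a network round trip for higher accuracy.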

It’s still relatively early days, but Panasonic TOUGHBOOK is very keen to explore this area further with its customers. We would like to discuss new use case ideas and your views on the balance of functionality required between voice and traditional keyboard and touch-screen input. If you have some thoughts, please don’t hesitate to get in touch.

Tell us your thoughts on the potential value of voice applications in mobile computing.

Get in touch

If you would like to discuss any of the topics featured on this blog or want one of our experts to get in touch to see how we can help with your IT mobility challenges, then please use the Contact Us button to get in touch.
