Technology is racing forward faster than most businesses can keep up, and at the heart of new developments lies one key focus: communication.
Voice activation and voice searches are becoming the norm, and they’re changing the way we search, share and receive information. This wasn’t always the case – for years, voice-activated technology was known not so much for its successes as for its limitations. Consumers were up in arms about misunderstood commands, limited vocabularies, and seemingly random responses.
Not so much anymore… voice-activated ‘assistants’ now provide useful information through immersive conversation, and it’s considered normal to talk to machines to control parts of our lives. In fact, it’s estimated that by the end of the decade 50% of mobile search queries will be initiated by voice.
Yet despite these advances, one key issue continues to limit the likes of Siri and Alexa: how can they become truly multilingual?
The power, reach – and limitations – of voice activation
For those of us who only speak English and live here in the UK, any linguistic issues encountered with Alexa or Siri are likely to be minimal.
Use the same commands overseas, or speak to your ‘assistant’ in your second language, however, and you might just find that all communication stalls…
- Amazon’s Echo currently only works in the UK and the US – a drop in the ocean when you consider the number of consumers worldwide.
- Alexa is also limited to the country-specific information it stores. For example, if you were to ask about particular Chinese services, even in English, Alexa wouldn’t be able to provide the answers you need.
- Alexa is only available in English – a surprising fact, and one that restricts the opportunities of a huge proportion of the global population… 80%, to be more precise.
- Siri, on the other hand, supports 20 different languages and can even handle changes in language mid-sentence. However, if you’re using Siri on your mobile, you can only have your device set to one language – rendering it near useless for those who speak more than one language on a daily basis.
So, is the quest for true multilingualism one step too far?
To get the most from natural language, sequential inference and voice interfaces, voice activation technology needs to be accessible to everyone, from any country, in any language. And while cloud-based speech recognition has massively improved accuracy and capability, it is not without its limitations.
What about those who communicate in a second, third, or even fourth language on a daily basis? The world is getting more multicultural, not less – 43% of the global population are bilingual and 13% are trilingual – yet for the time being voice-activated technology is not set up to accommodate this.
Voice activation’s biggest draw is that it allows us to communicate in our natural language, all of the time. Yet until these technologies recognise that communicating in just one language is not representative of today’s global consumer, these barriers will continue to rear their ugly heads.
Are language barriers holding your business back? Perhaps your translations don’t allow for regional differences, or your multilingual clients like to work across multiple languages. If your business requires translation services, get in touch with us here at Every Translation to find out more about how we can help.