Why IoT Means Speech Recognition

People now use their voices to control an increasing number of devices, from TVs to cars and everything in between. The technology that makes voice control possible is speech recognition: technology that transforms your verbal commands into executable actions via an API.

How do Speech Recognition APIs work?

When you talk to a speech recognition-enabled device (Google Assistant, Siri, or even your stove), you first have to wake it by saying (or yelling) a keyword like "OK Google." These devices include a small piece of local hardware that constantly listens for that specific wake word. Once the device hears it, it records in earnest until you stop speaking. At that point, the device sends your voice clip to a speech recognition API in the cloud. There, the API does its best to convert your verbal command into text. The service running the speech recognition API then looks at the transcript and reacts accordingly. Maybe it talks to a music service and plays a song for you, maybe it offers you a choice of paper towels, or maybe it dims the lights. The ability to control the world using our voices is a game-changer in our relationship with the products, machines, and services that are part of modern life.
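As a rough sketch, the round trip from voice clip to action might look like the Python snippet below. The endpoint URL, bearer-token auth, `transcript` response field, and `command.wav` file are placeholders rather than any particular vendor's API; every speech recognition service defines its own.

```python
import requests  # assumes the popular third-party `requests` HTTP library

# Placeholder endpoint and key: every cloud speech recognition API defines
# its own URL, auth scheme, and response shape.
STT_URL = "https://api.example-stt.com/v1/transcribe"
API_KEY = "your-api-key"

def transcribe(audio_bytes: bytes) -> str:
    """Send a recorded voice clip to the cloud API and return the transcript."""
    response = requests.post(
        STT_URL,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "audio/wav",
        },
        data=audio_bytes,
    )
    response.raise_for_status()
    return response.json()["transcript"]

def handle_command(transcript: str) -> None:
    """React to the transcript, e.g. by talking to a music or lighting service."""
    text = transcript.lower()
    if "play" in text:
        print(f"Asking the music service to handle: {transcript!r}")
    elif "dim the lights" in text:
        print("Telling the lighting controller to dim the lights")
    else:
        print(f"No handler matched: {transcript!r}")

# On a real device, a local wake-word detector would trigger this flow;
# here we simulate it with a pre-recorded clip.
with open("command.wav", "rb") as f:
    handle_command(transcribe(f.read()))
```

The key design point is that the heavy lifting, converting audio to text, happens in the cloud; the device only needs enough local smarts to spot the wake word. Here are some examples: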

Voice Assistants

There are probably a couple dozen voice assistants on the market today; Google Assistant, Siri, and Cortana are three of the better-known ones. These are speech recognition API-powered apps that make our lives easier, essentially acting as concierges and note-takers. Need to schedule a phone call with your mother at 2pm on Wednesday, May 15th, 2024? No problem, just ask Siri. Need to call a Lyft to take you to your meeting with an executive? No problem, ask Google. When we use voice assistants, we consumers have a sense that the speech app itself is scheduling the dates, ordering the food, playing the music, and so on. However, that is not quite the case. Speech recognition APIs are pieces of cloud software that engineers can add to their products, much as you can add apps like Uber to your phone. The speech recognition API is what allows you to interact with other technologies or applications via voice. It can be worked into almost any app or device, letting people interact with technology in a more human way, speaking rather than fiddling with mice, keyboards, and touch screens.
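To make that concrete, here is a minimal, illustrative intent router in Python. Real assistants use natural language understanding models rather than keyword matching, and `schedule_event` and `request_ride` are hypothetical stubs standing in for calendar and ride-hailing integrations.

```python
# A minimal, illustrative intent router. Real voice assistants use natural
# language understanding models; keyword matching is only a sketch, and the
# helper functions below are hypothetical stubs.

def schedule_event(transcript: str) -> str:
    # A real assistant would parse the date and attendee from the transcript
    # and call a calendar API; this stub just acknowledges the request.
    return f"Scheduled: {transcript}"

def request_ride(transcript: str) -> str:
    # Stub standing in for a ride-hailing integration.
    return f"Requesting a ride: {transcript}"

def route_intent(transcript: str) -> str:
    """Map a transcript from the speech recognition API to an action."""
    text = transcript.lower()
    if "schedule" in text or "remind" in text:
        return schedule_event(transcript)
    if "lyft" in text or "ride" in text:
        return request_ride(transcript)
    return "Sorry, I didn't catch that."

print(route_intent("Schedule a phone call with Mom at 2pm on Wednesday"))
print(route_intent("Call a Lyft to take me to my meeting"))
```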

Smart Speakers

The Amazon Echo and Google Home are two well-known examples of smart speakers: voice recognition-enabled devices. These are screenless extensions of the digital voice assistants that "live" in your phone, except with better sound and 2001: A Space Odyssey-like lines. Like the voice assistants on your phone, they allow you to make appointments, send texts, and, depending on the product, even make purchases. People use them to look up recipes while their hands are covered in chocolate mousse, buy concert tickets while parents clean up said abandoned chocolate mousse, and, of course, turn on music without interrupting more important activities. What makes the consumer experience different for each smart speaker comes down to three factors:

  1. The accuracy of the speech recognition API running behind the scenes

  2. The suite of apps and services that are integrated with the speaker, e.g. Spotify vs. iTunes, Apple Maps vs. Google Maps.

  3. The quality of the actual speaker.

Controlling Homes

Apple HomeKit, Google Home, and the Athom Homey are three examples of systems designed to let humans talk to their machines rather than push buttons. Now, the 30 possible functions of the modern home thermostat, as well as the 40 billion possible functions of a modern television set, can be controlled by speaking. We can think of these devices as smart speakers more richly integrated into the home.
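As a toy illustration of how a spoken command becomes a device action, here is a hypothetical command table in Python. The device names and actions are invented; real platforms like HomeKit and Google Home expose far richer device models and network APIs.

```python
# A toy command table for a voice-controlled home. The device names and
# actions here are invented for illustration only.
COMMANDS = {
    "turn up the heat": ("thermostat", "set_temperature", 72),
    "turn on the tv": ("television", "set_power", "on"),
    "arm the alarm": ("alarm_system", "set_mode", "away"),
}

def dispatch(transcript: str) -> None:
    """Look up the spoken command and invoke the matching device action."""
    entry = COMMANDS.get(transcript.lower().strip())
    if entry is None:
        print(f"Unrecognized command: {transcript!r}")
        return
    device, action, argument = entry
    # In a real system this would call the device's network API.
    print(f"{device}: {action}({argument!r})")

dispatch("Turn up the heat")
dispatch("Arm the alarm")
```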

Smart home solutions are a transformational technology. Here's why:

  1. They bring all of the home's machines, e.g. heating, cooling, refrigerator, and alarm system, into one ecosystem. Such unification is the sort of disruptive change that creates economic success stories.

  2. Smart home solutions connect your home to the cloud, and thus to your mobile phone. As a result, they make any home part of a village: no matter where you go, you are always close to home.

Controlling our Cars

Some auto manufacturers have adopted speech recognition APIs to enable next-generation, hands-free control of car systems. Until recently, automotive voice command systems received fairly poor reviews. That has begun to change as manufacturers adopt more accurate speech recognition APIs. When integrated into the automobile and its companion apps, speech recognition APIs let drivers safely navigate, hear and write text messages, make phone calls, and operate the climate control, audio system, and other traditionally "hands-on" aspects of the car. Clearly, speech recognition APIs are particularly well suited to cars. That said, this is a use case where accuracy matters as much as it ever can: when the user is traveling at 70 miles an hour, good performance is absolutely critical, no matter the task. As a result, which speech recognition API a company chooses matters.

Voice UI and Speech Recognition APIs are Revolutionary

Speech recognition APIs make it possible to interact with the machines, apps, and devices we use every day in a more human, user-friendly way. As such, their advances bring about a paradigm shift in how we interact with machines. In 1930, no one would have thought of running a tractor with a keyboard. In 2018, we know we will soon be running nearly everything with voice. From this we can gather three key insights:

  1. Products will increasingly be designed with voice in mind.

  2. Advertisers will take the voice channel, listening and speaking, into account. Advertising campaigns must be designed to work with this changing, multichannel ecosystem.

  3. As companies design products that rely on more and more accurate speech recognition APIs, product leaders will look to next-generation automatic speech recognition technologies to reliably build better voice experiences for their users.

If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub discussions.
