
Things said and heard at the Alexa conference.

Just finished attending the third annual Alexa conference in Chattanooga, Tennessee. A great mix of speakers, companies, developers, and entrepreneurs was in attendance to share thoughts on the future of this nascent technology.

Here are a few takeaways:

Faster than a speeding smartphone

Smart speaker adoption has been almost twice as fast as smartphone adoption. That shows not only the consumer appetite for smart technology but also a preference for voice interfaces over touch. Voice feels more connected. More human. And that's the end goal: more human experiences, not less.

Zero pixel interface? Yes, please.

Yes, we are addicted to our phones and our touch screens. But constantly looking down and digging through apps and their sub-menus is cumbersome and inefficient. Touch isn't going away any time soon, but those in attendance believe that combining a proactive voice assistant with an intuitive graphical user interface is on the horizon.

Look Ma, no hands!

The automotive industry is driving (pun intended) the innovation in this category, for the obvious reason that a touch-based interface isn't optimal in cars. But while touch screens in cars aren't great, voice interfaces aren't much better yet: the technology that would let us speak naturally with our vehicles hasn't been developed. This was highlighted by the folks at SoundHound, who have built the Houndify platform, which lets you add voice-enabled AI to just about anything. Their goal is technology that handles compound, complex queries as well as context and follow-up questions, creating a more natural dialogue. So, instead of asking for directions to an Italian restaurant, you could ask for directions to an Italian restaurant with a five-star rating that is open until 10pm and offers more than just pizza.
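
To make the idea of a compound query concrete, here is a minimal sketch of how that single spoken request might be represented as structured constraints once the speech has been understood. This is purely illustrative and assumes nothing about Houndify's actual API; every name below is hypothetical.

```python
# Purely illustrative sketch: one way the compound restaurant query above could
# be captured as structured constraints after speech understanding.
# These class and field names are hypothetical; this is not Houndify's API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RestaurantQuery:
    cuisine: str
    min_rating: Optional[float] = None   # e.g. 5.0 stars
    open_until: Optional[str] = None     # e.g. "22:00"
    must_offer_beyond: List[str] = field(default_factory=list)

# The single spoken request carries several constraints at once:
query = RestaurantQuery(
    cuisine="Italian",
    min_rating=5.0,
    open_until="22:00",
    must_offer_beyond=["pizza"],
)
print(query)
```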

Hello, is this on?

Companies that try to monetize our data in exchange for using their product or platform will continue to face scrutiny. That said, the notion that all our smart devices are always "listening" to our conversations to glean more information was dismissed by most. There just isn't enough storage to capture and sift through the conversations from all of those devices… yet. So, are our devices always listening to us? No. Always tracking us? Maybe.

The doctor will hear you now.

Smart assistants are making their way into the healthcare industry… slowly. The industry understands that more and more patients are using this technology and expect their healthcare providers to offer the same. The concern is how to use it without running afoul of the Health Insurance Portability and Accountability Act (HIPAA) or the FDA. For the most part, the consensus is that the industry tends to overregulate itself and err on the side of caution to ensure it meets HIPAA guidelines. At the end of the day, patients will always be able to give and revoke consent. How comfortable the healthcare industry is with technology that lets them do so remains to be seen.

Grandmom. Grandad. Meet Alexa.

While SNL had a fun parody video that hit close to home, the reality is that the elderly will definitely benefit from the proactive voice interactions these devices offer. Instead of waiting for them to initiate a dialogue, companies like LifePod are developing applications so that Alexa can proactively ask whether they've taken their medication or done their exercises, suggest music, books, and TV shows, or simply carry on a conversation to keep them engaged. It will even be able to text loved ones if flagged words or phrases that raise concern are spoken.
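
As a rough illustration of the "flagged phrase" idea, here is a minimal, purely hypothetical sketch of how a transcribed utterance might be checked against a list of concerning phrases before notifying a loved one. It does not represent LifePod's actual implementation.

```python
# Hypothetical sketch: scan a transcribed utterance for concerning phrases
# and notify a caregiver. The phrase list and notify function are
# illustrative placeholders, not LifePod code.
FLAGGED_PHRASES = ["i fell", "chest pain", "i feel dizzy"]

def notify_loved_one(message: str) -> None:
    # A real application would send a text via a messaging service here.
    print(f"Text to caregiver: {message}")

def check_utterance(utterance: str) -> None:
    lowered = utterance.lower()
    for phrase in FLAGGED_PHRASES:
        if phrase in lowered:
            notify_loved_one(f"Flagged phrase heard: '{phrase}'")
            return

check_utterance("I feel dizzy after my walk")
```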

A.I. is out. C.I. is in.

Artificial Intelligence is giving way to Conversational Intelligence. A device that doesn't understand me, or what I'm saying, doesn't know me at all and isn't very "smart." Contextual awareness will be a big step toward seamless virtual-assistant experiences. Say you're a foodie and wine aficionado. At some point, you will have had enough "conversations" with your smart speaker that, when you're cooking a steak, it proactively suggests a wine to go along with it. One you enjoyed last year when you visited Napa. It could even go as far as suggesting sides or desserts the guests you've invited might like, or flagging whether they have food allergies or are vegan. Unfortunately, the smarter you want your AI to be, the more data you, your family, and your friends will have to give it. Pass the salt, please.
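
For a rough sense of what contextual awareness could look like under the hood, here is a small hypothetical sketch of an assistant remembering a detail from a past conversation and reusing it for a proactive suggestion. Nothing here reflects any vendor's actual implementation; all names are made up.

```python
# Hypothetical sketch of contextual awareness: remember details from past
# "conversations" and reuse them for a proactive suggestion later.
from typing import Dict, Optional

class ConversationMemory:
    def __init__(self) -> None:
        self._preferences: Dict[str, str] = {}

    def remember(self, topic: str, detail: str) -> None:
        self._preferences[topic] = detail

    def suggest(self, topic: str) -> Optional[str]:
        return self._preferences.get(topic)

memory = ConversationMemory()
# Learned from an earlier conversation about a trip to Napa:
memory.remember("steak pairing", "the Cabernet you enjoyed in Napa last year")

# Later, the assistant hears that the user is cooking a steak:
suggestion = memory.suggest("steak pairing")
if suggestion:
    print(f"You're cooking steak. How about {suggestion}?")
```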

Overall

It feels like everyone at the conference believes that voice is the next UI. The technology just needs to catch up with our expectations of what that looks like. We've seen that vision played out in movies like 2001: A Space Odyssey, Her, and Star Trek. With technologies such as microphones and headphones integrated into glasses that can "telepathically" hear what you're saying, that vision may not be that many Alexa conferences away.
