🗣Digital assistants: What’s the big deal?

By 2021, digital assistants are expected to outnumber people. In the US alone, 35.6 million people are expected to use a voice-activated assistant device at least once a month, an increase of 129% over last year. And according to a study by Grand View Research, the intelligent virtual assistant market will be worth $3.07Bn by 2020. Voice has become a trend too big to ignore, especially when it comes to digital assistants, but what’s the deal?


A frictionless and natural medium

We are living in an increasingly digital and connected world, and this has changed our expectations and exchanges. Today we live in the moment, place great importance on experience and expect instant gratification. Voice removes friction from any transaction or goal we want users to hit: no time lost typing out a long question or trawling through results; speaking is quick and easy, and digital assistants give instant results.

Furthermore, speech is one of our primary methods of communication, which makes voice a natural interface for humans. Speaking does not normally require much cognitive or mechanical effort, and neither does listening to the response. In fact, a recent study found that 50% less brain activity occurs when processing an answer delivered by voice, suggesting people find it much easier to use. Got a question about your agenda, fancy ordering a takeaway or want to know what the traffic is like? You just need to ask! Digital assistants are transforming how we search for information; they even have the potential to disrupt web browsers, which perhaps explains why Google is investing so much in Google Home!

Endless opportunities for third-party developers

By working with third-party developers, digital assistants can extend their scope even further. Both Amazon and Google recently announced financial incentives to encourage developers to build apps for their respective assistants, meaning the possibilities of what can be done with a digital assistant are constantly expanding. For example, Neato’s connected vacuums can be started or paused using the Amazon Echo. The fact that digital assistants can be integrated with a smart home takes their practicality to a whole new level! The devices are useful for the whole family, and are changing our views on machines, widening access and making them an integral part of our lives.

Digital assistants and business

Digital assistants are disrupting our world, from how we shop and interact with our favourite brands to how we drive our cars and organise our agendas. Consequently, more and more businesses want to harness the power of digital assistants, not only to improve internal procedures by saving time and enhancing human ability, but also to expand their customer base, interact with their customers wherever they are, and improve customer experience. For example, Virgin Active has recently become the first gym to integrate Amazon Alexa, meaning that gym-goers can book their next class from the comfort of their sofa. Furthermore, Domino’s Pizza recently achieved breakout sales, which it partly attributes to its Alexa Skill!

In addition, a recent study found that voice technology fosters a greater emotional connection with brands, driving engagement and creating deeper relationships between consumers and brands.

Innovation a go-go

Competition in this space is fierce, with tech giants pulling out all the stops to create the most comprehensive digital assistant. Both Google and Amazon have announced new proactive assistant features, in addition to hands-free calling and visual responses. Regardless of which company makes them, this is great news for the AI industry: it powers innovation, creates a booming ecosystem and further underlines the position of voice as the main medium for human-machine interaction. Voice is our most natural interface, and technology is finally exploiting this.

🔊Notifications on voice assistants: Awesome or annoying?

Both Amazon and Google have recently announced proactive notifications for their respective connected devices. Instead of simply reacting to your requests, the devices will be able to light up when they have something to say. Awesome or, well, just plain annoying? Let’s investigate…


Notifications a go-go

We all know the feeling of being harassed by our phones when a multitude of app notifications flood in, so we can understand if you are rolling your eyes at the prospect of yet another device hassling you! However, the proactive notifications on both speakers will not be that intrusive: Google Home will simply light up, Alexa will chime and light up, and both will only speak when prompted.

If done right and in the appropriate context, proactive notifications could be highly practical: a cooking app could tell you when your water is boiling, you could be alerted to set off to your meeting earlier due to traffic issues, or your car could tell you when you are low on gas…

However, proactive voice notifications raise several issues. While notifications on a smartphone can be processed at a glance, listening to a digital assistant reel off a list of notifications is less practical. Amazon’s Echo Show is equipped with a screen, which could overcome this issue, but the feature poses other concerns too. Users often have more than one device, so should notifications be sent to all of them? Furthermore, most digital assistant owners are out during the day, so how should developers design the experience so that users don’t feel bombarded when they return home, while users who are around the device all day still find the feature useful? Google and Amazon will have to carefully consider how notifications are rolled out to make sure the devices don’t drive users crazy!

So, how exactly are they going to do it?

Google is keeping things uncomplicated to begin with. The device will light up (no sound) when it has a notification and, when prompted by the user saying “what’s up”, will give news regarding reminders, flight changes and traffic delays ahead of upcoming appointments.

“We’re going to start simple, with really important messages like reminders, traffic delays and flight status changes,” said Rishi Chandra, Google’s vice president of product management, on stage at I/O 2017. “With multiple user support, you will have the ability to control the type of proactive notifications you want over time.”

It appears that Google is playing it safe and making sure it gets things right before expanding the feature.

Psst: Did we mention how excited we are about Google Home coming to France this summer!

Amazon, on the other hand, is expanding notifications to all skills. Notifications on Echo devices and third-party Alexa devices will be at the discretion of the developer who builds the skill, and it has not been confirmed whether Amazon will limit the number of notifications a developer can send. All notifications will be opt-in and triggered by the user saying “Alexa, what did I miss today?” or “Alexa, what are my notifications?”. Amazon also stated that it will be possible to temporarily suppress all notifications with “do not disturb” mode.

Long road ahead

It’s still early days for proactive notifications, and clearly lots of challenges lie ahead for both of these Internet giants in fine-tuning this feature. While notifications can be useful for certain types of apps, breaking news for example, the capability can easily be abused, as is often the case with push notifications: news apps typically offer an all-or-nothing opt-in for breaking news, then push notifications for events that are not urgent at all. It will be interesting to see how developers expand this feature in the future; if they get it wrong, we might see a couple of devices thrown out of windows 😉

🔔Toward a fully Context-aware Conversational Agent

I was recently asked by my friend Bret Kinsella from voicebot.ai for my predictions on AI and Voice. You can find my two cents in the post 2017 Predictions From Voice-first Industry Leaders.

In this contribution, I mentioned the concept of speech metadata, which I want to detail here.

As a Voice App developer, when you deal with voice input coming from an Amazon Echo or a Google Home, the best you can get today is a transcription of the text pronounced by the user.
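
To make this concrete, here is a minimal sketch of an Alexa skill’s Lambda handler in Python (the `UserSpeech` slot name is a hypothetical placeholder, not part of the real API). Everything the skill can read from the request boils down to text:

```python
def lambda_handler(event, context):
    """Minimal Alexa skill handler: a sketch, assuming a skill whose
    interaction model defines a hypothetical 'UserSpeech' slot that
    captures the raw transcription."""
    request = event["request"]
    if request["type"] == "IntentRequest":
        # The only payload available: the transcribed words, as slot values.
        slots = request["intent"].get("slots", {})
        transcription = slots.get("UserSpeech", {}).get("value", "")
        # No speaker identity, no emotion, no ambient sound: just a string.
        speech = f"You said: {transcription}"
    else:
        speech = "Welcome!"

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```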

While it’s cool to finally have access to efficient speech-to-text engines, it’s a bit sad that so much valuable information is lost in the process!

The reality of a conversational input is much more than just a sequence of words. It’s also about:

  • the people — is it John or Emma speaking?
  • the emotions — is Emma happy? Angry? Excited? Tired? Laughing?
  • the environment — is she walking on a beach or stuck in a traffic jam?
  • local sounds — a door slam? A fire alarm? Some birds tweeting?

Imagine now the possibilities, the intelligence of the conversations, if we could have access to all this information: huge!
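
To illustrate, here is a purely hypothetical sketch of what such an enriched input could look like; none of these fields exist in today’s Alexa or Google Home APIs:

```python
# Hypothetical "speech metadata" payload: an illustration of what an
# enriched transcription could carry alongside the raw text.
enriched_input = {
    "transcription": "I'm stuck in traffic, can you push back my 9am meeting?",
    "speaker": {"id": "emma", "confidence": 0.93},      # who is talking
    "emotion": {"label": "tired", "confidence": 0.71},  # how she sounds
    "environment": "car_interior",                      # where she is
    "ambient_sounds": ["engine", "car_horn"],           # what else is heard
}

# A skill could then adapt: greet Emma by name, keep the answer short
# because she sounds tired, and avoid audio-heavy responses while she drives.
```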

But we could go even further.

It’s a known fact in communication that, when interacting with someone, non-verbal communication is as important as verbal communication.

So why are we sticking to the verbal side of the conversation when interacting with Voice Apps?

Speech metadata is all about this non-verbal information, which is in my opinion the submerged part of the iceberg and thus the most interesting part to explore!

A good example of speech metadata is the combination of vision and voice processing in the movie Her.

With the addition of a camera, new conversations can happen, such as discussing the beauty of a sunset, the origin of an artwork or the composition of a chocolate bar!

Asteria is one of the many startups starting to offer this kind of rich interaction.

I think this is the way to go, and that a tremendous number of innovative apps will be unleashed by the availability of conversational metadata.

In particular, I hope Amazon, Google & Microsoft will release some of this data in 2017 so that we developers can work on a fully context-aware conversational agent.

🎵 Alexa Skill update – Blind Test

Hey Alexa fans!

Introducing the Blind Test
Blind Test is a fun game that you can play to test your musical knowledge.
The concept is simple: listen to a song extract and find the name of the artist!


So, what’s in this updated version? Well…

New songs 
Hundreds of new tracks to discover!
How many will you recognize?

New features 
Blind Test calculates your score so you can challenge your friends. 🙂
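
For the curious: keeping score across rounds can be done with Alexa session attributes, which persist for the duration of a session. A minimal sketch of the idea, not the actual Blind Test code; the `Artist` slot name is illustrative:

```python
def build_response(speech, attributes, end_session=False):
    # Session attributes are echoed back by Alexa on the next turn,
    # which is how state survives between rounds.
    return {
        "version": "1.0",
        "sessionAttributes": attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

def handle_answer(event, correct_artist):
    # Start from the score carried over from previous turns, if any.
    attrs = event["session"].get("attributes") or {"score": 0}
    guess = event["request"]["intent"]["slots"]["Artist"].get("value", "")
    if guess.lower() == correct_artist.lower():
        attrs["score"] += 1
        speech = f"Correct! Your score is {attrs['score']}."
    else:
        speech = f"Nope, it was {correct_artist}. Your score stays at {attrs['score']}."
    return build_response(speech, attrs)
```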

So if you don’t have it yet on your Alexa,
grab it now and enjoy the music!

📈 A new monitoring tool for Alexa Skills!

Hi there,

Once we published Music Quiz, our first Alexa skill, we quickly wanted to see how it was performing. We soon discovered that we had to put a logging system in place, and then that navigating through all the data generated by an Alexa skill was a nightmare.
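
For context, here is the kind of logging we mean: a minimal sketch, assuming a plain Python Lambda handler, that writes one structured line per request so sessions can later be reconstructed and searched (e.g. in CloudWatch Logs):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    request = event["request"]
    session = event["session"]
    # One structured log line per request: enough to rebuild a session later.
    logger.info(json.dumps({
        "sessionId": session["sessionId"],
        "userId": session["user"]["userId"],
        "requestType": request["type"],
        "intent": request.get("intent", {}).get("name"),
    }))
    # ... actual skill logic would go here ...
```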

To get more transparency and truly actionable insights, we decided to build a tool that would allow us to:

⇒ know exactly what’s going on between our skills and our users,
⇒ find and fix bugs, and
⇒ enhance the user experience.

After weeks of work, here is the dashboard we have finally built.

The Logs section allows you to search for specific sessions.

We are also bringing out specialized analytics for conversational apps.
You can see it as “Google Analytics” for Alexa.

Awesome, but… Can I use it for my skills?
Sure! All you have to do is log in to Alexa Designer and install a small tracker in your Lambda function.
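
As a purely illustrative sketch (the `alexa_designer` module and its API are hypothetical placeholders, not the real tracker), the integration could look something like this:

```python
# Hypothetical sketch: 'alexa_designer' and its Tracker API are
# illustrative placeholders, not the actual Alexa Designer tracker.
import alexa_designer

tracker = alexa_designer.Tracker(api_key="YOUR_API_KEY")

def handle_skill_logic(event):
    # Placeholder for your existing skill code.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Hello!"},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    tracker.log_request(event)             # record the incoming request
    response = handle_skill_logic(event)   # run the skill as usual
    tracker.log_response(event, response)  # record what was answered
    return response
```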

Cheers
The Vocal Apps Team 

PS: If you have privacy concerns, contact us and we can have everything installed on your own server.

😎 Alexa Designer: Announcing upcoming update

Good news: A big update is coming soon!

Since we launched our private beta of Alexa Designer in January, we have received tons of feedback from the Alexa developer community.

Well, we took your feedback into account.
We reviewed and improved many points in this version.

Long story short, here is what v2 is all about:

New dialog algorithm
Improved UX & better onboarding
In depth user analytics
Automated testing

These features will arrive as part of an opt-in beta in April.
The stable version is expected in May.
Feel free to test it & send us your feedback and suggestions.

Cheers,
The Vocal Apps Team