🌍 Google Home is coming to France: Why should you be excited?

Google recently announced its plans for the geographical expansion of Google Home. Among the countries in Google’s roadmap for this summer is France. Given that we have been flying the French flag for vocal technology since 2012, this announcement generated a lot of excitement at Smartly.AI’s HQ in Paris!

Here is a little video demo of Google Home:

International aspirations with an AI-first strategy

Although no exact release date was given for the four countries, the voice assistant looks likely to arrive over the summer! This could be a clever move by the Internet giant: Alexa won the race to the US market and, as a result, took the lion’s share of it and built an established base of developers. Being the first AI assistant in France, Japan, Australia and Canada could therefore prove very fruitful for Google Home.

US market voice assistant breakdown

The international expansion was announced at Google I/O, where CEO Sundar Pichai also detailed the company’s new strategy: progressing from a mobile-first to an artificial-intelligence and machine-learning-first company. The aim is to enable digital assistants to anticipate user needs and understand sights and sounds in ways that were previously not possible at such a scale.

New and improved features

Google I/O was also the occasion to announce a number of new and improved features coming to Google Home, including:

Visual responses

The device will soon be able to send visual responses to televisions and phones. Google Home will consider the information being sent before choosing the destination device. For example, if you want directions, the best place to receive them is on your phone. However, if you require the latest weather report or information on your agenda, the visuals would be sent to your television.

Google Home visual responses

Proactive assistant

Google Home will be able to think ahead for you by looking at your agenda and personal data. At the beginning, this feature will be used for simple but important things (reminders, flight changes, traffic…), and then developed further.

Notifications on voice assistants: Awesome or annoying? What do you think?

Free hands-free calling

This feature was announced for the US and Canada, and will enable a user to call any mobile or landline without any setup! Home’s ability to identify multiple voices means it can work out who “Mum” is for each user. Users will have the option to call using a private number or their mobile number. Admittedly, this feature would have generated more buzz if Amazon hadn’t previously announced that Alexa can also make calls. However, Alexa is limited to device-to-device or device-to-app calls. Google is using its infrastructure and know-how from Google Voice to make calling a frictionless hands-free process.

Improved entertainment support

In addition to Spotify, Google Home will soon support other popular music platforms including Deezer and SoundCloud. Plus, on top of being currently able to stream Netflix, YouTube and Google Photos to televisions, Google Home will support even more entertainment services, notably HBO Now and Hulu.

And not forgetting developers

The Alphabet Inc. unit also used its I/O developer conference to announce that, much as Amazon has done with the Alexa platform, it has opened the entire Google Assistant API to developers, meaning they can create their own voice commands and responses that control the local device. The Google Assistant API will spread the Voice First platform to numerous new devices, appliances, automobiles and other products. Manufacturers will also get access to a “Google Assistant built-in” logo and registry. Conference-goers were offered a Google Home speaker and $700 worth of credits for its cloud-computing service. Google hopes this will encourage developers to build and test new voice-based apps (known as Actions on Google) for Google Assistant.
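To give a flavour of what building for the Assistant can look like, here is a rough sketch of a fulfillment webhook in Python. The route, the intent name and the response fields are simplified and hypothetical; the real Actions on Google conversation schema is richer than what is shown here.

# Rough sketch of a voice-command fulfillment webhook (Flask).
# The route, intent name and response fields below are illustrative only;
# they do not follow the exact Actions on Google conversation schema.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/assistant-webhook", methods=["POST"])  # hypothetical endpoint
def assistant_webhook():
    payload = request.get_json(force=True)
    intent = payload.get("intent", "UNKNOWN")  # simplified intent extraction

    if intent == "LATEST_WEATHER":  # hypothetical custom intent
        speech = "It is sunny and 24 degrees in Paris."
    else:
        speech = "Sorry, I did not get that."

    # Simplified spoken answer sent back to the Assistant.
    return jsonify({"speech": speech, "expectUserResponse": False})

if __name__ == "__main__":
    app.run(port=8080)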

The digital assistant race

Although Amazon’s Alexa has a long head start, it seems that Google is catching up. The company is clearly pulling out all the stops to gain market share. However, the race is far from over, and let’s not forget Apple’s and Microsoft’s AI assistants, as well as Orange’s recent announcement of Djingo, France’s first home-grown assistant! Either way, Google Home’s imminent arrival in France is great news for our AI scene and could be a critical move for increasing adoption in Europe! Vive l’innovation 😎

🏆 Smartly.AI delighted to be named an Alexa Champion

Earlier this year, our CEO and co-founder, Hicham Tahiri, was honoured to be awarded the title “Alexa Champion” thanks to his knowledge of and passion for Amazon Alexa, as well as his drive to educate and inspire other developers in the community. Smartly.AI was founded with the ambition of widening access to voice and chat interfaces and bringing conversational intelligence into everyone’s daily life. This recognition from Amazon Alexa marks an important stage in our company’s development.
Smartly.AI named Alexa Champion

What exactly is an Alexa Champion?

Since the creation of Amazon Alexa, contributions from developers in the form of tools, tutorials, online training, meetups and more have skyrocketed. The Alexa Champions programme distinguishes the most engaged developers and contributors in the community.

Our love story with Alexa

In 2012, Smartly.AI’s two co-founders observed both the rapid growth and huge potential of voice interfaces and intelligent chatbots. However, one thing was missing: a tool to help developers build applications for vocal and text platforms. Thus, Smartly.AI was born.

When Alexa was launched in 2015, its openness was a developer’s dream. The Smartly.AI team set to work on creating Alexa Designer, a developer toolbox offering a visual conversation design tool with automatic code generation, a community-generated intents library, and a voice simulator that lets users speak to their skill directly in the browser without needing an Echo. The platform attracted hundreds of developers and, following the arrival of more vocal assistants, Smartly.AI broadened its scope to incorporate more and more voice (and chat) platforms.
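To give an idea of the boilerplate such tooling takes care of, here is a minimal, hand-written sketch of an Alexa skill handler running as an AWS Lambda function. The intent name is hypothetical, and this is not the code Alexa Designer actually generates; it only illustrates the request/response envelope every skill has to deal with.

# Minimal sketch of an Alexa skill handler as an AWS Lambda function.
# "HelloIntent" is a hypothetical intent; this is illustrative, not the
# code generated by Alexa Designer.

def build_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return build_response("Welcome to my first skill!", end_session=False)

    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "HelloIntent":
            return build_response("Hello from Alexa Designer!")

    return build_response("Sorry, I did not understand that.")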

What it means to us

The programme selected over 20 champions worldwide. Hicham was the only French developer to be recognised and we are immensely proud to be waving the flag for vocal technology in France. This title not only recognises our hard work to date in the vocal industry, but encourages us to go even further. As artificial intelligence continues to transform our world, we will ensure that we provide both businesses and developers with the essential tools they need to harness the power of AI to change daily life, at both home and work, for the better.

A big thank you to Amazon Alexa for this title – we are looking forward to continuing to build great things together, and the best is yet to come 😉


Interested in finding out more about how our platform works and how it can help you? We would love to chat!

Get in Touch!

🔔 Toward a fully Context-aware Conversational Agent

I was recently asked by my friend Bret Kinsella from voicebot.ai for my predictions on AI and Voice. You can find my two cents in the post 2017 Predictions From Voice-first Industry Leaders.

In that contribution, I mentioned the concept of speech metadata, which I want to detail here.

As a voice app developer, when you have to deal with voice inputs coming from an Amazon Echo or a Google Home, the best you can get today is a transcription of the text pronounced by the user.

While it’s cool to finally have access to efficient speech-to-text engines, it’s a bit sad that so much valuable information is lost in the process!

The reality of a conversational input is much more than just a sequence of words. It’s also about:

  • the people — is it John or Emma speaking?
  • the emotions — is Emma happy? Angry? Excited? Tired? Laughing?
  • the environment — is she walking on a beach or stuck in a traffic jam?
  • local sounds — a door slam? A fire alarm? Some birds tweeting?

Imagine the possibilities and the intelligence of the conversations if we could access all this information: huge!
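To make the idea concrete, here is a sketch of what an enriched voice input could look like if the platforms exposed this kind of speech metadata. Every field below is hypothetical; none of it exists in today’s Alexa or Google Assistant APIs.

# Hypothetical example of a speech-metadata payload. None of these fields
# exist in today's Alexa or Google Assistant APIs; this is what a richer,
# context-aware input could look like.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpeechInput:
    transcript: str                      # what today's APIs already give us
    speaker_id: str = "unknown"          # the people: John or Emma?
    emotion: str = "neutral"             # the emotions: happy, angry, tired...
    environment: str = "unknown"         # the environment: beach, car, office...
    background_sounds: List[str] = field(default_factory=list)  # local sounds

example = SpeechInput(
    transcript="Can you play something relaxing?",
    speaker_id="emma",
    emotion="tired",
    environment="traffic_jam",
    background_sounds=["car_horn", "radio"],
)

# A context-aware agent could now adapt its answer to far more than the words:
if example.emotion == "tired" and example.environment == "traffic_jam":
    print("Sure Emma, here is a calm playlist for the road.")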

But we could go even further.

It’s a known fact in communication that while interacting with someone, non-verbal communication is as important as verbal communication.

So why are we sticking to the verbal side of the conversation while interacting with Voice Apps?

Speech metadata is all about the non-verbal information, which is in my opinion the submerged part of the iceberg and thus the most interesting part to explore!

A good example of speech metadata is the combination of vision and voice processing in the movie Her.

With the addition of a camera, new conversations can happen, such as discussing the beauty of a sunset, the origin of an artwork or the composition of a chocolate bar!

Asteria is one of the many startups starting to offer this kind of rich interaction.

I think this is the way to go, and a tremendous number of innovative apps will be unleashed by the availability of conversational metadata.

In particular, I hope Amazon, Google & Microsoft will release some of this data in 2017 so that we, the developers, can work on fully context-aware conversational agents.

🔊 Introducing Audicons™

The way we are interacting with our digital world
will be completely changed by the rise of voice assistants such as Alexa or Assistant.

We created Smartly.AI to make this transition easier for developers while pushing the horizons of Conversational AI.

The Problem
Currently, if you want to build a rich message for your bot, you can use a language called SSML to mix voice synthesis and audio sounds.
With SSML you can do pretty amazing things (change the pitch and tone of the voice, add silences, …). You can check the documentation on Alexa’s SSML support. But the issue is that SSML has a tricky syntax that makes it quite hard for a new developer to master.
As an illustration, here is what I have to do to build an answer to this question with SSML:

“Alexa, ask PlaneWatcher: Where is the plane DC-132?”

<speak>
    <audio src="https://server.com/audio/plane.mp3"/>
    <s>Welcome to Plane Watcher.</s>
    <audio src="https://server.com/audio/sad.mp3"/>
    <s>The plane DC-132 is currently delayed by 30 minutes!</s>
</speak>

Wait, another XML-like grammar to deal with… 🤔
Come on, this has to be fixed!

Our solution
As we overuse emoticons in our Slack channel, we couldn’t resist trying to transpose this awesome language to the voice world!
After a few experiments, we are happy to present our latest creation:
the Audicons!

✈ Welcome to Plane Watcher ☹ The plane DC-132 is currently delayed by 30 minutes!

Audicons are standardized audio files that can be easily recognized and associated with specific meanings. Audicons will soon be open sourced so you can reuse them in your own projects. Stay tuned 😀
In most cases, we think Audicons can replace SSML.
Audicons have the potential to evolve into a standardized audio set used in ALL voice interfaces.
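Here is a rough sketch of how an Audicon-to-SSML generator could work, assuming a small emoji-to-audio mapping. The URLs are placeholders, not the actual Audicon files, and the mapping itself is hypothetical.

# Sketch of an Audicon-to-SSML generator. The emoji-to-audio mapping and the
# URLs are hypothetical; the real Audicon set is not yet open sourced.
AUDICONS = {
    "✈": "https://example.com/audicons/plane.mp3",
    "☹": "https://example.com/audicons/sad.mp3",
    "😃": "https://example.com/audicons/happy.mp3",
    "☀": "https://example.com/audicons/sunny.mp3",
}

def audicons_to_ssml(message: str) -> str:
    """Replace every known Audicon with an SSML <audio> tag and wrap in <speak>."""
    parts = []
    for char in message:
        if char in AUDICONS:
            parts.append(f'<audio src="{AUDICONS[char]}"/>')
        else:
            parts.append(char)
    return "<speak>" + "".join(parts) + "</speak>"

print(audicons_to_ssml("✈ Welcome to Plane Watcher ☹ The plane DC-132 is currently delayed by 30 minutes!"))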

Here are short examples you may want to create for weather forecasts:

😃 Tomorrow is gonna be sunny ☀🕶!
😩 Tomorrow is gonna be rainy ☔⛈!

Hear our first Audicons in the demo below.

Cool, isn’t it? 😃
Which ones do you prefer?

You can already use Audicons in your Alexa skill if you build it with Smartly.AI but we plan to open source them soon along with our SSML generator.

Now it’s up to you to make your beloved Alexa more expressive!

🌱 La Cool Co.: a smart environmental assistant powered by Alexa!

At Smartly.ai, we love to showcase cool hackers pushing the limits of conversational AI. Today we are excited to present La Cool Co.,
its vision, and how they are using Alexa to give a voice to your plants!

Grow anything, anywhere.

La Cool Co. specializes in monitored and controlled environments. We design and develop smart, straightforward and playful devices that improve the well-being of any kind of plant.
This adventure began in 2013. Since then, we have been developing an open-source greenhouse kit composed of laser-cut wood, an Arduino board, standard plastic boxes and a set of open-hardware sensors, components and LEDs.


Each greenhouse can independently control air temperature, humidity, luminosity, soil moisture and water feed of up to three plants, as well as capture image/video footage used to provide time lapse videos of the plant growth.
From the beginning, we’ve dreamt of a system that will always give our plants what they need.
Every day, La Cool Co. promotes its vision of an open-source environmental system to everyone: open growers, makers, kids, teachers and educators, and now companies focused on climate change and environmental preservation. Let’s grow it!

Alexa + La Cool Co.

Our device gathers a huge amount of data from the plants, and dealing with all this data can be very painful.
With Alexa, all this data becomes conversational!
While searching for a way to add a chatbot feature and bring our products into Alexa’s world, we stumbled on Smartly.ai, which significantly reduces the coding time needed to build an Alexa app.
Thanks to them, in less than one hour, we produced a first PoC to present to clients and investors.
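As an illustration of how such a skill can turn sensor data into conversation, here is a minimal sketch of an Alexa intent handler. The intent name, the slot and the sensor-reading function are hypothetical, not La Cool Co.’s actual code.

# Minimal sketch of an Alexa intent handler that makes greenhouse data
# conversational. The intent name, slot and sensor lookup are hypothetical,
# not La Cool Co.'s actual code.

def get_latest_reading(plant_name):
    """Stand-in for a query to the greenhouse's datalog (hard-coded here)."""
    return {"temperature_c": 22.5, "humidity_pct": 58, "soil_moisture_pct": 41}

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "PlantStatusIntent":
        plant = request["intent"]["slots"]["Plant"]["value"]  # e.g. "basil"
        r = get_latest_reading(plant)
        text = (f"Your {plant} is at {r['temperature_c']} degrees, "
                f"{r['humidity_pct']} percent humidity and "
                f"{r['soil_moisture_pct']} percent soil moisture.")
    else:
        text = "Sorry, I could not find that plant."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }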

Next Steps

Our first PoC is straightforward but can be improved.
We plan to iterate on this skill by leveraging our plants database. We would also like to match official taxonomy names with vernacular ones (a tomato isn’t just a tomato: it’s Solanum lycopersicum for science).
Finally, we aim to combine our livestream datalogging system with Alexa apps. We hope this will happen soon, because we are already using AWS for our projects.
If we give people a tool to easily communicate with their plants, connected to external databases from everywhere, we’ll be able to bring them the best advice for growing their plants.
We would also love to leverage push notifications so the plant can say:
“Hey, you forgot to water me this week, fix that or I will kick your ***!”
Adding some personality to the plants is also something we are looking at 😃

Antoine Berr, President of  La Cool Co.

If you too ♥ your plants and want to know how they feel,
drop an email to this awesome team: contact@lacool.co!

🐼 The Bots for Good Challenge

 

As you know, voice is having an ever greater impact on our lives.
Put simply: Voice is the future.

At Smartly.ai we strongly believe that tech has to be used primarily to solve problems and bring progress to the World.

We see in Alexa + Smartly.ai an effective way to enhance everybody’s lives.

By everybody we mean not forgetting the elderly, the autistic, the visually impaired, the disabled… everybody.

By everybody we also mean those people who need food, water, drugs, security, freedom, education… everybody.

By everybody we also mean our planet, our atmosphere, our water, and all the endangered species… everybody.

Because we can’t wait to see all these innovations happen, we turn to you – the developers, the pioneers, the creators – to take part in this challenge, make an impact and earn an awesome prize!

The tech is here.
Now it’s up to you! 🙂

Prize Pool

  • 1st Place: $2,000 + 6 Echo Dots
  • 2nd Place: $1,000 + 6 Echo Dots
  • 3rd Place: 6 Echo Dots

More details to be announced soon… Stay tuned to this blog and our upcoming Alexa Meetup!

 

 

 

 

💪🏻 Are you ready to hire an AI?

Let’s face it – the super-intelligent AI takeover that many are fearing is not for today.

We may all lose against Watson at Jeopardy, and AlphaGo is the champion when it comes to Go, but… those cool marketing campaigns are far from the holy grail of so-called general AI.

According to the most authoritative voices in the space,
the singularity will probably occur at some point in the 2040s.
Until then, I can’t imagine having a smart, meaningful and pleasant conversation with Siri, Alexa or Cortana for more than 5 minutes.
When it comes to open conversations, there is no match for humans.

 Human > general AI

Things get less clear-cut when we narrow the conversation to a specific topic. A specialized AI can be much better at managing user requests because it has been designed for a single purpose.
The Turing test is easier to pass for these AIs.
To illustrate this, you can try Amy, the virtual assistant created by x.ai to perform a single task: scheduling meetings for you. She does so by email: demo here.
Amy is so good at this that most people think she is a real assistant.
When it comes to narrow conversations, AI has the advantage of handling big data volumes while humans are more accurate. 1-1 here.

Human ~  specialized AI

How is Amy doing such a great job?
Well, as explained here, a key part of the process relies on supervised machine learning. AI trainers teach Amy how humans express times, locations, contact names… Amy then uses this knowledge to do a better job.
It’s a virtuous circle. 🙂

Facebook M is relying even more on humans to teach the AI how to complete tasks. “M can purchase items, get gifts delivered to your loved ones, book restaurants, make travel arrangements, appointments and way more.”

In a recent project at Smartly.ai, we tried this “hybrid AI” approach.
The results were stunning – the AI was able to manage 80% of the requests!
While the AI successfully dealt with the simple questions,
the operator had more time to engage in a qualitative way with the customers who had complex requests.
Our AI excelled at narrow, repetitive requests; humans excelled at complex, particular ones.
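Conceptually, the routing logic behind this hybrid setup can be as simple as a confidence threshold, as in the sketch below. The classifier, the threshold and the canned answers are hypothetical, not our production system.

# Conceptual sketch of hybrid AI routing: let the specialized AI answer when
# it is confident, hand off to a human operator otherwise. The classifier
# and threshold are hypothetical, not our production system.

CONFIDENCE_THRESHOLD = 0.85

def classify_intent(message):
    """Pretend intent classifier: returns (intent, confidence)."""
    if "opening hours" in message.lower():
        return "ASK_OPENING_HOURS", 0.95
    return "UNKNOWN", 0.30

def answer_for(intent):
    return "We are open every day from 9am to 7pm." if intent == "ASK_OPENING_HOURS" else ""

def route_to_operator(message):
    return f"An operator will get back to you about: {message!r}"

def handle_request(message):
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Simple, repetitive request: the AI answers instantly.
        return {"handled_by": "ai", "answer": answer_for(intent)}
    # Complex or unusual request: escalate to a human operator.
    return {"handled_by": "human", "answer": route_to_operator(message)}

print(handle_request("What are your opening hours?"))
print(handle_request("I need a refund for a double charge from last March."))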

The magic of hybrid AI is that it just works and scales at the same time!
Peter Thiel’s Palantir is another powerful demonstration of hybrid AI tackling two of today’s biggest challenges: fraud and terrorism.

So, basically:

Human + specialized AI > Human

At Smartly.ai,
We are committed to empowering humans with AI assistants.
We have got awesome demos,
book yours now by dropping us an email! 😉

 

 

 

👍 Congratulations on your chatbot, Mr President!

Yesterday,
the White House released a brand new chatbot. 🙂

Why?

One remarkable habit of @POTUS has been to read 10 letters a day since he was elected.
This allows him to take the pulse of the nation from the inside.
I did the math and, if that is true, it comes to quite an impressive number of letters:

As of today, Obama has been President for 7 years and 204 days.
(7*365 + 204) * 10 = 27,590… That’s a lot of letters!

But wait, who is writing letters anymore?
Are those letters representative of generations X, Y and Z?
They are probably more used to emails, SMS or Facebook Messenger.

So, according to the White House, 2016 is the year of Messenger!
With 60B daily messages and 900M users worldwide, it’s probably a safe bet.

How?

Now, let’s see how the bot experience is delivered.


The experience is focused on getting your message to Obama and collecting your name and email address… and that’s it, until Mr. President decides to answer you. 🙂

The purpose is simple, and the edge cases are well managed.
At the end, you get an emoji and a cool video.

Still, we may regret that the bot doesn’t show any kind of intelligence.
In fact, you could have sent your message to the President 10 times faster using the contact form…

It might have been more fun if the bot had been an automated version of Obama: some gamification around his job, or even an interactive poll on his next actions, travels or outfits…

You can try this bot by yourself here.
The operation is also described on the White House website, here.

We hope to see more bots used by political figures but they should be aware that a poorly designed bot will inevitably flop.

And if President Hollande needs a bot,
we’ll be happy to build one for him. 😉

 

 

🚀 Introducing Smartly.ai

From day 1,

Our vision was to allow people to communicate with their devices as naturally as they would with each other.

When Amazon Echo was released, we created Alexa Designer, an all-in-one platform to create voice-based applications for Alexa.
Today, this product is used by hundreds of developers around the world. As our ambition goes beyond any single device, we couldn’t stop there.

Conversational interfaces are spreading to more and more platforms.
Siri is now open to third-party apps, Google Home is coming, and chatbots are taking over mobile apps.
It is time for us to develop our offer and extend it into something smarter and more universal.

As our product is radically evolving, so will its name. Introducing Smartly.ai!

Why Smartly.ai?
  • “Smartly” and the extension “.ai” reflect our shift to AI.
  • The name is versatile enough to remain relevant for a while.
  • The brand was available! 😉

Now, let’s have a look at the logo!

First, we took a circle as a basis for our icon as most of the popular personal assistants are round. We wanted to start from something familiar.

Siri, Google Now and Cortana icons

Then we added openings to illustrate the company’s open-mindedness and desire to communicate with the world.
The different sizes of the 3 parts represent motion while also symbolizing teamwork and unity.


We then picked a modern, light and smooth font for the textual part of the logo, and here are some variations of the result.

 

Logo variations in red, green and blue

 

Now, what’s next?

For developers: More Conversational AI, more platforms, more fun!
For businesses: More tools to reconnect with your users!

Here is an overview of the many platforms we are covering,
and more are coming soon!


We hope you will join us in our journey! 🚀
Let’s smartly connect your business to the world… with a touch of AI!

 

 

🍏 Testing out the Siri SDK

 

Finally, it is live!
Last week at WWDC, Apple released the new Siri SDK.

At VocalApps, we were dying to try it out and find out the pros and cons of this new Apple feature.

The following video demonstrates how we successfully created an iOS app that can be launched directly within Siri.

Although this first version is limited to a predefined list of domains, it still allows developers to create some interesting use cases:

  • starting audio or video calls in your VoIP app
  • sending or searching text messages in your messaging app
  • searching pictures and displaying slideshows in your photo app
  • sending or requesting a payment in your payment app
  • starting, pausing, ending a workout in your sports app
  • listing and booking rides in your ride-sharing app
  • controlling your car music if your app is CarPlay-compatible


The SDK was designed so that Siri listens to the user, tries to understand what they mean and, if all goes well, transfers the user’s request to your app.
Then you can engage in a conversation, display some custom data and process the request with your own web services.

This is really nice since it supports, out of the box, all Siri languages and everything Siri knows about you (where you are, who your sister is, your name…).

For instance, if you want to send money to Sarah using your VocalApps app, you just have to tell Siri:

“Hey Siri, send $10 to Sarah using VocalApps.”

Siri understands you want to send money, that the amount is $10, that the recipient bears the name “Sarah” and that the app you want to use is “VocalApps.” So it calls a sendPayment method in your app with all these arguments.

Currently, Siri is included in iPhone 4S, iPhone 5, iPhone 5C, iPhone 5S, iPhone 6, iPhone 6 Plus, iPhone 6s, iPhone 6s Plus, iPhone SE, 5th generation iPod Touch, 6th generation iPod Touch, 3rd generation iPad, 4th generation iPad, iPad Air, iPad Air 2, all iPad Minis, iPad Pro, Apple Watch, and Apple TV.

It’s gigantic, it’s the future and it’s only the beginning.

Do you have a mobile app that you would like to connect to Siri?
If so, we can definitely help you – just start chatting with us 🙂!