Let’s face it – the super-intelligent AI takeover that many are fearing is not for today.
We may all lose to Watson at Jeopardy!, and AlphaGo is the champion when it comes to Go, but… those cool marketing feats are still far from the holy grail of so-called general AI.
According to the most authoritative voices in the field, the singularity will probably occur at some point in the 2040s.
Until then, I can’t imagine having a smart, meaningful and pleasant conversation with Siri, Alexa or Cortana for more than 5 minutes.
When it comes to open conversation, there is no match for humans.
Human > general AI
Things are less one-sided when we narrow the conversation to a specific topic. A specialized AI can be much better at handling user requests because it has been designed for a single purpose.
The Turing test is easier to pass for these AIs.
To illustrate this, you can try Amy, the virtual assistant created by x.ai to perform a single task: scheduling meetings for you. She does so by email: demo here.
Amy is so good at doing this that most people think she is a real assistant.
When it comes to narrow conversations, AI has the advantage of handling huge data volumes, while humans remain more accurate. Call it a draw.
Human ~ specialized AI
How is Amy doing such a great job?
Well, as explained here, a key part of the process relies on supervised machine learning. AI trainers teach Amy how humans express times, locations, contact names… Amy then uses this knowledge to handle new requests more accurately.
It’s a virtuous circle. 🙂
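x.ai has not published its pipeline, but the supervised loop described above can be sketched in a few lines. Here is a toy, stdlib-only illustration (all names and training data are hypothetical): human trainers label utterances, the model accumulates word counts per label, and those counts are used to classify new requests.

```python
from collections import Counter, defaultdict

class TinyIntentClassifier:
    """Toy count-based classifier illustrating the supervised loop."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.labels = Counter()

    def train(self, text, label):
        """A human trainer confirms the correct label for an utterance."""
        self.labels[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def predict(self, text):
        """Score each label by how often its known words appear."""
        words = text.lower().split()
        return max(self.labels,
                   key=lambda label: sum(self.word_counts[label][w] for w in words))

clf = TinyIntentClassifier()
clf.train("let's meet tuesday at 3pm", "schedule_meeting")
clf.train("book a slot next monday morning", "schedule_meeting")
clf.train("please cancel our call", "cancel_meeting")
clf.train("cancel the meeting on friday", "cancel_meeting")

print(clf.predict("can we meet monday at 10am?"))  # schedule_meeting
```

Every labeled correction from a trainer makes the next prediction better, which is exactly the virtuous circle: humans teach the AI, the AI frees up the humans.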
Facebook M is relying even more on humans to teach the AI how to complete tasks. “M can purchase items, get gifts delivered to your loved ones, book restaurants, make travel arrangements, appointments and way more.”
In a recent project at Smartly.ai, we tried this “hybrid AI” approach.
The results were stunning: the AI was able to handle 80% of the requests!
While the AI successfully dealt with the simple questions, the operator had more time to engage meaningfully with the customers who had complex requests.
Our AI excelled at narrowed and repetitive requests, humans excelled at complex and particular ones.
The magic of hybrid AI is that it works and scales at the same time!
Peter Thiel’s Palantir is another powerful demonstration of hybrid AI applied to two of today’s big challenges: fraud and terrorism.
The White House has released a brand-new chatbot. 🙂
One remarkable habit of @POTUS has been reading 10 letters a day since he was elected.
This allows him to take the pulse of the nation from the inside.
I did the math, and if that holds true, it adds up to an impressive number of letters:
As of today, Obama has been President for 7 years and 204 days.
(7*365 + 204) * 10 = 27,590… That’s a lot of letters!
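The back-of-the-envelope figure above is easy to verify:

```python
# Back-of-the-envelope check of the letter count:
# 7 full years plus 204 days in office, 10 letters a day
# (ignoring leap days for simplicity).
years, extra_days, letters_per_day = 7, 204, 10

days_in_office = years * 365 + extra_days          # 2,759 days
total_letters = days_in_office * letters_per_day

print(total_letters)  # 27590
```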
But wait, who is writing letters anymore?
Are those letters representative of generation X, Y, Z?
They are probably more used to emails, SMS or Facebook Messenger.
So, according to the White House, 2016 is Messenger year!
With 60B daily messages and 900M users worldwide, it’s probably a safe bet.
Now, let’s see how the bot experience is delivered.
The experience is focused on collecting your message to Obama, along with your name and email address… and that’s it until Mr. President decides to answer you. 🙂
The purpose is simple, and the edge cases are well managed.
At the end, you get an emoji and a cool video.
Still, it is a pity that the bot doesn’t show any kind of intelligence.
In fact, you could have sent your message to the President 10 times faster using the contact form…
It would have been more fun if the bot had been an automated version of Obama: some gamification around his job, or even an interactive poll on his next actions, travels, or outfits…
You can try this bot by yourself here.
The operation is also described on the White House website, here.
We hope to see more bots used by political figures, but they should be aware that a poorly designed bot will inevitably flop.
And if President Hollande needs a bot,
we’ll be happy to build one for him. 😉
Our vision was to allow people to communicate with their devices as naturally as they would with each other.
When Amazon Echo was released we created Alexa Designer, an all-in-one platform to create voice-based applications for Alexa.
Today, this product is used by hundreds of developers around the world. But our ambition goes beyond a single device, so we couldn’t stop there.
Conversational interfaces are spreading to more and more platforms. Siri is now open to third-party apps, Google Home is coming, and chatbots are taking over mobile apps.
It is time for us to expand our offering into something smarter and more universal.
As our product is radically evolving, so will its name. Introducing Smartly.ai!
Why Smartly.ai?
• “Smartly” and the “.ai” extension reflect our shift to AI.
• The name is versatile enough to remain relevant for a while.
• The brand was available! 😉
Now, let’s have a look at the logo!
First, we took a circle as a basis for our icon as most of the popular personal assistants are round. We wanted to start from something familiar.
Then we added openings to illustrate the company’s open-mindedness and desire to communicate with the world.
The different sizes of the 3 parts represent motion while also symbolizing teamwork and unity.
We then picked a modern, light and smooth font for the textual part of the logo, and here are some variations of the result.
Now, what’s next?
For developers: more conversational AI, more platforms, more fun!
For businesses: more tools to reconnect with your users!
Here is an overview of the many platforms we are covering,
and more are coming soon!
We hope you will join us in our journey! 🚀
Let’s smartly connect your business to the world… with a touch of AI!
At VocalApps, we were dying to try out SiriKit, Apple’s new way of opening Siri to third-party apps, and find out its pros and cons.
The following video demonstrates how we successfully created an iOS app that can be launched directly within Siri.
Although this first version is limited to a predefined list of domains, it still allows developers to create some interesting use cases:
starting audio or video calls in your VoIP app
sending or searching text messages in your messaging app
searching pictures and displaying slideshows in your photo app
sending or requesting a payment in your payment app
starting, pausing, ending a workout in your sports app
listing and booking rides in your ride-sharing app
controlling your car music if your app is CarPlay-compatible
The SDK was designed so that Siri listens to the user, tries to understand what they mean, and, if all goes well, passes the user’s request on to your app.
Then you can engage a conversation, display some custom data and process the request with your own web services.
This is really nice, since it supports out of the box all of Siri’s languages and everything Siri knows about you (where you are, who your sister is, your name…).
For instance, if you want to send money to Sarah using your VocalApps app, you just have to tell Siri:
“Hey Siri, send $10 to Sarah using VocalApps.”
Siri understands you want to send money, that the amount is $10, that the recipient bears the name “Sarah” and that the app you want to use is “VocalApps.” So it calls a sendPayment method in your app with all these arguments.
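SiriKit’s real API lives in Swift/Objective-C, but the flow described above — Siri does the language understanding, your app only receives a structured intent with its arguments — can be sketched in a language-agnostic way. Every name below is hypothetical, and the “parser” is a hard-coded stand-in for Siri’s real NLU:

```python
def parse_utterance(utterance):
    """Stand-in for Siri's NLU: extract an intent and its slots
    from the user's words. (Siri does the real understanding.)"""
    if "send" in utterance and "$" in utterance:
        amount = next(w for w in utterance.split() if w.startswith("$"))
        recipient = utterance.split(" to ")[1].split()[0]
        return {"intent": "sendPayment", "amount": amount, "recipient": recipient}
    return {"intent": "unknown"}

class PaymentApp:
    """Your app: it receives the structured intent, never raw speech."""
    def send_payment(self, amount, recipient):
        return f"Sent {amount} to {recipient}"

def dispatch(request, app):
    """Siri's side of the handoff: route the parsed intent to the app."""
    if request["intent"] == "sendPayment":
        return app.send_payment(request["amount"], request["recipient"])
    return "Sorry, I can't help with that."

request = parse_utterance("Hey Siri, send $10 to Sarah using VocalApps")
print(dispatch(request, PaymentApp()))  # Sent $10 to Sarah
```

The design point is the separation of concerns: speech recognition and slot extraction stay on Siri’s side, while your app only implements handlers for the intents it declares support for.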
Currently, Siri is included in iPhone 4S, iPhone 5, iPhone 5C, iPhone 5S, iPhone 6, iPhone 6 Plus, iPhone 6s, iPhone 6s Plus, iPhone SE, 5th generation iPod Touch, 6th generation iPod Touch, 3rd generation iPad, 4th generation iPad, iPad Air, iPad Air 2, all iPad Minis, iPad Pro, Apple Watch, and Apple TV.
It’s gigantic, it’s the future and it’s only the beginning.
Do you have a mobile app that you would like to connect to Siri?
If so, we can definitely help you; just start chatting with us 🙂!
Once we published Music Quiz, our first Alexa skill, we wanted to see how it was performing. We quickly discovered that we had to put a logging system in place, and then that navigating through all the data generated by an Alexa skill was a nightmare.
To get more transparency and true actionable insights,
we decided to build a tool that would allow us to:
⇒ know exactly what’s going on between our skills and our users,
⇒ find and fix bugs, and
⇒ enhance the user experience.
After weeks of work, here is the dashboard we have finally built:
The Logs section, which allows you to search for specific sessions.
We are also bringing out specialized analytics for conversational apps.
You can see it as “Google Analytics” for Alexa.
Awesome, but… Can I use it for my skills? Sure! All you have to do is log in to Alexa Designer and install a small tracker code in your lambda function.
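The actual tracker API isn’t shown here, but conceptually it wraps your lambda handler and logs each request/response pair for the dashboard. A minimal sketch, with all names hypothetical and `print` standing in for the real analytics backend:

```python
import functools
import json

def track(handler):
    """Hypothetical tracker: wrap an Alexa lambda handler and log
    each request/response pair (printed here; a real tracker would
    ship it to the analytics backend)."""
    @functools.wraps(handler)
    def wrapper(event, context):
        response = handler(event, context)
        print(json.dumps({
            "session": event.get("session", {}).get("sessionId"),
            "request_type": event.get("request", {}).get("type"),
            "response": response,
        }))
        return response
    return wrapper

@track
def lambda_handler(event, context):
    # A trivial skill response, for illustration only.
    return {"outputSpeech": {"type": "PlainText", "text": "Welcome to Music Quiz!"}}

result = lambda_handler(
    {"session": {"sessionId": "s1"}, "request": {"type": "LaunchRequest"}}, None)
```

Because the wrapper returns the handler’s response unchanged, dropping it into an existing skill doesn’t alter its behavior — it only adds the logging side channel.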
Cheers,
The VocalApps Team
PS: If you have privacy concerns, contact us and we will set everything up on your own server.