Why Google A.I. is the last user interface

Posted 11th October 2016 - Technology

Star Trek got it right: In the future, we’ll use computers by talking to them.

Google held an event this week, and the new hardware got most of the attention. Google unveiled a couple of Google-built Pixel phones, Google Wi-Fi home mesh routers, the Google Home virtual assistant appliance, a new 4K Chromecast Ultra streaming media device and the Daydream View VR headset.

Critics say that with the Pixel phones Google is copying Apple, and that with Home it's competing directly with Amazon. But this misses the point.

Google believes that artificial intelligence (A.I.) virtual assistants (VA) and the conversational user interface (CUI) will largely supplant search engines and mobile apps for many users. We’re moving into an “A.I.-first world,” according to Google CEO Sundar Pichai.

Simply put, all that means is you talk to a computer. It “understands” what you say no matter how you say it. Then the computer does things for you based on those conversations.

Google is betting the company on its own version of this interface, called Google Assistant. And it's a bet I think the company will win.

The last user interface

The artificially intelligent conversation agent is the last user interface. The entire history of human-computer interfaces has been about applying ever-increasing amounts of compute power to make machines work harder so that interacting with them is easier for people.

Humans are hardwired for talking to each other, so the most human-compatible interface is one that has conversations with us.

Trouble is, people can converse the way they do only because of a complex mixture of human psychology and accumulated knowledge. An A.I. will need to simulate that psychology and amass comparable knowledge in order to hold even the most basic conversation.

But wait! What about Google Now?

Google Now can be safely categorized as a non-A.I. virtual assistant. It’s like Siri, Cortana, Alexa and other virtual assistants. Nice, but not A.I.

Assistant is more adaptive. It learns, both to become more generally competent and also to personalize.

With personalization, Assistant remembers facts about you. I typed in “My wife’s name is Amira Elgan,” and Assistant responded with: “OK, I’ll remember that.”

You can use the command “Remember” to make Assistant your own repository of handy information. For example, you can say: “Remember my bike combo is 397.” Five years from now, you can ask Assistant what your bike combo is, and it will tell you.
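Under the hood, this kind of "remember" command can be thought of as phrase parsing in front of a personal key-value store. Here's a toy sketch of that idea (my own illustration, not Google's implementation; the function names and regex patterns are invented for this example):

```typescript
// Toy sketch: model Assistant's "remember" command as phrase parsing
// plus a key-value store. Not Google's actual implementation.
type Memory = Map<string, string>;

// Handle commands like "Remember my bike combo is 397".
function remember(memory: Memory, utterance: string): string {
  const match = utterance.match(/^remember (?:that )?my (.+) is (.+)$/i);
  if (!match) return "Sorry, I didn't catch that.";
  const [, key, value] = match;
  memory.set(key.toLowerCase(), value);
  return "OK, I'll remember that.";
}

// Handle questions like "What is my bike combo?"
function recall(memory: Memory, utterance: string): string {
  const match = utterance.match(/^what(?:'s| is) my (.+?)\??$/i);
  if (!match) return "Sorry, I didn't catch that.";
  const value = memory.get(match[1].toLowerCase());
  return value ? `Your ${match[1]} is ${value}.` : "I don't know that yet.";
}

const mem: Memory = new Map();
remember(mem, "Remember my bike combo is 397");
recall(mem, "What is my bike combo?"); // → "Your bike combo is 397."
```

The real system, of course, handles far looser phrasing than a regex can; the point is only that the stored fact persists independently of how you later ask for it.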

The important thing to know about Assistant is that it learns and uses facts about you, personally. It accesses Google’s Search and Knowledge Base information. And it will do things such as control your home appliances and make your dinner reservations.

That omnipresent A.I. will do it all.

Unlike Google Now (or, for that matter, Siri, Alexa and Cortana), Assistant should be able to “figure things out,” even with a cryptic or vague request. The ability to do this should improve over time as users rank responses.

Assistant should also make minor decisions. So if you’re in your car and say: “Play ‘Gold’ from Kiiara,” that song will play through your car’s sound system. But if you say the same thing at home, it will play through either your Home device or your Chromecast, depending on which you tend to prefer.

Another Assistant skill is that it pays attention to (or, depending on your views on privacy, “spies on”) your conversations and interjects helpful information and links, such as restaurant recommendations. If you’re doing this in Google’s Allo, both parties see the recommendations.

Assistant gets even more contextual with a Pixel phone. A long press on the home button while you're looking at a photo, for example, returns personalized search results based on the content of the photo (it's basically the Google Now on Tap feature extended to Assistant and the new phones).

We’ll always have other user interfaces — virtual, mixed and augmented reality, for example. But these are for experience. For information, the interface will be conversation. You’ll be able to stop worrying about devices, apps, platforms and all the rest. You just talk, and Assistant makes it happen. That’s the vision, anyway.

When A.I. chooses bots for you

Google is planning to open up Assistant to developers via a platform called Actions on Google.

Developers won’t build “apps” or even “bots,” according to Google’s lingo, but “Actions.” These “Actions” can be either Direct Actions or Conversation Actions.

via http://www.cio.com.au/article/608232/why-google-last-user-interface/