Artificial intelligence is approximating human reasoning more and more closely all the time. Wide-scale adoption by business may be approaching, with important implications for how people live and work.
AI is paving the way for new business models and raising questions about how people and machines can best work together. Now, thanks in part to cheaper and faster computing power, intelligent machines help doctors comb through troves of medical images to identify diseases early, allow manufacturers to predict when their machines will break (and fix them before that happens), and provide the “brains” behind increasingly autonomous vehicles. AI also plays a central role in the consumer market, powering the latest virtual assistants, for example, or the engine that matches Airbnb guests with the housing they want.
International Data Corp. predicts that the worldwide market for cognitive software platforms and applications, which roughly defines the market for AI, will grow to $16.5 billion in 2019 from $1.6 billion in 2015, a compound annual growth rate of 65.2%. The market includes offerings from both established tech giants and AI startups.
But many technology leaders don’t have clear ideas about the role of AI in their business, or how to maximize its value. “The number one issue for CIOs is how can I invest anything in artificial intelligence without having clear visibility into real business results,” Gartner Inc. fellow and research vice president Tom Austin said.
Here’s a detailed but nontechnical guide for the curious businessperson:
Machine learning: the brains behind AI
Artificial intelligence encompasses the techniques used to teach computers how to learn, reason, perceive, infer, communicate and make decisions like humans do. Its applications span technologies that can recognize images, schedule meetings and process human speech, to name just a few.
When people talk about artificial intelligence, they usually are referring to one of its subfields: machine learning. While AI concerns itself with making machines think like humans, machine learning has a narrower purpose: enabling computers to learn from data with minimal programming, says Vasant Dhar, a professor at New York University’s Stern School of Business and the NYU Center for Data Science. Instead of manually writing rules for how a machine should interpret a set of data, algorithms allow the computer to determine the rules itself.
The hottest field in artificial intelligence today is a branch of machine learning known as deep learning. It uses complex algorithms — essentially sets of instructions for solving a particular problem — to perform more abstract tasks such as recognizing images. A well-known deep learning tool is the neural network, which roughly tries to mimic the operations of a human brain (more on that below).
How it works
First, companies need lots of clean and reliable data, as well as a clear idea of what they want the machine to be able to learn, said Joseph Sirosh, corporate vice president for Microsoft’s data group. Machine learning also requires significant compute power and the resources to experiment with different algorithms.
A common application of machine learning is categorization. Say you want to teach a computer to determine the relationship between the words in a news article and the category, such as business or politics, that those words predict. One way to do this is with a machine learning method called Naive Bayes, said Kristian Hammond, co-founder and chief scientist at Narrative Science, which creates software that turns data into computer-generated narratives.
Humans first create a set of training data, in this case a list of articles that are all given category tags. When the machine reads a politics article and sees the word Congress, it increases the likelihood that the word Congress predicts a story about politics. On the flip side, if it sees the word Congress in an entertainment article, a vote will go toward the entertainment category. As the computer reads more articles, it can figure out which words are the strongest predictors of certain topics and weight them accordingly.
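The counting-and-weighting scheme Mr. Hammond describes can be sketched in a few lines of Python. The miniature “articles” and category tags below are illustrative stand-ins, not data from Narrative Science’s software, and the add-one smoothing is one common choice among several:

```python
from collections import Counter, defaultdict
import math

# Hand-made training set: each "article" is tagged with a category.
# These snippets are illustrative stand-ins, not real data.
training = [
    ("congress passed the budget vote", "politics"),
    ("senator wins election after debate", "politics"),
    ("film premiere draws stars to festival", "entertainment"),
    ("congress of actors honors film director", "entertainment"),
]

word_counts = defaultdict(Counter)   # word tallies per category
category_counts = Counter()          # number of articles per category
for text, category in training:
    category_counts[category] += 1
    word_counts[category].update(text.split())

vocabulary = {w for counter in word_counts.values() for w in counter}

def classify(text):
    """Return the category with the highest (log) posterior probability."""
    scores = {}
    total_docs = sum(category_counts.values())
    for category in category_counts:
        # Prior: how common the category is overall.
        log_prob = math.log(category_counts[category] / total_docs)
        total_words = sum(word_counts[category].values())
        for word in text.split():
            if word not in vocabulary:
                continue  # ignore words never seen in training
            # Add-one smoothing keeps an unseen pairing from zeroing
            # out the whole product.
            count = word_counts[category][word] + 1
            log_prob += math.log(count / (total_words + len(vocabulary)))
        scores[category] = log_prob
    return max(scores, key=scores.get)

print(classify("congress schedules budget vote"))  # prints "politics"
```

Note that “congress” appears in both categories, but it appears alongside stronger politics predictors such as “budget” and “vote,” so the politics score wins — exactly the weighting behavior described above.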
A common component of deep learning, the neural network, is made up of layers of computational units called nodes that are linked together with weighted connectors. Data are fed to the “input layer,” then moved through “hidden layers” that perform computations and pass the results along to an “output layer.” If the input is a digital photo of a cat, the potential output would be “this is a cat,” expressed as a set of numerical values.
The first layer of nodes in the hidden layer might determine that a series of pixels in the photo form lines. The next layer uses the patterns identified in the previous layer to identify shapes, such as an oval. The layer above that could identify the oval as an eye. The process happens at each layer until the system produces a set of values that stand in for the idea “this is a cat.”
If the system incorrectly classifies the photo, the neural net must be tweaked. To do this, signals are automatically sent back through the network, telling those incorrectly weighted connections to adjust their weighting values. This process is called back propagation. It happens many times with many different examples until the network learns to identify a cat photo with an acceptable level of accuracy.
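The forward pass and back-propagation loop described above can be sketched as a tiny network trained on XOR, a classic toy problem small enough to stand in for “cat vs. not cat” without any machine-learning library. The layer size, learning rate, and epoch count below are arbitrary illustrative choices, not values from any production system:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

HIDDEN = 4
# One hidden layer of 4 nodes; weights and biases start random.
w_in = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b_in = [random.uniform(-1, 1) for _ in range(HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b_out = random.uniform(-1, 1)

# XOR: the output should be 1 only when exactly one input is 1.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    # Input layer -> hidden layer -> output layer.
    hidden = [sigmoid(w_in[j][0] * x[0] + w_in[j][1] * x[1] + b_in[j])
              for j in range(HIDDEN)]
    output = sigmoid(sum(w_out[j] * hidden[j] for j in range(HIDDEN)) + b_out)
    return hidden, output

def loss():
    # Squared error over the whole training set.
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train(epochs=5000, lr=0.5):
    global b_out
    for _ in range(epochs):
        for x, target in data:
            hidden, output = forward(x)
            # Back propagation: the output error flows backward and
            # each weight is nudged to reduce it.
            d_out = (output - target) * output * (1 - output)
            for j in range(HIDDEN):
                d_hid = d_out * w_out[j] * hidden[j] * (1 - hidden[j])
                w_out[j] -= lr * d_out * hidden[j]
                b_in[j] -= lr * d_hid
                w_in[j][0] -= lr * d_hid * x[0]
                w_in[j][1] -= lr * d_hid * x[1]
            b_out -= lr * d_out

before = loss()
train()
after = loss()
print(f"loss before training: {before:.3f}, after: {after:.3f}")
```

Each pass computes how much every connection contributed to the error, then adjusts its weight slightly, so the classification error shrinks over many repetitions — the same loop, at vastly larger scale, that trains image classifiers.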
On a Google-scale deep learning project, there are tens to hundreds of millions of connections in a single model. Each of those connections is updated from 1 million to 1 billion times during the training process. Given that the company explores dozens to thousands of models for a single application, that can add up to quintillions of computations, and training can take days.
The development of AI
AI has come a long way since the 1956 Dartmouth artificial intelligence conference, which many consider the birthplace of the discipline. The field has grown in fits and starts since then, partially due to a lack of computing power.
Now with the availability of cheaper and faster computing power, AI has become a more viable pursuit in the enterprise. A CIO pursuing AI a decade ago “would have spent more than 100% of his or her budget on compute power,” said Rob Thomas, vice president of product development for International Business Machines’ analytics business. Now it’s in the range of 10% to 20%, which frees up more resources for acquiring data sets and developing new algorithms.
AI in the enterprise
The use of AI is cropping up in all sorts of markets. Deep learning helps Alphabet’s Google improve its search results. Airbnb Inc. uses machine learning to recommend the best place to stay. Big tech companies have developed AI-powered virtual assistants. Nasdaq is using artificial intelligence systems that parse traders’ chatter to spot potential insider trading. Massachusetts General Hospital plans to use a system that draws on a database of 10 billion images to identify anomalies on CT scans and other medical images. General Electric uses computer vision systems to quickly identify cracks in jet engine blades. And MasterCard uses machine learning tools to analyze more than 1.3 billion transactions per day and help detect fraud.
Today, many machine learning tools can categorize data and make predictions. But few systems are in production that can reliably act on those predictions and make business-critical decisions on their own, an element some see as a key indicator of true artificial intelligence.
Machine learning platform or industry-specific app?
Tech companies have developed broad platforms aimed at bringing machine learning tools to corporate customers. But some say business users will favor industry-specific applications. “When you build a very narrowly focused artificial intelligence it helps the algorithms train faster and you end up with a system that’s more accurate,” said Michael Yamnitsky, an associate at Work-Bench, an enterprise technology venture capital fund. More specific applications also may make it easier for CIOs to create performance metrics around them.
Deploying machine learning tools will force companies to think about new ways to collect and store data, develop applications, design algorithms and build user interfaces. They will require input from many lines of business, which could make implementation challenging. Companies also have to deal with the uncertainty such software presents, including unintended consequences and potentially unfair outcomes.
General confusion about the capabilities of artificial intelligence technologies may also slow adoption. “People are selling big dreams and visions of what this can do, and they’re having trouble landing this in a way that’s practical and realistic,” said Gartner’s Mr. Austin. “Most IT organizations don’t have the skills to understand what can and can’t be done.”
Firms also must grapple with the effect intelligent machines will have on their workforce. That not only means finding the skilled talent to work in tandem with smart machines, but also deciding how to structure their firms as more tasks are automated or require knowledge of data science.
As AI advances and more machines start making unsupervised decisions, companies will face tough questions about exactly when humans do or don’t need to be involved in decision making. “The consequences of mistakes at the moment are just unknown,” NYU’s Mr. Dhar said. The recent crash of an Autopilot-equipped Tesla shows the challenges of coordinating the behavior of humans and machines.
The future: machine learning goes mainstream, but no Singularity in sight
Ray Kurzweil talks about the Singularity at a Feb. 3, 2014, CIO Journal event.
Futurist, inventor and author Ray Kurzweil, now an engineering director at Google, has predicted that in the coming decades, machines will be able to simulate human intelligence, a phenomenon he calls the Singularity.
Today, “we are so far from any world where that’s a reality,” said Don Brown, co-founder and CIO at Rocana Inc. “People talk about AI taking over, and that is not the case.”
Machines will get smarter over time, experts say, but the progress will be gradual. “The reality is a slow, steady march up,” Mr. Dhar said. Recent advances have been in the field of perception, computers’ ability to see, hear and deal with unstructured input. Much new work will emerge from top research universities and technology giants, as well as the open source community.
http://blogs.wsj.com/cio/2016/07/18/cio-explainer-what-is-artificial-intelligence/