Just what is artificial intelligence, anyway? In its ultimate form, will it model, augment or replace human thought?
There’s no easy answer to that question — at least not from modern-day humans. But we can get closer to settling the big AI debate by asking another question first.
“We need to step back and ask why we’re trying to replicate human intelligence when we’ve already got 8 billion people to solve a problem,” says Peter J. Scott, founder of the Next Wave Institute, an organization that provides training on how to survive technology disruption, especially that caused by the coming of AI.
“We have to be more careful about how we frame this,” says Scott, author of Artificial Intelligence and You: Survive and Thrive Through AI’s Impact on Your Life, Your Work, and Your World. “We want [AI] to be able to do some of the things humans can do. But what we’re aiming for is not another human intelligence wrought in silicon, but a hybrid of human and computer. Like having Google in your hindbrain.”
So maybe we’re not rushing toward futurist Ray Kurzweil’s singularity: the moment he envisions when machine intelligence becomes infinitely more powerful than that of humans, an eventuality viewed as a dream by some and a nightmare by others.
Research in AI dates back more than 60 years, but much about the way the brain works, and whether it can be duplicated by machines, remains unknown. Scientists have made dramatic progress over that time, knocking down one barrier after another. They’ve built computers that defeated human masters in chess and the even more complex game of Go. They’ve made enormous progress in efforts such as mapping human protein structures. In one form or another, AI has been applied to medicine, industry, defense and even creative endeavors that were once thought to be the exclusive domain of the human brain.
In other ways, however, AI has fallen short of expectations. Almost no one believes that it has become sentient, to the point where a machine can experience emotions or sensations, or even “think” the way that humans do. AI has made great leaps in speech recognition, for example, but it still can’t hold what we consider to be an actual conversation (notwithstanding the tricks programmed into Siri and other digital assistants).
The drawback of today’s computers — and possibly those of tomorrow as well — is that output depends on input. “There are countless problems now that could easily be solved by computers,” says Scott. “What takes a long time is getting the problem into the computer, and getting an answer out.” For the most part, machines still depend on the intelligence (and, unfortunately, biases) of their human creators for programming.
Given the torrent of recent claims about the successful application of AI to multiple disciplines, especially in the commercial sector, one might be surprised to learn that the technology is still in the formative stages. “We’re seeing a lot of AI-washing,” Scott says. “People will say a toaster has got AI if they can get away with it.”
Even top scientists can be overly optimistic about the progress of true AI. In 1956, two years after the death of pioneering computer scientist Alan Turing, attendees at an AI workshop at Dartmouth College gathered to speculate about how long it would take to build a computer that thinks like a human. “The Dartmouth folks thought they could knock it off in a summer,” says Scott. The ensuing years have seen similarly bold predictions about the technology’s future, with scientists declaring that the full promise of AI was always just over the horizon.
Scott says it’s important to distinguish real progress in applications of AI from “the dream of AI general intelligence.” What current AI does well — and why, for example, it can beat a chess grandmaster — is apply brute force to a problem, sorting through many millions of variables in milliseconds before settling on the best move.
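To make the brute-force point concrete, here is a minimal, hypothetical Python sketch. It uses a toy take-away game (players alternately remove one or two stones, and whoever takes the last stone wins) rather than chess, but the shape of the computation is the same one Scott describes: exhaustively score every legal continuation and keep the best move. The game and all the names here are illustrative assumptions, not any real engine’s code.

```python
# Toy illustration of brute-force game search: try every continuation,
# score the end positions, and keep the move that forces the best outcome.
# The game: players alternately take 1 or 2 stones; whoever takes the
# last stone wins.

def best_move(stones, maximizing=True):
    """Exhaustively search the game tree.

    Returns (score, move), where score is +1 if the maximizing player
    can force a win from this position and -1 otherwise.
    """
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return (-1 if maximizing else 1), None

    best = None
    for take in (1, 2):
        if take > stones:
            continue
        score, _ = best_move(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

if __name__ == "__main__":
    score, move = best_move(7)
    print(f"Best first move from 7 stones: take {move} (forced win: {score == 1})")
```

A chess engine layers pruning and evaluation heuristics on top of this idea because its game tree is astronomically larger, which is exactly the point: raw search speed, not human-style insight, is what carries the day.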
AI has often been touted as the ideal tool to solve what’s known as the Traveling Salesman Problem: an exercise to determine the shortest possible route that a salesman can take while visiting each city within a designated territory. A practical version of the problem exists in the maritime industry, where ocean carriers must determine optimal routings while accounting for weather, port infrastructure, labor, cargo value and innumerable other factors.
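For a sense of why the Traveling Salesman Problem invites automation, here is a small, hypothetical Python sketch that simply tries every ordering of four made-up ports and keeps the cheapest round trip. The port names and distances are invented for illustration; a carrier’s real model would also fold in the weather, labor and cargo factors mentioned above.

```python
# Brute-force Traveling Salesman sketch: enumerate every tour and keep
# the shortest. Ports and distances are made up for the example.
from itertools import permutations

distances = {
    ("A", "B"): 4, ("A", "C"): 7, ("A", "D"): 3,
    ("B", "C"): 2, ("B", "D"): 5, ("C", "D"): 6,
}

def dist(a, b):
    """Symmetric lookup into the toy distance table."""
    return distances.get((a, b)) or distances[(b, a)]

def shortest_tour(stops, start="A"):
    """Try every ordering of the stops and return the cheapest round trip."""
    others = [s for s in stops if s != start]
    best_route, best_cost = None, float("inf")
    for order in permutations(others):
        route = (start, *order, start)
        cost = sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

print(shortest_tour(["A", "B", "C", "D"]))
```

Even this naive approach shows the catch: the number of possible tours grows factorially with the number of stops, so real routing systems rely on heuristics and approximation rather than exhaustive search.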
Indeed, global supply chains in all their complexity are ripe for applications of AI, even if the technology isn’t really “thinking.” What it can do in its current stage of development is process huge amounts of data, and generate recommendations for action that might never have occurred to human managers.
When it comes to assessing the future of AI as envisioned by its founders, however, much remains uncertain. Scott borrows from British-American computer scientist Andrew Ng in calling AI “the new electricity” — a technology whose ultimate applications aren’t evident in its early stages. “The ramifications of that are beyond anyone’s ability to see where it will go,” says Scott, adding that AI’s development will likely speed up exponentially “when it gets to the point where AI is inventing new AI.
“In a very limited sense, that’s already happening,” Scott continues. “You have hyper-exponential growth that no longer depends on rates at which humans work.” But the final ironic truth about the ultimate destiny of AI — whether it will help or hinder, augment or shove aside the human brain — is that the choice “is up to us as a species.”