No doubt you’ve heard the argument against panicking about the coming of artificial intelligence: Through the centuries, every technological breakthrough has been accompanied by dire predictions of doom for humanity, but the doom never arrives. AI, the argument goes, is just the latest example.
But is it? Might AI turn out to have an impact far greater than previous tech revolutions — to the point where, if not doomed, humanity might be seriously threatened?
Juliette Powell and Art Kleiner are fully aware of the possibility. In the introduction to their new book, The AI Dilemma: 7 Principles for Responsible Technology, they cite the concept of the Moral Machine, whereby some version of technology makes snap life-or-death decisions without human intervention. Think about AI-controlled autonomous vehicles, which have already been responsible for 18 fatalities nationwide as of January 15, 2023. Or, in the not-so-distant future, an AI making medical decisions that result in the deaths of patients. There are countless other such scenarios, causing even AI experts to express concern about the rapid progress of the technology.
Fears of technology have often proved to be unfounded or overstated, but there’s a feeling that this time is different. “Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, outside of real-time supervision,” wrote the authors of a 2018 article in Nature. “We are going to cross that bridge any time now…”
Powell, an entrepreneur, technologist and commentator, became interested in the subject as she was finishing up her sociology degree at Columbia University. Her thesis research, “The Limits and Possibilities in the Self-Regulation of Artificial Intelligence,” became the basis for The AI Dilemma, in collaboration with Kleiner, a writer, entrepreneur and now editor-in-chief of Kleiner Powell International.
What sparked the project was the notion that “self-regulation” wasn’t adequate protection against the darkest implications of AI. We need to get beyond our “illusion of control” over automated systems, the authors argue, and be able to assert actual power over them. So said the authors of the Nature article, who in response to the rise of autonomous vehicles called for “a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
Powell and Kleiner’s contribution to that conversation is the seven principles that make up their new book. The first: “Be intentional about risk to humans.”
“We don’t have the capacity to manage risk in an effective way,” explains Kleiner. “That means building risk avoidance into what we do.” It’s the opposite of trying to fix things on the fly, as driverless cars lose their Wi-Fi connection, interfere with emergency vehicles, or get stuck in wet cement.
“Being intentional about risk means not just focusing on it when it’s convenient, when the costs of reducing risk are low, or when it feels comfortable,” Powell and Kleiner write. “It means continually looking at ways to achieve twin goals: deploying technology to realize its potential and reducing the potential harm to the people or communities where the technology is deployed.”
“Open the closed box” is the book’s second principle, one that reflects deep concern among many AI users over the technology’s inability to explain how it arrived at a particular conclusion. Some argue that the algorithms deployed in such cases are simply too complex to be understood by humans, but Kleiner disagrees. The box is closed, he says, “because they’re scouring unstructured masses of text and image for data, and are learning the patterns as they go along.” The facts spewed out by chatbots like ChatGPT aren’t reliable because they can’t cite their sources, but Kleiner says some degree of transparency is needed to meet the growing number of regulations attempting to rein in the potential excesses of generative AI.
Similar concerns surround the issue of reclaiming data rights, Powell and Kleiner’s third principle. The opaqueness of AI is in direct conflict with the need for consumers to keep their personal information out of the hands of marketers and governments. Hence the rise of tough new laws such as the European Union’s General Data Protection Regulation (GDPR). Says Kleiner: “We are our data.”
The fourth principle, “confront and question bias,” reflects the tendency of AI to mirror the prejudices of its programmers — and source material. That can lead, for instance, to output that discriminates against individuals of color in applications such as facial and voice recognition.
Fifth is “hold stakeholders accountable,” another aspect of transparency, whereby governments and corporations must be held responsible for the consequences of their AI applications. Powell and Kleiner say that can be done in part by establishing strict standards of practice, monitoring “external harms, as well as internal failures,” and ensuring proper data governance.
“Favor loosely coupled systems” is the sixth principle, to be achieved through diversified decision-making, redundant systems that mitigate the impact of failures, and improved risk auditing and oversight.
Powell and Kleiner’s seventh and final principle, “embrace creative friction,” calls for a balance between the need for speed of innovation — Silicon Valley’s mantra — and procedural “guardrails” that can head off the unintended consequences of unchecked, overly hasty AI initiatives. “Friction is what prevents the smoothness from running away from us,” Kleiner says. “It becomes necessary.”
What the seven principles have in common is a call for thinking deeply about the long-term implications of AI, even as developers move ahead to realize its potential for good. As Kleiner sums it up, referencing the “black box” of AI: “It’s a wakeup call for intelligent humanity.”