Mathematics is supposed to be the one discipline about which there’s no debate. Calculated properly, numbers don’t lie. So can we trust the algorithms that drive artificial intelligence?
The question is more than academic. When it comes to managing global supply chains, AI offers the prospect of securing vital data against hackers and cyber thieves. One would hope, therefore, that this highly touted means of ensuring trust in business processes can itself be trusted.
With its ability to make sense of far more information than can be digested by any group of humans, AI seems like the ideal tool for sussing out the vulnerabilities that reside within complex global supply chains. It should then be able to translate those insights “into something human-intelligible to prove the integrity of the system,” says Mike Kiser, director of strategy and standards with SailPoint, provider of a cloud-based platform for identity security.
All great in theory, but questions surrounding our increasing reliance on AI for shoring up cybersecurity persist. Implicit trust in the machine isn’t enough, says Kiser: We need human oversight of the process. Forget about the supposedly irrefutable nature of numbers. “A high value must be placed on transparency — being able to explain the results or estimations of risk generated by the AI in question.”
Absent human confirmation of the validity of AI’s “thinking,” Kiser suggests, supply chain managers have no way of assessing the integrity of automated systems. “Ironically,” he says, “the systems themselves must have a secure supply chain, even as they seek to watch over other components.”
There’s no question that AI is needed to help manage modern-day supply chains, with their hugely complex networks of raw-material providers, manufacturers, carriers, distributors and other partners. “Everything is relationship-driven,” Kiser says, “and securing that supply chain is becoming increasingly hard to do via human means.”
People are simply being outgunned by armies of hackers, whether acting as individuals, as members of organized gangs, or as operatives funded by hostile governments. According to the Identity Theft Resource Center, more than 10 million people and 1,700 organizations were hit by supply chain attacks in 2022. Combine that distressing fact with the increasing ease of accessing AI systems such as ChatGPT, and AI becomes an irresistible tool for ensuring cybersecurity.
The most common role of AI for that purpose consists of establishing what constitutes a normal range of behavior by people and systems within a supply chain, then identifying variances from that benchmark, raising an alarm and taking appropriate action, Kiser says. (Credit-card companies have been doing that for years, but AI supercharges the process.) Understanding what's normal in supply chain relationships is "table stakes" today, he adds.
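A minimal sketch of that baseline-and-deviation idea, in Python, using a simple z-score test over made-up shipment counts. The function name and the data are hypothetical, illustrative of the general technique rather than of SailPoint's actual implementation:

```python
import numpy as np

def detect_anomalies(history, observations, threshold=3.0):
    """Flag observations that deviate from a learned baseline.

    `history` holds past behavior (e.g., daily shipment counts for a
    partner); an observation is anomalous if it sits more than
    `threshold` standard deviations from the historical mean.
    """
    mean = np.mean(history)
    std = np.std(history) or 1.0  # guard against division by zero on flat history
    scores = np.abs((np.asarray(observations) - mean) / std)
    return [(obs, score) for obs, score in zip(observations, scores)
            if score > threshold]

# A partner that normally ships ~100 orders a day suddenly ships 480.
baseline = [96, 103, 98, 101, 99, 104, 97]
for value, score in detect_anomalies(baseline, [102, 480]):
    print(f"anomaly: value={value}, z-score={score:.1f}")
```

Real deployments replace the single statistic with models trained across many behavioral signals, but the shape of the task is the same: learn the norm, score the variance, raise the alarm.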
Over the last couple of decades, AI programs have consistently shown themselves to be superior to humans at chess, Go and other competitive games. So it seems only natural that people would trust the conclusions of an AI program for managing something as wickedly complex as a supply chain. "That's a mistake," Kiser says. The system needs to explain why it's raising an alert "in a way that human users can understand."
The central problem surrounding the growing dependence on AI by businesses and individuals is the existence of the “black box”: the impossibility of peering into the system’s circuitry and figuring out how it reached a given decision. Kiser describes the need for such insight as the “ethical framework” underlying the technology. “If you’re using AI that is making decisions to secure a supply chain, you really want explainability to make sure the system hasn’t been poisoned,” he says.
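One way to build in the explainability Kiser describes is to have the system report, in plain language, which behavioral signals pushed an alert over the line. The sketch below uses hypothetical feature names and a simple per-feature deviation report; it stands in for the richer attribution methods a production system would use:

```python
def explain_alert(baseline_stats, event, threshold=3.0):
    """Return human-readable reasons why an event raised an alert.

    `baseline_stats` maps each feature to its historical (mean, std);
    `event` maps the same features to observed values. Any feature more
    than `threshold` standard deviations from its mean is reported.
    """
    reasons = []
    for feature, value in event.items():
        mean, std = baseline_stats[feature]
        deviation = abs(value - mean) / (std or 1.0)
        if deviation > threshold:
            reasons.append(
                f"{feature} = {value} is {deviation:.1f} std devs "
                f"from its norm of {mean}"
            )
    return reasons

stats = {"login_attempts": (5, 2), "order_volume": (100, 10)}
print(explain_alert(stats, {"login_attempts": 40, "order_volume": 104}))
# -> ['login_attempts = 40 is 17.5 std devs from its norm of 5']
```

An alert that arrives with its reasons attached gives a human reviewer something to confirm or reject, which is exactly the oversight Kiser argues a black-box score cannot provide.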
AI is neither good nor evil by nature, but it can be used for either purpose. Kiser cites the popularity of facial recognition programs, which provide strong security even as they’ve been criticized for racial bias and their use by authoritarian governments to track private citizens.
Kiser doesn’t necessarily subscribe to the idea that AI will “take over the world,” but he acknowledges fears that “bad AI” can do a lot of damage. “It’s opacity that’s one of the real dangers,” he says.
Governments are wading into the controversy by placing restrictions on the collection of personal consumer data — think of the European Union’s General Data Protection Regulation (GDPR), enacted in 2018 and described as “the toughest privacy and security law in the world.” Expect to see many more such laws passed around the world in the coming years.
But the ultimate solution to the AI conundrum lies within the architecture of the system itself, and its ability to explain its conclusions. Humans, of course, will always make mistakes in reasoning, and AI, too, is far from perfect. Says Kiser: "The danger is in assuming that it can do too much."