THE VERISK RISK REPORT

The Unintended Consequences of Artificial Intelligence

“Success in creating AI [artificial intelligence] would be the biggest event in human history,” Stuart Russell, a leading AI researcher, wrote in a 2014 op-ed coauthored with Stephen Hawking and others. “Unfortunately, it might also be the last...”

Just five years later, AI and machine learning increasingly influence our personal, political, and commercial experiences. Algorithms created directly from data via machine learning decide what we see on our social media platforms, determine individual creditworthiness, buy and sell stock, and are beginning to gain traction as sentencing tools in the criminal justice system, to name just a few areas of recent explosive growth.

In 2018, a Deloitte report on AI in business found that 88 percent of companies surveyed planned to increase spending in 2019 on cognitive technology—which mimics the functions of the human brain—with more than half committing to spending increases of over 10 percent. By 2030, some estimates say that AI could deliver $13 trillion in economic output. Russell notes in his just-released book on AI, Human Compatible, that more money has been invested in AI in the last five years than in the decades-long history of the field up to that point.

As with much cutting-edge tech, the potential benefits of AI are extraordinary. However, the threat of “AI run amok” is of a different order than other risks inherent in innovation. No one is saying that Terminator-style cyborgs are planning their rebellion, but that should not make the real risks any less compelling. The pace of AI development and implementation is outstripping our understanding of how these various algorithms make decisions, curtailing our ability to redirect those decisions when necessary.

Falling into the knowledge gap

AI algorithms are typically built by AI scientists and engineers, the gatekeepers of the software’s inner workings. As they study the algorithms they’ve created, they often—understandably—focus solely on engineering issues, with little attention to social ramifications. As a result, the number of AI systems in use is growing faster than the research that can examine their behavior and effects on society, leading to what is known as the AI knowledge gap.

The problem of unforeseen outcomes

Think of it this way: Before we hire someone, we want to know their educational background, hear from former colleagues and employers, and see proof of their judgment; we also typically don’t give them full run of the company on day one. Yet we entrust machine learning with sensitive data and important decisions without questioning the objectives, biases, and training behind it. We’re already seeing the unintended results of this approach, and as AI is adopted more broadly, the consequences could be devastating.

The pitfalls of employing AI without questioning its appropriateness can be seen in tools such as COMPAS, or the Correctional Offender Management Profiling for Alternative Sanctions, which judges have used in sentencing to predict recidivism. However, research has shown that COMPAS is no more accurate at predicting recidivism than untrained volunteers recruited online. Additionally, in a case study in Florida, ProPublica found that COMPAS results skewed toward predicting higher rates of recidivism among black defendants than among white defendants.

Unintended bias and the future of inclusion

The COMPAS example points to a larger issue: Bias in AI can lead to prejudicial outcomes in everything from banking to insurance to hiring practices. Consider that 55 percent of human resources managers say that AI will become a regular part of the hiring process in the next five years. Yet when machines assess résumés, built-in bias can skew the results.

Last year, for example, Amazon abandoned a hiring tool it had been developing since 2014. The tool was designed to sort and prioritize résumés, ultimately identifying top candidates. But the algorithm was trained on résumés the company had received over the previous ten years, most of which came from men. As a result, the tool was biased against women, penalizing résumés that included the word “women’s,” as in “women’s chess club captain,” as well as those that listed the names of some all-women’s colleges. The company tried to fix the issue but ultimately couldn’t guarantee the algorithm wouldn’t discriminate in other ways.
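The underlying mechanism is easy to reproduce in miniature. The sketch below is purely illustrative and entirely synthetic (it is not a reconstruction of Amazon’s tool), and it assumes Python with NumPy and scikit-learn: a simple classifier trained on fabricated “historical hiring” data, in which candidates flagged by a gender-correlated proxy feature were passed over, learns to apply the same penalty.

    # Illustrative toy only: synthetic data, not Amazon's system or data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Two features per resume: a genuine skill score, and a proxy flag such as
    # "mentions a women's organization" (1 = yes), which says nothing about ability.
    skill = rng.normal(size=n)
    womens_org = rng.integers(0, 2, size=n)

    # Historical labels: past decisions rewarded skill but also, unfairly,
    # penalized candidates with the proxy flag set.
    past_hired = rng.random(n) < 1 / (1 + np.exp(-(1.5 * skill - 1.0 * womens_org)))

    # A model trained on those decisions absorbs the same bias: the learned
    # coefficient on the proxy feature comes out negative.
    model = LogisticRegression().fit(np.column_stack([skill, womens_org]), past_hired)
    print(dict(zip(["skill", "womens_org"], model.coef_[0].round(2))))

Dropping the explicit flag does not necessarily solve the problem, because other features correlated with it can carry the same signal, which is one reason such bias is so hard to guarantee away.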

In fields like healthcare, such flaws could have life-or-death consequences; and in some cases, bias could be deliberately built into artificial intelligence for nefarious purposes. A government agency could design AI specifically to engage in racial profiling, for example, or a financial institution could screen applicants on economic indicators to restrict access to its services.

“43% of senior personnel in global finance, including data scientists, say data quality is their biggest concern when it comes to implementing machine learning.”

With great scale comes great responsibility

Unfortunately, bias in AI isn’t always obvious from the outset and may be noticeable only when more output makes the problem clear—at which point it may be impractical to retrain the algorithm. And on a broader level, bias may never be entirely removed from AI. After all, there have been attempts to eliminate bias in human groups for centuries, and while there has been progress, not even the most optimistic person would claim we’ve seen anything approaching complete success.

That’s why the greatest risk of all may not be in an AI scientist’s equations but instead in the business mandates and legislative bills that will guide AI’s real-world application. Early adopters’ excitement about promising technologies can obscure the potential downsides when a new technology is embraced on a mass scale—an enormous risk with a technology as powerful as AI. For example, some experts estimate that more than half of the trading in S&P 500 stocks is executed algorithmically, and it’s unclear whether the oversight of these interactions is sufficient to avoid setting off unintended market spirals. On February 5, 2018, for instance, the Dow Jones Industrial Average plunged roughly 800 points in about ten minutes, almost certainly due to decisions made by machines—decisions that were then exacerbated by humans’ fear-driven reactions to the algorithms’ trades.


One way researchers are addressing this problem is in the emerging field of Explainable AI (XAI). While much existing AI falls into the category of “black box” systems that offer no insight into their inner workings, Explainable AI aims to produce algorithms that are transparent when it comes to how and why they make particular decisions—allowing engineers and executives to fully understand what the computer program is doing and why.
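To make the idea concrete, one common building block of explainability is post-hoc feature attribution: estimating how much each input actually drives a model’s predictions. The sketch below is a minimal illustration, not a full XAI system; it assumes Python with scikit-learn and uses the library’s built-in breast cancer dataset and permutation importance to surface which features an otherwise opaque model is relying on.

    # Minimal post-hoc explanation sketch: permutation importance on a "black box" model.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble model stands in for the "black box."
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Permutation importance asks: how much does accuracy drop when one feature
    # is shuffled? Features whose shuffling hurts most are driving the decisions.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]:<25} {result.importances_mean[i]:.3f}")

Attribution scores like these are approximations, and they explain what a model is doing rather than whether it should be doing it; in practice, engineers pair them with simpler, inherently interpretable models and with domain review.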

Explainable AI is an enticing first step, and great strides are being made in this direction. However, further research is needed to close the knowledge gap, improve machine learning, and clarify how AI will perform as it takes on more complicated tasks.

To go back to the example of employment: It’s one thing to evaluate an individual applicant; it’s another to assess the fitness of an AI system built by a sprawling corporation or a highly specialized firm before entrusting it with critical tasks that underpin the functioning of entire societies. What industry bodies should certify and audit such systems? What laws should regulate them? Who ultimately decides on their use? These are all questions society needs to tackle in tandem with the AI scientists and engineers creating the new possibilities. Society as a whole must continually question what role this technology will play in our lives—realizing that the gains we reap through AI today could have far broader consequences in the near future.