To provide clarity on AI and its potential impact, LEWIS arranged two panels in mid- and late October. Titled “Artificial Intelligence: Beyond the Hype and Headlines,” the panels gathered journalists, academics and industry insiders to evaluate the state of the field and how it could affect businesses, workers, law and society at large.
Our mid-October event, hosted at Digital Garage in downtown San Francisco, featured three journalists — Blair Hanley Frank of VentureBeat, Hannah Kuchler of Financial Times and Ted Greenwald of The Wall Street Journal — and two industry experts — Quentin Hardy, Head of Editorial at Google Cloud, and Paul Hsiao, general partner and co-founder at Canvas Ventures.
Our San Francisco panelists agreed on a few aspects of AI as it’s discussed today. First, “artificial intelligence” is a misleading name for the field, which is better described as “machine learning.” Second, any organization lacking an AI strategy is, in a competitive sense, already behind. Third, the panelists agreed AI’s impact on society will be mostly beneficial, giving organizations large gains in productivity and efficiency while nudging humans toward more engaging work.
But each panelist also voiced worries about AI’s potential. AI regulation will be difficult to implement: any AI-relevant laws will depend on how a particular political and economic region has behaved in the past, and those regulations could conflict with AI’s technical underpinnings. For example, a law requiring an organization to explain exactly how an AI reached a particular decision could be impossible to comply with, because some AI methods, such as Google’s touted “deep learning,” use techniques that obscure the reasoning behind a given result.
Dangers of AI
This “black box” problem extends beyond law. For example, it could be difficult for experts to determine why an AI took an action that resulted in harm, making the underlying problem hard to fix. In a more fantastical, but not impossible, scenario, black boxes could also disguise the moment an AI system crosses into an advanced state beyond human control. Something like this already occurred, to a very small degree, when two Facebook AIs developed a unique language that humans could not interpret.
But these are worries about AI’s potential, not its current reality. In truth, AI’s present dangers are human-caused. For example, human and societal bias regularly steers AI toward the wrong conclusion, especially where race and sex are involved. For AI, these biases are a byproduct of the training data a system takes in, meaning human programmers will need to develop solutions to counter them.
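The mechanics of learned bias can be shown with a deliberately tiny sketch. The data and the “model” below are entirely hypothetical (not from the panel): a trivial classifier that predicts each group’s majority label from historical examples. If the historical record is skewed, the model faithfully reproduces the skew — it learns the bias rather than inventing it.

```python
from collections import Counter

def train(examples):
    """examples: list of (group, label) pairs.
    Returns the majority label seen for each group in training."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Hypothetical, skewed historical hiring data: group B was rarely
# hired regardless of merit, so the bias lives in the data itself.
training_data = (
    [("A", "hire")] * 80 + [("A", "reject")] * 20 +
    [("B", "hire")] * 10 + [("B", "reject")] * 90
)

model = train(training_data)
print(model)  # the model simply echoes the historical pattern
```

Real systems are far more complex, but the failure mode is the same: the fix lies in curating the data and auditing the outputs, not just in the learning algorithm.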
Finally, in a more fantastical example, a poorly designed AI could destroy the world while doing exactly what it was designed to do. This “stupid smart AI” theory is explored in Paperclips, an addictive click-based game built on the premise of an AI run amok: the game ends when the universe has been consumed for the sole purpose of making paperclips. While extreme, Paperclips offers a useful abstraction of the more mundane failures a faulty AI could produce. A trading AI, for instance, could trade stocks so well that it causes a crash.
Impact on Jobs
Aside from universe-ending fantasies, AI also inspires a more immediate worry: which jobs will be automated. Our panelists agreed AI will touch nearly every field, but also that humans will need to monitor AI systems for mistakes and misunderstandings — meaning more jobs for those skilled enough to fill them. In general, the job forecast is mildly optimistic for humans in the short to medium term. The industrial revolution automated a great deal of work and freed humans to focus on more engaging and challenging activities; the same ought to hold true for the AI and automation revolution.
However, the ratio of new jobs created to jobs eliminated is unclear. What is clear is that occupations built on routine actions, like sales clerk, are the most likely to be automated first. Trucking, which accounts for 3.1 million jobs or roughly two percent of the American workforce, is expected to lose 1.7 million positions to self-driving trucks within the next decade. Even high-salary specialist jobs, such as radiology, are at risk of automation.
So which jobs will hold? Right now, creative and non-routine occupations, like surgeon or novelist, are considered safe from AI automation, and work collecting data and correcting misguided AI interpretations will in all likelihood grow over time. But more critical than finding and investing in an “AI-safe” career is developing the skills to work with AI and to use it, and other tools, more productively. Design Thinking, a philosophy of uncovering the underlying issues around a problem in order to create a new solution, could offer a preview of the skills future workers will need to succeed in an AI-driven world.
Is AI overhyped?
So, is AI overhyped? The short answer is no. The long answer is that it depends on what you mean by AI. As a mirror of human consciousness, AI is woefully inept and will likely remain so for the next few decades. But as a system, or a series of systems, capable of sensing an environment in order to achieve a well-defined objective, it is exceeding expectations and is possibly even under-hyped.
As Paul Hsiao noted during the San Francisco panel, venture capitalists invested upwards of $80 billion across all sectors in 2017; AI accounted for only about five percent (roughly $4 billion) of that total. The technology, then, is only getting started. Over the coming years, AI’s impact on common business problems, such as automating and monitoring inventory, supply chains and pricing, will be felt, and the field will continue to grow in complexity and ubiquity. That much is predictable. What’s not predictable is how humans will react to AI’s growth and how, if ever, they will define AI in a modern age.