Experts agree that artificial intelligence’s capabilities are expanding at an exponential rate. AI software can exercise intelligence that matches or exceeds our own at more and more tasks. Once software can make its own decisions, what is left for the human to do?
The autonomous cars on our roads, the automatic pilots flying our planes and the customer service software that measures our purchasing intent are all real-world examples of how rapidly AI’s capabilities are expanding and touching more aspects of our lives.
The key is that AI can make more and more decisions without human input and act autonomously. But aren’t we still the only beings with intelligence? Shouldn’t that be the criterion for who gets to make decisions? Paul Scharre, a local expert on the military’s use of AI and author of the new book “Army of None,” pointed out to me that “intelligence is the ability to take in information and accomplish a task. This is not something that is unique to humans, nor is it inevitable that we will always be the best at applying intelligence to a particular task.” Humanity must have a plan for dealing with this new reality.
The question of applying autonomous AI to the military is getting most of the public attention from the tech industry and the academic community. For instance, tech leaders such as Elon Musk have called for an outright ban on autonomous AI in warfare.
I believe this is happening for two reasons. First, the focus on autonomous AI’s use in war arises from people’s distaste for having software decide who lives and who dies. Second, individuals who will benefit financially from AI’s deployment in the civilian sector affirmatively use the military issue to deflect the conversation away from their own activities and plans.
The talk about banning military autonomous AI puts all our energy into the one problem that societal rules and norms already address. Scharre suggested to me that we should evaluate the likely use of AI in the military the same way we view nuclear weapons and other advanced military technologies. If we do, we realize that what prevents their use is the likelihood of a similar response from an adversary. I agree with him. Is it sad that we develop technologies that make our ability to kill one another ever more efficient? Absolutely. But we have at least figured out how to deal with these technologies’ implications.
Outside of the messy but addressable military issue, there is a bigger, more fundamental issue arising out of AI that many are dancing around. AI, particularly autonomous AI, will be more efficient than humans at many, many tasks. AI software won’t get tired, have a bad day or get distracted. In many cases, it will make decisions much faster than a human could.
Because our economy rewards efficiency, applying AI in business will be financially rewarding. This takes us to the issue fewer seem willing to discuss. We must discuss whether and to what extent we can tolerate the efficiency gains autonomous AI creates in a capitalist society if it makes human labor irrelevant or comparatively more expensive.
We do not have social norms or anything close to a political consensus on how to handle the inevitable job displacement of a population that in many instances is not only computer illiterate but also lacks adequate reading and math skills. Moreover, many highly educated people will also see their jobs disappear. We need to address their situation in a long-term, comprehensive way. The world is about to get more competitive because humans will no longer be competing just with other humans for employment.
Although killer robots trigger a visceral emotional response, as should any weapon of mass destruction that can kill humans efficiently, we urgently need a societal commitment to become a country of people who understand AI and thus can work with it. We must develop a technically skilled population with critical thinking skills and the creativity necessary to excel at the unique, non-repetitive tasks that humans will remain better able to do for the foreseeable future. Failure to do that will marginalize more and more people, and eventually challenge the legitimacy of a democratic and capitalistic society.
We can co-exist with AI, but we must get ready by making immediate, substantial and forward-thinking investments in education at all levels. Killer robots are scary, but not having a plan for what AI is going to do to our economy is much more frightening.