The moment when science fiction becomes fact can take even a technologist’s breath away. I confess to gasping for air when I talked about artificial intelligence last week with Max Tegmark, a professor at MIT and the author of “Life 3.0: Being Human in the Age of Artificial Intelligence.”
Tegmark pointed out that artificial intelligence is no longer science fiction. Within the lifetime of most people reading this column, software will develop the ability to complete complex tasks without human intervention. And it will do them faster and better than we can. That is a deeply disquieting thought.
So, should we stop developing AI? Tegmark doesn’t see that as the right question to ask. As he puts it, the question is “not whether you are for or against AI. That’s like asking our ancestors if they were for or against fire.” Tegmark believes that as tool makers we inevitably create software that achieves artificial intelligence. It is just in our nature.
He then suggests that rather than deny the inevitable, we need to address what achieving artificial intelligence will mean. How comfortable should we be with using it to direct military force or cybersecurity? Should we have AI allocate healthcare or other societal benefits? What is the role of ethics, our collective sense of right and wrong, in a world where software makes instantaneous decisions on its own?
And then there is the thorny issue of consciousness itself, which Tegmark describes as the subjective sense of being. For him, it is the difference between software that gets you from point A to point B and software that admires the scenery and feels the wind rushing over its sensors.
Does consciousness matter? Tegmark thinks it does. Eventually, our software will develop the ability to process the world around it with a subjective sense of self. Software may never have feelings the way we do, but it will think for itself based upon a sense of "thereness" distinct from the task at hand. It will be conscious, but in a way that is alien to us because it is not human. Software may provide us with a "first contact" opportunity.
When that happens, we will face profound challenges. What will be left for the human brain when software can write better songs, make better artwork and allocate resources more efficiently? Will software become our overlord, our ally or our servant? Tegmark is asking us to consider that once artificial intelligence exists, these questions won't be answered only by what we want.