The movie “2001: A Space Odyssey” introduced the world to HAL, the artificially intelligent digital assistant that murdered members of the crew of Discovery One during its manned mission to Jupiter.
As should be obvious to all: HAL’s killing spree could have been prevented if there had only been adequate regulatory oversight of artificial intelligence before he was asked to open the pod bay doors.
If that seems silly to you, perhaps you shouldn’t tell Elon Musk, the automobile and space entrepreneur who now insists that AI represents “a fundamental risk to the existence of civilization.” The only way out, he says, is to get the government in – early!
“Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” Musk told the National Governors Association meeting. “It takes forever. AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late.”
The practical application for his proposal is a bit hard to fathom. For while it may well be too late if we get to the stage of HAL-like machines, we’re still way too early.
Roger Schank, founder and CEO of Socratic Arts and an expert in artificial intelligence, wrote a piece on LinkedIn that recounted 10 questions that AI is incapable of answering.
- “What would be the first question you would ask Bob Dylan if you were to meet him?”
- “Who do you love more, your parents, your spouse, or your dog?”
- “I can’t figure out how to grow my business. Got any ideas?”
And this one, which sums up the problem in a nutshell:
- “Is there anything else I need to know?”
All of these queries require complex value judgments and a comprehensive assessment of a wide variety of information. Yet they are questions that any human being could quickly answer. AI is still capable of processing and responding to only the most concrete questions. Which is why the future of AI is really about how we incorporate and combine human judgment with machine intelligence, not about sequestering the field into the purview of a new regulatory entity.