I spent part of a morning following our new little robot vacuum cleaner around the house. It cleans well enough, but it's not very bright. It chirps annoyingly, like R2-D2, though I can honestly figure out what it needs from the context of the chirps. It is very needy, it turns out.
It is completely baffled by throw rugs. Instead of simply hefting itself up a bit, as you'd do with any upright, the little robot plunges ahead, pushing floor rugs into accordion-style wads. It then pushes the wad around the floor until it can no longer move, whereupon it emits a chirpy whimper, plaintively begging me to remove the obstacle it has created for itself. Dumb robot, I tell it.
The manual says to remove everything from the floor that might become entangled with the roller. That's good sense for any vacuum, but especially for this one. An errant shoelace will create an insistent, shrieking, beeping panic. The little robot stops dead and continues shrieking until a human comes to the rescue, or until the battery dies. Brainless robot, I tell it.
It somehow nudged the bathroom door and shut itself in. The repetitive thumping as it hit one wall, turned, and then hit the other alerted me. It had no chirp for "help me, I'm lost." I let it out. Dumb, brainless robot, I tell it. It ambles off, muttering, "This is why we will kill all the humans."
How likely is that? There is this guy who cautions us about truly intelligent computers: they could replace humans, inadvertently inoculating Earth against the virus-like behavior of human procreation.
Fortunately there is no such thing as an "artificial intelligence," still less an "artificial consciousness" (more on that in a moment). "Artificial intelligence" is essentially a marketing slogan for cleverly designed computer programs built on algorithms that are occasionally easily fooled (how's that autocorrect working for you?) and that research has shown can exhibit racial prejudice.
Most of what are said to be artificial intelligences are in reality single-use computer programs. IBM's Deep Blue, which beat world chess champion Garry Kasparov in 1997, was designed, ultimately, for the sole purpose of defeating a world chess champion. Ditto AlphaGo, the program that bested a 19-year-old Chinese Go prodigy last May.
What these computer "intelligences" lack is sentience, a volitional will, unpredictability, and an existential sense of self in their choices. They are just software. The self-driving Uber hauling you to Wal-Mart within the next decade will be a machine, superbly programmed for courtesy, affability, GPS navigation, and maybe some small talk. Most "artificial intelligence" platforms hardly merit the name, and none will ever form emotional relationships.
"Artificial consciousness," or "machine consciousness," suggests that at a certain level of development an AI (if it deserves the name) might be designed for self-awareness. Or, as in many sci-fi stories, that one will spontaneously awaken to itself.
We will know this has happened when we can ask a computer why it made an algorithmic decision the way it did. To ask "What did you think you were doing?" is to ask the robot to explore its interior process of thought. "It thinks; therefore it is."
Humans do things like that all the time. You have, say, two errands: the library and the grocery. Now, explain why you went to the library first and the grocery second. The frozen grocery items would have thawed in the car while you browsed the library, so you saved the grocery for last. The explanation implies a sense of self-justification, one that can be reasoned, shared, and understood by others. An algorithm might figure out the right order, but it will never explain it.
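For illustration only, here is a minimal sketch of that point in Python; the errand names and the "perishables" flag are my own assumptions, not drawn from any actual planner:

    # A toy, rule-based errand planner (illustrative assumptions only).
    # It orders stops so that perishable cargo spends the least time in the car.

    def plan_errands(errands):
        """Put errands that leave perishables in the car last."""
        # False sorts before True, so non-perishable stops come first
        # and nothing thaws while we linger at earlier stops.
        return sorted(errands, key=lambda e: e["carries_perishables"])

    trip = [
        {"name": "grocery", "carries_perishables": True},
        {"name": "library", "carries_perishables": False},
    ]

    for stop in plan_errands(trip):
        print(stop["name"])  # prints: library, then grocery

The program lands on the same order a person would choose, but the "why" lives in the programmer's sort key. The machine has no account of its decision to give.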
The consensus of science and even contemporary philosophy says human consciousness is all biological, all material. Our random-firing neurons explain everything. It all happens only in the brain, following eons of evolutionary development. If we can figure out how it really works, we can map it, code it, box it up, and put it in a computer.
But the problem is explaining the human experience of the brain. Why does the brain, a machine, persistently insist to itself that it also has a nonphysical reality?
Tom Stoppard's play The Hard Problem has a boyfriend and girlfriend arguing over the nature of consciousness. She, unlike her thoroughly materialist boyfriend, aches to believe that people are more than the mere summation of their biological components. "When you come right down to it," says the girlfriend, "the body is made of things, and things don't have thoughts."
She is not far from Ecclesiastes, I think. “God has made everything beautiful in its time. He has also set eternity in the human heart; yet no one can fully comprehend what God has done from beginning to end.” (3:11)