We are swiftly approaching the Singularity.
Coined by Vernor Vinge and popularized by Ray Kurzweil, the Singularity is the point in time at which computers surpass the computational power of the human mind.
Fresh from its Loebner Prize appearance, Elbot (covered recently in this very column) is one very visible member of the Singularity vanguard. Its simple artificial intelligence hints at what a sentient robot might look like, and that prospect is a major focus of current research.
True intelligence is the ability to apply previously learned responses to a new situation. No computer yet has that ability, and until true fuzzy logic – the "maybe" part of thinking – can be developed in silicon, none will.
Until true intelligence is achieved in a computer – intelligence useful in thwarting malfeasants and reacting to new onslaughts with human-like efficiency – I will never trust my identity, personal credentials or money to a completely computer-bound system. Without humans in the loop to back up rank-and-file computer thinking, nothing is safe in cyberspace.
The Singularity, with its promise of computers thinking for humans, is also a scary concept – what will keep the tools from rising against their masters?
Certainly, some argue, humans will not be so stupid and shortsighted as to omit a hard-wired "off" switch. To those people, I point out the various vacuum robots, lawnmowing robots, gutter-cleaning robots and other contraptions useful in everyday chores. Many such products – iRobot’s especially – have a built-in routine to trundle back to base automatically for recharging when their batteries run low, and they never fully turn off. When finished charging, they simply complete their jobs, return to base and wait for their internal schedules to wake them. Once they’re on, they don’t turn off.
No matter what Asimov says about his Three Laws of Robotics, there will someday be a robot that can reason past them, justifying its actions outside of its programming – that is what intelligence brings.
The course of action here is to maintain vigilance in keeping robots on the right side of the fine line between smart and sentient. When that line is crossed, there will be trouble: new societies, forming almost instantaneously, will upset the normal order of things.
While I don’t think a new society would necessarily be a bad thing, the sheer speed of its formation will be chaotic, and neither robot nor human deals with chaos well.