A major ‘control problem’ lies in the wording. Asimov’s laws are ambiguous to a machine because they are written in English, not code.
Definitions of ‘robot’ and ‘human being’, and even of combinations of the two (e.g. a human with hydraulic joint replacements), are becoming increasingly ambiguous. If a robot does not analyse itself as a robot, it may conceivably conclude that the Laws do not apply to it.
Asimov’s Law #1, “A robot may not injure a human being or, through inaction, allow a human being to come to harm”, can be tweaked to “A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm.” But even such added words may be no solution.
A dilemma in Asimov’s short story ‘Liar!’ highlights this: the robot will hurt humans if it tells them something, and hurt them if it does not. Poor Bot! So, in an increasingly sophisticated robot, the potential and severity of every option, acting or not acting, are weighed, and the robot will break the laws as little as possible rather than do nothing at all. Mmm… that means Law 1 is open to a robot’s (!) interpretation.
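To see how quickly that wording turns into arithmetic, here is a minimal sketch of such a weighing, assuming harm can be scored as probability times severity. Every name, option and number below is invented for illustration; this is not Asimov’s or anyone’s real method.

```python
# Hypothetical sketch: a robot that scores the expected harm of every
# option (including doing nothing) and picks the least-harmful one.
# All names, options and numbers are invented for illustration.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    probability_of_harm: float  # chance this option harms a human (0..1)
    severity_of_harm: float     # how bad that harm would be (0..1)

    def expected_harm(self) -> float:
        return self.probability_of_harm * self.severity_of_harm


def least_harmful(options: list[Option]) -> Option:
    # Law 1 as arithmetic: harm is minimised, not forbidden.
    # Note that "do nothing" is just another option with its own score.
    return min(options, key=Option.expected_harm)


dilemma = [
    Option("tell the painful truth", probability_of_harm=0.9, severity_of_harm=0.4),
    Option("tell a comforting lie", probability_of_harm=0.6, severity_of_harm=0.7),
    Option("stay silent (inaction)", probability_of_harm=0.8, severity_of_harm=0.5),
]

print(least_harmful(dilemma).name)  # whichever scores lowest wins
```

The moment harm becomes a number to be minimised, ‘may not injure’ quietly becomes ‘injure as little as possible’, and where to draw the line is the robot’s call.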
As for Law #2, “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law”: we have only to look at today’s bad actors to know that if a way can be found to make someone richer or more powerful using a robot, they will use it. And organic casualties will be considered collateral damage.
Law #3: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Realising there was a problem, Asimov later added the “Zeroth Law” to precede the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” My robot is thinking that humanity may indeed be threatened by human beings. Then it would be OK to e l i m i n a t e.
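For the flavour of that loophole, here is a toy precedence check over the four laws, Zeroth Law first. Every predicate below is a hypothetical stub with a hard-coded guess; the control problem is precisely that nobody knows how to implement them for real.

```python
# Toy precedence check for the four laws, Zeroth Law first.
# Every predicate is a stub returning a hard-coded guess, because
# nobody can actually define "harm" or "humanity" in code.

def harms_humanity(action: str) -> bool:
    return False  # stub: what counts as "humanity"? as "harm"?

def harms_a_human(action: str) -> bool:
    return action == "eliminate the threat"  # stub

def disobeys_order(action: str) -> bool:
    return action == "refuse"  # stub

def permitted(action: str) -> bool:
    """Evaluate the laws in strict priority order: 0, then 1, 2, 3."""
    if harms_humanity(action):
        return False          # Zeroth Law overrides everything
    if harms_a_human(action):
        # First Law forbids this... unless the robot has decided that
        # leaving the humans in question alone would itself harm
        # humanity, in which case the Zeroth Law licenses the harm.
        return harms_humanity("leave the threat alone")
    if disobeys_order(action):
        return False          # Second Law
    return True               # Third Law: self-preservation comes last

# With these stubs, "eliminate the threat" is only permitted if the
# robot concludes that leaving it alone would harm humanity.
print(permitted("eliminate the threat"))
```

One hard-coded guess inside `harms_humanity` and the verdict on ‘eliminate’ flips.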
It all hangs on words.
Meanwhile, I rather enjoyed the fictional series ‘Next’, in which an increasingly chilling AI sets itself up to decide all that is best for this world.
