User:Ataylor044/sandbox

The idea of applying concrete laws to the field of robotics has been explored in multiple areas of literature and has been a controversial topic of debate for many years. The most notable reference to laws of robotics is Isaac Asimov's Three Laws of Robotics, consisting of four[a] laws defining how robots should behave.

Asimov's Laws of Robotics

Isaac Asimov used three laws (and later a fourth) when writing various science fiction stories about robots. The laws govern how any given robot should behave and interact with humans and its environment.

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
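The four laws form a strict priority ordering: the Zeroth Law overrides the First, which overrides the Second, and so on. As a rough illustration (not from Asimov's fiction), this ordering can be modeled as a sequence of veto checks, where the first unsatisfied law in priority order blocks an action. The action attributes below are hypothetical, chosen only to make the sketch runnable:

```python
# Illustrative sketch of Asimov's laws as a priority-ordered list of checks.
# Each check returns True when the proposed action is acceptable under that law.
# The attribute names ("harms_human", etc.) are hypothetical simplifications.

def check(action, laws):
    """Return the name of the first violated law, or None if all pass."""
    for name, is_acceptable in laws:
        if not is_acceptable(action):
            return name
    return None

LAWS = [
    ("Zeroth", lambda a: not a["harms_humanity"]),
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: a["obeys_orders"]),
    ("Third",  lambda a: not a["endangers_self"]),
]

safe = {"harms_humanity": False, "harms_human": False,
        "obeys_orders": True, "endangers_self": False}
print(check(safe, LAWS))   # None: no law vetoes the action

risky = dict(safe, harms_human=True)
print(check(risky, LAWS))  # "First": vetoed before lower laws are consulted
```

Because the checks run in priority order, a lower law is only ever consulted when every higher law is already satisfied, which is how Asimov's "except where such orders would conflict" clauses behave.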

Issues Surrounding Asimov's Laws

There is much controversy over the validity of Asimov's laws and whether an autonomous robot could (or should) realistically follow them.

First Law

Robots are already used in the United States military, which defies the first law. This raises the idea that not all robots are equal: some should follow laws and others should not. Robots should follow laws as applicable to their utility.

Second Law

In their current state, robots have only a rudimentary understanding of spoken language; language processing is still imperfect, and likely far from it. Humans also communicate via non-verbal signals, including body language and hand gestures. In order to fully obey human orders, a robot must first understand them completely, and any misunderstanding of an instruction could be fatal. The second law also implies that robots have no initiative, so they will not act without being told to.

Third Law

Many robots are intended to be placed in harm's way in place of humans, so self-preservation cannot be their top concern. Robots in the United States military are more akin to drones, which are controlled by human operators. Humans cannot process and relay information as fast as robots, and thus cannot protect a robot as well as the robot could protect itself. In order to successfully protect itself, a robot must have some control over rudimentary functions allowing it to move out of harm's way.

Implementing Realistic Robotic Laws

Many studies have explored reasonable and realistic implementations of laws in robotics.[1] These studies have suggested alternative laws that can be implemented to adequately cover all areas of robotics. The alternative laws are relative to the robot's intended purpose and the status of interacting humans, and grant limited autonomy to the robot itself.

Notes

a.^ Taking into account the "zeroth" law.

References

  1. ^ Murphy, Robin R.; Woods, David D. (August 2009). "Beyond Asimov: The Three Laws of Responsible Robotics". IEEE Intelligent Systems. 24 (4): 14–20. doi:10.1109/MIS.2009.69.