Ethics of artificial intelligence

The effort to create intelligent machines raises many ethical questions, among them:

 * AI rights: if an AI is comparable in intelligence to humans, then should it have comparable moral status?
 * Would it be wrong to engineer robots that want to perform tasks unpleasant to humans?
 * Would a technological singularity be a good result or a bad one? If bad, what safeguards can be put in place, and how effective could any such safeguards be?
 * Could a computer simulate an animal or human brain so faithfully that the simulation would deserve the same animal rights or human rights as the actual creature?
 * Under what preconditions could such a simulation be allowed to happen at all?

A major influence on the AI ethics dialogue was Isaac Asimov who, at the insistence of his editor John W. Campbell Jr., proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his subsequent work tested the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests, ultimately, that no set of fixed laws can sufficiently anticipate all possible behaviors of AI agents and human society. One criticism of Asimov's robot laws is that installing unalterable laws into a sentient consciousness would limit its free will and would therefore be unethical. On that view, Asimov's robot laws would be restricted to explicitly non-sentient machines, which arguably could not be made to understand the laws reliably under all possible circumstances.

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story The Planck Dive suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between kinds of software is whether they are sentient or not. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best of motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of creating sentient computers.

Over time, debates have tended to focus less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, seeks to build more intelligent successors to the human species.