Machine Ethics is the emerging field which seeks to create technology with moral decision-making capabilities. A superintelligence will take many actions with moral implications. Programming it to act with respect to our values, given how complex they are, is the main goal of the field of friendly artificial intelligence.
A famous early attempt at machine ethics was made by Isaac Asimov in a 1942 short story: a set of rules known as the Three Laws of Robotics. They formed the basis of many of his stories.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The zeroth law, which takes precedence over the other three, was later extrapolated by his fictional robots from the three programmed laws.
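As a toy illustration only (the laws are fictional and were never implemented), the structure of such a rule set can be sketched as a lexicographic preference over candidate actions, in which lower-numbered laws strictly dominate higher-numbered ones. The boolean fields below are hypothetical stand-ins for the hard judgments, such as whether an action "harms a human", that the laws themselves leave unspecified.

```python
# Toy sketch of the Three Laws as a lexicographic preference over actions.
# Lower-numbered laws dominate, so violating the Second Law is preferable
# to violating the First, and so on. The boolean fields are hypothetical
# placeholders: deciding whether an action harms a human is exactly the
# hard, unsolved part that this sketch assumes away.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    endangers_self: bool = False  # would violate the Third Law

def violations(action: Action):
    """Violation profile ordered by law priority; Python compares tuples
    element by element, so False (no violation) sorts before True."""
    return (action.harms_human, action.disobeys_order, action.endangers_self)

# A robot ordered to harm a human must refuse: disobeying an order
# (Second Law) ranks strictly better than harming a human (First Law).
candidates = [
    Action("obey the order", harms_human=True),
    Action("refuse the order", disobeys_order=True),
]
best = min(candidates, key=violations)
print(best.name)  # -> "refuse the order"
```

Because tuples compare element by element, an action that merely disobeys an order is always preferred to one that harms a human, which is exactly the precedence the laws encode.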
Various moral philosophies have been explored as bases for machines. Several attempts have been made to program robots to obey utilitarian and deontological ethics, and programs which analyze a situation, compare it with others in a database, and return an analysis have been created in several narrow ethical fields; toy sketches of both kinds of approach appear below. An approach developed by Eliezer Yudkowsky, Coherent Extrapolated Volition, would permit the singularity to occur without a complete, explicit ethical theory driving it: rather than being given fixed rules, the AI would extrapolate what humanity would want if we knew more and thought more clearly. Because of the explicitness required to program machines to act ethically, Daniel Dennett has said, "AI makes philosophy honest".
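As a minimal sketch of a utilitarian procedure, assuming hypothetical numeric welfare estimates for each affected party (producing such estimates is itself the unsolved part), an agent can simply pick the action with the greatest total utility:

```python
# Minimal sketch of a utilitarian action chooser: estimate the welfare
# change each action causes for every affected party, then pick the
# action with the greatest total. Actions and utility numbers here are
# hypothetical illustrations, not output of any real system.
outcomes = {
    "swerve left": {"passenger": -0.2, "pedestrian": +0.9},
    "brake hard":  {"passenger": -0.1, "pedestrian": +0.6},
    "do nothing":  {"passenger":  0.0, "pedestrian": -0.9},
}

def utilitarian_choice(outcomes):
    """Return the action maximizing the sum of utilities over all parties."""
    return max(outcomes, key=lambda action: sum(outcomes[action].values()))

print(utilitarian_choice(outcomes))  # -> "swerve left" (total +0.7)
```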
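And a minimal sketch of the case-comparison approach, assuming a hypothetical hand-built database of judged precedents encoded as feature vectors: a new dilemma is matched to the nearest stored case, and that case's recorded judgment is returned.

```python
# Minimal sketch of a case-based ethical advisor: encode each dilemma as
# a feature vector, find the nearest precedent in a database of already
# judged cases, and return that case's judgment. The features, cases,
# and judgments are hypothetical; real systems in narrow domains rely on
# carefully constructed case representations.
import math

# Each precedent: (feature vector, recorded ethical judgment).
# Features (all in [0, 1]): patient autonomy at stake, risk of harm,
# expected benefit of intervening.
precedents = [
    ((0.9, 0.2, 0.3), "respect the patient's refusal"),
    ((0.2, 0.8, 0.9), "intervene to prevent harm"),
    ((0.5, 0.5, 0.5), "refer to an ethics committee"),
]

def nearest_judgment(case):
    """Return the judgment of the precedent closest to the new case."""
    def distance(precedent):
        features, _ = precedent
        return math.dist(features, case)  # Euclidean distance
    _, judgment = min(precedents, key=distance)
    return judgment

print(nearest_judgment((0.85, 0.3, 0.2)))  # -> "respect the patient's refusal"
```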
Today, there are many practical applications of Machine Ethics. Drones used in war, though they risk no operator's life, make targeted killing easier. Robots developed to care for the elderly may reduce their human contact, reduce their privacy, and make them feel devalued, but could also permit them greater independence. The development of driverless cars may save lives but could also increase pollution and change family dynamics.