Machine Ethics is the emerging field that seeks to create technology with moral decision-making capabilities. A superintelligence will take many actions with moral implications; programming it to act morally is the main goal of the field of friendly artificial intelligence.
A famous early attempt at machine ethics was made by Isaac Asimov in the 1942 short story "Runaround": a set of rules known as the Three Laws of Robotics, which formed the basis of many of his stories.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
He later added a zeroth rule, used in subsequent expansions of the series:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These rules were implemented in fictional "positronic brains"; in reality, the field today concerns itself mainly with the practical ethics and programming of systems such as robots in war, home assistants, and, more recently, driverless cars.
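Asimov's laws form a strict precedence hierarchy: each law binds only insofar as it does not conflict with the laws above it. As a minimal illustrative sketch, assuming a toy representation that is not drawn from Asimov or from any real system, such a hierarchy could be encoded as a priority-ordered rule list; every name below (Action, Rule, first_violation) is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical stand-in for a candidate action. The boolean flags paper
# over the genuinely hard part: deciding what counts as harm.
@dataclass
class Action:
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# One law: a priority (0 overrides 1, 1 overrides 2, ...) plus a
# predicate saying whether a given action violates it.
@dataclass
class Rule:
    priority: int
    violated_by: Callable[[Action], bool]
    text: str

LAWS: List[Rule] = [
    Rule(0, lambda a: a.harms_humanity,
         "A robot may not harm humanity, or, by inaction, "
         "allow humanity to come to harm."),
    Rule(1, lambda a: a.harms_human,
         "A robot may not injure a human being or, through inaction, "
         "allow a human being to come to harm."),
    Rule(2, lambda a: a.disobeys_order,
         "A robot must obey the orders given it by human beings except "
         "where such orders would conflict with the First Law."),
    Rule(3, lambda a: a.endangers_self,
         "A robot must protect its own existence as long as such protection "
         "does not conflict with the First or Second Law."),
]

def first_violation(action: Action) -> Optional[Rule]:
    """Return the highest-priority law the action violates, or None."""
    for rule in sorted(LAWS, key=lambda r: r.priority):
        if rule.violated_by(action):
            return rule
    return None

# An action that both injures a human and disobeys an order is flagged
# under the First Law, because the lower number takes precedence.
violation = first_violation(Action(harms_human=True, disobeys_order=True))
print(violation.priority, violation.text)
```

In this sketch, conflicts are resolved lexicographically by priority rather than by weighing rules against one another; all the unformalized difficulty hides in the predicates that decide whether an action harms anyone, which is precisely where both Asimov's plots and real machine ethics run into trouble.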