Howdy00

I think this is an important consideration if we intend to build benevolent AIs that are harmonious with our own sense of morality.

Howdy00

This question reminds me of a dilemma posed to medical students. It went something like this:

If the opportunity presented itself to secretly, with no chance of being caught, 'accidentally' kill a healthy patient who is seen as wasting their life (smoking, drinking, not exercising, lacking goals, etc.) in order to harvest their organs and save five other patients, should you go ahead with it?

From a utilitarian perspective, committing the murder makes perfect sense. The person who introduced me to the dilemma also presented a rationale for saying 'no'... Thankfully it wasn't "it's just wrong" or even "murder is wrong"... The suggested answer was: "You wouldn't want to live in a world where doctors might regularly operate in such a manner, nor would you want to be a patient in such a system... It would be terrifying."

I suspect the key elements in both the hospital and dust speck scenarios are a) someone having power over an aspect of other people's fates, and b) the level of trust those people place in that person. The net-sum calculation of overall 'good' might well suggest torture or organ harvesting as the solution, but how would you feel about nominating someone else to be the one who makes that decision... Would you want that person to favor the momentary dust-speck incident for 3^^^3 people, or the 50-year torture of a single individual?