Uni comments on The Fun Theory Sequence - Less Wrong

Post author: Eliezer_Yudkowsky 25 January 2009 11:18AM

You are viewing a single comment's thread.

Comment author: Uni 28 March 2011 10:05:22PM 0 points

Eliezer_Yudkowsky wrote: "We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need to know how to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living."

I do want AI models of people to be people. If an AI model of a human lacks the important part of the person that we call emotions, there is the risk that it will be a dangerously faulty model of that person. If it can't suffer, it can't understand why humans so eagerly want to get rid of suffering, and so it may in many situations think it knows better than us what is most important to achieve. It may regard the diminishing of human suffering as just as irrational as we would regard turning the planet into paper clips.

From a utilitarian perspective, even from a negative utilitarian perspective, building AI models of humans and equipping them with human emotions and the ability to suffer would not necessarily be unethical. It might be what would, in the long run, minimize suffering. Trying to minimize suffering too fast can be dangerous; it might take away something we really value along with the suffering, even if we don't value the suffering in and of itself. Future superhuman AI will know how to get rid of suffering in a completely safe way, once they have become in every relevant way immensely smarter than humans - but perhaps only if the first transhuman AI we build are equipped with the ability to suffer just like humans. You often have to know a problem well before you can safely solve it, and you can't really know suffering unless you actually feel it. So let's not hesitate to create AI that can suffer (once we can create such AI).

Happiness will grow immensely and suffering will and should be abolished later on, but let's not rush it during the process of creating AI models of humans.

Comment author: nshepperd 28 March 2011 10:53:51PM 6 points

We're talking about giving the models subjective experience, not just "emotions". You want the AI to create conscious minds inside itself and torture them to find out whether torture is bad? And then again every time it makes a decision where torture is a conceivable outcome? I'd hope we can give the AI a model that accurately predicts how humans react to stimuli without creating a conscious observer. Humans seem to be able to do that, at least.

Beware of anthropomorphizing AIs. A Really Powerful Optimization Process shouldn't need to "suffer" for us to tell it what suffering is, and that we would like less of it.