Tiiba comments on Open Thread, August 2010 - Less Wrong

4 Post author: NancyLebovitz 01 August 2010 01:27PM




Comment author: Tiiba 16 August 2010 09:18:42PM 0 points

Okay, more details: if an animal's behavior changes when it's repeatedly injured, it can learn, and learning is goal-oriented. But if it always does the same thing in the same situation, whatever that action is, the action doesn't correspond to a desire.

And the reason this matters for animals is that whatever suffering turns out to be, I'd guess it evolved quite long ago. After all, avoiding injury is a big part of the point of having a brain that can learn.
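The distinction being drawn here (fixed response versus behavior that changes with repeated injury) can be made concrete with a toy sketch. This is purely illustrative and not from the thread; the names `reflex`, `Learner`, and the injury threshold are my own assumptions:

```python
# Illustrative contrast: a hard-wired reflex always responds identically,
# while a learner changes its behavior after repeated injury.

def reflex(stimulus):
    """Fixed stimulus-response mapping: same situation, same action, always."""
    return "withdraw" if stimulus == "injury" else "continue"

class Learner:
    """Adjusts its response after repeated injury in the same situation."""
    def __init__(self, threshold=3):
        self.injury_count = 0
        self.threshold = threshold

    def act(self, stimulus):
        if stimulus == "injury":
            self.injury_count += 1
        # After enough injuries, start avoiding the situation entirely.
        return "avoid" if self.injury_count >= self.threshold else "continue"
```

On Tiiba's criterion, only the second agent's behavior could "correspond to a desire": the reflex never updates, no matter how often it is injured.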

Comment author: WrongBot 16 August 2010 09:39:29PM 2 points

I've programmed a robot to behave in the way you describe, treating bright lights as painful stimuli. Was testing it immoral?

Comment author: Tiiba 16 August 2010 10:32:36PM 1 point

That's why I said it's hairier with machines.

Um, actual pain or just disutility?

Comment author: WrongBot 16 August 2010 11:07:14PM 0 points

That would depend pretty heavily on how you define pain. It's a good question; my first instinct was to say they're the same thing, but it's not quite that simple. Pain in animals is really just an inaccurate signal of perceived disutility. The robot's code contained a function that "punished" states in which its photoreceptor was highly stimulated, and the robot changed its behavior in response, but I'm really not sure whether that's equivalent to animal pain, or where exactly the line is.
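The mechanism WrongBot describes (a negative reward for states where the photoreceptor is highly stimulated, plus trial-and-error adjustment) can be sketched minimally. This is a hypothetical reconstruction, not WrongBot's actual code; the threshold, learning rate, and two-location environment are all assumptions:

```python
# Hypothetical sketch of a robot that "punishes" bright-light states and
# learns to avoid them via simple epsilon-greedy value learning.
import random

LIGHT = {"bright": 1.0, "dark": 0.1}  # photoreceptor reading per location
PHOTO_THRESHOLD = 0.8                  # readings above this are punished


def punishment(reading):
    """Negative reward when the photoreceptor is highly stimulated."""
    return -1.0 if reading > PHOTO_THRESHOLD else 0.0


def train(steps=200, epsilon=0.1, lr=0.2, seed=0):
    """Learn a value estimate per location; mostly pick the best, sometimes explore."""
    rng = random.Random(seed)
    values = {"bright": 0.0, "dark": 0.0}
    for _ in range(steps):
        if rng.random() < epsilon:
            choice = rng.choice(["bright", "dark"])
        else:
            choice = max(values, key=values.get)
        reward = punishment(LIGHT[choice])
        values[choice] += lr * (reward - values[choice])
    return values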

Comment author: Cyan 17 August 2010 12:35:29AM 0 points

Pain has been the topic of a top-level post. I think my own comment on that thread is relevant here.

Comment author: WrongBot 17 August 2010 01:23:51AM 0 points

Ahh, I hadn't seen that before. Thanks for the link.

So, did my robot experience suffering then? Or is there some broader category of negative stimulus that includes both suffering and the punishment of states in which certain variables are above certain thresholds? I think it's pretty clear that the robot didn't experience pain, but I'm still confused.