JoshuaZ comments on Is friendly AI "trivial" if the AI cannot rewire human values? - Less Wrong

-5 Post author: Alerus 09 May 2012 05:48PM




Comment author: JoshuaZ 10 May 2012 04:39:28AM 1 point [-]

I'm not sure why running a complex society needs to be a condition. If we all revert to hunter-gatherers then it still satisfies the essential conditions.

That's a problem even if it isn't a doomsday scenario. Changes in animal welfare attitudes would probably make most of us unhappy, but a society where torturing cute animals to death is acceptable could still run as a complex society. Similarly, allowing infanticide would work fine (heck, for that one I can think of some pretty decent arguments why we should allow it). And while not doomsday scenarios, other situations that could suck a lot can also be constructed. For example, you could have a situation where we're all stuck with 1950s gender roles. That would be really bad but wouldn't destroy a complex society.

Comment author: Alerus 10 May 2012 01:47:27PM 0 points [-]

Hunter-gathering is not sustainable for a large-scale complex society. It is not a position we would favor at all, and I'm struggling to see why an AI would try to make us value that setup, or how you think a society with technology advanced enough to build strong AI could be convinced of it.

Views on killing animals are more flexible, since the human objection to it seems to come from a level of innate compassion for life itself. So I could see that value being more manipulable as a result. I don't see what that has to do with a doomsday set of values, though.

1950s gender roles were abandoned because (1) women didn't like them (in which case maximizing people's well-being would suggest not having such gender roles) and (2) they were less productive for society, in that suppressing women limits the set of contributions to society.

I don't think you've presented here a set of doomsday values that humans could be manipulated into holding by persuasion alone, or demonstrated why they would be a set of values the AI would prefer humans to have for maximization.