Multiheaded comments on Help, help, I'm being oppressed! - Less Wrong

30 Post author: Yvain 07 April 2009 11:22PM


Comments (141)


Comment author: Multiheaded 28 January 2012 09:02:38AM 2 points [-]

splitting off into different species and civilizations with diverging values

Well, don't you think something like this would be more or less a clear win for both sexes?

Comment author: [deleted] 28 January 2012 09:22:44AM *  6 points [-]

Generally, actually, I would. Honestly, as much as I love sexual and romantic entanglement with women, I can't help but feel giddy about the awesomeness (according to my values) of an all-male civilization on Mars. And I've already spoken about how I would probably take a pill that would make me asexual. Sexbots or homosexuality-inducing pills seem an inferior solution, but not by much, as long as the pill that would make me homosexual changed just my sexual preference and nothing else (I suspect typical male homosexual brains actually differ in other subtle, systematic ways from typical heterosexual male brains).

The problem comes here:

Note that as with encountering an alien civilization there is no guarantee whatsoever that peaceful coexistence would be viable in the long term.

Most LessWrongers have given very little thought to the idea that human values might differ significantly enough to be incompatible. Even fewer have thought of finding a way for them to coexist rather than just making sure their own value set gobbles up as much matter as possible.

Comment author: [deleted] 28 January 2012 09:38:09AM 0 points [-]

Even fewer have thought of finding a way for them to coexist rather than just making sure their own value set gobbles up as much matter as possible.

That's because it seems more likely that there will be only one FAI to rule them all, and whatever values it has will dominate the light-cone.

Comment author: [deleted] 28 January 2012 09:53:39AM *  5 points [-]

An FAI is more likely to actually be an FAI if, at the time of its construction, people don't engage in a last desperate war for ownership of the entire universe for eternity.

The currently proposed solution to avoid such a negative-sum arms race (where aggressive action and recklessness reduce the likelihood of a friendly AI for nearly all other human value sets, but increase the likelihood of one for your particular value set) has been to hope that our values aren't really different, and that we're just (for now) too dumb to see this.

Comment author: [deleted] 28 January 2012 11:12:16AM 3 points [-]

It's a bit worse than that. The "hope" seems to be more along the lines of:

"The existence of sufficient coherence is not certain; if it is not present, the system implementing CEV should execute a "controlled shutdown" rather than behaving unpredictably." -- Nick Tarleton, "Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics"

Never mind how a nascent valueless AI is supposed to convince itself to go back into the box.