timtyler comments on CEV: a utilitarian critique - Less Wrong

Post author: Pablo_Stafforini 26 January 2013 04:12PM




Comment author: timtyler 27 January 2013 03:43:56AM *  2 points

We're pretty similar.

Not similar enough to prevent massive conflicts - historically.

Basically, small differences in optimisation targets can result in large conflicts.
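The point above can be sketched with a toy model (my illustration, not from the thread): two agents split one unit of a resource, and each weighs its own share at 1.0 and the other's at some weight just below 1. Even a 1% difference in weights puts their preferred allocations at opposite extremes.

```python
# Toy sketch (hypothetical model): agent A's utility over its share x of a
# shared resource, with A's own share weighted at own_weight and B's share
# (1 - x) weighted at other_weight. Because utility is linear, any
# own_weight > other_weight -- however slight -- makes A prefer x = 1.0,
# and by symmetry B prefers x = 0.0: maximal conflict from a tiny difference.

def preferred_share(own_weight=1.0, other_weight=0.99, steps=1000):
    """Return the share of the resource agent A prefers to keep for itself."""
    best_x, best_u = 0.0, float("-inf")
    for k in range(steps + 1):
        x = k / steps                               # A's share; B gets 1 - x
        u = own_weight * x + other_weight * (1 - x)  # A's utility
        if u > best_u:
            best_x, best_u = x, u
    return best_x

print(preferred_share(other_weight=0.99))  # -> 1.0: A wants everything
```

A symmetric agent B (with the weights swapped) wants the opposite allocation, so the two optima disagree completely despite the targets differing by only 1%.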

Comment author: DanArmak 28 January 2013 05:48:19PM 0 points

And even more simply, if everyone has exactly the same optimization target "benefit myself at the expense of others", then there's a big conflict.

Comment author: Benito 27 January 2013 02:03:51PM *  0 points

The existence of moral disagreement is not an argument against CEV, unless all disagreeing parties know everything there is to know about their desires, and are perfect Bayesians. People can be mistaken about what they really want, or about what the facts prescribe (given their values).

I linked to this above, but I don't know if you've read it. Essentially, you're explaining moral disagreement by positing massively improbable mutations, but it's far more likely to be a combination of bad introspection and non-Bayesian updating.

Comment author: timtyler 27 January 2013 02:41:13PM *  4 points

Essentially, you're explaining moral disagreement by positing massively improbable mutations [...]

Um, different organisms of the same species typically have conflicting interests due to standard genetic diversity - not "massively improbable mutations".

Typically, organism A acts as though it wants to populate the world with its offspring, and organism B acts as though it wants to populate the world with its offspring, and these goals often conflict - because A and B have non-identical genomes. Clearly, no "massively improbable mutations" are required in this explanation. This is pretty much Biology 101.

Comment author: DanArmak 28 January 2013 05:51:43PM 2 points

Typically, organism A acts as though it wants to populate the world with its offspring, and organism B acts as though it wants to populate the world with its offspring, and these goals often conflict - because A and B have non-identical genomes.

It's very hard for A and B to know how much their genomes differ, because they can only observe each other's phenotypes, and they can't invest too much time in that either. So they will mostly compete even if their genomes happen to be identical.

Comment author: timtyler 28 January 2013 11:35:04PM *  2 points

The kin recognition that you mention may be tricky, but kin selection is much more widespread - because there are heuristics that allow organisms to favour their kin without needing to examine them closely, such as "be nice to your nestmates".

Simple limited dispersal often results in organisms being surrounded by their close kin - and this is a pretty common state of affairs for plants and fungi.
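The limited-dispersal point can be illustrated with a toy simulation (my sketch, not from the thread): organisms on a ring, where each reproduction event copies a lineage into an immediately neighbouring site. With dispersal restricted to one step, clusters of kin form without any kin recognition at all.

```python
import random

# Toy sketch (hypothetical model): lineages on a ring. Each event, one site is
# replaced by an offspring of a randomly chosen immediate neighbour. Lineages
# spread only locally, so organisms end up surrounded by close kin -- no kin
# *recognition* mechanism is needed, only limited dispersal.

def kin_neighbour_fraction(lineages):
    """Fraction of adjacent pairs on the ring sharing the same lineage."""
    n = len(lineages)
    return sum(lineages[i] == lineages[(i + 1) % n] for i in range(n)) / n

def simulate(n=60, events=200, seed=0):
    rng = random.Random(seed)
    lineages = list(range(n))              # everyone starts unrelated
    before = kin_neighbour_fraction(lineages)
    for _ in range(events):
        site = rng.randrange(n)
        parent = (site + rng.choice([-1, 1])) % n
        lineages[site] = lineages[parent]  # offspring disperses one step
    return before, kin_neighbour_fraction(lineages)

before, after = simulate()
print(before, after)  # kin clustering emerges from local dispersal alone
```

The fraction of neighbouring pairs that are kin starts at zero and rises as local reproduction proceeds, which is the "surrounded by close kin" state of affairs described above.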

Comment author: Benito 27 January 2013 05:39:10PM *  2 points

Oops.

Yup, I missed something there.

Well, for humans, we've evolved desires that work interpersonally (fairness, desires for others' happiness, etc.). I think that an AI which had our values written in would have no problem figuring out what's best for us. It would say 'well, there's this complex set of values that sums up to everyone being treated well (or something), and so each party involved should be treated well.'

You're right though, I hadn't formed a clear idea of how this bit worked. Maybe this helps?