
Adele_L comments on Why Politics are Important to Less Wrong... - Less Wrong Discussion

Post author: OrphanWilde 21 February 2013 04:24PM


Comments (96)


Comment author: Adele_L 21 February 2013 04:38:54PM 10 points

Which is where I think politics offers a pretty strong hint to the possibility that the Friendliness Problem has no resolution:

We can't agree on which political formations are more Friendly. That's what "Politics is the Mindkiller" is all about: our inability to come to an agreement on political matters. It's not merely a matter of the rules - which is to say, it's not a matter of the output: we can't even agree on which values should be used to form the rules.

I'm pretty sure this is a problem with human reasoning abilities, and not a problem with friendliness itself. Or in other words, I think this is only very weak evidence that friendliness is unresolvable.

Comment author: Benito 21 February 2013 05:13:15PM 3 points

Indeed. If we were perfect Bayesians with unlimited introspective access, and we STILL couldn't agree after an unconscionable amount of argument and discussion, then we'd have a bigger problem.

Comment author: OrphanWilde 21 February 2013 05:25:28PM 4 points

Are perfect Bayesians with unlimited introspective access more inclined to agree on matters of first principles?

I'm not sure. I've never met one, much less two.

Comment author: Plasmon 21 February 2013 05:29:53PM 1 point
Comment author: Adele_L 21 February 2013 05:33:30PM 12 points

They will agree on what values they have, and what the best action is relative to those values, but they still might have different values.
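As a toy illustration of this point (my own sketch, not part of the original thread): two ideal agents can share identical beliefs about the world, agree on every factual question - including what each value system recommends - and still choose different actions, purely because their utility functions differ. The policy names, outcomes, and numbers below are all invented for the example.

```python
# Shared beliefs: probability that each (hypothetical) policy produces each outcome.
beliefs = {
    "policy_A": {"equality": 0.8, "liberty": 0.2},
    "policy_B": {"equality": 0.3, "liberty": 0.7},
}

# Two invented value systems: utilities assigned to each outcome.
values_1 = {"equality": 1.0, "liberty": 0.2}
values_2 = {"equality": 0.2, "liberty": 1.0}

def best_action(values):
    """Expected-utility-maximizing policy under the SHARED beliefs."""
    def expected_utility(action):
        return sum(p * values[outcome] for outcome, p in beliefs[action].items())
    return max(beliefs, key=expected_utility)

# Both agents compute the same answer to "what does each value system recommend?"
# (a factual question), so there is no disagreement about facts...
assert best_action(values_1) == "policy_A"
assert best_action(values_2) == "policy_B"
# ...yet the agents' own choices differ, because their values differ.
```

The point of the sketch: perfect agreement on beliefs and on the expected-utility calculation still leaves room for divergent choices when the utility functions themselves differ.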

Comment author: Benito 22 February 2013 11:47:59PM 1 point

My point exactly. Only if we are sure agents are best representing themselves can we be sure their values are not the same. If an agent is unsure of zir values, or extrapolates them incorrectly, then there will be disagreement that doesn't imply different values.

With seven billion people, none of whom are best representing themselves (they certainly aren't perfect Bayesians!), we should expect massive disagreement. This is not an argument for fundamentally different values.

Comment author: OrphanWilde 21 February 2013 05:23:03PM -1 points

I disagree with the first statement, but agree with the second. That is, I disagree with a certainty that the problem is with our reasoning abilities, but agree that the evidence is very weak.

Comment author: Adele_L 21 February 2013 05:24:39PM 1 point

Um, I said I was "pretty sure", not absolutely certain.

Comment author: OrphanWilde 21 February 2013 06:54:11PM 0 points

Upvoted, and I'll consider it fair if you downvote my reply. Sorry about that!

Comment author: Adele_L 21 February 2013 10:24:01PM 1 point

No worries!

Comment author: [deleted] 21 February 2013 08:33:01PM 1 point

I'm amused that you've retracted the post in question after posting this.