
Alexei comments on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” - Less Wrong

Post author: AnnaSalamon 12 December 2016 07:39PM


Comment author: Alexei 10 December 2016 08:09:23AM 5 points

We can be careful to include all information that they, from their vantage point, would want to know -- even if on our judgment, some of the information is misleading or irrelevant, or might pull them to the “wrong” conclusions.

I did not understand this part.

Comment author: gjm 10 December 2016 04:18:01PM 14 points

I don't know how it plays out in the CFAR context specifically, but the sort of situation being described is this:

Alice is a social democrat and believes in redistributive taxation, a strong social safety net, and heavy government regulation. Bob is a libertarian and believes taxes should be as low as possible and "flat", safety nets should be provided by the community, and regulation should be light or entirely absent. Bob asks Alice[1] what she knows about some topic related to government policy. Should Alice (1) provide Bob with all the evidence she can favouring the position she holds to be correct, or (2) provide Bob with absolutely all the relevant information she knows of, or (3) provide Bob with all the information she has that someone with Bob's existing preconceptions will find credible?

It's tempting to do #1. Anna is saying that CFAR will do (the equivalent of) #2 or even #3.

[1] I flipped a coin to decide who would ask whom.

Comment author: AnnaSalamon 10 December 2016 06:44:57PM 9 points

Yes. Or will seriously attempt this, at least. It seems required for cooperation and good epistemic hygiene.