
gjm comments on A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk - Less Wrong Discussion

34 points | Post author: chaosmage, 07 January 2014 05:48PM



Comment author: gjm, 09 January 2014 06:05:45PM, 0 points

> But the issue of violating the Wikipedia policy doesn't factor much into the calculation.

The fact that the issue violates Wikipedia policy is an essential part of why doing as you propose would be likely to have a negative impact on MIRI's reputation.

(For the avoidance of doubt, I don't think this is the only reason not to do it. If you use something that has policies, you should generally follow those policies unless they're very unreasonable. But since ChristianKl is arguing that an expected-utility calculation produces results that swamp that consideration (by tweaking the probability of a good/bad singularity), I think it's important to note that expected-utility maximization doesn't by any means obviously produce the conclusions he's arguing for.)
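To make the point concrete, here is a toy version of the expected-utility comparison. Every number here is hypothetical and chosen purely for illustration; the point is only that the reputational-cost term can dominate unless the claimed shift in the probability of a good singularity is implausibly large.

```python
# Toy expected-utility sketch (all numbers hypothetical):
# comparing "edit Wikipedia against policy" vs. "abstain", where the
# argument pits a tiny claimed shift in P(good singularity) against a
# reputational cost to MIRI.

U_GOOD_SINGULARITY = 1e9   # utility of a good singularity (arbitrary units)
U_REPUTATION_HIT = -1e3    # utility lost if MIRI's reputation suffers

p_shift = 1e-9             # claimed increase in P(good singularity) from the edit
p_rep_hit = 0.1            # chance the policy violation damages MIRI's reputation

eu_edit = p_shift * U_GOOD_SINGULARITY + p_rep_hit * U_REPUTATION_HIT
eu_abstain = 0.0           # baseline: no probability shift, no reputational risk

print(f"EU(edit)    = {eu_edit:+.1f}")
print(f"EU(abstain) = {eu_abstain:+.1f}")
# With these made-up numbers the reputational term dominates, so the
# calculation does not obviously favour editing.
```

The conclusion flips entirely depending on the inputs one assumes, which is exactly why the expected-utility framing alone doesn't settle the question.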