Yosarian2 comments on Beware surprising and suspicious convergence - Less Wrong Discussion

Post author: Thrasymachus, 24 January 2016 07:13PM

Comment author: Yosarian2, 27 January 2016 11:43:41PM, 3 points

As a meta-level version of this, I have to admit I find it a little concerning that this site was created partly because Eliezer Yudkowsky wanted to convince people that funding safe AI research is the best possible use of resources, and that much of the reasoning on this site seems to arrive at that conclusion regardless of which direction the argument takes to get there.

I don't necessarily disagree with the conclusion, but it is a surprising and suspicious convergence nonetheless.

Comment author: SoerenE, 29 January 2016 07:50:48AM, 1 point

My thoughts exactly.

When I first heard it, it sounded to me like a headline from BuzzFeed: This one weird trick will literally solve all your problems!

Turns out the trick is to create an IQ-20,000 AI and get it to help you.

(Obviously, suspicious ≠ wrong.)