James_Miller comments on Open Thread Feb 29 - March 6, 2016 - Less Wrong Discussion

4 Post author: Elo 28 February 2016 10:11PM

Comment author: James_Miller 29 February 2016 05:19:57PM * 3 points

Yeah, I don't understand why safety should equal 'stop working on the thing'.

There is a good chance that if the first super-intelligent AI isn't carefully designed to be friendly, it will destroy us. But creating a friendly super-intelligent AI is much harder than merely creating an AI, so our species' only chance of survival is to go very slowly with AI development until we have put far more resources into researching friendliness. Imagine that it was 1850 and you knew that the crash of a single airplane would destroy mankind, but you couldn't convince others of this. You would be scared if people started working on creating airplanes.

Comment author: MrMind 01 March 2016 08:07:29AM 3 points

I get that, but I think that "working to make a plane a lot safer" would still tick the box "working on a plane project". I would say this is even what happens in reality; otherwise you could just strap a jet engine under a bus.
I am all in favor of slowing down AI work to focus more on safety, and I would push back against Zuckerberg by telling him: "You know, Mark, even if we are focusing on AI safety, that doesn't mean we are slowing down progress on AI; if anything, we are accelerating it."