James_Miller comments on Open Thread Feb 29 - March 6, 2016 - Less Wrong Discussion
Scary Mark Zuckerberg interview on AI risks where the Facebook founder says:
Yes, but if the crash of a single airplane could cause the extermination of mankind, we would all be dead by now. A better analogy is scientists in 1940 considering whether detonating an atomic bomb would ignite the atmosphere.
I wonder if Zuckerberg is familiar with the concept of "hard takeoff". I'd been under the impression that the concept had become mainstream, but I've been in the OB/LW sphere for the entirety of my adult life, and I have no idea how big the inferential distance has gotten.
Yeah, I don't understand why safety should equal "stop working on the thing". If anything, AI friendliness will further the advancement of AI, allowing more widespread use.
There is a good chance that if the first super-intelligent AI isn't carefully designed to be friendly, it will destroy us. But creating a friendly super-intelligent AI is much harder than merely creating an AI, so our species' only chance of survival is to go very slowly with AI development until we have put a lot more resources into researching friendliness. Imagine that it was 1850 and you knew that the crash of a single airplane would destroy mankind, but you couldn't convince others of this. You would be scared if people started to work on creating airplanes.
I get that, but I think that "working to make a plane a lot safer" would still tick the box "working on a plane project". I would say this is even what happens in reality, otherwise you could just strap a jet engine under a bus.
I am all in favor of slowing down AI work to better focus on safety, and I would push back on Zuckerberg by telling him: "You know Mark, even if we are focusing on AI safety, that doesn't mean we are slowing down progress on AI; if anything, we are accelerating it."
I worry that a lot of discussions about AI are conducted via metaphor or based on past events. It's easy to make up a metaphor that matches any given future scenario, and it shouldn't be easily assumed that building an artificial brain is (or isn't!) anything like past events.
I agree that using metaphors to predict the future is problematic, but predicting the future is really hard, and if we don't have a good inside view of what's likely to happen, the best we can do is extrapolate from what has happened in the past.