Furcas comments on Some alternatives to “Friendly AI” - Less Wrong

Post author: lukeprog 15 June 2014 07:53PM




Comment author: Furcas 16 June 2014 12:42:56AM 5 points

I don't like 'Safe AGI' because, for example, it seems to include AIs that are Unfriendly but too stupid to be dangerous.

Comment author: kokotajlod 16 June 2014 01:47:05PM 0 points

That's not something the average person will think upon hearing the term, especially since "AGI" tends to connote something very intelligent. I don't think it's a strong reason not to use the term.

Comment author: AlexMennen 16 June 2014 07:32:10PM 3 points

Actually, I think people often will think that when they hear the term. "Safety research" implies a focus on how to prevent a system from causing bad outcomes while achieving its goal, not on getting the system to achieve its goal in the first place. So "AGI Safety" sounds like research on how to prevent a not-necessarily-friendly AGI from becoming powerful enough to be dangerous, especially to someone who does not see an intelligence explosion as the automatic outcome of a sufficiently intelligent AI.