AlphaOmega comments on Survey: Risks from AI - Less Wrong

Post author: XiXiDu 13 June 2011 01:05PM


Comment author: AlphaOmega 14 June 2011 12:45:15AM -1 points

Since I'm in a skeptical and contrarian mood today...

  1. Never. AI is Cargo Cultism. Intelligence requires "secret sauce" that our machines can't replicate.
  2. 0
  3. 0
  4. Friendly AI research deserves no support whatsoever
  5. AI risks outweigh nothing because 0 is not greater than any non-negative real number
  6. The only important milestone is the day when people realize AI is an impossible and/or insane goal and stop trying to achieve it.
Comment author: [deleted] 14 June 2011 03:08:51PM -1 points

Upvoted because this appears to be an honest answer to the question, but it would be useful if you said why you consider it an absolute certainty that no machine will ever show human-level intelligence. Personally, I wouldn't assign probability 0 even to events that appear to contradict the most basic laws of physics, since I don't have 100% confidence in my own understanding of physics...
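
The reply's point about never assigning probability 0 can be made concrete with Bayes' rule: a prior of exactly 0 remains 0 no matter how strong the observed evidence, which is why it is usually reserved for logical impossibilities rather than empirical claims. A minimal sketch (the function name and the specific numbers are illustrative, not from the comment):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# With any nonzero prior, strong evidence shifts the estimate:
print(posterior(0.01, 0.9, 0.001))  # roughly 0.90

# With a prior of exactly 0, no evidence can shift it at all:
print(posterior(0.0, 0.9, 0.001))   # 0.0
```

The asymmetry is visible in the numerator: once `prior` is 0, the product `likelihood_if_true * prior` is 0 for every possible likelihood, so the update is permanently stuck.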