wedrifid comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: wedrifid 31 October 2010 08:29:14PM  5 points
  • Google site:lesswrong.com "me" 5,360 results
  • Google site:lesswrong.com "I" 7,520 results
  • Google site:lesswrong.com "it" 7,640 results
  • Google site:lesswrong.com "a" 7,710 results

Perhaps you overestimate the extent to which Google search results for a term reflect the importance of the concept that word refers to.

I note that:

  • The best posts on 'rationality' are among those that do not use the word 'rationality'.
  • Like 'Omega' and 'Clippy', an AI is a useful agent to include when discussing questions of instrumental rationality. It lets us consider highly rational agents in the abstract, without all the bullshit and normative dead weight that gets thrown into conversations whenever the agents in question are humans.