XiXiDu comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: XiXiDu 31 October 2010 03:36:51PM, 4 points
  • Google site:lesswrong.com "artificial intelligence" 4,860 results
  • Google site:lesswrong.com rationality 4,180 results

Besides the site's history and the logo in the top-right corner linking to the SIAI, I believe you underestimate the importance of artificial intelligence and its associated risks within this community. As I said, it is not obvious, but Yudkowsky created LessWrong.com against the background of the SIAI.

Comment author: anonym 31 October 2010 06:53:12PM, 6 points

Eliezer explicitly forbade discussion of FAI/Singularity topics on lesswrong.com for its first few months precisely because he didn't want them to become the primary focus of the community.

Again, "refining the art of human rationality" is the central idea that everything here revolves around. That doesn't mean FAI and related topics aren't important, but lesswrong.com would continue to thrive (albeit less so) if all discussion of the Singularity ceased.

Comment author: wedrifid 31 October 2010 08:29:14PM, 5 points
  • Google site:lesswrong.com "me" 5,360 results
  • Google site:lesswrong.com "I" 7,520 results
  • Google site:lesswrong.com "it" 7,640 results
  • Google site:lesswrong.com "a" 7,710 results

Perhaps you overestimate the extent to which Google search-result counts for a term reflect the importance of the concept that term refers to.

I note that:

  • The best posts on 'rationality' are among those that do not use the word 'rationality'*.
  • Like 'Omega' and 'Clippy', an AI is a useful agent to include when discussing questions of instrumental rationality. It lets us consider highly rational agents in the abstract, without all the bullshit and normative dead weight that gets thrown into conversations whenever the agents in question are humans.