anonym comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM


Comment author: XiXiDu 30 October 2010 04:09:01PM *  11 points [-]

Much is unclear. I believe this post is a good opportunity to give a roundup of the problem, for anyone who hasn't read the comments thread here:

The risk from recursive self-improvement is either dramatic enough to outweigh the low probability of the event or likely enough to outweigh the probability of other existential risks. This is the idea everything revolves around in this community (it's not obvious, but I believe so). It is an idea that, if true, possibly affects everyone and our collective future, if not the whole universe.

I believe that someone like Eliezer Yudkowsky and the SIAI should be able to state in a concise way (possibly with extensive references) why it is rational to make friendly AI a top priority. Given that friendly AI seems to be what his life revolves around, the absence of material in support of the proposition of risks posed by uFAI is alarming. And I'm not talking about the absence of apocalyptic scenarios here, but about the absence of any evidence other than a few years' worth of disjunctive lines of reasoning. The bulk of all writing on LW and by the SIAI is about rationality, not the risks posed by recursively self-improving artificial general intelligence.

  • Where are the formulas? What are the variables? Where is a worked example of the decision process of someone who is already convinced, preferably someone within the SIAI? That would be part of what I call transparency: a foundational and reproducible corroboration of one's first principles.
  • Where are the references to substantial third-party research papers? There are many open problems regarding artificial general intelligence; how exactly does the SIAI handle those uncertainties and account for them in its probability estimates of the dangers posed by AI?
  • Where does the SIAI outline the likelihood of slow versus fast development of AGI? Where are your probability estimates that account for these uncertainties? Where are the variables and references that allow you to make any kind of estimate to balance the risks of a hard rapture against a somewhat controllable development?
  • What are the foundations that give credibility to the chain of reasoning that leads one to accept unfriendly superhuman intelligence going foom as a serious risk?
  • Where is the supporting evidence at the origin of your complex multi-step extrapolations, which are argued to be inductive generalizations?

What if someone came along making coherent arguments that some sort of particle collider might destroy the universe? I would ask what experts who are not associated with the person making the claims think. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"? If you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide not just a consistent internal logic but some evidence-based prior.
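
To make concrete what I mean by an evidence-based prior, here is a minimal sketch with hypothetical numbers (purely illustrative, not anyone's actual estimate). Bayes' theorem forces you to say what base rate you started from and how strongly the arguments are supposed to shift it:

P(catastrophe | arguments) = P(arguments | catastrophe) × P(catastrophe) / [ P(arguments | catastrophe) × P(catastrophe) + P(arguments | no catastrophe) × P(no catastrophe) ]

With a hypothetical outside-view prior of P(catastrophe) = 0.01 and a claimed likelihood ratio of 10 for the arguments, that gives (10 × 0.01) / (10 × 0.01 + 1 × 0.99) ≈ 0.09. Getting from such a prior to 75% would require a likelihood ratio of roughly 300, and the question is what evidence external to the argument chain supports a ratio that large.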

The current state of evidence IS NOT sufficient to scare people to the point of giving them nightmares and then ask them for most of their money. It is not sufficient to justify leaving comments making Holocaust comparisons on the blogs of AI researchers.

  • Is smarter-than-human intelligence possible in a sense comparable to the difference between chimps and humans?
  • How is an encapsulated AI going to gain control without already-existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account, etc. (a long chain of assumptions), but how is it going to make use of the things it orders?
  • Why shouldn't self-optimization turn out to be very limited? Changing anything substantial might be like Gandhi swallowing the pill that will make him want to hurt people, so to speak.

You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates of the risks associated with those premises. You have to list these main principles so that anyone who comes across claims of existential risk and a plea for donations can get an overview. Then you have to provide the references, if you believe they give credence to the ideas, so that people see that what you say isn't made up but is based on previous work and evidence by people who are not associated with your organisation.

You could argue your case of "this is obviously true" with completely made-up claims, and I'd have no way to tell. -- Kaj_Sotala

This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?

Comment author: anonym 30 October 2010 11:18:07PM 7 points [-]

The risk from recursive self-improvement is either dramatic enough to outweigh the low probability of the event or likely enough to outweigh the probability of other existential risks. This is the idea everything revolves around in this community (it's not obvious, but I believe so).

Umm, this is not the SIAI blog. It is "Less Wrong: a community blog devoted to refining the art of human rationality".

The idea everything revolves around in this community is what comes after the ':' in the preceding sentence.

Comment author: XiXiDu 31 October 2010 03:36:51PM *  4 points [-]
  • Google site:lesswrong.com "artificial intelligence" 4,860 results
  • Google site:lesswrong.com rationality 4,180 results

Besides the site's history and the logo in the top right corner that links to the SIAI, I believe that you underestimate the importance of artificial intelligence and its associated risks within this community. As I said, it is not obvious, but when Yudkowsky created LessWrong.com, it was against the background of the SIAI.

Comment author: anonym 31 October 2010 06:53:12PM *  6 points [-]

Eliezer explicitly forbade discussion of FAI/Singularity topics on lesswrong.com for the first few months because he didn't want discussion of such topics to be the primary focus of the community.

Again, "refining the art of human rationality" is the central idea that everything here revolves around. That doesn't mean that FAI and related topics aren't important, but lesswrong.com would continue to thrive (albeit less so) if all discussion of singularity ceased.

Comment author: wedrifid 31 October 2010 08:29:14PM *  5 points [-]
  • Google site:lesswrong.com "me" 5,360 results
  • Google site:lesswrong.com "I" 7,520 results
  • Google site:lesswrong.com "it" 7,640 results
  • Google site:lesswrong.com "a" 7,710 results

Perhaps you overestimate the extent to which Google search results on a term reflect the importance of the concept to which the word refers.

I note that:

  • The best posts on 'rationality' are among those that do not use the word 'rationality'.
  • Similar to 'Omega' and 'Clippy', AI is a useful agent to include when discussing questions of instrumental rationality. It allows us to consider highly rational agents in the abstract without all the bullshit and normative dead weight that gets thrown into conversations whenever the agents in question are humans.