anonym comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Much is unclear. I believe this post is a good opportunity to give a roundup of the problem, for anyone who hasn't read the comment thread here:
The risk from recursive self-improvement is either dramatic enough to outweigh the low probability of the event, or likely enough to outweigh the probability of other existential risks. This is the idea everything revolves around in this community (it's not obvious, but I believe so). It is an idea that, if true, possibly affects everyone and our collective future, if not the whole universe.
I believe that someone like Eliezer Yudkowsky and the SIAI should be able to state in a concise way (with extensive references where possible) why it is rational to make friendly AI a top priority. Given that friendly AI seems to be what his life revolves around, the absence of material in support of the proposition that uFAI poses risks seems alarming. And I'm not talking about the absence of apocalyptic scenarios here, but about kinds of evidence other than a few years' worth of disjunctive lines of reasoning. The bulk of all writings on LW and by the SIAI are about rationality, not about the risks posed by recursively self-improving artificial general intelligence.
What if someone came along making coherent arguments that some sort of particle collider might destroy the universe? I would ask what experts who are not associated with the person making the claims think. What would you conclude if he simply said, "Do you have better data than me?" Or, "I have a bunch of good arguments"? If you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.
The current state of evidence IS NOT sufficient to scare people to the point of having nightmares and then ask them for most of their money. It is not sufficient to justify leaving comments making holocaust comparisons on the blogs of AI researchers.
You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates of the risks implied by those premises. You have to list these main principles so that anyone who comes across claims of existential risk and a plea for donations can get an overview. Then you have to provide references, if you believe they lend credence to the ideas, so that people can see that what you say isn't made up but is based on previous work and evidence by people who are not associated with your organisation.
This is a community devoted to refining the art of rationality. How is it rational to believe the Scary Idea without being able to tell if it is more than an idea?
Umm, this is not the SIAI blog. It is "Less Wrong: a community blog devoted to refining the art of human rationality".
The idea everything revolves around in this community is what comes after the ':' in the preceding sentence.
Besides its history, and the logo in the top right corner linking to the SIAI, I believe that you underestimate the importance of artificial intelligence and its associated risks within this community. As I said, it is not obvious, but when Yudkowsky started LessWrong.com it was against the background of the SIAI.
Eliezer explicitly forbade discussion of FAI/Singularity topics on lesswrong.com for the first few months because he didn't want discussion of such topics to be the primary focus of the community.
Again, "refining the art of human rationality" is the central idea that everything here revolves around. That doesn't mean that FAI and related topics aren't important, but lesswrong.com would continue to thrive (albeit less so) if all discussion of singularity ceased.
Perhaps you overestimate the extent to which Google search results on a term reflect the importance of the concept to which the term refers.
I note that: