timtyler comments on Should I believe what the SIAI claims? - Less Wrong
And I can't see where your beliefs might come from. What are you telling potential donors or AGI researchers? That AI is dangerous by definition? Well, what if they have a different definition? What should make them update in favor of yours? That you have thought about it for more than a decade now? I can spot serious flaws in every reply I have received so far in under a minute, and I am a nobody. There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven't thought of. If that kind of intelligence is only as likely as other risks, then it doesn't matter what it comes up with anyway, because those other risks will wipe us out just as well and with the same probability.
There are already many people criticizing the SIAI right now, even on LW. Soon, once you are more popular, people other than me will scrutinize everything you have ever written. And what do you expect them to conclude when even a professional AGI researcher, a former member of the SIAI, writes the following:
Why would I disregard his opinion in favor of yours? Can you present any novel achievements that would make me conclude that you people are actually experts when it comes to intelligence? The LW sequences are well written but do not demonstrate any deep comprehension of the potential of intelligence. Yudkowsky was able to compile previously available knowledge into a coherent framework of rational conduct. That isn't sufficient to prove that he has enough expertise on the topic of AI for me to believe him regardless of any antipredictions that weaken the expected risks from AI. There is also insufficient evidence to conclude that Yudkowsky, or anyone within the SIAI, is smart enough to tackle the problem of friendliness mathematically.
If only you would at least let some experts take a look at your work and assess its effectiveness and general potential. But there is no peer review at all. Some well-known people have attended the Singularity Summit. Have you asked them why they do not contribute to the SIAI? Have you, for example, asked Douglas Hofstadter why he isn't doing everything he can to mitigate risks from AI? Sure, you have gotten some people to donate a lot of money to the SIAI, but to my knowledge they are far from being experts, and they contribute to other organisations as well. Congratulations on that, but even cults get rich people to support them. I'll update on the donors once they explain why they support you and their arguments are convincing, or once they turn out to be actual experts or people who can point to concrete achievements.
Intelligence is powerful, intelligence doesn't imply friendliness, therefore intelligence is dangerous. Is that the line of reasoning on the basis of which I am supposed to neglect other risks? If so, you are making it more complicated than necessary. You do not need intelligence to invent things that kill us if there is already enough dumb stuff around that is more likely to kill us. And I do not think it is reasonable to come up with a few weak arguments for how intelligence could be dangerous and then conclude that their combined probability beats any good argument against one of the premises, or in favor of other risks. The problems are far too diverse; you cannot lump them together and proclaim that you are going to solve all of them simply by defining friendliness mathematically. Right now the idea is too vague for me to see that. You might as well replace friendliness with magic as the solution to the many disjoint problems of intelligence.
Intelligence is also not the solution to all the other problems we face. As I have argued several times, I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against gradual development? As I see it, we will have to painstakingly engineer intelligent machines. There won't be some meta-solution that outputs a meta-science which subsequently solves all other problems.
Douglas Hofstadter and Daniel Dennett both seem to think these issues are probably still far away.
...