I feel that people here are way too emotional. If you tell them, they'll link you to a sequence post on why being emotional can be a good thing. I feel that people here are not skeptical enough. If you tell them, they'll link you to a sequence post on why being too skeptical can be a bad thing. I feel that people here take some possibilities too seriously. If you tell them, they'll link you to...and so on. I might as well be talking to Yudkowsky alone. And whenever someone else disagrees, some expert or otherwise smart person, he is either accused of not having read the sequences or dismissed as below their standards.
Eliezer believes that building a superhuman intelligence is so dangerous that experimenting with it is irresponsible...
The whole 'too dangerous' argument is a perfect excuse for everything: from never having to demonstrate any coding or engineering skill, to dismissing openness and any kind of transparency, up to things I am not even allowed to talk about here.
If he's wrong, then he'll fail, and SIAI will fail. If someone else has a different, viable, strategy, then that group will succeed. If nobody does, then nobody will.
Here we get to the problem. I have no good arguments against any of what I have hinted at above, except a strong gut feeling that something is wrong. So I'm trying to poke holes in it; I'm trying to crack the facade. Why? Well, they are causing me distress by telling me all those things about how possible galactic civilizations depend on your money and mine. They are creating ethical dilemmas that make me feel obliged to do something even though I'd really rather do something else. But before I do, I first have to see whether any of it holds water.
But Eliezer has written tens of thousands of words introducing his strategy and his reasons for finding it compelling...
Yup, I haven't read most of the sequences, but I did a lot of spot tests and read what people linked me to. I have yet to come across something novel. And I feel all of that doesn't really matter anyway. The basic argument is that high stakes can outweigh low probabilities, correct? That's basically the whole foundation for why I'm supposed to care; everything else is a side note. And that is also where I feel (yes, gut feeling, no excuses here) something is wrong. I can't judge it yet; maybe in ten years, once I've learned enough math, especially probability theory. But currently it just sounds wrong. If I thought there was a low probability that running the LHC would open an invasion door for a fleet of aliens interested in torturing mammals, then by the same line of reasoning I could justify murdering a bunch of LHC scientists to prevent them from running it. Everything else would be scope insensitivity! Besides the obvious problems with that, I have a strong feeling that this line of reasoning is somehow bogus. I also don't know jack shit about high-energy physics. And I feel Yudkowsky doesn't know jack shit about intelligence (not that anyone else knows more about it). In other words, I feel we need to do more experiments to understand what 'intelligence' is before asking people for their money to save the universe from paperclip maximizers.
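For concreteness, the expected-value arithmetic I'm objecting to can be written out in one line (a minimal sketch; the probability and the stake below are made-up numbers for illustration, not anyone's actual estimates):

$$
\mathbb{E}[\text{loss}] = p \cdot L = 10^{-6} \times 10^{15} \text{ lives} = 10^{9} \text{ lives in expectation.}
$$

A one-in-a-million chance, multiplied by a stake astronomical enough, yields an expected loss that swamps every ordinary consideration, and that is exactly the multiplication that makes the LHC example above come out 'justified'.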
See, I'm just someone who got dragged into something he thinks is bogus, something he doesn't want to be a part of but nonetheless feels he can't ignore. So I'm just hoping it goes away if I try hard enough. How wrong and biased, huh? But I'm neither able to ignore it nor able to get myself to do something about it.
I feel that people here are way too emotional
And you have expressed that feeling most passionately.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave me permission to publish his answer here. I think the opinion of highly educated experts who have read most of the available material is important for estimating the public and academic perception of risks from AI, and how effectively LessWrong and the SIAI communicate those risks.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various experts who are aware of risks from AI and ask them the same question.