Just a gut reaction, but this whole scenario sounds preposterous. Do you guys seriously believe that you can create something as complex as a superhuman AI, and prove that it is completely safe before turning it on? Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos, quantum mechanics, etc.? And I would also like to know who these "good guys" are, and what will prevent them from becoming "bad guys" when they wield this much power. This all sounds incredibly naive and lacking in common sense!
I can conceive of a social and technological order where transhuman power exists, but you may or may not want to live in it. This is a world where there are god-like entities doing wondrous things, and humanity lives in a state of awe and worship at what they have created. To like living in this world would require that you adopt a spirit of religious submission, perhaps not so different from modern-day monotheists who bow five times a day to their god. This may be the best post-Singularity order we can hope for.
I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemonstrated by our technology, I would further assert that unfriendly AI is pure science fiction which should be far down the list of our concerns compared to more clear and present dangers.
How useful are these surveys of "experts", given how wrong they've been over the years? If you conducted a survey of experts in 1960 asking questions like this, you probably would've gotten a peak probability for human level AI around 1980 and all kinds of scary scenarios happening long before now. Experts seem to be some of the most biased and overly optimistic people around with respect to AI (and many other technologies). You'd probably get more accurate predictions by taking a survey of taxi drivers!
Since I'm in a skeptical and contrarian mood today...
- Never. AI is Cargo Cultism. Intelligence requires "secret sauce" that our machines can't replicate.
- 0
- 0
- Friendly AI research deserves no support whatsoever
- AI risks outweigh nothing because 0 is not greater than any non-negative real number
- The only important milestone is the day when people realize AI is an impossible and/or insane goal and stop trying to achieve it.
“Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by pure logical means are completely empty of reality.” –Albert Einstein
I don't agree with Al here, but it's a nice quote I wanted to share.
Have you been doing anything in particular to cause your willpower to increase? What are some effective techniques for increasing willpower?
What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.
I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they've written papers about "CEV" and are therefore the good guys? He who can save the world can control it. I don't trust anyone with this kind of power, and I am deeply suspicious of any small group of intelligent people that is seeking power in this way.
Am I paranoid? Absolutely. I know too much about recent human history and the horrific failures of other grandiose intellectual projects to be anything else. Call me crazy, but I firmly believe that building intelligent machines is all about power, and that everything else (i.e. most of this site) is conversation.
You raise a good point here, which relates to my question: Is Good's "intelligence explosion" a mathematically well-defined idea, or just a vague hypothesis that sounds plausible? When we're talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to "lather, rinse, repeat, FOOM, the universe will soon end" conclusions, as many people like to do. Is there a mathematical description of this recursive process that takes into account its own complexity, or are these just vague, overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?
Data point: you probably know I'm left-wing (in an eccentric way) - and yet, frankly, I'm very "pro-Israel" (although not fanatically so), and think that all the cool, nice, cosmopolitan, compassionate lefty people who protest "Zionist aggression" should go fuck themselves in regards to this particular issue. This includes e.g. Noam Chomsky, whom I otherwise respect highly. And I realize that this lands me in the same position as various far-right types whom I really dislike, yet I'm quite fine with it too.
Yes, I'm not neurotypical. However, you know that I can and do get kinda mind-killed on other political topics. So I'm not satisfied by your explanation.
I think what Viliam_Bur is trying to say in a rather complicated fashion is simply this: humans are tribal animals. Tribalism is perhaps the single biggest mind-killer, as you have just illustrated.
Am I correct in assuming that you identify yourself with the tribe called "Jews"? Having no tribal dog in this particular fight myself, I can't get too worked up about it, though if the conflict involved, say, Irish people, I'm sure I would feel rather differently. This is just a reality that we should all acknowledge: our attempts to "overcome bias" with respect to tribalism are largely self-delusion, and perhaps even irrational.