Just a gut reaction, but this whole scenario sounds preposterous. Do you guys seriously believe that you can create something as complex as a superhuman AI, and prove that it is completely safe before turning it on? Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos, quantum mechanics, etc.? And I would also like to know who these "good guys" are, and what will prevent them from becoming "bad guys" when they wield this much power. This all sounds incredibly naive and lacking in common sense!
The main way complexity of this sort would be addressable is if the intellectual artifact that you tried to prove things about were simpler than the process you meant the artifact to unfold into. For example, the mathematical specification of AIXI is pretty simple, even though the hypotheses that AIXI would (in principle) invent upon exposure to any given environment would mostly be complex. Or, for a more concrete example, the kernel of the Coq proof assistant (the checker for the Gallina language) is small and has itself been verified using other proof tools, while most of the complexity of the system lives in the tactics and automation built on top of that small trusted kernel.
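To illustrate the shape of this argument with a toy (my own sketch in Lean 4, not anything from Coq's or AIXI's actual formalizations; `Reachable` and `reachable_pos` are made-up names): the artifact we prove things about is a two-line rule, while the process it unfolds into is infinite.

```lean
-- Toy illustration: the *rule* is tiny, even though the set of states
-- it can unfold into is infinite.
inductive Reachable : Nat → Prop
  | start : Reachable 1
  | step  : ∀ n, Reachable n → Reachable (2 * n)

-- One short induction over the rule proves an invariant of every state
-- the rule can ever produce.
theorem reachable_pos : ∀ n, Reachable n → 0 < n := by
  intro n h
  induction h with
  | start => omega         -- base case: 0 < 1
  | step m _ ih => omega   -- 0 < 2 * m follows from ih : 0 < m
```

The proof effort tracks the size of the rule, not the size of its behavior; that is the property that would have to carry the weight in any "provably safe before you turn it on" scenario.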
I can conceive of a social and technological order where transhuman power exists, but you may or may not want to live in it. This is a world where there are god-like entities doing wondrous things, and humanity lives in a state of awe and worship at what they have created. To like living in this world would require that you adopt a spirit of religious submission, perhaps not so different from modern-day monotheists who bow five times a day to their god. This may be the best post-Singularity order we can hope for.
I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than by more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemonstrated in machines, I see no rational basis for fearing it more than those mundane threats.
OK, but if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren't smarter humans generally more benevolent toward animals than stupider humans and other animals are? Why shouldn't this hold for AIs? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species decided to displace us?
It seems to me that humanity is faced with an epochal choice in this century, whether to:
a) Obsolete ourselves by submitting fully to the machine superorganism/superintelligence and embracing our posthuman destiny, or
b) Reject the radical implications of technological progress and return to various theocratic and traditionalist forms of civilization which place strict limits on technology and consider all forms of change undesirable (see the 3000-year reign of the Pharaohs, or the million-year reign of the hunter-gatherers)
Is there a plausible third option?
How useful are these surveys of "experts", given how wrong they've been over the years? If you conducted a survey of experts in 1960 asking questions like this, you probably would've gotten a peak probability for human-level AI around 1980, and all kinds of scary scenarios happening long before now. Experts seem to be some of the most biased and overly optimistic people around with respect to AI (and many other technologies). You'd probably get more accurate predictions by taking a survey of taxi drivers!
Since I'm in a skeptical and contrarian mood today...
See, this is one of the predictions people get totally wrong when they try to interpret singularity activism using religion as a template. It's not "saving the universe from the heathens"; it's "optimizing the universe on behalf of everyone, even people who are foolish, shortsighted, and/or misinformed".
Well-formed criticism (even if mean-spirited or uncharitable) is very useful, because it helps identify problems that can be corrected once recognized, and it reduces the likelihood of an insanity spiral due to people agreeing with each other too readily.
“Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by pure logical means are completely empty of reality.” –Albert Einstein
I don't agree with Al here, but it's a nice quote I wanted to share.
My utility function can't be described by statistics; it involves purely irrational concepts such as "spirituality", "aesthetics", "humor", "creativity", "mysticism", etc. These are the values I care about, and I see nothing in your calculations that takes them into account. So I am rejecting the entire project of LessWrong on these grounds.
The fact that you don't see these things accounted for is a fact about your own perception, not about utilitarian values (which actually do account for these things).
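To make that concrete (my own illustration, not anything from the comment above): the utility-function formalism places no restriction on what counts as a value. If $A(x)$, $H(x)$, and $S(x)$ score a world-state $x$ for aesthetics, humor, and spirituality respectively, then

\[
U(x) = w_A\,A(x) + w_H\,H(x) + w_S\,S(x) + \cdots
\]

is a perfectly well-formed utility function, with the weights $w_i$ recording how much each value matters to you. The hard part is specifying the scoring functions, not fitting them into the framework.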
What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.
I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they're the "good guys"?
You raise a good point here, which relates to my question: Is Good's "intelligence explosion" a mathematically well-defined idea, or is it just a vague hypothesis that sounds plausible? When we are talking about something as poorly defined as intelligence, it seems a bit ridiculous to jump to these "lather, rinse, repeat, FOOM, the universe will soon end" conclusions as many people seem to like to do. Is there a mathematical description of this recursive process which takes into account its own complexity, or are these just very vague and overly reductionist claims by people who perhaps suffer from an excessive attachment to their own abstract models and a lack of exposure to the (so-called) real world?
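For what it's worth, the simplest toy formalization I know of (my own framing, not anything Good actually wrote down) is a growth model: let $I(t)$ stand for the system's capacity and assume that capacity can be reinvested in self-improvement,

\[
\frac{dI}{dt} = k\,I^{\alpha}, \qquad I(0) = I_0 > 0.
\]

For $\alpha \neq 1$ this separates and solves to

\[
I(t) = \left[\, I_0^{\,1-\alpha} - (\alpha - 1)\,k\,t \,\right]^{\frac{1}{1-\alpha}},
\]

so for $\alpha > 1$ intelligence diverges at the finite time $t^{*} = I_0^{\,1-\alpha} / \big((\alpha - 1)k\big)$ (a literal "FOOM"), for $\alpha = 1$ it merely grows exponentially, and for $\alpha < 1$ only polynomially. The model is mathematically well-defined, but notice that it answers nothing: the whole FOOM-or-not question has been relocated into the empirical exponent $\alpha$ (how much extra self-improvement a unit of intelligence buys), which is exactly the quantity nobody knows.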
Well I just want to rule the world. To want to abstractly "save the world" seems rather absurd, particularly when it's not clear that the world needs saving. I suspect that the "I want to save the world" impulse is really the "I want to rule the world" impulse in disguise, and I prefer to be up front about my motives...
I think what Viliam_Bur is trying to say in a rather complicated fashion is simply this: humans are tribal animals. Tribalism is perhaps the single biggest mind-killer, as you have just illustrated.
Am I correct in assuming that you identify yourself with the tribe called "Jews"? Since I have no tribal dog in this particular fight, I can't get too worked up about it, though if the conflict involved, say, Irish people, I'm sure I would feel rather differently. This is just a reality that we should all acknowledge: our attempts to "overcome bias" with respect to tribalism are largely self-delusion, and perhaps even irrational.