I created a new page on the wiki to collect links like the Big Scary Idea one: Criticism of the sequences. If anyone knows of more links to intelligent disagreement with ideas prevailing on Less Wrong, please add them!
Wasn't there a professional physicist who criticized something specific in Eliezer's QM sequence? I can't remember where...
Also, not sure if these count, but:
> If anyone knows of more links to intelligent disagreement with ideas prevailing on Less Wrong...
Well, people like user:wedrifid disagree but I don't think that the following posts are fallacious (or at least I haven't heard counterarguments that would render those posts obsolete):
(I am currently writing up a post for my personal blog where I list all requirements that need to be true in conjunction for SIAI to be the best choice when it comes to charitable giving.)
> (I am currently writing up a post for my personal blog where I list all requirements that need to be true in conjunction for SIAI to be the best choice when it comes to charitable giving.)
Be careful: it's very common for people to gerrymander such probability estimates by unjustifiably assuming complete independence or complete dependence of certain terms. (This is true even if the "probability estimate" is only implicit in the qualitative structure of the argument.) If people think that's what you're doing, then they're likely to disregard your conclusions even if those conclusions could have been supported by a weaker argument.
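To make that failure mode concrete, here is a minimal sketch (Python, with made-up numbers rather than anyone's actual estimates) of how far apart the two extreme dependence assumptions land for a ten-premise conjunction:

```python
# Toy illustration of the gerrymandering problem above: the probability of a
# conjunction of premises depends heavily on the dependence assumption.
# All numbers here are hypothetical, not anyone's actual estimates.

def conjunction_if_independent(probs):
    """P(all premises hold) when the premises are treated as fully independent."""
    p = 1.0
    for prob in probs:
        p *= prob
    return p

def conjunction_if_fully_dependent(probs):
    """Upper bound when the premises are perfectly positively dependent:
    the conjunction is then exactly as likely as its weakest premise."""
    return min(probs)

premises = [0.8] * 10  # ten premises, each judged 80% likely in isolation

print(conjunction_if_independent(premises))      # ~0.107
print(conjunction_if_fully_dependent(premises))  # 0.8
```

For premises that are dependent but not identical, the true value lies somewhere in between, and that unargued-for modelling choice is exactly where such estimates get gerrymandered.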
> Well, people like user:wedrifid disagree but I don't think that the following posts are fallacious
I can confirm this. Or at least, the second of the links is fallacious. The first was merely overwhelmingly weak (and so only fallacious to the extent that strong conclusions were declared).
Good work.
Alternatively, one might construe the argument this way:
But this may be a less useful structure than the more detailed one you propose. My version simply packs more sub-arguments and discussion into each premise.
The premises (in your argument) that I feel least confident about are #1, #2, and #4.
I prefer this briefer formalization, since it avoids some of the vagueness of "adequate preparations" and makes premise (6) clearer.
Ahh, by the way, the points I have the most confidence about are 4 and 9. It seems virtually certain to me that precautions will not be adequate. The situation is similar to trying to get some kind of server unhackable on the first compile and run.
The same goes for the creation of friendly AI. The situation is worse than writing the first autopilot ever and, on the very first run of that autopilot software, flying in it, complete with automated takeoff and landing. The plane is just going to crash, period. We are that sloppy at software development, and there is nothing we can do about it. The worst that can happen is an AI that is not friendly but does treat humans as special; it could euthanize humans even when we are otherwise useful to it, for example. A buggy "friendly" AI is probably the worst outcome. Seriously, people who don't develop software have all sorts of entirely wrong intuitions about the ability to make something work right on the first try (even with automated theorem proving). Furthermore, a very careful attempt is also a very slow one, and is therefore unlikely to be the first. What I am hoping for is that the AIs will just quietly wirehead themselves.
Well, the critique I have:
1: We don't know that AI can go FOOM. It may be just as hard to prevent a self-improving AI from wireheading (when it becomes superintelligent) as it is to ensure friendliness. Note: perfect wireheading has infinite utility according to an agent prone to wireheading; the length of the wireheading experience in time (or its volume in space) is then irrelevant. The whole premise of the fear of UFAI is that an intelligence (human intelligence) can carry out faulty self-improvement; it is inconsistent to assume that about human intelligence but not about any AI.
2: We don't know that the AI would be likely to be substantially unfriendly. Other humans, and especially groups of humans (corporations, governments), are non-you, non-friendly-to-you intelligences too, with historical examples of extreme unfriendliness (I'm going to coin a law that the (un)friendly intelligence discussion is incomplete without mention of Nazis), yet they can be friendly enough - permitting you to live a normal life while paying taxes (but note the military draft, which happens when the meta-organism is threatened). It is plausible enough that the AI would be friendly enough. Humans would be cheap to store.
3: We may get there by mind uploading, which seems to me like the safest option (and a botched attempt at FAI as a very dangerous one).
4: We don't actually know whether an FAI attempt is more or less dangerous than a messy AI along the lines of 'replicate the function of cortical columns, simulate a lot of cortical columns'. An FAI attempt could just as well be more dangerous: you get it wrong, and the AI euthanizes you with the best intentions. The extrapolated volition idea, by the way, entirely ignores the fact that you are a massively parallel system that can have different goals in different parts of itself (and mankind, too, is that kind of system, albeit less well connected).
The argumentation everywhere has very low external probability (when I evaluate probabilities and see conflicting arguments that point in opposite directions and look similarly plausible, I assign an external probability of zero, even if it is 1 argument vs. 10, and much more so for 10 arguments vs. 10), and so acting upon those arguments has rather low utility values.
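As a rough illustration of that last claim (entirely hypothetical numbers, not an estimate of any real intervention), the expected value of acting on an argument scales with the external probability you assign to it, so a near-zero probability swamps even a large claimed payoff:

```python
# Hypothetical numbers only: expected value of acting on an argument whose
# external probability is judged to be very low.

def expected_value_of_acting(p_correct, payoff_if_correct, cost_of_acting):
    """Naive expected value: probability-weighted payoff minus the sure cost."""
    return p_correct * payoff_if_correct - cost_of_acting

# A large claimed payoff, but a near-zero external probability and a real cost.
print(expected_value_of_acting(p_correct=0.001,
                               payoff_if_correct=1_000_000,
                               cost_of_acting=10_000))  # -9000.0
```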
> At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast takeoff)
...unless people want it to go slowly. It isn't a law of nature that things will go quickly. It seems likely that a more unified society will be able to progress as slowly as it wants to. There are plenty of proposals to throttle development - via "nannies" or other kinds of safety valve.
Insistence on a rapid takeoff arises from a position of technological determinism. It ignores sociological factors.
IMO, the "rapid takeoff" idea should probably be seen as a fundraising ploy. It's big, scary, and it could conceivably happen - just the kind of thing for stimulating donations.
IMO, the "rapid takeoff" idea should probably be seen as a fundraising ploy. It's big, scary, and it could conceivably happen - just the kind of thing for stimulating donations.
It seems that SIAI would have more effective methods for fundraising, e.g. simply capitalizing on "Rah Singularity!". I therefore find this objection somewhat implausible.
Many people complain that the Singularity Institute's "Big Scary Idea" (AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers' argument for the singularity. The idea would be to make explicit what the premise-and-inference structure of the argument is, and then argue about the strength of those premises and inferences.
Here is one way you could construe one version of the argument for the Singularity Institute's "Big Scary Idea":
My questions are: