Should I believe what the SIAI claims?

23 Post author: XiXiDu 12 August 2010 02:33PM

Major update here.

The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear.

Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimize action. Much here is uncertain to an extent that I'm not able to judge any of the nested probability estimates. Even if you tell me the numbers, where is the data on which you base those estimates?

There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

I know that what I'm saying may simply be due to a lack of knowledge and education; that is why I am inquiring about it. How many of you who currently support the SIAI are able to analyse the reasoning that led you to support it in the first place, or at least to substantiate your estimates with some kind of evidence other than a coherent internal logic?

I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credibility. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

I'm concerned that the SIAI and its supporters, however consistently, are updating on fictional evidence. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models, or are your propositions based on fact?

An example here is the use of the many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimates without additional evidence? MWI might be the only consistent non-magical interpretation of quantum mechanics. The problem is that such conclusions are, I believe, widely considered insufficient to base further speculation and estimation on. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say is this: if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of superhuman AI (a valid speculation given what we know about reality), then you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies: fiction. Imagination allows for endless possibilities, while scientific evidence provides hints of what might be possible and what impossible. Science provides the ability to assess your data. Any hint that empirical criticism provides gives you new information on which you can build. Not because it bears truth value, but because it gives you an idea of what might be possible, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.

And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic, i.e. imagination or fiction, or something sufficiently based on empirical criticism to provide firm substantiation of the strong arguments for action that the SIAI proclaims.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who is aware of something that might shatter the universe? Why is it that people like Vernor Vinge, Robin Hanson or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried enough to signal their support?

I'm talking to quite a few educated people outside this community. They do not doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI while neglecting other near-term risks that might wipe us out as well.

I believe that many people out there already know a lot more than I do about the related topics, and yet they seem not nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people; that's beside the point though. It's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate in the maths as Eliezer Yudkowsky, or do they perhaps teach their own methods of reasoning and decision making without using them?

What do you expect me to do, just believe Eliezer Yudkowsky? The way I believed so many things in the past that made sense but turned out to be wrong? Maybe after a few years of study I'll know more.

...

2011-01-06: As this post received over 500 comments I am reluctant to delete it. But I feel that it is outdated and that I could do much better today. This post has, however, been slightly improved to account for some shortcomings; it has not been completely rewritten, nor have its conclusions changed. Please account for this when reading comments that were written before this update.

2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index

Comments (600)

Comment author: thomblake 12 August 2010 02:46:20PM 3 points [-]

This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.

Comment author: Vladimir_Nesov 12 August 2010 11:05:44PM 3 points [-]

This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.

The connection to Eliezer's ability to write a book is bizarre (to say so politely).

Comment author: Alicorn 12 August 2010 11:11:23PM *  7 points [-]

I think the idea is that if one was originally skeptical about the general feasibility of stitching together separate posts into a single book, this post offers an example of it being done on a smaller scale and ups the estimate of that feasibility.

Comment author: Vladimir_Nesov 12 August 2010 11:14:18PM 0 points [-]

It's not in the nature of ideas to be blog posts. One can generally present ideas in book form, depending on one's writing skills.

Comment author: EStokes 12 August 2010 11:11:06PM 3 points [-]

It didn't feel very clear/coherent, but I'm tired, so meh. I think it could've done with more lists or something like that: an outline or a clear summation of his points.

Comment author: utilitymonster 12 August 2010 04:02:35PM *  6 points [-]

I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.

  1. How much of your energy are you willing to spend on benefiting others, if the expected benefits to others will be very great? (It needn't be great for you to support SIAI.)
  2. Are you willing to pursue a diversified altruistic strategy if it saves fewer expected lives (it almost always will for donors giving less than $1 million or so)?
  3. Do you think mitigating x-risk is more important than giving to down-to-earth charities (GiveWell style)? (This will largely turn on how you feel about supporting causes with key probabilities that are tough to estimate, and how you feel about low-probability, high expected utility prospects.)
  4. Do you think that trying to negotiate a positive singularity is the best way to mitigate x-risk?
  5. Is any known organization likely to do better than SIAI in terms of negotiating a positive singularity (in terms of decreasing x-risk) on the margin?
  6. Are you likely to find an organization that beats SIAI in the future?

Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.

Comment author: XiXiDu 12 August 2010 06:30:10PM *  0 points [-]
  1. Maximal utility for everyone is a preference, but a secondary one. Most of all, in whatever I support, my personal short- and long-term benefit is a priority.
  2. No
  3. Yes (Edit)
  4. Uncertain/Unable to judge.
  5. Maybe, but I don't know of one. That doesn't mean that we shouldn't create one, if only because of the uncertainty about Eliezer Yudkowsky's possible unstated goals.
  6. Uncertain/Unable to judge. See 5.
Comment author: utilitymonster 12 August 2010 06:48:44PM 2 points [-]

Given your answers to 1-3, you should spend all of your altruistic efforts on mitigating x-risk (unless you're just trying to feel good, entertain yourself, etc.).

For 4, I shouldn't have asked whether you "think" something beats negotiating a positive singularity in terms of x-risk reduction. Better: is there some other fairly natural class of interventions (or list of potential examples) that, given your credences, has a higher expected value? What might such things be?

For 5-6, perhaps you should think about what such organizations might be. Those interested in convincing XiXiDu might try listing some alternative best x-risk mitigating groups and provide arguments that they don't do as well. As for me, my credences are highly unstable in this area, so info is appreciated on my part as well.

Comment author: NancyLebovitz 12 August 2010 04:50:11PM 1 point [-]

I'm pretty sure that a gray goo nanotech disaster is generally not considered plausible-- if nothing else, it would generate so much heat the nanotech would fail.

This doesn't address less dramatic nanotech disasters-- say, a uFAI engineering viruses to wipe out the human race so that it can build what it wants without the risk of interference.

Comment author: jimrandomh 12 August 2010 04:59:24PM *  6 points [-]

I'm pretty sure that a gray goo nanotech disaster is generally not considered plausible--if nothing else, it would generate so much heat the nanotech would fail.

This argument can't be valid, because it also implies that biological life can't work either. At best, this implies a limit on the growth rate; but without doing the math, there is no particular reason to think that limit is slow.

Comment author: NancyLebovitz 12 August 2010 05:05:25PM 2 points [-]

Grey goo is assumed to be a really fast replicator that will eat anything. Arguably, it's a movie plot disaster.

Comment author: timtyler 13 August 2010 08:15:33AM 0 points [-]

From that thread, it seems that many people like to speculate on possible disasters.

Comment author: NancyLebovitz 13 August 2010 08:45:47AM 1 point [-]

The point is that they know they're doing it for the fun of it rather than actually coming up with anything that needs to be prevented.

Comment author: Eliezer_Yudkowsky 12 August 2010 08:54:26PM 1 point [-]

Google "global ecophagy".

Comment author: NancyLebovitz 13 August 2010 09:00:36AM 1 point [-]

I've done so. What's your take on the odds of the biosphere being badly deteriorated?

Comment author: timtyler 13 August 2010 06:33:21AM 2 points [-]

Eric Drexler decided it was implausible some time ago:

"Nanotech guru turns back on 'goo'"

However, some still flirt with the corresponding machine intelligence scenarios - though those don't seem much more likely to me.

Comment author: ciphergoth 12 August 2010 05:19:44PM 3 points [-]

Do you have any reason to suppose that Charlie Stross has even considered SIAI's claims?

Comment author: XiXiDu 12 August 2010 06:50:00PM 3 points [-]

If someone like me, who failed secondary school, can come up with such ideas before coming across the SIAI, I thought that someone who writes SF novels about the idea of a technological singularity might too. And you don't have to link me to the post about 'Generalizing From One Example'; I'm aware of it.

And Charles Stross was not the only person that I named, by the way. At least one of those people is a member on this site.

Comment author: Wei_Dai 13 August 2010 03:12:25AM 8 points [-]

At least one of those people is a member on this site.

If you're referring to Gary Drescher, I forwarded him a link of your post, and asked him what his views of SIAI actually are. He said that he's tied up for the next couple of days, but will reply by the weekend.

Comment author: XiXiDu 13 August 2010 08:05:51AM 3 points [-]

Great, thank you! I was thinking of asking some people to actually comment here.

Comment author: ciphergoth 13 August 2010 10:06:28AM 4 points [-]

I plan on asking Stross about this next time I visit Edinburgh, if he's in town.

Comment author: XiXiDu 13 August 2010 10:12:10AM 4 points [-]

That'd be great. I'd be excited to have as many opinions as possible about the SIAI from people who are not associated with it.

I wonder if we could get some experts to actually write an informed critique about the whole matter, not just some SF writers. Although I think Stross is probably as educated as EY.

What is Robin Hanson's opinion about all this, does anyone know? Is he as worried about the issues in question? Is he donating to the SIAI?

Comment author: Unknowns 13 August 2010 10:18:01AM 2 points [-]

Robin Hanson said that he thought the probability of an AI being able to foom and destroy the world was about 1%. However, note that since this would be a 1% chance of destroying the world, he considers it reasonable to take precautions against this.

Comment author: CarlShulman 13 August 2010 10:33:41AM *  3 points [-]

That's AI built by a very small group fooming to take over the world at 1%, going from a millionth or less of the rest of the world economy to much larger very quickly. That doesn't account for risk from AI built by large corporations or governments, Darwinian AI evolution destroying everything we value, AI arms race leading to war (and accidental screwups), etc. His AI (80% of which he says is brain emulations) x-risk estimate is higher. He says between 1% and 50%.

Comment deleted 13 August 2010 10:27:37AM [-]
Comment author: XiXiDu 13 August 2010 10:45:21AM 4 points [-]

Me looking for some form of peer review is deemed to be bizarre? It is not my desire to crush the SIAI but to figure out what is the right thing to do.

You know what I would call bizarre? That someone writes in bold and all caps calling someone an idiot and afterwards bans his post. All that based on ideas that themselves result from, and rest on, unsupported claims. That is what EY is doing, and I am trying to assess the credibility of such reactions.

Comment author: CarlShulman 13 August 2010 11:19:03AM 6 points [-]

Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don't know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning the cosmic commons outcome.

I don't know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.

Comment author: XiXiDu 13 August 2010 12:15:08PM 1 point [-]

Thanks, this is the kind of informed (I believe, in Hanson's case) contrarian third-party opinion about the main issues that I perceive to be missing.

Surely I could have found out about this myself. But if I were going to wait until I had first finished my studies of the basics, i.e. caught up on formal education, then read the relevant background information and afterwards all of LW, I might as well not donate to the SIAI at all for the next half decade.

Where is the summary of the kind available for other issues like climate change? Where is the talk.origins of existential risks, especially superhuman AI?

Comment author: MartinB 12 August 2010 07:00:25PM 4 points [-]

Let's all try not to confuse SF writers with futurists, and neither with researchers or engineers. Stories follow the rules of awesome, or they don't sell well. There is a wonderful letter from Heinlein to a fan who asked why he wrote, and the top answer was: 'to put food on the table'. It is probably online, but I could not find it at the moment. Comparing the work of the SIAI to any particular writer is like comparing the British Navy with Jack London.

Comment author: NancyLebovitz 13 August 2010 09:01:47AM 4 points [-]

Heinlein also described himself as competing for his readers' beer money.

Comment author: whpearson 12 August 2010 07:27:15PM 1 point [-]

I'd be surprised if he hasn't at least come across the early arguments; he was active on the Extropy-Chat mailing list the same time as Eliezer. I didn't follow it closely enough to see if their paths crossed though.

Comment author: Wei_Dai 12 August 2010 08:10:14PM 2 points [-]

Stross's views are simply crazy. See his “21st Century FAQ” and others' critiques of it.

I do wonder why Ray Kurzweil isn't more concerned about the risk of a bad Singularity. I'm guessing he must have heard SIAI's claims, since he co-founded the Singularity Summit along with SIAI. Has anyone put the question to him?

Comment author: ciphergoth 12 August 2010 09:11:31PM 2 points [-]

I think "simply crazy" is overstating it, but it's striking he makes the same mistake that Wright and other critics make: SIAI's work is focussed on AI risks, while the critics focus on AI benefits. This I assume is because rather than addressing what SIAI actually say, they're addressing their somewhat religion-like picture of it.

Comment author: Vladimir_Nesov 12 August 2010 09:15:52PM 0 points [-]

I think "simply crazy" is overstating it, but it's striking he makes the same mistake that Wright and other critics make: SIAI's work is focussed on AI risks, while the critics focus on AI benefits.

Well, I also try to focus on AI benefits. The critics fail because of broken models, not because of the choice of claims they try to address.

Comment author: whpearson 12 August 2010 09:23:20PM 5 points [-]

I got the sense that he is very pessimistic about the chance of controlling things if they do go FOOM. If he is that pessimistic and also believes that the advance of AI will be virtually impossible to stop, then forgetting about it will be as purposeful as worrying about it.

Comment author: CarlShulman 13 August 2010 07:17:21AM 2 points [-]

I think this is an accurate picture of Stross' point.

Comment author: timtyler 13 August 2010 06:27:45AM *  3 points [-]

Re: "I do wonder why Ray Kurzweil isn't more concerned about the risk of a bad Singularity"

http://www.cio.com/article/29790/Ray_Kurzweil_on_the_Promise_and_Peril_of_Technology_in_the_21st_Century

Comment author: CarlShulman 13 August 2010 12:38:16PM 0 points [-]

Crazy in which respect? It seemed to me that those critiques were narrow and mostly talking past Stross. The basic point that space is going to remain much more expensive and less pleasant than expansion on Earth for quite some time, conditioning on no major advances in AI, nanotechnology, biotechnology, etc, is perfectly reasonable. And Stross does so condition.

He has a few lines about it in The Singularity is Near, basically saying that FAI seems very hard (no foolproof solutions available, he says), but that AI will probably be well integrated. I don't think he means "uploads come first, and manage AI after that," as he predicts Turing-Test passing AIs well before uploads, but he has said things suggesting that those Turing Tests will be incomplete, with the AIs not capable of doing original AI research. Or he may mean that the ramp up in AI ability will be slow, and that IA will improve our ability to monitor and control AI systems institutionally, aided by non-FAI engineering of AI motivational systems and the like.

Comment author: Wei_Dai 13 August 2010 12:48:51PM *  2 points [-]

Crazy in which respect?

Look at his answer for The Singularity:

The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we're very unlucky. If it happens and it's interested in us, all our plans go out the window. If it doesn't happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea. The best approach to the singularity is to apply Pascal's Wager — in reverse — and plan on the assumption that it ain't going to happen, much less save us from ourselves.

He doesn't even consider the possibility of trying to nudge it in a good direction. It's either "plan on the assumption that it ain't going to happen", or sit around waiting for AIs to save us.

ETA: The "He" in your second paragraph is Kurtzweil, I presume?

Comment author: Rain 13 August 2010 12:54:48PM 1 point [-]

That quote could also be interpreted as saying that UFAI is far more likely than FAI.

Comment author: Risto_Saarelma 13 August 2010 01:36:46PM 1 point [-]

Pretty much how I read it. It should acknowledge the attempts to make a FAI, but it seems like a reasonable pessimistic opinion that FAI is too difficult to ever be pulled off successfully before strong AI in general.

Seems like a sensible default stance to me. Since humans exist, we know that a general intelligence can be built out of atoms, and since humans have many obvious flaws as physical computation systems, we know that any successful AGI is likely to end up at least weakly superhuman. There isn't a similarly strong reason to assume a FAI can be built, and the argument for one seems to be more on the lines of things being likely to go pretty weird and bad for humans if one can't be built but an AGI can.

Comment author: Wei_Dai 13 August 2010 01:52:26PM 2 points [-]

Thinking that FAI is extremely difficult or unlikely isn't obviously crazy, but Stross isn't just saying "don't bother trying FAI" but rather "don't bother trying anything with the aim of making a good Singularity more likely". The first sentence of his answer, which I neglected to quote, is "Forget it."

Comment author: Johnicholas 12 August 2010 05:19:48PM 5 points [-]

This is an attempt (against my preference) to defend SIAI's reasoning.

Let's characterize the predictions of the future into two broad groups: 1. business as usual, or steady-state. 2. aware of various alarmingly exponential trends broadly summarized as "Moore's law". Let's subdivide the second category into two broad groups: 1. attempting to take advantage of the trends in roughly a (kin-) selfish manner 2. attempting to behave extremely unselfishly.

If you study how the world works, the lack of steady-state-ness is everywhere. We cannot use fossil fuels or arable land indefinitely at the current rates. "Business as usual" depends crucially on progress in order to continue! We're investing heavily in medical research, and there's no reason to expect that to stop. Self-replicating molecular-scale entities already exist, and there is every reason to expect that we will understand how to build them better than we currently do.

Supposing that the above paragraph has convinced the reader that the lack of steady-state-ness is fairly obvious, and given the reader's knowledge of human nature, how many people would you expect to be trying to behave in an extremely unselfish manner?

Your post seems to expect that if various luminaries believed that incomprehensibly-sophisticated computing engines and dangerous self-replicating atomic-scale entities were likely in our future, then they would behave extremely unselfishly - is that reasonable? Supposing that almost everyone was aware of this probable future, how would they take advantage of it in a (kin-)selfish manner as best they can? I think the hypothesized world would look very much like this one.

Comment author: Wei_Dai 12 August 2010 05:37:16PM 10 points [-]

I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post- Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self- aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new enviroment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)

I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans' natural competitiveness and the possibilities inherent in technology. And yet ... we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:

He goes on to talk about intelligence amplification, and then:

Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendance.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety ... well, some of the suggestions are a little scarey on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today's world, one where losers take on the winners' tricks and are coopted into the winners' enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].

Comment author: XiXiDu 13 August 2010 08:37:04AM 2 points [-]

As I wrote in another comment, Eliezer Yudkowsky hasn't come up with anything unique. Nor can one argue that he's simply the smartest fellow around, since clearly other people have come up with the same ideas before him. And that was my question: why are they not signaling their support for the SIAI? Or, in case they don't know about the SIAI, why are they not using all their resources and publicity to try to stop the otherwise inevitable apocalypse?

It looks like there might be arguments against the kind of fearmongering that can be found within this community. So why is nobody inquiring about the reasons for the great silence within the group of those who are aware of a possible singularity but nevertheless keep quiet? Maybe they know something you don't. Or are you people so sure of your phenomenal intelligence?

Comment author: Unknowns 13 August 2010 09:36:40AM -1 points [-]
Comment author: XiXiDu 13 August 2010 09:56:20AM *  5 points [-]

Absence of evidence is not evidence of absence?

There's simply no good reason to argue against cryonics. It is a chance in case of the worst-case scenario, and its odds are considerably better than those of rotting six feet under.

Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of the specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, as the data that is already available does not justify the given claims in the first place.

Anyway, I think I might write some experts and all of the people mentioned in my post, if I'm not too lazy.

I've already got one reply, whose author I'm not going to name right now. But let's first consider Yudkowsky's attitude when addressing other people:

You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong...

Now the first of those people I contacted about it:

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.

Read Moral Machines for current state of the art thinking on how to build a moral machine mind.

SIAI dogma makes sense if you ignore the uncertainties at every step of their logic. It's like assigning absolute numbers to every variable in the Drake equation and determining that aliens must be all around us in the solar system, and starting a church on the idea that we are being observed by spaceships hidden on the dark side of the moon. In other words, religious thinking wrapped up to look like rationality.

ETA

I was told the person I quoted above is stating full ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that the person might indeed not have been honest, or clueful. Otherwise I'll unnecessarily end up perpetuating possible ad hominem attacks.

Comment author: Unknowns 13 August 2010 10:03:13AM 1 point [-]

This isn't contrary to Robin's post (except what you say about cryonics.) Robin was saying that there is a reluctance to criticize those things in part because the experts think they are not worth bothering with.

Comment author: utilitymonster 13 August 2010 12:26:28PM 8 points [-]

I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.
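A toy illustration of that Bayesian point, with every number hypothetical: under expected-utility reasoning a speculative intervention can dominate a well-evidenced one, while an "act only on solid evidence" rule ignores it entirely.

```python
# Toy expected-value comparison; every number here is hypothetical.
# Option A: well-evidenced intervention, modest payoff.
p_a, value_a = 0.95, 1_000           # near-certain, saves ~1,000 lives
# Option B: speculative far-future intervention, huge payoff.
p_b, value_b = 0.001, 10_000_000     # very uncertain, but enormous stakes

ev_a = p_a * value_a
ev_b = p_b * value_b
print(f"EV(A) = {ev_a:,.0f}, EV(B) = {ev_b:,.0f}")
# Under expected-utility reasoning B wins despite the thin evidence;
# under an "only act on solid evidence" rule, A wins by default.
```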

Comment author: multifoliaterose 13 August 2010 12:44:34PM 2 points [-]

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.

I have some sympathy for your remark.

The real question is just whether SIAI has greatly overestimated at least one of the relevant probabilities. I have high confidence that the SIAI staff have greatly overestimated their ability to have a systematically positive impact on existential risk reduction.

Comment author: utilitymonster 13 August 2010 01:07:20PM 3 points [-]

Have you read Nick Bostrom's paper, Astronomical Waste? You don't have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.

Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough. (I agree that this kind of argument is worrisome; maybe expected utility theory or utilitarianism breaks down with these huge numbers and tiny probabilities, but it is worth thinking about.)
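The arithmetic behind that figure, using the rough order of magnitude from Bostrom's paper (which is itself highly assumption-dependent):

```python
# Rough version of the Astronomical Waste argument. Bostrom estimates on the
# order of 10^38 potential lives lost per century that colonization is
# delayed; the exact figure depends heavily on his assumptions.
potential_lives = 1e38   # potential future lives at stake, roughly
delta_p = 1e-18          # tiny claimed reduction in extinction probability

expected_lives_saved = delta_p * potential_lives
print(f"expected lives saved: {expected_lives_saved:.0e}")
```

Even a 10^-18 shift multiplies out to an enormous expected number of lives, which is why the argument either compels you or makes you suspect the framework breaks down at these extremes.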

If you're sold on x-risk, are there other candidate efforts that might have higher expected x-risk reduction on the margin (after due reflection)? (I'm not saying SIAI clearly wins, I just want to know what else you're thinking about.)

Comment author: Rain 13 August 2010 01:52:45PM *  4 points [-]

[...] many reasons to doubt [...] belief system of a cult [...] haphazard musings of a high school dropout [...] never written a single computer program [...] professes to be an expert [...] crying chicken little [...] only a handful take the FAI idea seriously.

[...] dogma [...] ignore the uncertainties at every step [...] starting a church [...] religious thinking wrapped up to look like rationality.

I am unable to take this criticism seriously. It's just a bunch of ad hominem and hand-waving. What are the reasons to doubt? How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview? How is a fiercely atheist group religious at all? How is it a cult (there are lots of posts about this in the LessWrong archive)? How is it irrational?

Edit: And I'm downvoted. You actually think a reply that's 50% insult and emotionally loaded language has substance that I should be engaging with? I thought it was a highly irrational response on par with anti-cryonics writing of the worst order. Maybe you should point out the constructive portion.

Comment author: CarlShulman 13 August 2010 11:47:13AM *  8 points [-]

David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.

He reports a stigma around the topic, a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave it. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.

Comment author: XiXiDu 13 August 2010 12:22:59PM *  4 points [-]

Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn't answer the primary question of my post. How do you people arrive at the estimations you state? Where can I find the details of how you arrived at your conclusions about the likelihood of those events?

If all this was supposed to be mere philosophy, I wouldn't inquire about it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky and calls for action. Is it reasonable to follow given the current state of evidence?

Comment author: orthonormal 12 August 2010 05:58:36PM *  7 points [-]

These are reasonable questions to ask. Here are my thoughts:

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.

  • The likelihood of exponential growth versus a slow development over many centuries.
  • That it is worth it to spend most on a future whose likelihood I cannot judge.

These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash. If it is possible soon, then it's a vital factor in existential risk. You'd have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.

For the other, this falls under the fuzzies and utilons calculation. Insofar as you want to feel confident that you're helping the world (and yes, any human altruist does want this), pick a charity certain to do good in the present. Insofar as you actually want to maximize your expected impact, you should weight charities by their uncertainty and their impact, multiply it out, and put all your eggs in the best basket (unless you've just doubled a charity's funds and made them less marginally efficient than the next one on your list, but that's rare).
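The "best basket" rule, including the marginal-efficiency caveat, can be sketched as a greedy allocation over invented diminishing-returns curves (all figures hypothetical):

```python
# Hypothetical diminishing-returns impact curves: impact(funds) grows like
# scale * log(1 + funds/k), so the marginal impact of the next dollar is
# scale / (k + funds). All numbers are invented for illustration.
charities = {
    "certain_good":    {"scale": 1.0,  "k": 1_000,   "funds": 0.0},
    "speculative_big": {"scale": 50.0, "k": 100_000, "funds": 0.0},
}

def marginal(c):
    # Marginal expected impact of the next dollar given current funding.
    return c["scale"] / (c["k"] + c["funds"])

# Greedy allocation: each chunk of the budget goes to whichever charity
# currently offers the highest marginal expected impact.
budget, step = 50_000, 100.0
while budget > 0:
    best = max(charities.values(), key=marginal)
    best["funds"] += step
    budget -= step

for name, c in charities.items():
    print(name, f"allocated: {c['funds']:,.0f}")
```

Most of the budget ends up in the higher-capacity basket, but not all of it: once a charity's marginal efficiency drops below the next one's, the greedy rule switches, which is exactly the caveat in the parenthesis above.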

  • That Eliezer Yudkowsky is the right and only person who should be leading (respectively, that the SIAI is the institution that should be working) to mitigate the above.

Aside from any considerations in his favor (development of TDT, for one publicly visible example), this sounds too much like a price for joining: if you really take the risk of Unfriendly AI seriously, what else could you do about it? In fact, the more well-known SIAI gets in the AI community and the more people take it seriously, the more likely that it will (1) instill in other GAI researchers some necessary concern for goal systems and (2) give rise to competing Friendly AI projects which might improve on SIAI in any relevant respects. Unless you thought they were doing as much harm as good, it still seems optimal to fund SIAI now if you're concerned about self-improving AI.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out?

My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI rather than doing other things with their life. There's a very unsurprising selection bias here.

ETA: Reading the comments, I just found that XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate. I've downvoted this post, and I now feel kind of stupid for having written out this huge reply.

Comment author: whpearson 12 August 2010 07:56:57PM 1 point [-]

Virtually certain that these things are plausible.

What do you mean by plausible in this instance? Not currently refuted by our theories of intelligence or chemistry? Or something stronger.

Comment author: orthonormal 12 August 2010 11:59:12PM *  0 points [-]

Oh yeah, oops, I meant to say "possible in our physics". Edited accordingly.

Comment author: XiXiDu 13 August 2010 09:07:15AM 0 points [-]

It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI.

Where is the evidence that supports the claims that it is not only possible, but that it will also turn out to be MUCH smarter than a human being, not just more rational or faster? Where is the evidence for an intelligence explosion? Is action justified simply based on the mere possibility that it might be physically possible?

...when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading)...

Not even your master believes this.

At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.

Yes, once they turned themselves into superhuman intelligences? Isn't this what Kurzweil believes? No risks by superhuman AI because we'll go the same way anyway?

If a self-improving AI isn't actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash.

Yep.

You'd have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.

Yes, but to allocate all my eggs to them? Remember, they ask for more than simple support.

Insofar as you actually want to maximize your expected impact...

I want to maximize my expected survival. If there are medium-term risks that could kill me with higher probability than AI will in the future, they are as important as the AI killing me later.

...development of TDT...

Highly interesting. Sadly it is not a priority.

...if your really take the risk of Unfriendly AI seriously, what else could you do about it?

I could, for example, start my own campaign to make people aware of possible risks. I could talk to people. I bet there's a lot more you smart people could do besides supporting EY.

...the more well-known SIAI gets in the AI community.

The SIAI, and especially EY, does not have the best reputation within the x-risk community, and I bet it's the same in the AI community.

Unless you thought they were doing as much harm as good...

That might very well be the case given how they handle public relations.

My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI.

He wasn't the first smart person who came to these conclusions. And he sure isn't charismatic.

XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate.

I've read and heard enough to be in doubt, since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor of some basic principles of unknown accuracy. And even those arguments are sufficiently vague that you cannot differentiate them from mere philosophical musing.

And if you feel stupid because I haven't read hundreds of articles to find a single piece of third party evidence in favor of the outstanding premises used to ask for donations, then you should feel stupid.

Comment author: kodos96 13 August 2010 09:52:06AM 2 points [-]

Since I've now posted several comments on this thread defending and/or "siding with" XiXiDu, I feel I should state, for the record, that I think this last comment is a bit over the line, and I don't want to be associated with the kind of unnecessarily antagonistic tone displayed here.

Although there are a couple pieces of the SIAI thesis that I'm not yet 100% sold on, I don't reject it in its entirety, as it now sounds like XiXiDu does - I just want to hear some more thorough explanation on a couple of sticking points before I buy in.

Also, charisma is in the eye of the beholder ;)

Comment author: XiXiDu 13 August 2010 10:05:14AM *  3 points [-]

...and I don't want to be associated with the kind of unnecessarily antagonistic tone displayed here.

Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments? Here are the first two replies from academics I wrote to about this post, addressing EY:

Wow, that's an incredibly arrogant put-down by Eliezer... SIAI won't win many friends if he puts things like that...

and

...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.

Comment author: kodos96 13 August 2010 10:15:46AM 2 points [-]

Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments?

I have been pointing that out as well - although I would describe his reactions more as "defensive" than "antagonistic". Regardless, it seemed to be out of character for Eliezer. Do the two of you have some kind of history I'm not aware of?

Comment author: XiXiDu 13 August 2010 10:34:05AM *  7 points [-]

I think I should say more about this. That EY has no charisma is, I believe, a reasonable estimation. Someone who says of himself that he's not neurotypical likely isn't a very appealing person in the eyes of the average person. I have also received much evidence in the form of direct comments about EY showing that many people do not like him personally.

Now let's examine whether I am hostile to EY and his movement. First, a comment I made regarding Michael Anissimov's 26th birthday. I wrote:

Happy birthday!

I’m also 26…I’ll need another 26 years to reach your level though :-)

I’ll donate to SIAI again as soon as I can.

And keep up this great blog.

Have fun!!!

Let's examine my opinion about Eliezer Yudkowsky.

  • Here I suggest EY to be the most admirable person.
  • When I recommended reading Good and Real to a professional philosopher I wrote, "Don't know of a review, a recommendation by Eliezer Yudkowsky as 'great' is more than enough for me right now."
  • Here is a long discussion with some physicists in which I try to defend MWI by linking them to EY's writings. Note: it is a backup, since I deleted my comments there after being angered by their hostile tone.

There is a lot more which I'm too lazy to look up now. You can check it for yourself, I'm promoting EY and the SIAI all the time, everywhere.

And I'm pretty disappointed that rather than answering my questions or linking me up to some supportive background information, I mainly seem to be dealing with a bunch of puffed up adherents.

Comment author: Aleksei_Riikonen 12 August 2010 06:39:47PM *  3 points [-]

This post makes very weird claims regarding what SIAI's positions would be.

"Spend most on a particular future"? "Eliezer Yudkowsky is the right and only person who should be leading"?

It doesn't seem to me at all that such things are SIAI's position. Why doesn't the poster provide references for these weird claims?

Here's a good reference for what SIAI's position actually is:

http://singinst.org/riskintro/index.html

Comment author: Nick_Tarleton 12 August 2010 07:02:28PM *  1 point [-]

Seconded, plus I don't understand what the link from "worth it" has to do with the topic.

Comment author: XiXiDu 12 August 2010 07:19:38PM *  1 point [-]

I'll let the master himself answer this one:

Fun Theory, for instance: the questions of “What do we actually do all day, if things turn out well?,” “How much fun is there in the universe?,” “Will we ever run out of fun?,” “Are we having fun yet?” and “Could we be having more fun?” In order to answer questions like that, obviously, you need a Theory of Fun.

[...]

The question is: Is this what actually happens to you if you achieve immortality? Because, if that’s as good as it gets, then the people who go around asking “what’s the point?” are quite possibly correct.

Comment author: Aleksei_Riikonen 12 August 2010 07:12:13PM 1 point [-]

From the position paper I linked above, a key quote on what SIAI sees itself as doing:

"We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling."

The poster makes claims that are completely at odds with even the most basic familiarity with what SIAI's position actually is.

Comment author: XiXiDu 12 August 2010 07:29:23PM *  1 point [-]

Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

Q: The only two legitimate occupations for an intelligent person in our current world? Answer

Q: What's your advice for Less Wrong readers who want to help save the human race? Answer

Comment author: Aleksei_Riikonen 12 August 2010 07:41:15PM 2 points [-]

How do your quotes claim that Eliezer Yudkowsky is the only person who should be leading?

(I would say that factually, there are also other people in leadership positions within SIAI, and Eliezer is extremely glad that this is so, instead of thinking that it should be only him.)

How do they demonstrate that donating to SIAI is "spending on a particular future"?

(I see it as trying to prevent a particular risk.)

Comment author: Vladimir_Nesov 12 August 2010 09:57:20PM *  0 points [-]

http://singinst.org/riskintro/index.html

By the way, is it linked to from the SIAI site somewhere? It's a good summary, but I only ever saw the direct link (and the page is not in SIAI site format).

Comment author: Aleksei_Riikonen 12 August 2010 10:05:28PM 2 points [-]

It's linked from the sidepanel here at least:

http://singinst.org/overview

But indeed it's not very prominently featured on the site. It's a problem of most of the site having been written substantially earlier than this particular summary, and there not (yet) having been a comprehensive change from that earlier state of how the site is organized.

Comment author: Vladimir_Nesov 12 August 2010 10:11:36PM *  1 point [-]

I see. This part of the site doesn't follow the standard convention of selecting the first sub-page in a category when you click on the category, instead it selects the second, which confused me before. I thought that I was reading "Introduction" when in fact I was reading the next item. Bad design decision.

Comment author: JoshuaZ 12 August 2010 08:12:32PM 6 points [-]

The Charlie Stross example seems less than ideal. Much of what Stross has written touches upon or deals intensely with issues connected to runaway AI. For example, the central premise of "Singularity Sky" involves an AI in the mid 20th century going from stuck in a lab to godlike in possibly a few seconds. His short story "Antibodies" focuses on the idea that very bad fast burns occur very frequently. He also has at least one (unpublished) story the central premise of which is that Von Neumann and Turing proved that P=NP and that the entire cold war was actually a way of keeping lots of weapons online, ready to nuke any rogue AIs.

Note also that you mention Greg Egan, who has also written fiction in which rogue AIs and bad nanotech make things very unpleasant (see for example Crystal Nights).

As to the other people you mention, and why they aren't very worried about the possibilities that Eliezer takes seriously: at least one person on your list (Kurzweil) is an incredible optimist and not much of a rationalist, so it seems extremely unlikely that he would ever become convinced that any risk situation was of high likelihood unless the evidence for the risk was close to overwhelming.

Regarding MWI: I've read this sequence, and it seems that Eliezer makes one of the strongest cases for Many-Worlds that I've seen. However, I know that there are a lot of people who have thought about this issue, have much more physics background, and have not reached this conclusion. I'm therefore extremely uncertain about MWI. So what should one do if one doesn't know much about this? In this case, the answer is pretty easy, since MWI doesn't alter actual behavior much (unless you are intending to engage in quantum suicide or the like). So figuring out whether Eliezer is correct about MWI should not be a high priority, except insofar as it provides a possible data point for deciding whether Eliezer is correct about other things.

Advanced real-world molecular nanotechnology - Of the points you bring up, this one seems to me the most unlikely to be correct. There are a lot of technical barriers to grey goo, and most of the people actually working with nanotech don't seem to see that sort of situation as very likely. But it also seems clear that this doesn't mean there aren't many other possible things molecular nanotech could do that would make things very unpleasant for us. Here, Eliezer is far from the only person worried. See for example this article, which is a few years out of date but does show that there's serious worry in this regard among academics and governments.

Runaway AI/AI going FOOM - This is potentially the most interesting of your points, simply because it is so much more unique to the SIAI and Eliezer. So what can one do to figure out whether this is correct? One thing to do is to examine the arguments and claims being made in detail, and see what other experts think on the subject. In this context, most AI people seem to consider this an unlikely problem, so maybe look at what they have to say. Note also that Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago, but I can't find the link right now; if someone else can track it down I'd appreciate it). One thing to note is that estimates for nanotech can substantially impact the chance of an AI going FOOM. If cheap, easy nanotech exists, then an AI may be able to improve its hardware at a very fast rate. If such nanotech does not exist, then an AI will be limited to self-improvement primarily by improving software, which might be much more limited. See this subthread, where I bring up some of the possible barriers to software improvement and, by the end of it, become substantially more convinced by cousin_it that the barriers to escalating software improvement may be small.

What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths or maybe somehow teach but not use their own methods of reasoning and decision making?

Note that even practiced Bayesians are far from perfect rationalists. If one hasn't thought about an issue, or even considered that something is possible, there's not much one can do about it. Moreover, a fair number of people who self-identify as Bayesian rationalists aren't very rational, and the set of people who do self-identify as such is pretty small.

Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.

Given your data set this seems reasonable to me. Frankly, if I were to give money to or support the SIAI, I would do so primarily because I think that the Singularity Summits are clearly helpful at getting together lots of smart people, and that this is true even if one assigns a low probability to any Singularity-type event occurring in the next 50 years.

Comment author: utilitymonster 13 August 2010 01:38:45PM 2 points [-]

[...] Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago but I can't find the link right now. If someone else can track it down I'd appreciate it).

FOOM Debate

Comment author: Vladimir_Nesov 12 August 2010 08:19:40PM *  7 points [-]

The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.

The reason to work on FAI is to prevent any non-Friendly process from eventually taking control of the future, however fast or slow, suddenly powerful or gradual, it happens to be. And the reason to work on FAI now is that the fate of the world is at stake. The main anti-prediction to internalize is that the future won't be Friendly unless it's specifically made Friendly, even if change happens slowly. We can just as easily slowly drift away from the things we value. You can't optimize for something you don't understand.

It doesn't matter if it takes another thousand years, we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news, more time to prepare, to understand what we want.

(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)

Comment author: multifoliaterose 12 August 2010 08:31:19PM *  4 points [-]

As I've said elsewhere:

(a) There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created. I have not seen anybody present a coherent argument that AGI is likely to be developed before any other existential risk hits us.

(b) Even if AGI deserves top priority, there's still the important question of how to go about working toward an FAI. As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

(c) Even if AGI is near, there are still serious issues of accountability and transparency connected with SIAI. How do we know that they're making a careful effort to use donations in an optimal way? As things stand, I believe that it would be better to start an organization which exhibits high transparency and accountability, fund that, and let SIAI fold. I might change my mind on this point if SIAI decided to strive toward transparency and accountability.

Comment author: Vladimir_Nesov 12 August 2010 08:40:31PM *  1 point [-]

My comment was specifically about importance of FAI irrespective of existential risks, AGI or not. If we manage to survive at all, this is what we must succeed at. It also prevents all existential risks on completion, where theoretically possible.

Comment author: multifoliaterose 12 August 2010 08:47:57PM 1 point [-]

Okay, we had this back and forth before and I didn't understand you then and now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be sufficiently small so that it makes sense to focus on other existential risks for the moment. And my other points remain.

Comment author: Vladimir_Nesov 12 August 2010 08:58:26PM *  4 points [-]

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn't work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.

Comment author: multifoliaterose 12 August 2010 11:32:12PM *  5 points [-]

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes, they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

I agree that in general people should be more concerned about existential risk and that it's worthwhile to promote general awareness of existential risk.

But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice.

More to the point, I think that one of the major factors keeping people away from studying existential risk is that many of the people who are interested in it (including Eliezer) have low credibility, on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I'm seriously concerned about this issue.

If Eliezer can't explain why it's pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like "I believe that AGI will be developed over the next 100 years but it's hard for me to express why, so it's understandable that people don't believe me" or "I'm uncertain as to whether or not AGI will be developed over the next 100 years."

When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he's actively damaging the cause of existential risk.

Comment author: timtyler 13 August 2010 08:19:20AM 0 points [-]

Re: "AGI will be developed over the next 100 years"

I list various estimates, from those interested enough in the issue to bother giving probability density functions, at the bottom of:

http://alife.co.uk/essays/how_long_before_superintelligence/

Comment author: multifoliaterose 13 August 2010 10:29:13AM 0 points [-]

Thanks, I'll check this out when I get a chance. I don't know whether I'll agree with your conclusions, but it looks like you've at least attempted to answer one of my main questions concerning the feasibility of SIAI's approach.

Comment author: CarlShulman 13 August 2010 11:58:46AM 1 point [-]

Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.

Comment author: gwern 13 August 2010 01:37:06PM 0 points [-]

Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.

Comment author: timtyler 13 August 2010 08:13:03AM 1 point [-]

Re: "Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all."

The marginal benefit of making machines smarter seems large - e.g. see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8

I don't really see that situation changing much anytime soon - there will probably be such marginal benefits for a long time to come.

Comment author: mkehrt 12 August 2010 08:51:52PM 2 points [-]

I really agree with both a and b (although I do not care about c). I am glad to see other people around here who think both these things.

Comment author: timtyler 13 August 2010 06:41:44AM 0 points [-]

Re: "There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created."

The humans are going to be obliterated soon?!?

Alas, you don't present your supporting reasoning.

Comment author: multifoliaterose 13 August 2010 10:26:41AM *  1 point [-]

No, no, I'm not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large scale nuclear war? It could be that AGI deserves top priority, but I haven't seen a good argument for why.

Comment author: ciphergoth 13 August 2010 11:17:17AM 4 points [-]

I think AGI wiping out humanity is far more likely than nuclear war doing so (it's hard to kill everyone with a nuclear war) but even if I didn't, I'd still want to work on the issue which is getting the least attention, since the marginal contribution I can make is greater.

Comment author: multifoliaterose 13 August 2010 12:33:04PM *  0 points [-]

Yes, I actually agree with you about nuclear war (and did before I mentioned it!) - I should have picked a better example. How about existential risk from asteroid strikes?

Several points:

(1) Nuclear war could still cause an astronomical waste in the form that I discuss here.

(2) Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.

(3) If you satisfactorily address my point (a), points (b) and (c) will remain.

Comment author: whpearson 12 August 2010 11:26:39PM 1 point [-]

Slowly gives the option of figuring out some things about the space of possible AIs with experimentation. Which might then constrain the possible ways to make them friendly.

To use the tired flying metaphor. The type of stabilisation you need for flying depends on the method of generating lift. If fixed wing aircraft are impossible there is not much point looking at ailerons and tails. If helicopters are possible then we should be looking at tail rotors.

Comment author: timtyler 12 August 2010 08:30:50PM *  3 points [-]

Two key propositions seem to be:

  1. The world is at risk from a superintelligence-gone-wrong;

  2. The SIAI can help to do something about that.

Both propositions seem debatable. For the first point, certainly some scenarios are better than others - but the superintelligence causing widespread havoc by turning on its creators hypothesises substantial levels of incompetence, followed up by a complete failure of the surrounding advanced man-machine infrastructure to deal with the problem. Most humans may well have more to fear from a superintelligence-gone-right, but in dubious hands.

Comment author: Rain 12 August 2010 08:37:19PM *  39 points [-]

(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)

Although this may not answer your questions, here are my reasons for supporting SIAI:

  • I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.

  • It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"

  • No one else cares about the big picture. (Nick Bostrom and the FHI excepted; if they came out against SIAI, I might change my view.) Every other organization seems to focus on the 'generic now', leaving unintended consequences to crush their efforts in the long run, or avoiding the true horrors of the world (pain, age, poverty) due to not even realizing they're solvable. The ability to predict the future, through knowledge, understanding, and computational power, is the key attribute toward making that future a truly good place. The utility calculations are staggeringly in support of the longest view, such as that provided by SIAI.

  • It's the simplest of the 'good outcome' possibilities. Everything else seems to depend on magical hand-waving, or an overly simplistic view of how the world works or what a single advance would mean, rather than the way it interacts with all the diverse improvements that happen alongside it and how real humans would react to them. Friendly AI provides 'intelligence-waving' that seems far more likely to work in a coherent fashion.

  • I don't see anything else to give me hope. What else solves all potential problems at the same time, rather than leaving every advancement to be destroyed by that one failure mode you didn't think of? Of course! Something that can think of those failure modes for you, and avoid them before you even knew they existed.

  • It's cheap and easy to do so on a meaningful scale. It's very easy to make up a large percentage of their budget; I personally provided more than 3 percent of their annual operating costs for this year, and I'm only upper middle class. They also have an extremely low barrier to entry (any amount of US dollars and a stamp, or a credit card, or PayPal).

  • They're thinking about the same things I am. They're providing a tribe like LessWrong, and they're pushing, trying to expand human knowledge in the ways I think are most important, such as existential risk, humanity's future, rationality, effective and realistic reversal of pain and suffering, etc.

  • I don't think we have much time. The best predictions aren't very good, but human power has increased to the point where there's a true threat we'll destroy ourselves within the next 100 years through means nuclear, biological, nano, AI, wireheading, or nerf the world. Sitting on money and hoping for a better deal, or donating to institutions now that will compound into advancements generations in the future seems like too little, too late.

I still put more money into savings accounts than I give to SIAI. I'm investing in myself and my own knowledge more than the purported future of humanity as they envision. I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

Comment author: multifoliaterose 12 August 2010 11:42:01PM 2 points [-]

Good, informative comment.

Comment author: XiXiDu 13 August 2010 08:25:58AM *  3 points [-]

I want what they're selling.

Yeah, that's why I'm donating as well.

It's the most logical next step.

Sure, but why the SIAI?

No one else cares about the big picture.

I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point.

It's the simplest of the 'good outcome' possibilities.

So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.

I don't see anything else to give me hope.

I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored.

It's cheap and easy to do so on a meaningful scale.

I want to believe.

They're thinking about the same things I am.

Beware of those who agree with you?

I don't think we have much time.

Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats.

I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

I can accept that. But I'm unable to follow the process of elimination yet.

Comment author: Rain 13 August 2010 12:13:11PM *  6 points [-]

It's the most logical next step.

Sure, but why the SIAI?

Who else is working directly on creating smarter-than-human intelligence with non-commercial goals? And if there are any, are they self-reflective enough to recognize its potential failure modes?

No one else cares about the big picture.

I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point.

I used something I developed which I call Point-In-Time Utility to guide my thinking on this matter. It basically boils down to, 'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires.

It's the simplest of the 'good outcome' possibilities.

So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.

I don't think it has to be an explosion at all, just smarter-than-human. I'm willing to take things one step at a time, if necessary. Though it seems unlikely we could build a smarter-than-human intelligence without understanding what intelligence is, and thus knowing where to tweak, if even retroactively. That said, I consider intelligence tweaking itself to be a shaky idea, though I view alternatives as failure modes.

I don't see anything else to give me hope.

I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored.

I think you overestimate my estimation of the friendliness of friendly AI. Note that at the end of my post I said it is very likely SIAI will fail. My hope total is fairly small. Roko deleted his own posts, and I was able to read the article Eliezer deleted since it was still in my RSS feed. It didn't change my thinking on the matter; I'd heard arguments like it before.

They're thinking about the same things I am.

Beware of those who agree with you?

Hi. I'm human. At least, last I checked. I didn't say all my reasons were purely rational. This one is dangerous (reinforcement), but I do a lot of reading of opposing opinions as well, and there's still a lot I disagree with regarding SIAI's positions.

I don't think we have much time.

Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats.

The latter is what I'm worried about. I see all of these threats as being developed simultaneously, in a race to see which one passes the threshold into reality first. I'm hoping that Friendly AI beats them.

I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

I can accept that. But I'm unable to follow the process of elimination yet.

I haven't seen you name any other organization you're donating to or who might compete with SIAI. Aside from the Future of Humanity Institute or the Lifeboat Foundation, both of which seem more like theoretical study groups than action-takers, people just don't seem to be working on these problems. Even the Methuselah Foundation is working on a very narrow portion which, although very useful and awesome if it succeeds, doesn't guard against the threats we're facing.

Comment author: XiXiDu 13 August 2010 01:02:22PM *  3 points [-]

Who else is working directly on creating smarter-than-human intelligence with non-commercial goals?

That there are no others does not mean we shouldn't be keen to create them, to establish competition. Or do it at all at this point.

...'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires.

I'm not sure about this.

I don't think it has to be an explosion at all, just smarter-than-human.

I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.

I think you overestimate my estimation of the friendliness of friendly AI.

You are right, never mind what I said.

I see all of these threats as being developed simultaneously...

Yeah and how is their combined probability less worrying than that of AI? That doesn't speak against the effectiveness of donating all to the SIAI of course. Creating your own God to fix the problems the imagined one can't is indeed a promising and appealing idea, given it is feasible.

I haven't seen you name any other organization you're donating to or who might compete with SIAI.

I'm mainly concerned about my own well-being. If I was threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for the people who are merely concerned about the well-being of all beings.

As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.

Comment author: Rain 13 August 2010 01:20:22PM *  6 points [-]

That there are no others does not mean we shouldn't be keen to create them, to establish competition.

Absolutely agreed. Though I'm barely motivated enough to click on a PayPal link, so there isn't much hope of my contributing to that effort. And I'd hope they'd be created in such a way as to expand total funding, rather than cannibalizing SIAI's efforts.

I'm not sure about this.

Certainly there are other ways to look at value / utility / whatever and how to measure it. That's why I mentioned I had a particular theory I was applying. I wouldn't expect you to come to the same conclusions, since I haven't fully outlined how it works. Sorry.

I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.

I'm not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that's the theory. There may be no way to save us.

Yeah and how is their combined probability less worrying than that of AI?

AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I've read, so does Eliezer, which is why he's working on that problem instead of, say, nanotech.

I'm mainly concerned about my own well-being.

I've mentioned before that I'm somewhat depressed, so I consider my philanthropy to be a good portion 'lack of caring about self' more than 'being concerned about the well-being of all beings'. Again, a subtractive process.

As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.

Thanks! I think that's probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technical minded anti-Summit, without all the useless politics of the IEET and the like.

Comment author: XiXiDu 13 August 2010 02:11:01PM 0 points [-]

I think UFAI is far more likely than FAI...

It's more likely that the Klingon warbird can overpower the USS Enterprise.

I think AI is actually the most dangerous of them...

Why? Because EY told you? I'm not trying to make snide remarks here but how people arrived at this conclusion was what I have been inquiring about in the first place.

...though I would also appreciate more critical discussion from experts and educated people...

Me too, but I was the only one around willing to start one at this point. That's the sorry state of critical examination.

Comment author: EStokes 12 August 2010 11:04:58PM 4 points [-]

I don't think this post was well-written, at the least. I didn't even understand the tl;dr?

tldr; Is the SIAI evidence-based or merely following a certain philosophy? I'm currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.

I don't see much precise expansion on this, except for MWI? There's a sequence on it.

And that is my problem. Given my current educational background and knowledge I cannot differentiate between LW being a consistent internal logic, i.e. imagination or fiction, and something which is sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed on this site.

Have you read the sequences?

As for why there aren't more people supporting SIAI, first of all, it's not widely known; second of all, it's liable to be dismissed on first impressions. Not many have examined the SIAI. Also, only [4% of the general public in the US believe in neither a god nor a higher power](http://en.wikipedia.org/wiki/Religion#cite_ref-49). The majority isn't always right.

I don't understand why this post has upvotes. It was unclear, and it seems topics went unresearched. The usefulness of donating to the SIAI has been discussed before; I think someone probably would've posted a link if asked in the open thread.

Comment author: kodos96 13 August 2010 05:18:28AM 12 points [-]

I don't understand why this post has upvotes.

I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu's doubts about some of SIAIs premises and conclusions, but perhaps don't speak up with their concerns either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.

Unfortunately, the tone of a lot of the responses to this thread lead me to believe that those motivated by the latter option may have been right to worry.

Comment author: Furcas 13 August 2010 05:23:20AM 7 points [-]

Personally, I upvoted the OP because I wanted to help motivate Eliezer to reply to it. I don't actually think it's any good.

Comment author: kodos96 13 August 2010 05:47:44AM *  11 points [-]

Yeah, I agree (no offense XiXiDu) that it probably could have been better written, cited more specific objections etc. But the core sentiment is one that I think a lot of people share, and so it's therefore an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.

Comment author: Wei_Dai 13 August 2010 08:51:30AM 5 points [-]

I think your upvote probably backfired, because (I'm guessing) Eliezer got frustrated that such a badly written post got upvoted so quickly (implying that his efforts to build a rationalist community were less successful than he had thought/hoped) and therefore responded with less patience than he otherwise might have.

Comment author: JamesAndrix 13 August 2010 02:28:02AM 4 points [-]
  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

I don't believe that is necessarily true, just that no one else is doing it. I think other teams working on FAI Specifically would be a good thing, provided they were competent enough not to be dangerous.

Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously wrong way. When I arrived I had a different view on morality than EY, but I was very happy to see another group of people at least working on the problem.

Also note that you only need to believe in the likelihood of UFAI -or- nanotech -or- other existential threats in order to want FAI. I'd have to step back a few feet to wrap my head around considering it infeasible at this point.

Comment author: CarlShulman 13 August 2010 07:26:31AM *  7 points [-]

That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

That's just a weird claim. When Richard Posner or David Chalmers does writing in the area SIAI folk cheer, not boo. And I don't know anyone at SIAI who thinks that the Future of Humanity Institute's work in the area isn't a tremendously good thing.

Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously wrong way.

Have you looked into the philosophical literature?

Comment author: CarlShulman 13 August 2010 07:11:14AM 2 points [-]

Here's the Future of Humanity Institute's survey results from their Global Catastrophic Risks conference. The median estimate of extinction risk by 2100 is 19%, with 5% for AI-driven extinction by 2100:

http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey

Unfortunately, the survey didn't ask for probabilities of AI development by 2100, so one can't get probability of catastrophe conditional on AI development from there.
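To make the missing piece concrete: the 5% figure above is an unconditional probability, and recovering the conditional requires a P(AGI by 2100) that the survey didn't collect. A minimal sketch, where the P(AGI) values are purely made-up placeholders:

```python
# From the FHI survey (median): P(AI-driven extinction by 2100) = 0.05.
# Extinction via AI requires AI to exist, so
# P(extinction | AGI developed) = P(extinction and AGI) / P(AGI) = 0.05 / P(AGI).
p_ai_extinction = 0.05

# Hypothetical values for the unmeasured quantity, for illustration only.
for p_agi in (0.2, 0.5, 0.8):
    p_conditional = p_ai_extinction / p_agi
    print(f"P(AGI by 2100) = {p_agi:.1f} -> P(extinction | AGI) = {p_conditional:.4f}")
```

The spread (0.25 down to 0.0625 across these guesses) shows why the conditional can't be pinned down from the survey alone.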

Comment author: timtyler 13 August 2010 08:02:32AM *  7 points [-]

That sample is drawn from those who think risks are important enough to go to a conference about the subject.

That seems like a self-selected sample of those with high estimates of p(DOOM).

The fact that this is probably a biased sample from the far end of a long tail should inform interpretations of the results.

Comment author: Rain 13 August 2010 12:42:48PM *  4 points [-]

There's also the possibility that people dismiss it out of hand, without even thinking, and the more you look into the facts, the more your estimate rises. In this instance, the people at the conference just have the most facts.

Comment author: ciphergoth 13 August 2010 07:55:07AM 8 points [-]

Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?

Comment author: whpearson 13 August 2010 08:13:29AM 6 points [-]

My charitable reading is that he is arguing there will be other people like him and if SIAI wishes to continue growing there does need to be easily digested material.

Comment author: [deleted] 13 August 2010 12:45:49PM 8 points [-]

From my experience as a long-time lurker and occasional poster, LW is not easily accessible to new users. The Sequences are indeed very long and time consuming, and most of them have multiple links to other posts you are supposed to have already read, creating confusion if you should happen to forget the gist of a particular post. Besides, Eliezer draws a number of huge philosophical conclusions (reductionism, computationalism, MWI, the Singularity, etc.), and a lot of people aren't comfortable swallowing all of that at once. Indeed, the "why should I buy all this?" question has popped into my head many times while reading.

Furthermore, I think criticism like this is good, and the LW crowd should not have such a negative reaction to it. After all, the Sequences do go on and on about not getting unduly emotionally attached to beliefs; if the community can't take criticism, that is probably a sign that it is getting a little too cozy with its current worldview.

Comment author: ciphergoth 13 August 2010 12:57:48PM 6 points [-]

Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.

Comment author: [deleted] 13 August 2010 01:07:11PM *  4 points [-]

Agreed--criticism of this sort vaguely reminds me of criticism of evolution in that it attacks a particular part of the desired target rather than its fundamental assumptions (my apologies to the original poster). Still, I think we should question the Sequences as much as possible, and even misguided criticism can be useful. I'm not saying we should welcome an unending series of top-level posts like this, but I for one would like to see critical essays on some of LW's most treasured posts. (There goes my afternoon...)

Comment author: ciphergoth 13 August 2010 01:50:08PM 2 points [-]

Of course, substantive criticism of specific arguments is always welcome.

Comment author: Simulation_Brain 13 August 2010 07:56:24AM 5 points [-]

I think there are very good questions in here. Let me try to simplify the logic:

First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few that have thought about it at least a little) seem to acknowledge a real possibility of AI disaster (they don't make probability estimates).

Now to the logical argument itself:

a) We are probably at risk from the development of strong AI. b) The SIAI can probably do something about that.

The other points in the OP are not terribly relevant; Eliezer could be wrong about a great many things, but right about these.

This is not a castle in the sky.

Now to argue for each: There's no good reason to think AGI will NOT happen within the next century. Our brains produce AGI; why not artificial systems? Artificial systems didn't produce anything a century ago; even without a strong exponential, they're clearly getting somewhere.

There are lots of arguments for why AGI WILL happen soon; see Kurzweil among others. I personally give it 20-40 years, even allowing for our remarkable cognitive weaknesses.

Next, will it be dangerous? a) Something much smarter than us will do whatever it wants, and very thoroughly. (this doesn't require godlike AI, just smarter than us. Self-improving helps, too.) b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

Finally, it seems that SIAI might be able to do something about it. If not, they'll at least help raise awareness of the issue. And as someone pointed out, achieving FAI would have a nice side effect of preventing most other existential disasters.

While there is a chain of logic, each of the steps seems likely, so multiplying probabilities gives a significant estimate of disaster, justifying some resource expenditure to prevent it (especially if you want to be nice). (Although spending ALL your money or time on it probably isn't rational, since effort and money generally have sublinear payoffs toward happiness).

Hopefully this lays out the logic; now, which of the above do you NOT think is likely?
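The multiplication described above can be sketched explicitly; every number here is an invented placeholder, and treating the steps as independent is itself a simplifying assumption, not a claim from the thread:

```python
# Illustrative chain-of-logic arithmetic with made-up probabilities.
# Assigning each step an even-odds 0.5 purely for demonstration.
steps = {
    "AGI is developed this century": 0.5,
    "it becomes much smarter than humans": 0.5,
    "its goals are not carefully aligned": 0.5,
    "misaligned goals lead to catastrophe": 0.5,
}

p_disaster = 1.0
for claim, p in steps.items():
    p_disaster *= p  # naive independence assumption

print(f"P(disaster) under these guesses: {p_disaster}")  # 0.0625
```

Even with each step at coin-flip odds, the product (about 6%) is arguably large enough to justify some resource expenditure, which is the shape of the argument being made.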

Comment author: kodos96 13 August 2010 08:47:35AM 3 points [-]

The only part of the chain of logic that I don't fully grok is the "FOOM" part. Specifically, the recursive self improvement. My intuition tells me that an AGI trying to improve itself by rewriting its own code would encounter diminishing returns after a point - after all, there would seem to be a theoretical minimum number of instructions necessary to implement an ideal Bayesian reasoner. Once the AGI has optimized its code down to that point, what further improvements can it do (in software)? Come up with something better than Bayesianism?

Now in your summary here, you seem to downplay the recursive self-improvement part, implying that it would 'help,' but isn't strictly necessary. But my impression from reading Eliezer was that he considers it an integral part of the thesis - as it would seem to be to me as well. Because if the intelligence explosion isn't coming from software self-improvement, then where is it coming from? Moore's Law? That isn't fast enough for a "FOOM", even if intelligence scaled linearly with the hardware you threw at it, which my intuition tells me it probably wouldn't.

Now of course this is all just intuition - I haven't done the math, or even put a lot of thought into it. It's just something that doesn't seem obvious to me, and I've never heard a compelling explanation to convince me my intuition is wrong.
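The diminishing-returns intuition can be put into a toy model (entirely my own construction, not anything from the FOOM debate): suppose each self-rewrite captures a fixed fraction of the remaining headroom toward some software optimum. Capability then converges rather than exploding:

```python
# Toy model: capability approaches a fixed software optimum geometrically.
# 'efficiency' is the (hypothetical) fraction of remaining headroom each
# rewrite captures; both parameters are arbitrary illustrative choices.
def improve(capability: float, optimum: float, efficiency: float = 0.5) -> float:
    return capability + efficiency * (optimum - capability)

c = 1.0
for _ in range(10):
    c = improve(c, optimum=100.0)

print(round(c, 2))  # 99.9 -- converges toward 100, never past it
```

Under these assumptions there is no FOOM from software alone; a fast takeoff would instead require the optimum itself to rise (new hardware, new paradigms), which is exactly the open question here.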

Comment author: ShardPhoenix 13 August 2010 09:27:46AM *  5 points [-]

I don't think anyone argues that there's no limit to recursive self-improvement, just that the limit is very high. Personally I'm not sure if a really fast FOOM is possible, but I think it's likely enough to be worth worrying about (or at least letting the SIAI worry about it...).

Comment author: utilitymonster 13 August 2010 01:30:55PM 5 points [-]

a) Something much smarter than us will do whatever it wants, and very thoroughly. (this doesn't require godlike AI, just smarter than us. Self-improving helps, too.) b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

I've heard a lot of variations on this theme. They all seem to assume that the AI will be a maximizer rather than a satisficer. I agree the AI could be a maximizer, but don't see that it must be. How much does this risk go away if we give the AI small ambitions?

Comment author: Mitchell_Porter 13 August 2010 11:09:51AM 17 points [-]

Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.

Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, though even there - treating science as your authority - there are endless ways to go wrong.

A lot of the discourse here and in similar places is science fiction minus plot, characters, and other story-telling apparatus. Just the ideas - often the utopia of the hard-SF fan, bored by the human interactions and wanting to get on with the transcendent stuff. With transhumanist and singularity culture, this utopia has arrived, because you can talk all day about these radical futurist ideas without being tied to a particular author or oeuvre. The ideas have leapt from the page and invaded our brains, where they live even during the dull hours of daylight life. Hallelujah!

So, before you evaluate SIAI and its significance, there are a few more ideas that I would like you to drive from your brain: The many-worlds metaphysics. The idea of trillion-year lifespans. The idea that the future of the whole observable universe depends on the outcome of Earth's experiment with artificial intelligence. These are a few of the science-fiction or science-speculation ideas which have become a fixture in the local discourse.

I'm giving you this lecture because so many of your doubts about LW's favorite crypto-SF ideas masquerading as reality are expressed in terms of ... what your favorite SF writers and futurist gurus think! But those people all have the same problem: they are trying to navigate issues where there simply aren't authorities yet. Stross and Egan suffer from exactly the same syndrome that affects everyone here who writes about mind copies, superintelligence, alien utility functions, and so on. They live in two worlds, the boring everyday world and the world of their imagination. The fact that they produce descriptions of whole fictional worlds in order to communicate their ideas, rather than little Internet essays, and the fact that they earn a living doing this... I'm not sure if that means they have the syndrome more under control, or less under control, compared to the average LW contributor.

Probably you already know this, probably everyone here knows it. But it needs to be said, however clumsily: there is an enormous amount of guessing going on here, and it's not always recognized as such, and furthermore, there isn't much help we can get from established authorities, because we really are on new terrain. This is a time of firsts for the human species, both conceptually and materially.

Now I think I can start to get to the point. Suppose we entertain the idea of a future where none of these scenarios involving very big numbers (lifespan, future individuals, galaxies colonized, amount of good or evil accomplished) apply, and where none of these exciting info-metaphysical ontologies turns out to be correct. A future which mostly remains limited in the way that all human history to date has been limited, limited in the ways which inspire such angst and such promethean determination to change things, or determination to survive until they change, among people who have caught the singularity fever. A future where everyone is still going to die, where the human race and its successors only last a few thousand years, not millions or billions of them. If that is the future, could SIAI still matter?

My answer is yes, because artificial intelligence still matters in such a future. For the sake of argument, I may have just poured cold water on a lot of popular ideas of transcendence, but to go further and say that only natural life and natural intelligence will ever exist really would be obtuse. If we do accept that "human-level" artificial intelligence is possible and is going to happen, then it is a matter at least as consequential as the possibility of genocide or total war. Ignore, again for the sake of a limited argument, all the ideas about planet-sized AIs and superintelligence, and it's still easy to see that AI which can out-think human beings and which has no interest in their survival ought to be possible. So even in this humbler futurology, AI is still an extinction risk.

The solution to the problem of unfriendly AI most associated with SIAI - producing the coherent extrapolated volition of the human race - is really a solution tailored to the idea of a single super-AI which undergoes a "hard takeoff", a rapid advancement in power. But SIAI is about a lot more than researching, promoting, and implementing CEV. There's really no organization like it in the whole sphere of "robo-ethics" and "ethical AI". The connection that has been made between "friendliness" and the (still scientifically unknown) complexities of the human decision-making process is a golden insight that has already justified SIAI's existence and funding many times over. And of course SIAI organizes the summits, and fosters a culture of discussion, both in real life and online (right here), which is a lot broader than SIAI's particular prescriptions.

So despite the excesses and enthusiasms of SIAI's advocates, supporters, and leading personalities, it really is the best thing we have going when it comes to the problem of unfriendly AI. Whether and how you personally should be involved with its work - only you can make that decision. (Even constructive criticism is a way of helping.) But SIAI is definitely needed.