Eliezer_Yudkowsky comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM 23 points

Comment author: Eliezer_Yudkowsky 13 August 2010 08:17:23PM 20 points

I'm currently preparing for the Summit, so I'm not going to hunt down the links. Those of you who said you wanted to see me do this should hunt down the links and reply with a list of them.

Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for themselves is important. But when they talk about "putting all eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
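(A minimal illustrative sketch of the allocation logic above; the probabilities and diminishing-returns curves are invented placeholders, not figures from the comment or from any actual charity. Each successive dollar goes to whichever option currently has the highest marginal expected utility per dollar, so a small donor ends up with all eggs in one basket, and only a donor large enough to push that margin down ever splits.)

```python
# Illustrative sketch only: allocate a donation budget dollar by dollar to the
# charity with the highest marginal expected utility per dollar. The numbers
# and diminishing-returns curves below are invented placeholders.

def marginal_eu(charity, funded):
    """Marginal expected utility of the next dollar, given dollars already funded.
    Utility is modeled as p * scale * log(1 + funded / saturation), whose derivative
    with respect to funding is p * scale / (saturation + funded)."""
    return charity["p"] * charity["scale"] / (charity["saturation"] + funded)

charities = {
    "A": {"p": 0.01, "scale": 1e6, "saturation": 5e5},  # low probability, huge payoff
    "B": {"p": 0.50, "scale": 1e2, "saturation": 1e3},  # high probability, modest payoff
}

def allocate(budget, step=1.0):
    funded = {name: 0.0 for name in charities}
    spent = 0.0
    while spent < budget:
        best = max(funded, key=lambda name: marginal_eu(charities[name], funded[name]))
        funded[best] += step
        spent += step
    return funded

print(allocate(1_000))                  # small donor: everything lands in one basket
print(allocate(10_000_000, step=1000))  # large donor: margins equalize, so the money splits
```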

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)

  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Read Eric Drexler's Nanosystems. (Someone find an introduction by Foresight and link to it; that sort of thing is their job.) Also, the term you want is not "grey goo", but never mind.

  • The likelihood of exponential growth versus a slow development over many centuries.

Exponentials are Kurzweil's thing. They aren't dangerous. See the Yudkowsky-Hanson Foom Debate.

  • That it is worth it to spend most of my resources on a future whose likelihood I cannot judge.

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Things you spend on charitable efforts that just make you feel good should be considered selfish. If you are entirely selfish but you can think past a hyperbolic discount rate then it's still possible you can get more hedons per dollar by donating to existential risk projects.
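(For the hyperbolic-discount-rate aside, here is a standard textbook illustration, not taken from the comment and with arbitrary numbers, of the preference reversal that hyperbolic discounting produces and that one has to "think past".)

```python
# Textbook-style illustration of hyperbolic discounting (arbitrary numbers).
# Discounted value is amount / (1 + k * delay). The same chooser prefers the
# smaller, sooner reward when it is immediate, but flips to the larger, later
# reward when both options are pushed 30 days into the future -- a preference
# reversal that consistent exponential discounting never produces.

def hyperbolic_value(amount, delay_days, k=0.2):
    return amount / (1 + k * delay_days)

# Today: $100 now vs. $110 tomorrow.
print(hyperbolic_value(100, 0), hyperbolic_value(110, 1))    # 100.0 vs ~91.7 -> take the $100 now
# The same pair, 30 days out: $100 in 30 days vs. $110 in 31 days.
print(hyperbolic_value(100, 30), hyperbolic_value(110, 31))  # ~14.3 vs ~15.3 -> now wait for the $110
```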

Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate the preference reversals and inconsistent behaviors that result when ambiguity aversion doesn't cash out as a probability estimate that feeds straight into expected utility.
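(One concrete way to see "doesn't cash out as a probability estimate" is the standard Ellsberg-style pattern; the sketch below is illustrative and not from the comment.)

```python
# Ellsberg-style sketch (illustrative only): an urn holds 30 red balls and 60
# balls that are black or yellow in unknown proportion. Ambiguity-averse people
# typically bet on red over black, yet bet on black-or-yellow over red-or-yellow.
# Scanning every candidate belief about "black" shows no single probability makes
# both choices maximize expected utility, so the pattern cannot be cashed out as
# a probability estimate fed straight into expected utility.

P_RED = 1 / 3

def probability_rationalizing_both(steps=10_000):
    for i in range(steps + 1):
        p_black = (2 / 3) * i / steps       # candidate probability of black
        p_yellow = 2 / 3 - p_black
        bets_red_over_black = P_RED > p_black
        bets_black_or_yellow_over_red_or_yellow = p_black + p_yellow > P_RED + p_yellow
        if bets_red_over_black and bets_black_or_yellow_over_red_or_yellow:
            return p_black
    return None

print(probability_rationalizing_both())  # None: no probability rationalizes both bets
```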

  • That Eliezer Yudkowsky is the right and only person who should be leading, and the SIAI the right and only institution that should be working, to mitigate the above.

Michael Vassar is leading. I'm writing a book. When I'm done writing the book I plan to learn math for a year. When I'm done with that I'll swap back to FAI research hopefully forever. I'm "leading" with respect to questions like "What is the form of the AI's goal system?" but not questions like "Do we hire this guy?"

My judgement of and attitude towards a situation are necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is not sufficiently clear to me to justify giving it top priority. Therefore I perceive it as unreasonable to put all my eggs in one basket.

Someone link to relevant introductions to ambiguity aversion as a cognitive bias and do the detailed explanation of the marginal utility thing.

What I mean to say by using that idiom is that, given my current knowledge, I cannot expect to get the promised utility payoff that would justify making the SIAI my top priority. That is, I'm donating to the SIAI but also spending considerable resources on maximizing utility in the present. Enjoying life, so to speak, is therefore a safety net in case the positive payoff whose probability I cannot judge fails to materialize.

Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can't be viewed as expected utility maximization?

Many of the arguments on this site involve a few propositions and the use of probability to legitimize action in case those propositions turn out to be accurate. Here so much is uncertain that I'm not able to judge any of the nested probability estimations. I'm already unable to judge the likelihood of something like the existential risk of exponential evolving superhuman AI compared to the likelihood that we are living in a simulated reality. Even if you tell me, am I to believe the data you base those estimations on?

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don't look at just one side and think about how much you doubt it and can't guess. Look at both of them. Also, read the FOOM debate.

And this is what I'm having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimations that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

You could tell me to learn about Solomonoff induction and so on; I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing and inquiring about here. And I dare to bet that many who support the SIAI cannot articulate the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate the estimations with any kind of evidence other than a coherent internal logic of mutually supporting probability estimations.

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I can, however, follow much of the reasoning and many of the arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?

It sounds like you haven't done enough reading in key places to expect to be able to judge the overall credence out of your own estimates.

There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There's no prestige payoff for them in it, so why would they?

I'm concerned that, however consistently, the LW community is updating on fictional evidence. This post is meant to inquire into the basic principles, the foundations of the arguments, and the basic premises they are based upon. That is, are you creating models to treat subsequent models, or are the propositions based on fact?

You have a sense of inferential distance. That's not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

Comment author: Eliezer_Yudkowsky 13 August 2010 08:17:31PM 13 points

An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimations based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimations on.

Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.

What I'm trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponential evolving superhuman AI, then, valid speculation though that may be given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

Actually, now that I read this paragraph, it sounds like you think that "exponential", "evolving" AI is an unsupported premise, rather than "AI go FOOM" being the conclusion of a lot of other disjunctive lines of reasoning. That explains a lot about the tone of this post. And if you're calling it "exponential" or "evolving", which are both things the reasoning would specifically deny (it's supposed to be faster-than-exponential and have nothing to do with natural selection), then you probably haven't read the supporting arguments. Read the FOOM debate.

Further, do you have an explanation for the fact that Eliezer Yudkowsky is the only semi-popular person who has figured all this out? The only person who's aware of something that might shatter the utility of the universe, if not the multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI?

After reading enough of the sequences you'll pick up enough of a general sense of what it means to treat a thesis analytically, analyze it modularly, and regard every detail of a thesis as burdensome that you'll understand why people here would mention Bostrom or Hanson instead. The sort of thinking where you take things apart into pieces and analyze each piece is very rare, and anyone who doesn't do it isn't treated by us as a voice commensurable with those who do. Also, someone link an explanation of pluralistic ignorance and bystander apathy.

I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to worry about the possibilities depicted on this site and by the SIAI rather than about other near-term risks that might very well wipe us out.

An argument which makes sense emotionally (ambiguity aversion, someone link to hyperbolic discounting, link to scope insensitivity for the concept of warm glow) but not analytically (the expected utility intervals are huge, research often has long lead times).
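(A rough illustration of the parenthetical "the expected utility intervals are huge"; the numbers below are invented placeholders, not SIAI figures. When the stakes are astronomically large, even orders-of-magnitude uncertainty about the probability of making a difference leaves an enormous expected-value interval rather than a negligible one.)

```python
# Invented placeholder numbers, purely to illustrate the width of the interval.
lives_at_stake = 1e16              # hypothetical number of future lives affected
p_low, p_high = 1e-9, 1e-3         # very wide interval for P(marginal effort matters)

ev_low = lives_at_stake * p_low
ev_high = lives_at_stake * p_high
print(f"expected lives affected: {ev_low:.0e} to {ev_high:.0e}")  # 1e+07 to 1e+13
# The interval spans six orders of magnitude and its floor is still large, which is
# why the emotional dismissal does not go through analytically.
```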

I believe that hard-SF authors certainly know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have picked Greg Egan. That's beside the point, though; it's not just Stross or Egan but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate in the maths as Eliezer Yudkowsky, or do they maybe somehow teach but not use their own methods of reasoning and decision making?

Good reasoning is very rare, and it only takes a single mistake to derail. "Teach but not use" is extremely common. You might as well ask "Why aren't there other sites with the same sort of content as LW?" Read enough, and either you'll pick up a visceral sense of the quality of reasoning being higher than anything you've ever seen before, or you'll be able to follow the object-level arguments well enough that you don't worry about other sources casually contradicting them based on shallower examinations, or, well, you won't.

What do you expect me to do? Just believe Eliezer Yudkowsky? Like I believed so many things in the past that made sense but turned out to be wrong? And besides, my psychic condition wouldn't allow me to devote all my resources to the SIAI, or even a substantial amount of my income. The thought makes me reluctant to give anything at all.

Start out with a recurring PayPal donation that doesn't hurt; let it fade into the background; consider doing more after the first stream no longer takes a psychic effort; don't try to make any commitment now, or think about it now, in order to avoid straining your willpower.

Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

Comment author: JGWeissman 13 August 2010 08:40:38PM 12 points

Comment author: Cyan 13 August 2010 08:59:02PM 5 points

No bystander apathy here!

Comment author: thomblake 13 August 2010 09:02:54PM 5 points

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

The relevant fallacy in 'Aristotelian' logic is probably false dilemma, though there are a few others in the neighborhood.

Comment author: Jonathan_Graehl 17 August 2010 06:59:47PM 3 points

I haven't done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of "cosmologists and quantum field theorists" think MWI is true.

Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)

Comment author: wedrifid 14 August 2010 06:16:02AM 3 points

Thanks for taking the time to give a direct answer. I enjoyed reading this, and these replies will likely serve as useful comments to link to when people ask similar questions in the future.

Comment author: NancyLebovitz 13 August 2010 08:39:57PM 3 points

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

Probably black and white thinking.