JGWeissman comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM


Comment author: Eliezer_Yudkowsky 13 August 2010 08:17:23PM 20 points

I'm currently preparing for the Summit, so I'm not going to hunt down links. Those of you who said you wanted to see me do this should hunt down the links and reply with a list of them.

Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, so the amount of warm glow people generate for themselves is important. But when they talk about "putting all eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which the abstract argument should be pushing.
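
A minimal sketch of that allocation rule, with invented numbers (the basket names, probabilities, and per-dollar utilities are purely illustrative, not estimates of anything real):

    # Illustrative only: names, probabilities, and utilities are made up.
    # Discount each basket's payoff by the probability that its claims are true,
    # then put the whole (small) budget into the highest marginal EU per dollar.
    baskets = {
        "basket_A": {"p_true": 0.01, "utility_per_dollar_if_true": 1000.0},
        "basket_B": {"p_true": 0.50, "utility_per_dollar_if_true": 5.0},
        "basket_C": {"p_true": 0.90, "utility_per_dollar_if_true": 1.0},
    }

    def expected_utility_per_dollar(b):
        return b["p_true"] * b["utility_per_dollar_if_true"]

    # For a donation small enough that marginal utility stays roughly constant,
    # the expected-utility maximizer gives everything to the single best basket.
    best = max(baskets, key=lambda name: expected_utility_per_dollar(baskets[name]))
    print(best)  # basket_A: 0.01 * 1000 = 10 beats 2.5 and 0.9

The "unless you have enough resources to invest that the marginal utility goes down" clause is what would eventually justify splitting: once the best basket's marginal expected utility per dollar drops below the runner-up's, further dollars go to the runner-up.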

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)

  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Read Eric Drexler's Nanosystems. (Someone find an introduction by Foresight and link to it; that sort of thing is their job.) Also, the term you want is not "grey goo", but never mind.

  • The likelihood of exponential growth versus a slow development over many centuries.

Exponentials are Kurzweil's thing. They aren't dangerous. See the Yudkowsky-Hanson Foom Debate.

  • That it is worth it to spend most of what I have on a future whose likelihood I cannot judge.

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Money you spend on charitable efforts just to make yourself feel good should be counted as selfish spending. If you are entirely selfish but can think past a hyperbolic discount rate, then it's still possible you can get more hedons per dollar by donating to existential risk projects.
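
On the hyperbolic-discount-rate aside, here is a minimal sketch of the classic preference reversal it produces; the reward sizes, delays, and discount constant are arbitrary:

    # Illustrative only: amounts, delays, and k are arbitrary.
    def hyperbolic_value(amount, delay_days, k=0.1):
        # Present value under hyperbolic discounting: V = A / (1 + k * t).
        return amount / (1.0 + k * delay_days)

    small, large = 50.0, 100.0  # smaller-sooner vs. larger-later reward
    gap = 20                    # the larger reward arrives 20 days after the smaller one

    for delay_to_small in (30, 1):  # viewed from far away, then up close
        v_small = hyperbolic_value(small, delay_to_small)
        v_large = hyperbolic_value(large, delay_to_small + gap)
        choice = "larger-later" if v_large > v_small else "smaller-sooner"
        print(f"{delay_to_small} days out: prefer {choice}")
    # 30 days out the larger-later reward wins; 1 day out the preference flips.
    # An exponential discounter's ranking would never reverse this way.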

Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate preference reversals and inconsistent behaviors that result when ambiguity aversion isn't cashed out as a probability estimate and factored straight into expected utility.
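
For the "doesn't cash out as a probability estimate" part, the standard demonstration is the Ellsberg paradox. A toy check, assuming the usual urn of 30 red balls plus 60 black-or-yellow balls in unknown proportion:

    # Ellsberg urn: 30 red balls, 60 black-or-yellow balls in unknown proportion.
    # The common ambiguity-averse pattern: prefer "bet on red" to "bet on black",
    # yet prefer "bet on black or yellow" to "bet on red or yellow".
    p_red = 30 / 90

    def both_choices_maximize_eu(p_black):
        p_yellow = 1.0 - p_red - p_black
        prefers_red_over_black = p_red > p_black
        prefers_black_or_yellow = (p_black + p_yellow) > (p_red + p_yellow)
        return prefers_red_over_black and prefers_black_or_yellow

    # No value of P(black) rationalizes both choices at once, so this pattern
    # cannot be expressed as a single probability estimate fed into expected utility.
    print(any(both_choices_maximize_eu(i / 90) for i in range(61)))  # False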

  • That Eliezer Yudkowsky is the right and only person who should be leading, and the SIAI the only institution that should be working, to mitigate the above.

Michael Vassar is leading. I'm writing a book. When I'm done writing the book I plan to learn math for a year. When I'm done with that I'll swap back to FAI research hopefully forever. I'm "leading" with respect to questions like "What is the form of the AI's goal system?" but not questions like "Do we hire this guy?"

My judgement of and attitude towards a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is not sufficiently clear to me to give it top priority. Therefore I perceive it as unreasonable to put all my eggs in one basket.

Someone link to relevant introductions to ambiguity aversion as a cognitive bias and do the detailed explanation of the marginal utility thing.

What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making the SIAI a prime priority. That is, I'm donating to the SIAI but also spending considerable resources on maximizing utility in the present. Enjoying life, so to speak, is therefore a safety net in case my inability to judge the probability of a positive payoff is eventually resolved in the negative.

Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can't be viewed as expected utility maximization?

Many of the arguments on this site involve a few propositions and the use of probability to legitimate action given their asserted accuracy. Here so much is uncertain that I am not able to judge any of the nested probability estimates. I am already unable to judge the likelihood of something like the existential risk from exponentially evolving superhuman AI compared to that of us living in a simulated reality. And even if you tell me, am I to believe the data you base those estimates on?

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don't look at just one side and think about how much you doubt it and can't guess. Look at both of them. Also, read the FOOM debate.

And this is what I'm having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

You could tell me to learn about Solomonoff induction and so on; I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing and inquiring about here. And I dare to bet that many who support the SIAI cannot articulate the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate their estimates with any evidence other than a coherent internal logic of mutually supporting probability estimates.

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)
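
One standard way to make the parenthetical concrete is a Dutch book: an agent whose probabilities for an event and its complement do not sum to 1 will accept a set of bets, each fair by its own lights, that loses money no matter what happens. A toy sketch with arbitrary numbers:

    # Toy Dutch book: the agent assigns P(A) = 0.6 and P(not-A) = 0.6 (sum = 1.2),
    # so it regards 0.6 as a fair price for each ticket that pays 1 if its event occurs.
    price_A, price_not_A = 0.6, 0.6

    for A_happens in (True, False):
        paid = price_A + price_not_A   # the agent buys both tickets at its stated prices
        received = 1.0                 # exactly one ticket pays off in either case
        outcome = "A" if A_happens else "not-A"
        print(f"{outcome}: net = {received - paid:+.1f}")
    # Net is -0.2 either way: a guaranteed loss, purely from the inconsistency.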

I can, however, follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?

It sounds like you haven't done enough reading in key places to expect to be able to judge the overall credence from your own estimates.

There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There's no prestige payoff for them in it, so why would they?

I'm concerned that, although consistently so, the LW community is updating on fictional evidence. This post is meant to inquire into the basic principles, the soundness of the argumentation, and the basic premises it is based upon. That is, are you creating models to treat subsequent models, or are the propositions based on fact?

You have a sense of inferential distance. That's not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

Comment author: JGWeissman 13 August 2010 08:26:37PM 5 points