Should I believe what the SIAI claims?

23 Post author: XiXiDu 12 August 2010 02:33PM

Major update here.

The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear to me.

Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimize action. Much here is uncertain, to an extent that I am unable to judge any nested probability estimates. And even if you state them, where is the data on which you base those estimates?

There seems to be a highly complicated framework of estimates that support and reinforce one another. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

I know that what I'm saying may simply be due to a lack of knowledge and education, which is why I am inquiring about it. How many of you who currently support the SIAI are able to analyse the reasoning that led you to support it in the first place, or at least substantiate your estimates with evidence other than a coherent internal logic?

I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credibility. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

I'm concerned that the SIAI and its supporters, however consistently, are updating on fictional evidence. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models, or are your propositions based on fact?

An example here is the use of the many-worlds interpretation. Being itself a logical implication, can it be used to make further inferences and estimates without additional evidence? MWI might be the only consistent non-magical interpretation of quantum mechanics. The problem is that such conclusions are, I believe, widely considered insufficient to base further speculation and estimation on. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt: debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies. It is fiction. Imagination allows for endless possibilities, while scientific evidence provides hints of what might be possible and what impossible. Science provides the ability to assess your data. Any hint that empirical criticism provides gives you new information to build on. Not because it bears truth value, but because it gives you an idea of what might be possible: an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.

And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic, i.e. imagination or fiction, or something sufficiently based on empirical criticism to provide firm substantiation of the strong arguments for action that are proclaimed by the SIAI.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who is aware of something that might shatter the universe? Why is it that people like Vernor Vinge, Robin Hanson or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried to the extent that they signal their support?

I'm talking to quite a few educated people outside this community. They do not doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI while neglecting other near-term risks that might wipe us out as well.

I believe that many people out there know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people; that's beside the point though. It's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths, or do they perhaps teach, but not use, their own methods of reasoning and decision making?

What do you expect me to do, just believe Eliezer Yudkowsky? The way I believed so many things in the past that made sense but turned out to be wrong? Maybe after a few years of study I'll know more.

...

2011-01-06: As this post received over 500 comments I am reluctant to delete it. But I feel that it is outdated and that I could do much better today. This post has been slightly improved to address some shortcomings, but it has not been completely rewritten, nor have its conclusions been changed. Please account for this when reading comments that were written before this update.

2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index

Comments (600)

Comment author: Eliezer_Yudkowsky 13 August 2010 08:17:23PM 20 points [-]

I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.

Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool: the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for themselves is important. But when they talk about "putting all eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
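The allocation rule above is concrete enough to sketch in a few lines of code. This is only an illustration with invented numbers; none of the names, payoffs, probabilities, or decay rates below come from the discussion, and the diminishing-returns model is a deliberately crude assumption.

```python
# A numeric sketch of the rule: discount each cause's payoff by the
# probability its claims are true, then give each successive dollar to
# whichever cause currently offers the highest marginal expected
# utility. All numbers are made-up placeholders.

def marginal_eu_per_dollar(payoff, p_true, dollars_in, decay):
    """Expected utility of the next dollar: payoff discounted by the
    probability the claims are true, with diminishing returns."""
    return payoff * p_true * decay ** dollars_in

def allocate(charities, budget):
    """Greedily assign each dollar to the highest-marginal-EU cause."""
    funded = {name: 0 for name in charities}
    for _ in range(budget):
        best = max(
            charities,
            key=lambda n: marginal_eu_per_dollar(
                charities[n]["payoff"], charities[n]["p"],
                funded[n], charities[n]["decay"],
            ),
        )
        funded[best] += 1
    return funded

# Hypothetical: a low-probability, high-payoff cause vs. a likely,
# modest-payoff one. With a small budget and slow diminishing returns,
# every dollar goes into the single best basket.
charities = {
    "x_risk":  {"payoff": 1000.0, "p": 0.01, "decay": 0.999},
    "bednets": {"payoff": 10.0,   "p": 0.9,  "decay": 0.999},
}
print(allocate(charities, 100))  # → {'x_risk': 100, 'bednets': 0}
```

With a large enough budget, x_risk's marginal value eventually falls below bednets' and the greedy rule starts splitting, which is the "unless you have enough resources that the marginal utility goes down" clause.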

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)

  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Read Eric Drexler's Nanosystems. (Someone find an introduction by Foresight and link to it, that sort of thing is their job.) Also the term you want is not "grey goo", but never mind.

  • The likelihood of exponential growth versus a slow development over many centuries.

Exponentials are Kurzweil's thing. They aren't dangerous. See the Yudkowsky-Hanson Foom Debate.

  • That it is worth it to spend most on a future whose likelihood I cannot judge.

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Things you spend on charitable efforts that just make you feel good should be considered selfish. If you are entirely selfish but you can think past a hyperbolic discount rate then it's still possible you can get more hedons per dollar by donating to existential risk projects.

Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate preference reversals and inconsistent behaviors that result from ambiguity aversion which doesn't cash out as a probability estimate and factor straight into expected utility.
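The preference reversals referred to above can be made concrete with the classic Ellsberg urn (my example, not part of the original comment): an urn holds 30 red balls and 60 balls that are black or yellow in unknown proportion. Ambiguity-averse subjects typically prefer betting on red over black, yet also prefer betting on black-or-yellow over red-or-yellow, and no single probability assignment for "black" rationalizes both choices as expected-utility maximization:

```python
# Toy check: scan candidate beliefs p(black) in [0, 2/3] and test
# whether any of them makes both typical ambiguity-averse choices
# maximize expected utility at once. (Ball counts follow the standard
# Ellsberg setup; this is an illustration, not from the thread.)

def consistent_belief_exists(steps=1000):
    for i in range(steps + 1):
        p_black = (2 / 3) * i / steps     # candidate belief about black
        p_yellow = 2 / 3 - p_black        # rest of the 60 unknown balls
        eu_a = 1 / 3                      # bet A: win on red
        eu_b = p_black                    # bet B: win on black
        eu_c = 1 / 3 + p_yellow           # bet C: win on red or yellow
        eu_d = p_black + p_yellow         # bet D: win on black or yellow
        if eu_a > eu_b and eu_d > eu_c:   # both typical choices at once
            return True
    return False

print(consistent_belief_exists())  # → False
```

Preferring A requires p(black) < 1/3 while preferring D requires p(black) > 1/3, so the pattern cannot be cashed out as a probability estimate fed into expected utility, which is the point being made.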

  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

Michael Vassar is leading. I'm writing a book. When I'm done writing the book I plan to learn math for a year. When I'm done with that I'll swap back to FAI research hopefully forever. I'm "leading" with respect to questions like "What is the form of the AI's goal system?" but not questions like "Do we hire this guy?"

My judgement of, and attitude towards, a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. The state of affairs regarding the SIAI and its underlying rationale and rules of operation is not sufficiently clear to me to give it top priority. Therefore I perceive it as unreasonable to put all my eggs in one basket.

Someone link to relevant introductions of ambiguity aversion as a cognitive bias and do the detailed explanation on the marginal utility thing.

What I mean by using that idiom is that I cannot expect, given my current knowledge, the promised utility payoff that would justify making the SIAI a prime priority. That is, I'm donating to the SIAI but also spending considerable resources on maximizing utility in the present. Enjoying life, so to say, is my safety net in case my currently unjudgeable probability of a positive payoff turns out to be answered in the negative in the future.

Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can't be viewed as expected utility maximization?

Many of the arguments on this site involve a few propositions and the use of probability to legitimize action, given their asserted accuracy. Much here is uncertain to an extent that I'm not able to judge any nested probability estimates. I'm already unable to judge the likelihood of something like the existential risk of exponentially evolving superhuman AI compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimates on?

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don't look at just one side and think about how much you doubt it and can't guess. Look at both of them. Also, read the FOOM debate.

And this is what I'm having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

You could tell me to learn about Solomonoff induction etc.; I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing and inquiring about here. And I dare to bet that many who support the SIAI cannot trace the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate their estimates with any evidence other than a coherent internal logic of reciprocally supporting probability estimates.

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I can, however, follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credibility. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?

It sounds like you haven't done enough reading in key places to expect to be able to judge the overall credence out of your own estimates.

There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There's no prestige payoff for them in it, so why would they?

I'm concerned that, however consistently, the LW community is updating on fictional evidence. This post is meant to inquire into the basic principles, the foundations of the arguments, and the basic premises they are based upon. That is, are you creating models to treat subsequent models, or are the propositions based on fact?

You have a sense of inferential distance. That's not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

Comment author: Eliezer_Yudkowsky 13 August 2010 08:17:31PM 13 points [-]

An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimates based on it. No doubt MWI is the only consistent non-magical interpretation of quantum mechanics. But that's it: an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimates on.

Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.

What I'm trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt: debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

Actually, now that I read this paragraph, it sounds like you think that "exponential", "evolving" AI is an unsupported premise, rather than "AI go FOOM" being the conclusion of a lot of other disjunctive lines of reasoning. That explains a lot about the tone of this post. And if you're calling it "exponential" or "evolving", which are both things the reasoning would specifically deny (it's supposed to be faster-than-exponential and have nothing to do with natural selection), then you probably haven't read the supporting arguments. Read the FOOM debate.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out? The only person aware of something that might shatter the utility of the universe, if not the multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok, using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI?

After reading enough sequences you'll pick up enough of a general sense of what it means to treat a thesis analytically, analyze it modularly, and regard every detail of a thesis as burdensome, that you'll understand people here would mention Bostrom or Hanson instead. The sort of thinking where you take things apart into pieces and analyze each piece is very rare, and anyone who doesn't do it isn't treated by us as a commensurable voice with those who do. Also, someone link an explanation of pluralistic ignorance and bystander apathy.

I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to worry about the possibilities depicted on this site and by the SIAI rather than about other near-term risks that might very well wipe us out.

An argument which makes sense emotionally (ambiguity aversion, someone link to hyperbolic discounting, link to scope insensitivity for the concept of warm glow) but not analytically (the expected utility intervals are huge, research often has long lead times).

I believe that hard-SF authors certainly know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have picked Greg Egan; that's beside the point though. It's not just Stross or Egan but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths, or do they perhaps teach, but not use, their own methods of reasoning and decision making?

Good reasoning is very rare, and it only takes a single mistake to derail. "Teach but not use" is extremely common. You might as well ask "Why aren't there other sites with the same sort of content as LW?" Reading enough, and either you'll pick up a visceral sense of the quality of reasoning being higher than anything you've ever seen before, or you'll be able to follow the object-level arguments well enough that you don't worry about other sources casually contradicting them based on shallower examinations, or, well, you won't.

What do you expect me to do? Just believe Eliezer Yudkowsky? The way I believed so many things in the past that made sense but turned out to be wrong? And besides, my psychological condition wouldn't allow me to devote all my resources to the SIAI, or even a substantial amount of my income. The thought makes me reluctant to give anything at all.

Start out with a recurring Paypal donation that doesn't hurt, let it fade into the background, consider doing more after the first stream no longer takes a psychic effort, don't try to make any commitment now or think about it now in order to avoid straining your willpower.

Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

Comment author: JGWeissman 13 August 2010 08:40:38PM *  12 points [-]
Comment author: Cyan 13 August 2010 08:59:02PM 5 points [-]

No bystander apathy here!

Comment author: thomblake 13 August 2010 09:02:54PM 5 points [-]

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

The relevant fallacy in 'Aristotelian' logic is probably false dilemma, though there are a few others in the neighborhood.

Comment author: Jonathan_Graehl 17 August 2010 06:59:47PM 3 points [-]

I haven't done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of "cosmologists and quantum field theorists" think MWI is true.

Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)

Comment author: wedrifid 14 August 2010 06:16:02AM 3 points [-]

Thanks for taking the time to give a direct answer. I enjoyed reading this, and these replies will likely serve as useful references when people ask similar questions in the future.

Comment author: NancyLebovitz 13 August 2010 08:39:57PM 3 points [-]

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

Probably black and white thinking.

Comment author: JGWeissman 13 August 2010 08:26:37PM *  5 points [-]
Comment author: XiXiDu 14 August 2010 06:01:10PM 4 points [-]

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who's already convinced, preferably of someone within the SIAI?

That is part of what I call transparency and a foundational and reproducible corroboration of one's first principles.

Read the Yudkowsky-Hanson AI Foom Debate.

Awesome, I never came across this until now. Is it not widely mentioned? Anyway, what I notice from the wiki entry is that one of the most important ideas, recursive improvement, which might directly support the claims of existential risk posed by AI, is still missing. All this might be featured in the debate, hopefully with references to substantial third-party research papers; I don't know yet.

Read Eric Drexler's Nanosystems.

The whole point of the grey-goo example was to exemplify the speed and sophistication of nanotechnology that would have to exist to either allow an AI to be built in the first place or to make it a considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without advanced nanotechnology.

This is an open question, and I'm inquiring how exactly the uncertainties regarding these problems are accounted for in your probability estimates of the dangers posed by AI.

Exponentials are Kurzweil's thing. They aren't dangerous.

What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how soon after we get AGI will we see the rise of superhuman AI? The means by which a quick transcendence might happen are incidental to my question.

Where are your probability estimates that account for these uncertainties? Where are the variables and references that allow you to make any kind of estimate balancing the risks of a hard rapture against a somewhat controllable development?

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility.

You misinterpreted my question. What I meant by asking whether it is even worth the effort is, as exemplified in my link, the question of why to choose the future over the present. That is: "What do we actually do all day, if things turn out well?", "How much fun is there in the universe?", "Will we ever run out of fun?".

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense.

When I said that I already cannot follow the chain of reasoning depicted on this site, I didn't mean that I was unable to because of intelligence or education. I believe I am intelligent enough, and I am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.

Take the case of evolution: there you are more likely to be able to follow the chain of subsequent conclusions. With evolution, the evidence isn't far away; it isn't buried beneath 14 years of ideas built on some hypothesis. In the case of the SIAI, it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

What if someone came along making coherent arguments about some existential risk, about how some sort of particle collider might destroy the universe? I would ask what the experts think who are not associated with the person making the claims. What would you think if he simply said, "Do you have better data than me?" Or, "I have a bunch of good arguments"?

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I'm not sure what you are trying to say here. What I said was simply that if you claim that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask you how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.

...realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

If your antiprediction is not as well informed as the original prediction, how does it not merely weaken the original prediction rather than overthrow it to the extent on which the SIAI is basing its risk estimates?

Comment author: Rain 15 August 2010 01:43:40AM *  6 points [-]

I think you just want a brochure. We keep telling you to read archived articles explaining many of the positions, and you only read the comment where we gave the pointers, pretending that's all that's contained in our answers. It'd be more like him saying, "I have a bunch of good arguments right over there," and then you ignoring the second half of the sentence.

Comment author: XiXiDu 15 August 2010 09:11:49AM *  3 points [-]

I'm not asking for arguments. I know them. I donate. I'm asking for more now. I'm using the same kind of counter-argumentation that academics would use against your arguments, which I've encountered myself a few times while trying to convince them to take a look at the inscrutable archives of posts and comments that is LW. What do they say? "I skimmed over it, but there were no references besides some sound argumentation, an internal logic." "You make strong claims; mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for."

Comment author: wedrifid 15 August 2010 10:08:45AM 3 points [-]

I'm not asking for arguments. I know them.

Pardon my bluntness, but I don't believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments then a not insignificant proportion of your discussion here would amount to mere logical rudeness.

For example if you already understood the arguments for, or basic explanation of why 'putting all your eggs in one basket' is often the rational thing to do despite intuitions to the contrary then why on earth would you act like you didn't?

Comment author: XiXiDu 15 August 2010 10:46:57AM *  3 points [-]

Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out to all hundred babies but feed the strongest 10. Otherwise you'd end up with a hundred dead babies, in which case you might as well have eaten the food yourself rather than wasting it like that. It's obvious; I don't see how someone wouldn't get this.

I used that idiom to illustrate that, given my preferences and the current state of evidence, I might as well eat all the food myself rather than waste it on something I don't care to save, or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real.

What I asked was: are the babies real babies that need food, and is the expected utility payoff of feeding them higher than that of eating the food myself right now?

I'm starting to doubt that anyone actually read my OP...

Comment author: wedrifid 15 August 2010 03:52:19AM *  7 points [-]

Another question related to the SIAI, regarding advanced nanotechnology, is that if without advanced nanotechnology superhuman AI is at all possible.

Um... yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar.

I'm not sure what you are trying to say here. What I said was simply that if you say that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask you for how you came up with these estimations. I'll ask you to provide more than a consistent internal logic but some evidence-based prior.

Evidence-based? By which you seem to mean 'some sort of experiment'? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to 'reference to historical experimental outcomes'. You actually will need to look at 'consistent internal logic'; just make sure the consistent internal logic is well grounded in known physics.

What if someone came along making coherent arguments about some existential risk about how some sort of particle collider might destroy the universe? I would ask what the experts think who are not associated with the person who makes the claims. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?

And that, well, that is actually a reasonable point. You have been given some links (regarding human behavior) that are good answers to the question, but it is nevertheless non-trivial. Unfortunately, now you are actually going to have to do the work and read them.

Comment author: wedrifid 15 August 2010 03:50:41AM 3 points [-]

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who's already convinced, preferably of someone within the SIAI?

That is part of what I call transparency and a foundational and reproducible corroboration of one's first principles.

Leave aside SIAI-specific claims here. The point Eliezer was making was about 'all your eggs in one basket' claims in general. In situations like this (where your contribution doesn't drastically change the payoff at the margin, etc.), putting all your eggs in the best basket is the right thing to do.

You can understand that insight completely independently of your position on existential risk mitigation.

Comment author: Nick_Tarleton 14 August 2010 06:10:04PM 3 points [-]

Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, that might directly support the claims of existential risks posed by AI, is still missing.

Er, there's a post by that title.

Comment author: XiXiDu 14 August 2010 07:48:25PM *  3 points [-]

...and "FOOM" means way the hell smarter than anything else around...

Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.

Not, "ooh, it's a little Einstein but it doesn't have any robot hands, how cute".

Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions), but how is it going to make use of the things it orders?

Optimizing yourself is a special case, but it's one we're about to spend a lot of time talking about.

I believe that self-optimization is prone to be very limited. Changing anything substantial might lead Gandhi to swallow the pill that will make him want to hurt people, so to say.

...humans developed the idea of science, and then applied the idea of science...

Sound argumentation, but it gives no justification for extrapolating it to the shaky idea of a superhuman intellect coming up with something better than science and then applying that in turn to come up...

In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it's often possible to distinguish cognitive algorithms and cognitive content.

All those ideas about the possible advantages of being an entity that can reflect upon itself, to the extent of being able to pinpoint its own shortcomings, are again highly speculative. This could just as well be a disadvantage.

Much of the rest is about the plateau argument: once you've got fireworks, you can go to the moon. Well yes, I've been aware of that argument. But it's weak; the claim that there are many hidden mysteries about reality that we have completely missed is itself highly speculative. I think even EY admits that whatever happens, quantum mechanics will be a part of it. Is the AI going to invent FTL travel? I doubt it, and even that speculation already rests on the assumption that superhuman intelligence, not just faster intelligence, is possible.

Insights are items of knowledge that tremendously decrease the cost of solving a wide range of problems.

Like the discovery that P ≠ NP? Oh wait, that would be limiting. This argument runs in both directions.

If you go to a sufficiently sophisticated AI - more sophisticated than any that currently exists...

Assumption.

But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.

Nice idea, but recursion does not imply performance improvement.

You can't draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems.

How can he then make any assumptions, given this insight, about the possibility of improving them recursively to an extent that empowers an AI to transcend into superhuman realms?

Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.

Did he just attribute intention to natural selection?

Comment author: gwern 14 August 2010 09:02:08PM 15 points [-]

Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.

What would you accept as evidence?

Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with high-dimensional data?

Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?

Would you accept a chess program which could crush any human chess player who ever lived? Kasparov at ELO 2851, Rybka at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now even further beyond Kasparov at his peak than Kasparov at his peak was beyond a new grandmaster. And it's not like Rybka or the other chess AIs will weaken with age.

Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?
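
For perspective, the standard Elo expected-score formula converts these rating gaps into (draw-inclusive) expected scores; the ratings are the ones quoted above, and the comparison is purely illustrative:

```python
# Standard Elo expected score for player A against player B:
# E_A = 1 / (1 + 10^((R_B - R_A) / 400))
def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

kasparov, rybka, new_gm = 2851, 3265, 2500

# Rybka's 414-point edge over peak Kasparov exceeds Kasparov's
# 351-point edge over a freshly minted grandmaster.
print(round(expected_score(rybka, kasparov), 3))  # -> 0.916
print(round(expected_score(kasparov, new_gm), 3))  # -> 0.883
```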

Comment author: soreff 15 August 2010 02:24:31AM *  5 points [-]

I think it at least possible that much-smarter-than-human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturates quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a constant probability distribution: the number of records broken goes as the log of the number of attempts. If some of the tasks an AGI must solve are like this, then it might not do much better than humans - not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
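
The toy example is easy to check numerically: for independent draws from a fixed distribution, the expected number of records among n attempts is the n-th harmonic number, which grows like log n. A quick simulation (mine, purely illustrative):

```python
import random

def count_records(n, rng):
    # Count how many draws strictly beat every previous draw.
    best, records = float("-inf"), 0
    for _ in range(n):
        x = rng.random()
        if x > best:
            best, records = x, records + 1
    return records

rng = random.Random(0)
n, trials = 10_000, 200
avg = sum(count_records(n, rng) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))  # ~ ln(n) + 0.577
print(round(avg, 1), round(harmonic, 1))  # both near 9.8 for n = 10,000
```

Ten thousand attempts buy you fewer than ten records on average; squaring the effort adds only a handful more.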

One (admittedly weak) piece of evidence, a real example of saturation, is an optimizing compiler being used to recompile itself. It is a recursive optimizing system, and, if there is a knob to allow more effort to be used on the optimization, the speed-up from the first pass can be used to allow a bit more effort to be applied to a second pass for the same CPU time. Nonetheless, the results for this specific recursion are not FOOM.
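
That saturation can be modeled as a fixed point: if each self-recompilation pass closes a fixed fraction of the remaining optimization headroom, the speed-ups form a convergent series rather than a FOOM. A sketch (the numbers are invented, not measurements of any real compiler):

```python
# Toy model of a self-optimizing compiler: each pass closes a fixed
# fraction of the gap to some maximum attainable speedup.
def recompile(speedup, max_speedup=3.0, fraction=0.5):
    return speedup + fraction * (max_speedup - speedup)

speedup, history = 1.0, []
for _ in range(10):
    speedup = recompile(speedup)
    history.append(round(speedup, 3))

print(history)  # monotonically approaches 3.0: recursion without explosion
```

Whether real intelligence improvement looks like this, or like a process whose headroom itself grows with each pass, is exactly the disputed question.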

The evidence in the other direction is basically existence proofs from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein, working on improving its own intelligence - it isn't clear if it could make a little progress or a great deal.

Comment author: gwern 15 August 2010 08:18:05AM 3 points [-]

but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.

This goes for your compilers as well, doesn't it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation+partial evaluation+supercompilation), but a compiler is still expected to produce isomorphic code, and that puts hard information-theoretic bounds on output.

Comment author: Sniffnoy 15 August 2010 05:06:13AM 2 points [-]

Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?

Can you provide details / link on this?

Comment author: gwern 15 August 2010 07:57:05AM 4 points [-]

I should've known someone would ask for the cite rather than just do a little googling. Oh well. Turns out it wasn't a radio, but a voice-recognition circuit. From http://www.talkorigins.org/faqs/genalg/genalg.html#examples :

"This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997)."

Comment author: CarlShulman 15 August 2010 09:32:28AM *  6 points [-]

Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions),

Any specific scenario is going to have burdensome details, but that's what you get if you ask for specific scenarios rather than general pressures, unless one spends a lot of time going through detailed possibilities and vulnerabilities. With respect to the specific example, regular human criminals routinely swindle or earn money anonymously online, and hack into and control millions of computers in botnets. Cloud computing resources can be rented with ill-gotten money.

but how is it going to make use of the things it orders?

In the unlikely event of a powerful human-indifferent AI appearing in the present day, a smartphone held by a human could provide sensors and communication to use humans for manipulators (as computer programs direct the movements of some warehouse workers today). Humans can be paid, blackmailed, deceived (intelligence agencies regularly do these things) to perform some tasks. An AI that leverages initial capabilities could jury-rig a computer-controlled method of coercion [e.g. a cheap robot arm holding a gun, a tampered-with electronic drug-dispensing implant, etc]. And as time goes by and the cumulative probability of advanced AI becomes larger, increasing quantities of robotic vehicles and devices will be available.

Comment author: MichaelVassar 29 December 2010 06:58:40PM 4 points [-]

Have you tried asking yourself non-rhetorically what an AI could do without MNT? That doesn't seem to me to be a very great inferential distance at all.

Comment author: Wei_Dai 12 August 2010 05:37:16PM 10 points [-]

I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)

I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans' natural competitiveness and the possibilities inherent in technology. And yet ... we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:

He goes on to talk about intelligence amplification, and then:

Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendence.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety ... well, some of the suggestions are a little scary on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today's world, one where losers take on the winners' tricks and are co-opted into the winners' enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].

Comment author: XiXiDu 13 August 2010 08:37:04AM 2 points [-]

As I wrote in another comment, Eliezer Yudkowsky hasn't come up with anything unique. And there is no argument in saying that he's simply the smartest fellow around, since clearly other people have come up with the same ideas before him. And that was my question: why are they not signaling their support for the SIAI? Or, in case they don't know about the SIAI, why are they not using all their resources and publicity to try to stop the otherwise inevitable apocalypse?

It looks like there might be arguments against the kind of fearmongering that can be found within this community. So why is nobody inquiring about the reasons for the great silence among those who are aware of a possible singularity but nevertheless keep quiet? Maybe they know something you don't, or are you people so sure of your phenomenal intelligence?

Comment author: CarlShulman 13 August 2010 11:47:13AM *  8 points [-]

David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.

He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.

Comment author: XiXiDu 13 August 2010 12:22:59PM *  4 points [-]

Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn't answer the primary question of my post. How do you people arrive at the estimations you state? Where can I find the details of how you arrived at your conclusions about the likelihood of those events?

If all this was supposed to be mere philosophy, I wouldn't inquire about it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky and calls for action. Is it reasonable to follow given the current state of evidence?

Comment author: CarlShulman 13 August 2010 02:14:35PM *  7 points [-]

But the SIAI is asking for the better part of your income and resources.

If you are a hard-core consequentialist altruist who doesn't balance against other less impartial desires you'll wind up doing that eventually for something. Peter Singer's "Famine, Affluence, and Morality" is decades old, and there's still a lot of suffering to relieve. Not to mention the Nuclear Threat Initiative, or funding research into DNA vaccines, or political lobbying, etc. The question of how much you're willing to sacrifice in exchange for helping various numbers of people or influencing extinction risks in various ways is separate from data about the various options. No one is forcing you to reduce existential risk (except insofar as tax dollars go to doing so), certainly not to donate.

I'll have more to say on substance tomorrow, but it's getting pretty late. My tl;dr take would be that with pretty conservative estimates on total AI risk, combined with the lack of short term motives to address it (the threat of near-term and moderate scale bioterrorism drives research into defenses, not the fear of extinction-level engineered plagues; asteroid defense is more motivated by the threat of civilization or country-wreckers than the less common extinction-level events; nuclear risk reduction was really strong only in the face of the Soviets, and today the focus is still more on nuclear terrorism, proliferation, and small scale wars; climate change benefits from visibly already happening and a social movement built over decades in tandem with the existing environmentalist movement), there are still low-hanging fruit to be plucked. [That parenthetical aside somewhat disrupted the tl;dr billing, oh well...] When we get to the point where a sizable contingent of skilled folk in academia and elsewhere have gotten well into those low-hanging fruit, and key decision-makers in the relevant places are likely to have access to them in the event of surprisingly quick progress, that calculus will change.

Comment author: Rain 12 August 2010 08:37:19PM *  39 points [-]

(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)

Although this may not answer your questions, here are my reasons for supporting SIAI:

  • I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.

  • It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"

  • No one else cares about the big picture. (Nick Bostrom and the FHI excepted; if they came out against SIAI, I might change my view.) Every other organization seems to focus on the 'generic now', leaving unintended consequences to crush their efforts in the long run, or avoiding the true horrors of the world (pain, age, poverty) due to not even realizing they're solvable. The ability to predict the future, through knowledge, understanding, and computation power, is the key attribute toward making that future a truly good place. The utility calculations are staggeringly in support of the longest view, such as that provided by SIAI.

  • It's the simplest of the 'good outcome' possibilities. Everything else seems to depend on magical hand-waving, or an overly simplistic view of how the world works or what a single advance would mean, rather than the way it interacts with all the diverse improvements that happen alongside it and how real humans would react to them. Friendly AI provides 'intelligence-waving' that seems far more likely to work in a coherent fashion.

  • I don't see anything else to give me hope. What else solves all potential problems at the same time, rather than leaving every advancement to be destroyed by that one failure mode you didn't think of? Of course! Something that can think of those failure modes for you, and avoid them before you even knew they existed.

  • It's cheap and easy to do so on a meaningful scale. It's very easy to make up a large percentage of their budget; I personally provided more than 3 percent of their annual operating costs for this year, and I'm only upper middle class. They also have an extremely low barrier to entry (any amount of US dollars and a stamp, or a credit card, or PayPal).

  • They're thinking about the same things I am. They're providing a tribe like LessWrong, and they're pushing, trying to expand human knowledge in the ways I think are most important, such as existential risk, humanity's future, rationality, effective and realistic reversal of pain and suffering, etc.

  • I don't think we have much time. The best predictions aren't very good, but human power has increased to the point where there's a true threat we'll destroy ourselves within the next 100 years through means nuclear, biological, nano, AI, wireheading, or nerf the world. Sitting on money and hoping for a better deal, or donating to institutions now that will compound into advancements generations in the future seems like too little, too late.

I still put more money into savings accounts than I give to SIAI. I'm investing in myself and my own knowledge more than the purported future of humanity as they envision. I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

Comment author: XiXiDu 13 August 2010 08:25:58AM *  3 points [-]

I want what they're selling.

Yeah, that's why I'm donating as well.

It's the most logical next step.

Sure, but why the SIAI?

No one else cares about the big picture.

I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point.

It's the simplest of the 'good outcome' possibilities.

So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.

I don't see anything else to give me hope.

I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored.

It's cheap and easy to do so on a meaningful scale.

I want to believe.

They're thinking about the same things I am.

Beware of those who agree with you?

I don't think we have much time.

Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats.

I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

I can accept that. But I'm unable to follow the process of elimination yet.

Comment author: Rain 13 August 2010 12:13:11PM *  6 points [-]

It's the most logical next step.

Sure, but why the SIAI?

Who else is working directly on creating smarter-than-human intelligence with non-commercial goals? And if there are any, are they self-reflective enough to recognize its potential failure modes?

No one else cares about the big picture.

I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point.

I used something I developed which I call Point-In-Time Utility to guide my thinking on this matter. It basically boils down to, 'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires.

It's the simplest of the 'good outcome' possibilities.

So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.

I don't think it has to be an explosion at all, just smarter-than-human. I'm willing to take things one step at a time, if necessary. Though it seems unlikely we could build a smarter-than-human intelligence without understanding what intelligence is, and thus knowing where to tweak, if even retroactively. That said, I consider intelligence tweaking itself to be a shaky idea, though I view alternatives as failure modes.

I don't see anything else to give me hope.

I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored.

I think you overestimate my estimation of the friendliness of friendly AI. Note that at the end of my post I said it is very likely SIAI will fail. My hope total is fairly small. Roko deleted his own posts, and I was able to read the article Eliezer deleted since it was still in my RSS feed. It didn't change my thinking on the matter; I'd heard arguments like it before.

They're thinking about the same things I am.

Beware of those who agree with you?

Hi. I'm human. At least, last I checked. I didn't say all my reasons were purely rational. This one is dangerous (reinforcement), but I do a lot of reading of opposing opinions as well, and there's still a lot I disagree with regarding SIAI's positions.

I don't think we have much time.

Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats.

The latter is what I'm worried about. I see all of these threats as being developed simultaneously, in a race to see which one passes the threshold into reality first. I'm hoping that Friendly AI beats them.

I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

I can accept that. But I'm unable to follow the process of elimination yet.

I haven't seen you name any other organization you're donating to or who might compete with SIAI. Aside from the Future of Humanity Institute or the Lifeboat Foundation, both of which seem more like theoretical study groups than action-takers, people just don't seem to be working on these problems. Even the Methuselah Foundation is working on a very narrow portion which, although very useful and awesome if it succeeds, doesn't guard against the threats we're facing.

Comment author: XiXiDu 13 August 2010 01:02:22PM *  3 points [-]

Who else is working directly on creating smarter-than-human intelligence with non-commercial goals?

That there are no others does not mean we shouldn't be keen to create them, to establish competition. Or that we should do it at all at this point.

...'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires.

I'm not sure about this.

I don't think it has to be an explosion at all, just smarter-than-human.

I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.

I think you overestimate my estimation of the friendliness of friendly AI.

You are right, never mind what I said.

I see all of these threats as being developed simultaneously...

Yeah and how is their combined probability less worrying than that of AI? That doesn't speak against the effectiveness of donating all to the SIAI of course. Creating your own God to fix the problems the imagined one can't is indeed a promising and appealing idea, given it is feasible.

I haven't seen you name any other organization you're donating to or who might compete with SIAI.

I'm mainly concerned about my own well-being. If I was threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for the people who are merely concerned about the well-being of all beings.

As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.

Comment author: Rain 13 August 2010 01:20:22PM *  6 points [-]

That there are no others does not mean we shouldn't be keen to create them, to establish competition.

Absolutely agreed. Though I'm barely motivated enough to click on a PayPal link, so there isn't much hope of my contributing to that effort. And I'd hope they'd be created in such a way as to expand total funding, rather than cannibalizing SIAI's efforts.

I'm not sure about this.

Certainly there are other ways to look at value / utility / whatever and how to measure it. That's why I mentioned I had a particular theory I was applying. I wouldn't expect you to come to the same conclusions, since I haven't fully outlined how it works. Sorry.

I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.

I'm not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that's the theory. There may be no way to save us.

Yeah and how is their combined probability less worrying than that of AI?

AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I've read, so does Eliezer, which is why he's working on that problem instead of, say, nanotech.

I'm mainly concerned about my own well-being.

I've mentioned before that I'm somewhat depressed, so I consider my philanthropy to be a good portion 'lack of caring about self' more than 'being concerned about the well-being of all beings'. Again, a subtractive process.

As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.

Thanks! I think that's probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technical minded anti-Summit, without all the useless politics of the IEET and the like.

Comment author: multifoliaterose 12 August 2010 11:42:01PM 2 points [-]

Good, informative comment.

Comment author: Johnicholas 12 August 2010 05:19:48PM 5 points [-]

This is an attempt (against my preference) to defend SIAI's reasoning.

Let's characterize the predictions of the future into two broad groups: 1. business as usual, or steady-state. 2. aware of various alarmingly exponential trends broadly summarized as "Moore's law". Let's subdivide the second category into two broad groups: 1. attempting to take advantage of the trends in roughly a (kin-) selfish manner 2. attempting to behave extremely unselfishly.

If you study how the world works, the lack of steady-state-ness is everywhere. We cannot use fossil fuels or arable land indefinitely at the current rates. "Business as usual" depends crucially on progress in order to continue! We're investing heavily in medical research, and there's no reason to expect that to stop. Self-replicating molecular-scale entities already exist, and there is every reason to expect that we will understand how to build them better than we currently do.

Supposing that the above paragraph has convinced the reader that the lack of steady-state-ness is fairly obvious, and given the reader's knowledge of human nature, how many people do you expect would be trying to behave in an extremely unselfish manner?

Your post seems to expect that if various luminaries believed that incomprehensibly-sophisticated computing engines and dangerous self-replicating atomic-scale entities were likely in our future, then they would behave extremely unselfishly - is that reasonable? Supposing that almost everyone was aware of this probable future, how would they take advantage of it in a (kin-)selfish manner as best they can? I think the hypothesized world would look very much like this one.

Comment author: Mitchell_Porter 13 August 2010 11:09:51AM 17 points [-]

Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.

Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, rather than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, though even there - treating science as your authority - there are endless ways to go wrong.

A lot of the discourse here and in similar places is science fiction minus plot, characters, and other story-telling apparatus. Just the ideas - often the utopia of the hard-SF fan, bored by the human interactions and wanting to get on with the transcendent stuff. With transhumanist and singularity culture, this utopia has arrived, because you can talk all day about these radical futurist ideas without being tied to a particular author or oeuvre. The ideas have leapt from the page and invaded our brains, where they live even during the dull hours of daylight life. Hallelujah!

So, before you evaluate SIAI and its significance, there are a few more ideas that I would like you to drive from your brain: The many-worlds metaphysics. The idea of trillion-year lifespans. The idea that the future of the whole observable universe depends on the outcome of Earth's experiment with artificial intelligence. These are a few of the science-fiction or science-speculation ideas which have become a fixture in the local discourse.

I'm giving you this lecture because so many of your doubts about LW's favorite crypto-SF ideas masquerading as reality, are expressed in terms of ... what your favorite SF writers and futurist gurus think! But those people all have the same problem: they are trying to navigate issues where there simply aren't authorities yet. Stross and Egan have exactly the same syndrome affecting everyone here who writes about mind copies, superintelligence, alien utility functions, and so on. They live in two worlds, the boring everyday world and the world of their imagination. The fact that they produce descriptions of whole fictional worlds in order to communicate their ideas, rather than little Internet essays, and the fact that they earn a living doing this... I'm not sure if that means they have the syndrome more under control, or less under control, compared to the average LW contributor.

Probably you already know this, probably everyone here knows it. But it needs to be said, however clumsily: there is an enormous amount of guessing going on here, and it's not always recognized as such, and furthermore, there isn't much help we can get from established authorities, because we really are on new terrain. This is a time of firsts for the human species, both conceptually and materially.

Now I think I can start to get to the point. Suppose we entertain the idea of a future where none of these scenarios involving very big numbers (lifespan, future individuals, galaxies colonized, amount of good or evil accomplished) apply, and where none of these exciting info-metaphysical ontologies turns out to be correct. A future which mostly remains limited in the way that all human history to date has been limited, limited in the ways which inspire such angst and such promethean determination to change things, or determination to survive until they change, among people who have caught the singularity fever. A future where everyone is still going to die, where the human race and its successors only last a few thousand years, not millions or billions of them. If that is the future, could SIAI still matter?

My answer is yes, because artificial intelligence still matters in such a future. For the sake of argument, I may have just poured cold water on a lot of popular ideas of transcendence, but to go further and say that only natural life and natural intelligence will ever exist really would be obtuse. If we do accept that "human-level" artificial intelligence is possible and is going to happen, then it is a matter at least as consequential as the possibility of genocide or total war. Ignoring, again for the sake of a limited argument, all the ideas about planet-sized AIs and superintelligence, and it's still easy to see that AI which can out-think human beings and which has no interest in their survival ought to be possible. So even in this humbler futurology, AI is still an extinction risk.

The solution to the problem of unfriendly AI most associated with SIAI - producing the coherent extrapolated volition of the human race - is really a solution tailored to the idea of a single super-AI which undergoes a "hard takeoff", a rapid advancement in power. But SIAI is about a lot more than researching, promoting, and implementing CEV. There's really no organization like it in the whole sphere of "robo-ethics" and "ethical AI". The connection that has been made between "friendliness" and the (still scientifically unknown) complexities of the human decision-making process is a golden insight that has already justified SIAI's existence and funding many times over. And of course SIAI organizes the summits, and fosters a culture of discussion, both in real life and online (right here), which is a lot broader than SIAI's particular prescriptions.

So despite the excesses and enthusiasms of SIAI's advocates, supporters, and leading personalities, it really is the best thing we have going when it comes to the problem of unfriendly AI. Whether and how you personally should be involved with its work - only you can make that decision. (Even constructive criticism is a way of helping.) But SIAI is definitely needed.

Comment author: DSimon 14 August 2010 01:12:52AM 5 points [-]

Ignoring, again for the sake of a limited argument, all the ideas about planet-sized AIs and superintelligence, and it's still easy to see that AI which can out-think human beings and which has no interest in their survival ought to be possible. So even in this humbler futurology, AI is still an extinction risk.

Voted up for this argument. I think the SIAI would be well served, in terms of accruing donations, support, etc., by emphasizing this point more.

Space organizations might similarly argue: "You might think our wilder ideas are full of it, but even if we can't ever colonize Mars, you'll still be getting your satellite communications network."

Comment author: XiXiDu 30 October 2010 09:09:14AM *  4 points [-]

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) (Thanks Kevin)

SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.

Of course it's rarely clarified what "provably" really means. A mathematical proof can only be applied to the real world in the context of some assumptions, so maybe "provably non-dangerous AGI" means "an AGI whose safety is implied by mathematical arguments together with assumptions that are believed reasonable by some responsible party"? (where the responsible party is perhaps "the overwhelming majority of scientists" … or SIAI itself?).

Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it.

Comment author: Kaj_Sotala 13 August 2010 04:00:54PM *  9 points [-]

Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

This claim can be broken into two separate parts:

  1. Will we have human-level AI?
  2. Once we have human-level AI, will it develop to become superhuman AI?

For 1: looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation around 2030-2050 or so, at least assuming that it gets enough funding and that Moore's law keeps up. Even if there isn't much of an actual interest in whole brain emulations, improving scanning tools are likely to revolutionize neuroscience. Of course, respected neuroscientists are already talking about reverse-engineering of the brain as being within reach. If we are successful at reverse engineering the brain, then AI is a natural result.

As for 2, as Eliezer mentioned, this is pretty much an antiprediction. Human minds are a particular type of architecture, running on a particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn't be drastically improved upon. We already know that we're insanely biased, to the point of people suffering death or collapses of national economies as a result. Computing power is going way up: on current trends, we could, in say 20 years, have computers that only took three seconds to think 25 years' worth of human thoughts.
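To make the "three seconds for 25 years of thought" claim concrete, here is a toy back-of-the-envelope calculation. The numbers, including the Moore's-law-style doubling period, are illustrative assumptions, not estimates from the comment above:

```python
import math

# Illustrative only: the arithmetic behind "three seconds to think
# 25 years' worth of human thoughts".
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def speedup_needed(subjective_years, wall_clock_seconds):
    """Ratio of subjective thinking time to elapsed real time."""
    return subjective_years * SECONDS_PER_YEAR / wall_clock_seconds

def years_of_growth(speedup, doubling_period_years=1.5):
    """Years of exponential hardware growth needed to reach `speedup`,
    assuming performance doubles every `doubling_period_years`
    (a hypothetical Moore's-law-style assumption)."""
    return math.log2(speedup) * doubling_period_years

s = speedup_needed(25, 3)   # roughly 2.6e8x faster than real time
t = years_of_growth(s)      # roughly 42 years at 18-month doublings
```

On these made-up assumptions the required speedup is about eight orders of magnitude, which sustained exponential hardware growth would deliver in a few decades; the point is only that the claim is a straight extrapolation, not magic.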

Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Molecular nanotechnology is not needed. As our society grows more and more dependent on the Internet, plain old-fashioned hacking and social engineering probably becomes more than sufficient to take over the world. Lethal micro-organisms can AFAIK be manufactured via the Internet even today.

The likelihood of exponential growth versus a slow development over many centuries.

Hardware growth alone would be enough to ensure that we'll be unable to keep up with the computers. Even if Moore's law ceased to be valid and we were stuck with a certain level of tech, there are many ways of gaining an advantage.

That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

Eliezer Yudkowsky is hardly the only person involved in SIAI's leadership. Michael Vassar is the current president, and e.g. the Visiting Fellows program is providing a constant influx of fresh views on the topics involved.

As others have pointed out, SIAI is currently the only organization around that's really taking care of this. It is not an inconceivable suggestion that another organization could do better, but SIAI's currently starting to reach the critical mass necessary to really have an impact. E.g. David Chalmers joining in on the discussion, and the previously mentioned Visiting Fellow program motivating various people to start their own projects. This year's ECAP conference will be featuring five conference papers from various SIAI-affiliated folks, and so on.

Any competing organization, especially if it was competing for the same donor base and funds, should have a well-argued case for what it can do that SIAI can't or won't. While SIAI's starting to get big, I don't think that its donor base is large enough to effectively support two different organizations working for the same goal. To do good, any other group would need to draw its primary funding from some other source, like the Future of Humanity Institute does.

Comment author: JoshuaZ 13 August 2010 04:19:04PM 3 points [-]

Lethal micro-organisms can AFAIK be manufactured via the Internet even today.

Do you have a citation for this? You can get certain biochemical compounds synthesized for you (there's a fair bit of a market for DNA synthesis) but that's pretty far from synthesizing microorganisms.

Comment author: Kaj_Sotala 13 August 2010 05:34:36PM *  3 points [-]

Right, sorry. I believe the claim (which I heard from a biologist) was that you can get DNA synthesized for you, and in principle an AI or anyone who knew enough could use those services to create their own viruses or bacteria (though no human yet has that required knowledge). I'll e-mail the person I think I heard it from and ask for a clarification.

Comment author: XiXiDu 19 August 2010 02:33:49PM 6 points [-]

Dawkins agrees with EY

Richard Dawkins states that he is frightened by the prospect of superhuman AI and even mentions recursion and intelligence explosion.

Comment author: JGWeissman 20 August 2010 06:18:40AM 13 points [-]

I was disappointed watching the video relative to the expectations I had from your description.

Dawkins talked about recursion as in a function calling itself, as an example of the sort of the thing that may be the final innovation that makes AI work, not an intelligence explosion as a result of recursive self-improvement.

Comment author: JoshuaZ 12 August 2010 08:12:32PM 6 points [-]

The Charlie Stross example seems to be less than ideal. Much of what Stross has written touches upon or deals intensely with issues connected to runaway AI. For example, the central premise of "Singularity Sky" involves an AI in the mid 20th century going from stuck in a lab to godlike in possibly a few seconds. His short story "Antibodies" focuses on the idea that very bad fast burns occur very frequently. He also has at least one (unpublished) story the central premise of which is that Von Neumann and Turing proved that P=NP and that the entire cold war was actually a way of keeping lots of weapons online ready to nuke any rogue AIs.

Note also that you mention Greg Egan, who has also written fiction in which rogue AIs and bad nanotech make things very unpleasant (see for example Crystal Nights).

As to the other people you mention and why they aren't very worried about the possibilities that Eliezer takes seriously: at least one person on your list (Kurzweil) is an incredible optimist and not much of a rationalist, and so it seems extremely unlikely that he would ever become convinced that any risk situation was of high likelihood unless the evidence for the risk was close to overwhelming.

MWI - I've read this sequence, and it seems that Eliezer makes one of the strongest cases for Many-Worlds that I've seen. However, I know that there are a lot of people who have thought about this issue, have much more physics background, and have not reached this conclusion. I'm therefore extremely uncertain about MWI. So what should one do if one doesn't know much about this? In this case, the answer is pretty easy, since MWI doesn't alter actual behavior much (unless you are intending to engage in quantum suicide or the like). So figuring out whether Eliezer is correct about MWI should not be a high priority, except in so far as it provides a possible data point for deciding if Eliezer is correct about other things.

Advanced real-world molecular nanotechnology - Of the points you bring up this one seems to me to be the most unlikely to be actually correct. There are a lot of technical barriers to grey goo, and most of the people actually working with nanotech don't seem to see that sort of situation as very likely. But it also seems clear that that doesn't mean that there aren't many other possible things molecular nanotech could do that would make things very unpleasant for us. Here, Eliezer is by far not the only person worried about this. See for example this article, which is a few years out of date but does show that there's serious worry in this regard among academics and governments.

Runaway AI/AI going FOOM - This is potentially the most interesting of your points simply because it is so much more unique to the SIAI and Eliezer. So what can one do to figure out if this is correct? One thing to do is to examine the arguments and claims being made in detail. And see what other experts think on the subject. In this context, most AI people seem to consider this to be an unlikely problem, so maybe look at what they have to say? Note also that Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago but I can't find the link right now. If someone else can track it down I'd appreciate it). One thing to note is that estimates for nanotech can impact the chance of an AI going FOOM substantially. If cheap, easy nanotech exists, then an AI may be able to improve its hardware at a very fast rate. If however, such nanotech does not exist, then an AI will be limited to self-improvement primarily by improving software, which might be much more limited. See this subthread, where I bring up some of the possible barriers to software improvement and become by the end of it substantially more convinced by cousin_it that the barriers to escalating software improvement may be small.

What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths or maybe somehow teach but not use their own methods of reasoning and decision making?

Note that even practiced Bayesians are far from perfect rationalists. If one hasn't thought about an issue or even considered that something is possible, there's not much one can do about it. Moreover, a fair number of people who self-identify as Bayesian rationalists aren't very rational, and the set of people who do self-identify as such is pretty small.

Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.

Given your data set this seems reasonable to me. Frankly, if I were to give money to or support the SIAI, I would do so primarily because I think that the Singularity Summits are clearly helpful in getting lots of smart people together, and that this is true even if one assigns a low probability to any Singularity-type event occurring in the next 50 years.

Comment author: utilitymonster 13 August 2010 01:38:45PM 2 points [-]

Runaway AI/AI going FOOM - This is potentially the most interesting of your points simply because it is so much more unique to the SIAI and Eliezer. So what can one do to figure out if this is correct? One thing to do is to examine the arguments and claims being made in detail. And see what other experts think on the subject. In this context, most AI people seem to consider this to be an unlikely problem, so maybe look at what they have to say? Note also that Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago but I can't find the link right now. If someone else can track it down I'd appreciate it).

FOOM Debate

Comment author: utilitymonster 12 August 2010 04:02:35PM *  6 points [-]

I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.

  1. How much of your energy are you willing to spend on benefiting others, if the expected benefits to others will be very great? (It needn't be great for you to support SIAI.)
  2. Are you willing to pursue a diversified altruistic strategy if it saves fewer expected lives (it almost always will for donors giving less than $1 million or so)?
  3. Do you think mitigating x-risk is more important than giving to down-to-earth charities (GiveWell style)? (This will largely turn on how you feel about supporting causes with key probabilities that are tough to estimate, and how you feel about low-probability, high expected utility prospects.)
  4. Do you think that trying to negotiate a positive singularity is the best way to mitigate x-risk?
  5. Is any known organization likely to do better than SIAI in terms of negotiating a positive singularity (in terms of decreasing x-risk) on the margin?
  6. Are you likely to find an organization that beats SIAI in the future?

Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.

Comment author: orthonormal 12 August 2010 05:58:36PM *  7 points [-]

These are reasonable questions to ask. Here are my thoughts:

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.

  • The likelihood of exponential growth versus a slow development over many centuries.
  • That it is worth it to spend most on a future whose likelihood I cannot judge.

These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash. If it is possible soon, then it's a vital factor in existential risk. You'd have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.

For the other, this falls under the fuzzies and utilons calculation. Insofar as you want to feel confident that you're helping the world (and yes, any human altruist does want this), pick a charity certain to do good in the present. Insofar as you actually want to maximize your expected impact, you should weight charities by their uncertainty and their impact, multiply it out, and put all your eggs in the best basket (unless you've just doubled a charity's funds and made them less marginally efficient than the next one on your list, but that's rare).
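The "multiply it out" step can be sketched in a few lines. All the numbers below are made up purely for illustration; they are not estimates for any real charity:

```python
# Toy sketch of expected-impact reasoning: weight each charity's impact
# by its probability of success, then fund the best one.
charities = {
    "certain_good": {"p_success": 0.95, "impact_if_success": 100},
    "x_risk_org":   {"p_success": 0.01, "impact_if_success": 1_000_000},
}

def expected_impact(c):
    """Probability-weighted impact of donating to this charity."""
    return c["p_success"] * c["impact_if_success"]

# The expected-impact maximizer puts all its eggs in the best basket:
# here 0.01 * 1,000,000 = 10,000 beats 0.95 * 100 = 95.
best = max(charities, key=lambda name: expected_impact(charities[name]))
```

The point of the toy numbers is only that a low-probability cause can dominate once impact is weighed in; whether the real probabilities actually work out that way is exactly what the post is questioning.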

  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

Aside from any considerations in his favor (development of TDT, for one publicly visible example), this sounds too much like a price for joining - if you really take the risk of Unfriendly AI seriously, what else could you do about it? In fact, the more well-known SIAI gets in the AI community and the more people take it seriously, the more likely it is that it will (1) instill in other GAI researchers some necessary concern for goal systems and (2) give rise to competing Friendly AI projects which might improve on SIAI in any relevant respects. Unless you thought they were doing as much harm as good, it still seems optimal to fund SIAI now if you're concerned about self-improving AI.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out?

My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI rather than doing other things with their life. There's a very unsurprising selection bias here.

ETA: Reading the comments, I just found that XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate. I've downvoted this post, and I now feel kind of stupid for having written out this huge reply.

Comment author: Simulation_Brain 13 August 2010 07:56:24AM 5 points [-]

I think there are very good questions in here. Let me try to simplify the logic:

First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few that have thought about it at least a little) seem to acknowledge a real possibility of AI disaster (they don't make probability estimates).

Now to the logical argument itself:

a) We are probably at risk from the development of strong AI. b) The SIAI can probably do something about that.

The other points in the OP are not terribly relevant; Eliezer could be wrong about a great many things, but right about these.

This is not a castle in the sky.

Now to argue for each: There's no good reason to think AGI will NOT happen within the next century. Our brains produce AGI; why not artificial systems? Artificial systems didn't produce anything a century ago; even without a strong exponential, they're clearly getting somewhere.

There are lots of arguments for why AGI WILL happen soon; see Kurzweil among others. I personally give it 20-40 years, even allowing for our remarkable cognitive weaknesses.

Next, will it be dangerous? a) Something much smarter than us will do whatever it wants, and very thoroughly. (this doesn't require godlike AI, just smarter than us. Self-improving helps, too.) b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

Finally, it seems that SIAI might be able to do something about it. If not, they'll at least help raise awareness of the issue. And as someone pointed out, achieving FAI would have a nice side effect of preventing most other existential disasters.

While there is a chain of logic, each of the steps seems likely, so multiplying probabilities gives a significant estimate of disaster, justifying some resource expenditure to prevent it (especially if you want to be nice). (Although spending ALL your money or time on it probably isn't rational, since effort and money generally have sublinear payoffs toward happiness).
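The chain-of-logic calculation can be made explicit. The individual probabilities below are placeholders chosen only to illustrate the structure of the argument, not anyone's actual estimates:

```python
# Minimal sketch of multiplying probabilities along the chain of logic.
p_agi_this_century  = 0.8  # strong AI is developed at all
p_dangerous_default = 0.7  # an uncontrolled AGI's goals destroy us
p_mitigation_helps  = 0.3  # dedicated safety work reduces that risk

# Joint probability that there is a disaster which effort could prevent.
# Even with each step merely "likely", the product is far from negligible:
# 0.8 * 0.7 * 0.3 = 0.168.
p_preventable_disaster = (p_agi_this_century
                          * p_dangerous_default
                          * p_mitigation_helps)
```

Note the double-edged nature of this structure: the conclusion's probability shrinks with every conjunct added, so the argument is only as strong as its weakest and least-evidenced link, which is the original post's central worry.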

Hopefully this lays out the logic; now, which of the above do you NOT think is likely?

Comment author: utilitymonster 13 August 2010 01:30:55PM 5 points [-]

a) Something much smarter than us will do whatever it wants, and very thoroughly. (this doesn't require godlike AI, just smarter than us. Self-improving helps, too.) b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

I've heard a lot of variations on this theme. They all seem to assume that the AI will be a maximizer rather than a satisficer. I agree the AI could be a maximizer, but don't see that it must be. How much does this risk go away if we give the AI small ambitions?

Comment author: wedrifid 13 August 2010 06:46:17PM 7 points [-]

How much does this risk go away if we give the AI small ambitions?

Even small ambitions are risky. If I ask a potential superintelligence to do something easy but an obstacle gets in the way it will most likely obliterate that obstacle and do the 'simple thing'. Unless you are very careful that 'obstacle' could wind up being yourself or, if you are unlucky, your species. Maybe it just can't risk one of you pressing the off switch!

Comment author: soreff 01 September 2010 06:23:14PM 2 points [-]

Good point. The resources expended towards a "small" goal aren't directly bounded by the size of the goal. As you said, an obstacle can make the resources used go arbitrarily high. An alternative constraint would be on what the AI is allowed to use up in achieving the goal - "No more than 10 kilograms of matter, nor more than 10 megajoules of energy, nor any human lives, nor anything with a market value of more than $1000". This will have problems of its own, when the AI thinks up something to use up that we never anticipated (we have something of a similar problem with corporations - but at least they operate on human timescales).

Part of the safety of existing optimizers is that they can only use resources or perform actions that we've explicitly let them try using. An electronic CAD program may tweak transistor widths, but it isn't going to get creative and start trying to satisfy its goals by hacking into the controls of the manufacturing line and changing their settings. An AI with the option to send arbitrary messages to arbitrary places is quite another animal...

Comment author: Simulation_Brain 13 August 2010 07:27:19PM *  3 points [-]

Now this is an interesting thought. Even a satisficer with several goals but no upper bound on each will use all available matter on the mix of goals it's working towards. But a limited goal (make money for GiantCo, unless you reach one trillion, then stop) seems as though it would be less dangerous. I can't remember this coming up in Eliezer's CFAI document, but suspect it's in there with holes poked in its reliability.

Comment author: timtyler 13 August 2010 08:22:17PM *  2 points [-]

I discuss "small" ambitions in:

http://alife.co.uk/essays/stopping_superintelligence/

They seem safer to me too. This is one of the things people can do if they are especially paranoid about leaving the machine turned on - for some reason or another.

Comment author: kodos96 13 August 2010 08:47:35AM 3 points [-]

The only part of the chain of logic that I don't fully grok is the "FOOM" part. Specifically, the recursive self improvement. My intuition tells me that an AGI trying to improve itself by rewriting its own code would encounter diminishing returns after a point - after all, there would seem to be a theoretical minimum number of instructions necessary to implement an ideal Bayesian reasoner. Once the AGI has optimized its code down to that point, what further improvements can it do (in software)? Come up with something better than Bayesianism?

Now in your summary here, you seem to downplay the recursive self-improvement part, implying that it would 'help,' but isn't strictly necessary. But my impression from reading Eliezer was that he considers it an integral part of the thesis - as it would seem to be to me as well. Because if the intelligence explosion isn't coming from software self-improvement, then where is it coming from? Moore's Law? That isn't fast enough for a "FOOM", even if intelligence scaled linearly with the hardware you threw at it, which my intuition tells me it probably wouldn't.

Now of course this is all just intuition - I haven't done the math, or even put a lot of thought into it. It's just something that doesn't seem obvious to me, and I've never heard a compelling explanation to convince me my intuition is wrong.

Comment author: ShardPhoenix 13 August 2010 09:27:46AM *  5 points [-]

I don't think anyone argues that there's no limit to recursive self-improvement, just that the limit is very high. Personally I'm not sure if a really fast FOOM is possible, but I think it's likely enough to be worth worrying about (or at least letting the SIAI worry about it...).

Comment author: Simulation_Brain 13 August 2010 07:22:22PM 2 points [-]

I think the concern stands even without a FOOM; if AI gets a good bit smarter than us, however that happens (design plus learning, or self-improvement), it's going to do whatever it wants.

As for your "ideal Bayesian" intuition, I think the challenge is deciding WHAT to apply it to. The amount of computational power needed to apply it to every thing and every concept on earth is truly staggering. There is plenty of room for algorithmic improvement, and it doesn't need to get that good to outwit (and out-engineer) us.

Comment author: cata 13 August 2010 07:00:35PM 2 points [-]

I think the widespread opinion is that the human brain has relatively inefficient hardware -- I don't have a cite for this -- and, most likely, inefficient software as well (it doesn't seem likely that evolution has optimized general intelligence very well in the relatively short timeframe that we have had it at all, and we don't seem to be able to efficiently and consistently channel all of our intelligence into rational thought).

That being the case, if we were going to write an AI that was capable of self-improvement on hardware that was roughly as powerful or more powerful than the human brain (which seems likely) it stands to reason that it could potentially be much faster and more effective than the human brain; and self-improvement should move it quickly in that direction.

Comment author: ciphergoth 13 August 2010 07:55:07AM 8 points [-]

Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?

Comment author: whpearson 13 August 2010 08:13:29AM 6 points [-]

My charitable reading is that he is arguing there will be other people like him and if SIAI wishes to continue growing there does need to be easily digested material.

Comment author: [deleted] 13 August 2010 12:45:49PM 8 points [-]

From my experience as a long-time lurker and occasional poster, LW is not easily accessible to new users. The Sequences are indeed very long and time consuming, and most of them have multiple links to other posts you are supposed to have already read, creating confusion if you should happen to forget the gist of a particular post. Besides, Eliezer draws a number of huge philosophical conclusions (reductionism, computationalism, MWI, the Singularity, etc.), and a lot of people aren't comfortable swallowing all of that at once. Indeed, the "why should I buy all this?" question has popped into my head many times while reading.

Furthermore, I think criticism like this is good, and the LW crowd should not have such a negative reaction to it. After all, the Sequences do go on and on about not getting unduly emotionally attached to beliefs; if the community can't take criticism, that is probably a sign that it is getting a little too cozy with its current worldview.

Comment author: ciphergoth 13 August 2010 12:57:48PM 6 points [-]

Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.

Comment author: [deleted] 13 August 2010 01:07:11PM *  4 points [-]

Agreed--criticism of this sort vaguely reminds me of criticism of evolution in that it attacks a particular part of the desired target rather than its fundamental assumptions (my apologies to the original poster). Still, I think we should question the Sequences as much as possible, and even misguided criticism can be useful. I'm not saying we should welcome an unending series of top-level posts like this, but I for one would like to see critical essays on some of LW's most treasured posts. (There goes my afternoon...)

Comment author: ciphergoth 13 August 2010 01:50:08PM 2 points [-]

Of course, substantive criticism of specific arguments is always welcome.

Comment author: HughRistik 13 August 2010 07:02:49PM *  4 points [-]

Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.

Disagree. If you are asking people for money (and they are paying you), the burden is on you to provide justification at multiple levels of detail to your prospective or current donors.

But, but... then you'll have to, like, repeat yourself a lot!

No shit. If you want to change the world, be prepared to repeat yourself a lot.

Comment author: XiXiDu 13 August 2010 07:14:51PM *  2 points [-]

My primary point was to inquire about the foundation and credibility of the aforementioned chain of reasoning. Is it a coherent internal logic reasoning about itself, or is it based on firm ground?

Take the following example: A recursively self-improving AGI quickly reaches a level that can be considered superhuman. As no advanced nanotechnology was necessary for its construction, it is so far awfully limited in what it can accomplish given its vast and fast intellect. Thus it solves all open problems associated with advanced nanotechnology and secretly mails its solutions to a researcher. This researcher is very excited and consequently builds a corporation around this new technology. Later the AGI buys stock in that company and plants a front man. Through some superhuman social engineering it finally obtains control of the technology...

At this point we are already deep into subsequent reasoning about something shaky which is at the same time used as evidence for the very reasoning involving it. Taking a conclusion and running with it, building a huge framework of further conclusions around it, is in my opinion questionable. First this conclusion has to yield marginal evidence of its feasibility; only then can you create a further hypothesis engaging with further consequences. You are making estimations within a framework that is itself not based on firm ground. The gist of what I was trying to say is: do not base subsequent conclusions and actions on other conclusions which themselves do not bear evidence.

I was inquiring about the supportive evidence at the origin of your complex multi-step extrapolations argued to be from inductive generalizations. If there isn't any, what difference is there between writing fiction and complex multi-step extrapolations argued to be from inductive generalizations?

I've read and heard enough to be in doubt since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor of some basic principles of unknown accuracy. And even those arguments are sufficiently vague that you cannot differentiate them from mere philosophical musing.

In the case of the SIAI it rather seems to be that there are hypotheses based on other hypotheses that are not yet tested.

Comment author: HughRistik 13 August 2010 06:54:48PM *  6 points [-]

If so... is that request bad?

If you are running a program where you are trying to convince people on a large scale, then you need to be able to provide overviews of what you are saying at various levels of resolution. Getting annoyed (at one of your own donors!) for such a request is not a way to win.

Edit: At the time, Eliezer didn't realize that XiXiDu was a donor.

Comment author: Wei_Dai 13 August 2010 11:38:38PM 10 points [-]

Getting annoyed (at one of your own donors!) for such a request is not a way to win.

I don't begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?

I'm not sure what the solution is to this problem, but I'm hoping that somebody is thinking about it.

Comment author: HughRistik 14 August 2010 12:27:12AM 2 points [-]

I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor.

Me too. The reason I upvoted this post was because I hoped it would stimulate higher quality discussion (whether complimentary, critical, or both) of SIAI in the future. I've been hoping to see such a discussion on LW for a while to help me think through some things.

Comment author: ciphergoth 15 August 2010 07:12:33PM 3 points [-]

In other words, you see XiXiDu's post as the defector in the Asch experiment who chooses C when the group chooses B but the right answer is A?

Comment author: Vladimir_Nesov 12 August 2010 08:19:40PM *  7 points [-]

The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.

The reason to work on FAI is to prevent any non-Friendly process from eventually taking control over the future, however fast or slow, suddenly powerful or gradual it happens to be. And the reason to work on FAI now is because the fate of the world is at stake. The main anti-prediction to get is that the future won't be Friendly if it's not specifically made Friendly, even if it happens slowly. We can as easily slowly drift away from things we value. You can't optimize for something you don't understand.

It doesn't matter if it takes another thousand years, we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news, more time to prepare, to understand what we want.

(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)

Comment author: multifoliaterose 12 August 2010 08:31:19PM *  4 points [-]

As I've said elsewhere:

(a) There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created. I have not seen anybody present a coherent argument that AGI is likely to be developed before any other existential risk hits us.

(b) Even if AGI deserves top priority, there's still the important question of how to go about working toward a FAI. As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

(c) Even if AGI is near, there are still serious issues of accountability and transparency connected with SIAI. How do we know that they're making a careful effort to use donations in an optimal way? As things stand, I believe that it would be better to start an organization which exhibits high transparency and accountability, fund that, and let SIAI fold. I might change my mind on this point if SIAI decided to strive toward transparency and accountability.

Comment author: mkehrt 12 August 2010 08:51:52PM 2 points [-]

I really agree with both a and b (although I do not care about c). I am glad to see other people around here who think both these things.

Comment author: JamesAndrix 13 August 2010 02:28:02AM 4 points [-]
  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

I don't believe that is necessarily true, just that no one else is doing it. I think other teams working on FAI Specifically would be a good thing, provided they were competent enough not to be dangerous.

Likewise, Lesswrong (then Overcoming bias) is just the only place I've found that actually looked at the morality problem in a non-obviously wrong way. When I arrived I had a different view on morality than EY, but I was very happy to see another group of people at least working on the problem.

Also note that you only need to believe in the likelihood of UFAI -or- nanotech -or- other existential threats in order to want FAI. I'd have to step back a few feet to wrap my head around considering it infeasible at this point.

Comment author: CarlShulman 13 August 2010 07:26:31AM *  7 points [-]

That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

That's just a weird claim. When Richard Posner or David Chalmers does writing in the area SIAI folk cheer, not boo. And I don't know anyone at SIAI who thinks that the Future of Humanity Institute's work in the area isn't a tremendously good thing.

Likewise, Lesswrong (then Overcoming bias) is just the only place I've found that actually looked at the morality problem in a non-obviously wrong way.

Have you looked into the philosophical literature?

Comment author: MartinB 02 November 2010 10:08:23AM 2 points [-]

Robert A. Heinlein was an engineer and SF writer who created many stories that hold up quite well. He put in his understanding of human interaction and of engineering to make stories that are somewhat realistic. But no one should confuse him with someone researching the actual likelihood of any particular future. He did not build anything that improved the world, but he wrote interestingly about the possibilities and encouraged many others to pursue technical careers. SF often makes bad use of logic, and features the well-known hero bias, or scientists who put together something to solve a current crisis that all their colleagues before had not managed to. Unrealistic, but fun to read. SF writers write for a living; hard SF writers take the material a bit more seriously, but still are not actual experts on technology. Except when they are: Vinge would be such a case. Egan I have not read yet. Kurzweil seems to be one of the more visible futurists (critiquing his ideas could fill its own post). But you will notice that the air gets pretty thin in this area, where everyone leads his own cult and spends more time on PR than on finding good counterarguments to their current views. It would be awesome to have more people work on transhumanism/life extension/AI and whatnot, but that is not yet the case. There might even be good reasons for that which LWers fail to perceive, or it could be that many scientists actually have a massive blind spot in regard to some of these topics. Regarding AI, I fail to estimate how likely it is that we reach it any time soon, since I really cannot estimate all the complications on the way. The general possibility of human-level intelligence looks plausible, because there are humans running around who have it. But even if the main goal of SIAI is never ever reached, I already profit from the side products. 
Instead of concentrating on the AI stuff you can take the real-world part of the sequences and work on becoming a better thinker in whichever domain happens to be yours.

Comment author: EStokes 12 August 2010 11:04:58PM 4 points [-]

I don't think this post was well-written, at the least. I didn't even understand the tl;dr?

tldr; Is the SIAI evidence-based or merely following a certain philosophy? I'm currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.

I don't see much precise expansion on this, except for MWI? There's a sequence on it.

And that is my problem. Given my current educational background and knowledge I cannot differentiate between LW being a consistent internal logic, i.e. imagination or fiction, and something which is sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed on this site.

Have you read the sequences?

As for why there aren't more people supporting SIAI, first of all, it's not widely known; second of all, it's liable to be dismissed on first impressions. Not many have examined the SIAI. Also, [only 4% of the general public in the US believe in neither a god nor a higher power](http://en.wikipedia.org/wiki/Religion#cite_ref-49). The majority isn't always right.

I don't understand why this post has upvotes. It was unclear, and it seems its topics went unresearched. The usefulness of donating to the SIAI has been discussed before; I think someone probably would've posted a link if asked in the open thread.

Comment author: kodos96 13 August 2010 05:18:28AM 12 points [-]

I don't understand why this post has upvotes.

I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu's doubts about some of SIAIs premises and conclusions, but perhaps don't speak up with their concerns either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.

Unfortunately, the tone of a lot of the responses to this thread lead me to believe that those motivated by the latter option may have been right to worry.

Comment author: Furcas 13 August 2010 05:23:20AM 7 points [-]

Personally, I upvoted the OP because I wanted to help motivate Eliezer to reply to it. I don't actually think it's any good.

Comment author: kodos96 13 August 2010 05:47:44AM *  11 points [-]

Yeah, I agree (no offense XiXiDu) that it probably could have been better written, cited more specific objections etc. But the core sentiment is one that I think a lot of people share, and so it's therefore an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.

Comment author: HughRistik 13 August 2010 06:52:21PM 2 points [-]

This was exactly my impression, also.

Comment author: Wei_Dai 13 August 2010 08:51:30AM 5 points [-]

I think your upvote probably backfired, because (I'm guessing) Eliezer got frustrated that such a badly written post got upvoted so quickly (implying that his efforts to build a rationalist community were less successful than he had thought/hoped) and therefore responded with less patience than he otherwise might have.

Comment author: Eliezer_Yudkowsky 13 August 2010 07:30:48PM *  1 point [-]

Then you should have written your own version of it. Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes. People like XiXiDu are ones I've learned to classify as noisemakers who suck up lots of attention but who never actually change their minds enough to start pitching in, no matter how much you argue with them. My perceptual system claims to be able to classify pretty quickly whether someone is really trying or not, and I have no concrete reason to doubt it.

I guess next time I'll try to remember not to reply at all.

Everyone else, please stop upvoting posts that aren't good. If you're interested in the topic, write your own version of the question.

Comment author: orthonormal 13 August 2010 10:43:31PM *  8 points [-]

It has seemed to me for a while that a number of people will upvote any post that goes against the LW 'consensus' position on cryonics/Singularity/Friendliness, so long as it's not laughably badly written.

I don't think anything Eliezer can say will change that trend, for obvious reasons.

However, most of us could do better in downvoting badly argued or fatally flawed posts. It amazes me that many of the worst posts here won't drop below 0 for any length of time, and even then not very far. Docking someone's karma isn't going to kill them, folks. Do everyone a favor and use those downvotes.

Comment author: XiXiDu 14 August 2010 09:14:05AM 4 points [-]

...badly argued or fatally flawed posts.

My post is neither badly argued nor fatally flawed as I've mainly been asking questions and not making arguments. But if you think otherwise, why don't you argue where I am fatally flawed?

My post has not been written to speak out against any 'consensus', I agree with the primary conclusions but am skeptic about further chains of reasoning based on those conclusions as I don't perceive them to be based on firm ground but merely be what follows from previous evidence.

And yes, I'm a lazy bum. I've not thought about the OP for more than 10 minutes. It's actually copy-and-paste work from previous comments. Hell, what did you expect? A dissertation? Nobody else was asking those questions; someone had to.

Comment author: XiXiDu 13 August 2010 07:36:02PM 13 points [-]

What are you considering as pitching in? That I'm donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?

You simply seem to take my post as a hostile attack rather than the inquiry of someone who happened not to be lucky enough to get a decent education in time.

Comment author: HughRistik 13 August 2010 07:45:55PM *  6 points [-]

Eliezer seems to have run your post through some crude heuristic and incorrectly categorized it. While you did make certain errors that many people have observed, I think you deserved a different response.

At least, Eliezer seemingly not realizing that you are a donor means that his treatment of you doesn't represent how he treats donors.

Edit: To his credit, Eliezer apologized and admitted to his perceptual misclassification.

Comment author: Eliezer_Yudkowsky 13 August 2010 07:45:20PM *  11 points [-]

All right, I'll note that my perceptual system misclassified you completely and consider that concrete reason to doubt it from now on.

Sorry.

If you are writing a post like that one it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I'm dealing with "the sort of thing said by someone who actually helps" and not "the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them", which is how my perceptual system classified that post.

The Summit is coming up and I've got lots of stuff to do right at this minute, but I'll top-comment my very quick attempt at pointing to information sources for replies.

Comment author: XiXiDu 13 August 2010 07:55:32PM 6 points [-]

I'll donate again in the next few days and tell you the name and the amount. I don't have much, but this way you can see that I'm not just making this up. Maybe you can also check the previous donation then.

And for the promoting, everyone can Google it. I link people up to your stuff almost every day. And there are people here who added me to Facebook and if you check my info you'll see that some of my favorite quotations are actually yours.

And how come, on my homepage, if you check the sidebar, your homepage and the SIAI have been listed under favorite sites for many years now?

I'm the kind of person who has to be skeptical about everything, and if I'm bothered too much by questions I cannot resolve in time, I do stupid things. Maybe this post was stupid, I don't know.

Comment author: Aleksei_Riikonen 14 August 2010 01:46:04AM 4 points [-]

Sorry about this sounding impolite towards XiXiDu, but I'll use this opportunity to note that it is a significant problem for SIAI that there are people out there like XiXiDu promoting SIAI even though they don't understand SIAI much at all.

I don't know what the best attitude is to try to minimize the problem this creates: many people will first run into SIAI through hearing about it from people who don't seem very clueful or intelligent. (That's real Bayesian evidence for SIAI being a cult or just crazy, and many people then won't acquire sufficient additional evidence to update out of the misleading first impression; getting stuck in first impressions is a very common bias, too.)

Personally, I've adopted the habit of not even trying to talk about singularity stuff to new people who aren't very bright. (Of course, if they become interested despite this, then they can't just be completely ignored.)

Comment author: XiXiDu 14 August 2010 09:02:28AM 6 points [-]

I thought about that too. But many people outside this community suspect me, as they often state, to be intelligent and educated. And I mainly try to talk to people in academia. You won't believe it, but I am even able to make them think that I'm one of them, up to the point of correcting errors in their calculations (it has happened). Many haven't even heard of Bayesian inference, by the way...

The way I introduce people to this is not by telling them about the risks of AGI but rather linking them up to specific articles on lesswrong.com or telling them about how the SIAI tries to develop ethical decision making etc.

I've grown up in a family of Jehovah's Witnesses, I know how to start selling bullshit. Not that the SIAI is bullshit, but I'd never use words like 'Singularity' while promoting it to people I don't know.

Many people know about the transhumanist/singularity faction already and think it is complete nonsense, so I can often only improve their opinion.

There are people teaching at the university level who told me I convinced them that he (EY) is to be taken seriously.

Comment author: xamdam 13 August 2010 07:52:21PM *  9 points [-]

It was actually in the post

What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making the SIAI a prime priority. That is, I'm donating to the SIAI but also spending considerable amounts of resources on maximizing utility at present.

So you might suggest to your perceptual system to read the post first (at least before issuing a strong reply).

Comment author: Clippy 13 August 2010 07:55:37PM 5 points [-]

I also donated to SIAI, and it was almost all the USD I had at the time, so I hope posters here take my questions seriously. (I would donate even more if someone would just tell me how to make USD.)

Also, I don't like when this internet website is overloaded with noise posts that don't accomplish anything.

Comment author: thomblake 13 August 2010 07:59:22PM 9 points [-]

Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you've managed to accrue 890 karma. I think you've gotten a remarkably good reception so far.

Comment author: xamdam 13 August 2010 08:04:15PM 5 points [-]

I think we have different ideas of noise

Though I would miss you as the LW mascot if you stopped adding this noise.

Comment author: CronoDAS 14 August 2010 09:55:16AM 3 points [-]

I would donate even more if someone would just tell me how to make USD.

Depending on your expertise and assets, this site might provide some ways.

Comment author: NancyLebovitz 14 August 2010 10:07:28AM 7 points [-]

I'm pretty sure Clippy meant "make" in a very literal sense.

Comment author: Clippy 14 August 2010 03:47:48PM 5 points [-]

Yeah, I want to know how to either produce the notes that will be recognized as USD, or access the financial system in a way that I can believably tell it that I own a certain amount of USD. The latter method could involve root access to financial institutions.

All the other methods of getting USD are disproportionately hard (_/

Comment author: Interpolate 14 August 2010 03:58:50AM *  4 points [-]

I upvoted the original post for:

  • Stimulating critical discussion of the Less Wrong community - specifically: the beliefs almost unanimously shared, and the negativity towards criticism; as someone who has found Less Wrong extremely helpful, and would hate to see it descend into groupthink and affiliation signalling.

A question to those who dismiss the OP as merely "noise": what do you make of the nature of this post?

  • Stimulating critical discussion of the operating premises of the SIAI; as someone who is considering donating and otherwise contributing. This additionally provides elucidation to those in a state of epistemic limbo regarding the various aspects of FAI and the Singularity.

I am reminded of this passage regarding online communities (source):

So there's this very complicated moment of a group coming together, where enough individuals, for whatever reason, sort of agree that something worthwhile is happening, and the decision they make at that moment is: This is good and must be protected. And at that moment, even if it's subconscious, you start getting group effects. And the effects that we've seen come up over and over and over again in online communities...

The first is sex talk, what he called, in his mid-century prose, "A group met for pairing off." And what that means is, the group conceives of its purpose as the hosting of flirtatious or salacious talk or emotions passing between pairs of members...

The second basic pattern that Bion detailed: The identification and vilification of external enemies. This is a very common pattern. Anyone who was around the Open Source movement in the mid-Nineties could see this all the time...

The third pattern Bion identified: Religious veneration. The nomination and worship of a religious icon or a set of religious tenets. The religious pattern is, essentially, we have nominated something that's beyond critique. You can see this pattern on the Internet any day you like...

So these are human patterns that have shown up on the Internet, not because of the software, but because it's being used by humans. Bion has identified this possibility of groups sandbagging their sophisticated goals with these basic urges. And what he finally came to, in analyzing this tension, is that group structure is necessary. Robert's Rules of Order are necessary. Constitutions are necessary. Norms, rituals, laws, the whole list of ways that we say, out of the universe of possible behaviors, we're going to draw a relatively small circle around the acceptable ones.

He said the group structure is necessary to defend the group from itself. Group structure exists to keep a group on target, on track, on message, on charter, whatever. To keep a group focused on its own sophisticated goals and to keep a group from sliding into these basic patterns. Group structure defends the group from the action of its own members.

Comment author: Aleksei_Riikonen 14 August 2010 04:06:27AM *  1 point [-]

As someone who thought the OP was of poor quality, and who has had a very high opinion of SIAI and EY for a long time (and still has), I'll say that that "Eliezer Yudkowsky facts" was indeed a lot worse. It was the most embarrassing thing I've ever read on this site. Most of those jokes aren't even good.

Comment author: simplicio 14 August 2010 07:56:00AM 7 points [-]

They are very good examples of the genre (Chuck Norris-style jokes). I for one could not contain my levity.

Comment author: Liron 14 August 2010 11:49:53PM 6 points [-]

Fact: Evaluating humor about Eliezer Yudkowsky always results in an interplay between levels of meta-humor such that the analysis itself is funny precisely when the original joke isn't.

Comment author: XiXiDu 14 August 2010 09:26:20AM 5 points [-]

Wow, I thought it was one of the best. Through that post I actually got a philosopher (who teaches in Sweden), who'd been skeptical about EY, to read up on the MWI sequence and afterwards agree that EY is right.

Comment author: ciphergoth 14 August 2010 07:57:50AM 4 points [-]

I like that post - of course, few of the jokes are funny, but you read such a thing for the few gems they do contain. I think of it as hanging a lampshade (warning, TV tropes) on one of the problems with this website.

Comment author: Eliezer_Yudkowsky 14 August 2010 07:28:28AM 7 points [-]

I was embarrassed by most of the facts. The one about my holding up a blank sheet of paper and saying "a blank map does not correspond to a blank territory" and thus creating the universe is one I still tell at parties.

Comment author: Wei_Dai 14 August 2010 05:16:33AM 9 points [-]

"Eliezer Yudkowsky facts" is meant to be fun and entertainment. Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes? Obviously many people did find the post amusing (judging from its 47 votes), even if you didn't. If those jokes were not posted, then something of real value would have been lost.

The situation with XiXiDu's post is different because almost everyone seems to agree that it's bad, and those who voted it up did so only to "stimulate discussion". But if they hadn't voted up XiXiDu's post, it's quite likely that someone would eventually have written up a better post asking similar questions and generating a higher quality discussion, so the outcome would likely have been a net improvement. Or alternatively, those who wanted to "stimulate discussion" could have just looked in the LW archives and found all the discussion they could ever hope for.

Comment author: xamdam 13 August 2010 02:11:42PM 5 points [-]

I was not sure whether to downvote this post for its epistemic value or upvote it for its instrumental value (stimulating good discussion).

I ended up downvoting; I think this forum deserves better epistemic quality (I paused top-posting myself for this reason). I also donated to SIAI, because its value was once again validated to me by the discussion (though I have some reservations about the apparent eccentricity of the SIAI folks, which is understandable (dropping out of high school is to me evidence of high rationality) but counterproductive (not having enough accepted academics involved)). I mention this because it came up in the discussion and is definitely part of the subtext.

As to the concrete points of the post, I covered the part of it about the FAI vs AGI timeline here.

The other part

Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI?

is simply uninformed and shows a lack of diligence, which is the main reason I feel the post is not up to par; I hope the clearly intelligent OP does some more homework and keeps contributing to the site.

  • Vinge has written about bad Singularity scenarios (his Singularity paper and sci-fi).
  • Stross has written about bad Singularity scenarios, at least in Accelerando (spoiler: humanity survives but only because AIs did not care about their resources at that point in time)
  • Kurzweil has written about the possibility of bad scenarios (CIO article in discussion below)

I'll add one more, and to me rather damning: Peter Norvig, who wrote the (most widely used) book on AI and is head of research at Google, is on the front page of SIAI (video clip), saying that as scientists we cannot ignore the negative possibilities of AGI.

Comment author: Will_Newsome 13 August 2010 04:35:33PM 5 points [-]

dropping out of high school is to me evidence of high rationality

Are you talking about me? I believe I'm the only person that could sorta kinda be affiliated with the Singularity Institute who has dropped out of high school, and I'm a lowly volunteer, not at all representative of the average credentials of the people who come through SIAI. Eliezer demonstrated his superior rationality to me by never going to high school in the first place. Damn him.

Comment author: Alicorn 13 August 2010 05:01:43PM 4 points [-]

I dropped out of high school... to go to college early.

Comment author: xamdam 13 August 2010 05:46:33PM 2 points [-]

I finished high school early (16) by American standards, with college credit. By the more sane standards of Soviet education 16 is, well, standard (and you learn a lot more).

Comment author: xamdam 13 August 2010 04:59:45PM 2 points [-]

talking about this comment.

Now the first of those people I contacted about it:

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout

Comment author: ciphergoth 12 August 2010 05:19:44PM 3 points [-]

Do you have any reason to suppose that Charlie Stross has even considered SIAI's claims?

Comment author: MartinB 12 August 2010 07:00:25PM 4 points [-]

Let's all try not to confuse SF writers with futurists, and neither with researchers or engineers. Stories follow the rules of awesome, or they don't sell well. There is a wonderful letter from Heinlein to a fan who asked why he wrote, and the top answer was: 'to put food on the table'. It is probably online, but I could not find it atm. Comparing the work of the SIAI to any particular writer is like comparing the British navy with Jack London.

Comment author: NancyLebovitz 13 August 2010 09:01:47AM 4 points [-]

Heinlein also described himself as competing for his reader's beer money.

Comment author: XiXiDu 13 August 2010 04:00:02PM 2 points [-]

Stories follow the rules of awesome...

This is kind of off topic but I think the prospects being depicted on LW etc. are more awesome than a lot of SF stories.

Comment author: XiXiDu 12 August 2010 06:50:00PM 3 points [-]

If someone like me, who failed secondary school, can come up with such ideas before coming across the SIAI, I thought that someone who writes SF novels about the idea of a technological singularity might too. And you don't have to link me to the post about 'Generalizing From One Example'; I'm aware of it.

And Charles Stross was not the only person that I named, by the way. At least one of those people is a member on this site.

Comment author: Wei_Dai 13 August 2010 03:12:25AM 8 points [-]

At least one of those people is a member on this site.

If you're referring to Gary Drescher, I forwarded him a link of your post, and asked him what his views of SIAI actually are. He said that he's tied up for the next couple of days, but will reply by the weekend.

Comment author: XiXiDu 13 August 2010 08:05:51AM 3 points [-]

Great, thank you! I was thinking of asking some people to actually comment here.

Comment author: ciphergoth 13 August 2010 10:06:28AM 4 points [-]

I plan on asking Stross about this next time I visit Edinburgh, if he's in town.

Comment author: XiXiDu 13 August 2010 10:12:10AM 4 points [-]

That'd be great. I'd be excited to have as many opinions as possible about the SIAI from people who are not associated with it.

I wonder if we could get some experts to actually write an informed critique about the whole matter, not just some SF writers. Although I think Stross is probably as educated as EY.

What is Robin Hanson's opinion about all this? Does anyone know? Is he as worried about the issues in question? Is he donating to the SIAI?

Comment author: CarlShulman 13 August 2010 11:19:03AM 6 points [-]

Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don't know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning the cosmic commons outcome.

I don't know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.

Comment author: CarlShulman 13 August 2010 06:09:37PM 2 points [-]

Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that after he spent some more time thinking he would wind up giving a probability in that range.

Comment author: Unknowns 13 August 2010 10:18:01AM 2 points [-]

Robin Hanson said that he thought the probability of an AI being able to foom and destroy the world was about 1%. However, note that since this would be a 1% chance of destroying the world, he considers it reasonable to take precautions against this.

Comment author: CarlShulman 13 August 2010 10:33:41AM *  3 points [-]

That's AI built by a very small group fooming to take over the world at 1%, going from a millionth or less of the rest of the world economy to much larger very quickly. That doesn't account for risk from AI built by large corporations or governments, Darwinian AI evolution destroying everything we value, AI arms race leading to war (and accidental screwups), etc. His AI (80% of which he says is brain emulations) x-risk estimate is higher. He says between 1% and 50%.

Comment author: Wei_Dai 12 August 2010 08:10:14PM 2 points [-]

Stross's views are simply crazy. See his “21st Century FAQ” and others' critiques of it.

I do wonder why Ray Kurzweil isn't more concerned about the risk of a bad Singularity. I'm guessing he must have heard SIAI's claims, since he co-founded the Singularity Summit along with SIAI. Has anyone put the question to him?

Comment author: ciphergoth 12 August 2010 09:11:31PM 2 points [-]

I think "simply crazy" is overstating it, but it's striking he makes the same mistake that Wright and other critics make: SIAI's work is focussed on AI risks, while the critics focus on AI benefits. This I assume is because rather than addressing what SIAI actually say, they're addressing their somewhat religion-like picture of it.

Comment author: whpearson 12 August 2010 09:23:20PM 5 points [-]

I got the sense that he is very pessimistic about the chance of controlling things if they do go FOOM. If he is that pessimistic and also believes that the advance of AI will be virtually impossible to stop, then forgetting about it will be as purposeful as worrying about it.

Comment author: CarlShulman 13 August 2010 07:17:21AM 2 points [-]

I think this is an accurate picture of Stross' point.

Comment author: timtyler 13 August 2010 06:27:45AM *  3 points [-]

Re: "I do wonder why Ray Kurzweil isn't more concerned about the risk of a bad Singularity"

http://www.cio.com/article/29790/Ray_Kurzweil_on_the_Promise_and_Peril_of_Technology_in_the_21st_Century

Comment author: Aleksei_Riikonen 12 August 2010 06:39:47PM *  3 points [-]

This post makes very weird claims regarding what SIAI's positions would be.

"Spend most on a particular future"? "Eliezer Yudkowsky is the right and only person who should be leading"?

It doesn't at all seem to me that stuff such as these would be SIAI's position. Why doesn't the poster provide references for these weird claims?

Here's a good reference for what SIAI's position actually is:

http://singinst.org/riskintro/index.html

Comment author: XiXiDu 12 August 2010 07:29:23PM *  1 point [-]

Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

Q: The only two legitimate occupations for an intelligent person in our current world? Answer

Q: What's your advice for Less Wrong readers who want to help save the human race? Answer

Comment author: timtyler 21 August 2010 07:09:57PM *  5 points [-]

A) doesn't seem to be quoted verbatim from the supplied reference!

There is some somewhat similar material there - but E.Y. is reading out a question that has been submitted by a reader! Misquoting him while he is quoting someone else doesn't seem to be very fair!

[Edit: please note the parent has been dramatically edited since this response was made]

Comment author: Aleksei_Riikonen 12 August 2010 07:41:15PM 2 points [-]

How do your quotes claim that Eliezer Yudkowsky is the only person who should be leading?

(I would say that factually, there are also other people in leadership positions within SIAI, and Eliezer is extremely glad that this is so, instead of thinking that it should be only him.)

How do they demonstrate that donating to SIAI is "spending on a particular future"?

(I see it as trying to prevent a particular risk.)

Comment author: timtyler 12 August 2010 08:30:50PM *  3 points [-]

Two key propositions seem to be:

  1. The world is at risk from a superintelligence-gone-wrong;

  2. The SIAI can help to do something about that.

Both propositions seem debatable. For the first point, certainly some scenarios are better than others - but the superintelligence causing widespread havoc by turning on its creators hypothesises substantial levels of incompetence, followed up by a complete failure of the surrounding advanced man-machine infrastructure to deal with the problem. Most humans may well have more to fear from a superintelligence-gone-right, but in dubious hands.

Comment author: thomblake 12 August 2010 02:46:20PM 3 points [-]

This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.

Comment author: EStokes 12 August 2010 11:11:06PM 3 points [-]

It didn't feel very clear/coherent, but I'm tired, so meh. I think it could've done with more lists, or something like that: an outline or a clear summation of his points.

Comment author: Vladimir_Nesov 12 August 2010 11:05:44PM 3 points [-]

This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.

The connection to Eliezer's ability to write a book is bizarre (to say so politely).

Comment author: Alicorn 12 August 2010 11:11:23PM *  7 points [-]

I think the idea is that if one was originally skeptical about the general feasibility of stitching together separate posts into a single book, this post offers an example of it being done on a smaller scale and ups the estimate of that feasibility.

Comment author: XiXiDu 19 August 2010 02:18:44PM *  2 points [-]

Greg Egan and the SIAI?

I completely forgot about this interview, so I already knew why Greg Egan isn't that worried:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.

Comment author: MichaelVassar 29 December 2010 05:19:50PM 4 points [-]

He should try telling that to the Aztecs, or better yet, the inhabitants of Hispaniola. Turns out that ten thousand years of divergence can mean instant death, no saving throw.

Comment author: CarlShulman 13 August 2010 07:11:14AM 2 points [-]

Here's the Future of Humanity Institute's survey results from their Global Catastrophic Risks conference. The median estimate of extinction risk by 2100 is 19%, with 5% for AI-driven extinction by 2100:

http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey

Unfortunately, the survey didn't ask for probabilities of AI development by 2100, so one can't get probability of catastrophe conditional on AI development from there.
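The gap this leaves can be made concrete with a toy calculation. The 5% figure below is the survey's median; the values assumed for P(AI developed by 2100) are pure illustrations, not anything the survey reports:

```python
# The survey's 5% median for AI-driven extinction by 2100 is a joint
# probability: P(AI developed AND extinction). Recovering the conditional
# P(extinction | AI developed) requires dividing by P(AI developed by 2100),
# which the survey did not ask for.
p_ai_and_extinction = 0.05  # survey median (joint probability)

# Two hypothetical values for P(AI developed by 2100) -- assumptions only:
for p_ai in (0.9, 0.25):
    p_extinction_given_ai = p_ai_and_extinction / p_ai
    print(f"If P(AI by 2100) = {p_ai}: "
          f"P(extinction | AI developed) = {p_extinction_given_ai:.2f}")
```

The same survey number is thus consistent with a conditional risk anywhere from a few percent to tens of percent, depending entirely on the unasked question.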

Comment author: timtyler 13 August 2010 08:02:32AM *  7 points [-]

That sample is drawn from those who think risks are important enough to go to a conference about the subject.

That seems like a self-selected sample of those with high estimates of p(DOOM).

The fact that this is probably a biased sample from the far end of a long tail should inform interpretations of the results.

Comment author: CarlShulman 13 August 2010 06:13:54PM 6 points [-]

There is also the unpacking bias mentioned in the survey pdf. Going the other direction are some knowledge effects. Also note that most of the attendees were not AI types, but experts on asteroids, nukes, bioweapons, cost-benefit analysis, astrophysics, and other non-AI risks. It's still interesting that the median AI risk was more than a quarter of median total risk in light of that fact.

Comment author: Rain 13 August 2010 12:42:48PM *  4 points [-]

There's also the possibility that people dismiss it out of hand, without even thinking, and the more you look into the facts, the more your estimate rises. In this instance, the people at the conference just have the most facts.

Comment deleted 15 August 2010 04:13:23PM [-]
Comment author: Vladimir_Nesov 17 August 2010 06:51:36PM *  5 points [-]

This comment is my last comment for at least the rest of 2010.

Since you've posted more, I assume you meant "last comment on this post"?

Comment author: XiXiDu 17 August 2010 06:54:57PM 6 points [-]

No, I changed my mind. Or maybe it was a lack of self-control. You are right. I have no excuse.

Comment author: Vladimir_Nesov 17 August 2010 07:12:48PM 2 points [-]

Well, I didn't think making clear-cut resolutions like this (publicly or not) was a good idea, but I pointed out an inconsistency.

Comment author: lucidfox 30 December 2010 08:17:13PM 2 points [-]

Good thing at least some people here are willing to think critically.

I know these are unpopular views around here, but for the record:

  • Risks be risks, but I believe it's unlikely that humanity will actually be destroyed in the foreseeable future.
  • I do not think it's likely that we'll arrive at a superhuman AI during my lifetime, friendly or not.
  • I do not think that Eliezer's techno-utopia is more desirable than simply humanity continuing to develop on its own at a natural pace.
  • I do not fear death of old age, nor do I desire immortality or uploads.
  • As much as I respect Eliezer as a popularizer of science, when it comes to social wishes he makes sweeping generalizations, too easily projects his personal desires onto the rest of humanity, and singles out whole broad categories as stupid or deluded just because they don't share his beliefs. If I don't trust his agenda enough to vote for him in a hypothetical election for President of United Earth, why should I trust his hypothetical AI?
Comment author: jimrandomh 30 December 2010 09:01:36PM 4 points [-]

Eliezer ... singles out whole broad categories as stupid or deluded just because they don't share his beliefs.

Are you sure he doesn't single out broad categories as stupid or deluded just because they really are? Calling people stupid may be bad politics, but there is a fact of the matter.

Comment author: JoshuaZ 30 December 2010 08:49:46PM *  6 points [-]

I do not think that Eliezer's techno-utopia is more desirable than simply humanity continuing to develop on its own at a natural pace.

What is the natural pace? Under what definition is there some level of technological development that is natural and some level that is not?

I do not fear death of old age, nor do I desire immortality or uploads.

Do you want to live tomorrow? Do you think you'll want to live the day after tomorrow? If there were a pill that would add five years on average to your lifespan and those would be five good years would you take it?

Good thing at least some people here are willing to think critically.

Unfortunately, what you are doing here is not quite thinking critically about the SIAI. The OP and others in this thread have listed explicit concerns and issues about why they don't necessarily buy into the SIAI's claims. Your post seems much closer to simply listing a long set of conclusions and personal attitudes. That's not critical thinking.