Eliezer_Yudkowsky comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: Unknowns 20 August 2010 03:41:39AM 5 points [-]

No, I don't agree this is an implication. I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence:

1) I am critical to this Friendly AI project that has a significant chance of success.

2) There is no significant chance of Friendly AI without this project.

3) Without Friendly AI, the world is doomed.

But then, as you know, I don't consider it reasonable to put a high degree of confidence in number 3. Nor do many other intelligent people (such as Robin Hanson). So it isn't surprising that I would consider it unreasonable to be sure of all three of them.

I also agree with Tetronian's points.

Comment author: Eliezer_Yudkowsky 20 August 2010 03:57:56AM 4 points [-]

I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence:

1) I am critical to this Friendly AI project that has a significant chance of success.

2) There is no significant chance of Friendly AI without this project.

3) Without Friendly AI, the world is doomed.

I see. So it's not that any one of these statements is a forbidden premise, but that their combination leads to a forbidden conclusion. Would you agree with the previous sentence?

BTW, nobody please vote down the parent below -2, that will make it invisible. Also it doesn't particularly deserve downvoting IMO.

Comment author: Perplexed 20 August 2010 04:16:27AM 5 points [-]

I would suggest that, in order for this set of beliefs to become (psychiatrically?) forbidden, we need to add a fourth item.

4) Dozens of other smart people agree with me on #3.

If someone believes that very, very few people yet recognize the importance of FAI, then the conjunction of beliefs #1 thru #3 might be reasonable. But after #4 becomes true (and known to our protagonist), then continuing to hold #1 and #2 may be indicative of a problem.

Comment author: Perplexed 20 August 2010 04:29:21AM 2 points [-]

With the hint from EY on another branch, I see a problem in my argument. Our protagonist might circumvent my straitjacket by also believing:

5) The key to FAI is TDT, but I have been so far unsuccessful in getting many of those dozens of smart people to listen to me on that subject.

I now withdraw from this conversation with my tail between my legs.

Comment author: katydee 20 August 2010 04:32:30AM *  1 point [-]

All this talk of "our protagonist," as well as the weird references to SquareSoft games, is very off-putting to me.

Comment author: Eliezer_Yudkowsky 20 August 2010 05:01:07AM 5 points [-]

Dozens isn't sufficient. I asked Marcello if he'd run into anyone who seemed to have more raw intellectual horsepower than me, and he said that John Conway gave him that impression. So there are smarter people than me upon the Earth, which doesn't surprise me at all, but it might take a wider net than "dozens of other smart people" before someone comes in with more brilliance and a better starting math education and renders me obsolete.

Comment author: [deleted] 20 August 2010 05:27:01AM 9 points [-]
Comment author: Spurlock 20 August 2010 05:26:47PM 8 points [-]

Simply out of curiosity:

Plenty of criticism (some of it reasonable) has been lobbed at IQ tests and at things like the SAT. Is there a method known to you (or anyone reading) that actually measures "raw intellectual horsepower" in a reliable and accurate way? Aside from asking Marcello.

Comment author: thomblake 20 August 2010 06:44:08PM 10 points [-]

Aside from asking Marcello.

I was beginning to wonder if he's available for consultation.

Comment author: rabidchicken 21 August 2010 05:02:22PM *  6 points [-]

Read the source code, and then visualize a few levels from Crysis or Metro 2033 in your head. While you render them, count the average frames per second. Alternatively, see how quickly you can find the prime factors of every integer from 1 to 1000.

Which is to say... humans in general have extremely limited intellectual power. Instead of calculating things efficiently, we work by using various tricks with caches and memory to find answers. Therefore, almost all tasks depend more on practice and interest than they do on intelligence. So, rather than testing the statement "Eliezer is smart," it has more bearing on this debate to confirm "Eliezer has spent a large amount of time optimizing his cache for tasks relating to rationality, evolution, and artificial intelligence." Intelligence is overrated.

Comment author: XiXiDu 20 August 2010 10:29:58AM *  3 points [-]

Sheer curiosity, but have you or anyone else ever contacted John Conway about the topic of u/FAI and asked him what he thinks about it, the risks associated with it, and maybe the SIAI itself?

Comment author: xamdam 20 August 2010 04:12:42PM 1 point [-]

"raw intellectual power" != "relevant knowledge". Looks like he worked on some game theory, but otherwise not much relevancy. Should we ask Steven Hawking? Or take a poll of Nobel Laureates?

I am not saying that he couldn't be brought up to speed in this kind of discussion, or that he wouldn't have a lot to contribute, but the fact that he hasn't been asked, as things stand, indicates little.

Comment author: XiXiDu 20 August 2010 08:09:55PM 0 points [-]

Richard Dawkins seems to have enough power to infer the relevant knowledge from a single question.

Comment author: Perplexed 20 August 2010 05:05:43AM 1 point [-]

Candid, and fair enough.

Comment author: whowhowho 29 January 2013 02:33:03AM 0 points [-]

Raw intellectual horsepower is not the right kind of smart.

Comment author: TheAncientGeek 17 June 2015 11:37:50AM -1 points [-]

Domain knowledge is much more relevant than raw intelligence.

Comment author: Unknowns 20 August 2010 09:13:42AM 2 points [-]

I wouldn't put it in terms of forbidden premises or forbidden conclusions.

But if each of these statements has a 90% chance of being true, and if they are assumed to be independent (which admittedly won't be exactly true), then the probability that all three are true would be only about 70%, which is not an extremely high degree of confidence; it's more like saying, "This is my opinion, but I could easily be wrong."
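The arithmetic here is easy to check; a minimal sketch of the conjunction calculation, using the comment's illustrative 90% figures and the independence assumption it flags as only approximate:

```python
# Each of the three statements is assigned a 90% chance of being true.
p_each = 0.9

# Under the (admittedly rough) independence assumption, the probability
# that all three hold together is the product of the three.
p_all_three = p_each ** 3

print(round(p_all_three, 3))  # 0.729, i.e. the "about 70%" above
```

The general point: even three individually confident beliefs compound into a noticeably less confident conjunction.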

Personally I don't think 1) or 3), taken in a strict way, could reasonably be said to have more than a 20% chance of being true. I do think a probability of 90% is a fairly reasonable assignment for 2), because most people are not going to bother about Friendliness. Accounting for the fact that these are not totally independent, I don't consider a probability assignment of more than 5% for the conjunction to be reasonable. However, since there are other points of view, I could accept that someone might assign the conjunction a 70% chance in accordance with the previous paragraph without being crazy. But if you assign a probability much higher than that, I would have to withdraw this.

If the statements are weakened as Carl Shulman suggests, then even the conjunction could reasonably be given a much higher probability.

Also, as long as it is admitted that the probability is not high, you could still say that the possibility needs to be taken seriously because you are talking about the possible (if yet improbable) destruction of the world.

Comment author: Eliezer_Yudkowsky 20 August 2010 06:21:21PM 17 points [-]

I certainly do not assign a probability as high as 70% to the conjunction of all three of those statements.

And in case it wasn't clear, the problem I was trying to point out was simply with having forbidden conclusions - not forbidden by observation per se, but forbidden by forbidden psychology - and using that to make deductions about empirical premises that ought simply to be evaluated by themselves.

I s'pose I might be crazy, but you all are putting your craziness right up front. You can't extract milk from a stone!

Comment author: Unknowns 20 August 2010 06:29:01PM 2 points [-]

That's good to know. I hope multifoliaterose reads this comment, as he seemed to think that you would assign a very high probability to the conjunction (and it's true that you've sometimes given that impression by your way of talking).

Also, I didn't think he was necessarily setting up forbidden conclusions, since he did add some qualifications allowing that in some circumstances it could be justified to hold such opinions.

Comment author: PaulAlmond 28 August 2010 09:55:00PM *  3 points [-]

Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?

  1. Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
  2. If you save the world, you will be about the most famous person ever in the future.
  3. Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
  4. Therefore the chances of anyone who thinks he is Eliezer Yudkowsky actually being the original, 21st-century one are very small.
  5. Therefore you are almost certainly an AI, and none of the rest of us are here - except maybe as stage props with varying degrees of cognition (and you probably never even heard of me before, so someone like me would probably not get represented in any detail in an Eliezer Yudkowsky simulation). That would mean that I am not even conscious and am just some simple subroutine. Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...
Comment author: wedrifid 29 August 2010 02:45:07AM 2 points [-]

Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...

That doesn't seem scary to me at all. I still know that there is at least one of me that I can consider 'real'. I will continue to act as if I am one of the instances that I consider me/important. I've lost no existence whatsoever.

Comment author: Wei_Dai 29 August 2010 02:04:48AM 0 points [-]

You can see Eliezer's position on the Simulation Argument here.

Comment author: multifoliaterose 20 August 2010 06:53:48PM *  -2 points [-]

To be quite clear about which of Unknowns' points I object, my main objection is to the point:

I am critical to this Friendly AI project that has a significant chance of success

where 'I' is replaced by "Eliezer." I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on. (Maybe even much less than that - I would have to spend some time calibrating my estimate to make a judgment on precisely how low a probability I assign to the proposition.)

My impression is that you've greatly underestimated the difficulty of building a Friendly AI.

Comment author: Eliezer_Yudkowsky 20 August 2010 07:00:52PM 15 points [-]

I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you're working on.

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

My impression is that you've greatly underestimated the difficulty of building a Friendly AI.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Comment author: ata 20 August 2010 07:11:45PM 13 points [-]

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

On the other hand, assuming he knows what it means to assign something a 10^-9 probability, it sounds like he's offering you a bet at 1000000000:1 odds in your favour. It's a good deal, you should take it.

Comment author: rabidchicken 21 August 2010 04:42:27PM *  4 points [-]

Indeed. I do not know how many people are actively involved in FAI research, but I would guess that it is only in the dozens to hundreds. Given the small pool of competition, it seems likely that at some point Eliezer will make, or already has made, a unique contribution to the field. Get Multi to put some money on it: offer him 1 cent if you do not make a useful contribution in the next 50 years, and if you do, he can pay you 10 million dollars.

Comment author: Unknowns 20 August 2010 07:08:40PM 13 points [-]

I agree it's kind of ironic that multi has such an overconfident probability assignment right after criticizing you for being overconfident. I was quite disappointed with his response here.

Comment author: multifoliaterose 20 August 2010 07:52:48PM 2 points [-]

Why does my probability estimate look overconfident?

Comment author: steven0461 20 August 2010 09:02:03PM *  15 points [-]

One could offer many crude back-of-envelope probability calculations. Here's one: let's say there's

  • a 10% chance AGI is easy enough for the world to do in the next few decades
  • a 1% chance that if the world can do it, a team of supergeniuses can do the Friendly kind first
  • an independent 10% chance Eliezer succeeds at putting together such a team of supergeniuses

That seems conservative to me and implies at least a 1 in 10^4 chance. Obviously there's lots of room for quibbling here, but it's hard for me to see how such quibbling could account for five orders of magnitude. And even if post-quibbling you think you have a better model that does imply 1 in 10^9, you only need to put little probability mass on my model or models like it for them to dominate the calculation. (E.g., a 9 in 10 chance of a 1 in 10^9 chance plus a 1 in 10 chance of a 1 in 10^4 chance is close to a 1 in 10^5 chance.)
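The two numbers in this back-of-envelope model can be reproduced directly; a quick sketch (the three probabilities are the comment's own illustrative figures, not established estimates):

```python
# Three independent events, per the model above.
p_agi_soon = 0.10        # AGI is doable in the next few decades
p_friendly_first = 0.01  # a team of supergeniuses does the Friendly kind first
p_team_assembled = 0.10  # Eliezer succeeds at putting such a team together

p_model = p_agi_soon * p_friendly_first * p_team_assembled
print(round(p_model, 6))  # 0.0001, i.e. 1 in 10^4

# Mixing models: 9/10 weight on a 1-in-10^9 model, 1/10 weight on this one.
p_mixture = 0.9 * 1e-9 + 0.1 * p_model
print(p_mixture)  # ~1e-5: the less extreme model dominates the mixture
```

This illustrates the closing point: once any non-trivial weight goes on the less extreme model, that model's probability dominates the final answer.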

Comment author: multifoliaterose 20 August 2010 09:58:40PM *  1 point [-]

I don't find these remarks compelling. I feel similar remarks could be used to justify nearly anything. Of course, I owe you an explanation. One will follow later on.

Comment author: Unknowns 21 August 2010 05:26:44AM *  2 points [-]

Unless you've actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident. Even Eliezer said that he couldn't assign a probability as low as one in a billion for the claim "God exists" (although Michael Vassar criticized him for this, showing himself to be even more overconfident than Eliezer).

Comment author: komponisto 23 August 2010 11:25:52AM 5 points [-]

Unless you've actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident.

I'm afraid I have to take severe exception to this statement.

You give the human species far too much credit if you think that our mere ability to dream up a hypothesis automatically raises its probability above some uniform lower bound.

Comment author: [deleted] 21 August 2010 07:54:24AM 1 point [-]

The product of two probabilities above your threshold-for-overconfidence can be below your threshold-for-overconfidence. Have you at least thought this through before?

For instance, the claim "there is a God" is not that much less spectacular than the claim "there is a God, and he's going to make the next 1000 times you flip a coin turn up heads." If one-in-a-billion is a lower bound for the probability that God exists, then one-in-a-billion-squared is a generous lower bound for the probability that the next 1000 coin flips will all turn up heads. (One-in-a-billion-squared is about one in 2-to-the-sixtieth.) You're OK with that?
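The squaring step can be checked numerically; a small sketch of the compound-claim bound (nothing here beyond the figures already in the comment):

```python
import math

p_god = 1e-9  # the disputed lower bound for "there is a God"

# The compound claim (God exists AND he makes the next 1000 flips heads)
# is assigned at most the square of that bound, per the argument above.
p_compound = p_god ** 2  # 1e-18

# Expressed as a power of two: roughly one in 2^60, as the comment says.
print(math.log2(1 / p_compound))  # ~59.8
```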

Comment author: multifoliaterose 21 August 2010 05:33:43AM 0 points [-]

My estimate does come from some effort at calibration, although there's certainly more that I could do. Maybe I should have qualified my statement by saying "this estimate may be a gross overestimate or a gross underestimate."

In any case, I was not being disingenuous or flippant. I have carefully considered the question of how likely it is that Eliezer will be able to play a crucial role in an FAI project if he continues to exhibit a strategy qualitatively similar to his current one, and my main objection to SIAI's strategy is that I think it extremely unlikely that Eliezer will be able to have an impact if he proceeds as he has up until this point.

I will detail why I don't think that Eliezer's present strategy of working toward an FAI is a fruitful one in a later top-level post.

Comment author: multifoliaterose 20 August 2010 07:09:42PM 0 points [-]

I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.

I don't understand this remark.

What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you're working on? I can engage with a specific number. I don't know if your objection is that my estimate is off by a single order of magnitude or by many orders of magnitude.

Out of weary curiosity, what is it that you think you know about Friendly AI that I don't?

I should clarify that my comment applies equally to AGI.

I think that I know the scientific community better than you do, and have confidence that if creating an AGI were as easy as you seem to think it is (how easy, I don't know, because you didn't give a number), then there would be people in the scientific community working on AGI.

And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?

Yes, this possibility has certainly occurred to me. I just don't know what your different non-crazy beliefs might be.

Why do you think that AGI research is so uncommon within academia if it's so easy to create an AGI?

Comment author: khafra 20 August 2010 07:44:57PM *  4 points [-]

This question sounds disingenuous to me. There is a large gap between "a 10^-9 chance of Eliezer accomplishing it" and "so easy for the average machine learning PhD." Whatever else you think about him, he's proved himself to be at least one or two standard deviations above the average PhD in ability to get things done, and in some dimension of rationality/intelligence/smartness.

Comment author: multifoliaterose 20 August 2010 07:56:56PM *  0 points [-]

My remark was genuine. Two points:

  1. I think that the chance that any group of the size of SIAI will develop AGI over the next 50 years is quite small.

  2. Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done. As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.

Comment author: XiXiDu 20 August 2010 08:16:25PM 3 points [-]

Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done.

He actually stated that himself several times.

So I do understand that, and I did set out to develop such a theory, but my writing speed on big papers is so slow that I can't publish it. Believe it or not, it's true.

Yes, ok, this does not mean his intellectual power isn't on par; it speaks rather to his ability to function in an academic environment.

As far as I know he has no experience with narrow AI research.

Well...

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. [...] And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?"

Comment author: Vladimir_Nesov 20 August 2010 08:51:12PM 1 point [-]

As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.

Most things can be studied through the use of textbooks. Some familiarity with AI is certainly helpful, but it seems that most AI-related knowledge is not on the track to FAI (and most current AGI stuff is nonsense or even madness).

Comment author: Emile 20 August 2010 09:04:58PM 3 points [-]

I think that I know the scientific community better than you do, and have confidence that if creating an AGI were as easy as you seem to think it is (how easy, I don't know, because you didn't give a number), then there would be people in the scientific community working on AGI.

Um, and there aren't?

Comment author: multifoliaterose 20 August 2010 09:53:55PM 1 point [-]

Give some examples. There may be a few people in the scientific community working on AGI, but my understanding is that basically everybody is doing narrow AI.

Comment author: Vladimir_Nesov 20 August 2010 11:24:04PM *  5 points [-]

What is currently called the AGI field will probably bear no fruit, perhaps except for the end-game when it borrows then-sufficiently powerful tools from more productive areas of research (and destroys the world). "Narrow AI" develops the tools that could eventually allow the construction of random-preference AGI.

Comment author: Nick_Tarleton 20 August 2010 09:57:49PM *  4 points [-]

The folks here, for a start.

Comment author: [deleted] 20 August 2010 10:42:06PM *  2 points [-]

Why are people boggling at the 1-in-a-billion figure? You think it's not plausible that there are three independent 1-in-a-thousand events that would have to go right for EY to "play a critical role in Friendly AI success"? Not plausible that there are 9 1-in-10 events that would have to go right? Don't I keep hearing "shut up and multiply" around here?

Edit: Explain to me what's going on. I say that it seems to me that events A, B are likely to occur with probability P(A), P(B). You are allowed to object that I must have made a mistake, because P(A) times P(B) seems too small to you? (That is leaving aside the idea that 10-to-the-minus-nine counts as one of these too-small-to-be-believed numbers, which is seriously making me physiologically angry, ha-ha.)

Comment author: steven0461 20 August 2010 10:51:06PM *  10 points [-]

The 1-in-a-billion follows not from it being plausible that there are three such events, but from it being virtually certain. Models without such events will end up dominating the final probability. I can easily imagine that if I magically happened upon a very reliable understanding of some factors relevant to future FAI development, the 1 in a billion figure would be the right thing to believe. But I can easily imagine it going the other way, and absent such understanding, I have to use estimates much less extreme than that.

Comment author: [deleted] 20 August 2010 10:55:11PM *  0 points [-]

I'm having trouble parsing your comment. Could you clarify?

A billion is not so big a number. Its reciprocal is not so small a number.

Edit: Specifically, what's "it" in "it being virtually certain." And in the second sentence -- models of what, final probability of what?

Edit 2: -1 now that I understand. +1 on the child, namaste. (+1 on the child, but I just disagree about how big one billion is. So what do we do?)

Comment author: steven0461 20 August 2010 11:05:38PM *  4 points [-]

what's "it" in "it being virtually certain."

"it being virtually certain that there are three independent 1 in 1000 events required, or nine independent 1 in 10 events required, or something along those lines"

models of what, final probability of what?

Models of the world that we use to determine how likely it is that Eliezer will play a critical role through a FAI team. Final probability of that happening.

A billion is big compared to the relative probabilities we're rationally entitled to have between models where a series of very improbable successes is required, and models where only a modest series of modestly improbable successes is required.

Comment author: multifoliaterose 20 August 2010 10:43:53PM 1 point [-]

Yes, this is of course what I had in mind.

Comment author: Vladimir_Nesov 20 August 2010 09:09:25PM 0 points [-]

Replied to this comment and the other (seeming contradictory) one here.

Comment author: CarlShulman 20 August 2010 04:21:41AM *  2 points [-]

1) can be finessed easily on its own with the idea that, since we're talking about existential risk, even quite small probabilities are significant.

3) could be finessed by using a very broad definition of "Friendly AI" that amounted to "taking some safety measures in AI development and deployment."

But if one uses the same senses in 2), then one gets the claim that most of the probability of non-disastrous AI development is concentrated in one's specific project, which is a different claim than "project X has a better expected value, given what I know now about capacities and motivations, than any of the alternatives (including future ones which will likely become more common as a result of AI advance and meme-spreading independent of me) individually, but less than all of them collectively."

Comment author: WrongBot 20 August 2010 04:29:45AM 5 points [-]

Who else is seriously working on FAI right now? If other FAI projects begin, then obviously updating will be called for. But until such time, the claim that "there is no significant chance of Friendly AI without this project" is quite reasonable, especially if one considers the development of uFAI to be a potential time limit.

Comment author: CarlShulman 20 August 2010 04:45:23AM *  5 points [-]

"there is no significant chance of Friendly AI without this project" Has to mean over time to make sense.

People who will be running DARPA, or Google Research, or some hedge fund's AI research group in the future (and who will know about the potential risks, or be able to learn about them easily if they find themselves making big progress) will get the chance to take safety measures. We have substantial uncertainty about how extensive those safety measures would need to be to work, how difficult they would be to create, and the relevant timelines.

Think about resource depletion or climate change: even if the issues are neglected today relative to an ideal level, as a problem becomes more imminent, with more powerful tools and information to deal with it, you can expect to see new mitigation efforts spring up (including efforts by existing organizations such as governments and corporations).

However, acting early can sometimes have benefits that outweigh the lack of info and resources available further in the future. For example, geoengineering technology can provide insurance against very surprisingly rapid global warming, and cheap plans that pay off big in the event of surprisingly easy AI design may likewise have high expected value. Or, if AI timescales are long, there may be slowly compounding investments, like lines of research or building background knowledge in elites, which benefit from time to grow. And to the extent these things are at least somewhat promising, there is substantial value of information to be had by investigating now (similar to increasing study of the climate to avoid nasty surprises).