The Importance of Self-Doubt

23 Post author: multifoliaterose 19 August 2010 10:47PM

[Added 02/24/14: After I got feedback on this post, I realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them), and if I were to write it again, I would have framed things differently. See Reflections on a Personal Public Relations Failure: A Lesson in Communication for more information. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

Follow-up to: Other Existential Risks, Existential Risk and Public Relations

Related to: Tsuyoku Naritai! (I Want To Become Stronger), Affective Death Spirals, The Proper Use of Doubt, Resist the Happy Death Spiral, The Sin of Underconfidence

In Other Existential Risks I began my critical analysis of what I understand to be SIAI's most basic claims. In particular I evaluated part of the claim

(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.

It's become clear to me that before I evaluate the claim

(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.

I should (a) articulate my reasons for believing in the importance of self-doubt and (b) give the SIAI staff an opportunity to respond to the points which I raise in the present post as well as my two posts titled Existential Risk and Public Relations and Other Existential Risks.

Yesterday SarahC described to me how she had found Eliezer's post Tsuyoku Naritai! (I Want To Become Stronger) really moving. She explained:

I thought it was good: the notion that you can and must improve yourself, and that you can get farther than you think.

I'm used to the other direction: "humility is the best virtue."

I mean, this is a big fuck-you to the book of Job, and it appeals to me.

I was happy to learn that SarahC had been positively affected by Eliezer's post. Self-actualization is a wonderful thing and it appears as though Eliezer's posting has helped her self-actualize. On the other hand, rereading the post prompted me to notice that there's something about it which I find very problematic. The last few paragraphs of the post read:

Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws.  This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loath to relinquish your ignorance when evidence comes knocking.  Likewise with our flaws - we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.

Otherwise, when the one comes to us with a plan for correcting the bias, we will snarl, "Do you think to set yourself above us?"  We will shake our heads sadly and say, "You must not be very self-aware."

Never confess to me that you are just as flawed as I am unless you can tell me what you plan to do about it.  Afterward you will still have plenty of flaws left, but that's not the point; the important thing is to do better, to keep moving ahead, to take one more step forward.  Tsuyoku naritai!

There's something to what Eliezer is saying here: when people are too strongly committed to the idea that humans are fallible, this can become a self-fulfilling prophecy in which humans give up on trying to improve things and, as a consequence, remain fallible when they could have improved. As Eliezer has said in The Sin of Underconfidence, there are social pressures that push against having high levels of confidence even when confidence is epistemically justified:

To place yourself too high - to overreach your proper place - to think too much of yourself - to put yourself forward - to put down your fellows by implicit comparison - and the consequences of humiliation and being cast down, perhaps publicly - are these not loathsome and fearsome things?

To be too modest - seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.

I have personal experience with underconfidence. I'm a careful thinker, and when I express a position with confidence my position is typically well considered. For many years I generalized from one example and assumed that when people express positions with confidence, they've thought their positions out as well as I have. Even after being presented with massive evidence that few people think things through as carefully as I do, I persisted in granting the (statistically ill-considered) positions of others far more weight than they deserved, for the very reason that Eliezer describes above. This seriously distorted my epistemology because it led me to systematically give ill-considered positions substantial weight. I feel that I have improved on this point, but even now, from time to time I notice that I'm exhibiting irrationally low levels of confidence in my positions.

At the same time, I know that at times I've been overconfident as well. In high school I went through a period when I believed that I was a messianic figure whose existence had been preordained by a watchmaker God who planned for me to save the human race. It's appropriate to say that during this period of time I suffered from extreme delusions of grandeur. I viscerally understand how it's possible to fall into an affective death spiral.

In my view one of the central challenges of being human is to find an instrumentally rational balance between subjecting oneself to influences which push one in the direction of overconfidence and subjecting oneself to influences which push one in the direction of underconfidence.

In Tsuyoku Naritai! Eliezer describes how Orthodox Judaism attaches an unhealthy moral significance to humility. Having grown up in a Jewish household, and as a consequence having had peripheral acquaintance with Orthodox Judaism, I agree with Eliezer's analysis of Orthodox Judaism in this regard. In The Proper Use of Doubt, Eliezer describes how the Jesuits are allegedly told to doubt their doubts about Catholicism. I agree with Eliezer that self-doubt can be misguided and abused.

However, reversed stupidity is not intelligence. The fact that it's possible to ascribe too much moral significance to self-doubt and humility does not mean that one should not attach moral significance to self-doubt and humility. I strongly disagree with Eliezer's prescription: "Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws."

The mechanism that determines human action is that we do what makes us feel good (at the margin) and refrain from doing what makes us feel bad (at the margin). This principle applies to all humans, from Gandhi to Hitler. Our ethical challenge is to shape what makes us feel good and what makes us feel bad in a way that incentivizes us to behave in accordance with our values. There are times when it's important to recognize that we're biased and flawed. Under such circumstances, we should feel proud that we recognize that we're biased; we should glory in our self-awareness of our flaws. If we don't, then we will have no incentive to recognize that we're biased and be aware of our flaws.

We did not evolve to exhibit admirable and noble behavior. We evolved to exhibit behaviors which have historically been correlated with maximizing our reproductive success. Because our ancestral environment was very much a zero-sum situation, the traits that were historically correlated with maximizing our reproductive success had a lot to do with gaining high status within our communities. As Yvain has said, it appears that a fundamental mechanism of the human brain which was historically correlated with gaining high status is to make us feel good when we have a high self-image and feel bad when we have a low self-image.

When we obtain new data, we fit it into a narrative which makes us feel as good about ourselves as possible, in a way conducive to maintaining a high self-image. This mode of cognition can lead to very seriously distorted epistemology. This is what happened to me in high school when I believed that I was a messianic figure sent by a watchmaker God. Because we flatter ourselves by default, it's very important that those of us who aspire to epistemic rationality incorporate a significant element of "I'm the sort of person who engages in self-doubt because it's the right thing to do" into our self-image. If we do this, then when we're presented with evidence which entails a drop in our self-esteem, we don't reject it out of hand or minimize it as we've been evolutionarily conditioned to do, because the wound of properly assimilating the data is counterbalanced by the salve of the feeling "At least I'm a good person, as evidenced by the fact that I engage in self-doubt," and because failing to exhibit self-doubt would itself entail an emotional wound.

This is the only potential immunization against the disease of self-serving narratives which afflicts all utilitarians by virtue of their being human. Until technology allows us to modify ourselves in a radical way, we cannot hope to be rational without attaching moral significance to the practice of engaging in self-doubt. As RationalWiki's page on LessWrong says:

A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions. However, there is no hack that transcends being human...You are an ape with pretensions. Playing a "let's pretend" game otherwise doesn't mean you win all arguments, or any. Even if it's a very elaborate one, you won't transcend being an ape. Any "rationalism" that doesn't expressly take into account humans being apes with pretensions, isn't.


In Existential Risk and Public Relations I suggested that some of Eliezer's remarks convey the impression that Eliezer has an unjustifiably high opinion of himself. In the comments to the post JRMayne wrote

I think the statements that indicate that [Eliezer] is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word of existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI if Eliezer stops saying these things, because it appears he'll still be thinking those things.

When Eliezer responded to JRMayne's comment, Eliezer did not dispute the claim that JRMayne attributed to him. I responded to Eliezer saying

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

I was disappointed, but not surprised, that Eliezer did not respond. As far as I can tell, Eliezer does have confidence in the idea that he is (at least nearly) the most important person in human history. Eliezer's silence only serves to further confirm my earlier impressions. I hope that Eliezer subsequently proves me wrong. [Edit: As Airedale points out, Eliezer has in fact exhibited public self-doubt in his abilities in his posting The Level Above Mine. I find this reassuring and it significantly lowers my confidence that Eliezer claims that he's the most important person in human history. But Eliezer still hasn't made a disclaimer on this matter decisively indicating that he does not hold such a view.] The modern world is sufficiently complicated that no human, no matter how talented, can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

There's some sort of serious problem with the present situation. I don't know whether it's a public relations problem or if the situation is that Eliezer actually suffers from extreme delusions of grandeur, but something has gone very wrong. The majority of the people I know outside of Less Wrong who have heard of Eliezer and Less Wrong have the impression that Eliezer is suffering from extreme delusions of grandeur. To such people, this fact (quite reasonably) calls into question the value of SIAI and Less Wrong. On one hand, SIAI looks like an organization which is operating under beliefs which Eliezer has constructed to place himself in as favorable a position as possible rather than with a view toward reducing existential risk. On the other hand, Less Wrong looks suspiciously like the cult of Objectivism: a group of smart people who are obsessed with the writings of a very smart person who is severely deluded, and who describe these writings and the associated ideology as "rational" although they are nothing of the kind.

My own views are somewhat more moderate. I think that the Less Wrong community and Eliezer are considerably more rational than the Objectivist movement and Ayn Rand (respectively). I nevertheless perceive unsettling parallels.


In the comments to Existential Risk and Public Relations, timtyler said

...many people have inflated views of their own importance. Humans are built that way. For one thing, It helps them get hired, if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

I disagree with timtyler. Anything that has even a slight systematic negative impact on existential risk is a big deal.

Some of my most enjoyable childhood experiences involved playing Squaresoft RPGs. Games like Chrono Trigger, Illusion of Gaia, Earthbound, Xenogears, and the Final Fantasy series are all stories about a group of characters who bond and work together to save the world. I found these games very moving and inspiring. They prompted me to fantasize about meeting allies who I could bond with and work together with to save the world. I was lucky enough to meet one such person in high school who I've been friends with since. When I first encountered Eliezer I found him eerily familiar, as though he was a long lost brother. This is the same feeling that is present between Siegmund and Sieglinde in the Act 1 of Wagner's Die Walküre (modulo erotic connotations). I wish that I could be with Eliezer in a group of characters as in a Squaresoft RPG working to save the world. His writings such as One Life Against the World and Yehuda Yudkowsky, 1985-2004 reveal him to be a deeply humane and compassionate person.

This is why it's so painful for me to observe that Eliezer appears to be deviating so sharply from leading a genuinely utilitarian lifestyle. I feel a sense of mono no aware, wondering how things could have been under different circumstances.

One of my favorite authors is Kazuo Ishiguro, who writes about the themes of self-deception and people's attempts to contribute to society. In a very good interview Ishiguro said

I think that's partly what interests me in people, that we don't just wish to feed and sleep and reproduce then die like cows or sheep. Even if they're gangsters, they seem to want to tell themselves they're good gangsters and they're loyal gangsters, they've fulfilled their 'gangstership' well. We do seem to have this moral sense, however it's applied, whatever we think. We don't seem satisfied, unless we can tell ourselves by some criteria that we have done it well and we haven't wasted it and we've contributed well. So that is one of the things, I think, that distinguishes human beings, as far as I can see.

But so often I've been tracking that instinct we have and actually looking at how difficult it is to fulfill that agenda, because at the same time as being equipped with this kind of instinct, we're not actually equipped. Most of us are not equipped with any vast insight into the world around us. We have a tendency to go with the herd and not be able to see beyond our little patch, and so it is often our fate that we're at the mercy of larger forces that we can't understand. We just do our little thing and hope it works out. So I think a lot of the themes of obligation and so on come from that. This instinct seems to me a kind of a basic thing that's interesting about human beings. The sad thing is that sometimes human beings think they're like that, and they get self-righteous about it, but often, they're not actually contributing to anything they would approve of anyway.

[...]

There is something poignant in that realization: recognizing that an individual's life is very short, and if you mess it up once, that's probably it. But nevertheless, being able to at least take some comfort from the fact that the next generation will benefit from those mistakes. It's that kind of poignancy, that sort of balance between feeling defeated but nevertheless trying to find reason to feel some kind of qualified optimism. That's always the note I like to end on. There are some ways that, as the writer, I think there is something sadly pathetic but also quite noble about this human capacity to dredge up some hope when really it's all over. I mean, it's amazing how people find courage in the most defeated situations.

Ishiguro's quote describes how people often behave in accordance with a sincere desire to contribute and end up doing things that are very different from what they thought they were doing (things which are relatively unproductive or even counterproductive). Like Ishiguro, I find this phenomenon very sad. As Ishiguro hints, this phenomenon can also result in crushing disappointment later in life. I feel a deep spiritual desire to prevent this from happening to Eliezer.

Comments (726)

Comment author: Friendly-HI 29 January 2013 12:41:42AM *  4 points [-]

As of yet Eliezer's importance is just a stochastic variable yet to be realized, for all I know he could be killed in a car accident tomorrow or simply fail at his task of "saving the world" in numerous ways.

Up until now, Vasili Arkhipov, Stanislav Petrov, and a few other people whose names I do not know (including our earliest ancestors who managed to avoid being killed during their emigration out of Africa) trump Eliezer by the tiny margin of actually saving humanity - or at least civilization.

All that being said Eliezer is still pretty awesome by my standards. And he writes good fanfiction, too.

Comment author: Eneasz 25 August 2010 06:06:10PM 2 points [-]

As far as I can tell, Eliezer does have confidence in the idea that he is (at least nearly) the most important person in human history. Eliezer's silence only serves to further confirm my earlier impressions

I suppose you also believe that Obama must prove he's not a muslim? And must do so again every time someone asserts that he is?

Let me say that Eliezer may have already done more to save the world than most people in history. This is going on the assumption that FAI is a serious existential risk. Even if he is doing it wrong and his work will never directly contribute to FAI in any way, his efforts at popularizing the existence of this threat have vastly increased the pool of people who know of it and want to help in some way.

His skill at explanation and inspiration have brought more attention to this issue than any other single person I know of. The fact that he also has the intellect to work directly on the problem is simply an added bonus. And I strongly doubt that it's driven away anyone who would have otherwise helped.

You said you had delusions of messianic grandeur in high school, but you're better now. But then you post an exceptionally well done personal take-down of someone who YOU believe is too self-confident and who (more importantly) has convinced others that his confidence is justified. I think your delusions of messiah-hood are still present, perhaps unacknowledged, and you are suffering from envy of someone you view as "a more successful messiah".

Comment author: multifoliaterose 25 August 2010 09:37:16PM *  3 points [-]

I suppose you also believe that Obama must prove he's not a muslim? And must do so again every time someone asserts that he is?

I don't see the situation that you cite as comparable. Obama has stated that he's a Christian, and this seriously calls into question the idea that he's a Muslim.

Has Eliezer ever said something which calls my interpretation of the situation into question? If so I'll gladly link a reference to it in my top level post.

(As an aside, I agree with Colin Powell that whether or not Obama is a Muslim has no bearing on whether he's fit to be president.)

Let me say that Eliezer may have already done more to save the world than most people in history. This is going on the assumption that FAI is a serious existential risk. Even if he is doing it wrong and his work will never directly contribute to FAI in any way, his efforts at popularizing the existence of this threat have vastly increased the pool of people who know of it and want to help in some way.

His skill at explanation and inspiration have brought more attention to this issue than any other single person I know of. The fact that he also has the intellect to work directly on the problem is simply an added bonus. And I strongly doubt that it's driven away anyone who would have otherwise helped.

I definitely agree that some of what Eliezer has done has reduced existential risk. As I've said elsewhere, I'm grateful to Eliezer for inspiring me personally to think more about existential risk.

However, as I've said, in my present epistemological state I believe that he's also had (needless) negative effects on existential risk on account of making strong claims with insufficient evidence. See especially my responses to komponisto's comment. I may be wrong about this.

In any case, I would again emphasize that my most recent posts should not be interpreted as personal attacks on Eliezer. I'm happy to support Eliezer to the extent that he does things that I understand to lower existential risk.

You said you had delusions of messianic grandeur in high school, but you're better now. But then you post an exceptionally well done personal take-down of someone who YOU believe is too self-confident and who (more importantly) has convinced others that his confidence is justified. I think your delusions of messiah-hood are still present, perhaps unacknowledged, and you are suffering from envy of someone you view as "a more successful messiah".

My conscious motivation for making my most recent string of posts is given in my Transparency and Accountability posting. I have no conscious awareness of having a motivation of the type that you describe.

Of course, I may be deluded about this (just as all humans may be deluded about possessing any given belief). In line with my top level posting, I'm interested in seriously considering the possibility that my unconscious motivations are working against my conscious goals.

However, I see your own impression as very poor evidence that I may be deluded on this particular point in light of your expressed preference for donating to Eliezer and SIAI even if doing so is not socially optimal:

And my priests are Eliezer Yudkowsky and the SIAI fellows. I don't believe they leech off of me; I feel they earn every bit of respect and funding they get. But that's beside the point. The point is that even if the funds I gave were spent sub-optimally, I would STILL give them this money, simply because I want other people to see that MY priests are better taken care of than THEIR priests.

I don't judge you for having this motivation (we're all only human). But the fact that you seem interested in promoting Eliezer and SIAI independently of whether doing so benefits broader society has led me to greatly discount your claims and suggestions which relate to Eliezer and SIAI.

Comment author: Eneasz 26 August 2010 12:08:58AM *  2 points [-]

(As an aside, I agree with Colin Powell that whether or not Obama is a Muslim has no bearing on whether he's fit to be president.)

Does whether Eliezer is over-confident or not have any bearing on whether he's fit to work on FAI?

I believe that he's also had (needless) negative effects on existential risk on account of making strong claims with insufficient evidence. See especially my responses to komponisto's comment. I may be wrong about this.

From the comment:

My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.

The claim is not credible. I've seen a few examples given, but with no way to determine if the people "repelled" would have ever been open to mitigating existential risk in the first place. I suspect anyone who actually cares about existential risk wouldn't dismiss an idea out of hand because a well-known person working to reduce risk thinks his work is very valuable. It is unlikely to be their true rejection

In any case, I would again emphasize that my most recent posts should not be interpreted as personal attacks on Eliezer.

The latest post made this clear, and cheers for that. But the previous ones are written as attacks on Eliezer. It's hard to see a diatribe against someone describing them as a cult leader who's increasing existential risk and would do best to shut up and not interpret it as a personal attack.

But the fact that you seem interested in promoting Eliezer and SIAI independently of whether doing so benefits broader society has led me to greatly discount your claims and suggestions which relate to Eliezer and SIAI.

Fair enough, can't blame you for that. I'm happy with my enthusiasm.

Comment author: multifoliaterose 26 August 2010 12:42:02AM 2 points [-]

Does whether Eliezer is over-confident or not have any bearing on whether he's fit to work on FAI?

Oh, I don't think so, see my response to Eliezer here.

The claim is not credible. I've seen a few examples given, but with no way to determine if the people "repelled" would have ever been open to mitigating existential risk in the first place. I suspect anyone who actually cares about existential risk wouldn't dismiss an idea out of hand because a well-known person working to reduce risk thinks his work is very valuable. It is unlikely to be their true rejection

Yes, so here it seems like there's enough ambiguity as to how the publicly available data is properly interpreted that we may have a legitimate difference of opinion on account of having had different experiences. As Scott Aaronson mentioned in the Bloggingheads conversation, humans have their information stored in a form (largely subconscious) such that it's not readily exchanged.

All I would add to what I've said is that if you haven't already done so, see the responses to michaelkeenan's comment here (in particular those by myself, bentarm and wedrifid).

If you remain unconvinced, we can agree to disagree without hard feelings :-)

Comment author: JamesAndrix 23 August 2010 05:38:43AM 9 points [-]

How would you address this?

http://scienceblogs.com/pharyngula/2010/08/kurzweil_still_doesnt_understa.php

It seems to me like PZ Myers really doesn't understand information theory. He's attacking Kurzweil and calling him a kook, initially due to a relatively straightforward complexity estimate.

And I'm pretty confident that Myers is wrong on this, unless there is another information-rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

This looks to me like a popular science blogger doing huge PR damage to everything singularity related, and being wrong about it. Even if he is later convinced of this point.

I don't see how to avoid this short of just holding back all claims which seem exceptional and that some 'reasonable' person might fail to understand and see as a sign of cultishness. If we can't make claims as basic as the design of the brain being in the genome, then we may as well just remain silent.

But then we wouldn't find out if we're wrong, and we're rationalists.

Comment author: knb 26 August 2010 05:17:21AM *  2 points [-]

Myers has always had a tendency to attack other people's arguments like enemy soldiers. A good example is his take on evolutionary psychology, which he hates so much it is actually funny.

And then look at the source: Satoshi Kanazawa, the Fenimore Cooper of Sociobiology, the professional fantasist of Psychology Today. He's like the poster boy for the stupidity and groundlessness of freakishly fact-free evolutionary psychology. Just ignore anything with Kanazawa's name on it.

He also claims to have desecrated a consecrated host (the sacramental wafers Catholics consider to be the body of Jesus). That will show those evil theists how a good, rational person behaves!

Comment author: wedrifid 24 August 2010 01:41:36AM *  -2 points [-]

And I'm pretty confident that Myers is wrong on this, unless there is another information rich source of inheritance besides DNA

Personal libraries.

Comment author: Kingreaper 23 August 2010 03:26:45PM 4 points [-]

And I'm pretty confident that Myers is wrong on this, unless there is another information-rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

The environment is information-rich, especially the social environment.

Myers makes it quite clear that interactions with the environment are an expected input of information in his understanding.

Do you disagree with information input from the environment?

Comment author: JamesAndrix 23 August 2010 05:10:13PM 4 points [-]

Yes, I disagree.

If he's not talking about some stable information that is present in all environments that yield intelligent humans, then what's important is a kind of information that can be mass generated at low complexity cost.

Even language exposure is relatively low complexity, and the key parts might be inferable from brain processes. And we already know how to offer a socially rich environment, so I don't think it should add to the complexity costs of this problem.

And I think a reverse engineering of a newborn baby brain would be quite sufficient for Kurzweil's goal.

In short: we know intelligent brains get reliably generated. We know it's very complex. The source of that complexity must be something information rich, stable, and universal. I know of exactly one such source.

Right now I'm reading Myers' argument as "a big part of human heredity is memetic rather than just genetic, and there is complex interplay between genes and memes, so you've got to count the memes as part of the total complexity."

I say that Kurzweil is trying to create something compatible with human memes in the first place, so we can load them the same way we load children (at worst). And even for the classes of memes that do interact tightly with genes (age-appropriate language exposure), their information content is not all that high.

Comment author: whpearson 23 August 2010 05:33:57PM -1 points [-]

And I think a reverse engineering of a newborn baby brain would be quite sufficient for Kurzweil's goal.

While doable, this seems like a very time-consuming project, and potentially morally dubious. How do you know when you have succeeded and not gotten a mildly brain-damaged one because you have missed an important detail needed for language learning?

We really don't want to be running multi year experiments, where humans have to interact with infant machines, that would be ruinously expensive. The quicker you can evaluate the capabilities of the machine the better.

Comment author: JamesAndrix 23 August 2010 05:53:39PM 0 points [-]

Well, in Kurzweil's case, you'd look at the source code and debug it to make sure it's doing everything it's supposed to, because he's not dealing with a meat brain.

I guess my real point is that language learning should not be tacked on to the problem of reverse engineering the brain. If he makes something that is as capable of learning, that's a win for him. (Hopefully he also reverse engineers all of human morality.)

Comment author: whpearson 25 August 2010 04:36:31PM 0 points [-]

You are assuming the program found via the reverse engineering process is human-understandable... What if it is a strange cellular automaton with odd rules? Or an algorithm with parameters where you don't know why they are what they are?

Language is an important part of learning for humans. Imagine trying to learn chess if no one explained the legal moves. Something without the capability for language isn't such a big win IMHO.

Comment author: JamesAndrix 25 August 2010 05:09:03PM 0 points [-]

I think we might have different visions of what this reverse engineering would entail. By my concept, if you don't understand the function of the program you wrote, you're not done reverse engineering.

I do think that something capable of learning language would be necessary for a win, but the information content of the language does not count towards the complexity estimate of the thing capable of learning language.

Comment author: WrongBot 23 August 2010 03:15:39PM 9 points [-]

For instance, you can't measure the number of transistors in an Intel CPU and then announce, "A-ha! We now understand what a small amount of information is actually required to create all those operating systems and computer games and Microsoft Word, and it is much, much smaller than everyone is assuming."

This analogy made me cringe. Myers is disagreeing with the claim that human DNA completely encodes the structure and functioning of the human brain: the hardware and software, roughly. Looking at the complexity of the hardware and making claims about the complexity of the software, as he does here, is completely irrelevant to his disagreement. It serves only to obscure the actual point under debate, and demonstrates that he has no idea what he's talking about.

Comment author: Risto_Saarelma 23 August 2010 11:01:04AM *  7 points [-]

There seems to be a culture clash between computer scientists and biologists with this matter. DNA bit length as a back-of-the-envelope complexity estimate for a heavily compressed AGI source seems obvious to me, and, it seems, to Larry Page. Biologists are quick to jump to the particulars of protein synthesis and ignore the question of extra information, because biologists don't really deal with information theoretical existence proofs.
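For concreteness, the back-of-the-envelope arithmetic being referred to looks something like the sketch below; the genome length is the usual approximation, and the ~50 MB compressed size is Kurzweil's rough estimate (the figure discussed later in this thread), not a measurement.

```python
# Back-of-the-envelope information content of the human genome (round figures only).
GENOME_BASE_PAIRS = 3.2e9   # approximate length of the human genome
BITS_PER_BASE = 2           # four possible bases -> log2(4) = 2 bits per base pair

raw_bits = GENOME_BASE_PAIRS * BITS_PER_BASE   # ~6.4e9 bits
raw_megabytes = raw_bits / 8 / 1e6             # ~800 MB uncompressed

kurzweil_compressed_mb = 50  # Kurzweil's rough estimate after removing the genome's redundancy

print(f"raw genome:          ~{raw_bits / 1e9:.1f} gigabits (~{raw_megabytes:.0f} MB)")
print(f"compressed estimate: ~{kurzweil_compressed_mb} MB")
```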

It really doesn't help the matter that Kurzweil threw out his estimate when talking about getting at AGI by specifically emulating the human brain, instead of just trying to develop a general human-equivalent AI using code suitable for the computation platform used. This seems to steer most people into thinking that Kurzweil was thinking of using the DNA as literal source code instead of just a complexity yardstick.

Myers seems to have pretty much gone into his creationist-bashing attack mode on this, so I don't have very high hopes for any meaningful dialogue from him.

Comment author: whpearson 23 August 2010 12:24:51PM 3 points [-]

I'm still not sure what people are trying to say with this. Because the Kolmogorov complexity of the human brain given the language of the genetic code and physics is low, therefore X? What is that X precisely?

Because of Kolmogorov complexity's additive constant, which could be anything from 0 to 3^^^3 or higher, I think it only gives us weak evidence for the amount of code we should expect it to take to code an AI on a computer. It is even weaker evidence for the amount of code it would take to code for it with limited resources. E.g. the laws of physics are simple and little information is taken from the womb, but to create an intelligence from them might require a quantum computer the size of the human head to decompress the compressed code. There might be short cuts to do it, but they might be of vastly greater complexity.
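The "additive constant" here is the one from the invariance theorem for Kolmogorov complexity; a standard statement (assuming $U$ and $V$ are any two universal description languages) is

$$\bigl|\,K_U(x) - K_V(x)\,\bigr| \;\le\; c_{U,V} \qquad \text{for every string } x,$$

where $c_{U,V}$ depends only on $U$ and $V$ - roughly, the length of a translator between them - and not on $x$. The point being made above is that nothing bounds $c_{U,V}$ in advance when one "language" is the genome plus developmental biochemistry and the other is code on a digital computer.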

We tend to ignore additive constants when talking about complexity classes, because human-designed algorithms tend not to have huge additive constants. Although I have come across some in my time, such as this...

Comment author: Emile 23 August 2010 03:45:42PM 3 points [-]

We have something like this going on:

discrete DNA code -> lots of messy chemistry and biology -> human intelligence

and we're comparing it to :

discrete computer code -> computer -> human intelligence

Kurzweil is arguing that the size of the DNA code can tell us about the max size of the computer code needed to run an intelligent brain simulation (or a human-level AI), and PZ Myers is basically saying "no, 'cause that chemistry and biology is really really messy".

Now, I agree that the computer code and the DNA code are very very different ("a huge amount of enzymes interacting with each other in 3D real time" isn't the kind of thing you easily simulate on a computer), and the additive constant for converting one into the other is likely to be pretty darn big.

But I also don't see a reason for intelligence to be easier to express with messy biology and chemistry than with computer code. The things about intelligence that are the closest to biology (interfacing with the real world, how one neuron functions) are also the kind of things that we can already do quite well with computer programs.

There are some things that are "natural" to code in Prolog but not natural in Fortran. So a short program in Prolog might require a long program in Fortran to do the same thing, and for different programs it might be the other way around. I don't see any reason to think that it's easier to encode intelligence in DNA than it is in computer code.

(Now, Kurzweil may be overstating his case when he talks about "compressed" DNA, because to be fair you should compare that to compressed (or compiled) computer code, which translates to much more actual code. I still think the size of the DNA is a very reasonable upper limit, especially when you consider that the DNA was coded by a bloody idiot whose main design pattern is "copy-and-paste", resulting in the bloated code we know)
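One crude way to put "compressed DNA" and compressed code on an equal footing is to run both through the same general-purpose compressor. Below is a minimal sketch of that comparison; the file names are placeholders, not real data.

```python
# Sketch: report the size of two files after running each through the same
# general-purpose compressor (zlib). The file names below are placeholders.
import zlib

def compressed_size(path, chunk_size=1 << 20):
    """Stream a file through zlib at maximum compression; return compressed bytes."""
    compressor = zlib.compressobj(level=9)
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(compressor.compress(chunk))
    total += len(compressor.flush())
    return total

for name in ["genome_packed.bin", "brain_simulator_source.tar"]:  # hypothetical inputs
    print(name, compressed_size(name), "bytes after compression")
```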

Comment author: whpearson 23 August 2010 04:52:59PM 1 point [-]

But I also don't see a reason for intelligence to be easier to express with messy biology and chemistry than with computer code.

Do you have any reason to expect it to be the same? Do we have any reason at all? I'm not arguing that it will take more than 50 MB of code; I'm arguing that the DNA value is not informative.

The things about intelligence that are the closest to biology (interfacing with the real world, how one neuron functions) are also the kind of things that we can already do quite well with computer programs.

We are far less good at doing the equivalent of changing neural structure or adding new neurons (we don't know why or how neurogenesis works, for one) in computer programs.

Comment author: Emile 23 August 2010 07:59:22PM 2 points [-]

But I also don't see a reason for intelligence to be easier to express with messy biology and chemistry than with computer code.

Do you have any reason to expect it to be the same? Do we have any reason at all?

If I know a certain concept X requires 12 seconds of speech to express in English, and I don't know anything about Swahili beyond the fact that it's a human language, my first guess will be that concept X requires 12 seconds of speech to express in Swahili.

I would also expect compressed versions of translations in various languages of the same book to be roughly the same size.

So, even with very little information, a first estimate (with a big error margin) would be that it takes as many bits to "encode" intelligence in DNA as it does in computer code.

In addition, the fact that some intelligence-related abilities such as multiplying large numbers are easy to express in computer code, but rare in nature would make me revise that estimate towards "code as more expressive than DNA for some intelligence-related stuff".

In addition, knowledge about the history of evolution would make me suspect that large chunks of the human genome are not required for intelligence, either because they aren't expressed, or because they only concern traits that have no impact on our intelligence beyond the fact of keeping us alive. That would also make me revise my estimate downwards for the code size needed for intelligence.

None of those are very strong reasons, but they are reasons nonetheless!

Comment author: whpearson 23 August 2010 09:54:47PM 0 points [-]

If I know a certain concept X requires 12 seconds of speech to express in English, and I don't know anything about Swahili beyond the fact that it's a human language, my first guess will be that concept X requires 12 seconds of speech to express in Swahili.

You'd be very wrong for a lot of technical language, unless they just imported the English words wholesale. For example, "Algorithmic Information Theory" expresses a concept well, but I'm guessing it would be hard to explain in Swahili.

Even given that, you can expect human languages to express things at roughly the same length because they are generated by roughly the same hardware and have roughly the same concerns, e.g. things to do with humans.

To give a more realistic translation problem, how long would you expect it to take to express/explain any random English sentence in C code, or vice versa?

Comment author: Peter_de_Blanc 29 August 2010 05:00:53AM *  2 points [-]

Selecting a random English sentence will introduce a bias towards concepts that are easy to express in English.

Comment author: Emile 23 August 2010 08:31:54AM *  4 points [-]

It seems to me like PZ Myers really doesn't understand information theory. He's attacking Kurzweil and calling him a kook, initially due to a relatively straightforward complexity estimate.

I see it that way too. The DNA can give us an upper bound on the information needed to create a human brain, but PZ Myers reads that as "Kurzweil is saying we will be able to take a strand of DNA and build a brain from that in the next 10 years!", and then proceeds to attack that straw man.

This, however:

His timeline is absurd. I'm a developmental neuroscientist; I have a very good idea of the immensity of what we don't understand about how the brain works. No one with any knowledge of the field is claiming that we'll understand how the brain works within 10 years. And if we don't understand all but a fraction of the functionality of the brain, that makes reverse engineering extremely difficult.

... I am quite inclined to trust. I would trust it more if it weren't followed by statements about information theory that seem wrong (to me, at least).

Looking at the comments is depressing. I wish there were some "sane" way for two communities (readers of PZ Myers and "singularitarians") to engage without it degenerating into name-calling.

Brian: "We should unite against our common enemy!"

Others: "The Judean People's Front?"

Brian: "No! The Romans!"

Though there are software solutions for that (takeonit and other stuff that's been discussed here), it wouldn't hurt either if the "leaders" (PZ Myers, Kurzweil, etc.) were a bit more responsible and made a genuine effort to acknowledge the other's points when they are strong. Then they could converge, or at least agree to disagree on something narrow.

But nooo, it's much more fun to get angry, and it gets you more traffic too!

Comment author: RobinZ 23 August 2010 01:09:16PM 0 points [-]

The DNA can give us an upper bound on the information needed to create a human brain [...]

Why do you say this? If humans were designed by human engineers, the 'blueprints' would actually be complete blueprints, sufficient unto the task of determining the final organism ... but they weren't. There's no particular reason to doubt that a significant amount of the final data is encoded in the gestational environment.

Comment author: JamesAndrix 23 August 2010 05:13:41PM 1 point [-]

Artificial wombs

Comment author: RobinZ 23 August 2010 05:17:38PM *  0 points [-]

Don't currently exist. I'm not sure that's a strong argument.

Comment author: Emile 23 August 2010 02:20:28PM 4 points [-]

I'm not sure what you mean about the "complete blueprints" - I agree that the DNA isn't a complete blueprint, and that an alien civilization with a different chemistry would (probably) find it impossible to rebuild a human if they were just given its DNA. The gestational environment is essential; I just don't think it encodes much data on the actual working of the brain.

It seems to me that the interaction between the baby and the gestational environment is relatively simple, at least compared to organ development and differentiation. There are a lot of essential things that have to go right, and hormones and nutrients that have to be supplied, but 1) I don't see a lot of information transfer in there ("making the brain work a certain way" as opposed to "making the brain work, period"), and 2) a lot of the information on how that works is probably encoded in the DNA too.

I would say that the important bits that may not be in the DNA (or in mitochondrial DNA) are the DNA interpretation system (transcription, translation).

Comment author: RobinZ 23 August 2010 03:23:05PM 0 points [-]

That's a strong point, but I think it's still worth bearing in mind that this subject is P. Z. Myers' actual research focus: developmental biology. It appears to me that Kurzweil should be getting Myers' help revising his 50 MB estimate*, not dismissing Myers' arguments as misinformed.

Yes, Myers made a mistake in responding to a summary secondhand account rather than Kurzweil's actual position, but Kurzweil is making a mistake if he's ignoring expert opinion on a subject directly relating to his thesis.

* By the way: 50 MB? That's smaller than the latest version of gcc! If that's your complexity estimate, the complexity of the brain could be dominated by the complexity of the gestational environment!

Comment author: Emile 23 August 2010 04:02:08PM 1 point [-]

I agree that Kurzweil could have acknowledged P. Z. Myers' expertise a bit more, especially the "nobody in my field expects a brain simulation in the next ten years" bit.

50 MB - that's still a hefty amount of code, especially if it's 50MB of compiled code and not 50 MB of source code (comparing the size of the source code to the size of the compressed DNA looks fishy to me, but I'm not sure Kurzweil has been actually doing that - he's just been saying "it doesn't require trillions of lines of code").

Is the size of gcc the source code or the compiled version? I didn't see that info on Wikipedia, and don't have gcc on this machine.

Comment author: timtyler 23 August 2010 05:38:28PM 2 points [-]

As I see it, Myers delivered a totally misguided rant. When his mistakes were exposed he failed to apologise. Obviously, there is no such thing as bad publicity.

Comment author: RobinZ 23 August 2010 04:09:34PM 1 point [-]

I'm looking at gcc-4.5.0.tar.gz.

Comment author: Emile 23 August 2010 04:32:27PM 2 points [-]

That includes the source code, the binaries, the documentation, the unit tests, changelogs ... I'm not surprised it's pretty big!

I consider it pretty likely that it's possible to program a human-like intelligence with a compressed source code of less than 50 MB.

However, I'm much less confident that the source code of the first actual human-like intelligence coded by humans (if there is one) will be that size.

Comment author: Perplexed 23 August 2010 01:45:31PM 6 points [-]

There's no particular reason to doubt that a significant amount of the final data is encoded in the gestational environment.

To the contrary, there is every reason to doubt that. We already know that important pieces of the gestational environment (the genetic code itself, core metabolism, etc.) are encoded in the genome. By contrast, the amount of epigenetic information that we know of is miniscule. It is, of course, likely that we will discover more, but it is very unlikely that we will discover much more. The reason for this skepticism is that we don't know of any reliable epigenetic means of transmitting generic information from generation to generation. And the epigenetic information inheritance mechanisms that we do understand all require hundreds of times as much genetic information to specify the machinery as compared to the amount of epigenetic information that the machinery can transmit.

To my mind, it is very clear that (on this narrow point) Kurzweil was right and PZ wrong: The Shannon information content of the genome places a tight upper bound on the algorithmic (i.e. Kolmogorov) information content of the embryonic brain. Admittedly, when we do finally construct an AI, it may take it 25 years to get through graduate school, and it may have to read thru several hundred Wikipedia equivalents to get there, but I am very confident that specifying the process for generating the structure and interconnect of the embryonic AI brain will take well under 7 billion bits.

Comment author: timtyler 23 August 2010 05:08:44PM *  1 point [-]

To my mind, it is very clear that (on this narrow point) Kurzweil was right and PZ wrong: The Shannon information content of the genome places a tight upper bound on the algorithmic (i.e. Kolmogorov) information content of the embryonic brain.

I think you may have missed my devastating analysis of this issue a couple of years back:

"So, who is right? Does the brain's design fit into the genome? - or not?

The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.

We can safely ignore the contribution of cytoplasmic inheritance - however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that - an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves coding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently-arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck's constant - and so on.

At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us."

Comment author: Perplexed 23 August 2010 06:24:06PM 3 points [-]

You suggest that the human brain might have a high Kolmogorov complexity, the information for which is encoded, not in the human genome (which contains a mere 7 gigabits of information), but rather in the laws of physics, which contain arbitrarily large amounts of information, encoded in the exact values of physical constants. For example, the first 30 billion decimal digits of the fine structure constant contain 100 gigabits of information, putting the genome to shame.

Do I have that right?

Well, I will give you points for cleverness, but I'm not buying it. I doubt that it much matters what the constants are, out past the first hundred digits or so. Yes, I realize that the details of how the universe proceeds may be chaotic; it may involve sensitive dependence both on initial conditions and on physical constants. But I don't think that really matters. Physical constants haven't changed since the Cambrian, but genomes have. And I think that it is the change in genomes which led to the human brain, the dolphin brain, the parrot brain, and the octopus brain. Alter the fine structure constant in the 2 billionth decimal place, and those brain architectures would still work, and those genomes would still specify development pathways leading to them. Or so I believe.
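As a quick check of the "100 gigabits" figure quoted above: each decimal digit carries $\log_2 10 \approx 3.32$ bits, so

$$3\times 10^{10}\ \text{digits} \times \log_2 10 \approx 9.97\times 10^{10}\ \text{bits} \approx 100\ \text{gigabits}.$$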

Comment author: timtyler 23 August 2010 06:44:17PM *  0 points [-]

I doubt that it much matters what the constants are, out past the first hundred digits or so

What makes you think that?

I realize that the details of how the universe proceeds may be chaotic; it may involve sensitive dependence both on initial conditions and on physical constants. But I don't think that really matters.

...and why not?

Physical constants haven't changed since the Cambrian, but genomes have. And I think that it is the change in genomes which led to the human brain, the dolphin brain, the parrot brain, and the octopus brain.

Under the hypothesis that physics encodes relevant information, a lot of the required information was there from the beginning. The fact that brains only became manifest after the Cambrian doesn't mean the propensity for making brains was not there from the beginning. So: that observation doesn't tell you very much.

Alter the fine structure constant in the 2 billionth decimal place, and those brain architectures would still work, and those genomes would still specify development pathways leading to them. Or so I believe.

Right - but what evidence do you have of that? You are aware of chaos theory, no? Small changes can lead to dramatic changes surprisingly quickly.

Organisms inherit the laws of physics (and indeed the initial conditions of the universe they are in) - as well as their genomes. Information passes down the generations both ways. If you want to claim the design information is in one inheritance channel more than the other one, it seems to me that you need some evidence relating to that issue. The evidence you have presented so far seems pretty worthless - the delayed emergence of brains seems equally compatible with both of the hypotheses under consideration.

So: do you have any other relevant evidence?

Comment author: WrongBot 23 August 2010 06:59:07PM *  0 points [-]

No other rational [ETA: I meant physical and I am dumb] process is known to rely on physical constants to the degree you propose. What you propose is not impossible, but it is highly improbable.

Comment author: timtyler 23 August 2010 07:08:00PM *  1 point [-]

What?!? What makes you think that?

Sensitive dependence on initial conditions is an extremely well-known phenomenon. If you change the laws of physics a little bit, the result of a typical game of billiards will be different. This kind of phenomenon is ubiquitous in nature, from the orbit of planets, to the paths rivers take.

If a butterfly's wing flap can cause a tornado, I figure a small physical constant jog could easily make the difference between intelligent life emerging, and it not doing so billions of years later.

Sensitive dependence on initial conditions is literally everywhere. Check it out:

http://en.wikipedia.org/wiki/Chaos_theory

Comment author: JamesAndrix 24 August 2010 07:02:53AM 0 points [-]

I figure a small physical constant jog could easily make the difference between intelligent life emerging, and it not doing so billions of years later.

First, that is VERY different than the design information being in the constant, but not in the genome. (you could more validly say that the genome is what it is because the constant is precisely what it is.)

Second, the billiard ball example is invalid. It doesn't matter exactly where the billiard balls are if you're getting hustled. Neurons are not typically sensitive to the precise positions of their atoms. Information processing relies on the ability to largely overlook noise.

Comment author: WrongBot 23 August 2010 08:43:26PM 0 points [-]

What physical process would cease to function if you increased c by a billionth of a percent? Or one of the other Planck units? Processes involved in the functioning of both neurons and transistors don't count, because then there's no difference to account for.

Comment author: Kingreaper 23 August 2010 07:11:11PM 1 point [-]

Did you miss this bit:

to the degree you propose

Sensitivity to initial conditions is one thing. Sensitivity to 1 billion SF in a couple of decades?

Comment author: Mitchell_Porter 23 August 2010 07:44:47AM 2 points [-]

I'm pretty confident that Myers is wrong on this, unless there is another information rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

Myers' thesis is that you are not going to figure out by brute-force physical simulation how the genome gives rise to the organism, knowing just the genomic sequence. On every scale - molecule, cell, tissue, organism - there are very complicated boundary conditions at work. You have to do experimental biology, observe those boundary conditions, and figure out what role they play. I predict he would be a lot more sympathetic if Kurzweil was talking about AIs figuring out the brain by doing experimental biology, rather than just saying genomic sequence + laws of physics will get us there.

Comment author: Perplexed 23 August 2010 04:03:45PM 5 points [-]

Myers' thesis is that you are not going to figure out by brute-force physical simulation how the genome gives rise to the organism, knowing just the genomic sequence.

And he is quite possibly correct. However, that has nothing at all to do with what Kurzweil said.

I predict he would be a lot more sympathetic if Kurzweil was talking about AIs figuring out the brain by doing experimental biology, rather than just saying genomic sequence + laws of physics will get us there.

I predict he would be more sympathetic if he just made the effort to figure out what Kurzweil said. But, of course, we all know there is no chance of that, so "conjecture" might be a better word than "predict".

Comment author: Mitchell_Porter 24 August 2010 11:18:29AM 2 points [-]

Myers doesn't have an argument against Kurzweil's estimate of the brain's complexity. But his skepticism about Kurzweil's timescale can be expressed in terms of the difficulty of searching large spaces. Let's say it does take a million lines of code to simulate the brain. Where is the argument that we can produce the right million lines of code within twenty years? The space of million-line programs is very large.
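To get a crude sense of that scale (illustrative numbers only, not anything Kurzweil or Myers has claimed): even if each line of such a program were drawn from just 1,000 plausible candidate lines, the space would contain on the order of

    1000^1,000,000 = 10^(3,000,000)

candidate programs, so any feasible search would have to be guided by strong structural knowledge rather than enumeration.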

Comment author: Perplexed 24 August 2010 12:04:20PM 1 point [-]

I agree, both regarding timescale, and regarding reason for timescale difficulties.

As I understand Kurzweil, he is saying that we will build the AI, not by finding the program for development and simulating it, but rather by scanning the result of the development and duplicating it in a different medium. The only relevance of that hypothetical million-line program is that it effectively puts a bound on the scanning and manufacturing tolerances that we need to achieve. Well, while it is probably true in general that we don't need to get the wiring exactly right across the brain's billions of neurons, there may well be some where getting exactly the right embryonic wiring is crucial to success. And, since we don't yet have or understand that million-line program that somehow gets the wiring right reliably, we probably won't get them right ourselves. At least not at first.

It feels a little funny to find myself making an argument here right out of Bill Dembski's playbook. No free lunch! Needle in a haystack. Only way to search that space is by exhaustion. Well, we shall see what we shall see.

Comment author: SilasBarta 23 August 2010 03:41:47PM 3 points [-]

I agree, but at the same time, I wish biologists would learn more information theory, since their focus should be identifying the information flows going on, as this is what will lead us to a comprehensible model of human development and functionality.

(I freely admit I don't have years in the trenches, so this may be a naive view, but if my experience with any other scientific turf war is any guide, this is important advice.)

Comment author: ciphergoth 23 August 2010 07:16:39AM 2 points [-]

This was cited to me in a blog discussion as "schoolboy biology EY gets wrong" (he said something similar, apparently).

Comment author: JamesAndrix 21 August 2010 05:11:28AM 5 points [-]

Who else is nearly as good or better at Friendly AI development than Eliezer Yudkowsky?

I mean besides me, obviously.

Comment author: simplicio 21 August 2010 05:01:00AM *  9 points [-]

The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that uFAI is a serious existential risk. Any particular personal characteristics of EY would seem irrelevant till we have an opinion on that set of claims.

If EY were working on preventing asteroid impacts with earth, and he were the main driving force behind that effort, he could say "I'm trying to save the world" and nobody would look at him askance. That's because asteroid impacts have definitely caused mass extinctions before, so nobody can challenge the very root of his claim.

The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.

My bottom line: what we should be discussing is simply "Are W, X, Y and Z true?" Once we have a good idea about how strong that house of cards is, it will be obvious whether Eliezer is in a "permissible" epistemic state, or whatever.

Maybe people who know about these questions should consider a series of posts detailing all the separate issues leading to FAI. As far as I can tell from my not-extremely-tech-savvy vantage point, the weakest pillar in that house is the question of whether strong AI is feasible (note I said "feasible," not "possible").

Comment author: Simulation_Brain 23 August 2010 04:55:28AM *  2 points [-]

Upvoted; the issue of FAI itself is more interesting than whether Eliezer is making an ass of himself and thereby hurting the SIAI message (probably a bit; claiming you're smart isn't really smart, but then he's also doing a pretty good job as a publicist).

One form of productive self-doubt is to have the LW community critically examine Eliezer's central claims. Two of my attempted simplifications of those claims are posted here and here on related threads.

Those posts don't really address whether strong AI is feasible; I think most AI researchers agree that it will become so, but disagree on the timeline. I believe it's crucial but rarely recognized that the timeline really depends on how many resources are devoted to it. Those appear to be steadily increasing, so it might not be that long.

Comment author: jimrandomh 21 August 2010 04:48:54PM 3 points [-]

My bottom line: what we should be discussing is simply "Are W, X, Y and Z true?" Once we have a good idea about how strong that house of cards is, ...

You shouldn't deny knowledge of how strong the claims are, and refer to those claims as "a house of cards" in the same sentence. Those two claims are mutually exclusive, and putting them close together like this set off my propagandometer.

Comment author: wedrifid 21 August 2010 08:42:34AM 0 points [-]

The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that FAI is a serious existential risk.

I am assuming you meant uFAI or AGI instead of FAI.

The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.

For my part the conclusion you mention seems to be the easy part. I consider that an answered question. The 'Eliezer is saving the world' part is far more difficult for me to answer due to the social and political intricacies that must be accounted for.

Comment author: Unknowns 23 August 2010 03:32:42AM *  -1 points [-]

Don't forget that some people, e.g. Roko, also think that FAI, as well as uFAI, is a serious existential risk.

Comment author: [deleted] 20 August 2010 08:57:17PM 15 points [-]

I don't think there's any point doing armchair diagnoses and accusing people of delusions of grandeur. I wouldn't go so far as to claim that Eliezer needs more self-doubt, in a psychological sense. That's an awfully personal statement to make publicly. It's not self-confidence I'm worried about, it's insularity.

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y. The ideas related to friendly AI and existential risk have not been shopped to academia or evaluated by scientists in the usual way. So they're not being tested stringently enough.

It's speculative. It feels fuzzy to me -- I'm not an expert in AI, but I have some education in math, and things feel fuzzy around here.

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

If I'm completely off base here and SIAI is going to get to the science soon, I apologize, and I'll shut up about this for a while.

But look. All this advice about the "sin of underconfidence" is all very well (and actually I've taken it to heart somewhat.) But if you're going to go test your abilities, then test them. Against skeptics. Against people who'll look at you like you're a rotten fish if you don't have a graduate degree. Get something about FAI peer-reviewed or published by a reputable press. Show us something.

Sorry to be so blunt. It's just that I want this to be something. And I have my doubts because there doesn't seem to be enough in this floating world in the way of unmistakable, concrete achievement.

Comment author: wedrifid 21 August 2010 10:16:43PM 2 points [-]

I agree with your conclusion but not this part:

If you want to claim you're working on a project that may save the world, fine. But there's got to be more to show for it, sooner or later, than speculative essays. At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat! Look at climate science. For all the foibles and challenges of the climate change movement, those people actually gather data, create prediction models, predict the results of mitigating policies -- it works more or less like science.

I categorically do not want statistical studies of the type you mention done. I do want solid academic research done, but not experiments. Some statistics on, for example, human predictions vs. actual time till successful completion on tasks of various difficulties would be useful. But these do not appear to be the type of studies you are asking for, nor do they target the most significant parts of the conclusion.

You are not entitled to that particular proof.

EDIT: The 'entitlement' link was broken.

Comment author: timtyler 21 August 2010 06:55:20AM *  2 points [-]

We don't have guest posts from Dr. X or Think Tank Fellow Y.

There's these fellows:

Some of them have contributed here:

Comment author: Perplexed 21 August 2010 05:29:59AM 1 point [-]

I only wish it were possible to upvote this comment more than once.

Comment author: multifoliaterose 21 August 2010 04:59:32AM 4 points [-]

I don't think there's any point doing armchair diagnoses and accusing people of delusions of grandeur.

I respectfully disagree with this statement, at least as an absolute. I believe that:

(A) In situations in which people are making significant life choices based on person X's claims and person X exhibits behavior which is highly correlated with delusions of grandeur, it's appropriate to raise the possibility that person X's claims arise from delusions of grandeur and ask that person X publicly address this possibility.

(B) When one raises the possibility that somebody is suffering from delusions of grandeur, this should be done in as polite and nonconfrontational a way as possible given the nature of the topic.

I believe that if more people adopted these practices, this would raise the sanity waterline.

I believe that the situation with respect to Eliezer and portions of the LW community is as in (A) and that I made a good faith effort at (B).

Comment author: steven0461 20 August 2010 09:48:22PM *  13 points [-]

The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise. Universities, government agencies, corporations. We don't have guest posts from Dr. X or Think Tank Fellow Y.

According to the about page, LW is brought to you by the Future of Humanity Institute at Oxford University. Does this count? Many Dr. Xes have spoken at the Singularity Summits.

At the very least, people worried about unfriendly AI will have to gather data and come up with some kind of statistical study that gives evidence of a threat!

It's not clear how one would use past data to give evidence for or against a UFAI threat in any straightforward way. There are various kinds of indirect evidence that could be presented, and SIAI has indeed been trying more in the last year or two to publish articles and give conference talks presenting such evidence.

Points that SIAI would do better if it had better PR, had more transparency, published more in the scientific literature, etc., are all well-taken, but these things use limited resources, which to me makes it sound strange to use them as arguments to direct funding elsewhere.

Comment author: torekp 22 August 2010 01:23:12AM 1 point [-]

Thanks for that last link. The paper on Changing the frame of AI futurism is extremely relevant to this series of posts.

Comment author: [deleted] 20 August 2010 10:06:58PM 5 points [-]

My post was by way of explaining why some people (including myself) doubt the claims of SIAI. People doubt claims when, compared to other claims, they're not justified as rigorously, or haven't met certain public standards. Why do I agree with the main post that Eliezer isn't justified in his opinion of his own importance (and SIAI's importance)? Because there isn't (yet) a lot beyond speculation here.

I understand about limited resources. If I were trying to run a foundation like SIAI, I might do exactly what it's doing, at first, and then try to get the academic credentials. But as an outside person, trying to determine: is this worth my time? Is this worth further study? Is this a field I could work in? Is this worth my giving away part of my (currently puny) income in donations? I'm likely to hold off until I see something stronger.

And I'm likely to be turned off by statements with a tone that assumes anyone sufficiently rational should already be on board. Well, no! It's not an obvious, open-and-shut deal.

What if there were an organization comprised of idealistic, speculative types, who, unknowingly, got themselves to believe something completely false based on sketchy philosophical arguments? They might look a lot like SIAI. Could an outside observer distinguish fruitful non-mainstream speculation from pointless non-mainstream speculation?

Comment author: timtyler 21 August 2010 06:59:06AM 0 points [-]

I think they are working on their "academic credentials":

http://singinst.org/grants/challenge

...lists some 13 academic papers under various stages of development.

Comment author: WrongBot 20 August 2010 09:13:15PM 7 points [-]

Here's the thing. The whole SIAI project is not publicly affiliated with (as far as I've heard) other, more mainstream institutions with relevant expertise.

LessWrong is itself a joint project of the SIAI and the Future of Humanity Institute at Oxford. Researchers at the SIAI have published these academic papers. The Singularity Summit's website includes a lengthy list of partners, including Google and Scientific American.

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

Comment author: Perplexed 21 August 2010 05:30:53PM *  6 points [-]

Researchers at the SIAI have published these academic papers.

But those don't really qualify as "published academic papers" in the sense that those terms are usually understood in academia. They are instead "research reports" or "technical reports".

The one additional hoop that these high-quality articles should pass through before they earn the status of true academic publications is to actually be published - i.e. accepted by a reputable (paper or online) journal. This hoop exists for a variety of reasons, including the claim that the research has been subjected to at least a modicum of unbiased review, a locus for post-publication critique (at least a journal letters-to-editor column), and a promise of stable curatorship. Plus inclusion in citation indexes and the like.

Perhaps the FHI should sponsor a journal, to serve as a venue and repository for research articles like these.

Comment author: CarlShulman 21 August 2010 05:48:02PM 1 point [-]

Perhaps the FHI should sponsor a journal

There are already relevant niche philosophy journals (Ethics and Information Technology, Minds and Machines, and Philosophy and Technology). Robin Hanson's "Economic Growth Given Machine Intelligence" has been accepted in an AI journal, and there are forecasting journals like Technological Forecasting and Social Change. For more unusual topics, there's the Journal of Evolution and Technology. SIAI folk are working to submit the current crop of papers for publication.

Comment author: Perplexed 21 August 2010 05:53:17PM 1 point [-]

Cool!

Comment author: wedrifid 21 August 2010 08:52:29AM 0 points [-]

The SIAI and Eliezer may not have done the best possible job of engaging with the academic mainstream, but they haven't done a terrible one either, and accusations that they aren't trying are, so far as I am able to determine, factually inaccurate.

... particularly in as much as they have become (somewhat) obsolete.

Comment author: MatthewBaker 05 July 2011 11:08:11PM 0 points [-]

Can you clarify please?

Comment author: wedrifid 07 July 2011 05:11:44PM 1 point [-]

Can you clarify please?

Basically, no. Whatever I meant seems to have been lost to me in the temporal context.

Comment author: MatthewBaker 07 July 2011 05:25:40PM 0 points [-]

No worries, I do the same thing sometimes.

Comment author: [deleted] 20 August 2010 09:25:43PM 4 points [-]

Okay, I take that back. I did know about the connection between SIAI and FHI and Oxford.

What are these academic papers published in? A lot of them don't provide that information; one is in Global Catastrophic Risks.

At any rate, I exaggerated in saying there isn't any engagement with the academic mainstream. But it looks like it's not very much. And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Comment author: WrongBot 20 August 2010 09:53:51PM 4 points [-]

And I recall a post of Eliezer's that said, roughly, "It's not that academia has rejected my ideas, it's that I haven't done the work of trying to get academia's attention." Well, why not?

Limited time and more important objectives, I would assume. Most academic work is not substantially better than trial-and-error in terms of usefulness and accuracy; it gets by on volume. Volume is a detriment in Friendliness research, because errors can have large detrimental effects relative to the size of the error. (Like the accidental creation of a paperclipper.)

Comment author: Eliezer_Yudkowsky 20 August 2010 09:39:34PM 0 points [-]

If you want it done, feel free to do it yourself. :)

Comment author: Morendil 20 August 2010 09:10:06PM 5 points [-]

We don't have guest posts from Dr. X or Think Tank Fellow Y.

Possibly because this blog is Less Wrong, positioned as "a community blog devoted to refining the art of human rationality", and not as the SIAI blog, or an existential risk blog, or an FAI blog.

Comment author: jimrandomh 20 August 2010 12:20:14PM *  5 points [-]

Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws. This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loathe to relinquish your ignorance when evidence comes knocking. Likewise with our flaws - we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.

There's something to what Eliezer is saying here: when people are too strongly committed to the idea that humans are fallible this can become a self-fulfilling prophecy where humans give up on trying to improve things and as a consequence remain fallible when they could have improved.

I actually read this as a literal, technical statement about when to let the reward modules of our minds trigger, and not a statement about whether low or high confidence is desirable. Finding a flaw in oneself is only valuable if it's followed by further investigation into details and fixes, and, as a purely practical matter, that investigation is more likely to happen if you feel good about having found a fix, than if you feel good about having found a flaw.

Comment author: Vladimir_Nesov 20 August 2010 12:05:31PM *  43 points [-]

This post suffers from lumping together orthogonal issues and conclusions from them. Let's consider individually the following claims:

  1. The world is in danger, and the feat of saving the world (if achieved) would be very important, more so than most other things we can currently do.
  2. Creating FAI is possible.
  3. Creating FAI, if possible, will be conducive to saving the world.
  4. If FAI is possible, person X's work contributes to developing FAI.
  5. Person X's work contributes to saving the world.
  6. Most people's work doesn't contribute to saving the world.
  7. Person X's activity is more important than that of most other people.
  8. Person X believes their activity is more important than that of most other people.
  9. Person X suffers from delusions of grandeur.

A priori, from (8) we can conclude (9). But assuming the a priori improbable (7), (8) is a rational thing for X to conclude, and (9) doesn't automatically follow. So, at this level of analysis, in deciding whether X is overconfident, we must necessarily evaluate (7). In most cases, (7) is obviously implausible, but the post itself suggests one pattern for recognizing when it isn't:

The modern world is sufficiently complicated so that no human no matter how talented can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity.

Thus, "doing something which very visibly and decisively alters the fate of humanity" is the kind of evidence that allows to conclude (7). But unfortunately there is no royal road to epistemic rationality, we can't require this particular argument that (7) in all cases. Sometimes the argument has an incompatible form.

In our case, the shape of the argument for (7) is as follows. Assuming (2), from (3) and (4) it follows that (5), and from (1), (5) and (6) we conclude (7). Note that the only claim about a person is (4), that their work contributes to the development of FAI. All the other claims are about the world, not about the person.

Given the structure of this argument for the abhorrent (8), something being wrong with the person can only affect the truth of (4), and not of the other claims. In particular, the person is overconfident if person X's work doesn't in fact contribute to FAI (assuming it's possible to contribute to FAI).

Now, the extent of overconfidence in evaluating (4) is not related to the weight of importance conveyed by the object level conclusions (1), (2) and (3). One can be underconfident about (4) and still (8) will follow. In fact, (8) is rather insensitive to the strength of assertion (4): even if you contribute to FAI a little bit, but the other object level claims hold, your work is still very important.

Finally, my impression is that Eliezer is indeed overconfident about his ability to technically contribute to FAI (4), but not to the extent this post suggests, since as I said the strength of claim (8) has nothing to do with the level of overconfidence in (4), and even a small contribution to FAI is enough to conclude (8) given the other object-level assumptions. Indeed, Eliezer never claims that success is assured:

Success is not assured. I'm not sure what's meant by confessing to being "ambitious". Is it like being "optimistic"?

On the other hand, only a few people are currently in a position to claim (4) to any extent. One needs to (a) understand the problem statement, (b) be talented enough, and (c) take the problem seriously enough to direct serious effort at it.

My ulterior motive in elaborating this argument is to make the situation a little bit clearer to myself, since I claim the same role, just to a smaller extent. (One reason I don't have much confidence is that each time I "level up", most recently around this May, I realize how misguided my past efforts were, and how much time and effort it will take to develop the skillset necessary for the next step.) I don't expect to solve the whole problem (and I don't expect Eliezer or Marcello or Wei to solve the whole problem), but I do expect that over the years, some measure of progress can be made by my efforts and theirs, and I expect other people will turn up (thanks to Eliezer's work on communicating the problem statement of FAI and SIAI's new work on spreading the word) whose contributions will be more significant.

Comment author: Wei_Dai 28 September 2012 07:22:57PM 1 point [-]

On the other hand, only a few people are currently in a position to claim (4) to any extent. One needs to (a) understand the problem statement, (b) be talented enough, and (c) take the problem seriously enough to direct serious effort at it.

(4 here being "If FAI is possible, person X's work contributes to developing FAI.") This seems to be a weak part of your argument. A successful FAI attempt will obviously have to use lots of philosophical and technical results that were not developed specifically with FAI in mind. Many people may be contributing to FAI without consciously intending to do so. For example, when I first started thinking about anthropic reasoning I was mainly thinking about human minds being copyable in the future and trying to solve philosophical puzzles related to that.

Another possibility is that the most likely routes to FAI go through intelligence enhancement or uploading, so people working in those fields are actually making more contributions to FAI than people like you and Eliezer.

Comment author: whpearson 20 August 2010 02:20:05PM 4 points [-]

Most people's work doesn't contribute to saving the world.

I'd argue that a lot of people's work does. Everybody who contributes to keeping the technological world running (from farmers to chip designers) enables us to potentially save ourselves from the longer-term non-anthropogenic existential risks.

Comment author: Vladimir_Nesov 20 August 2010 02:32:29PM *  4 points [-]

Obviously, you need to interpret that statement as "Any given person's work doesn't significantly contribute to saving the world". In other words, if we "subtract" that one person, the future (in the aspect of the world not ending) changes insignificantly.

Comment author: MartinB 20 August 2010 02:58:29PM 1 point [-]

That makes me wonder who will replace Norman Borlaug, or, let's say, any particular influential writer or thinker.

Comment author: whpearson 20 August 2010 02:46:22PM 2 points [-]

Are you also amending 4) to have the significant clause?

Because there are lots of smart people that have worked on AI, whose work I doubt would be significant. And that is the nearest reference class I have for likely significance of people working on FAI.

Comment author: Vladimir_Nesov 20 August 2010 03:04:52PM *  1 point [-]

I'm not amending, I'm clarifying. (4) doesn't have world-changing power in itself, only through the importance of FAI implied by other arguments, and that part doesn't apply to the activity of most people in the world. I consider the work on AI as somewhat significant as well, although obviously less significant than work on FAI at the margin, since many more people are working on AI. The argument, as applied to their work, makes them an existential threat (moderate to high when talking about the whole profession, rather weak when talking about individual people).

As for the character of work, I believe that at the current stage, productive work on FAI is close to pure mathematics (but specifically with problem statements not given), and very much unlike most of AI or even the more rigorous kinds from machine learning (statistics).

Comment author: CarlShulman 20 August 2010 02:23:30PM 1 point [-]

Agreed. More broadly, everyone affects anthropogenic existential risks too, which limits the number of orders of magnitude one can improve in impact from a positive start.

Comment author: JRMayne 20 August 2010 02:19:38PM *  -1 points [-]

Person X's activity is more important than that of most other people.

Person X believes their activity is more important than that of most other people.

Person X suffers from delusions of grandeur.

Person X believes that their activity is more important than that of all other people, and that no other person can do it.

Person X also believes that only this project is likely to save the world.

Person X also believes that FAI will save the world on all axes, including political and biological.

--JRM

Comment author: multifoliaterose 20 August 2010 02:16:12PM 5 points [-]

Your analysis is very careful and I agree with almost everything that you say.

I think that one should be hesitant to claim too much for a single person on account of the issue which Morendil raises - we are all connected. Your ability to work on FAI depends on the farmers who grow your food, the plumbers who ensure that you have access to running water, the teachers who you learned from, the people at Google who make it easier for you to access information, etc.

I believe that you (and others working on the FAI problem) can credibly hold the view that your work has higher expected value to humanity than that of a very large majority (e.g. 99.99%) of the population. Maybe higher.

I don't believe that Eliezer can credibly hold the view that he's the highest expected value human who has ever lived. Note that he has not offered a disclaimer denying the view that JRMayne has attributed to him despite the fact that I have suggested that he do so twice now.

Comment author: Vladimir_Nesov 20 August 2010 09:07:29PM *  7 points [-]

You wrote elsewhere in the thread:

I assign a probability of less than 10^(-9) to [Eliezer] succeeding in playing a critical role on the Friendly AI project that [he's] working on.

Does it mean that we need 10^9 Eliezer-level researchers to make progress? Considering that Eliezer is probably at about 1 in 10000 level of ability (if we forget about other factors that make research in FAI possible, such as getting in the frame of mind of understanding the problem and taking it seriously), we'd need about 1000 times more human beings than currently exist on the planet to produce a FAI, according to your estimate.
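For concreteness, the rough arithmetic behind that figure (a naive back-of-the-envelope reading of the quoted numbers, not something the comment itself spells out):

    10^9 researchers needed x 10^4 people per researcher of that calibre = 10^13 people, roughly 1000 x the planet's ~7 x 10^9.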

How does this claim coexist with the one you've made in the above comment?

I believe that you (and others working on the FAI problem) can credibly hold the view that your work has higher expected value to humanity than that of a very large majority (e.g. 99.99%) of the population. Maybe higher.

It doesn't compute; there is an apparent inconsistency between these two claims. (I see some ways to mend it by charitable interpretation, but I'd rather you make the intended meaning explicit yourself.)

Comment author: Jonathan_Graehl 20 August 2010 10:16:13PM 2 points [-]

Eliezer is probably at about 1 in 10000 level of ability [of G]

Agreed, and I like to imagine that he reads that and thinks to himself "only 10000? thanks a lot!" :)

In case anyone takes the above too seriously, I consider it splitting hairs to talk about how much beyond 1 in 10000 smart anyone is - eventually, motivation, luck, and aesthetic sense / rationality begin to dominate in determining results IMO.

Comment author: multifoliaterose 20 August 2010 10:08:00PM 1 point [-]

Does it mean that we need 10^9 Eliezer-level researchers to make progress?

No, in general p(n beings similar to A can do X) does not equal n multiplied by p(A can do X).

I'll explain my thinking on these matters later.

Comment author: Vladimir_Nesov 20 August 2010 10:14:05PM *  0 points [-]

No, in general p(n beings similar to A can do X) does not equal n multiplied by p(A can do X).

Yes, strictly speaking we'd need even more, if that. The more serious rendition of my remark is that you seem to imply that the problem itself is not solvable at all, by proxy of the estimate of Eliezer's ability to contribute to the solution. But it's OK, informal conclusions differ; what's not OK is that in the other comment you seem to contradict your claim.

Edit: I was not thinking clearly here.

Comment author: Tyrrell_McAllister 20 August 2010 10:28:58PM 1 point [-]

No, in general p(n beings similar to A can do X) does not equal n multiplied by p(A can do X).

Yes, strictly speaking we'd need even more, if that.

No. There is a very small chance that I will be able to move my couch down the stairs alone. But it's fairly likely that I and my friend will be able to do it together.

Similarly, 10^5 Eliezer-level researchers would together constitute a research community that could do things that Eliezer himself has less than probability 10^(-5) of doing on his own.
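A minimal numerical sketch of the distinction (illustrative numbers only): even under the unrealistic assumption that researchers succeed or fail independently, the chance that at least one of n succeeds is 1 - (1 - p)^n rather than n x p, and collaboration can push a group's chances above either figure.

    # Illustrative only; p and n are made-up numbers.
    def at_least_one(p, n):
        return 1.0 - (1.0 - p) ** n

    p = 1e-5                       # hypothetical per-person success probability
    for n in (1, 10, 10**5, 10**6):
        print(n, n * p, at_least_one(p, n))
    # For small n*p the two figures nearly agree; for large n*p the naive
    # product exceeds 1 while the true probability saturates below 1.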

Comment author: Vladimir_Nesov 20 August 2010 10:32:22PM *  2 points [-]

Agreed, I was not thinking clearly. The original comment stands, since what you suggest is one way to dissolve the apparent inconsistency, but my elaboration was not lucid.

Comment author: multifoliaterose 20 August 2010 10:37:25PM 0 points [-]

Tyrrell_McAllister's remark is a significant part of what I have in mind.

I presently think that the benefits of a (modestly) large and diverse research community are very substantial and that SIAI should not attempt to research Friendly AI unilaterally but rather should attempt to collaborate with existing institutions.

Comment author: Vladimir_Nesov 20 August 2010 11:01:13PM *  7 points [-]

I agree about the benefits of a larger research community, although the feasibility of "collaborating with existing institutions" is in question, due to the extreme difficulty of communicating the problem statement. There are also serious concerns about the end-game, where it will be relatively easy to instantiate a random-preference AGI on the basis of tools developed in the course of researching FAI.

Although the instinct is to say "Secrecy in science? Nonsense!", it would also be an example of outside view, where one completes a pattern while ignoring specific detail. Secrecy might make the development of a working theory less feasible, but if open research makes the risks of UFAI correspondingly even worse, it's not what we ought to do.

I'm currently ambivalent on this point, but it seems to me that at least preference theory (I'll likely have a post on that on my blog tomorrow) doesn't directly increase the danger, as it's about producing tools sufficient only to define Friendliness (aka human preference), akin to how logic allows one to formalize open conjectures in number theory. (Of course, the definition of Friendliness has to reference some actual human beings, so it won't be simple when taken together with that, unlike conjectures in number theory.) Such a definition would allow one to conclusively represent the correctness of any given (efficient algorithmic) solution, without constructing that solution.

On the other hand, I'm not confident that having a definition alone wouldn't be sufficient, given enough time and computing power, to launch the self-optimization process, in which case published preference theory would constitute a "weapon of math destruction".

Comment author: cousin_it 23 August 2010 10:34:18PM *  1 point [-]

preference theory (I'll likely have a post on that on my blog tomorrow)

Hey, three days have passed and I want that post!

Comment author: multifoliaterose 21 August 2010 05:27:55AM 1 point [-]

I agree about the benefits of larger research community, although feasibility of "collaborating with existing institutions" is in question, due to the extreme difficulty of communicating the problem statement.

Maybe things could gradually change with more interaction between people who are interested in FAI and researchers in academia.

There are also serious concerns about the end-game

I agree with this and believe that this could justify secrecy, but I think that it's very important that we hold the people who we trust with the end-game to very high standards for demonstrated epistemic rationality and scrupulousness.

I do not believe that the SIAI staff have met such standards. My belief on this matter is a major reason why I'm pursuing my current trajectory of postings.

Comment author: cata 20 August 2010 01:31:49PM *  0 points [-]

Generally speaking, your argument isn't very persuasive unless you believe that the world is doomed without FAI and that direct FAI research is the only significant contribution you can make to saving it. (EDIT: To clarify slightly after your response, I mean to point out that you didn't directly mention these particular assumptions, and that I think many people take issue with them.)

My personal, rather uninformed belief is that FAI would be a source of enormous good, but it's not necessary for humanity to continue to grow and to overcome x-risk (so 3 is weaker); X may be contributing to the development of FAI, but not that much (so 4 is weaker); and other people engaged in productive pursuits are also contributing a non-zero amount to "save the world" (so 6 is weaker.)

As such, I have a hard time concluding that X's activity is anywhere near the "most important" using your reasoning, although it may be quite important.

Comment author: Vladimir_Nesov 20 August 2010 01:36:26PM *  3 points [-]

Generally speaking, your argument isn't very persuasive unless you believe that the world is doomed without FAI and that direct FAI research is the only significant contribution you can make to saving it.

The argument I gave doesn't include justification of things it assumes (that you referred to). It only serves to separate the issues with claims about a person from issues with claims about what's possible in the world. Both kinds of claims (assumptions in the argument I gave) could be argued with, but necessarily separately.

Comment author: cata 20 August 2010 02:14:29PM *  0 points [-]

OK, I now see what your post was aimed at, a la this other post you made. I agree that criticism ought to be directed toward person X's beliefs about the world, not his conclusions about himself.

Comment author: prase 20 August 2010 10:55:08AM *  7 points [-]

An interesting post, well written, upvoted. The mere existence of such posts here is proof that LW is still far from Objectivism, not only because Eliezer is way more rational (and compassionate) than Ayn Rand, but mainly because the other people here are aware of the dangers of cultism.

However, I am not sure whether the right way to prevent cultish behaviour (whether the risk is real or not) is to issue warnings like this to the leader (or any sort of warning, perhaps). The dangers of cultism emerge from simply having a leader; whatever the level of personal rationality, being the single extraordinarily revered person in any group for any length of time probably harms one's judgement, and the overall atmosphere of reverence is unhealthy for the group. Maybe more generally, the problem doesn't necessarily depend on the existence of a leader: if a group is too devoted to some single idea, it faces lots of dangers, the gravest of which is perhaps separation from reality. Especially if the idea lives in an environment where relevant information is not abundant.

Therefore, I would prefer to see the community concentrate on a broader class of topics, and to continue in the tradition of disseminating rationality started on OB. Mitigating existential risk is a serious business indeed, and it has to be discussed appropriately, but we shouldn't lose perspective and become too fanatical about the issue. There were many statements written on LW in recent months or years, many of them not by EY, declaring an absolute preference for existential risk mitigation above everything else; those statements I find unsettling.

Final nitpick: Gandhi is misspelled in the OP.

Comment author: CarlShulman 20 August 2010 12:37:34PM 3 points [-]

Therefore, I would prefer to see the community concentrate on a broader class of topics, and to continue in the tradition of disseminating rationality started on OB.

The best way to advance this goal is probably to write an interesting top-level post.

Comment author: prase 20 August 2010 12:50:47PM 4 points [-]

I agree. However not everybody is able to.

Comment author: ciphergoth 20 August 2010 11:23:24AM 6 points [-]

There were many statements written on LW in recent months or years, many of them not by EY, declaring absolute preference of existential risk mitigation above everything else; those statements I find unsettling.

The case for devoting all of your altruistic efforts to a single maximally efficient cause seems strong to me, as does the case that existential risk mitigation is that maximally efficient cause. I take it you're familiar with that case (though see eg "Astronomical Waste" if not) so I won't set it all out again here. If you think I'm mistaken, actual counter-arguments would be more useful than emotional reactions.

Comment author: prase 20 August 2010 11:55:52AM *  3 points [-]

I don't object to devoting (almost) all efforts to a single cause generally. I do, however, object to such devotion in case of FAI and the Singularity.

If a person devotes all his efforts to a single cause, his subjective feeling of the cause's importance will probably increase, and most people in that situation will subsequently overestimate how important the cause is. This danger can be countered by carefully comparing the results of one's deeds with the results of other people's efforts, using a set of selected objective criteria, or by measuring them against some scale ideally fixed at the beginning, to protect oneself from moving the goalposts.

The problem is, if the cause is placed so far in the future and based so much on speculation, there is no fixed point to look at when countering one's own biases, and the risk of a gross overestimation of one's agenda becomes huge. So the reason why I dislike the mentioned suggestions (and I am speaking, for example, about the idea that it is a strict moral duty for everybody who can to support FAI research as much as they can, which was implicitly present at least in the discussions about the forbidden topic) is not that I reject single-cause devotion in principle (although I like to be wary about it in most situations), but that I assign too low a probability to the correctness of the underlying ideas. The whole business is based on predictions of the future several decades or possibly centuries in advance, which is historically a very unsuccessful discipline. And I can't help but include it in that reference class.

Simultaneously, I don't accept the argument from the huge utility difference between possible outcomes, which is supposed to justify one's involvement even if the probability of success (or even the probability that the effort makes sense) is extremely low. Pascal-wageresque reasoning is unreliable, even if formalised, because it needs careful and precise estimation of probabilities close to 1 or 0, which humans are provably bad at.

Comment author: ciphergoth 20 August 2010 12:29:39PM 2 points [-]

Which of the axioms of the Von Neumann–Morgenstern utility theorem do you reject?

Comment author: prase 20 August 2010 12:48:01PM 1 point [-]

If I had to describe my actual choices, I don't know. None necessarily, any of the axioms possibly. My inner decision algorithm is probably inconsistent in different ways; I don't believe, for example, that my choices always satisfy transitivity.

What I wanted to say is that although I know that my decisions are somewhat irrational and thus sub-optimal, in some situations, like Pascal wagers, I don't find consciously constructing a utility function and calculating the right decision to be an attractive solution. It would help me to be marginally more rational (as given by the VNM definition), but I am convinced that the resulting choices would be fairly arbitrary and probably would not reflect my actual preferences. In other words, I can't reach some of my preferences by introspection, and I think that an actual attempt to reconstruct a utility function would sometimes do worse than a simple, though inconsistent, heuristic.

Comment author: Wei_Dai 20 August 2010 12:44:46PM *  3 points [-]

I think the theorem implicitly assumes logical omniscience, and using heuristics instead of doing explicit expected utility calculations should make sense in at least some types of situations for us. The question is whether it makes sense in this one.

I think this is actually an interesting question. Is there an argument showing that we can do better than prase's heuristic of rejecting all Pascal-like wagers, given human limitations?

Comment author: Wei_Dai 20 August 2010 12:15:16PM 5 points [-]

Pascal-wageresque reasoning is unreliable, even if formalised, because it needs careful and precise estimation of probabilities close to 1 or 0, which humans are provably bad at.

Assuming you're right, why doesn't rejection of Pascal-like wagers also require careful and precise estimation of probabilities close to 1 or 0?

Comment author: prase 20 August 2010 12:21:01PM 2 points [-]

I use a heuristic which tells me to ignore Pascal-like wagers and to do whatever I would do if I hadn't learned about the wager (to a first approximation). I don't behave like a utilitarian in this case, so I don't need to estimate the probabilities and utilities. (I think if I did, my decision would be fairly random, since the utilities and probabilities involved would almost certainly be determined mostly by the anchoring effect.)

Comment author: Perplexed 20 August 2010 03:22:31PM *  6 points [-]

I use a heuristic which tells me to ignore Pascal-like wagers

I am not sure exactly what using this heuristic entails. I certainly understand the motivation behind the heuristic:

  • when you multiply an astronomical utility (disutility) by a minuscule probability, you may get an ordinary-sized utility (disutility), apparently suitable for comparison with other ordinary-sized utilities. Don't trust the results of this calculation! You have almost certainly made an error in estimating the probability, or the utility, or both.

But how do you turn that (quite rational IMO) lack of trust into an action principle? I can imagine 4 possible precepts:

  • Don't buy lottery tickets
  • Don't buy insurance
  • Don't sell insurance
  • Don't sell back lottery tickets you already own.

Is it rationally consistent to follow all 4 precepts, or is there an inconsistency?
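One way to see why that lack of trust is reasonable (a toy calculation with made-up numbers, not a claim about any real charity or risk): when the payoff is astronomical, the expected value is driven almost entirely by a probability estimate we cannot pin down to within an order of magnitude.

    # Illustrative only: all numbers are invented for the example.
    payoff = 10**12                      # hypothetical astronomical utility
    for p in (1e-9, 1e-10, 1e-11):       # estimates differing by a factor of ten each
        print(p, p * payoff)
    # The "ordinary-sized" expected utility swings from 1000 down to 10, i.e. the
    # conclusion is dominated by a quantity we cannot estimate reliably.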

Comment author: prase 23 August 2010 02:14:16PM 0 points [-]

I am indeed motivated by the reasons you gave, so lotteries aren't a concern for this heuristic, since the probability is known. In fact, I have never thought about lotteries this way, probably because I know the probabilities. The value estimate is a bit less sure (to reasonably buy a lottery ticket, I would also need a convex utility curve, which I probably don't have), but lotteries deal with money, which makes a pretty good first approximation of value. Insurance is more or less similar, and not all policies involve probabilities too low or values too high to fall into the Pascal-wager category.

Actually, I do buy some of the most common kinds of insurance, although I avoid insurance against improbable risks (meteorite falls, etc.). I don't buy lottery tickets.

The more interesting aspect of your question is the potential status-quo-preserving inconsistency you have pointed out. I would probably consider real Pascal-wagerish assets to be of no value and sell them if I needed the money. This isn't exactly consistent with the "do nothing" strategy I have outlined, so I will have to think about it a while to find out whether the potential inconsistencies are not too horrible.

Comment author: ShardPhoenix 21 August 2010 01:54:16AM 1 point [-]

What do those examples have to do with anything? In those cases we actually know the probabilities so they're not Pascal's-Wager-like scenarios.

Comment author: Perplexed 21 August 2010 02:56:01AM 1 point [-]

we actually know the probabilities

So, what is the probability that my house will burn? It may depend on whether I start smoking again. I hope the probability of both is low, but I don't know what it is.

I'm not sure exactly what the definition of Pascal's-Wager-like should be. Is there a definition I should read? Should we ask Prase what he meant? I understood the term to mean anything involving small estimated probabilities and large estimated utilities.

Comment author: ShardPhoenix 21 August 2010 01:11:19PM 0 points [-]

We know the probability to a reasonable level of accuracy - e.g., consider actuarial tables. This is different from things like Pascal's wager, where the actual probability may vary by many orders of magnitude from our best estimate.

Comment author: rhollerith_dot_com 21 August 2010 01:28:57PM *  1 point [-]

This is different from things like Pascal's wager where the actual probability may vary by many orders of magnitude from our best estimate.

According to the Bayesians, our best estimate is the actual probability. (According to the frequentists, the probabilities in Pascal's wager are undefined.)

What the parent means by "We know the probability to a reasonable level of accuracy - e.g., consider actuarial tables" is that it is possible for a human to give a probability without having to do or estimate a very hairy computation to compute a prior probability (the "starting probability" before any hard evidence is taken into account). ADDED: In other words, it should have been a statement about the difficulty of the computation of the probability, not a statement about the existence of the probability in principle.

Comment author: timtyler 21 August 2010 07:03:33AM 0 points [-]

I understood the term to mean anything involving small estimated probabilities and large estimated utilities.

That would be my reading.

Comment author: timtyler 20 August 2010 11:46:07PM 4 points [-]

Another red flag is when someone else helpfully does the calculation for you - and then expects you to update on the results. Looking at the long history of Pascal-like wagers, that is pretty likely to be an attempt at manipulation.

Comment author: timtyler 21 August 2010 06:52:10PM 2 points [-]

"I believe SIAI’s probability of success is lower than what we can reasonably conceptualize; this does not rule it out as a good investment (since the hoped-for benefit is so large), but neither does the math support it as an investment (donating simply because the hoped-for benefit multiplied by the smallest conceivable probability is large would, in my view, be a form of falling prey to “Pascal’s Mugging”."

Comment author: multifoliaterose 20 August 2010 11:15:45AM *  1 point [-]

Thanks for correcting the misspelling!

Totally agree about LW vs. Objectivism.

Comment author: Morendil 20 August 2010 09:11:16AM 2 points [-]

The mechanism that determines human action is that we do what makes us feel good (at the margin) and refrain from doing what makes us feel bad (at the margin).

"The" mechanism? Citation needed.

a fundamental mechanism of the human brain which was historically correlated with gaining high status is to make us feel good when we have high self-image and feel bad when we have low self-image.

Better, but still unsupported and unclear. What was correlated with what?

Comment author: Wei_Dai 20 August 2010 07:52:23AM 6 points [-]

I find it ironic that multifoliaterose said

I personally think that the best way to face the present situation is to gather more information about all existential risks rather than focusing on one particular existential risk

and then the next post, instead of delineating what he found out about other existential risks (or perhaps how we should go about doing that), is about how to save Eliezer.

Comment author: ata 20 August 2010 06:09:35AM *  14 points [-]

I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world, but I hope he does take away from this a greater sense of the importance of a "the customer is always right" attitude in managing his image as a public-ish figure. Obviously the customer is not always right, but sometimes you have to act like they are if you want to get/keep them as your customer... justified or not, there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!, and even if it is really they who are the crazy ones, they are nevertheless the people who populate this crazy world we're trying to fix, and the solution can't always just be "read the sequences until you're rational enough to see why this makes sense".

I realize it's a balance; maybe this tone is good for attracting people who are already rational enough to see why this isn't crazy and why this tone has no bearing on the validity of the underlying arguments, like Eliezer's example of lecturing on rationality in a clown suit. Maybe the people who have a problem with it or are scared off by it are not the sort of people who would be willing or able to help much anyway. Maybe if someone is overly wary of associating with a low-status yet extremely important project, they do not really intuitively grasp its importance or have a strong enough inclination toward real altruism anyway. But reputation will still probably count for a lot toward what SIAI will eventually be able to accomplish. Maybe at the point of hearing and evaluating the arguments, seeming weird or high-self-regard-taboo-violating on the surface level will only screen off people who would not have made important contributions anyway, but it does affect who will get far enough to hear the arguments in the first place. In a world full of physics and math and AI cranks promising imminent world-changing discoveries, reasonably smart people do tend to build up intuitive nonsense-detectors, build up an automatic sense of who's not even worth listening to or engaging with; if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI needs to make a greater point of seeming non-weird long enough for smart outsiders to switch from "save time by evaluating surface weirdness" mode to "take seriously and evaluate arguments directly" mode.

(Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously. But unfortunately, it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world". That's probably something that can't be changed without downplaying the importance of the project, and downplaying the importance of FAI probably increases existential risk enough that the PR hit of sounding overly self-important to probable non-contributors may be well worth it in the end.)

Comment author: halcyon 17 June 2012 07:45:20AM -1 points [-]

In cases like this, I find ethics grounded in utilitarianism to be a despicably manipulative position. You are not treating people as rational agents, but pandering to their lack of virtue so as to recruit them as pawns in your game. If that's how you're going to play, why not manufacture evidence in support of your position if you're Really Sure your assessment is accurate? A clear line of division between "pandering: acceptable" & "evidence manufacture: unacceptable" is nothing but a temporary, culturally contingent consensus caring nothing for reason or consistency. To predict the future, see the direction in which the trend is headed.

No, I would scrupulously adhere to a position of utmost sincerity. Screw the easily offended customers. If this causes my downfall, so be it. That outcome is acceptable because personally, if my failure is caused by honesty and goodwill rather than incompetence, I would question if such a world is worth saving to begin with. I mean, if that is what this enlightened society is like and wants to be like, then I can rather easily imagine our species eventually ending up as the aggressors in one of those alien invasion movies like Independence Day. I keep wondering why, if they evolved in a symbiotic ecosystem analogous to ours, one morally committed individual among their number didn't wipe out their own race and rid the galaxy of this aimless, proliferating evil. It'd be better still to let them be smothered peacefully under their own absence of self-reflection and practice of rewarding corruption, without going out of your way to help them artificially reach a position of preeminence from which to bully others.

Comment author: multifoliaterose 12 December 2010 08:23:25AM 3 points [-]

I'm inclined to think that Eliezer's clear confidence in his own very high intelligence and his apparent high estimation of his expected importance (not the dictionary-definition "expected", but rather, measured as an expected quantity the usual way) are not actually unwarranted, and only violate the social taboo against admitting to thinking highly of one's own intelligence and potential impact on the world

Leaving aside the question of whether such apparently strong estimation is warranted in the case at hand, I would suggest that there's a serious possibility that the social taboo that you allude to is adaptive; that having a very high opinion of oneself (even if justified) is (on account of the affect heuristic) conducive to seeing a halo around oneself, developing overconfidence bias, rejecting criticisms prematurely, etc., leading to undesirable epistemological skewing.

Meanwhile, I'm glad Eliezer says "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me", and I hope he takes that seriously.

Same here.

it seems that any piece of writing with the implication "This project is very important, and this guy happens, through no fault of his own, to be one of very few people in the world working on it" will always be read by some people as "This guy thinks he's one of the most important people in the world".

It's easy to blunt this signal.

Suppose that any of the following held:

  1. A billionaire decided to devote most of his or her wealth to funding Friendly AI research.

  2. A dozen brilliant academics became interested in and started doing Friendly AI research.

  3. The probability of Friendly AI research leading to a Friendly AI is sufficiently low that another existential risk reduction effort (e.g. pursuit of stable whole brain emulation) is many orders of magnitude more cost-effective at reducing existential risk than Friendly AI research.

Then Eliezer would not (by most estimations) be the human with the highest utilitarian expected value in the world. If he were to mention such possibilities explicitly, this would greatly mute the undesired connotations.

Comment author: Eliezer_Yudkowsky 12 December 2010 08:48:46AM 5 points [-]

If I thought whole-brain emulation were far more effective I would be pushing whole-brain emulation, FOR THE LOVE OF SQUIRRELS!

Comment author: multifoliaterose 12 December 2010 09:26:23AM *  2 points [-]

Good to hear from you :-)

  1. My understanding is that at present there's a great deal of uncertainty concerning how future advanced technologies are going to develop (I've gotten an impression that e.g. Nick Bostrom and Josh Tenenbaum hold this view). In view of such uncertainty, it's easy to imagine new data emerging over the next decades that makes it clear that pursuit of whole-brain emulation (or some currently unimagined strategy) is a far more effective strategy for existential risk reduction than Friendly AI research.

  2. At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

  3. Various people have suggested to me that initially pursuing Friendly AI might have higher expected value on the chance that it turns out to be easy. So I could imagine that it's rational for you personally to focus your efforts on Friendly AI research (EDIT: even if I'm correct in my estimation in the above point). My remarks in the grandparent above were not intended as a criticism of your strategy.

  4. I would be interested in hearing more about your own thinking about the relative feasibility of Friendly AI vs. stable whole-brain emulation and current arbitrage opportunities for existential risk reduction, whether on or off the record.

Comment author: Vladimir_Nesov 13 December 2010 09:28:54AM *  1 point [-]

At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

Do you mean that the role of ems is in developing FAI faster (as opposed to biological-human-built FAI), or are you thinking of something else? If ems merely speed time up, they don't change the shape of the FAI challenge much, unless (and to the extent that) we can leverage them in a way we can't leverage ordinary human society to reduce existential risk before FAI is complete (but this can turn out worse as well, ems can well launch the first arbitrary-goal AGI).

Comment author: CarlShulman 14 December 2010 03:01:14PM 2 points [-]

Sped-up ems have slower computers relative to their thinking speed. If Moore's Law of Mad Science means that increasing computing power allows researchers to build AI with less understanding (and thus more risk of UFAI), then a speedup of researchers relative to computing speed makes it more likely that the first non-WBE AIs will be the result of a theory-intensive approach with high understanding. Anders Sandberg of FHI and I are working on a paper exploring some of these issues.

Comment author: Vladimir_Nesov 14 December 2010 09:20:47PM 2 points [-]

This argument lowers the estimate of danger, but AIs developed on relatively slow computers are not necessarily theory-intense; they could also be coding-intense, which leads to UFAI. And theory-intense doesn't necessarily imply adequate concern about the AI's preferences.

Comment author: multifoliaterose 14 December 2010 03:08:41AM 1 point [-]

My idea here is the same as the one that Carl Shulman mentioned in a response to one of your comments from nine months ago.

Comment author: ata 13 December 2010 10:32:25PM *  4 points [-]

but this can turn out worse as well, ems can well launch the first arbitrary-goal AGI

That's the main thing that's worried me about the possibility of ems coming first. But it depends on who is able to upload and who wants to, I suppose. If an average FAI researcher is more likely to upload, increase their speed, and possibly make copies of themselves than an average non-FAI AGI researcher, then it seems like that would be a reduction in risk.

I'm not sure whether that would be the case — a person working on FAI is likely to consider their work to be a matter of life and death, and would want all the speed increases they could get, but an AGI researcher may feel the same way about the threat to their career and status posed by the possibility of someone else getting to AGI first. And if uploading is very expensive at first, it'll only be the most well-funded AGI researchers (i.e. not SIAI and friends) who will have access to it early on and will be likely to attempt it (if it provides enough of a speed increase that they'd consider it to be worth it).

(I originally thought that uploading would be of little to no help in increasing one's own intelligence (in ways aside from thinking the same way but faster), since an emulation of a brain isn't automatically any more comprehensible than an actual brain, but now I can see a few ways it could help — the equivalent of any kind of brain surgery could be attempted quickly, freely, and reversibly, and the same could be said for experimenting with nootropic-type effects within the emulation. So it's possible that uploaded people would get somewhat smarter and not just faster. Of course, that's only soft self-improvement, nowhere near the ability to systematically change one's cognition at the algorithmic level, so I'm not worried about an upload bootstrapping itself to superintelligence (as some people apparently are). Which is good, since humans are not Friendly.)

Comment author: multifoliaterose 14 December 2010 03:55:15AM 3 points [-]

There's a lot to respond to here. Some quick points:

  1. It should be borne in mind that greatly increased speed and memory may by themselves strongly affect a thinking entity. I imagine that if I could think a million times as fast I would think a lot more carefully about my interactions with the outside world than I do now.

  2. I don't see any reason to think that SIAI will continue to be the only group thinking about safety considerations. If nothing else, SIAI or FHI can raise awareness of the dangers of AI within the community of AI researchers.

  3. Assuming that brain uploads precede superhuman artificial intelligence, it would obviously be very desirable to have the right sort of human uploaded first.

  4. I presently have a very dim view as to the prospects for modern day humans developing Friendly AI. This skepticism is the main reason why I think that pursuing whole-brain emulations first is more promising. See the comment by Carl that I mentioned in response to Vladimir Nesov's question. Of course, my attitude on this point is subject to change with incoming evidence.

Comment author: ata 12 December 2010 10:45:53AM *  2 points [-]

At present it looks to me like a positive singularity is substantially more likely to occur starting with whole-brain emulation than with Friendly AI.

That's an interesting claim, and you should post your analysis of it (e.g. the evidence and reasoning that you use to form the estimate that a positive singularity is "substantially more likely" given WBE).

Comment author: multifoliaterose 12 December 2010 06:09:40PM 1 point [-]

There's a thread with some relevant points (both for and against) titled Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future. I hadn't looked at the comments until just now and still have to read them all; but see in particular a comment by Carl Shulman.

After reading all of the comments I'll think about whether I have something to add beyond them and get back to you.

Comment author: CarlShulman 14 December 2010 03:07:15PM 3 points [-]

You may want to read this paper I presented at FHI. Note that there's a big difference between the probability of risk conditional on WBE coming first or AI coming first and marginal impact of effort. In particular some of our uncertainty is about logical facts about the space of algorithms and technology landscape, and some of it is about the extent and effectiveness of activism/intervention.

Comment author: multifoliaterose 14 December 2010 08:42:30PM 2 points [-]

Thanks for the very interesting reference! Is it linked on the SIAI research papers page? I didn't see it there.

Note that there's a big difference between the probability of risk conditional on WBE coming first or AI coming first and marginal impact of effort.

I appreciate this point which you've made to me previously (and which appears in your comment that I linked above!).

Comment author: Strange7 20 August 2010 03:33:40PM 4 points [-]

if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI needs to make a greater point of seeming non-weird long enough for smart outsiders to switch from "save time by evaluating surface weirdness" mode to "take seriously and evaluate arguments directly" mode.

What about less-smart people? I mean, self-motivated idealistic genius nerds are certainly necessary for the core functions of programming an FAI, but any sufficiently large organization also needs a certain number of people who mostly just file paperwork, follow orders, answer the phone, etc., and things tend to work out more efficiently when those people are primarily motivated by the organization's actual goals rather than its willingness to pay.

Comment author: HughRistik 20 August 2010 07:51:01PM *  1 point [-]

Good point. It's the people in the <130 range that SIAI needs to figure out how to attract. That's where you find people like journalists and politicians.

Comment author: wedrifid 31 August 2010 08:19:37AM 6 points [-]

It's the people in the <130 range that SIAI needs to figure out how to attract. That's where you find people like journalists and politicians.

You also find a lot of journalists and politicians in the 130 to 160 range, but the important thing with those groups is that they optimise their beliefs, and expressions thereof, for appeal to a <130-range audience.

Comment author: Eliezer_Yudkowsky 20 August 2010 07:01:08AM 10 points [-]

there seems to be something about this whole endeavour (including but not limited to Eliezer's writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!,

Yes, and it's called "pattern completion", the same effect that makes people think "Singularitarians believe that only people who believe in the Singularity will be saved".

Comment author: TheAncientGeek 17 June 2015 12:51:03PM -1 points [-]

Pattern completion isn't always wrong.

Comment author: ata 20 August 2010 06:23:07PM *  0 points [-]

I must know, have you actually encountered people who literally think that? I'm really hoping that's a comical exaggeration, but I guess I should not overestimate human brains.

Comment author: timtyler 20 August 2010 07:07:51PM *  5 points [-]

"It's basically a modern version of a religious belief system and there's no purpose to it, like why, why must we have another one of these things ... you get an afterlife out of it because you'll be on the inside track when the singularity happens - it's got all the trappings of a religion, it's the same thing." - Jaron here.

Comment author: Eliezer_Yudkowsky 20 August 2010 06:43:32PM 5 points [-]

I've encountered people who think Singularitarians think that, never any actual Singularitarians who think that.

Comment author: ata 20 August 2010 07:14:44PM *  8 points [-]

Yeah, "people who think Singularitarians think that" is what I meant.

I've actually met exactly one something-like-a-Singularitarian who did think something-like-that — it was at one of the Bay Area meetups, so you may or may not have talked to him, but anyway, he was saying that only people who invent or otherwise contribute to the development of Singularity technology would "deserve" to actually benefit from a positive Singularity. He wasn't exactly saying he believed that the nonbelievers would be left to languish when cometh the Singularity, but he seemed to be saying that they should.

Also, I think he tried to convert me to Objectivism.

Comment author: timtyler 20 August 2010 08:13:06PM *  -1 points [-]

Technological progress has increased wealth inequality a great deal so far.

Machine intelligence probably has the potential to result in enormous wealth inequality.

Comment author: WrongBot 20 August 2010 09:19:49PM 1 point [-]

How, in a post-AGI world, would you define wealth? Computational resources? Matter?

I don't think there's any foundation for speculation on this topic at this time.

Comment author: Vladimir_Nesov 20 August 2010 09:35:47PM 1 point [-]

Control, owned by preferences.

Comment author: khafra 20 August 2010 09:34:50PM *  2 points [-]

Unless we get a hard-takeoff singleton, which is admittedly the SIAI expectation, there will be massive inequality, with a few very wealthy beings and average income barely above subsistence. Thus saith Robin Hanson, and I've never seen any significant holes poked in that thesis.

Comment author: WrongBot 20 August 2010 09:45:48PM 0 points [-]

Robin Hanson seems to be assuming that human preferences will, in general, remain in their current ranges. This strikes me as unlikely in the face of technological self-modification.

Comment author: khafra 20 August 2010 11:20:07PM 2 points [-]

I've never gotten that impression. What I've gotten is that evolutionary pressures will, in the long term, still exist--even if technological self-modification leads to a population that's 99.99% satisfied to live within strict resource consumption limits, unless they harshly punish defectors, the .01% with a drive for replication or expansion will overwhelm the rest within a few millennia, until the average income is back to subsistence. This doesn't depend on human preferences, just the laws of physics and natural selection.

Comment author: timtyler 20 August 2010 09:28:01PM *  0 points [-]

I wasn't trying to make an especially long-term prediction:

"We saw the first millionaire in 1716, the first billionaire in 1916 - and can expect the first trillionaire within the next decade - probably before 2016."

Comment author: WrongBot 20 August 2010 09:41:32PM *  5 points [-]

  1. Inflation.

  2. The richest person on earth currently has a net worth of $53.5 billion.

  3. The greatest peak net worth in recorded history, adjusted for inflation, was Bill Gates' $101 billion, which was ten years ago. No one since then has come close. A 10-fold increase in <6 years strikes me as unlikely.

  4. In any case, your extrapolated curve points to 2116, not 2016.

I am increasingly convinced that your comments on this topic are made in less than good faith.
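A minimal sketch of the extrapolation under dispute here, assuming (as the figures quoted above imply) that each 1000x jump in record personal wealth takes about as long as the 1716-to-1916 jump; the years come from timtyler's quote, and the output illustrates WrongBot's point 4:

    # Extrapolate the "first millionaire 1716, first billionaire 1916" series,
    # assuming each 1000x jump in record wealth takes the same length of time.
    first_reached = {10**6: 1716, 10**9: 1916}  # years quoted in the comment above

    interval = first_reached[10**9] - first_reached[10**6]  # 200 years per 1000x step
    first_trillionaire = first_reached[10**9] + interval

    print(first_trillionaire)  # 2116, not 2016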

Comment author: timtyler 20 August 2010 10:59:57PM *  0 points [-]

Yes, the last figure looks wrong to me too - hopefully I will revisit the issue.

Update 2011-05-30: yes: 2016 was a simple math mistake! I have updated the text I was quoting from to read "later this century".

Anyway, the huge modern wealth inequalities are well established - and projecting them into the future doesn't seem especially controversial. Today's winners in IT are hugely rich - and tomorrow's winners may well be even richer. People thinking something like they will "be on the inside track when the singularity happens" would not be very surprising.

Comment author: timtyler 20 August 2010 07:12:35PM 0 points [-]

What about the recent "forbidden topic"? Surely that is a prime example of this kind of thing.

Comment author: timtyler 20 August 2010 05:09:18PM *  7 points [-]

The outside view of the pitch:

  • DOOM! - and SOON!
  • GIVE US ALL YOUR MONEY;
  • We'll SAVE THE WORLD; you'll LIVE FOREVER in HEAVEN;
  • Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION!

Maybe there are some bits missing - but they don't appear to be critical components of the pattern.

Indeed, this time there are some extra features not invented by those who went before - e.g.:

  • We can even send you to HEAVEN if you DIE a sinner - IF you PAY MORE MONEY to our partner organisation.

Comment author: cousin_it 20 August 2010 06:45:43PM *  3 points [-]

I don't understand why this was downvoted. It does sound like an accurate representation of the outside view.

Comment author: [deleted] 14 May 2011 10:10:03PM 3 points [-]

Given that a certain fraction of comments are foolish, you can expect that an even larger fraction of votes are foolish, because there are fewer controls on votes (e.g. a voter doesn't risk his reputation while a commenter does).

Comment author: timtyler 30 May 2011 08:23:31AM *  0 points [-]

Yes: votes should probably not be anonymous - and on "various other" social networking sites, they are not.

Comment author: rhollerith_dot_com 30 May 2011 05:01:42PM *  0 points [-]

Metafilter, for one. It is hard for an online community to avoid becoming worthless, but Metafilter has avoided that for 10 years.

Comment author: rhollerith_dot_com 15 May 2011 02:54:33AM *  2 points [-]

Which is why Slashdot (which was a lot more worthwhile in the past than it is now) introduced voting on how other people vote (which Slashdot called metamoderation). Worked pretty well: the decline of Slashdot was mild and gradual compared to the decline of almost every other social site that ever reached Slashdot's level of quality.

Comment author: Nick_Tarleton 20 August 2010 09:41:21PM 7 points [-]

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

Comment author: timtyler 14 May 2011 04:09:50PM *  2 points [-]

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

If this particular critique has been made more clearly elsewhere, perhaps let me know, and I will happily link to there in the future.

Update 2011-05-30: There's now this recent article: The “Rapture” and the “Singularity” Have Much in Common - which makes a rather similar point.

Comment author: Vladimir_Nesov 20 August 2010 09:23:43PM *  12 points [-]

This whole "outside view" methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).

In many cases outside view, and in particular reference class tennis, is a form of filtering the evidence, and thus "not technically" lying, a tool of anti-epistemology and dark arts, fit for deceiving yourself and others.

Comment author: Unknowns 20 August 2010 07:30:14PM 4 points [-]

It may have been downvoted for the caps.

Comment author: Perplexed 20 August 2010 07:12:44PM 3 points [-]

Perhaps downvoted for suggesting that the salvation-for-cash meme is a modern one. I upvoted, though.

Comment author: timtyler 20 August 2010 07:20:07PM 0 points [-]

Hmm - I didn't think of that. Maybe deathbed repentance is similar as well - in that it offers sinners a shot at eternal bliss in return for public endorsement - and maybe a slice of the will.

Comment author: CarlShulman 20 August 2010 05:16:31PM *  9 points [-]

Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION.

This one isn't right, and is a big difference between religion and threats like extinction-level asteroids or AI disasters: one can free-ride if that's one's practice in collective action problems.

Also: Rapture of the Nerds, Not

Comment author: Emile 20 August 2010 09:59:05AM 2 points [-]

This is discussed in Imaginary Positions.

Comment author: Jordan 20 August 2010 06:02:24AM *  3 points [-]

Honestly, I don't think Eliezer would look overly eccentric if it weren't for LessWrong/Overcomingbias. Comp sci is notoriously eccentric, AI research possibly more so. The stigma against Eliezer isn't from his ideas, it isn't from his self confidence, it's from his following.

Kurzweil is a more dulled case: he has good ideas, but is clearly sensational; he has a large following, but that following isn't nearly as dedicated as the one to Eliezer (not necessarily to Eliezer himself, but to his writings and the "practicing of rationality"). And the effect? I have a visceral distaste whenever I hear someone from the Kurzweil camp say something pro-singularity. It's very easy for me to imagine that, if I didn't already put stock in the notion of a singularity, hearing a Kurzweilian talk would bias me against the idea.

Nonetheless, it may very well be the case that Kurzweil has done a net good to the singularity meme (and perhaps net harm to existential risk), spreading the idea wide and far, even while generating negative responses. Is the case with Eliezer the same? I don't know. My gut says no. Taking existential risk seriously is a much harder meme to catch than believing in a dumbed down version of the singularity.

My intuition is that Eliezer by himself, although abrasive in presentation, isn't turning people off by his self confidence and grandiosity. On the contrary, I -- and I suspect many -- love to argue with intelligent people with strong beliefs. In this sense, Eliezer's self assurance is good bait. On the other hand, when someone with inferior debating skills goes around spouting off the message of someone else, that, to me, is purely repulsive: I have no desire to talk with those people. They're the people spouting off Aether nonsense on physics forums. There's no status to be won, even on the slim chance of victory.

Finally, aside from Eliezer as himself and Eliezer through the proxy of others, there's also Eliezer as a figurehead of SIAI. Here things are different as well, and Eliezer is again no longer merely himself. He speaks for an organisation, and, culturally, we expect serious organisations to temper their outlandish claims. Take cancer research: presumably all researchers want to cure cancer. Presumably at least some of them are optimistic and believe we actually will. But we rarely hear this, and we never hear it from organizations.

I think SIAI, and Eliezer in his capacity as a figurehead, probably should temper their claims as well. The idea of existential risks from AI is already pervasive. Hollywood took care of that. What remains is a battle of credibility.

(Unfortunately, I really don't know how to go about tempering claims with the previous claims already on permanent record. But maybe this isn't as important as I think it is.)

Comment author: ata 20 August 2010 06:22:50AM *  2 points [-]

Honestly, I don't think Eliezer would look overly eccentric if it weren't for LessWrong/Overcomingbias. Comp sci is notoriously eccentric, AI research possibly more so. The stigma against Eliezer isn't from his ideas, it isn't from his self confidence, it's from his following.

Would you include SL4 there too? I think there were discussions there years ago (well before OB, and possibly before Kurzweil's overloaded Singularity meme complex became popular) about the perception of SIAI/Singularitarianism as a cult. (I wasn't around for any such discussions, but I've poked around in the archives from time to time. Here is one example.)

Comment author: Eliezer_Yudkowsky 20 August 2010 03:46:39AM 20 points [-]

Unknown reminds me that Multifoliaterose said this:

The modern world is sufficiently complicated so that no human no matter how talented can have good reason to believe himself or herself to be the most important person in human history without actually doing something which very visibly and decisively alters the fate of humanity. At present, anybody who holds such a belief is suffering from extreme delusions of grandeur.

This makes explicit something I thought I was going to have to tease out of multi, so my response would roughly go as follows:

  • If no one can occupy this epistemic state, that implies something about the state of the world - i.e., that it should not lead people into this sort of epistemic state.
  • Therefore you are deducing information about the state of the world by arguing about which sorts of thoughts remind you of your youthful delusions of messianity.
  • Reversed stupidity is not intelligence. In general, if you want to know something about how to develop Friendly AI, you have to reason about Friendly AI, rather than reasoning about something else.
  • Which is why I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me. In other words, I am reluctant to argue on this level not just for the obvious political reasons (it's a sure loss once the argument starts), but because you're trying to extract information about the real world from a class of arguments that can't possibly yield information about the real world.
  • That said, as far as I can tell, the world currently occupies a ridiculous state of practically nobody working on problems like "develop a reflective decision theory that lets you talk about self-modification". I agree that this is ridiculous, but seriously, blame the world, not me. Multi's principle would be reasonable only if the world occupied a much higher level of competence than it in fact does, a point which you can further appreciate by, e.g., reading the QM sequence, or counting cryonics signups, showing massive failure on simpler issues.
  • That reflective decision theory actually is key to Friendly AI is something I can only get information about by thinking about Friendly AI. If I try to get information about it any other way, I'm producing noise in my brain.
  • We can directly apply multi's stated principle to conclude that reflective decision theory cannot be known to be critical to Friendly AI. We were mistaken to start working on it; if no one else is working on it, it must not be knowably critical; because if it were knowably critical, we would occupy a forbidden epistemic state.
  • Therefore we have derived knowledge about which problems are critical in Friendly AI by arguing about personal psychology.
  • This constitutes a reductio of the original principle. QEA. (As was to be argued.)

Comment author: multifoliaterose 20 August 2010 06:39:52PM 0 points [-]

I agree with khafra. Your response to my post is distortionary. The statement which you quote was a statement about the reference class of people who believe themselves to be the most important person in the world. The statement which you quote was not a statement about FAI.

Any adequate response to the statement which you quote requires that you engage with the last point that khafra made:

Whether this likelihood ratio is large enough to overcome the evidence on AI-related existential risk and the paucity of serious effort dedicated to combating it is an open question.

You have not satisfactorily addressed this matter.

Comment author: Furcas 21 August 2010 03:36:59PM *  4 points [-]

It looks to me like Eliezer gave your post the most generous interpretation possible, i.e. that it actually contained an argument attempting to show that he's deluding himself, rather than just defining a reference class and pointing out that Eliezer fits into it. Since you've now clarified that your post did nothing more than that, there's not much left to do except suggest you read all of Eliezer's posts tagged 'FAI', and this.

Comment author: Unknowns 20 August 2010 09:27:13AM *  1 point [-]

Even if almost everything you say here is right, it wouldn't mean that there is a high probability that if you are killed in a car accident tomorrow, no one else will think about these things (reflective decision theory and so on) in the future, even people who know nothing about you personally. As Carl Shulman points out, if it is necessary to think about these things it is likely that people will, when it becomes more urgent. So it still wouldn't mean that you are the most important person in human history.

Comment author: Jonathan_Graehl 20 August 2010 04:18:46AM *  4 points [-]

Upvoted for being clever.

You've (probably) refuted the original statement as an absolute.

You're deciding not to engage the issue of hubris directly.

Does the following paraphrase your position:

  1. Here's what I (and also part of SIAI) intend to work on

  2. I think it's very important (and you should think so for reasons outlined in my writings)

  3. If you agree with me, you should support us

? If so, I think it's fine for you to not say the obvious (that you're being quite ambitious, and that success is not assured). It seems like some people are really dying to hear you say the obvious.

Comment author: wedrifid 20 August 2010 10:08:09AM *  14 points [-]

Upvoted for being clever.

That's interesting. I downvoted it for being clever. It was a convoluted elaboration of a trivial technicality that only applies if you make the most convenient (for Eliezer) interpretation of multi's words. This kind of response may win someone a debating contest in high school but it certainly isn't what I would expect from someone well versed in the rationalism sequences, much less their author.

I don't pay all that much attention to what multi says (no offence intended to multi) but I pay close attention to what Eliezer does. I am overwhelmingly convinced of Eliezer's cleverness and brilliance as a rationalism theorist. Everything else, well, that's a lot more blurry.

Comment author: Furcas 20 August 2010 10:31:53AM *  2 points [-]

I don't think Eliezer was trying to be clever. He replied to the only real justification multi offered for why we should believe that Eliezer is suffering from delusions of grandeur. What else is he supposed to do?

Comment author: khafra 20 August 2010 04:49:22PM 1 point [-]

As Graehl and wedrifid observed, Eliezer responded as if the original statement were an absolute. He applied deductive reasoning and found a reductio ad absurdum. But if, instead of an absolute, you see multifoliaterose's characterization as a reference class: "People who believe themselves to be one of the few most important in the world without having already done something visible and obvious to dramatically change it," then membership in that class can lower, by a large likelihood ratio, the probability that Eliezer is, in fact, that important.

Whether this likelihood ratio is large enough to overcome the evidence on AI-related existential risk and the paucity of serious effort dedicated to combating it is an open question.

Comment author: wedrifid 20 August 2010 12:00:48PM 5 points [-]

I got your reply and respect your position. I don't want to engage too much here since it would overlap with discussion surrounding Eliezer's initial reply and potentially be quite frustrating.

What I would like to see is multifoliaterose giving a considered response to the "If not, why not?" question in that link. That would give Eliezer the chance to respond to the meat of the topic at hand. Eliezer has been given a rare opportunity. He can always write posts about himself, giving justifications for whatever degree of personal awesomeness he claims. That's nothing new. But in this situation it wouldn't be perceived as Eliezer grabbing the megaphone for his own self-gratification. He is responding to a challenge, answering a request.

Why would you waste the chance to, say, explain the difference between "SIAI" and "Eliezer Yudkowsky"? Or at least give some treatment of p(someone other than Eliezer Yudkowsky is doing the most to save the world). Better yet, take that chance to emphasise the difference between p(FAI is the most important priority for humanity) and p(Eliezer is the most important human in the world).

Comment author: Eliezer_Yudkowsky 20 August 2010 05:03:09AM 9 points [-]

Success is not assured. I'm not sure what's meant by confessing to being "ambitious". Is it like being "optimistic"? I suppose there are people who can say "I'm being optimistic" without being aware that they are instantiating Moore's Paradox but I am not one of them.

I also disclaim that I do not believe myself to be the protagonist, because the world is not a story, and does not have a plot.

Comment author: Jonathan_Graehl 20 August 2010 10:02:35PM 0 points [-]

Yes, that was exactly the sense of "ambitious" I intended - the second-person sneering one, which, when used by oneself, would be more about signaling humility than truth. I see that's not your style.

Comment author: Perplexed 20 August 2010 05:14:49AM 1 point [-]

I hope that the double negative in the last sentence was an error.

I introduced the term "protagonist", because at that point we were discussing a hypothetical person who was being judged regarding his belief in a set of three propositions. Everyone recognized, of course, who that hypothetical person represented, but the actual person had not yet stipulated his belief in that set of propositions.