timtyler comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: timtyler 15 August 2010 07:35:45AM *  3 points [-]

The one "uncredible" claim mentioned - about Eliezer being "hit by a meteorite" - sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.

As with many charities, it is easy to think the SIAI might be having a negative effect - simply because it occupies the niche of another organisation that could be doing a better job - but what to do? Things could be worse as well - probably much worse.

Comment author: multifoliaterose 15 August 2010 08:27:35AM 4 points [-]

The point of my post is not that there's a problem of SIAI staff making claims that you find uncredible; the point is that there's a problem of SIAI making claims that people who are not already sold on taking existential risk seriously find uncredible.

Comment author: Wei_Dai 15 August 2010 09:23:35AM 6 points [-]

Can you give a few more examples of claims made by SIAI staff that people find uncredible? Because it's probably not entirely clear to them (or to others interested in existential risk advocacy) what kind of things a typical smart person would find uncredible.

Looking at your previous comments, I see that another example you gave was that AGI will be developed within the next century. Any other examples?

Comment author: whpearson 15 August 2010 12:34:18PM *  8 points [-]

Things that strain my credulity:

  • AI will be developed by a small team (at this time) in secret
  • That formal theory involving infinite/near-infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. galaxy-sized computers), but otherwise it strains credulity.

Comment author: Wei_Dai 16 August 2010 11:52:18PM *  6 points [-]

AI will be developed by a small team (at this time) in secret

I find this very unlikely as well, but Anna Salamon once put it as something like "9 Fields-Medalist types plus (an eventual) methodological revolution", which made me raise my probability estimate from "negligible" to "very small" - and that, I think, given the potential payoffs, is enough for someone to be exploring the possibility seriously.

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well. There's a lot of work on making AIXI practical, for example (which may be disastrous if they succeed, since AIXI wasn't designed to be Friendly).

If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.

Comment author: whpearson 17 August 2010 12:56:22AM 1 point [-]

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

The impression I have lingering from the SL4 days is that he thinks it is the only way to do AI safely.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well.

They generally only had infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn't encourage you to ask which hypotheses should be processed. You just sweep that issue under the carpet and do them all.

Comment author: PaulAlmond 17 August 2010 03:05:05AM 0 points [-]

I don't see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we need that.

As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model - how much effect they have on predicted values that are of interest - and we would tend to keep those parts of the model that have high relevance. If we "grow" the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance "regions".
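A toy sketch of the relevance-driven model growth described above (all names, the neighbor structure, and the relevance measure here are illustrative assumptions, not anything specified in the comment):

```python
import random

random.seed(0)

# Toy stand-in for reality: each candidate feature has some hidden relevance
# to the predicted values we care about.
hidden_relevance = {f: random.random() for f in range(20)}

# Features "near" an existing feature, i.e. the directions the model can grow in.
neighbors = {f: [(f + d) % 20 for d in (1, 2, 3)] for f in range(20)}

def measured_relevance(feature):
    # Stand-in for "how much effect this feature has on predicted values of interest".
    return hidden_relevance[feature]

def grow_model(seed_features, threshold=0.5, steps=10):
    """Grow the model outward, keeping only high-relevance features."""
    model = set(seed_features)
    frontier = set(seed_features)
    for _ in range(steps):
        # Explore only outward from the high-relevance regions already in the model.
        candidates = {n for f in frontier for n in neighbors[f]} - model
        kept = {c for c in candidates if measured_relevance(c) >= threshold}
        model |= kept
        frontier = kept
        if not frontier:  # nowhere promising left to grow
            break
    return model

print(sorted(grow_model(seed_features=[0])))
```

Every non-seed feature the sketch keeps passes the relevance threshold, and exploration stops once no high-relevance neighbors remain - matching the idea that growing out from high-relevance regions tends to find further high-relevance regions.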

Comment author: whpearson 17 August 2010 12:01:27PM 0 points [-]

I feel we are going to get stuck in an AI bog. However... This seems to neglect linguistic information.

Let us say that you were interested in getting somewhere. You know you have a bike and a map and have cycled there many times.

What is the relevance of the fact that the word "car" refers to cars to this model? None directly.

Now if I was to tell you that "there is a car leaving at 2pm", then it would become relevant assuming you trusted what I said.

A lot of real world AI is not about collecting examples of basic input output pairings.

AIXI deals with this by simulating humans and hoping that that is the smallest world.

Comment author: JoshuaZ 17 August 2010 01:02:02AM 0 points [-]

That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.

I'm not sure why that stretches your credibility. Note, for example, that computability results often tell us not to try something: the Turing Halting Theorem and related results mean that we know we can't make a program that will, in general, tell whether an arbitrary program will crash.

Similarly, theorems about the asymptotic ability of certain algorithms matters. A strong version of P != NP would have direct implications about AIs trying to go FOOM. Similarly, if trapdoor function or one-way functions exist they give us possible security procedures with handling young general AI.

Comment author: whpearson 17 August 2010 01:53:19AM 0 points [-]

I'm mainly talking about Solomonoff induction here, especially when Eliezer uses it as part of his argument about what we can expect from superintelligences, or searching through 3^^^3 proofs without blinking an eye.

Comment author: JoshuaZ 17 August 2010 02:08:36AM 1 point [-]

The point in the linked post doesn't deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast moderately bright intelligence could be dangerous.

Comment author: whpearson 17 August 2010 10:42:05AM 0 points [-]

Is it a good intuition pump? To me it is like using a TM as an intuition pump about how much memory we might have in the future.

We will never have anywhere near infinite memory. We will have a lot more than what we have at the moment, but the concept of the TM is not useful in gauging the scope and magnitude.

I'm trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.

Comment author: JanetK 15 August 2010 10:48:01AM 8 points [-]

Is accepting multi-universes important to the SIAI argument? There are a very, very large number of smart people who know very little about physics. They give lip service to quantum theory and relativity because of authority - but they do not understand them. Mentioning multi-universes just slams a door in their minds. If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.

Comment author: wedrifid 15 August 2010 11:04:58AM 4 points [-]

Is accepting multi-universes important to the SIAI argument?

Definitely not, for the purposes of public relations at least. It may make some difference when actually doing AI work.

If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.

Good point. Cryonics probably comes with a worse Sci. Fi. vibe but is unfortunately less avoidable.

Comment author: multifoliaterose 15 August 2010 11:13:21AM 5 points [-]

Cryonics probably comes with a worse Sci. Fi. vibe

This is a large part of what I implicitly had in mind making my cryonics post (which I guess really rubbed you the wrong way). You might be interested in taking a look at the updated version if you haven't already done so - I hope it's more clear than it was before.

Comment author: multifoliaterose 15 August 2010 10:27:00AM 4 points [-]

Good question. I'll get back to you on this when I get a chance, I should do a little bit of research on the topic first. The two examples that you've seen are the main ones that I have in mind that have been stated in public, but there may be others that I'm forgetting.

There are some other examples that I have in mind from my private correspondence with Michael Vassar. He's made some claims which I personally do not find at all credible. (I don't want to repeat these without his explicit permission.) I'm sold on the cause of existential risk reduction, so the issue in my top level post does not apply here. But in the course of the correspondence I got the impression that he may say similar things in private to other people who are not sold on the cause of existential risk.

Comment deleted 15 August 2010 02:35:51PM *  [-]
Comment author: Jonathan_Graehl 16 August 2010 10:29:49PM *  4 points [-]

The trauma caused by imagining torture blackmail is hard to relate to for most people (including me), because it's so easy to not take an idea like infinite torture blackmail seriously, on the grounds that the likelihood of ever actually encountering such a scenario seems vanishingly small.

I guess those who are disturbed by the idea have excellent imaginations, or more likely, emotional systems that can be fooled into trying to evaluate the idea of infinite torture ("hell").

Therefore, I agree that it's possible to make fun of people on this basis. I myself lean more toward accommodation. Sure, I think those hurt by it should have just avoided the discussion, but perhaps having EY speak for them and officially ban something gave them some catharsis. I feel like I'm beginning to make fun now, so I'll stop.

Comment author: ciphergoth 16 August 2010 06:30:03PM 5 points [-]

The form of blanking out you use isn't secure. Better to use pure black rectangles.

Comment author: RobinZ 16 August 2010 06:48:13PM *  4 points [-]
Comment author: SilasBarta 16 August 2010 06:55:01PM *  20 points [-]

Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.

Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.

Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."

Comment author: wedrifid 16 August 2010 07:20:35PM 3 points [-]

Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."

Brilliant.

Comment author: wedrifid 16 August 2010 07:22:37PM 2 points [-]

Nice link. (It's always good to read articles where 'NLP' doesn't refer, approximately, to Jedi mind tricks.)

Comment author: timtyler 16 August 2010 06:38:41PM 3 points [-]

That document was knocking around on a public website for several days.

Using very much security would probably be pretty pointless.

Comment author: timtyler 15 August 2010 02:44:35PM *  5 points [-]

Perhaps that was a marketing effort.

After all, everyone likes to tell the tale of the forbidden topic and the apprentice being insulted. You are spreading the story around now - increasing the mystery and intrigue of these mythical events about which (almost!) all records have been deleted. The material was left in public for a long time - creating plenty of opportunities for it to "accidentally" leak out.

By allowing partly obfuscated forbidden materials to emerge, you may be contributing to the community folklore, spreading and perpetuating the intrigue.

Comment author: Jonathan_Graehl 16 August 2010 10:58:00PM 1 point [-]

Sure, but it was fair of him to give evidence when challenged, whether or not he baited that challenge.

Comment author: jimrandomh 15 August 2010 02:55:20PM 4 points [-]

Please stop doing this. You are adding spaced repetition to something that I, and others, positively do not want to think about. That is a real harm and you do not appear to have taken it seriously.

Comment author: XiXiDu 15 August 2010 03:03:14PM 4 points [-]

I'm sorry, but people like Wei force me to do this, as they make this whole movement look completely down-to-earth, when in fact most people, if they knew about the full complexity of beliefs within this community, would laugh out loud.

Comment author: wedrifid 16 August 2010 03:56:47AM *  11 points [-]

You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn't reflect well on the SIAI if their authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.

Comment author: HughRistik 16 August 2010 11:57:41PM 4 points [-]

That is incredibly bad PR.

Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR. Any organization soliciting donations should keep this principle in mind.

Comment author: rhollerith_dot_com 17 August 2010 03:52:51PM *  7 points [-]

Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR.

So let me see if I understand: if an organization uses its income to make a major scientific breakthrough or to prevent a million people from starving, but does not pay enough attention to avoiding bad PR with the result that the organization ends (but the productive employees take the skills they have accumulated there to other organizations), that is a bad organization, but if an organization in the manner of most non-profits focuses on staying in existence as long as possible to provide a secure personal income for its leaders, which entails paying close attention to PR, that is a good organization?

Well, let us take a concrete example: Doug Engelbart's lab at SRI International. Doug wasted too much time mentoring the young researchers in his lab with the result that he did not pay enough attention to PR and his lab was forced to close. Most of the young researchers got jobs at Xerox PARC and continued to develop Engelbart's vision of networked personal computers with graphical user interfaces, work that directly and incontrovertibly inspired the Macintosh computer. But let's not focus on that. Let's focus on the fact that Engelbart is a failure because he no longer runs an organization because the organization failed because Engelbart did not pay enough attention to PR and to the other factors needed to ensure the perpetuation of the organization.

Comment author: HughRistik 17 August 2010 05:04:01PM 1 point [-]

Yes, that would be an example. In general, organizations tend to need some level of PR to convince people to align with their goals.

Comment author: timtyler 16 August 2010 04:58:08PM *  3 points [-]

I still have a hard time believing it actually happened. I have heard that there's no such thing as bad publicity - but surely nobody would pull this kind of stunt deliberately. It just seems to be such an obviously bad thing to do.

Comment author: katydee 16 August 2010 01:02:34AM 5 points [-]

The "laugh test" is not rational. I think that, if the majority of people fully understood the context of such statements, they would not consider them funny.

Comment author: wedrifid 16 August 2010 03:45:28AM 8 points [-]

The context asked 'what kind of things a typical smart person would find uncredible'. This is a perfect example of such a thing.

Comment author: katydee 16 August 2010 10:24:26AM -1 points [-]

A typical smart person would find the laugh test credible? We must have different definitions of "smart."

Comment author: timtyler 16 August 2010 05:01:26PM 2 points [-]

The topic was the banned topic and the deleted posts - not the laugh test. If you explained what happened to an outsider - they would have a hard time believing the story - since the explanation sounds so totally crazy and ridiculous.

Comment author: wedrifid 16 August 2010 12:36:19PM 1 point [-]

(Voted you back up to 0 here.)

I think you are right about the laugh test itself.

Comment author: JoshuaZ 16 August 2010 01:07:42AM 1 point [-]

You don't seem to realize that claims like the ones in the post in question are a common sort of claim that makes people vulnerable to neuroses develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm. Please stop.

Comment author: Vladimir_M 16 August 2010 04:27:11AM *  20 points [-]

JoshuaZ:

You don't seem to realize that claims like the ones in the post in question are a common sort of claim that makes people vulnerable to neuroses develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm.

However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they're stated so clearly and poignantly that they're difficult to brush off or rationalize away. Or, to take another example, it's very hard to scare me with hypotheticals, but the post "The Strangest Thing An AI Could Tell You" and the subsequent thread came pretty close; I'm sure that at least a few readers of this blog didn't sleep well if they happened to read that right before bedtime.

So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I've failed to acquaint myself with?

Comment author: JoshuaZ 16 August 2010 03:10:15PM 6 points [-]

That's a very valid set of points and I don't have a satisfactory response.

Comment author: MatthewBaker 05 July 2011 06:20:53PM 1 point [-]

Neither do I, and I've thought a lot about religious extremism and other scary views that turn into reality when given to someone in a sufficiently horrible mental state.

Comment author: wedrifid 15 August 2010 10:34:51AM 0 points [-]

I second that question. I am sure there probably are other examples, but for the most part they wouldn't occur to me. The main examples that spring to mind are cases where Robin has disagreed with Eliezer... but that is hardly a huge step away from the SIAI mainline!

Comment author: multifoliaterose 15 August 2010 08:06:45AM *  6 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI, and make it clear to them why you're withholding funding from them, and promise to fund them if the issue is satisfactorily resolved to incentivize them to improve.

Comment author: CarlShulman 15 August 2010 10:25:10AM 2 points [-]

Will you do this?

Comment author: multifoliaterose 15 August 2010 10:35:36AM 7 points [-]

I'm definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is right now for me to be interested in funding SIAI. For me personally, it wouldn't be enough for SIAI to just take measures to avoid poisoning the meme; I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.

As things stand I prefer to hold out for a better organization. But if SIAI exhibited transparency and accountability at levels similar to those of GiveWell (welcoming and publicly responding to criticism regularly, regularly posting detailed plans of action, seeking out feedback from subject matter specialists and making this public when possible, etc.) I would definitely fund SIAI and advocate that others do so as well.

Comment author: Wei_Dai 15 August 2010 10:48:30AM 11 points [-]

"transparency"? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?

Comment author: multifoliaterose 15 August 2010 11:00:18AM *  7 points [-]

I see, maybe I should have been more clear. The point of my post is that SIAI members should not express controversial views without substantiating them with abundant evidence. If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

As things stand SIAI has not provided such evidence. Eliezer himself may have such evidence, but if so he's either unwilling or unable to share it.

Comment author: CarlShulman 15 August 2010 12:41:20PM 8 points [-]

higher expected value to humanity than what virtually everybody else is doing,

For what definitions of "value to humanity" and "virtually everybody else"?

If "value to humanity" is assessed as in Bostrom's Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don't agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one's area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it's easier to do well on a metric that others are mostly not focused on optimizing.

Dispute about what best reduces existential risk, and annoyance at overly confident statements there, are further issues, but I think that asserting uncommon moral principles (which happen to rank one's activities as much more valuable than most people would rank them) is a big factor on its own.

Comment author: multifoliaterose 15 August 2010 04:20:52PM 0 points [-]

In case my previous comment was ambiguous, I should say that I agree with you completely on this point. I've been wanting to make a top level post about this general topic for a while. Not sure when I'll get a chance to do so.

Comment author: Wei_Dai 17 August 2010 12:57:54AM *  9 points [-]

There are a lot of second and higher order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important. If Eliezer had shied away from stating some of the more "uncredible" ideas because there wasn't enough evidence to convince a typical smart person, it would surely prompt questions of "what do you really think about this?" or fail to attract people who are currently interested in SIAI because of those ideas.

If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

I think you make a good point that it's important to think about PR, but I'm not at all convinced that the specific pieces of advice you give are the right ones.

Comment author: multifoliaterose 17 August 2010 05:27:28AM 5 points [-]

Thanks for your feedback. Several remarks:

You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important.

This is of course true. I myself am fairly certain that SIAI's public statements are driving away the people who it's most important to interest in existential risk.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

  • It's standard public relations practice to reveal certain information only if asked.

  • An organization that has the strongest case for room for more funding need not be an organization that's doing something of higher expected value to humanity than what everybody else is doing. In particular, I simultaneously believe that there are politicians who have higher expected value to humanity than all existential risk researchers alive and that the cause of existential risk has the greatest room for more funding.

  • One need not be confident in one's belief that funding one's organization has the highest expected value to humanity to believe that funding one's organization has the highest expected value to humanity. A major issue that I have with Eliezer's rhetoric is that he projects what I perceive to be an unreasonably high degree of confidence in his beliefs.

  • Another major issue I have with Eliezer's rhetoric is that, even putting issues of PR aside, I personally believe that funding SIAI does not have anywhere near the highest expected value to humanity out of all possible uses of money. So from my point of view, I see no upside to Eliezer making extreme claims of the sort that he has - it looks to me as though Eliezer is making false claims and damaging public relations for existential risk as a result.

I will be detailing my reasons for thinking that SIAI's research does not have high expected value in a future post.

Comment author: Vladimir_Nesov 17 August 2010 03:09:03PM 3 points [-]

One need not be confident in one's belief [...]

The level of certainty is not up for grabs. You are as confident as you happen to be, this can't be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.

Comment author: Emile 17 August 2010 03:26:37PM 2 points [-]

But it isn't perceived as so by the general public - it seems to me that the usual perception of "confidence" has more to do with status than with probability estimates.

The non-technical people I work with often say that I use "maybe" and "probably" too much (I'm a programmer - "it'll probably work" is a good description of how often it does work in practice) - as if having confidence in one's statements were a sign of moral fibre, and not a sign of miscalibration.

Actually, making statements with high confidence is a positive trait, but most people address this by increasing the confidence they express, not by increasing their knowledge until they can honestly make high-confidence statements. And our culture doesn't correct for that, because errors of calibration are not immediately obvious (as they would be if, say, we had a widespread habit of betting on various things).

Comment author: timtyler 15 August 2010 11:21:13AM 5 points [-]

If there really was "abundant evidence" there probably wouldn't be much of a controversy.

Comment author: rhollerith_dot_com 15 August 2010 12:16:41PM *  5 points [-]

Eliezer himself may have such evidence [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], but if so he's either unwilling or unable to share it.

Now that is unfair.

Since 1997, Eliezer has published (mostly on mailing lists and blogs but also in monographs) an enormous amount (at least ten novels' worth, unless I am very mistaken) of writings supporting exactly that point. Of course most of this material is technical, but unlike the vast majority of technical prose, it is accessible to non-specialists and non-initiates with enough intelligence, a solid undergraduate education as a "scientific generalist" and a lot of free time on their hands, because in his writings Eliezer is constantly "watching out for" the reader who does not yet know what he knows. (In other words, it is uncommonly good technical exposition.)

Comment author: multifoliaterose 15 August 2010 04:29:02PM *  10 points [-]

So my impression has been that the situation is that

(i) Eliezer's writings contain a great deal of insightful material.

(ii) These writings do not substantiate the idea that [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing].

I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I've done to be a good "probabilistic proof" that the points (i) and (ii) apply to the portion of his writings that I haven't read.

That being said, if there are any particular documents that you would point me to which you feel do provide a satisfactory evidence for the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.

I'm unwilling to read the whole of his opus given how much of it I've already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.

Comment author: JamesAndrix 15 August 2010 11:48:23PM *  4 points [-]

It would help to know what steps in the probabilistic proof don't have high probability for you.

For example, you might think that the singularity has a good probability of being relatively smooth and some kind of friendly, even without FAI. or you might think that other existential risks may still be a bigger threat, or you may think that Eliezer isn't putting a dent in the FAI problem.

Or some combination of these and others.

Comment author: Perplexed 16 August 2010 04:21:55AM *  7 points [-]

This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:

  1. I am skeptical that safeguards against UFAI (unFAI) will not work. In part because:
  2. I doubt that the "takeoff" will be "hard". Because:
  3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
  4. And hence an effective safeguard would be to simply not give the machine its own credit card!
  5. And in any case, the Moore's law curve for electronics does not arise from delays in thinking up clever ideas; it arises from delays in building machines to incredibly high tolerances.
  6. Furthermore, even after the machine has more hardware, it doesn't yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.
  7. And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?
  8. Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don't kill us, they will at least prevent an early singularity.

Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think that it is unlikely that the first takeoff will be a hard one; so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and I estimate something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.
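The back-of-envelope estimate above can be sketched as follows. All of the probabilities are the commenter's subjective guesses, not data:

```python
# Rough reproduction of the estimate in the edit above.
p_first_hard = 0.10      # chance the first takeoff is hard
p_any_of_24_hard = 0.30  # chance at least one of the first ~24 takeoffs is hard
p_no_safeguards = 0.10   # chance a hard takeoff happens without adequate safeguards
p_goes_rogue = 0.10      # chance a safeguardless hard takeoff goes rogue

p_disaster = p_any_of_24_hard * p_no_safeguards * p_goes_rogue
print(f"{p_disaster:.1%}")  # 0.3%

# Note: if each of 24 takeoffs independently had a 10% chance of being hard,
# the chance of at least one hard takeoff would be 1 - 0.9**24, about 92%,
# so the 30% figure implicitly treats the outcomes as strongly correlated.
print(f"{1 - 0.9**24:.0%}")
```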

Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple "tuning" change to the (soft) network connectivity parameters - changing the maximum number of inputs per "neuron" from 8 to 7, say, could have an (unexpected) effect on performance of several orders of magnitude simply by suppressing wasteful thrashing or some such thing.

Comment author: multifoliaterose 16 August 2010 03:25:14AM 6 points [-]

Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.

But for a short answer, I would say that the situation is mostly that I think that:

Eliezer isn't putting a dent in the FAI problem.

Comment author: XiXiDu 15 August 2010 12:31:35PM 4 points [-]

Can you be more specific than "it's somewhere beneath an enormous amount of 13 years of material from the very same person whose arguments are scrutinized for evidence"?

This is not sufficient to scare people to the point of having nightmares and then ask them for most of their money.

Comment author: rhollerith_dot_com 15 August 2010 01:16:59PM *  1 point [-]

Can you be more specific . . . ?

Do you want me to repeat the links people gave you 24 hours ago?

The person who was scared to the point of having nightmares was almost certainly on a weeks-long or months-long visit to the big house in California where people come to discuss extremely powerful technologies and the far future and to learn from experts on these subjects. That environment would tend to cause a person to take certain ideas more seriously than a person usually would.

Comment author: Jonathan_Graehl 16 August 2010 11:09:29PM 2 points [-]

Also, are we really discrediting people because they were foolish enough to talk about their deranged sleep-thoughts? I'd sound pretty stupid too if I remembered and advertised every bit of nonsense I experienced while sleeping.

Comment author: XiXiDu 15 August 2010 01:51:55PM *  1 point [-]

It was more than one person. Anyway, I haven't read all of the comments yet so I might have missed some specific links. If you are talking about links to articles written by EY himself where he argues about AI going FOOM, I commented on one of them.

Here is an example of the kind of transparency in the form of strict calculations, references and evidence I expect.

As I said, I'm not sure what other links you are talking about. But if you mean the kind of LW posts dealing with antipredictions, I'm not impressed. Predicting superhuman AI to be a possible outcome of AI research is not sufficient. How is this different from claiming the LHC will go FOOM? I'm sure someone like EY would be able to write a thousand posts around such a scenario, telling me that the high risk associated with the LHC going FOOM outweighs its low probability. There might be sound arguments to support this conclusion. But it is a conclusion, and a framework of arguments, based on an assumption that is itself of unknown credibility. So is it too much to ask for some transparent evidence to fortify this basic premise? Evidence that is not somewhere to be found within hundreds of posts not directly concerned with the evidence in question, but rather arguing based on the very assumption it is trying to justify?

Comment author: whpearson 15 August 2010 01:05:49PM *  3 points [-]

I'm planning to fund FHI rather than SIAI, when I have a stable income (although my preference is for a different organisation that doesn't exist).

My position is roughly this.

  • The nature of intelligence (and its capability for FOOMing) is poorly understood.

  • The correct actions to take depend upon the nature of intelligence.

As such I would prefer to fund an institute that questioned the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think that SIAI has settled upon this view.

And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that. Rather than longevity etc.

Comment author: NancyLebovitz 15 August 2010 10:24:42PM 1 point [-]

What would the charity you'd like to contribute to look like?

Comment author: whpearson 15 August 2010 11:02:19PM 2 points [-]

When I read good popular science books, the people tend to come up with some idea. Then they will test the idea to destruction. They poke and prod at the idea until it really can't be anything but what they say it is.

I want to get the same feeling off the group studying intelligence as I do from that type of research. They don't need to be running foomable AIs, but truth is entangled so they should be able to figure out the nature of intelligence from other facets of the world, including physics and the biological examples.

Questions I hope they would be asking:

Is the g factor related to the ability to absorb cultural information? That is, is people's increased ability to solve problems when they have a high g due to their being able to get more information about solving problems from cultural information sources?

If it wasn't, then it would be further evidence for something special in one intelligence over another, and it might make sense to call one more intelligent, rather than just having different initial skill sets.

If SIAI had the ethos I'd like, we'd be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound. Performing experiments where necessary. However, people have forgotten them and moved on to decision theory and the like.

Comment author: NancyLebovitz 15 August 2010 11:13:30PM 4 points [-]

Interesting points. Speaking only for myself, it doesn't feel as though most of my problem solving or idea generating approaches were picked up from the culture, but I could be kidding myself.

For a different angle, here's an old theory of Michael Vassar's-- I don't know whether he still holds it. Talent consists of happening to have a reward system which happens to make doing the right thing feel good.

Comment author: Jonathan_Graehl 16 August 2010 11:02:10PM *  0 points [-]

Talent consists of happening to have a reward system which happens to make doing the right thing feel good.

Definitely not just that. Knowing what the right thing is, and being able to do it before it's too late, are also required. And talent implies a greater innate capacity for learning to do so. (I'm sure he meant in prospect, not retrospect).

It's fair to say that some of what we identify as "talent" in people is actually in their motivations as well as their talent-requisite abilities.

Comment author: Perplexed 15 August 2010 11:35:48PM 2 points [-]

If SIAI had the ethos I'd like, we'd be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound.

And then, hypothetically, if they found that fooming is not likely at all, and that dangerous fooming can be rendered nearly impossible by some easily enforced precautions/regulations, what then? If they found that the SIAI has no particular unique expertise to contribute to the development of FAI? An organization with an ethos you would like: what would it do then? To make it a bit more interesting, suppose they find themselves sitting on a substantial endowment when they reason their way to their own obsolescence?

How often in human history have organizations announced, "Mission accomplished - now we will release our employees to go out and do something else"?

Comment author: timtyler 16 August 2010 06:09:37AM *  1 point [-]

It doesn't seem likely. The paranoid can usually find something scary to worry about. If something turns out to be not really-frightening, fear mongers can just go on to the next-most frightening thing in line. People have been concerned about losing their jobs to machines for over a century now. Machines are a big and scary enough domain to keep generating fear for a long time.

Comment author: ciphergoth 16 August 2010 08:59:27AM 2 points [-]

I think that what SIAI works on is real and urgent, but if I'm wrong and what you describe here does come to pass, the world gets yet another organisation campaigning about something no-one sane should care about. It doesn't seem like a disastrous outcome.

Comment author: NancyLebovitz 16 August 2010 08:06:36AM 1 point [-]

From a less cynical angle, building organizations is hard. If an organization has fulfilled its purpose, or that purpose turns out to be a mistake, it isn't awful to look for something useful for the organization to do rather than dissolving it.

Comment author: Perplexed 17 August 2010 03:19:13AM 2 points [-]

The American charity organization, The March of Dimes, was originally created to combat polio. Now they are involved with birth defects and other infant health issues.

Since they are the one case I know of (other than ad hoc disaster relief efforts) in which an organized charity accomplished its mission, I don't begrudge them a few additional decades of corporate existence.

Comment author: JamesAndrix 15 August 2010 11:53:58PM 0 points [-]

Then they will test the idea to destruction.

I like this concept.

Assume your theory will fail in some places, and keep pressing it until it does, or you run out of ways to test it.

Comment author: NancyLebovitz 15 August 2010 02:33:44PM 1 point [-]

FHI?

Comment author: whpearson 15 August 2010 02:53:17PM 2 points [-]

The Future of Humanity Institute.

Nick Bostrom's personal website probably gives you the best idea of what they produce.

A little too philosophical for my liking, but still interesting.

Comment author: timtyler 15 August 2010 11:36:55AM *  2 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI.

Right - but that's only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.