multifoliaterose comments on Existential Risk and Public Relations - Less Wrong

Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: timtyler 15 August 2010 07:35:45AM *  3 points [-]

The one "uncredible" claim mentioned - about Eliezer being "hit by a meteorite" - sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.

As with many charities, it is easy to think the SIAI might be having a negative effect - simply because it occupies the niche of another organisation that could be doing a better job - but what to do? Things could be worse as well - probably much worse.

Comment author: multifoliaterose 15 August 2010 08:06:45AM *  6 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI, make it clear to them why you're withholding it, and promise to fund them if the issue is satisfactorily resolved, so as to give them an incentive to improve.

Comment author: CarlShulman 15 August 2010 10:25:10AM 2 points [-]

Will you do this?

Comment author: multifoliaterose 15 August 2010 10:35:36AM 7 points [-]

I'm definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is right now for me to be interested in funding it. For me personally, it wouldn't be enough for SIAI just to take measures to avoid poisoning the meme; I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.

As things stand, I prefer to hold out for a better organization. But if SIAI exhibited transparency and accountability at levels similar to GiveWell's (welcoming and publicly responding to criticism, regularly posting detailed plans of action, seeking out feedback from subject-matter specialists and making it public when possible, etc.), I would definitely fund SIAI and advocate that others do so as well.

Comment author: Wei_Dai 15 August 2010 10:48:30AM 11 points [-]

"transparency"? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?

Comment author: multifoliaterose 15 August 2010 11:00:18AM *  7 points [-]

I see; maybe I should have been clearer. The point of my post is that SIAI members should not express controversial views without substantiating them with abundant evidence. If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

As things stand SIAI has not provided such evidence. Eliezer himself may have such evidence, but if so he's either unwilling or unable to share it.

Comment author: CarlShulman 15 August 2010 12:41:20PM 8 points [-]

higher expected value to humanity than what virtually everybody else is doing,

For what definitions of "value to humanity" and "virtually everybody else"?

If "value to humanity" is assessed as in Bostrom's Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don't agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one's area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it's easier to do well on a metric that others are mostly not focused on optimizing.

Dispute about what best reduces existential risk, and annoyance at overly confident statements on that front, are a further issue, but I think that asserting uncommon moral principles (which happen to rank one's activities as much more valuable than most people would rank them) is a big factor on its own.
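
A toy illustration of why the Astronomical Waste framing favors existential risk reduction so heavily. This is a hypothetical sketch; every number in it is an illustrative placeholder, not a figure from Bostrom's paper:

```python
# Hypothetical sketch of the Astronomical Waste argument's shape.
# All numbers below are illustrative placeholders, not estimates from the paper.

potential_future_lives = 1e16   # placeholder for the astronomical number of future lives
risk_reduction         = 1e-6   # shaving one-in-a-million off extinction probability
lives_saved_today      = 1e6    # a very large present-day humanitarian outcome

expected_future_lives_saved = potential_future_lives * risk_reduction  # 1e10

# Under this frame, even a tiny reduction in existential risk dominates
# a huge present-welfare gain by several orders of magnitude.
print(expected_future_lives_saved / lives_saved_today)  # 10000.0
```

Whether one accepts this frame is, of course, exactly the value disagreement described above.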

Comment author: multifoliaterose 15 August 2010 04:20:52PM 0 points [-]

In case my previous comment was ambiguous, I should say that I agree with you completely on this point. I've been wanting to make a top level post about this general topic for a while. Not sure when I'll get a chance to do so.

Comment author: Wei_Dai 17 August 2010 12:57:54AM *  9 points [-]

There are a lot of second- and higher-order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important. If Eliezer had shied away from stating some of the more "uncredible" ideas because there wasn't enough evidence to convince a typical smart person, it would surely have prompted questions of "what do you really think about this?" or failed to attract people who are currently interested in SIAI because of those ideas.

If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to do, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

I think you make a good point that it's important to think about PR, but I'm not at all convinced that the specific pieces of advice you give are the right ones.

Comment author: multifoliaterose 17 August 2010 05:27:28AM 5 points [-]

Thanks for your feedback. Several remarks:

You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important.

This is of course true. I myself am fairly certain that SIAI's public statements are driving away the people whom it's most important to interest in existential risk.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to do, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

• It's standard public relations practice to reveal certain information only if asked.

• An organization that has the strongest case for room for more funding need not be an organization that's doing something of higher expected value to humanity than what everybody else is doing. In particular, I simultaneously believe that there are politicians who have higher expected value to humanity than all existential risk researchers alive and that the cause of existential risk has the greatest room for more funding.

• One need not be confident in the belief that funding one's organization has the highest expected value to humanity in order to hold that belief. A major issue that I have with Eliezer's rhetoric is that he projects what I perceive to be an unreasonably high degree of confidence in his beliefs.

• Another major issue that I have with Eliezer's rhetoric is that, even putting PR aside, I personally believe that funding SIAI does not have anywhere near the highest expected value to humanity out of all possible uses of money. So from my point of view, I see no upside to Eliezer making extreme claims of the sort that he has - it looks to me as though Eliezer is making false claims and damaging public relations for existential risk as a result.

I will be detailing my reasons for thinking that SIAI's research does not have high expected value in a future post.

Comment author: Vladimir_Nesov 17 August 2010 03:09:03PM 3 points [-]

One need not be confident in one's belief [...]

The level of certainty is not up for grabs. You are as confident as you happen to be; this can't be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.

Comment author: Emile 17 August 2010 03:26:37PM 2 points [-]

But it isn't perceived that way by the general public - it seems to me that the usual perception of "confidence" has more to do with status than with probability estimates.

The non-technical people I work with often say that I use "maybe" and "probably" too much (I'm a programmer - "it'll probably work" is a good description of how often it does work in practice) - as if having confidence in one's statements were a sign of moral fibre, and not a sign of miscalibration.

Actually, making statements with high confidence is a positive trait, but most people address this by increasing the confidence they express, not by increasing their knowledge until they can honestly make high-confidence statements. And our culture doesn't correct for that, because errors of calibration are not immediately obvious (as they would be if, say, we had a widespread habit of betting on various things).
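
A minimal sketch (not from the original comment) of how a habit of scoring one's predictions, as betting would force, makes calibration errors visible; the forecasts and hit rates below are invented for illustration:

```python
# Hypothetical illustration: scoring stated probabilities against outcomes.
# A Brier score (mean squared error) rewards honest hedging over false confidence.

def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs, outcome 0 or 1.
    Lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A programmer who says "it'll probably work" (0.7) and is right 7 times in 10.
hedged = [(0.7, 1)] * 7 + [(0.7, 0)] * 3

# A colleague who says "it'll work" (0.95) with the same 70% hit rate.
confident_sounding = [(0.95, 1)] * 7 + [(0.95, 0)] * 3

print(brier_score(hedged))              # 0.21
print(brier_score(confident_sounding))  # ~0.27 -- the hedged forecaster scores better
```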

Comment author: timtyler 15 August 2010 11:21:13AM 5 points [-]

If there really were "abundant evidence", there probably wouldn't be much of a controversy.

Comment author: rhollerith_dot_com 15 August 2010 12:16:41PM *  5 points [-]

Eliezer himself may have such evidence [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], but if so he's either unwilling or unable to share it.

Now that is unfair.

Since 1997, Eliezer has published (mostly on mailing lists and blogs but also in monographs) an enormous amount (at least ten novels' worth, unless I am very mistaken) of writings supporting exactly that point. Of course most of this material is technical, but unlike the vast majority of technical prose, it is accessible to non-specialists and non-initiates with enough intelligence, a solid undergraduate education as a "scientific generalist", and a lot of free time on their hands, because in his writings Eliezer is constantly "watching out for" the reader who does not yet know what he knows. (In other words, it is uncommonly good technical exposition.)

Comment author: multifoliaterose 15 August 2010 04:29:02PM *  10 points [-]

So my impression has been that the situation is that

(i) Eliezer's writings contain a great deal of insightful material.

(ii) These writings do not substantiate the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing].

I say this having read perhaps a thousand pages of what Eliezer has written. I consider the amount of reading that I've done to be a good "probabilistic proof" that points (i) and (ii) apply to the portion of his writings that I haven't read.

That being said, if there are any particular documents that you would point me to which you feel do provide satisfactory evidence for the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.

I'm unwilling to read the whole of his opus given how much of it I've already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.

Comment author: JamesAndrix 15 August 2010 11:48:23PM *  4 points [-]

It would help to know what steps in the probabilistic proof don't have high probability for you.

For example, you might think that the singularity has a good probability of being relatively smooth and some kind of friendly even without FAI, or you might think that other existential risks may still be a bigger threat, or you may think that Eliezer isn't putting a dent in the FAI problem.

Or some combination of these and others.

Comment author: Perplexed 16 August 2010 04:21:55AM *  7 points [-]

This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:

1. I am skeptical of the claim that safeguards against UFAI (unFAI) will not work. In part because:

2. I doubt that the "takeoff" will be "hard". Because:

3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.

4. And hence an effective safeguard would be to simply not give the machine its own credit card!

5. And in any case, the Moore's law curve for electronics does not arise from delays in thinking up clever ideas; it arises from delays in building machines to incredibly high tolerances.

6. Furthermore, even after the machine has more hardware, it doesn't yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.

7. And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?

8. Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don't kill us, they will at least prevent an early singularity.

Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think that it is unlikely that the first takeoff will be a hard one - so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and I estimate something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.

Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple "tuning" change to the (soft) network connectivity parameters - changing the maximum number of inputs per "neuron" from 8 to 7, say - could have an (unexpected) effect on performance of several orders of magnitude, simply by suppressing wasteful thrashing or some such thing.
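
A minimal sketch of the arithmetic in the edit above; the probabilities are the commenter's own rough guesses, not measured data:

```python
# Back-of-the-envelope calculation from the comment above.
# All inputs are the commenter's stated guesses.

p_hard_takeoff_among_first = 0.30  # at least one of the first couple dozen takeoffs is hard
p_without_safeguards       = 0.10  # a hard takeoff occurs without adequate safeguards
p_goes_rogue               = 0.10  # a safeguardless hard takeoff actually goes rogue

p_disaster = p_hard_takeoff_among_first * p_without_safeguards * p_goes_rogue
print(f"{p_disaster:.1%}")  # 0.3%
```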

Comment author: multifoliaterose 16 August 2010 03:25:14AM 6 points [-]

Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.

But for a short answer, I would say that it's mostly that I think:

Eliezer isn't putting a dent in the FAI problem.

Comment author: XiXiDu 15 August 2010 12:31:35PM 4 points [-]

Can you be more specific than "it's somewhere in an enormous amount of 13 years of material from the very same person whose arguments are being scrutinized for evidence"?

This is not sufficient to scare people to the point of having nightmares and to ask them for most of their money.

Comment author: rhollerith_dot_com 15 August 2010 01:16:59PM *  1 point [-]

Can you be more specific . . . ?

Do you want me to repeat the links people gave you 24 hours ago?

The person who was scared to the point of having nightmares was almost certainly on a weeks-long or months-long visit to the big house in California where people come to discuss extremely powerful technologies and the far future and to learn from experts on these subjects. That environment would tend to cause a person to take certain ideas more seriously than a person usually would.

Comment author: Jonathan_Graehl 16 August 2010 11:09:29PM 2 points [-]

Also, are we really discrediting people because they were foolish enough to talk about their deranged sleep-thoughts? I'd sound pretty stupid too if I remembered and advertised every bit of nonsense I experienced while sleeping.

Comment author: XiXiDu 15 August 2010 01:51:55PM *  1 point [-]

It was more than one person. Anyway, I haven't read all of the comments yet so I might have missed some specific links. If you are talking about links to articles written by EY himself where he argues about AI going FOOM, I commented on one of them.

Here is an example of the kind of transparency I expect, in the form of strict calculations, references, and evidence.

As I said, I'm not sure what other links you are talking about. But if you mean the kind of LW posts dealing with antipredictions, I'm not impressed. Predicting superhuman AI to be a possible outcome of AI research is not sufficient. How is that different from claiming the LHC will go FOOM? I'm sure someone like EY would be able to write a thousand posts around such a scenario, telling me that the high risk associated with the LHC going FOOM outweighs its low probability. There might be sound arguments to support this conclusion. But it is a conclusion, and a framework of arguments, based on an assumption that is itself of unknown credibility. So is it too much to ask for some transparent evidence to fortify this basic premise? Evidence that is not somewhere to be found within hundreds of posts that are not directly concerned with the evidence in question, but rather argue based on the very assumption they are trying to justify?

Comment author: whpearson 15 August 2010 01:05:49PM *  3 points [-]

I'm planning to fund FHI rather than SIAI, when I have a stable income (although my preference is for a different organisation that doesn't exist).

My position is roughly this:

  • The nature of intelligence (and its capability for FOOMing) is poorly understood.

  • The correct actions to take depend upon the nature of intelligence.

As such, I would prefer to fund an institute that questions the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think that SIAI has settled upon this view.

And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that, rather than on longevity, etc.

Comment author: NancyLebovitz 15 August 2010 10:24:42PM 1 point [-]

What would the charity you'd like to contribute to look like?

Comment author: whpearson 15 August 2010 11:02:19PM 2 points [-]

When I read good popular science books, the people in them tend to come up with some idea. Then they will test the idea to destruction, poking and prodding at it until it really can't be anything but what they say it is.

I want to get the same feeling from the group studying intelligence as I do from that type of research. They don't need to be running foomable AIs, but truth is entangled, so they should be able to figure out the nature of intelligence from other facets of the world, including physics and the biological examples.

Questions I hope they would be asking:

Is the g factor related to the ability to absorb cultural information? I.e., is people's increased ability to solve problems when they have a high g due to their being able to get more information about solving problems from cultural information sources?

If it weren't, then it would be further evidence for something special in one intelligence over another, and it might make sense to call one more intelligent, rather than just saying they have different initial skill sets.

If SIAI had the ethos I'd like, we'd be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound, performing experiments where necessary. However, people have forgotten them and moved on to decision theory and the like.

Comment author: NancyLebovitz 15 August 2010 11:13:30PM 4 points [-]

Interesting points. Speaking only for myself, it doesn't feel as though most of my problem solving or idea generating approaches were picked up from the culture, but I could be kidding myself.

For a different angle, here's an old theory of Michael Vassar's - I don't know whether he still holds it. Talent consists of happening to have a reward system which happens to make doing the right thing feel good.

Comment author: Jonathan_Graehl 16 August 2010 11:02:10PM *  0 points [-]

Talent consists of happening to have a reward system which happens to make doing the right thing feel good.

Definitely not just that. Knowing what the right thing is, and being able to do it before it's too late, are also required. And talent implies a greater innate capacity for learning to do so. (I'm sure he meant in prospect, not retrospect).

It's fair to say that some of what we identify as "talent" in people is actually in their motivations as well as their talent-requisite abilities.

Comment author: Perplexed 15 August 2010 11:35:48PM 2 points [-]

If SIAI had the ethos I'd like, we'd be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound.

And then, hypothetically, if they found that fooming is not likely at all, and that dangerous fooming can be rendered nearly impossible by some easily enforced precautions/regulations, what then? If they found that the SIAI has no particular unique expertise to contribute to the development of FAI? An organization with an ethos you would like: what would it do then? To make it a bit more interesting, suppose they find themselves sitting on a substantial endowment when they reason their way to their own obsolescence?

How often in human history have organizations announced, "Mission accomplished - now we will release our employees to go out and do something else"?

Comment author: timtyler 16 August 2010 06:09:37AM *  1 point [-]

It doesn't seem likely. The paranoid can usually find something scary to worry about. If something turns out to be not really frightening, fear-mongers can just go on to the next-most-frightening thing in line. People have been concerned about losing their jobs to machines for over a century now. Machines are a big and scary enough domain to keep generating fear for a long time.

Comment author: ciphergoth 16 August 2010 08:59:27AM 2 points [-]

I think that what SIAI works on is real and urgent, but if I'm wrong and what you describe here does come to pass, the world gets yet another organisation campaigning about something no-one sane should care about. It doesn't seem like a disastrous outcome.

Comment author: NancyLebovitz 16 August 2010 08:06:36AM 1 point [-]

From a less cynical angle, building organizations is hard. If an organization has fulfilled its purpose, or that purpose turns out to be a mistake, it isn't awful to look for something useful for the organization to do rather than dissolving it.

Comment author: Perplexed 17 August 2010 03:19:13AM 2 points [-]

The American charity organization, the March of Dimes, was originally created to combat polio. Now they are involved with birth defects and other infant health issues.

Since they are the one case I know of (other than ad hoc disaster relief efforts) in which an organized charity accomplished its mission, I don't begrudge them a few additional decades of corporate existence.

Comment author: JamesAndrix 15 August 2010 11:53:58PM 0 points [-]

Then they will test the idea to destruction.

I like this concept.

Assume your theory will fail in some places, and keep pressing it until it does, or you run out of ways to test it.

Comment author: NancyLebovitz 15 August 2010 02:33:44PM 1 point [-]

FHI?

Comment author: whpearson 15 August 2010 02:53:17PM 2 points [-]

The Future of Humanity Institute.

Nick Bostrom's personal website probably gives you the best idea of what they produce.

A little too philosophical for my liking, but still interesting.

Comment author: timtyler 15 August 2010 11:36:55AM *  2 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI.

Right - but that's only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.