Wei_Dai comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM


Comment author: Wei_Dai 12 August 2010 05:37:16PM 10 points [-]

I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)

I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans' natural competitiveness and the possibilities inherent in technology. And yet ... we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:

He goes on to talk about intelligence amplification, and then:

Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendence.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety ... well, some of the suggestions are a little scary on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today's world, one where losers take on the winners' tricks and are co-opted into the winners' enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].

Comment author: XiXiDu 13 August 2010 08:37:04AM 2 points [-]

As I wrote in another comment, Eliezer Yudkowsky hasn't come up with anything unique. Nor can one argue that he's simply the smartest fellow around, since other people clearly came up with the same ideas before him. That was my question: why are they not signaling their support for the SIAI? Or, in case they don't know about the SIAI, why are they not using all their resources and publicity to try to stop the otherwise inevitable apocalypse?

It looks like there might be arguments against the kind of fearmongering that can be found within this community. So why is nobody inquiring into the reasons for the great silence among those who are aware of a possible singularity but nevertheless keep quiet? Maybe they know something you don't. Or are you people so sure of your phenomenal intelligence?

Comment author: CarlShulman 13 August 2010 11:47:13AM *  8 points [-]

David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.

He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field, thanks to public ignorance and the affect heuristic. Comparisons are made to the genetic engineering of agricultural crops, where public attention seems to have been harmful on net, unduly slowing the development of more productive plants.

Comment author: XiXiDu 13 August 2010 12:22:59PM *  4 points [-]

Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn't answer the primary question of my post. How do you people arrive at the estimates you state? Where can I find the details of how you arrived at your conclusions about the likelihood of those events?

If all this was supposed to be mere philosophy, I wouldn't inquire about it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky and calls for action. Is it reasonable to follow given the current state of evidence?

Comment author: CarlShulman 13 August 2010 02:14:35PM *  7 points [-]

But the SIAI is asking for the better part of your income and resources.

If you are a hard-core consequentialist altruist who doesn't balance against other less impartial desires you'll wind up doing that eventually for something. Peter Singer's "Famine, Affluence, and Morality" is decades old, and there's still a lot of suffering to relieve. Not to mention the Nuclear Threat Initiative, or funding research into DNA vaccines, or political lobbying, etc. The question of how much you're willing to sacrifice in exchange for helping various numbers of people or influencing extinction risks in various ways is separate from data about the various options. No one is forcing you to reduce existential risk (except insofar as tax dollars go to doing so), certainly not to donate.

I'll have more to say on substance tomorrow, but it's getting pretty late. My tl;dr take would be that with pretty conservative estimates on total AI risk, combined with the lack of short term motives to address it (the threat of near-term and moderate scale bioterrorism drives research into defenses, not the fear of extinction-level engineered plagues; asteroid defense is more motivated by the threat of civilization or country-wreckers than the less common extinction-level events; nuclear risk reduction was really strong only in the face of the Soviets, and today the focus is still more on nuclear terrorism, proliferation, and small scale wars; climate change benefits from visibly already happening and a social movement built over decades in tandem with the existing environmentalist movement), there are still low-hanging fruit to be plucked. [That parenthetical aside somewhat disrupted the tl;dr billing, oh well...] When we get to the point where a sizable contingent of skilled folk in academia and elsewhere have gotten well into those low-hanging fruit, and key decision-makers in the relevant places are likely to have access to them in the event of surprisingly quick progress, that calculus will change.

Comment author: Unknowns 13 August 2010 09:36:40AM -1 points [-]
Comment author: XiXiDu 13 August 2010 09:56:20AM *  5 points [-]

Absence of evidence is not evidence of absence?

There's simply no good reason to argue against cryonics. It offers a chance in the worst-case scenario, and that chance is considerably better than rotting six feet under.

Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, since the data that is already available does not justify the given claims in the first place.

Anyway, I think I might write some experts and all of the people mentioned in my post, if I'm not too lazy.

I've already got one reply, whom I'm not going to name right now. But let's first consider Yudkowsky's attitude when addressing other people:

You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong...

Now the first of those people I contacted about it:

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.

Read Moral Machines for current state of the art thinking on how to build a moral machine mind.

SIAI dogma makes sense if you ignore the uncertainties at every step of their logic. It's like assigning absolute numbers to every variable in the Drake equation and determining that aliens must be all around us in the solar system, and starting a church on the idea that we are being observed by spaceships hidden on the dark side of the moon. In other words, religious thinking wrapped up to look like rationality.

ETA

I was told the person I quoted above is stating full ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that the person might indeed not have been honest, or clueful. Otherwise I'll unnecessarily end up perpetuating possible ad hominem attacks.

Comment author: utilitymonster 13 August 2010 12:26:28PM 8 points [-]

I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.

Comment author: multifoliaterose 13 August 2010 12:44:34PM 2 points [-]

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.

I have some sympathy for your remark.

The real question is just whether SIAI has greatly overestimated at least one of the relevant probabilities. I have high confidence that the SIAI staff have greatly overestimated their ability to have a systematically positive impact on existential risk reduction.

Comment author: utilitymonster 13 August 2010 01:07:20PM 3 points [-]

Have you read Nick Bostrom's paper, Astronomical Waste? You don't have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.

Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough. (I agree that this kind of argument is worrisome; maybe expected utility theory or utilitarianism breaks down with these huge numbers and tiny probabilities, but it is worth thinking about.)
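The arithmetic behind that claim can be sketched in a few lines. The figures below are illustrative assumptions only (the 10^38 lives-per-century figure is in the spirit of Bostrom's Astronomical Waste estimate, not a number from this thread):

```python
# Back-of-envelope expected-value sketch of the Astronomical Waste argument.
# Both inputs are illustrative assumptions, not claims made by the commenters.

future_lives = 1e38      # assumed potential future lives at stake (Bostrom-style figure)
risk_reduction = 1e-18   # hypothesized decrease in extinction probability

# Expected lives saved = stakes times probability shift.
expected_lives_saved = future_lives * risk_reduction

# Even a 10^-18 shift yields on the order of 10^20 expected lives,
# which is the sense in which "that could be enough".
print(f"{expected_lives_saved:.3e}")
```

The point of the sketch is only that with astronomically large stakes, a vanishingly small probability shift can still dominate an expected-value calculation; whether expected utility theory should be trusted at these extremes is exactly the worry raised above.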

If you're sold on x-risk, are there some candidate other things that might have higher expectations of x-risk reductions on the margin (after due reflection)? (I'm not saying SIAI clearly wins, I just want to know what else you're thinking about.)

Comment author: multifoliaterose 13 August 2010 04:13:30PM *  2 points [-]

Have you read Nick Bostrom's paper, Astronomical Waste? You don't have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.

Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough.

I agree with you about what you say above. I personally believe that it is possible for individuals to decrease existential risk by more than 10^(-18) (though I know reasonable people who have at one time or other thought otherwise).

If you're sold on x-risk, are there some candidate other things that might have higher expectations of x-risk reductions on the margin (after due reflection)? (I'm not saying SIAI clearly wins, I just want to know what else you're thinking about.)

Two points to make here:

(i) Though there's huge uncertainty in judging these sorts of things and I'm by no means confident in my view on this matter, I presently believe that SIAI is increasing existential risk through unintended negative consequences. I've written about this in various comments, for example here, here and here.

(ii) I've thought a fair amount about other ways in which one might hope to reduce existential risk. I would cite the promotion and funding of an asteroid strike prevention program as a possible candidate. As I discuss here, placing money in a donor advised fund may be the best option. I wrote out much more detailed thoughts on these points which I can send you by email if you want (just PM me) but which are not yet ready for posting in public.

Comment author: CarlShulman 13 August 2010 05:56:22PM *  3 points [-]

I agree that 'poisoning the meme' is a real danger, and that SIAI has historically had both positives and negatives with respect to its reputational effects. My net expectation for it at the moment is positive, but I'll be interested to hear your analysis when it's ready. [Edit: apparently the analysis was about asteroids, not reputation.]

Here's the Fidelity Charitable Gift Fund for Americans. I'm skeptical about asteroid defense in light of recent investments in that area and the technology curve, although there is potential for demonstration effects (good and bad) with respect to more likely risks.

Comment author: thomblake 13 August 2010 06:11:11PM *  6 points [-]

read Moral Machines for current state of the art thinking on how to build a moral machine mind.

It's hardly that. Moral Machines is basically a survey; it doesn't go in-depth into anything, but it can point you in the direction of the various attempts to implement robot / AI morality.

And Eliezer is one of the people it mentions, so I'm not sure how that recommendation was supposed to advise against taking him seriously. (Moral Machines, page 192)

Comment author: thomblake 18 August 2010 08:00:20PM 3 points [-]

To follow up on this, Wendell specifically mentions EY's "friendly AI" in the intro to his new article in the Ethics and Information Technology special issue on "Robot ethics and human ethics".

Comment author: Rain 13 August 2010 01:52:45PM *  4 points [-]

[...] many reasons to doubt [...] belief system of a cult [...] haphazard musings of a high school dropout [...] never written a single computer program [...] professes to be an expert [...] crying chicken little [...] only a handful take the FAI idea seriously.

[...] dogma [...] ignore the uncertainties at every step [...] starting a church [...] religious thinking wrapped up to look like rationality.

I am unable to take this criticism seriously. It's just a bunch of ad hominem and hand-waving. What are the reasons to doubt? How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview? How is a fiercely atheist group religious at all? How is it a cult (there are lots of posts about this in the LessWrong archive)? How is it irrational?

Edit: And I'm downvoted. You actually think a reply that's 50% insult and emotionally loaded language has substance that I should be engaging with? I thought it was a highly irrational response on par with anti-cryonics writing of the worst order. Maybe you should point out the constructive portion.

Comment author: HughRistik 13 August 2010 06:23:33PM *  6 points [-]

The response by this individual seems like a summary, rather than an argument. The fact that someone writes a polemical summary of their views on a subject doesn't tell us much about whether their views are well-reasoned or not. A polemical summary is consistent with being full of hot air, but it's also consistent with having some damning arguments.

Of course, to know either way, we would have to hear this person's actual arguments, which we haven't, in this case.

How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview?

Just because a certain topic is raised, doesn't mean that it is discussed correctly.

How is a fiercely atheist group religious at all?

The argument is that their thinking has some similarities to religion. It's a common rhetorical move to compare any alleged ideology to religion, even if that ideology is secular.

How is it a cult (there are lots of posts about this in the LessWrong archive)?

The fact that EY displays an awareness of cultish dynamics doesn't necessarily mean that SIAI avoids them. Personally, I buy most of Eliezer's discussion that "every cause wants to become a cult," and I don't like the common practice of labeling movements as "cults." The net for "cult" is being drawn far too widely.

Yet I wouldn't say that the use of the word "cult" means that the individual is engaging in bad reasoning. While I think "cult" is generally a misnomer, it's generally used as short-hand for a group having certain problematic social-psychological qualities (e.g. conformity, obedience to authority). The individual could well be able to back those criticisms up. Who knows.

We would need to hear this individual's actual arguments to be able to evaluate whether the polemical summary is well-founded.

P.S. I wasn't the one who downvoted you.

Edit:

high school dropout, who has never written a single computer program

I don't know the truth of these statements. The second one seems dubious, but it might not be meant to be taken literally ("Hello World" is a program). If Eliezer isn't a high school dropout, and has written major applications, then the credibility of this writer is lowered.

Comment author: NihilCredo 15 August 2010 12:44:45AM *  2 points [-]

I believe you weren't supposed to engage that reply, which is a dismissal more than criticism. I believe you were supposed to take a step back and use it as a hint as to why the SIAI's yearly budget is 5 x 10^5 rather than 5 x 10^9 USD.

Comment author: timtyler 14 August 2010 12:43:06PM *  0 points [-]

Re: "How is it a cult?"

It looks a lot like an END OF THE WORLD cult. That is a well-known subspecies of cult - e.g. see:

http://en.wikipedia.org/wiki/Doomsday_cult

"The End of the World Cult"

The END OF THE WORLD acts as a superstimulus to human fear mechanisms - and causes caring people to rush to warn their friends of the impending DOOM - spreading the panic virally. END OF THE WORLD cults typically act by stimulating this energy - and then feeding from it. The actual value of p(DOOM) is not particularly critical for all this.

The net effect on society of the FEARMONGERING that usually results from such organisations seems pretty questionable. Some of those who become convinced that THE END IS NIGH may try and prevent it - but others will neglect their future plans, and are more likely to rape and pillage.

My "DOOM" video has more - http://www.youtube.com/watch?v=kH31AcOmSjs

Comment author: NancyLebovitz 14 August 2010 01:51:06PM 4 points [-]

Slight sidetrack:

There is, of course, one DOOM scenario (ok, one other DOOM scenario) which is entirely respectable here-- that the earth will be engulfed when the sun becomes a red giant.

That fate for the planet haunted me when I was a kid. People would say "But that's billions of years in the future" and I'd feel as though they were missing the point. It's possible that a more detailed discussion would have helped....

Recently, I've read that school teachers have a standard answer for kids who are troubled by the red giant scenario [1]-- that people will have found a solution by then.

This seems less intellectually honest than "The human race will be long gone anyway", but not awful. I think the most meticulous answer (aside from "that's the far future and there's nothing to be done about it now") is "that's so far in the future that we don't know whether people will be around, but if they are, they may well find a solution."

[1] I count this as evidence for the Flynn Effect.

Comment author: timtyler 14 August 2010 12:38:59PM *  0 points [-]

Re: "haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI."

This opinion sounds poorly researched - e.g.: "This document was created by html2html, a Python script written by Eliezer S. Yudkowsky." - http://yudkowsky.net/obsolete/plan.html

Comment author: XiXiDu 14 August 2010 01:44:59PM 6 points [-]

I posted that quote to put into perspective what others think of EY and his movement compared to what he thinks about them. Given that he thinks the same about those people, i.e. that their opinion isn't worth much and that the LW crowd is much smarter anyway, it highlights an important aspect of the almost nonexistent cooperation between him and the academics.

Comment author: jimrandomh 14 August 2010 02:12:47PM *  2 points [-]

I don't think one possibly-trivial Python script (for which I am unable to find the source code) counts as much evidence. It sets a lower bound, but a very loose one. I have no idea whether Eliezer can program, and my prior says that any given person is extremely unlikely to have real programming ability unless proven otherwise. So I assume he can't.

He could change my mind by either publishing a large software project, or taking a standardized programming test such as a TopCoder SRM and publishing his score.

EDIT: This is not meant to be a defense of obvious wrong hyperbole like "has never written a single computer program".

Comment author: timtyler 14 August 2010 03:48:40PM *  0 points [-]

Eliezer has faced this criticism before and responded (somewhere!). I expect he will figure out coding. I got better at programming over the first 15 years I was doing it. So: he may also take a while to get up to speed. He was involved in this:

http://flarelang.sourceforge.net/

Comment author: Unknowns 13 August 2010 10:03:13AM 1 point [-]

This isn't contrary to Robin's post (except what you say about cryonics). Robin was saying that there is a reluctance to criticize those things in part because the experts think they are not worth bothering with.