In response to comment by Turgurth on against "AI risk"
Comment author: CarlShulman 12 April 2012 06:03:11AM 5 points

How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?

Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation," I care a lot about catastrophic risk affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks?

Yes.

Or is this sort of thing more what the FHI does?

They spend more time on it, relatively speaking.

Achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI)

Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.

Comment author: multifoliaterose 14 April 2012 12:07:59AM 1 point

However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient" and why?
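
In expected-utility terms, the break-even point being asked about is the probability p at which the two options come out equal. A minimal sketch of that indifference condition, with purely hypothetical utility numbers (none of them come from the thread):

```python
# Break-even probability in a bare expected-utility model.
# All utility values below are hypothetical placeholders.
n_lives = 100e12      # 100 trillion potential fantastic lives
u_fantastic = 1.0     # utility of one fantastic life (arbitrary units)
u_malaria = 0.01      # assumed utility of improving one malaria
                      # patient's quality of life, in the same units

# Indifference: p * n_lives * u_fantastic == u_malaria
p_star = u_malaria / (n_lives * u_fantastic)
print(f"break-even probability: {p_star:.0e}")  # 1e-16
```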

Comment author: Vladimir_M 19 March 2012 03:25:51AM 13 points

Nordhaus's position seems to me stronger than you make it out to be. Here's the thing: even under Soviet repression, some academics risked their lives to speak out. You'd expect at least that much speaking out among academics in the relevant fields when all they have to risk is their academic careers. Yet, in the relevant disciplines, one hardly sees any at all.

The trouble is, the situation is fundamentally different here. If there existed some sort of crude open attempt to dictate official dogma, as in the Soviet Union, I have no doubt that a small but still non-zero minority would speak out against it, no matter what the consequences. However, in the modern academic system, there is no such thing -- rather, there is a complex system of subtle but strong perverse incentives that lead to systematic biases and a gradual drift of the academic mainstream away from reality. (Of course, the magnitude of these problems varies greatly across different fields.)

In this situation, a contrarian faces a choice where making fundamental criticism of the state of the field won't invite any open persecution or accusations of heresy, but it will lead to professional marginalization and ruined career prospects without making any useful impact at all. After all, is there a more surefire way to get derided as a crackpot than to claim that accredited experts are failing to appreciate your insight? (Of course, in a field where the mainstream is correct, as in most of the hard sciences, this is a perfectly good heuristic.) So the choice isn't between conformity and heroic defiance, but between conformity -- best achieved by internalizing the mainstream biases -- and becoming a marginalized crackpot who invites only ridicule from anyone of any consequence.

Now, all this may sound like theorizing without evidence. However, in practice we do see whole academic fields where even a basic rational scrutiny of the academic mainstream shows that it's seriously divorced from reality -- and yet, we see no academic insiders screaming this awful truth from the rooftops. The occasional contrarians who mount fundamental criticism do so with a tacit understanding that they've destroyed their career prospects in academia and closely connected institutions, and they are safely ignored or laughed off as crackpots by the mainstream. (To give a concrete example, large parts of economics clearly fit this description.)

Similarly, if repression of some form were serious, one would expect the tenure system to leave more people free to speak out, and one would expect far more vocal expressions of dissent from tenured professors than from non-tenured faculty; but there doesn't seem to be such a pattern.

This is true only under the assumption that the tenure process doesn't screen thoroughly for people who have internalized all the mainstream biases deeply and honestly.

Mind you, this isn't as outrageous as it may sound. Consider for example a physics department that grants tenure to someone who is in fact a secret (say) relativity crackpot, and who then proceeds to peddle his nonsense with an inalienable academic title and departmental affiliation. This would be an absolute disaster, so physics departments can be expected to weed out prospective tenure candidates ruthlessly if they show any inclination for believing crackpot ideas, and you can't blame them for it.

Now, consider the same problem in a field where the mainstream is heavily biased. A department in this field is faced with the same problem, except that now the dangerous "crackpot" ideas may in fact be closer to reality than the mainstream. However, there is no independent outside authority that could ever confirm this: the biased mainstream consensus is, by definition, what all the credentialed high-status experts will say, and what the general public will use to decide who is an expert and who a crackpot. So a tenure candidate again gets weeded out at the slightest sign of ideas outside the mainstream bounds, except that now these bounds are seriously remote from reality.

Of course, it's always possible in principle that a contrarian might completely hide his views until he gets tenure, but such a grand feat of duplicity would be far beyond ordinary human powers. (Note that the bias of the mainstream experts doesn't at all mean that they are stupid!) It's also possible that a tenured exponent of the orthodoxy might change his mind under the weight of evidence, but people will very rarely accept a truth that places their life's work and accomplishments in a negative light. (Not to mention all the positive incentives, far beyond the guaranteed professorial title and job security, that tenured professors have for maintaining good standing with the mainstream.)

Comment author: multifoliaterose 30 March 2012 10:07:38PM 3 points

The occasional contrarians who mount fundamental criticism do so with a tacit understanding that they've destroyed their career prospects in academia and closely connected institutions, and they are safely ignored or laughed off as crackpots by the mainstream. (To give a concrete example, large parts of economics clearly fit this description.)

I don't find this example concrete. I know very little about economics ideology. Can you give more specific examples?

Comment author: steven0461 19 March 2012 06:05:20PM 4 points

The claim that nuclear winter is an existential risk needs additional justification.

Comment author: multifoliaterose 19 March 2012 06:30:54PM 9 points

It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it's impossible to recover (e.g. because we've already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance, conditional on nuclear winter, is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat, and that the probability seems much lower than that of x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.

Comment author: Yvain 02 March 2012 04:58:26PM 42 points

From a simple utilitarian perspective, identifiability is bias. By increasing altruism toward the identifiable victims, it may reduce altruism toward the unidentified ones, who are often the ones most in need of help. On the other hand, it could also increase overall altruism, by making people more willing to incur greater personal costs to help the identifiable victims.

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 percent which you can reach and then feel good about yourself.

Then identifiable charity succeeds not just because it attaches a face to people, but also because it avoids the slippery slope. If we're told we need to donate to save "baby Jessica", it's very easy to donate exactly as much money as is necessary to help save baby Jessica and then stop. The same is true of natural disasters; if there's an earthquake in Haiti, that means we can donate money to Haiti today but not be under any consistency-related obligations to do so again until the next earthquake. If Haiti is just a horrible impoverished country, then there's no reason to donate now as opposed to any other time, and this is true for all possible "now"s.

Feedback appreciated as I've been planning to make a top-level post about this if I ever get time.

Comment author: multifoliaterose 04 March 2012 02:51:43AM 0 points

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 percent which you can reach and then feel good about yourself.

There's another option which I think may be better for some people (but I don't know, because it hasn't been much explored). One can stagger one's donations over time (say, on a quarterly basis) and adjust the amount one gives according to how past donations have felt. This seems like it may locally maximize the amount one gives, subject to the constraint of avoiding moral burnout.

If one feels uncomfortable with the amount that one is donating because it's interfering with one's lifestyle, one can taper off. On the flip side, I've found that donating gives the same pleasure that buying something does: a sense of empowerment. Buying a new garment that one realistically isn't going to wear, or a book that one realistically isn't going to read, feels good, but probably not as good as donating does. This is a pressure toward donating more.

Comment author: multifoliaterose 03 March 2012 01:06:27AM 1 point

Cue: Non-contingency of my arguments (such that the same argument could be applied to argue for conclusions which I disagree with).

Comment author: AnnaSalamon 02 March 2012 07:32:37AM 16 points

Cue for noticing rationalization: I find my mouth responding with a "no" before stopping to think or draw breath.

(Example: Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]" Me: Wait, my mouth just moved without me being at all curious as to how question three will play out, nor about what Bob is seeing in question three. I should call an interrupt here.)

Comment author: multifoliaterose 03 March 2012 01:02:22AM 1 point

Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"

It's probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.

Comment author: AnnaSalamon 15 February 2012 06:13:45PM 17 points

Though I know Anna is going to frown on me for advocating this path...

Argh, no, I'm not going to advocate ignoring one's quirky interests to follow one's alleged duty. My impression is more like fiddlemath's, below. You don't want to follow shiny interests at random (though even that path is much better than drifting randomly or choosing a career to appease one's parents, and cousin_it is right that even this tends to make people more awesome over time). Instead, ideally, you want to figure out what it would be useful to be interested in, cultivate real, immediate curiosity and urges to be interested in those things, work to update your anticipations and urges so that they know more of what your abstract/verbal reasoning knows, and can see why certain subjects are pivotal…

Not "far-mode reasoning over actual felt interests" but "far-mode reasoning in dialog with actual felt interests, and both goals and urges relating strongly to what you end up actually trying to do, and so that you develop new quirky interests in the questions you need to answer, the way one develops quirky interests in almost any question if one is willing to dwell on it patiently for a long time, with staring with intrinsic interest while the details of the question come out to inhabit your mind...

Comment author: multifoliaterose 19 February 2012 01:34:07AM 3 points

I find this comment vague and abstract; do you have examples in mind?

Comment author: CarlShulman 29 January 2012 10:28:28AM 12 points

A few things I would see as better in expectation than AMF in terms of helping current people (with varying degrees of confidence and EV; note that I am not ranking the following in relation to each other in this comment):

  • GiveWell itself (it directs multiple dollars to its top charities per dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).
  • Some research in the model of Poverty Action Lab.
  • A portfolio of somewhat outré endeavours like Paul Romer's Charter Cities.
  • Political lobbying for AMF-style interventions (Gates cites his lobbying expenditures as among his very best), carefully optimized as expected-value charity rather than tribalism, using GiveWell-style empiricism, with the collective action problems of politics offsetting the reduced efficiency and corruption of the government route.
  • In my view, the risk of catastrophe from intelligent machines is large enough and neglected enough to beat AMF (averting a 0.1% risk of killing everyone would be worth $14 billion at the $2,000-per-life AMF exchange rate; plus, conditional on intelligent machines being feasible this century, the expected standard of living for current people goes up, meriting extra attention depending on how much better life can get than the current standard); this is much less of a slam dunk than if we consider future generations, but still better than AMF when I use my best estimates. (See the arithmetic sketch after this list for the $14 billion figure.)
  • Nukes and biotech also pose catastrophic risks, but they also attract much larger spending on countermeasures today (tens of billions annually); smarter countermeasures could help, so while there's probably nothing I can point to now, I expect such options exist.
  • Putting money in a Donor-Advised Fund to await the discovery of more effective charities, or special time-sensitive circumstances demanding funds especially strongly.
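
To make the $14 billion figure above concrete, here's the arithmetic as a minimal sketch. The ~7 billion world-population figure is my assumption (the comment doesn't state it); the 0.1% risk and $2,000-per-life rate are Carl's:

```python
# Sanity check of the $14 billion figure: a 0.1% risk of killing
# everyone, valued at AMF's ~$2,000-per-life exchange rate.
# World population of ~7 billion is an assumed input.
risk = 0.001              # 0.1% chance of killing everyone
population = 7e9          # assumed world population, circa 2012
cost_per_life = 2000      # dollars per life saved via AMF

expected_lives = risk * population        # 7 million expected lives
value = expected_lives * cost_per_life    # 1.4e10 dollars
print(f"expected lives saved: {expected_lives:,.0f}")   # 7,000,000
print(f"AMF-equivalent value: ${value:,.0f}")           # $14,000,000,000
```
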
Comment author: multifoliaterose 13 February 2012 12:45:52AM 0 points

GiveWell itself (it directs multiple dollars to its top charities per dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).

There's an issue of room for more funding.

Some research in the model of Poverty Action Lab.

What information do we have from Poverty Action Lab that we wouldn't have otherwise? (This is not intended as a rhetorical question; I don't know much about what Poverty Action Lab has done).

A portfolio of somewhat outré endeavours like Paul Romer's Charter Cities.

Even in the face of the possibility of such endeavors systematically doing more harm than good due to culture clash?

Political lobbying for AMF-style interventions (Gates cites his lobbying expenditures as among his very best), carefully optimized as expected-value charity rather than tribalism, using GiveWell-style empiricism, with the collective action problems of politics offsetting the reduced efficiency and corruption of the government route

Here too, maybe there's an issue of room for more funding: if there is room for more funding, why does the Gates Foundation spend money on many other things?

Putting money in a Donor-Advised Fund to await the discovery of more effective charities, or special time-sensitive circumstances demanding funds especially strongly

What would the criterion for using the money be? (If one doesn't have such a criterion, then one forever holds out for a better opportunity, and this has zero expected value.)

Comment author: orthonormal 03 February 2012 03:03:14AM 5 points

When a lecturer says the word "obvious," it's a signal for the students to begin panicking and self-doubting. I wonder if some instructors do this intentionally.

Comment author: multifoliaterose 03 February 2012 09:45:05PM 2 points

Saying that something is 'obvious' can provide useful information to the listener of the form "If you think about this for a few minutes you'll see why this is true; this stands in contrast with some of the things that I'm talking about today." Or even "though you may not understand why this is true, for experts who are deeply immersed in this theory this part appears to be straightforward."

I personally wish that textbooks more often highlighted the essential points over theorems that follow by a standard method the reader is probably familiar with.

But here I really have in mind graduate- and research-level math, where it's widely understood that a high percentage of the time people are unable to follow someone who believes his or her own work to be intelligible, and so listeners have a prior against such remarks being intended as a slight. It seems like a bad strategy for communicating with people who are not in such a niche.

Comment author: Yvain 25 January 2012 07:18:59PM 4 points

I don't know a single example of somebody who chose a career substantially less enjoyable than what they would otherwise have been doing in order to help people and successfully stuck to it. Do you?

I don't know a single example of somebody who chose a career substantially less enjoyable than what they would otherwise have been doing in order to help people in an efficient utilitarian way, full stop. I know juliawise was considering it, but I don't know what happened.

Do you know of anyone who tried and quit?

Comment author: multifoliaterose 28 January 2012 06:22:03AM 0 points

Do you know of anyone who tried and quit?

No, I don't. This thread touches on important issues which warrant fuller discussion; I'll mull them over and might post more detailed thoughts on the discussion board later on.
