In response to comment by Turgurth on against "AI risk"
Comment author: CarlShulman 12 April 2012 06:03:11AM 5 points

How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?

Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation," I care a lot about catastrophic risk affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks?

Yes.

Or is this sort of thing more what the FHI does?

They spend more time on it, relatively speaking.

Achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI)

Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.

Comment author: multifoliaterose 14 April 2012 12:07:59AM 1 point

However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient", and why?
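
One way to make the break-even question precise (a sketch; the cardinal value function $V$ is my notation, not something either comment specifies): probability $p$ is at break-even when the two options have equal expected value,

$$p \cdot V(\text{100 trillion fantastic lives}) = V(\text{one malaria patient's quality of life improved}),$$

so the break-even point is $p^* = V(\text{one patient improved}) / V(\text{100 trillion lives})$. Under a view that values fantastic lives roughly additively, $p^*$ comes out astronomically small.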

Comment author: steven0461 19 March 2012 06:05:20PM 4 points

The claim that nuclear winter is an existential risk needs additional justification.

Comment author: multifoliaterose 19 March 2012 06:30:54PM * 9 points

It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it's impossible to recover (e.g. because we've already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance conditional on nuclear winter is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat, and that the probability seems much lower than x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.

Comment author: Yvain 02 March 2012 04:58:26PM * 42 points

From a simple utilitarian perspective, identifiability is bias. By increasing altruism toward the identifiable victims, it may reduce altruism toward the unidentified ones, who are often the ones most in need of help. On the other hand, it could also increase overall altruism, by making people more willing to incur greater personal costs to help the identifiable victims.

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 percent which you can reach and then feel good about yourself.

Then identifiable charity succeeds not just because it attaches a face to people, but also because it avoids the slippery slope. If we're told we need to donate to save "baby Jessica", it's very easy to donate exactly as much money as is necessary to help save baby Jessica and then stop. The same is true of natural disasters; if there's an earthquake in Haiti, that means we can donate money to Haiti today but not be under any consistency-related obligations to do so again until the next earthquake. If Haiti is just a horrible impoverished country, then there's no reason to donate now as opposed to any other time, and this is true for all possible "now"s.

Feedback appreciated, as I've been planning to make a top-level post about this if I ever get time.

Comment author: multifoliaterose 04 March 2012 02:51:43AM 0 points

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 percent which you can reach and then feel good about yourself.

There's another option which I think may be better for some people (but I don't know, because it hasn't been much explored). One can stagger one's donations over time (say, on a quarterly basis) and adjust the amount one gives based on how past donations have felt. It seems like this may maximize the amount that one gives, subject to the constraint of avoiding moral burnout.

If one feels uncomfortable with the amount that one is donating because it's interfering with one's lifestyle, one can taper off. On the flip side, I've found that donating gives the same pleasure that buying something does: a sense of empowerment. Buying a new garment that one realistically isn't going to wear, or a book that one realistically isn't going to read, feels good, but probably not as good as donating. This is a pressure toward donating more.
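
A minimal sketch of this staggered-adjustment idea in code (the multiplicative update rule, the 20% maximum step size, and the example figures are all my own illustration, not part of the proposal above):

```python
def next_donation(current_amount, comfort):
    """Compute next quarter's donation from this quarter's.

    comfort: self-reported reaction to the last donation, in [-1, 1].
    Negative means it interfered with one's lifestyle (taper off);
    positive means it felt good/empowering (give somewhat more).
    """
    step = 0.2 * comfort  # scale the adjustment by strength of feeling
    return current_amount * (1 + step)

# Example: start at $500/quarter and adjust each quarter
# based on hypothetical reactions to the previous donation.
amount = 500.0
for comfort in [0.5, 0.8, -0.3, 0.1]:
    amount = next_donation(amount, comfort)
    print(f"next quarter: ${amount:.2f}")
```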

Comment author: multifoliaterose 03 March 2012 01:06:27AM 1 point

Cue: Non-contingency of my arguments (such that the same argument could equally be used to argue for conclusions which I disagree with).

Comment author: AnnaSalamon 02 March 2012 07:32:37AM * 16 points

Cue for noticing rationalization: I find my mouth responding with a "no" before stopping to think or draw breath.

(Example: Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]" Me: Wait, my mouth just moved without me being at all curious as to how question three will play out, nor about what Bob is seeing in question three. I should call an interrupt here.)

Comment author: multifoliaterose 03 March 2012 01:02:22AM 1 point

Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"

It's probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.

Comment author: AnnaSalamon 15 February 2012 06:13:45PM * 17 points

Though I know Anna is going to frown on me for advocating this path...

Argh, no, I'm not going to advocate ignoring one's quirky interests to follow one's alleged duty. My impression is more like fiddlemath's, below. You don't want to follow shiny interests at random (though even that path is much better than drifting randomly or choosing a career to appease one's parents, and cousin_it is right that even this tends to make people more awesome over time). Instead, ideally, you want to figure out what it would be useful to be interested in, cultivate real, immediate curiosity and urges to be interested in those things, and work to update your anticipations and urges so that they know more of what your abstract/verbal reasoning knows, and can see why certain subjects are pivotal…

Not "far-mode reasoning over actual felt interests" but "far-mode reasoning in dialog with actual felt interests, and both goals and urges relating strongly to what you end up actually trying to do, and so that you develop new quirky interests in the questions you need to answer, the way one develops quirky interests in almost any question if one is willing to dwell on it patiently for a long time, with staring with intrinsic interest while the details of the question come out to inhabit your mind...

Comment author: multifoliaterose 19 February 2012 01:34:07AM * 3 points

I find this comment vague and abstract; do you have examples in mind?

Comment author: CarlShulman 29 January 2012 10:28:28AM * 12 points

A few things I would see as better in expectation than AMF in terms of benefits to current people (with varying degrees of confidence and EV; note that I am not ranking the following in relation to each other in this comment):

  • GiveWell itself (it directs multiple dollars to its top charities per dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).
  • Some research in the model of Poverty Action Lab.
  • A portfolio of somewhat outré endeavours like Paul Romer's Charter Cities.
  • Political lobbying for AMF-style interventions (Gates cites lobbying expenditures as among the foundation's very best), carefully optimized as expected-value charity rather than tribalism, using GiveWell-style empiricism, with the collective action problems of politics offsetting the reduced efficiency and corruption of the government route.
  • In my view, the risk of catastrophe from intelligent machines is large enough and neglected enough to beat AMF: averting a 0.1% risk of killing everyone would be worth $14 billion at the $2,000/life AMF exchange rate (see the arithmetic sketch after this list); plus, conditional on intelligent machines being feasible this century, the expected standard of living for current people goes up, meriting extra attention depending on how much better life can get than the current standard. This is much less of a slam dunk than if we consider future generations, but still better than AMF when I use my best estimates.
  • Nukes and biotech also pose catastrophic risks, but these already have much larger spending on countermeasures today (tens of billions annually). Smarter countermeasures could help, but there's probably nothing specific I can point to now, although I expect such options exist.
  • Putting money in a Donor-Advised Fund to await the discovery of more effective charities, or special time-sensitive circumstances demanding funds especially strongly.
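
A quick check of the $14 billion figure in the intelligent-machines item above (assuming a world population of roughly 7 billion, as at the time of the comment):

$$0.1\% \times 7 \times 10^{9}\ \text{lives} \times \$2{,}000/\text{life} = \$1.4 \times 10^{10} = \$14\ \text{billion}$$
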
Comment author: multifoliaterose 13 February 2012 12:45:52AM * 0 points

GiveWell itself (it directs multiple dollars to its top charities per dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).

There's an issue of room for more funding.

Some research in the model of Poverty Action Lab.

What information do we have from Poverty Action Lab that we wouldn't have otherwise? (This is not intended as a rhetorical question; I don't know much about what Poverty Action Lab has done.)

A portfolio of somewhat outré endeavours like Paul Romer's Charter Cities.

Even in the face of the possibility of such endeavors systematically doing more harm than good due to culture clash?

Political lobbying for AMF-style interventions (Gates cites lobbying expenditures as among the foundation's very best), carefully optimized as expected-value charity rather than tribalism, using GiveWell-style empiricism, with the collective action problems of politics offsetting the reduced efficiency and corruption of the government route.

Here too, maybe there's an issue of room for more funding: if there's room for more funding, then why does the Gates Foundation spend money on many other things?

Putting money in a Donor-Advised Fund to await the discovery of more effective charities, or special time-sensitive circumstances demanding funds especially strongly

What would the criterion for using the money be? (If one doesn't have such a criterion, then one holds off forever waiting for a better opportunity, and this has zero expected value.)

Comment author: orthonormal 03 February 2012 03:03:14AM 5 points

When a lecturer says the word "obvious", it's a signal for the students to begin panicking and self-doubting. I wonder if some instructors do this intentionally.

Comment author: multifoliaterose 03 February 2012 09:45:05PM 2 points

Saying that something is 'obvious' can provide useful information to the listener, of the form "If you think about this for a few minutes, you'll see why this is true; this stands in contrast with some of the things that I'm talking about today." Or even "Though you may not understand why this is true, for experts who are deeply immersed in this theory, this part appears to be straightforward."

I personally wish that textbooks more often highlighted the essential points, as opposed to those theorems that follow from a standard method that the reader is probably familiar with.

But here I really have in mind graduate/research-level math, where there's widespread understanding that a high percentage of the time people are unable to follow someone who believes his or her work to be intelligible, and so listeners have a prior against such remarks being intended as a slight. It seems like a bad strategy for communicating with people who are not in such a niche.

Comment author: Yvain 25 January 2012 07:18:59PM * 4 points

I don't know a single example of somebody who chose a career substantially less enjoyable than what they would otherwise have been doing in order to help people and successfully stuck to it. Do you?

I don't know a single example of somebody who chose a career substantially less enjoyable than what they would otherwise have been doing in order to help people in an efficient utilitarian way, full stop. I know juliawise was considering it, but I don't know what happened.

Do you know of anyone who tried and quit?

Comment author: multifoliaterose 28 January 2012 06:22:03AM 0 points

Do you know of anyone who tried and quit?

No, I don't. This thread touches on important issues which warrant fuller discussion; I'll mull them over and might post more detailed thoughts to the discussion board later on.

Comment author: AnnaSalamon 24 January 2012 07:59:54AM * 26 points

fortunately or unfortunately, I also have parents to provide me with reasons to have urges to do things I wouldn't otherwise have an urge to do.

A good point.

Social incentives that directly incentivize the immediate steps toward long-term goals seem to be key to a surprisingly large portion of functional human behavior.

People acquire the habit of wearing seatbelts in part because parents'/friends' approval incentivizes it; I don't want to be the sort of person my mother would think reckless. (People are much worse at taking safety measures that are not thus backed up by social approval; e.g. driving white or light-colored cars reduces one's total driving-related death risk by on the order of 20%, but this statistic does not spread, and many buy dark cars.)

People similarly bathe lest folks smell them, keep their houses clean lest company be horrified, stick to exercise plans and study plans and degree plans and retirement savings plans partly via friends' approval, etc.; and are much worse at similar goals for which there are no societally cached social incentives for goal-steps. The key role social incentives play in much apparently long-term action of this sort is one reason people sometimes say "people do not really care about charity, their own health, their own jobs, etc.; all they care about is status".

But contra Robin, the implication is not "humans only care about status, and so we pretend hypocritically to care about our own survival while really basically just caring about status"; the implication is "humans are pretty inept at acquiring urges to do the steps that will fulfill our later urges. We are also pretty inept at doing any steps we do not have a direct urge for. Thus, urges to e.g. survive, or live in a clean and pleasant house, or do anything else that requires many substeps… are often pretty powerless, unless accompanied by some kind of structure that can create immediate rewards for individual steps."

(People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)

Comment author: multifoliaterose 25 January 2012 10:33:55PM 1 point

(People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)

Is lack of social skills typically the factor that prevents unhappily single folk from finding relationships? Surely this is true in some cases, but I would be surprised to learn that it's generic.