Wei_Dai comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 Post author: HoldenKarnofsky 18 August 2011 11:34PM

Comment author: Eliezer_Yudkowsky 19 August 2011 10:38:57PM 15 points

Leaving aside Aumann questions: If people like that think that the Future of Humanity Institute, work on human rationality, or Giving What We Can has a large probability of catalyzing the creation of an effective institution, they should quite plausibly be looking there instead. "I should be doing something I think is at least medium-probably remedying the sheerly stupid situation humanity has gotten itself into with respect to the intelligence explosion" seems like a valuable summary heuristic.

If you can't think of anything medium-probable, using that as an excuse to do nothing is unacceptable. Figure out which of the people trying to address the problem seem most competent and gamble on something interesting happening if you give them more money. Money is the unit of caring, and I can't begin to tell you how much things change when you add more money to them. Imagine what the global financial sector would look like if it were funded to the tune of $600,000/year. You would probably think it wasn't worth scaling up Earth's financial sector.

Comment author: Wei_Dai 20 August 2011 05:57:23AM 8 points

If you can't think of anything medium-probable, using that as an excuse to do nothing is unacceptable.

That's my gut feeling as well, but can we give a theoretical basis for that conclusion, one that might also be used to convince people who can't think of anything medium-probable to "do something"?

My current thoughts are:

  1. I assign some non-zero credence to having an unbounded utility function.
  2. Bostrom and Ord's moral parliament idea seems to be the best approach we have for handling moral uncertainty.
  3. If Pascal's wager argument works, then to the extent that I have a faction representing unbounded utility in my moral parliament, I ought to spend a fraction of my resources on Pascal's wager type "opportunities".
  4. If Pascal's wager argument works, I should pick the best wager to bet on, which intuitively could well be "push for a positive Singularity".
  5. But it's not clear that Pascal's wager argument works, or what would justify thinking that "push for a positive Singularity" is the best wager. We also don't have any theory for handling this kind of philosophical uncertainty.
  6. Given all this, I still have to choose among "do nothing", "push for a positive Singularity", and "investigate Pascal's wager". Is there any way, in this decision problem, to improve on going with my gut?
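The allocation rule in item 3 can be sketched numerically. This is only an illustrative toy: the faction names, credences, and budget below are made-up placeholders, not estimates anyone in this thread has endorsed, and the proportional-split rule is just one simple way a moral parliament might divide resources.

```python
# Toy sketch of item 3: each moral "faction" controls a share of the
# budget in proportion to the credence assigned to its moral view.
# All numbers are illustrative placeholders.

def allocate(budget, credences):
    """Split a budget across factions in proportion to their credences."""
    total = sum(credences.values())
    return {name: budget * c / total for name, c in credences.items()}

credences = {
    "bounded_utility": 0.9,    # favors robust, conventional interventions
    "unbounded_utility": 0.1,  # favors Pascal's-wager-type "opportunities"
}

shares = allocate(10_000, credences)
print(shares)  # {'bounded_utility': 9000.0, 'unbounded_utility': 1000.0}
```

On this rule, even a small credence in an unbounded utility function earns its faction a small but non-zero slice of resources, which is one way to cash out "spend a fraction of my resources on Pascal's wager type opportunities" without letting that faction dominate the whole budget.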

Anyway, I understand that you probably have reasons not to engage too deeply with this line of thought, so I'm mostly explaining where I'm currently at, as well as hoping that someone else can offer some ideas.