Matthew Roy

I am a philosophy enthusiast interested in addressing what I view as structural misunderstandings in philosophy that block clear thinking on AI design, engineering, and alignment.

I'd missed that, thank you for pointing that out.

If "expected" effectively means what you're saying is that you're being offered a bet that is good by definition, that even at 50/50 odds, you take the bet, I suppose that's true. If the bet is static for a second flip, it wouldn't be a good deal, but if it dynamically altered such that it was once again a good bet by definition, I suppose you keep taking the bet.

If you're instead engaging with the fact that people are bad at evaluating things like "expected utility," then at least part of the point is that our naive intuitions are probably missing some of the math, and some of the costs, and the bet is likely a bad bet.

If I were giving credence to that second possibility, I'd say the word "expected" is now doing a bunch of hidden heavy lifting in the payoff structure, and you don't really know what lifting it's doing.
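
To make that second possibility concrete, here's a minimal simulation (the triple-or-nothing structure and all parameters are mine, purely illustrative): a bet that is good in expectation on every single flip still bankrupts almost everyone who keeps taking it.

```python
# Illustrative only: a 50/50 triple-or-nothing bet is "good by definition"
# in expectation (EV multiplier 1.5x per flip), yet an agent who keeps
# taking it goes bust with probability 1 - 0.5**k after k flips.
import random

def repeated_bet(flips: int, stake: float = 1.0, win_mult: float = 3.0) -> float:
    """Take a fair triple-or-nothing bet `flips` times in a row."""
    for _ in range(flips):
        stake = stake * win_mult if random.random() < 0.5 else 0.0
    return stake

random.seed(0)
trials, flips = 100_000, 10
outcomes = [repeated_bet(flips) for _ in range(trials)]
print(f"mean payoff:    {sum(outcomes) / trials:.2f}")        # EV is 1.5**10 ~ 57.7
print(f"fraction broke: {outcomes.count(0.0) / trials:.4f}")  # ~ 1 - 0.5**10 ~ 0.999
```

The mean is carried entirely by the vanishing fraction of lucky streaks, which is exactly the kind of payoff structure our intuitions handle badly.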

I'm dead sure you'd need more than 'just more than a doubling' for the payoff to make sense. Let's assume two things.

  1. Net utility naturally doubles for humans roughly every 300,000 years. (This is deliberately conservative; recent history would suggest something much faster, but the numbers are so stupidly asymmetric that using recent history would be silly. Homo sapiens have been around about that long, and net utility has doubled at least once in that time.)
  2. The universe will experience heat death in roughly 10^100 years.

Those assumptions leave the universe roughly 10^100 / (3×10^5) ≈ 3×10^94 natural doublings. Before you even try to factor in volatility costs, the time value of enjoying that utility, etc., your payoff has to be something like 2^(3×10^94), call it 2^10^95 in round numbers.
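
For concreteness, here's that arithmetic as a back-of-the-envelope sketch under the two assumptions above (nothing more than the stated numbers):

```python
# Back-of-the-envelope bound on the required payoff, using the two
# assumptions stated above (both numbers are rough, not precise).
from math import log10

doubling_period_years = 3e5    # assumption 1: net utility doubles every ~300,000 years
heat_death_years = 1e100       # assumption 2: heat death in ~10^100 years

doublings = heat_death_years / doubling_period_years
print(f"remaining natural doublings: {doublings:.1e}")   # ~3.3e94
# 2**doublings overflows a float, so report the base-10 exponent instead:
print(f"forgone value ~ 10^{doublings * log10(2):.1e}")  # ~10^(1.0e94)
# i.e. the payoff must be on the order of 2^(3e94) -- 2^10^95 in round numbers.
```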

Edit: alright, since apparently we're having trouble with this argument, let's clarify it.

It's not good enough for a bet to "make sense" in some isolated fashion. You have to weigh it against the opportunity cost of what you could have done with the thing you're betting instead. My original comment was suggesting a method for evaluating that opportunity cost.

The post makes this weird "if the utility were just big enough" move while still attempting to justify the original, incredibly stupid bet. It's a bet. Pick a payoff scheme, and the math either works or it doesn't when compared against some opportunity cost, not against some nonsensical bet from nowhere. Saying the universe is big and valuable, vaguely gesturing at a valuation method, and then retreating to "but just make the payout bigger" misses the point. Humans are bad at evaluating such structures, and building your moral theories on them has issues.

For the coinflip to make sense, your opportunity cost has to approach zero. Give any reasonable argument that the universe existing has an opportunity cost approaching zero, and the bet gets interesting.

But almost any valuation method you pick for the universe gets to an absurdly high ongoing value. That isn't a Pascal's Mugging; that's deciding to bet the universe.

Here's how you get to an opportunity cost near zero:

  1. X-risk greater than 50%.
  2. Humans are the only sapient species. (Getting past x-risk gets evaluated per species; the universal coinflip gets evaluated for the universe. That changes the math.)
  3. Your certainty in both 1 and 2 is so high that your fudge factor doesn't interfere with the one thing you do know precisely: the odds of a coinflip.

If any of those three is not true, you can't get a low enough opportunity cost to justify the coinflip. That still might not be enough, but you're at least getting into the ballpark of having a discussion about the bet being sensible.
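
As a toy formalization of that comparison (the numbers below are placeholders I've chosen, not real estimates): the flip only clears the bar when the expected winnings beat the survival-weighted value you'd be giving up.

```python
# Toy expected-value comparison: take the universal coinflip, or keep the
# status quo? All numbers are illustrative placeholders.

def flip_beats_status_quo(p_x: float, payoff: float, v_ongoing: float = 1.0) -> bool:
    """True if a fair coinflip for `payoff` beats not betting.

    p_x       -- probability of extinction if you don't flip (x-risk)
    payoff    -- value on heads, in units of the full surviving future
    v_ongoing -- value of the universe conditional on surviving (normalized to 1)
    """
    opportunity_cost = (1.0 - p_x) * v_ongoing  # survival-weighted value forgone
    return 0.5 * payoff > opportunity_cost      # tails pays nothing

# Condition 1 above is exactly the threshold: with x-risk under 50%, a flip
# that merely secures the full future (payoff = 1) still loses...
print(flip_beats_status_quo(p_x=0.4, payoff=1.0))  # False: 0.5 < 0.6
# ...while with x-risk over 50%, the same flip starts to look sensible.
print(flip_beats_status_quo(p_x=0.6, payoff=1.0))  # True: 0.5 > 0.4
```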

If anyone wants to argue, instead of downvoting, I'd take the argument. Maybe I'm missing something. But it's just a stupid bet without some method of evaluating opportunity cost. Pick one.

Answer by Matthew Roy

I would tend to give particular credence to any practice that predates the printing press.

The reason is fairly straightforward: spreading ideas used to be significantly more expensive, and an idea often could spread only to the extent that holding it made its carriers better adapted to their environments.

As the cost of spreading an idea has fallen, humans can unfortunately afford to spread a great deal more pleasant (feel free to substitute "reward-hacking" for "pleasant") junk.

That doesn't mean you should skip examining the ideas critically, but there are more than a few ideas whose wisdom I once doubted that make a great deal more sense from this perspective.

As for the particular practice of meditation that you reference, I tend to view spiritual practices as somewhat difficult to analyze for this purpose, since what was transmitted was the entire structure of the religion, not only the particularly adaptive information. To use DNA as an analogy, it's difficult to tell which portions are of particularly high utility, analogous to the A, C, G, and T bases, and which serve as the sugar-phosphate backbone: potentially useful in maintaining the structure as a whole, but perhaps not of particular use when translated outside that context.

Which portions of Buddhism are which, I couldn't tell you; I lack practice in the meditation methods mentioned, and I lack deeper familiarity with the relevant social and historical context.