That's only true if the probability is a continuous function of time; perhaps it jumped instantaneously from below 28% to above 28%.
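(Spelling out the implicit step: this is the intermediate value theorem. A sketch, where p(t) is my notation for the probability over time and 0.28 is the threshold from this thread:)

```latex
% If p is continuous and moves from below 0.28 to above it, it must equal
% 0.28 at some moment; a p that can jump (e.g., on discrete news) need not.
\[
  p \in C[t_0, t_1], \quad p(t_0) < 0.28 < p(t_1)
  \;\Longrightarrow\; \exists\, t^{*} \in (t_0, t_1) : p(t^{*}) = 0.28 .
\]
```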

I’m claiming that we should only ever reason about infinity with induction-style proofs. Due to the structure of the thought experiment, the only thing it is possible to count in this way is galaxies, so (I claim) counting galaxies is the only thing you’re allowed to use for moral reasoning here. Since all of the galaxies in each universe are moral equivalents (either all happy but one or all miserable but one), how you rearrange galaxies doesn’t affect the outcome.

(To be clear, I agree that if you rearrange people under the concepts of infinity that mathematicians like to use, you can turn HEAVEN into HELL, but I’m claiming that we’re simply not allowed to use that type of infinity logic for ethics.)

Obviously this is taking a stance about the ways in which infinity can be used in ethics, but I think this is a reasonable way to do so without giving up the concept of infinity entirely.

I don’t think that it does? There are infinitely many arrangements, but the same proof by induction applies to any possible arrangement.

I have an argument for a way in which infinity can be used but which doesn't imply any of the negative conclusions. I'm not convinced of its reasonableness or correctness though.

I propose that infinite ethics should only be reasoned about using proofs by induction. Done this way, the only way to reason about HEAVEN and HELL is to match up galaxies in the two universes and do induction across all of the elements:

Theorem: the universe HEAVEN that contains n galaxies is better than the universe HELL that contains n galaxies. We will formalize this as HEAVEN(n) > HELL(n) and prove it by induction.

  • Base case, HEAVEN(1) > HELL(1):
    • The first galaxy in HEAVEN (which contains billions of happy people and one miserable person) is better than the first galaxy in HELL (which contains billions of miserable people and one happy person), by our understanding of morality.
  • Induction step, HEAVEN(n) > HELL(n) => HEAVEN(n+1) > HELL(n+1):
    • HEAVEN(n) > HELL(n) (given)
    • HEAVEN(n) + billions of happy people + 1 happy person > HELL(n) + billions of miserable people + 1 miserable person (by our understanding of morality)
    • HEAVEN(n) + billions of happy people + 1 miserable person > HELL(n) + billions of miserable people + 1 happy person (moving people around does not improve things if it changes nothing else)
    • HEAVEN(n + 1) > HELL(n + 1) □
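As a quick sanity check of the induction, here is a minimal sketch (my assumptions, not part of the argument above: happy people contribute +1 utility, miserable people -1, a universe's value is the sum over its galaxies, and BILLIONS stands in for "billions of people"):

```python
# Minimal sketch: every finite truncation satisfies HEAVEN(n) > HELL(n).
# Assumptions (mine): utilities are +1 per happy person and -1 per
# miserable person, and a universe's value is the sum over its galaxies.

BILLIONS = 2_000_000_000  # stand-in for "billions of people" per galaxy

def heaven(n: int) -> int:
    """Utility of n HEAVEN galaxies: each has BILLIONS happy people, one miserable."""
    return n * (BILLIONS - 1)

def hell(n: int) -> int:
    """Utility of n HELL galaxies: each has BILLIONS miserable people, one happy."""
    return n * (1 - BILLIONS)

# Matches the induction's conclusion for every finite n checked.
assert all(heaven(n) > hell(n) for n in range(1, 1001))
```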

A downside of this approach is that you lose the ability to reason about uncountable infinities. However, that's a bullet I'm willing to bite: only being able to reason about a countably infinite number of moral entities.

One downside to using video games to measure "intelligence" is that they often rely on skills that aren't generally considered part of "intelligence", like how quickly and precisely you can move your fingers. Someone with poor hand-eye coordination will perform worse on many video games than someone with good hand-eye coordination.

A related problem is that video games in general share a "common language": someone who has played lots of video games can transfer those skills when playing a new one. I know people who are certainly more intelligent than I am but who are worse at picking up a new video game, because their parents wouldn't let them play video games growing up (or because they're older and didn't grow up with video games at all).

I like the idea of using a different tool to measure "intelligence", if you must measure "intelligence", but I'm not sure that video games are the right one.

There's no direct rationality commentary in the post, but plenty of other posts on LW aren't direct rationality commentary either (for example, the large majority of posts here about COVID-19). I think this post is a good fit because it provides tools for understanding this conflict and others like it, tools which I didn't possess before and now somewhat do.

It's not directly relevant to my life, but that's fine. I imagine that for some here it might actually be relevant, because of connections through things like effective altruism (maybe it helps grant makers decide where to send funds to aid the Sudanese people?).

Interesting post, thanks!

A couple of formatting notes:

"This post gives a context to the deep dives that should be minimally accessible to a general audience. For an explanation of why the war began, see this other post."

It seems like there should be a link here, but there isn't one.

Also, the footnotes don't link back and forth properly, so currently one has to scroll down to the footnotes manually and then scroll back up. LessWrong has a footnote feature that you could use, which makes for a nicer reading experience.

It used to be called Find Friends on iOS, but they rebranded it, presumably because family was a better market fit.

There are other apps like that too, such as Life360, and they’re quite popular. They solve the problem of parents wanting to know where their kids are. It’s perhaps overly zealous on the parents’ part, but it’s a real desire that these apps are serving.

Metaculus forecasts aren’t very precise near zero, so it doesn’t make sense to multiply the number out.
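(A toy illustration with made-up numbers, not Metaculus data: a forecast displayed as 1% could plausibly sit anywhere in a wide band near zero, and anything you multiply it into inherits that spread:)

```python
# Toy illustration (made-up numbers): near zero, the displayed probability
# is coarse, so an expected-value product inherits the whole plausible range.
stake = 1_000_000
for p in (0.002, 0.01, 0.02):  # plausible values behind a displayed "1%"
    print(f"p = {p:.1%} -> expected value = {p * stake:,.0f}")
```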

Also, there’s currently a mild outbreak, whereas most of the time there’s no outbreak (or a smaller one), so the risk for the next half year is elevated compared to normal.

I'm not familiar with how Stockfish is trained, but does it get intentional training on how to play with queen odds? If not, it might start trouncing you once trained for that, instead of having to "figure out" the new strategies on its own.
