All of Jordan Arel's Comments + Replies

Yes, that’s the main place I’m still uncertain: the ten combinations of three 1’s have to be mutually exclusive for their probabilities to simply add together, which I’m having trouble visualizing. If you rolled six dice, the chance that either three pre-selected specific dice would all be 1’s or the other three dice would all be 1’s could just be added together.

But since you have five dice, and you are asking whether three of them will be 1’s or some other overlapping set of three will all be 1’s, you have to somehow account for the overlap between those events. Part of that is actually what I left out (that GPT told me,... (read more)
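As a sketch of why that overlap matters (assuming five fair ten-sided dice, with a "hit" meaning a die shows 1; this is an illustration, not the thread's original working), a brute-force enumeration in Python:

```python
from itertools import product

# Enumerate all 10^5 equally likely outcomes of rolling five d10.
outcomes = list(product(range(1, 11), repeat=5))
hits = sum(1 for roll in outcomes if roll.count(1) >= 3)

print(hits, len(outcomes))   # 856 100000
print(hits / len(outcomes))  # 0.00856, i.e. ~0.86%

# Naively adding the ten overlapping "this specific triple is all 1's"
# events gives 10 * (1/10)**3 = 0.01; the gap between 1.00% and 0.86%
# is exactly the double-counting of outcomes where four or five dice show 1.
```

The overlap is small here, which is why the naive sum still lands in the right ballpark.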

I’m quite sure now; I came to the same conclusion independently of GPT after getting a hint from it, a hint which I had already almost guessed myself.

A woman being in the top 10% on any one characteristic is almost the same as rolling a ten-sided die and having it come up 1 (this was the actual problem I presented GPT with, and when it answered it did so in what looked like a hybrid of code and text, so I’m quite sure it is computing this somehow).

What was clearly wrong with the first math was that if I roll just three dice, there would already be a (1/10)^3, or 1/1000, chance of ... (read more)
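For reference, the closed form under the same assumptions (five independent characteristics, each a hit with probability 1/10) is the binomial tail; a minimal sketch:

```python
from math import comb

p = 0.1  # chance a single d10 shows 1 (top-10% on one characteristic)

# P(at least 3 hits out of 5) = sum over k=3..5 of C(5,k) * p^k * (1-p)^(5-k)
exact = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
print(exact)      # 0.00856
print(1 / exact)  # ~116.8, i.e. roughly 1 in 117
```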

Seth Herd
I definitely like the estimation method. I'm totally convinced that the answer is higher than 1/1000, as you describe. The bit about dividing that by the number of ways you can roll three 1's on five dice sounds sketchy; I can't tell for sure whether that's sensible, but it does sound intuitively right. There are five ways to roll four 1's (simplifying it to ten-sided dice is a great move for my intuition), ten ways to roll three 1's, and one way to roll five 1's, so that's 16. That would be 1.6%, which is different from GPT-4's 0.86%. So I think that does get into the ballpark, like you said, but it's not exactly right. Anyway, we're into the details. I think you're right about the order of magnitude, and that's good enough for a Fermi estimate.
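A small sketch reconciling the two figures in this exchange: the count of 16 ways is right, but each way of rolling exactly k ones has probability (1/10)^k * (9/10)^(5-k) rather than a flat 1/1000, which accounts for the whole gap between 1.6% and 0.86%:

```python
from math import comb

# The 16 patterns counted above: 10 + 5 + 1.
ways = comb(5, 3) + comb(5, 4) + comb(5, 5)
print(ways, ways / 1000)  # 16 0.016 -- the 1.6% ballpark figure

# Weight each pattern by its actual probability instead of 1/1000:
exact = sum(comb(5, k) * 0.1**k * 0.9**(5 - k) for k in (3, 4, 5))
print(exact)              # ~0.00856 -- GPT-4's ~0.86%
```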

Ah dang, sorry, was not aware of this. I brute-force re-taught myself how to do this quickly: 10^5 / (5-2)! = 100,000 / 6, i.e. about 1/16,666. You are right, that was off by more than a factor of ten! Thanks for the tip.

Edit: agghh, I hate combinatorics. This seemed way off to me; I thought the original was correct. GPT had originally explained the math, but I didn’t understand the notation. After working on the problem again for a while, I had it explain its method to me in easier-to-understand language, and I’m actually pretty sure it was correct.
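For anyone following the edit, a minimal check of the combinatorics (an illustration under the same five-d10 model, not the thread's original working):

```python
from math import comb, factorial

print(comb(5, 3))                # 10: ways to choose which three of five dice show 1
print(10**5 / factorial(5 - 2))  # 16666.67: the retracted figure; dividing the
                                 # whole outcome space by 3! doesn't count any
                                 # event in this problem
```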

Seth Herd
If you explained the math in a footnote or something, you'd probably get some math collaboration from readers. I don't know how to do that one off the top of my head, but it's interesting. My sense is that GPT isn't trustworthy on that, and it could be off by a lot, so it's necessary to include your (or its) math if you're not sure about it yourself.

Ah, thanks for the clarification; this is very helpful. I made a few updates, including changing the title of the piece and adding a note about this in the assumptions. Here is the assumption and footnote I added, which I think explain my views on this:

Whenever I say “lives saved” this is shorthand for “future lives saved from nonexistence.” This is not the same as saving existing lives: the loss of an existing life may cause profound emotional pain for the people left behind, and some may consider it more tragic than a future person never being born.[6]

Here is footnote 6, created for br... (read more)

As to your second objection, I think that for many people, whether murdering some people in order to save others is a good idea is a separate moral question from which altruistic actions we should take to have the most positive impact. I am certainly not advocating murdering billions of people.

But whether saving present people or (in expectation) saving many more unborn future people is a better use of altruistic resources seems to be largely a matter of temperament. I have heard a few discussions of this and they never seem to make much se... (read more)

JBlack
I'm not actually talking about "a person being tortured today" versus "a person being tortured tomorrow". I agree those are equivalent, from some hypothetical external viewpoint and assuming that various types of uncertainty are declared by fiat to be absent. It's about "a person who actually exists getting to continue their life that would otherwise be terminated" versus "a person being able to come to exist in the future versus not ever existing". I have serious doubts that these are morally equivalent, and am inclined to believe that they are not even on a comparable scale. In particular, I think using the term "saving a life" for the latter is not only unjustified, but wilfully deceptive. Even if there does turn out to be a strong argument for the two outcomes being comparable on some numerical scale, I expect to still strongly disfavour any use of terminology that equates them as this post does.

Interesting objections!

I mentioned a few times that some, and perhaps most, x-risk work may have negative value ex post. I go into detail in footnote 13 about how that work could turn out to be negative.

It seems somewhat unreasonable to me, however, to be virtually 100% confident that x-risk work is as likely to have zero or negative value ex ante as it is to have positive value.

I tried to account for the extreme difficulty of influencing the future by giving the work relatively low efficacy, i.e. in the moderate case 100,000 (hopefully extremely competent) people working on x-risk f... (read more)

Hm, logically this makes sense, but I don’t think most agents in the world are fully rational; hence the continuing threat of nuclear war despite mutually assured destruction and extremely negative-sum outcomes for everyone. I think this could be made much more dangerous by much more powerful technologies. If there is a strong offense bias and even a single sufficiently powerful agent willing to kill others, and another agent willing to strike back despite being unable to defend themselves by doing so, this could result in everyone... (read more)