_will_

Comments

_will_32

Thanks, that’s helpful!

(Fwiw, I don’t find the ‘caring a tiny bit’ story very reassuring, for the same reasons as Wei Dai, although I do find the acausal trade story for why humans might be left with Earth somewhat heartening. (I’m assuming that by ‘game-theoretic reasons’ you mean acausal trade.))

_will_10

I don't think [AGI/ASI] literally killing everyone is the most likely outcome

Huh, I was surprised to read this. I’ve imbibed a non-trivial fraction of your posts and comments here on LessWrong, and, before reading the above, my shoulder Daniel definitely saw extinction as the most likely existential catastrophe.

If you have the time, I’d be very interested to hear what you do think is the most likely outcome. (It’s very possible that you have written about this before and I missed it—my bad, if so.)

_will_42

Hmm, the ‘making friends’ part seems the most important (since there are ways to share new information you’ve learned, or solve problems, beyond conversation), but it also seems a bit circular. Like, if the reason for making friends is to hang out and have good conversations(?), and one has little interest in having conversations, then doesn’t one have little reason to make friends in the first place, and therefore little reason to ‘git gud’ at the conversation game?

_will_3-5

So basically I don't think it's possible to do robustly positive actions in longtermism with high (>70%? >60%?) probability of being net positive for the long-term future

This seems like an important point, and it's one I've not heard before. (At least, not outside of cluelessness or specific concerns about AI safety work speeding up capabilities; I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future.)

I'm super interested in how you might have arrived at this belief: would you be able to elaborate a little? For instance, is there a theoretical argument going on here, like a weak form of cluelessness? Or is it more empirical, for example, did you get here through evaluating a bunch of grants and noticing that even the best seem to carry 30-ish percent downside risk? Something else?

_will_67

"GeneSmith"... the pun just landed with me. nice.

_will_31

Very nitpicky (sorry): it'd be nice if the capitalization of the epistemic status reactions were consistent. Currently, some are in title case, for example "Too Harsh" and "Hits the Mark", while others are in sentence case, like "Key insight" and "Missed the point". The autistic part of me finds this upsetting.

_will_40

Thanks for this comment. I don't have much to add, other than: have you considered fleshing out and writing up this scenario in a style similar to "What 2026 looks like"?

_will_40

Thanks for this question.

Firstly, I agree with you that firmware-based monitoring and compute capacity restrictions would require similar amounts of political will to happen. Then, in terms of technical challenges, I remember one of the forecasters saying they believe that "usage-tracking firmware updates being rolled out to 95% of all chips covered by the 2022 US export controls before 2028" is 90% likely to be physically possible, and 70% likely to be logistically possible. (I was surprised at how high these stated percentages were, but I didn't have time then to probe them on why exactly they were at these percentages—I may do so at the next workshop.)

Assuming the technical challenges of compute capacity restrictions aren't significant, fixing the probability of compute capacity restrictions at 15%, and applying the following crude calculation:

P(firmware) = P(compute) x P(firmware technical challenges are met)

= 0.15 x (0.9 x 0.7) = 0.15 x 0.63 = 0.0945 ~ 9%

9% is a little above the reported 7%, which I take to mean that the other forecasters on this question believe the firmware technical challenges are a little, but not massively, harder than the 90%–70% breakdown given above suggests.
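
For what it's worth, here's a minimal sketch of that arithmetic in code. The 15%, 90%, and 70% inputs are the figures quoted above; the variable names and the back-solving step at the end are just mine for illustration:

```python
# Crude calculation: P(firmware) = P(compute) * P(firmware technical challenges are met)
p_compute = 0.15      # assumed probability of compute capacity restrictions
p_physical = 0.90     # forecaster's estimate: physically possible
p_logistical = 0.70   # forecaster's estimate: logistically possible

p_firmware = p_compute * (p_physical * p_logistical)
print(f"Implied P(firmware): {p_firmware:.1%}")  # ~9%, matching the figure above

# Back out the technical-feasibility probability implied by the reported 7%
p_reported = 0.07
implied_technical = p_reported / p_compute
print(f"Technical feasibility implied by 7%: {implied_technical:.1%}")  # ~47%, vs. the 63% above
```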
