But the film industry slowly got their act together,
I don't think it was that slow. Even 'A Trip to the Moon' (1902) already used stop motion, and 'The Great Train Robbery' (1903) already used more complex cutting & perspectives & even put a camera on a moving train.
By 1930 the film industry had:
experimented with previously impossible perspectives (1930)
created nice visual effects (1927)
performed stunts (1923)
combined a lot of these techniques (1924)
and made various experimental films (1929)
But I rarely see any discussion of whether a conflict over Taiwan could escalate into a Mutually Assured Destruction dynamic
Not at all an expert either. But MAD doesn't just assume nukes; it also assumes that each side can launch on warning/under attack, or launch a second strike. The former is harder than in the US-Russia case because of the physical proximity. The latter disadvantages Taiwan because of its smaller land mass and much more limited navy. So maybe experts have very basic reasons to question the viability of MAD?
Generally agree, but disagree with this part:
And so my options are either to overpower them (by not letting them achieve their idea of good when it conflicts with mine) or trade with them.
There’s room for persuasion and deliberation as well. Moral anti-realists can care about how other people form moral beliefs (e.g. quality of justifications, coherence of values, non-coercion).
I was most confused about 'S', and likely understood it quite differently than intended.
I understood S as roughly "Humanity stays in control after AGI, but slowly (over decades/centuries) shrinks and becomes less relevant". In many of these cases I'd expect something morally valuable to replace humans, so I put S lower than R.
Could it make sense to introduce "population after AGI you care about" as a term? I think that would be clearer.
Also, is the calculator setting non-doom post-AGI mortality to zero by capping the horizon at AGI and counting only pre-AGI deaths?
For example: suppose time to AGI|no pause = 10y and a pause adds another 10y. Then the calculator arrives at 60m x 10 deaths for no-pause vs 60m x 20 for pause. But if mortality only halves after AGI, the fair comparison over the same 20-year horizon should credit the no-pause path with 60m x 10 + 0.5 x 60m x 10 deaths.
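The arithmetic above can be sketched in a few lines; the 60m deaths/year and halved post-AGI mortality figures are just the illustrative assumptions from this comment, not calculator defaults:

```python
ANNUAL_DEATHS = 60e6            # assumed: ~60 million deaths per year
YEARS_TO_AGI_NO_PAUSE = 10
PAUSE_LENGTH = 10
POST_AGI_MORTALITY_FACTOR = 0.5  # assumed: AGI halves mortality, not zero

# What the calculator appears to do: count only pre-AGI deaths.
calculator_no_pause = ANNUAL_DEATHS * YEARS_TO_AGI_NO_PAUSE
calculator_pause = ANNUAL_DEATHS * (YEARS_TO_AGI_NO_PAUSE + PAUSE_LENGTH)

# Fair comparison over the same 20-year horizon: the no-pause path
# still has deaths after AGI, just at the reduced rate.
fair_no_pause = (ANNUAL_DEATHS * YEARS_TO_AGI_NO_PAUSE
                 + POST_AGI_MORTALITY_FACTOR * ANNUAL_DEATHS * PAUSE_LENGTH)

print(calculator_no_pause, calculator_pause, fair_no_pause)
# calculator: 600m (no pause) vs 1200m (pause); fair no-pause total: 900m
```

So the capped-horizon comparison overstates the pause's cost relative to a no-pause path where post-AGI mortality is merely reduced rather than zero.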
Yes, it's quite weird to assign zero value to people after AGI.
If you e.g. expect p(extinction|no pause) - p(extinction|pause) = 0.1% and 1 trillion people after AGI, then pausing saves a billion people after AGI in expectation.
I've watched Big Joel videos before, which makes me wonder if I have seen this one but forgotten about it. I expressed the same basic idea before this video was released, but the structure I used here is very similar.
Yes, for sure – it's almost identical! Thanks for sharing.
108k views means that many more people have heard of this thesis than I previously thought.
The ones who walk away are the ones who recognize all of this and are no longer willing to participate in the collective illusion (hence, alone).
I like this reading, I hadn't thought about it before!
My understanding is that much of the slow progress from 1878 to the early 1900s came down to the “cinema tech stack” needing to become technically and economically viable.
To get good motion you need ~16 frames per second, which means each frame can be exposed for at most ~1/16 of a second, which in turn means you need things like sensitive film stock, lots of light, and decent lenses. Then you need a camera that can advance the film at a constant speed while also holding each frame perfectly still briefly, for a precise duration, and without any jitter/warping/etc. Then for economic viability you also need projection that’s bright and safe for a room, plus a practical way to duplicate film at scale.
The starting point for all of this was early photography (e.g. daguerreotypes in the 1830s–40s), which used rigid metal plates and multi-minute exposures in bright daylight.
For some forms of AI art (single images, short clips) the tech stack feels mostly already there, while for others (turning short clips into a full-length movie) it doesn't. But maybe that’s just a lack of imagination, and we’ll look back and say something like: “they didn’t realize they needed BCI to really unlock AI art's potential”.