Yep, it's a 17-minute short film by Henry Dunham called The Awareness -- here you go! :)
FYI: the link for preordering currently contains two concatenated instances of the URL. It fails relatively gracefully, though it requires an extra click, since it takes you to the top of the site for the book -- which thankfully also contains a link to the preorder section -- rather than directly to the preorder section itself.
Contrary to what you might infer from the initial spate of downvotes, it doesn't strike me as necessarily nuts that a project of this kind could have value, so I applaud the effort -- though unfortunately this first video does leave a fair bit to be desired, methinks. At a minimum, one would hope that the audio isn't actually somehow less comprehensible than the original; yet in comparing the beginnings, as well as skipping to where Eliezer begins speaking, I find that both seem significantly more effortful to parse (and so then also have the negative side-...
Thanks. For people who aren't likely to watch, I imagine it might also be worth saying that he reports his view as being that we're in an arms race we can't opt out of (and that he's changed his mind regarding -- I think -- the overall appropriateness of such a race, though from what to what I'm not sure) due to insufficient political sanity. Part of what constitutes sanity, he says, would be the US and China being able to create a climate whereby we don't fear each other, though it's not totally obvious whether he thinks a sufficient condition for the race...
Interesting ending to the latest Veritasium video today. It asks "What if all the world's biggest problems have the same solution?" and spends nearly all its time talking about AlphaFold and how AI is starting to be able to accelerate areas of research by literal decades almost overnight. Superficially this will no doubt sound positive to many people, but then -- with his final words -- he slips in the following: "This sounds like an amazing future... as long as the AI doesn't take over and destroy us all first."
Here's an example for you: I used to turn the faucet on while going to the bathroom, thinking it was due simply to a preference for somewhat masking the sound of my elimination habits from my housemates. Then one day I walked into the bathroom listening to something-or-other via earphones and forgot to turn the faucet on, only to realize about halfway through that apparently I didn't actually much care about such masking -- previously, being able to hear myself just seemed to trigger some minor anxiety about it that I'd failed to recognize, though its ab-...
The person whose tweets were linked above when mentioning "they become Zealots, doing lasting damage to their lives, and then burning out spectacularly."
Before I read the aphoristic three-word reply to you from Richard Kennaway (admittedly a likely even clearer-cut way to indicate the following sentiment), I was thinking that to downplay any unintended implications about the magnitude of your probabilities that you could maybe say something about your tracking being for mundane-vigilance or intermittent-map-maintenance or routine-reality-syncing / -surveying / -sampling reasons.
For any audience you anticipate familiarity with this essay though, another idea might be to use a version of something like:
"The ...
While we're on the topic of amending standard Mafia, I suppose I'll also mention that implementing Robin Hanson's EquaTalk might make for an interesting game as well.
Since you've not mentioned a specific brand, to make it potentially even easier for people to grab something they might like I suppose I'll go ahead and link to the following (which appeared many moons ago in a product-recommendation post on SSC), though note it's a bit less sugary than the one above, i.e. just 7g/Tbsp: https://www.amazon.com/gp/product/B00CMGRNAK
Among other things, I suppose they're not super up on the fact that to efficiently colonise the universe [...] watch dry paint stay dry.
Here's a video
It's also written up on Cognitive Revolution's substack for those who prefer text.
clicked first relative to receiving are the same person! And also that person is from the majority group
A majority member being the initial clicker also isn't terribly surprising: the larger a group is, the likelier it becomes that one or more of any given sort of person -- in this case, a quick-responder type -- will crop up among them.
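To make that intuition concrete, here's a minimal simulation sketch (my own illustration, not from the thread): assuming everyone's response time is an independent draw from the same distribution, the chance that the very first clicker comes from the majority is simply the majority's share of the group.

```python
import random

def first_clicker_from_majority(majority_size, minority_size,
                                trials=100_000, seed=0):
    """Estimate how often the fastest responder belongs to the majority,
    assuming everyone's response time is an i.i.d. uniform draw."""
    rng = random.Random(seed)
    majority_wins = 0
    for _ in range(trials):
        fastest_majority = min(rng.random() for _ in range(majority_size))
        fastest_minority = min(rng.random() for _ in range(minority_size))
        if fastest_majority < fastest_minority:
            majority_wins += 1
    return majority_wins / trials

# With identical response-time distributions, the probability is just the
# majority's share of the whole group: 7 / (7 + 3) = 0.7 here.
print(first_clicker_from_majority(7, 3))  # ≈ 0.7
```

So a 70%-majority group "wins" the race to click first about 70% of the time even with no difference at all in individual responsiveness.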
While not really answering your question, reading the description for the problem you're having brought this exploration / taxonomy of okay-ness to mind.
Also perhaps of interest might be this discussion from the SSC subreddit a while back where someone detailed their pro-Bigfoot case.
Serious question: would something originating adjacently from a separate Everett branch count?
(sillier-though-hopefully-not-counterproductive question: since your final statement especially would, I think, often seem to go without saying, its "needless" inclusion actually strikes me as probably-not-but-still-hypothetically-maybe worrisome -- surely you're not intending to imply that's the only recourse allowed for being denied one's winning lottery ticket? [or perhaps my own asking is needless since someone deciding to be a jerk and not wanting to pay could simply use such agreed-upon discretion to "fairly" declare themselves the winner anyways, in which case: sorry for the derail!])
Somewhat similar to you I've thought of the second group as "Vroomers", though Eliezer's talk of cursed bananas has amusingly brought "Sunnysiders" to mind for me as well.
The "Borderline" icon currently being a balance is something I most naturally interpret as "balanced fairly", whereas a similar-ish alternative -- open hands gesturing up & down -- reads more like "iffy" to me and might better communicate the concept. Here's a simultaneously too complex and too crude mockup based on https://thenounproject.com/icon/hand-disinfection-3819834/ :
A similar idea to indicate that something might be kind of a toss-up (which at first blush strikes me as less good than palms balancing, yet maybe better than the icon already in u...
Using a plain heart to express empathy seems easier to confuse with "I love this" than seems ideal. Here are a few other options that seemed potentially appealing after looking through results at The Noun Project for "Empathy" and "Hug":
https://thenounproject.com/icon/take-care-4694299/
https://thenounproject.com/icon/hug-4400944/
https://thenounproject.com/icon/heartbeat-977219/
I think "Muddled" unfortunately seems easier to naturally interpret in an accusatory way, so something else indicating "this was hard for me to see / wasn't clear to me" might work better. My initial thought was to maybe use "Foggy" as a metaphor (as in, "there might be something there, but I'm having a hard time seeing it"). I suppose something with a lighthouse probably looks more like "a beacon of clarity", though here are some other possible Hazy / Cloudy things:
"Strawman" seems like it might be kind of niche, so I went on a quest looking for something more indicative of "I find this to be misleading / misrepresentative" before realizing this apparently already exists. I can't say I really have any issue with the one already in use, but since there seems to be lots of ways to approach this and I already have several at hand, here's a multitude of alternatives just in case any seem especially resonant. Themes depicted below include loss of signal or mutations in translation, frames being warped or distorted or stuf...
This seems right to me, since e.g. if someone were to use anti-excitement to indicate "this is draining", there'd then be an issue of how someone else who sees this might best express that they think it's actually pretty neutral rather than draining (since, while excitement cancels out anti-excitement, indicating excitement itself wouldn't be truth-tracking in this case).
Another hybrid approach if you have multiple substantive comments is to silo each of them in their own reply to a parent comment you've made to serve as an anchor point to / index for your various thoughts. This also has the nice side effect of still allowing separate voting on each of your points as well as (I think) enabling them, when quoting the OP, to potentially each be shown as an optimally-positioned side-comment.
Also posted on his shortform :) https://www.lesswrong.com/posts/fxfsc4SWKfpnDHY97/landfish-lab?commentId=jLDkgAzZSPPyQgX7i
I don't have a direct answer for you, though I imagine the resource mentioned at https://www.lesswrong.com/posts/MKvtmNGCtwNqc44qm/announcing-aisafety-training might well turn up what you're looking for :)
(I thought to append an even gaudier headlining tag, but don't have the heart—if though, by the way, you don't know what's so gaudy about it [and/or already don't reckon you much care about such a gripe] and simultaneously also don't love that I've surreptitiously stolen the focus of your eyeballs for such a matter, well... maybe we're in more agreement than you think! ♥)
A couple things I think I'd enjoy seeing regarding mini-ideogram usage here, though happy to know others' opinions as well:
I'd love if the default text-col...
I don't know how promising this might be, but I saw the following yesterday via the Bountied Rationality facebook group after someone else posted an ad regarding possibly getting Paxlovid shipped to China: https://thehill.com/policy/healthcare/3775373-pfizer-signs-deal-to-sell-paxlovid-in-china-as-covid-cases-climb-report/
Also along these lines, perhaps contrasting the flicker fusion rates of different species could be illustrative as well. Here's a 30-second video displaying the relative perceptions of a handful of species side by side: https://www.youtube.com/watch?v=eA--1YoXHIQ . Additionally, a short section from 10:22 - 10:43 of this other video that incorporates time-stretched audio of birdcalls is fairly evocative: https://www.youtube.com/watch?v=Gvg242U2YfQ .
I wonder if a more influential attribution might be https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-ai-seriously-enough-9313474.html since, in addition to Stuart Russell, it also lists Stephen Hawking, Max Tegmark, and Frank Wilczek on the byline.
For policymakers: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
— Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek (https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-ai-seriously-enough-9313474.html)
Here's a brand new assessment that was just released (July 17): https://futureoflife.org/ai-safety-index-summer-2025/