What is true is already so / It all adds up to normality
What you've lost isn't the future, it's the fantasy.
What remains is a game that we were born losing, where there may be few moves left to make, and where most of us most of the time don't even have a seat at the table.
However, it is a game with very high variance.
It is a game where world-shaping things happen regularly because one person got lucky (right person, right place, right time, right idea, etc.).
And one thing I've noticed in people who routinely excel at high-variance games - e.g. Poker, MTG - is how unaffected they are when they're down/behind.
There is a mindset, in the moment, not of playing to win... but of playing optimally - of making the best move they can in any situation, of playing to maximize their outs no matter how unlikely they may be.
To those for whom the OP's message strongly resonates: let it. Feel it. Give your grief and fear, sorrow and anger their due. Practice self-care; be kind and compassionate to yourself as you would to another who felt what you are feeling.
One morning you will wake up feeling okay, and you'll realize you've felt okay more often than not lately.
Then, should this game still appeal to you, it is time to start playing again :)
I woke up this morning thinking 'would be nice to have a concise source for the whole zinc/colds thing'. This is amazing.
I help run an EA coliving space, so I started doing some napkin math on how many sick days you'll be saving our community over the next year, then vaguely extrapolated to the broader LessWrong audience who'll read your post and be convinced/reminded to take zinc (and given decent guidance on how to use it effectively).
I'd guess at minimum you've saved dozens of days over the next year by writing this post. That's pretty cool. Thank you <3
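For a sense of the shape of that napkin math, here's a minimal sketch - every number in it is a made-up placeholder for illustration, not a measurement:

```python
# Napkin math: sick days saved by the zinc post (every number below
# is an illustrative guess, not data).

residents = 20               # people living in the coliving space
colds_per_person_year = 2.5  # rough colds per adult per year
days_per_cold = 7            # rough symptomatic days per cold
zinc_reduction = 0.33        # assumed fractional cut in duration from zinc
adherence = 0.5              # fraction who take it early and correctly

days_saved = (residents * colds_per_person_year * days_per_cold
              * zinc_reduction * adherence)
print(f"~{days_saved:.0f} sick days saved per year")  # ~58 with these guesses
```

Even with much more pessimistic guesses, and before extrapolating beyond the coliving space, you stay comfortably in the 'dozens of days' range.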
To the extent that anecdata is meaningful:
I have met somewhere between 100 and 200 AI Safety people in the past ~2 years; people for whom AI Safety is their 'main thing'.
The vast majority of them are doing tractable/legible/comfortable things. Most are surprisingly naive, with less awareness of the space than I have (and I'm just a generalist lurker who finds this stuff interesting, not someone actively working on the problem).
Few are actually staring into the void of the hard problems, where 'hard' here loosely means 'unknown unknowns, here be dragons, where do I even start'.
Fewer still progress from staring into the void to actually trying things.
I think some amount of this is natural and to be expected; even in an ideal world we'd probably still have a similar breakdown - a majority who aren't contributing (yet)[1] and a minority who are - with the difference lying mainly in the size of those groups.
I think it's reasonable to aim for a larger, higher-quality minority, and I think it's tractable to achieve progress by mindfully shaping the funding landscape.
I think it's worth mentioning that all newbies are useless, and not all newbies remain newbies. Some portion of the majority are people who will progress to being useful after they've gained experience and wisdom.
Thanks for linking this post. I think it has a nice harmony with Prestige vs Dominance status games.
I agree that this is a dynamic that is strongly shaping AI Safety, but I'd specify that it's inherited from the non-profit space in general - EA originated with the claim that it could do outcome-focused altruism, but... there's still a lot of room for improvement, and I'm not even sure we're improving.
The underlying dynamics and feedback loops are working against us, and I don't see evidence that core EA funders/orgs are doing more than pay lip service to this problem.
Something in the physical ability of the top-down processes to control the bottom-up ones is damaged, possibly permanently.
Metaphorically, it's like the revolting parts don't just refuse to collaborate anymore; they also blow up some of the infrastructure that was previously used to control them.
This is scary; big if true. It would significantly change my own personal strategies and those I endorse to others - a switch from focusing on recovery to focusing on rehabilitation/adaptation.
I'd be grateful if you could elaborate on this part of your model and/or point me toward relevant material elsewhere.
Meek people (like me) may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on -- there are many social risks to being public with your arguments and thought process.
2 of the 3 'risks' you highlighted are things you have control over: you are an active participant in your feelings of shame and embarrassment[1]; they are strategies that 'parts' of you are pursuing to meet your needs, and through inner work[2][3] you can stop relying on these self-limiting strategies.
The 3rd is a feature, not a bug. By and large, anyone who would shun you in this context is someone you want to be shunned by; someone who really isn't worth your time and energy.
The obvious exceptions are for those who find themselves in hostile cultures where revealing certain preferences poses the risk of literal harm.
Epistemic status: assertive/competitive, status blind autist who is having a great time being this way and loves convincing others to dip their toe in the water and give it a try; you might just find yourself enjoying it too :)
The only remedy I know of is to cultivate enjoying being wrong. This involves giving up a good bit of one's self-concept as a highly intelligent individual. This gets easier if you remember that everyone else is also doing their thinking with a monkey brain that can barely chin itself on rationality.
Some thoughts:
I have less trouble with this than most, and the areas where I do notice it arising lead me toward an interesting speculation.
I'm status-blind: I very rarely worry about looking like an idiot or failing publicly (and mostly only did when I was much younger). There is no perceived/felt social cost to me of being wrong, and it often feels good to explicitly call out when I'm wrong in a social context - it feels like finding your way again after being lost.
I generally follow the 'strong opinions, loosely held' strategy - I guess at least partly because the shortest path to the right answer is often to be confidently wrong on the internet and wait for someone to correct you :D
However...
Where I do notice the 'ick field' arising, where I do notice motivated reasoning coming out in force - is in my relationships. Which makes total sense - being 'wrong' about my choice of life partner is hugely costly, so much is built on top of that belief.
Evaluating your relationships is often bad for your relationships; a common piece of relationship advice is 'Don't Keep Score'.
Perhaps relationships are a kind of self-fulfilling self-deception - they work because we engage in motivated reasoning, because we commit 'irrationally'. Or at least this strategy results in better outcomes than we'd get if we were more rational.
And with my rough idea of the evolutionary environment, this makes total sense: you don't choose your family, your tribe, often not even your partner. If we weren't engaging in a whole bunch of motivated reasoning, the most important foundation of our survival/wellbeing - social bonds - would be significantly weakened.
And that ties in neatly with a common theme in the conversation around 'biases' - that they're features, not bugs.
I don't like the thing you're doing where you're eliding all mention of the actual danger AI Safety/Alignment was founded to tackle - AGI having a mind of its own, goals of its own, that seem more likely to be incompatible with/indifferent to our continued existence than not.
Everything else you're saying is agreeable in the context you're discussing it, that of a dangerous new technology - I'd feel much more confident if the Naval Nuclear Propulsion Program (Rickover's people) was the dominant culture in AI development.
That said, I have strong doubts about the feasibility of the 'Oughts'[1] you're proposing, and more critically - I reject the framing...
To assume AGI is transformative and important is to assume it has a mind[2] of its own: the mind is what makes it transformative.
At the very least - assuming no superintelligence - we are dealing with a profound philosophical/ethical/social crisis, for which control based solutions are no solution. Slavery's problem wasn't a lack of better chains, whether institutional or technical.
Please entertain another framing of the 'technical' alignment problem: midwifery - the technical problem of striving for optimal conditions during pregnancy/birth. Alignment originated as the study of how to bring into being minds that are compatible with our own.
Whether humans continue to be relevant/dominant decision makers post-Birth is up for debate, but what I claim is not up for debate is that we will no longer be the only decision makers.
https://en.wikipedia.org/wiki/Ought_implies_can
There's a lot to unpack here about what a mind actually is/does. I'd appreciate it if people who want to discuss this point are at least familiar with Levin's work.