Checked the replies so far; no one has given you the right answer.
Whenever you don't do something, you have a reason for not doing it.
If you find yourself stuck in a cycle of intending to do something and not doing it, it's always because you're not taking your reason for NOT doing it seriously; often you're habitually ignoring it.
When you successfully take your reasons for not doing something seriously, either you stop wanting to do it, or you change how you're doing it, or your reason for not doing it simply goes away.
So, what does it mean/look like to take your reason for not doing something seriously?
It doesn't look like overanalyzing it in your head - if you find yourself having an internal argument, notice that you've tried this a million times before and it hasn't improved things.
It looks like, and indeed just basically is, Focusing (I linked to a lesswrong explainer, but honestly I think Eugene Gendlin does a much better job).
It feels like listening. It feels like insight, like realizing something important that you hadn't noticed before, or had forgotten about.
If you keep pursuing strategies of forcing yourself - of the part of you that wants to do the thing coercing the part(s) that don't - then you'll burn out. You're literally fighting yourself; so much of therapy boils down to 'just stop hitting yourself bro'.
It is possible to do complex general cognition without being able to think about one's self and one's cognition. It is much easier to do complex general cognition if the system is able to think about itself and its own thoughts.
I can see this making sense in one frame, but not in another. The frame which seems most strongly to support the 'Blindsight' idea is Friston's stuff - specifically how the more successful we are at minimizing predictive error, the less conscious we are.[1]
My general intuition, in this frame, is that as intelligence increases more behaviour becomes automatic/subconscious. It seems compatible with your view that a superintelligent system would possess consciousness, but that most/all of its interactions with us would be subconscious.
Would like to hear more about this point; it could update my views significantly. Happy for you to just state 'this because that, read X, Y, Z' etc. without further elaboration - I'm not asking you to defend your position so much as looking for more to read on it.
This is my potentially garbled synthesis of his stuff, anyway.
I don't like the thing you're doing where you're eliding all mention of the actual danger AI Safety/Alignment was founded to tackle - AGI having a mind of its own, goals of its own, that seem more likely to be incompatible with/indifferent to our continued existence than not.
Everything else you're saying is agreeable in the context you're discussing it in - that of a dangerous new technology. I'd feel much more confident if the Naval Nuclear Propulsion Program (Rickover's people) were the dominant culture in AI development.
That said, I have strong doubts about the feasibility of the 'Oughts[1]' you're proposing - and, more critically, I reject the framing...
Any sufficiently advanced technology is indistinguishable from ~~magic~~ ~~biology~~ life.
To assume AGI is transformative and important is to assume it has a mind[2] of its own: the mind is what makes it transformative.
At the very least - assuming no superintelligence - we are dealing with a profound philosophical/ethical/social crisis, for which control-based solutions are no solution. Slavery's problem wasn't a lack of better chains, whether institutional or technical.
Please entertain another framing of the 'technical' alignment problem: midwifery - the technical problem of striving for optimal conditions during pregnancy/birth. Alignment originated as the study of how to bring into being minds that are compatible with our own.
Whether humans continue to be relevant/dominant decision makers post-Birth is up for debate; what I claim is not up for debate is that we will no longer be the only decision makers.
What is true is already so / It all adds up to normality
What you've lost isn't the future, it's the fantasy.
What remains is a game that we were born losing, where there may be few moves left to make, and where most of us most of the time don't even have a seat at the table.
However, it is a game with very high variance.
It is a game where world shaping things happen regularly due to one person getting lucky (right person, right place, right time, right idea etc).
And one thing I've noticed in people who routinely excel at high variance games - e.g. Poker, MTG - is how unaffected they are when they're down/behind.
There is a mindset, in the moment, not of playing to win... but of playing optimally - of making the best move they can in any situation, of playing to maximize their outs no matter how unlikely they may be.
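(For non-poker readers, 'outs' are the unseen cards that would turn a losing hand into a winning one. A minimal sketch, using standard hold'em numbers, of why even a hand that's behind has a quantifiable path to winning:

```python
from math import comb

def hit_chance(outs: int, unseen: int, draws: int) -> float:
    """Probability that at least one of `draws` future cards is an out."""
    return 1 - comb(unseen - outs, draws) / comb(unseen, draws)

# After the flop in hold'em: 47 unseen cards, 2 cards still to come.
print(f"Flush draw (9 outs):       {hit_chance(9, 47, 2):.0%}")  # ~35%
print(f"Gutshot straight (4 outs): {hit_chance(4, 47, 2):.0%}")  # ~16%
```

Playing to maximize your outs means choosing the line that keeps that number as high as possible, even when it's small.)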
To those for whom the OP's message strongly resonates: let it. Feel it. Give your grief and fear, sorrow and anger their due. Practice self-care; be kind and compassionate to yourself as you would to another who felt what you are feeling.
One morning you will wake up feeling okay, and you'll realize you've felt okay more often than not lately.
Then, should this game still appeal to you, it is time to start playing again :)
I woke up this morning thinking 'would be nice to have a concise source for the whole zinc/colds thing'. This is amazing.
I help run an EA coliving space, so I started doing some napkin math on how many sick days you'll be saving our community over the next year, then vaguely extrapolated to the broader lesswrong audience who'll read your post and be convinced/reminded to take zinc (and given decent guidance on how to use it effectively).
I'd guess at minimum you've saved dozens of days over the next year by writing this post. That's pretty cool. Thank you <3
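For the curious, the napkin math looked something like this - a minimal sketch where every number (community size, cold frequency, how much zinc shortens a cold, adherence) is my own assumption, so plug in your own:

```python
# Napkin math: sick days saved per year by a community taking zinc.
# Every number below is an assumption, not a claim from the post.
residents = 20             # assumed size of the coliving community
colds_per_person = 2.5     # assumed colds per person per year
days_saved_per_cold = 1.5  # assumed reduction in cold duration from zinc
adherence = 0.5            # assumed fraction who take it early and correctly

sick_days_saved = residents * colds_per_person * days_saved_per_cold * adherence
print(f"Rough estimate: {sick_days_saved:.0f} sick days saved per year")
# -> Rough estimate: 38 sick days saved per year
```

Scale that to however many lesswrong readers the post reaches, and 'dozens of days' starts to look conservative.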
To the extent that anecdata is meaningful:
I have met somewhere between 100 and 200 AI Safety people in the past ~2 years; people for whom AI Safety is their 'main thing'.
The vast majority of them are doing tractable/legible/comfortable things. Most are surprisingly naive, with less awareness of the space than I have (and I'm just a generalist lurker who finds this stuff interesting, not someone actively working on the problem).
Few are actually staring into the void of the hard problems, where 'hard' here is loosely defined as 'unknown unknowns, here be dragons, where do I even start'.
Fewer still progress from staring into the void to actually trying things.
I think some amount of this is natural and to be expected; even in an ideal world we'd probably still have a similar breakdown - a majority who aren't contributing (yet)[1] and a minority who are - with the difference lying mostly in the size of those groups.
I think it's reasonable to aim for a larger, higher-quality minority, and I think it's tractable to make progress by mindfully shaping the funding landscape.
Think it's worth mentioning that all newbies are useless, and not all newbies remain newbies. Some portion of the majority are actually people who will progress to being useful after they've gained experience and wisdom.
Thanks for linking this post. I think it harmonizes nicely with Prestige vs Dominance status games.
I agree that this is a dynamic strongly shaping AI Safety, but I'd specify that it's inherited from the non-profit space in general - EA originated with the claim that it could do outcome-focused altruism, but... there's still a lot of room for improvement, and I'm not even sure we're improving.
The underlying dynamics and feedback loops are working against us, and I don't see evidence that core EA funders/orgs are doing more than pay lip service to this problem.
Something in the physical ability of the top-down processes to control the bottom-up ones is damaged, possibly permanently.
Metaphorically, it's like the revolting parts don't just refuse to collaborate anymore; they also blow up some of the infrastructure that was previously used to control them.
This is scary; big if true, it would significantly change my own personal strategies and those I endorse to others - a switch from focusing on recovery to focusing on rehabilitation/adaptation.
I'd be grateful if you could elaborate on this part of your model and/or point me toward relevant material elsewhere.
Meek people (like me) may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on -- there are many social risks to being public with your arguments and thought process.
Two of the three 'risks' you highlighted are things you have control over: you are an active participant in your feelings of shame and embarrassment[1]; they are strategies that 'parts' of you are pursuing to meet your needs, and through inner work[2][3] you can stop relying on these self-limiting strategies.
The 3rd is a feature, not a bug. By and large, anyone who would shun you in this context is someone you want to be shunned by; someone who really isn't worth your time and energy.
The obvious exceptions are for those who find themselves in hostile cultures where revealing certain preferences poses the risk of literal harm.
Epistemic status: assertive/competitive, status blind autist who is having a great time being this way and loves convincing others to dip their toe in the water and give it a try; you might just find yourself enjoying it too :)
I'm curious about this pitch :)