What is true is already so / It all adds up to normality
What you've lost isn't the future, it's the fantasy.
What remains is a game that we were born losing, where there may be few moves left to make, and where most of us most of the time don't even have a seat at the table.
However, it is a game with very high variance.
It is a game where world-shaping things happen regularly due to one person getting lucky (right person, right place, right time, right idea, etc.).
And one thing I've noticed in people who routinely excel at high-variance games - e.g. Poker, MTG...
I woke up this morning thinking 'would be nice to have a concise source for the whole zinc/colds thing'. This is amazing.
I help run an EA coliving space, so I started doing some napkin math on how many sick days you'll be saving our community over the next year. Then vaguely extrapolated to the broader lesswrong audience who'll read your post and be convinced/reminded to take zinc (and given decent guidance for how to use it effectively).
I'd guess at minimum you've saved dozens of sick days over the next year by writing this post. That's pretty cool. Thank you <3
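(If anyone wants to redo that napkin math themselves, here's the shape of it in Python - every number below is an illustrative placeholder I made up, not a figure from this post or comment:)

```python
# Napkin-math sketch: sick days a community might save from zinc use.
# All numbers are illustrative placeholders, not real estimates.
residents = 20             # hypothetical coliving space size
colds_per_person_year = 2  # assumed colds per person per year
days_lost_per_cold = 3     # assumed sick days per cold
zinc_reduction = 0.33      # assumed fraction of cold-days zinc prevents

days_saved = residents * colds_per_person_year * days_lost_per_cold * zinc_reduction
print(f"~{days_saved:.0f} sick days saved per year")  # ~40 with these inputs
```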
To the extent that anecdata is meaningful:
I have met somewhere between 100 and 200 AI Safety people in the past ~2 years; people for whom AI Safety is their 'main thing'.
The vast majority of them are doing tractable/legible/comfortable things. Most are surprisingly naive and have less awareness of the space than I do (and I'm just a generalist lurker who finds this stuff interesting, not actively working on the problem).
Few are actually staring into the void of the hard problems; where hard here is loosely defined as 'unknown unknowns, here be dragons, where do I...
Thanks for linking this post. I think it has a nice harmony with Prestige vs Dominance status games.
I agree that this is a dynamic that is strongly shaping AI Safety, but would specify that it's inherited from the non-profit space in general - EA originated with the claim that it could do outcome-focused altruism, but... there's still a lot of room for improvement, and I'm not even sure we're improving.
The underlying dynamics and feedback loops are working against us, and I don't see evidence that core EA funders/orgs are doing more than pay lip service to this problem.
Something in the physical ability of the top-down processes to control the bottom-up ones is damaged, possibly permanently.
Metaphorically, it's like the revolting parts don't just refuse to collaborate anymore; they also blow up some of the infrastructure that was previously used to control them.
This is scary; big if true, it would significantly change my own personal strategies and those I endorse to others - a switch from focusing on recovery to rehabilitation/adaptation.
I'd be grateful if you can elaborate on this part of your model and/or point me toward relevant material elsewhere.
Meek people (like me) may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on - there are many social risks to being public with your arguments and thought process.
2 of the 3 'risks' you highlighted are things you have control over; you are an active participant in your feelings of shame and embarrassment[1], they are strategies 'parts' of you are pursuing to meet your needs, and through inner work[2][3] you can stop re...
The only remedy I know of is to cultivate enjoying being wrong. This involves giving up a good bit of one's self-concept as a highly intelligent individual. This gets easier if you remember that everyone else is also doing their thinking with a monkey brain that can barely chin itself on rationality.
Some thoughts:
I have less trouble with this than most, and the areas where I do notice it arising lead me toward an interesting speculation.
I'm status blind: I very rarely, and mostly only when I was much younger, worry about looking like an idiot/failing...
I am very confused.
My first thought when reading this was 'huh, no wonder they're getting mixed results - they're doing it wrong'.
My second thought when returning to this a day later: good - anything I do to contribute to the ability to understand and measure persuasion is literally directly contributing to dangerous capabilities.
Counterfactually, if we don't create evals for this... are we not expected to notice that LLMs are becoming increasingly more persuasive? More able to model and predict human psychology?
What is actually the 'safety' case for this research? What theory of change predicts this work will be net positive?
Re: 2
The most promising way is just raising children better.
See (which I'm sure you've already read): https://www.lesswrong.com/posts/CYN7swrefEss4e3Qe/childhoods-of-exceptional-people
Alongside that though, I think the next biggest leverage point would be something like nationalising social media and retargeting development/design toward connection and flourishing (as opposed to engagement and profit).
This is one area where, if we didn't have multiple catastrophic time pressures, I'd be pretty optimistic about the future. These are incredibly high impact and t...
Is there anything useful we can learn from Crypto ASICs as to how this will play out? And specifically, how to actually bet on it?
Replying to this because it seems a useful addition to the thread; assuming OP already knows this (and more).
1.) And none of the correct counterplays are 'look, my opponent is cheating/look, this game is unfair'. (Scrub mindset)
2.) You know what's more impressive than winning a fair fight? Winning an unfair one. While not always an option, and usually with high risk:reward, beating an opponent who has an asymmetric situational advantage is hella convincing; it affords a much higher ceiling (relative to a 'fair' game) to demonstrate just how much better than your opponent you are.
It's an interesting framework, I can see it being useful.
I think it's more useful when you consider both high-decoupling and low-decoupling to be failure modes. More specifically: when one is dominant and the other is neglected, you reliably end up with inaccurate beliefs.
You went over the mistakes of low-decouplers in your post, and provided a wonderful example of a high-decoupler mistake too!
...High decouplers will notice that, holding preferences constant, offering people an additional choice cannot make them worse off. People will only take the choice if
I think all future technology has AI as a prerequisite?
My high conviction hot take goes further: I think all positive future timelines have AI as a prerequisite. I expect that, sans AI, our future - our immediate future: decades, not centuries - is going to be the ugliest, and last, chapter in our civilization's history.
I have been in the position of trying to moderate a large and growing community - it was at 500k users last I checked, although I threw in the towel around 300k - and I know what a thankless, sisyphean task it is.
I know what it is to have to explain the same - perfectly reasonable - rule/norm again and again and again.
I know what it is to try to cultivate and nurture a garden while hordes of barbarians trample all over the place.
But...
If it ain't broke, don't fix it.
I would argue that the majority of the listed people penalized are net contributors to lessw...
Fine. You win. Take your upvote.
Big fan of both of your writings, this dialogue was a real treat for me.
I've been trying to find a satisfying answer to the seeming inverse correlation between 'wellbeing' and 'agency' (these are very loose labels).
You briefly allude to a potential mechanism for this[1]
You also briefly allude to another mechanism with explanatory power for the inverse[2] - i.e. that while it might seem an individual is highly agentic, they are in fact little more than a host for a highly agentic egregore
I'm engaged in that most quixotic endeavour of actually trying to save...
I don't think there's anything wrong with cultivating a warrior archetype; I strive to cultivate one myself.
Would love to read more on this.
+1 for the 14/24 club.
Hmmm, where to start. Something of a mishmash of thoughts here.
Actually a manager, not yet clear if I'm particularly successful at it. I certainly enjoy it and I've learned a lot in the past year.
Noticing Panic is a great Step 0, and I really like how you contrast it to noticing confusion.
I used to experience 'Analysis Paralysis' - too much planning, overthinking, and zero doing. This is a form of perfectionism, and is usually rooted in fear of failure.
I expect most academics have been taught entirely the wrong (in the sense of https://www.lesswrong.com/pos...
Re: average age of authors/laureates and average team size
Are these data adjusted for demographic changes? i.e. Aging populations in most western countries, and general population growth.
...I think it is a mistake to import "democracy" at the vision level. Vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Also, Deutsch was writing about this in "The Beginning of Infinity" in the chapter about democracy.
We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference"), but not decisions (plans, engineering designs, visions). These should be created by a coherent cr
Joshua Williams created an initial version of a metacrisis map
It's a good presentation, but it isn't a map.
A literal map of the polycrisis[1] can show:
The polycrisis has been my primary source of novelty/intellectual stimulation for a good long while now. Excited to see people explicitly talking about it here.
With regard to the central proposition:
...I think if there were A Plan to make the world visibly less broken, made out of many components which are themselves made out of components that people could join and take responsibility for, this would increase the amount of world-fixing work being done and would meaningfully decrease the brokenness of the world. Further, I think there's a lot of Common Cause
There's a guy called Rafe Kelley on YouTube who has a fairly good answer to this, which I'm going to attempt to summarize from memory because I can't point you toward any reasonable sources (I heard him talking about it in a 1h+ conversation with everyone's favourite boogeyman, Jordan Peterson).
His reasoning goes thus:
1.) We need play in order to develop: play teaches us how to navigate Agent - Arena relationships
This speaks to the result of playground injuries increasing despite increased supervision - kids aren't actually getting to spend enough time pla...
Depending on the kind of support they're looking for https://ceealar.org could be an option. At any one time there are a handful of people staying there working independently on AI Safety stuff.
Wholly agree with the 'if it works it works' perspective.
Two minor niggles are worth mentioning:
Niggles aside, if it works it works. And nothing is more important than sleep for he...
So I'm basically the target audience for the OP - I read a lot, of all kinds of stuff, and almost zero papers. I'm an autodidact with no academic background.
I appreciated the post. I usually need a few reminders that 'this thing has value' before I finally get around to exploring it :)
I would say, as the target audience, I'm probably representative when I say that a big part of the reason we don't read papers is a lack of access, and a lack of discovery tools. I signed up for Elicit a while back, but as above - haven't gotten around to using it yet :D
In my experience the highest epistemic standard is achieved in the context of 'nerds arguing on the internet'. If everyone is agreeing, all you have is an echo chamber.
I would argue that good faith, high effort contributions to any debate are something we should always be grateful for if we are seeking the truth.
I think the people who would be most concerned with 'anti-doom' arguments are those who believe it is existentially important to 'win the argument/support the narrative/spread the meme' - that truthseeking isn't as important as trying to embed a cu...
Re: EMH is false, long GOOG
I wish you'd picked a better example.
tl;dr LLMs make search cost more, much more, and thus significantly threaten GOOG's bottom line.
MSFT knows this, and is explicitly using Bing Sydney as an attack on GOOG.
I'm not questioning the capabilities of GOOG's AI department, I'm sure Deepmind have the shiniest toys.
But it's hardly bullish for their share price if their core revenue stream is about to be decapitated or perhaps even entirely destroyed - ad based revenue has been on shaky ground for a while now, I...
AI Therapy isn't the first domino to fall, AI Customer Service is (it's already falling).
95% of customer service humans can be replaced by a combination of Whisper+GPT; they (the humans) are already barely agentic, just following complex scripts. It's likely that AI customer service will provide a superior experience most of the time (shorter wait times and better audio quality at a minimum, often more competent and knowledgeable too, and plausibly capable of supporting many languages).
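(To make the 'combination of Whisper+GPT' concrete, a minimal sketch of such a pipeline, assuming the OpenAI Python client; the function, model names, and prompt here are mine, purely illustrative:)

```python
# Illustrative sketch of a Whisper + GPT customer service loop.
# Assumes the `openai` package; models and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def handle_customer_audio(audio_path: str, support_script: str) -> str:
    # Speech -> text via Whisper
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # Text -> scripted reply via a chat model
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a polite customer service agent. "
                        "Follow this script:\n" + support_script},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content
```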
Obviously huge cost savings so massive incentive for companies to replace hum...
Thanks for your post, just wanted to contribute by deconfusing ADHD a little (hopefully). I agree that you and OP seem to be agreeing more than disagreeing.
So speaking from a pretty thorough ignorance of the topic itself, my guess based on my priors is that the problem-ness of ADHD has more to do with the combo of (a) taking in the culture's demand that you be functional in a very particular way combined with (b) a built-in incapability of functioning that way.
Correct. However that problem-ness is often a matter of survival/highly non-optional. ADHD can be...
Thanks for this post, it was insightful and perfectly timed; I've been intermittently returning to the problem of trust for a while now and it was on my mind this morning when I found your post.
I think shared reality isn't just a 'warm fuzzies' thing, it's a vital component of cooperation.
I think it's connected with the trust problem; your ability to trust someone is dependent to some degree on a shared reality.
I think that these problems have been severely exacerbated by our current technologies and the social landscape they've shaped, but I'm also highly...
To start with, I agree.
I really agree: about timescales, about the risks of misalignment, about the risks of alignment. In fact I think I'll go further and say that in a hypothetical world where an aligned AGI is controlled by a 99th percentile Awesome Human Being, it'll still end in disaster; homo sapiens just isn't capable of handling this kind of power.[1]
That's why the only kind of alignment I'm interested in is the kind that results in the AGI in control; that we 'align' an AGI with some minimum values that anchor it in a vaguely anthropocentric meme-...
At the outset, I'll say that the answer to 'should you have kids?' in general, is probably not. I'll also say that I've seen/had this discussion dozens of times now and the result is always the same: you're gonna do what you want to do and rationalize it however you need to. The genes win this fight 9 times out of 10.
If you're rich (if you reasonably expect to own multiple properties and afford all of life's luxuries for the rest of your life), it's probably okay - you won't face financial ruin and your children will be insulated from the worst of what's to...
So, two ideas:
...Our best evidence of what people truly feel and believe comes less from their words than from their deeds. Observers trying to decide what a man is like look closely at his actions... the man himself uses this same evidence to decide what he is like. His behavior tells him about himself; it is a primary source of information about his beliefs and values and attitudes.
Writing was one sort of confirmin
Yep, big fan of Watts and will +1 this recommendation to any other readers.
Curious if you've read much of Schroeder's stuff? Lady of Mazes in particular explores, among many other things, an implementation of this tech that might-not-suck.
The quick version is a society in which everyone owns their GPT-Me, controls distribution of it, and has access to all of its outputs. They use them as a social interface - can't talk to someone? Send your GPT-Me to talk to them. They can't talk to you? Your GPT-Me can talk to GPT-Them and you both get the results fed back to you. etc etc.
>Well, first, there are such things as energy technologies. The steam engine is a technology. Processes to create coke from coal, or to refine crude oil, are technologies. These technologies are what make all of that energy accessible and usable.
To quote my post:
>Certainly technology is involved in capture/extraction/utilisation. But... hmm there's a quote 'Labour without energy is a corpse, capital (substitute technology here) without energy is a sculpture'.
And back to you (emphasis mine):
>I don't think this does answer the question, becaus...
So in your specific example of the threshing machine:
Surplus energy is required such that enough of the population are freed from subsistence and agriculture to specialize in other things.
Even more surplus energy is required for the creation/upkeep of cities, which are a prerequisite for technological innovation/growth (a high density of different specialists living alongside each other, as well as a labour force for factories/mass production).
And the railroads that enabled the widespread distribution of threshing machines - obviously highly energy intensive,...
Why doesn't your analysis account for energy at all?
(Apologies in advance if any/all of this is obvious to you)
Too much sleep is bad, too little sleep is bad. Sleep needs vary per person and throughout life but generally >6 hours, <9 hours is the range.
You don't really sleep in 'hours', you sleep in cycles (https://en.wikipedia.org/wiki/Sleep_cycle), so measuring by hours doesn't work so well.
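(As a toy illustration of what cycle-based timing means in practice - assuming the commonly cited ~90-minute cycle, which varies from person to person:)

```python
# Toy sketch: wake times aligned to whole ~90-minute sleep cycles.
# The 90-minute cycle length is a rule of thumb, not a personal constant.
from datetime import datetime, timedelta

def cycle_wake_times(bedtime, cycle_minutes=90, min_cycles=4, max_cycles=6):
    """4-6 whole cycles is roughly the 6-9 hour window above."""
    return [bedtime + timedelta(minutes=cycle_minutes * n)
            for n in range(min_cycles, max_cycles + 1)]

for t in cycle_wake_times(datetime(2023, 1, 1, 23, 0)):
    print(t.strftime("%H:%M"))  # 05:00, 06:30, 08:00
```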
If you wake up naturally sometime in that 6-9 hour window, and you sleep deeply through the night (smartwatches are good at measuring this), you're probably getting enough sleep.
If you have reason to be conc...
https://www.goodreads.com/book/show/534755.A_Technique_for_Producing_Ideas?ac=1&from_search=true&qid=FeFvMKus2k&rank=1
+ Incorporating understanding of https://en.wikipedia.org/wiki/Flow_(psychology)
+ Drugs
If you're willing to accept 'on command' as 'something I spend days/weeks intentionally preparing/cultivating', then it seems like you're in luck.
Sorry if this is all old news; not what you were looking for.
Feel free to delete because this is highly tangential, but are you aware of Mark Solms' work (https://www.goodreads.com/book/show/53642061-the-hidden-spring) on consciousness, and the subsequent work he's undertaking on artificial consciousness?
I'm an idiot, but it seems like this is a different-enough path to artificial cognition that it could represent a new piece of the puzzle, or a new puzzle entirely - a new problem/solution space. As I understand it, AI capabilities research is building intelligence from the outside-in, whereas the consciousness model would be capable of building it from the inside-out.
https://en.wikipedia.org/wiki/Zhan_zhuang
Both meditation and exercise. A daily (1hr a day is the sweet spot), lifelong practice without end. Easy to learn, probably impossible for most of us to master but that's okay because mastery isn't the point.
The point is to strengthen and broaden the connection between mind and body, and the connections within your body itself - to relearn how to move with the whole body.
To learn how to be still, and yet relaxed instead of stiff.
The point is also, at least for me, to do something impossibly slow and hard every day. ...
I disagree, strongly. Not only do I believe this line of reasoning to be wrong, I believe it to be dangerously wrong. I believe downplaying and/or underestimating the role of energy in our economic system is part of why we find ourselves in the mess we're in today.
To reference Nate Hagens (https://www.youtube.com/watch?v=-xr9rIQxwj4)
We use the equivalent of 100 billion barrels of oil a year. Each barrel of oil can do the amount of work it would take 5 humans to do. There are 500 billion 'ghost' labourers in our society today.
(Back to me)
You cannot ea...
The more powerful a tool is, the more important it is that the tool behaves predictably.
A chainsaw that behaves unpredictably is very, very, dangerous.
AI is, conservatively, thousands of times more powerful than a chainsaw.
And unlike an unpredictable chainsaw, there is no guarantee we will be able to turn an unpredictable AI off and fix it or replace it.
It is plausible that the danger of failing to align AI safely - to make it predictable - is such that we only have one chance to get it right.
Finally, it is absurdly cheap to make massive progress in AI safety.
This was wonderful; the post that finally got me to create an account here. I got quite a few sensible chuckles and a few hearty laughs out of your list. I think we've been reading similar books recently (Graeber's Dawn of Everything? :) )
My contribution is to remind the participants that a somewhat recurring theme in history (something of an original in western philosophy - e.g. Socrates) is that of wise people enjoying themselves too much and getting murdered by the people who'd grown increasingly scared/estranged/horrified by them.
Heretical thinking is fun, but in the real world there are people who would harm you for exposing them to it.
Practice safe heresy kids :)
I don't like the thing you're doing where you're eliding all mention of the actual danger AI Safety/Alignment was founded to tackle - AGI having a mind of its own, goals of its own, that seem more likely to be incompatible with/indifferent to our continued existence than not.
Everything else you're saying is agreeable in the context you're discussing it, that of a dangerous new technology - I'd feel much more confident if the Naval Nuclear Propulsion Program (Rickover's people) were the dominant culture in AI development.
Albeit I have strong doubts about the...