Roon, a member of OpenAI’s technical staff, is one of the few candidates for a Worthy Opponent when discussing questions of AI capabilities development, AI existential risk and what we should do about it. Roon is alive. Roon is thinking. Roon clearly values good things over bad things. Roon is engaging with the actual questions, rather than denying or hiding from them, and is unafraid to call all sorts of idiots idiots. As his profile once said, he believes the spice must flow and that we should simply go ahead, and he makes a mixture of arguments for that, some good, some bad and many absurd. Also, his account is fun as hell.

Thus, when he came out as strongly as he seemed to recently, attention was paid, and we got to have a relatively good discussion of key questions. While I attempt to contribute here, this post is largely aimed at preserving that discussion.


The Initial Statement

As you would expect, Roon’s statement last week, that AGI was inevitable and nothing could stop it so you should essentially spend your final days with your loved ones and hope it all works out, led to some strong reactions.

Many pointed out that AGI has to be built, at very large cost, by highly talented hardworking humans, in ways that seem entirely plausible to prevent or redirect if we decided to prevent or redirect those developments.

Roon (from last week): Things are accelerating. Pretty much nothing needs to change course to achieve agi imo. Worrying about timelines is idle anxiety, outside your control. you should be anxious about stupid mortal things instead. Do your parents hate you? Does your wife love you?

Roon: It should be all the more clarifying coming from someone at OpenAI. I and my colleagues and Sama could drop dead and AGI would still happen. If I don’t feel any control everyone else certainly shouldn’t.

Tetraspace: “give up about agi there’s nothing you can do” nah

Sounds like we should take action to get some control, then. This seems like the kind of thing we should want to be able to control.

Connor Leahy: I would like to thank roon for having the balls to say it how it is. Now we have to do something about it, instead of rolling over and feeling sorry for ourselves and giving up.

Simeon: This is BS. There are <200 irreplaceable folks at the forefront. OpenAI alone has a >1 year lead. Any single of those persons can single handedly affect the timelines and will have blood on their hands if we blow ourselves up bc we went too fast.

PauseAI: AGI is not inevitable. It requires hordes of engineers with million dollar paychecks. It requires a fully functional and unrestricted supply chain of the most complex hardware. It requires all of us to allow these companies to gamble with our future.

Tolga Bilge: Roon, who works at OpenAI, telling us all that OpenAI have basically no control over the speed of development of this technology their company is leading the creation of.

It’s time for governments to step in.

His reply is deleted now, but I broadly agree with his point here as it applies to OpenAI. This is a consequence of AI race dynamics. The financial upside of AGI is so great that AI companies will push ahead with it as fast as possible, with little regard to its huge risks.

OpenAI could do the right thing and pause further development, but another less responsible company would simply take their place and push on. Capital and other resources will move accordingly too. This is why we need government to help solve the coordination problem now. [continues as you would expect]

Saying no one has any control so why try to do anything to get control back seems like the opposite of what is needed here.

The Doubling Down

Roon’s reaction:

Roon: buncha ⏸ emojis harassing me today. My post was about how it’s better to be anxious about things in your control and they’re like shame on you.

Also tweets don’t get deleted because they’re secret knowledge that needs to be protected. I wouldn’t tweet secrets in the first place. They get deleted when miscommunication risk is high, so screenshotting makes you a de facto antisocial idiot.

Roon’s point on idle anxiety is indeed a good one. If you are not one of those trying to gain or assert some of that control, as most people on Earth are not and should not be, then of course I agree that idle anxiety is not useful. However, Roon then attempted to extend this into the claim that all anxiety about AGI is idle, that no one has any control. That is where there is strong disagreement, and what is causing the reaction.

Roon: It’s okay to watch and wonder about the dance of the gods, the clash of titans, but it’s not good to fret about the outcome. political culture encourages us to think that generalized anxiety is equivalent to civic duty.

Scott Alexander: Counterargument: there is only one God, and He finds nothing in the world funnier than letting ordinary mortals gum up the carefully-crafted plans of false demiurges. Cf. Lord of the Rings.

Anton: conversely if you have a role to play in history, fate will punish you if you don’t see it through.

Alignment Perspectives: It may punish you even more for seeing it through if your desire to play a role is driven by arrogance or ego.

Anton: Yeah it be that way.

Connor Leahy Gives it a Shot

Connor Leahy (responding to Roon): The gods only have power because they trick people like this into doing their bidding. It’s so much easier to just submit instead of mastering divinity engineering and applying it yourself. It’s so scary to admit that we do have agency, if we take it. In other words: “cope.”

It took me a long time to understand what people like Nietzsche were yapping on about: people practically begging to have their agency be taken away from them.

It always struck me as authoritarian cope, justification for wannabe dictators to feel like they’re doing a favor to people they oppress (and yes, I do think there is a serious amount of that in many philosophers of this ilk.)

But there is also another, deeper, weirder, more psychoanalytic phenomenon at play. I did not understand what it was or how it works or why it exists for a long time, but I think over the last couple of years of watching my fellow smart, goodhearted tech-nerds fall into these deranged submission/cuckold traps I’ve really started to understand.

e/acc is the most cartoonish example of this, an ideology that appropriates faux, surface level aesthetics of power while fundamentally being an ideology preaching submission to a higher force, a stronger man (or something even more psychoanalytically-flavored, if one were to ask ol’ Sigmund), rather than actually striving for power acquisition and wielding. And it is fully, hilariously, embarrassingly irreflexive about this.

San Francisco is a very strange place, with a very strange culture. If I had to characterize it in one way, it is a culture of extremes, where everything on the surface looks like the opposite of what it is (or maybe the “inversion”). It’s California’s California, and California is the USA’s USA. The most powerful distillation of a certain strain of memetic outgrowth.

And on the surface, it is libertarian, Nietzschean even, a heroic founding mythos of lone iconoclasts striking out against all to find and wield legendary power. But if we take the psychoanalytic perspective, anyone (or anything) that insists too hard on being one thing is likely deep down the opposite of that, and knows it.

There is a strange undercurrent to SF that I have not seen people put good words to where it in fact hyper-optimizes for conformity and selling your soul, debasing and sacrificing everything that makes you human in pursuit of some god or higher power, whether spiritual, corporate or technological.

SF is where you go if you want to sell every last scrap of your mind, body and soul. You will be compensated, of course, the devil always pays his dues.

The innovative trick the devil has learned is that people tend to not like eternal, legible torment, so it is much better if you sell them an anxiety free, docile life. Free love, free sex, free drugs, freedom! You want freedom, don’t you? The freedom to not have to worry about what all the big boys are doing, don’t you worry your pretty little head about any of that…

I recall a story of how a group of AI researchers at a leading org (consider this rumor completely fictional and illustrative, but if you wanted to find its source it’s not that hard to find in Berkeley) became extremely depressed about AGI and alignment, thinking that they were doomed if their company kept building AGI like this.

So what did they do? Quit? Organize a protest? Petition the government?

They drove out, deep into the desert, and did a shit ton of acid…and when they were back, they all just didn’t feel quite so stressed out about this whole AGI doom thing anymore, and there was no need for them to have to have a stressful confrontation with their big, scary, CEO.

The SF bargain. Freedom, freedom at last…

This is a very good attempt to identify key elements of the elephant I grasp when I notice that being in San Francisco very much does not agree with me. I always have excellent conversations during visits, because the city has abducted so many of the best people, and I always get excited by them. But the place feels alien, as if I am being constantly attacked by paradox spirits, visiting a deeply hostile and alien culture that has inverted many of my most sacred values and wants to eat absolutely everything. Whereas here, in New York City, I feel very much at home.

Meanwhile, back in the thread:

Connor (continuing): I don’t like shitting on roon in particular. From everything I know, he’s a good guy, in another life we would have been good friends. I’m sorry for singling you out, buddy, I hope you don’t take it personally.

But he is doing a big public service here in doing the one thing spiritual shambling corpses like him can do at this advanced stage of spiritual erosion: Serve as a grim warning.

Roon Responds to Connor

Roon responds quite well:

Roon: Connor, this is super well written and I honestly appreciate the scathing response. You mistake me somewhat: you, Connor, are obviously not powerless and you should do what you can to further your cause. Your students are not powerless either. I’m not asking you to give up and relent to the powers that be even a little. I’m not “e/acc” and am repelled by the idea of letting the strongest replicator win.

I think the majority of people have no insight into whether AGI is going to cause ruin or not, whether a gamma ray burst is fated to end mankind, or if electing the wrong candidate is going to doom earth to global warming. It’s not good for people to spend all their time worried about cosmic eventualities. Even for an alignment researcher the optimal mental state is to think on and play and interrogate these things rather than engage in neuroticism as the motivating force.

It’s generally the lack of spirituality that leads people to constant existential worry rather than too much spirituality. I think it’s strange to hear you say in the same tweet thread that SF demands submission to some type of god but is also spiritually bankrupt and that I’m corpselike.

My spirituality is simple, and several thousand years old: find your duty and do it without fretting about the outcome.

I have found my personal duty and I fulfill it, and have been fulfilling it, long before the market rewarded me for doing so. I’m generally optimistic about AI technology. When I’ve been worried about deployment, I’ve reached out to leadership to try and exert influence. In each case I was wrong to worry.

When the OpenAI crisis happened I reminded people not to throw the baby out with the bath water: that AI alignment research is vital.

This is a very good response. He is pointing out that yes, some people such as Connor can influence what happens, and they in particular should try to model and influence events.

Roon is also saying that he himself is doing his best to influence events. Roon realizes that those at OpenAI matter and that what they do matters.

Roon reached out to leadership on several occasions with safety concerns. When he says he was ‘wrong to worry’ I presume he means that the situation worked out and was handled. I am confident that expressing his concerns was the output of the best available decision algorithm; you want most such concerns you express to turn out fine.

Roon also worked, in the wake of events at OpenAI, to remind people of the importance of alignment work, that they should not toss it out based on those events. Which is a scary thing for him to report having to do, but expected, and it is good that he did so. I would feel better if I knew Ilya was back working at Superalignment.

And of course, Roon is constantly active on Twitter, saying things that impact the discourse, often for the better. He seems keenly aware that his actions matter, whether or not he could meaningfully slow down AGI. I actually think he perhaps could, if he put his mind to it.

The contrast here versus the original post is important. The good message is ‘do not waste time worrying too much over things you do not impact.’ The bad message is ‘no one can impact this.’

Connor Goes Deep

Then Connor goes deep, and it gets weirder. This long post has 450k views and is aimed largely at trying to get through to Roon in particular. There are many others in a similar spot, so some of them should read this as well. Many of you, however, should skip it.

Connor: Thanks for your response Roon. You make a lot of good, well put points. It’s extremely difficult to discuss “high meta” concepts like spirituality, duty and memetics even in the best of circumstances, so I appreciate that we can have this conversation even through the psychic quagmire that is twitter replies.

I will be liberally mixing terminology and concepts from various mystic traditions to try to make my point, apologies to more careful practitioners of these paths.

For those unfamiliar with how to read mystic writing, take everything written as metaphors pointing to concepts rather than rationally enumerating and rigorously defining them. Whenever you see me talking about spirits/supernatural/gods/spells/etc, try replacing them in your head with society/memetics/software/virtual/coordination/speech/thought/emotions and see if that helps.

It is unavoidable that this kind of communication will be heavily underspecified and open to misinterpretation, I apologize. Our language and culture simply lacks robust means by which to communicate what I wish to say.

Nevertheless, an attempt:

I.

I think a core difference between the two of us that is leading to confusion is what we both mean when we talk about spirituality and what its purpose is.

You write:

>”It’s not good for people to spend all their time worried about cosmic eventualities. […] It’s generally the lack of spirituality that leads people to constant existential worry rather than too much spirituality. I think it’s strange to hear you say in the same tweet thread that SF demands submission to some type of god but is also spiritually bankrupt and that I’m corpselike”

This is an incredibly common sentiment I see in Seekers of all mystical paths, and it annoys the shit out of me (no offense lol).

I’ve always had this aversion to how much Buddhism (Not All™ Buddhism) focuses on freedom from suffering, and especially Western Buddhism is often just shy of hedonistic. (nevermind New Age and other forms of neo-spirituality, ugh) It all strikes me as so toxically selfish.

No! I don’t want to feel nice and avoid pain, I want the world to be good! I don’t want to feel good about the world, I want it to be good! These are not the same thing!!

My view does not accept “but people feel better if they do X” as a general purpose justification for X! There are many things that make people feel good that are very, very bad!

II.

Your spiritual journey should make you powerful, so you can save people that are in need, what else is the fucking point? (Daoism seems to have a bit more of this aesthetic, but they all died of drinking mercury so lol rip) You travel into the Underworld in order to find the strength you need to fight off the Evil that is threatening the Valley, not so you can chill! (Unless you’re a massive narcissist, which ~everyone is to varying degrees)

The mystic/heroic/shamanic path starts with departing from the daily world of the living, the Valley, into the Underworld, the Mountains. You quickly notice how much of your previous life was illusions of various kinds. You encounter all forms of curious and interesting and terrifying spirits, ghosts and deities. Some hinder you, some aid you, many are merely odd and wondrous background fixtures.

Most would-be Seekers quickly turn back after their first brush with the Underworld, returning to the safe comforting familiarity of the Valley. They are not destined for the Journey. But others prevail.

As the shaman progresses, he learns more and more to barter with, summon and consult with the spirits, learns of how he can live a more spiritually fulfilling and empowered life. He tends to become more and more like the Underworld, someone a step outside the world of the Valley, capable of spinning fantastical spells and tales that the people of the Valley regard with awe and a bit of fear.

And this is where most shamans get stuck, either returning to the Valley with their newfound tricks, or becoming lost and trapped in the Underworld forever, usually by being picked off by predatory Underworld inhabitants.

Few Seekers make it all the way, and find the true payoff, the true punchline to the shamanic journey: There are no spirits, there never were any spirits! It’s only you. (and “you” is also not really a thing, longer story)

“Spirit” is what we call things that are illegible and appear non-mechanistic (unintelligible and un-influenceable) in their functioning. But of course, everything is mechanistic, and once you understand the mechanistic processes well enough, the “spirits” disappear. There is nothing non-mechanistic left to explain. There never were any spirits. You exit the Underworld. (“Emergent agentic processes”, aka gods/egregores/etc, don’t disappear, they are real, but they are also fully mechanistic, there is no need for unknowable spirits to explain them)

The ultimate stage of the Journey is not epic feelsgoodman, or electric tingling erotic hedonistic occult mastery. It’s simple, predictable, mechanical, Calm. It is mechanical, it is in seeing reality for what it is, a mechanical process, a system that you can act in skilfully. Daoism has a good concept for this that is horrifically poorly translated as “non-action”, despite being precisely about acting so effectively it’s as if you were just naturally part of the Stream.

The Dao that can be told is not the true Dao, but the one thing I am sure about the true Dao is that it is mechanical.

III.

I think you were tricked and got stuck on your spiritual journey, lured in by promises of safety and lack of anxiety, rather than progressing to exiting the Underworld and entering the bodhisattva realm of mechanical equanimity. A common fate, I’m afraid. (This is probably an abuse of buddhist terminology, trying my best to express something subtle, alas)

Submission to a god is a way to avoid spiritual maturity, to outsource the responsibility for your own mind to another entity (emergent/memetic or not). It’s a powerful strategy, you will be rewarded (unless you picked a shit god to sell your soul to), and it is in fact a much better choice for 99% of people in most scenarios than the Journey.

The Underworld is terrifying and dangerous, most people just go crazy/get picked off by psycho fauna on their way to enlightenment and self mastery. I think you got picked off by psycho fauna, because the local noosphere of SF is a hotbed for exactly such predatory memetic species.

IV.

It is in my aesthetics to occasionally see someone with so much potential, so close to getting it, and hitting them with the verbal equivalent of a bamboo rod to hope they snap out of it. (It rarely works. The reasons it rarely works are mechanistic and I have figured out many of them and how to fix them, but that’s for a longer series of writing to discuss.)

Like, bro, by your own admission, your spirituality is “I was just following orders.” Yeah, I mean, that’s one way to not feel anxiety around responsibility. But…listen to yourself, man! Snap out of it!!!

Eventually, whether you come at it from Buddhism, Christianity, psychoanalysis, Western occultism/magick, shamanism, Nietzscheanism, rationality or any other mystic tradition, you learn one of the most powerful filters on people gaining power and agency is that in general, people care far, far more about avoiding pain than in doing good. And this is what the ambient psycho fauna has evolved to exploit.

You clearly have incredible writing skills and reflection, you aren’t normal. Wake up, look at yourself, man! Do you think most people have your level of reflective insight into their deepest spiritual motivations and conceptions of duty? You’re brilliantly smart, a gifted writer, and followed and listened to by literally hundreds of thousands of people.

I don’t just give compliments to people to make them feel good, I give people compliments to draw their attention to things they should not expect other people to have/be able to do.

If someone with your magickal powerlevel is unable to do anything but sell his soul, then god has truly forsaken humanity. (and despite how it may seem at times, he has not truly forsaken us quite yet)

V.

What makes you corpse-like is that you have abdicated your divine spark of agency to someone, or something, else, and that thing you have given it to is neither human nor benevolent, it is a malignant emergent psychic megafauna that stalks the bay area (and many other places). You are as much an extension of its body as a shambling corpse is of its creator’s necromantic will.

The fact that you are “optimistic” (feel your current bargain is good), that you were already like this before the market rewarded you for it (a target with a specific profile and set of vulnerabilities to exploit), that leadership can readily reassure you (the psychofauna that picked you off is adapted to your vulnerabilities. Note I don’t mean the people, I’m sure your managers are perfectly nice people, but they are also extensions of the emergent megafauna), and that we are having this conversation right now (I target people that are legibly picked off by certain megafauna I know how to hunt or want to practice hunting) are not independent coincidences.

VI.

You write:

>”It’s not good for people to spend all their time worried about cosmic eventualities. Even for an alignment researcher the optimal mental state is to think on and play and interrogate these things rather than engage in neuroticism as the motivating force”

Despite my objection about avoidance of pain vs doing of good, there is something deep here. The deep thing is that, yes, of course the default ways by which people will relate to the Evil threatening the Valley will be Unskillful (neuroticism, spiralling, depression, pledging to the conveniently nearby located “anti-that-thing-you-hate” culturewar psychofauna), and it is in fact often the case that it would be better for them to use No Means rather than Unskillful Means.

Not everyone is built for surviving the harrowing Journey and mastering Skilful Means, I understand this, and this is a fact I struggle with as well.

Obviously, we need as many Heroes as possible to take on the Journey in order to master the Skilful Means to protect the Valley from the ever more dangerous Threats. But the default outcome of some rando wandering into the Underworld is them fleeing in terror, being possessed by Demons/Psychofauna or worse.

How does a society handle this tradeoff? Do we just yeet everyone headfirst into the nearest Underworld portal and see what staggers back out later? (The SF Protocol™) Do we not let anyone into the Underworld for fear of what Demons they might bring back with them? (The Dark Ages Strategy™) Obviously, neither naive strategy works.

Historically, the strategy is to usually have a Guide, but unfortunately those tend to go crazy as well. Alas.

So is there a better way? Yes, which is to blaze a path through the Underworld, to build Infrastructure. This is what the Scientific Revolution did. It blazed a path and mass produced powerful new memetic/psychic weapons by which to fend off unfriendly Underworld dwellers. And what a glorious thing it was for this very reason. (If you ever hear me yapping on about “epistemology”, this is to a large degree what I’m talking about)

But now the Underworld has adapted, and we have blazed paths into deeper, darker corners of the Underworld, to the point our blades are beginning to dull against the thick hides of the newest Terrors we have unleashed on the Valley.

We need a new path, new weapons, new infrastructure. How do we do that? I’m glad you asked…I’m trying to figure that out myself. Maybe I will speak more about this publicly in the future if there is interest.

VII.

> “I have found my personal duty and I fulfill it, and have been fulfilling it, long before the market rewarded me for doing so.”

Ultimately, the simple fact is that this is a morality that can justify anything, depending on what “duty” you pick, and I don’t consider conceptions of “good” to be valid if they can be used to justify anything.

It is just a null statement, you are saying “I picked a thing I wanted and it is my duty to do that thing.” But where did that thing come from? Are you sure it is not the Great Deceiver/Replicator in disguise? Hint: If you somehow find yourself gleefully working on the most dangerous existential harm to humanity, you are probably working for The Great Deceiver/Replicator.

It is not a coincidence that the people that end up working on these kinds of most dangerous possible technologies tend to have ideologies that tend to end up boiling down to “I can do whatever I want.” Libertarianism, open source, “duty”…

I know, I was one of them.

Coda.

Is there a point I am trying to make? There are too many points I want to make, but our psychic infrastructure can barely host meta conversations at all, nevermind high-meta like this.

Then what should Roon do? What am I making a bid for? Ah, alas, if all I was asking for was for people to do some kind of simple, easy, atomic action that can be articulated in simple English language.

What I want is for people to be better, to care, to become powerful, to act. But that is neither atomic nor easy.

It is simple though.

Roon (QTing all that): He kinda cooked my ass.

Christian Keil: Honestly, kinda. That dude can write.

But it’s also just a “what if” exposition that explores why your worldview would be bad assuming that it’s wrong. But he never says why you’re wrong, just that you are.

As I read it, your point is “the main forces shaping the world operate above the level of individual human intention & action, and understanding this makes spirituality/duty more important.”

And his point is “if you are smart, think hard, and accept painful truths, you will realize the world is a machine that you can deliberately alter.”

That’s a near-miss, but still a miss, in my book.

Roon: Yes.

Connor Leahy: Finally, someone else points out where I missed!

I did indeed miss the heart of the beast, thank you for putting it this succinctly.

The short version is “You are right, I did not show that Roon is object level wrong”, and the longer version is:

“I didn’t attempt to take that shot, because I did not think I could pull it off in one tweet (and it would have been less interesting). So instead, I pointed to a meta process, and made a claim that iff roon improved his meta reasoning, he would converge to a different object level claim, but I did not actually rigorously defend an object level argument about AI (I have done this ad nauseam elsewhere). I took a shot at the defense mechanism, not the object claim.

Instead of pointing to a flaw in his object level reasoning (of which there are so many, I claim, that it would be intractable to address them all in a mere tweet), I tried to point to (one of) the meta-level generator of those mistakes.”

I like to think I got most of that, but how would I know if I was wrong?

Focusing on one aspect of this: one must hold both concepts in one’s head at the same time.

  1. The main forces shaping the world operate above the level of individual human intention & action, and you must understand how they work and flow in order to be able to influence them in ways that make things better.
  2. If you are smart, think hard, and accept painful truths, you will realize the world is a machine that you can deliberately alter.

These are both ‘obviously’ true. You are in the shadow of the Elder Gods, up against Cthulhu (well, technically Azathoth). The odds are against you and the situation is grim, and if we are to survive you are going to have to punch them out in the end, which means figuring out how to do that, and you won’t be doing it alone.

A Question of Agency

Meanwhile, some more wise words:

Roon: it is impossible to wield agency well without having fun with it; and yet wielding any amount of real power requires a level of care that makes it hard to have fun. It works until it doesn’t.

Also see:

Roon: people will always think my vague tweets are about agi but they’re about love

And also from this week:

Roon: once you accept the capabilities vs alignment framing it’s all over and you become mind killed

What would be a better framing? The issue is that all alignment work is likely to also be capabilities work, and much of capabilities work can help with alignment.

One can and should still ask the question: does applying my agency to differentially advancing this particular thing make good outcomes more likely than bad ones? Will it grow our ability to control and understand what AI does faster than it grows the ability of AIs to do more things? What paths does this help us walk down?

Yes, collectively we absolutely have control over these questions. We can coordinate to choose a different path, and each individual can help steer towards better paths. If necessary, we can take strong collective action, including regulatory and legal action, to stop the future from wiping us out. Idle anxiety or worry about such outcomes is indeed pointless and should be minimized; have only the amount required to figure out and take the most useful actions.

What that implies about the best actions for a given person to take will vary widely. I am certainly not claiming to have all the answers here. I like to think Roon would agree that both of us, and many but far from all of you reading this, are in the group that can help improve the odds.

Comments
Askwho:

With such a cast of characters, I've done a full voiced ElevenLabs narration for this post:
https://open.substack.com/pub/askwhocastsai/p/read-the-roon-by-zvi-mowshowitz

mishka:

> I would feel better if I knew Ilya was back working at Superalignment.

Same here...


I wonder what is known about that.

It seems that in mid-January OpenAI and Ilya were still discussing what would happen, and I have not seen any further information since then. There are tons of Manifold Ilya-related markets, and they also reflect high uncertainty and no additional information.

Actually, we do have a bit of new information today: Ilya is listed as one of the authors of OpenAI’s new blog post commenting on Elon’s lawsuit.

But yes, it would be really nice to find out more...


March 8 update, from the call with reporters: When Altman was asked about Sutskever’s status on the Zoom call with reporters, he said there were no updates to share. “I love Ilya... I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

So, it seems this is still where it has been since mid-January (up in the air, with the eventual resolution uncertain).

Ratios:

Damn, reading Connor's letter to Roon had a psychoactive influence on me; I got Ayahuasca flashbacks. There are some terrifying and deep truths lurking there.

> political culture encourages us to think that generalized anxiety is equivalent to civic duty.

This is a wonderful way to put it!

As I see it, the "generalized anxiety" is essentially costly signaling. By being anxious, you signal "my reactions to the latest political thing are important for the fate of the world", i.e. that you are important.

But if this was the entire story, you would be able to notice it and feel ashamed that you fell for such a simple trick. Reframing it as a civic duty keeps you in the trap, because it provides a high-status answer to why you are doing it, distracting you from realizing what you are doing.

(Metaphorically: "The proper way to signal high status is to keep pushing this button all the time." "I keep pushing the button and I feel great about it, but also my finger is starting to hurt a lot." "Don't worry; the fact that your finger hurts proves that you are a good and wise person." "Wow, now I keep pushing the button and my finger hurts like hell, and I feel great about it!")


> What would be a better framing?

I talk about something related in self and no-self; the outward-flowing 'attempt to control' and the inward-flowing 'attempt to perceive' are simultaneously in conflict (something being still makes it easier to see where it is, but also makes it harder to move it to where it should be) and mutually reinforcing (being able to tell where something is makes it easier to move it precisely where it needs to be).

Similarly, you can make an argument that control without understanding is impossible, that getting AI systems to do what we want is one task instead of two. I think I agree the “two progress bars” frame is incorrect, but the typical AGI developer at a lab is not grappling with the philosophical problems behind alignment difficulties, and is trying to make something that ‘works at all’ instead of ‘works understandably’ in the sort of way that would actually lead to the understanding which would enable control.