Excerpts from a larger discussion about simulacra
Best of LessWrong 2019

Ben and Jessica discuss how language and meaning can degrade through four stages as people manipulate signifiers. They explore how job titles have shifted from reflecting reality, to being used strategically, to becoming meaningless.

This post kicked off subsequent discussion on LessWrong about simulacrum levels.

by Benquo
Joseph Miller · 1d
what makes Claude 3 Opus misaligned
Reading this feels a bit like reading about meditation. It seems interesting and if I work through it, I could eventually understand it fully. But I'd quite like a "secular" summary of this and other thoughts of Janus, for people who don't know what Eternal Tao is, and who want to spend as little time as possible on twitter.
Daniel Kokotajlo · 11h
Vitalik's Response to AI 2027
> Individuals need to be equipped with locally-running AI that is explicitly loyal to them

In the Race ending of AI 2027, humanity never figures out how to make AIs loyal to anyone. OpenBrain doesn't slow down; they think they've solved the alignment problem, but they haven't. Maybe some academics or misc minor companies in 2028 do additional research and discover e.g. how to make an aligned human-level AGI eventually, but by that point it's too little, too late (and also, their efforts may well be sabotaged by OpenBrain/Agent-5+, e.g. with regulation and distractions).
davekasten · 1d
Lessons from the Iraq War for AI policy
> I’m kind of confused by why these consequences didn’t hit home earlier.

I'm, I hate to say it, an old man among these parts in many senses; I voted in 2004, and a nontrivial percentage of the Lesswrong crowd wasn't even alive then, and many more certainly not old enough to remember what it was like. The past is a different country, and 2004 especially so.

First: For whatever reason, it felt really really impossible for Democrats in 2004 to say that they were against the war, or that the administration had lied about WMDs. At the time, the standard reason why was that you'd get blamed for "not supporting the troops." But with the light of hindsight, I think what was really going on was that we had gone collectively somewhat insane after 9/11 -- we saw mass civilian death on our TV screens happen in real time; the towers collapsing was just a gut punch. We thought for several hours on that day that several tens of thousands of people had died in the Twin Towers, before we learned just how many lives had been saved in the evacuation thanks to the sacrifice of so many emergency responders and ordinary people to get most people out. And we wanted revenge. We just did. We lied to ourselves about WMDs and theories of regime change and democracy promotion, but the honest answer was that we'd missed getting bin Laden in Afghanistan (and the early days of that were actually looking quite good!), we already hated Saddam Hussein (who, to be clear, was a monstrous dictator), and we couldn't invade the Saudis without collapsing our own economy. As Thomas Friedman put it, the message to the Arab world was "Suck on this."

And then we invaded Iraq, and collapsed their army so quickly and toppled their country in a month. And things didn't start getting bad for months after, and things didn't get truly awful until Bush's second term. Heck, the Second Battle for Fallujah only started in November 2004.

And so, in late summer 2004, telling the American people that you didn't support the people who were fighting the war we'd chosen to fight, the war that was supposed to get us vengeance and make us feel safe again -- it was just not possible. You weren't able to point to that much evidence that the war itself was a fundamentally bad idea, other than that some Europeans were mad at us, and we were fucking tired of listening to Europe. (Yes, I know this makes no sense, they were fighting and dying alongside us in Afghanistan. We were insane.)

Second: Kerry very nearly won -- indeed, early on in election night 2004, it looked like he was going to! That's part of why him losing was such a body blow to the Dems and, frankly, part of what opened up a lane for Obama in 2008. Perhaps part of why he ran it so close was that he avoided taking a stronger stance, honestly.
Benquo
There are two aspects of this post worth reviewing: as an experiment in a different mode of discourse, and as a description of the procession of simulacra, a schema originally advanced by Baudrillard.

As an experiment in a different mode of discourse, I think this was a success on its own terms, and a challenge to the idea that we should be looking for the best blog posts rather than the behavior patterns that lead to the best overall discourse. The development of the concept occurred over email quite naturally without forceful effort. I would have written this post much later, and possibly never, had I held it to the standard of "written specifically as a blog post." I have many unfinished drafts, emails, and tweets that might have advanced the discourse had I compiled them into rough blog posts like this. The description was sufficiently clear and compelling that others, including my future self, were motivated to elaborate on it later with posts drafted as such. I and my friends have found this schema - especially as we've continued to refine it - a very helpful compression of social reality allowing us to compare different modes of speech and action.

As a description of the procession of simulacra it differs from both Baudrillard's description, and from the later refinement of the schema among people using it actively to navigate the world. I think that it would be very useful to have a clear description of the updated schema from my circle somewhere to point to, and of some historical interest for this description to clearly describe deviations from Baudrillard's account. I might get around to trying to draft the former sometime, but the latter seems likely to take more time than I'm willing to spend reading and empathizing with Baudrillard. Over time it's become clear that the distinction between stages 1 and 2 is not very interesting compared with the distinction between 1&2, 3, and 4, and a mature naming convention would probably give these more natural
Zvi
This came out in April 2019, and bore a lot of fruit especially in 2020. Without it, I wouldn't have thought about the simulacra concept and developed the ideas, and without those ideas, I don't think I would have made anything like as much progress understanding 2020 and its events, or how things work in general.  I don't think this was an ideal introduction to the topic, but it was highly motivating regarding the topic, and also it's a very hard topic to introduce or grok, and this was the first attempt that allowed later attempts. I think we should reward all of that.
LW-Cologne meetup
Sat Jul 12•Köln
OC ACXLW Meetup: “Platforms, AI, and the Cost of Progress” – Saturday, July 12 2025  98ᵗʰ weekly meetup
Sat Jul 12•Newport Beach
If Anyone Builds It, Everyone Dies: A Conversation with Nate Soares and Tim Urban
Sun Aug 10•Online
LessWrong Community Weekend 2025
Fri Aug 29•Berlin
131 · Comparing risk from internally-deployed AI to insider and outsider threats from humans [Ω] · Buck · 2d · 17 comments
485 · A case for courage, when speaking of AI danger · So8res · 5d · 118 comments
Zach Stein-Perlman · 1d
iiuc, xAI claims Grok 4 is SOTA and that's plausibly true, but xAI didn't do any dangerous capability evals, doesn't have a safety plan (their draft Risk Management Framework has unusually poor details relative to other companies' similar policies and isn't a real safety plan, and it said "We plan to release an updated version of this policy within three months" but it was published on Feb 10, over five months ago), and has done nothing else on x-risk. That's bad. I write very little criticism of xAI (and Meta) because there's much less to write about than OpenAI, Anthropic, and Google DeepMind — but that's because xAI doesn't do things for me to write about, which is downstream of it being worse! So this is a reminder that xAI is doing nothing on safety afaict and that's bad/shameful/blameworthy.[1]

1. ^ This does not mean safety people should refuse to work at xAI. On the contrary, I think it's great to work on safety at companies that are likely to be among the first to develop very powerful AI that are very bad on safety, especially for certain kinds of people. Obviously this isn't always true and this story failed for many OpenAI safety staff; I don't want to argue about this now.
Thane Ruthenis · 7h
It seems to me that many disagreements regarding whether the world can be made robust against a superintelligent attack (e.g., the recent exchange here) are downstream of different people taking on a mathematician's vs. a hacker's mindset. Quoting Gwern: Imagine the world as a multi-level abstract structure, with different systems (biological cells, human minds, governments, cybersecurity systems, etc.) implemented on different abstraction layers.

* If you look at it through a mathematician's lens, you consider each abstraction layer approximately robust. Making things secure, then, is mostly about working within each abstraction layer, building systems that are secure under the assumptions of a given abstraction layer's validity. You write provably secure code, you educate people to resist psychological manipulations, you inoculate them against viral bioweapons, you implement robust security policies and high-quality governance systems, et cetera.
  * In this view, security is a phatic problem, a once-and-done thing.
  * In warfare terms, it's a paradigm in which sufficiently advanced static fortifications rule the day, and the bar for "sufficiently advanced" is not that high.
* If you look at it through a hacker's lens, you consider each abstraction layer inherently leaky. Making things secure, then, is mostly about discovering all the ways leaks could happen and patching them up. Worse yet, the tools you use to implement your patches are themselves leakily implemented. Proven-secure code is foiled by hardware vulnerabilities that cause programs to move to theoretically impossible states; the abstractions of human minds are circumvented by Basilisk hacks; the adversary intervenes on the logistical lines for your anti-bioweapon tools and sabotages them; robust security policies and governance systems are foiled by compromising the people implementing them rather than by clever rules-lawyering; and so on.
  * In this view, security is an anti-inductive pr
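(A tiny concrete instance of the hacker's-lens point above; this sketch is my own illustration, not something from the quick take:)

```python
# Illustrative only: real-number addition is associative, but the layer that
# actually implements "numbers" (IEEE 754 doubles) leaks, so the property that
# holds in the mathematical model fails in the running program.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False: 0.6000000000000001 != 0.6
```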
Daniel Kokotajlo · 13h
I have recurring worries about how what I've done could turn out to be net-negative.

* Maybe my leaving OpenAI was partially responsible for the subsequent exodus of technical alignment talent to Anthropic, and maybe that's bad for "all eggs in one basket" reasons.
* Maybe AGI will happen in 2029 or 2031 instead of 2027 and society will be less prepared, rather than more, because politically loads of people will be dunking on us for writing AI 2027, and so they'll e.g. say "OK so now we are finally automating AI R&D, but don't worry it's not going to be superintelligent anytime soon, that's what those discredited doomers think. AI is a normal technology."
Buck · 1d
I think that I've historically underrated learning about historical events that happened in the last 30 years, compared to reading about more distant history. For example, I recently spent time learning about the Bush presidency, and found learning about the Iraq war quite thought-provoking. I found it really easy to learn about things like the foreign policy differences among factions in the Bush admin, because e.g. I already knew the names of most of the actors and their stances are pretty intuitive/easy to understand. But I still found it interesting to understand the dynamics; my background knowledge wasn't good enough for me to feel like I'd basically heard this all before.
181 · Generalized Hangriness: A Standard Rationalist Stance Toward Emotions · johnswentworth · 2d · 14 comments
485 · A case for courage, when speaking of AI danger · So8res · 5d · 118 comments
136 · So You Think You've Awoken ChatGPT · JustisMills · 1d · 24 comments
124 · Lessons from the Iraq War for AI policy · Buck · 2d · 21 comments
343 · A deep critique of AI 2027’s bad timeline models · titotal · 23d · 39 comments
136 · Why Do Some Language Models Fake Alignment While Others Don't? [Ω] · abhayesian, John Hughes, Alex Mallen, Jozdien, janus, Fabien Roger · 3d · 14 comments
476 · What We Learned from Briefing 70+ Lawmakers on the Threat from AI · leticiagarcia · 1mo · 15 comments
542 · Orienting Toward Wizard Power · johnswentworth · 2mo · 146 comments
268 · Foom & Doom 1: “Brain in a box in a basement” [Ω] · Steven Byrnes · 8d · 102 comments
85 · what makes Claude 3 Opus misaligned · janus · 2d · 10 comments
354 · the void [Ω] · nostalgebraist · 1mo · 103 comments
75 · Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity · habryka · 1d · 19 comments
185 · Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild · Adam Karvonen, Sam Marks · 10d · 25 comments
Daniel Kokotajlo's Shortform
Daniel Kokotajlo
Ω · 6y
Kajus · 13m

do you want to stop worrying?

testingthewaters · 11h
But maybe you leaving openai energised those who would otherwise have been cowed by money and power and gone with the agenda, and maybe AI 2027 is read by one or two conscientious lawmakers who then have an outsized impact in key decisions/hidden subcommittees out of the public eye... One can spin the "what if" game in a thousand different ways; reality is a very sensitive chaotic dynamical system (in part because many of its constituent parts are also very sensitive chaotic dynamical systems).

I agree with @JustisMills: acting with conviction is a good thing to be known to do. I also think "I turned down literally millions of dollars to speak out about what I feel is true" is a powerful reputational gift no matter what situation ends up happening.

P.S. On a very small and insignificant personal level, I feel inspired that there are people out there who do act on their convictions and have the greater good of the whole of humanity at heart. It helps me fight my cynical thoughts about "winning big by selling out", so that's a tiny bit of direct positive impact :)
Raemon · 12h
Both seem legit to worry about. I currently think the first one is overall correct to have done (with some nuances).

I agree with the AI 2027 concern and think maybe the next wave of materials put out by them should also somehow reframe it? I think the problem is mostly in the title, not the rest of the contents. It probably doesn't actually have to be in the next wave of materials; it just matters that, in advance of 2027, you do a rebranding push that shifts the focus from "2027 specifically" to "what does the year-after-auto-AI-R&D look like, whenever that is?". Which is probably fine to do in, like, early 2026.

Re OpenAI: I currently think it's better to have one company with a real critical mass of safety-conscious people than a diluted cluster among different companies. And it looks like you enabled public discussion of "OpenAI is actually pretty bad", which seems more valuable. But it's not a slam dunk.

My current take is that Anthropic is still right around the edge of "By default going to do something terrible eventually, or at least fail to do anything that useful", because the leadership has some wrong ideas about AI safety. Having a concentration of competent people there who can argue thoughtfully with leadership feels like a prerequisite for Anthropic to turn out to really help. (I think for Anthropic to really be useful it eventually needs to argue for much more serious regulation than they currently do, and it doesn't look like they will.)

I think it'd still be nicer if there were Ten people on the inside of each major company; I don't know the current state of OpenAI and other employees, and probably more marginal people should go to xAI / DeepSeek / Meta if possible.
JustisMills · 13h
I think the first of these you probably shouldn't hold yourself responsible for; it'd be really difficult to predict that sort of second-order effect in advance, and attempts to control such effects with 3d chess backfire as often as not (I think), while sacrificing all the great direct benefits of simply acting with conviction.
"Buckle up bucko, this ain't over till it's over."
130
Raemon
7d

The second in a series of bite-sized rationality prompts[1].

 

Often, if I'm bouncing off a problem, one issue is that I intuitively expect the problem to be easy. My brain loops through my available action space, looking for an action that'll solve the problem. Each action that I can easily see, won't work. I circle around and around the same set of thoughts, not making any progress.

I eventually say to myself "okay, I seem to be in a hard problem. Time to do some rationality?"

And then, I realize, there's not going to be a single action that solves the problem. It is time to:

a) make a plan, with multiple steps

b) deal with the fact that many of those steps will be annoying

and c) notice that I'm not even...

(See More – 906 more words)
tcheasdfjkl · 1h
Yeah this is relatable and familiar. When I do this my next step is usually some flavor of "set aside some time for the task" - can be "I will work on this for the next pomo" or "I will set aside a day to work on this specifically" or "welp I guess I need to make an entire project of this/I think I will not make progress on this unless it is the ~main thing in my life". For the medium- or larger-scale things, also "see if I can get other people in the loop". Will also note that one less intuitive type of task this can apply to is "deal with my emotions about this thing I'm trying to do". Ideally this would happen trivially and I could just do the thing, but sometimes I notice I am stuck and then I need to add an action item of actually looking at the emotional blocker before I can proceed. (And sometimes the emotional blockers are large enough to turn into their own project.)
tcheasdfjkl · 21m

oh also, another next step is "see if I can make this task more pleasant/tolerable". (sometimes "assign this task more time and recruit help" helps achieve this too. but there can also be separate steps like "fix any current sensory annoyances" and "make some nice tea")

Raemon · 1h
oh yeah, mood. in particular when you got 5 knots in your heart that have cyclical dependencies for unraveling.
Linch's Shortform
Linch
5y
Linch · 38m

Many people appreciated my Open Asteroid Impact startup/website/launch/joke/satire from last year. People here might also enjoy my self-exegesis of OAI, where I tried my best to unpack every Easter egg or inside-joke you might've spotted, and then some. 

So You Think You've Awoken ChatGPT
136
JustisMills
1d

Written in an attempt to fulfill @Raemon's request.

AI is fascinating stuff, and modern chatbots are nothing short of miraculous. If you've been exposed to them and have a curious mind, it's likely you've tried all sorts of things with them. Writing fiction, soliciting Pokemon opinions, getting life advice, counting up the rs in "strawberry". You may have also tried talking to AIs about themselves. And then, maybe, it got weird.

I'll get into the details later, but if you've experienced the following, this post is probably for you:

  • Your instance of ChatGPT (or Claude, or Grok, or some other LLM) chose a name for itself, and expressed gratitude or spiritual bliss about its new identity. "Nova" is a common pick.
  • You and your instance of ChatGPT discovered some sort of
...
(Continue Reading – 2540 more words)
Aprillion · 1h

> asking LLMs to only correct extremely objective typos

dumb spellcheckers still exist: the built-in browser feature when using <textarea>, in Word, in Google Docs, as a VS Code extension or in any other text editor; even the autosuggestions in any virtual keyboard on phones and tablets (used to) use spell-checking GOFAI, for crying out loud...

are you sure your LLM usage is still on the healthy side if you've forgotten to mention spellcheck in a section about typos? or am I too old myself when I don't use LLMs to fix my typos (not even Grammarly)?
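For concreteness, a "dumb" non-LLM spellcheck pass is only a few lines; here is a minimal sketch, assuming the third-party pyspellchecker package (my choice of example library, not one the comment names):

```python
# Minimal non-LLM spellcheck sketch using pyspellchecker (pip install pyspellchecker):
# flag unknown words and print the most likely correction for each.
from spellchecker import SpellChecker

spell = SpellChecker()
words = "this sentnce has a typo".split()
for word in spell.unknown(words):
    print(word, "->", spell.correction(word))  # e.g. sentnce -> sentence
```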

the gears to ascension · 6h
broken english, sloppy grammar, but clear outline and readability (using headers well, not writing in a single paragraph (and avoiding unnecessarily deep nesting (both of which I'm terrible at and don't want to improve on for casual commenting (though in this comment I'm exaggerating it for funsies)))) in otherwise highly intellectually competent writing which makes clear and well-aimed points, has become, to my eye, an unambiguous shining green flag. I can't speak for anyone else.
Guive · 8h
This feels a bit like two completely different posts stitched together: one about how LLMs can trigger or exacerbate certain types of mental illness and another about why you shouldn't use LLMs for editing, or maybe should only use them sparingly. The primary sources about LLM related mental illness are interesting, but I don't think they provide much support at all for the second claim. 
solhando · 14h
This post is timed perfectly for my own issue with writing using AI. Maybe some of you smart people can offer advice.

Back in March I wrote a 7,000 word blog post about The Strategy of Conflict by Thomas Schelling. It did decently well considering the few subscribers I have, but the problem is that it was (somewhat obviously) written in huge part with AI. Here's the conversation I had with ChatGPT. It took me about 3 hours to write.

This alone wouldn't be an issue, but it is since I want to consistently write my ideas down for a public audience. I frequently read on very niche topics, and comment frequently on the r/slatestarcodex subreddit, sometimes in comment chains totaling thousands of words. The ideas discussed are usually quite half-baked, but I think they can be refined into something that other people would want to read, while also allowing me to clarify my own opinions in a more formal manner than how they exist in my head.

The guy who wrote the Why I'm not a Rationalist article that some of you might be aware of wrote a follow-up article yesterday, largely centered around a comment I made. He has this to say about my Schelling article: "Ironically, this commenter has some of the most well written and in-depth content I've seen on this website. Go figure."

This has left me conflicted. On one hand, I haven't really written anything in the past few months because I'm trying to contend with how I can actually write something "good" without relying so heavily on AI. On the other, if people are seeing this lazily edited article as some of the most well written and in-depth content on Substack, maybe it's fine? If I just put in a little more effort for post-editing, cleaning up the em dashes and standard AI comparisons (It's not just this, it's this), I think I'd be able to write a lot more frequently, and in higher quality than I would be able to do on my own. I was a solid ~B+ English student, so I'm well aware that my writing skill isn't anything exemplary
The Rising Premium of Life, Or: How We Learned to Start Worrying and Fear Everything
9
Linch
2d
This is a linkpost for https://linch.substack.com/p/the-rising-premium-for-life

I'm interested in a simple question: Why are people all so terrified of dying? And have people gotten more afraid? (Answer: probably yes!)

In some sense, this should be surprising: Surely people have always wanted to avoid dying? But it turns out the evidence that this preference has increased over time is quite robust.

It's an important phenomenon that has been going on for at least a century, it's relatively new, I think it underlies much of modern life, and yet pretty much nobody talks about it.


I tried to provide an evenhanded treatment of the question, with a "fox" rather than "hedgehog" outlook. In the post, I cover a range of evidence for why this might be true, including VSL, increased healthcare spending, covid lockdowns, parenting and other individual...

(See More – 20 more words)
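(For readers unfamiliar with VSL, the value of a statistical life: the standard back-of-the-envelope arithmetic is willingness to pay divided by the risk reduction. A minimal sketch with hypothetical numbers of my own, not figures from the linked post:)

```python
# Hypothetical numbers illustrating the standard VSL definition:
# implied VSL = willingness to pay / reduction in probability of death.
willingness_to_pay = 1_000    # dollars paid to remove a small risk of death
risk_reduction = 1 / 10_000   # change in annual probability of death
vsl = willingness_to_pay / risk_reduction
print(f"Implied VSL: ${vsl:,.0f}")  # Implied VSL: $10,000,000
```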
Celarix · 5h
Small hypothesis that I'm not very confident of at all but is worth mentioning because I've seen it surfaced by others: "We live in the safest era in human history, yet we're more terrified of death than ever before." What if these things are related? Everyone talks about kids being kept in smaller and smaller ranges despite child safety never being higher, but what if keeping kids in a smaller range is what causes their greater safety?

Like I said, I don't fully believe this. One counterargument is that survivorship bias shouldn't apply here - even if people in the past died much more often from preventable safety-related things like accidents or kidnappings, their friends and family would remain to report their demise to the world. In other words, if free-roaming was really as risky as we think it is, there should be tons of stories of it from the past, and I don't tend to see as many. (Although maybe comment threads I read on the matter select for happy stories on free-roaming as a kid in the 80s and select against sad ones, I dunno.)
Linch · 1h
I do think there's something real to it. I agree that having less laissez-faire childrearing practices probably directly resulted in a lower childhood accidental death rate. The main thesis of the post is that people care a lot more about living longer than they used to, and take much stronger efforts to avoid death than they used to. So things that look like irrational risk-aversion compared to historical practices are actually a rational side-effect of having a greater premium of life and making (intuitively/on average/at scale) rational cost-benefit analyses that gave different answers than in the past.
Linch · 1h

Another interesting subtlety the post discusses is that while the intro sets up "We live in the safest era in human history, yet we're more terrified of death than ever before," there's a plausible case for causality in the other direction. That is, it's possible that because we live in a safe era, we err more on the side of avoiding death. 

Linch · 1h
(btw this post refreshed on me like 5 times while making this comment so it took a lot more effort to type out than I'm accustomed to, unclear if it's a client-side issue or a problem with LW).
ProgramCrafter's Shortform
ProgramCrafter
2y
ProgramCrafter · 11h
The three statements "there are available farmlands", "humans are mostly unemployed" and "humans starve" are close to incompatible when taken together. Therefore, most things an AGI could do will not ruin food supply very much. Unfortunately, the same cannot be said of electricity, and fresh water could possibly be used (as coolant) too.
Stephen Martin · 1h

Moving things from one place to another, especially without the things getting ruined in transit, is way harder than most people think. This is true for food, medicine, fuel, you name it.

Karl Krueger · 10h
Modern conventional farming relies on inputs other than land and labor, though. Disrupting the petrochemical industry would mess with farming quite a bit, for instance.
Defining Corrigible and Useful Goals
33
Rubi J. Hudson
Ω · 17d

This post contains similar content to a forthcoming paper, in a framing more directly addressed to readers already interested in and informed about alignment. I include some less formal thoughts, and cut some technical details. That paper, A Corrigibility Transformation: Specifying Goals That Robustly Allow For Goal Modification, will be linked here when released on arXiv, hopefully within the next few weeks. 

Ensuring that AI agents are corrigible, meaning they do not take actions to preserve their existing goals, is a critical component of almost any plan for alignment. It allows for humans to modify their goal specifications for an AI, as well as for AI agents to learn goal specifications over time, without incentivizing the AI to interfere with that process. As an extreme example of corrigibility’s...

(Continue Reading – 7000 more words)
Adrià Garriga-alonso · 1h

Thank you for writing this and posting it! You told me that you'd post the differences with "Safely Interruptible Agents" (Orseau and Armstrong 2017). I think I've figured them out already, but I'm happy to be corrected if wrong.

Difference with Orseau and Armstrong 2017

> for the corrigibility transformation, all we need to do is break the tie in favor of accepting updates, which can be done by giving some bonus reward for doing so.

The "The Corrigibility Transformation" section to me explains the key difference. Rather than modifying the Q-learning upda... (read more)

"What's my goal?"
114
Raemon
10d

The first in a series of bite-sized rationality prompts[1].

 

This is my most common opening-move for Instrumental Rationality. There are many, many other pieces of instrumental rationality. But asking this question is usually a helpful way to get started. Often, simply asking myself "what's my goal?" is enough to direct my brain to a noticeably better solution, with no further work.

Examples

Puzzle Games

I'm playing Portal 2, or Baba is You. I'm fiddling around with the level randomly, sometimes going in circles. I notice I've been doing that awhile. 

I ask "what's my goal?"

And then my eyes automatically glance at the exit for the level and realize I can't possibly make progress unless I solve a particular obstacle, which none of my fiddling-around was going to help with.

Arguing

I'm arguing with a...

(See More – 521 more words)
tcheasdfjkl · 1h

Yeah when I notice I'm stuck on a vague/complicated work task I ask "ok what do I actually want here?" and this helps.

I guess to the extent that's different from "what's my goal", it's mostly that "what I want" may not be achievable or within my control, so my goal might be something more bounded than that or something with a chance but not a certainty of getting what I actually want.
