Author here. I still endorse the post and have continued to find it pretty central to how I think about myself and nearby ecosystems.
I just submitted some major edits to the post. Changes include:
1. Name change ("Robust, Coherent Agent")
After much hemming and hawing and arguing, I changed the name from "Being a Robust Agent" to "Being a Robust, Coherent Agent." I'm not sure if this was the right call.
It was hard to pin down exactly one "quality" that the post was aiming at. Coherence was the single word that pointed towards "what sort of agent to become." But I think "robustness" still points most clearly towards why you'd want to change. I added some clarifying remarks about that. In individual sentences I tend to refer to either "Robust Agents" or "Coherent Agents," depending on what that sentence was talking about.
Other options include "Reflective Agent" or "Deliberate Agent." (I think once you deliberate on what sort of agent you want to be, you often become more coherent and robust, although not necessarily)
Edit" Undid the name change, seemed like it was just a worse title.
2. Spelling out what exactly the strategy entails
Originally the post was vaguely gesturing at an idea. It seemed good to try to pin that idea down more clearly. This does mean that, by getting "more specific," it might also be more "wrong." I've run the new draft by a few people and I'm fairly happy with the new breakdown.
But, if people think that's carving the concept at the wrong joints, let me know.
3. "Why is this important?"
Zvi's review noted that the post didn't really argue why becoming a robust agent was so important.
Originally, I viewed the post as simply illustrating an idea rather than arguing for it, and... maybe that was fine. I think it would have been fine to save the "why" for a followup post.
But I reflected a bit on why it seemed important to me, and ultimately thought that it was worth spelling it out more explicitly here. I'm not sure my reasons are the same as Zvi's, or others. But, I think they are fairly defensible reasons. Interested if anyone has significantly different reasons, or thinks that the reasons I listed don't make sense.
I'm leaning towards reverting the title to just "being a robust agent", since the new title is fairly clunky, and someone gave me private feedback that it felt less like a clear handle for a concept. [edit: have done so]
So the most important point of this post is to lay out the Robust Agent paradigm explicitly, with a clear term I could quickly refer to in future discussions, to check “is this something we’re on the same page about, or not?” before continuing on to discuss more complicated ideas.
Have you found that this post (and the concept handle) have been useful for this purpose? Have you found that you do in fact reference it as a litmus test, and steer conversations according to the response others make?
It's definitely been useful with people I've collaborated closely with. (I find the post a useful background while working with the LW team, for example)
I haven't had a strong sense of whether it's proven beneficial to other people. I have a vague sense that the sort of people who inspired this post mostly take this as background that isn't very interesting or something. Possibly with a slightly different frame on how everything hangs together.
It sounds like this post functions (and perhaps was intended) primarily as a filter for people who are already good at agency, and secondarily as a guide for newbies?
If so, that seems like a key point - surrounding oneself with other robust (allied) agents helps develop or support one's own agency.
I actually think it works better as a guide for newbies than as a filter. The people I want to filter on, I typically am able to have long protracted conversations about agency with them anyway, and this blog post isn't the primary way that they get filtered.
I feel like perhaps the name "Adaptive Agent" captures a large element of what you want: an agent capable of adapting to shifting circumstances.
I like the edits!
One thing I think might be worth doing is linking to the post on Realism about Rationality, and explicitly listing it as a potential crux for this post.
I'm pretty onboard theoretically with the idea of being a robust agent, but I don't actually endorse it as a goal because I tend to be a rationality anti-realist.
I actually don't consider Realism about Rationality cruxy for this (I tried to lay out my own cruxes in this version). Part of what seemed important here is that I think Coherent Agency is only useful in some cases for some people, and I wanted to be clear about when that was.
I think each of the individual properties (gears-level understanding, coherence, game-theoretic soundness) is just sort of obviously useful in some ways. There are particular failure modes to get trapped in if you've only made some incremental progress, but generally I think you can make incremental improvements in each domain and get improvements-in-life-outcome.
I do think that the sort of person who naturally gravitates towards this probably has something like 'rationality realism' going on, but I suspect it's not cruxy, and in particular I suspect shouldn't be cruxy for people who aren't naturally oriented that way.
Some people are aspiring directly to be a fully coherent, legible, sound agent. And that might be possible or desirable, and it might be possible to reach a variation of that that is cleanly mathematically describable. But I don't think that has to be true for the concept to be useful.
generally I think you can make incremental improvements in each domain and get improvements-in-life-outcome.
To me this implies some level on the continuum of realism about rationality. For instance, I often think that to make improvements on life outcomes I have to purposefully go off of Pareto improvements in these domains, and sometimes sacrifice them, because I don't think my brain runs that code natively, and sometimes efficient native code is in direct opposition to naive rationality.
Relatedly:
I've been watching the discussion on Realism About Rationality with some interest and surprise. I had thought of 'something like realism about rationality' as more cruxy for alignment work, because the inspectability of the AI matters a lot more than the inspectability of your own mind – mostly because you're going to scale up the AI a lot more than your own mind is likely to scale up. The amount of disagreement that's come out more recently about that has been interesting.
Some of the people who seem most invested in the Coherent Agency thing are specifically trying to operate on cosmic scales (i.e. part of their goal is to capture value in other universes and simulations, and to be the sort of person you could safely upload).
Upon reflection though, I guess it's not surprising that people don't consider realism "cruxy" for alignment, and also not "cruxy" for personal agency (i.e. upon reflection, I think it's more like an aesthetic input, than a crux. It's not necessary for agency to be mathematically simple or formalized, for incremental legibility and coherence to be useful for avoiding wasted motion)
Bumping this up to two nominations not because I think it needs a review, but because I like it and it captures an important insight that I've not seen written up like this elsewhere.
In my own life, these insights have led me to do/considering doing things like:
If there isn't enough incentive for others to cooperate with you, don't get upset at them if they defect (or "hit the neutral button"). BUT maybe try to create a coordination mechanism so that there is enough incentive.
It seems like "getting upset" is often a pretty effective way of creating exactly the kind of incentive that leads to cooperation. I am reminded of the recent discussion on investing in the commons, where introducing a way to punish defectors greatly increased total wealth. Generalizing that to more everyday scenarios, it seems that being angry at someone is often (though definitely not always, and probably not in the majority of cases) a way to align incentives better.
(Note: I am not arguing in favor of people getting more angry more often, just saying that not getting angry doesn't seem like a core aspect of the "robust agent" concept that Raemon is trying to point at here)
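To make the commons point concrete, here is a toy simulation of a public goods game with and without peer punishment. This is my own illustrative sketch with made-up parameters, not the actual setup from the linked "investing in the commons" discussion; agents imitate whichever strategies earned more last round, and adding a punishment option flips the dynamics toward contributing.

```python
# Toy public goods game with and without peer punishment.
# Illustrative sketch only; all parameter values are made up.
import random

ENDOWMENT = 20       # tokens each player starts with per round
MULTIPLIER = 1.6     # pooled contributions are multiplied by this, then split
FINE = 9             # cost imposed on a defector by each punisher
PUNISH_COST = 3      # cost a punisher pays per defector punished

def play_round(strategies, punishment_enabled):
    """Play one round; return each player's payoff. Strategy True = contribute."""
    n = len(strategies)
    pot = sum(ENDOWMENT for s in strategies if s) * MULTIPLIER
    share = pot / n
    payoffs = []
    for s in strategies:
        kept = 0 if s else ENDOWMENT   # defectors keep their endowment
        payoffs.append(kept + share)
    if punishment_enabled:
        defectors = [i for i, s in enumerate(strategies) if not s]
        punishers = [i for i, s in enumerate(strategies) if s]
        for d in defectors:
            payoffs[d] -= FINE * len(punishers)
        for p in punishers:
            payoffs[p] -= PUNISH_COST * len(defectors)
    return payoffs

def simulate(punishment_enabled, n_players=10, n_rounds=50, seed=0):
    """Payoff-biased imitation dynamics; returns total wealth produced."""
    rng = random.Random(seed)
    strategies = [rng.random() < 0.5 for _ in range(n_players)]
    total = 0.0
    for _ in range(n_rounds):
        payoffs = play_round(strategies, punishment_enabled)
        total += sum(payoffs)
        # Each player copies a random other player's strategy if it paid more.
        new = list(strategies)
        for i in range(n_players):
            j = rng.randrange(n_players)
            if payoffs[j] > payoffs[i]:
                new[i] = strategies[j]
        strategies = new
    return total

if __name__ == "__main__":
    print("total wealth without punishment:", round(simulate(False)))
    print("total wealth with punishment:   ", round(simulate(True)))
```

With punishment disabled, free-riders out-earn contributors and cooperation collapses; with it enabled, contributors out-earn defectors, the population converges on contributing, and total wealth ends up substantially higher in this toy setup.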
Ah. The thing I was trying to point at here was the "Be Nice, At Least Until You Can Coordinate Meanness" thing.
The world is full of people who get upset at you for not living up to the norms they prefer. There are, in fact, so many people who will get upset for so many contradictory norms that it just doesn't make much sense to try to live up to them all, and you shouldn't be that surprised that it doesn't work.
The motivating examples were something like "Bob gets upset at people for doing thing X. A little while later, people are still doing thing X. Bob gets upset again. Repeat a couple times. Eventually it should (according to me) become clear that getting upset isn't having the desired effect, or at most is producing the effect of "superficially avoid behavior X when Bob is around". And meanwhile, getting upset is sort of emotionally exhausting and the cost doesn't seem worth it."
I do agree that "get upset" (or more accurately "perform upset-ness") works reasonably well as a localized strategy, and can scale up a bit if you can rally more people to get upset on your behalf. But the post was motivated by people who seemed to get upset... unreflectively?
(I updated the wording a bit but am not quite happy with it. I do think the underlying point was fairly core to the robust agent thing: you want policies for achieving your goals that actually work. "Getting upset in situation X" might be a good policy, but if you're enacting it as an adaptation-executor rather than as a considered policy, it may not actually be adaptive in your circumstances)
Eventually it should (according to me) become clear that getting upset isn't having the desired effect, or at most is producing the effect of "superficially avoid behavior X when Bob is around".
Or "avoid Bob", "drop Bob as a friend", "leave Bob out of anything new", etc. What, if anything, becomes clear to Bob or to those he gets angry with is very underdetermined.
As you would expect from someone who was one of the inspirations for the post, I strongly approve of the insight/advice contained herein. I also agree with the previous review that there is not a known better write-up of this concept. I like that this gets the thing out there compactly.
Where I am disappointed is that this does not feel like it gets across the motivation behind this or why it is so important - I neither read this and think 'yes that explains why I care about this so much' nor 'I expect that this would move the needle much on people's robustness as agents going forward if they read this.'
So I guess the takeaway for me looking back is, good first attempt and I wouldn't mind including it in the final list, but someone needs to try again?
It is worth noting that Jacob did *exactly* the adjustments that I would hope would result from this post if it worked as intended, so perhaps it is better than I give it credit for? Would be curious if anyone else had similar things to report.
I'm writing my self-review for this post, and in the process attempting to more clearly define what I mean by "Robust Agent" (possibly finding a better term for it)
The concept here is pointing at four points:
"Robustness" mostly refers to the third and fourth points. It's possible the core strategy might actually make more sense to call "Deliberate Agency". The core thing is that you're deciding on purpose what sort of agent to be. If the environment wasn't going to change, you wouldn't care about being robust.
Or maybe, "Robust Agency" makes sense as a thing to call one overall cluster of strategies, but it's a subset of "Deliberate Agency."
Or maybe, "Robust Agency" makes sense as a thing to call one overall cluster of strategies, but it's a subset of "Deliberate Agency."
Where might "Robust Agency" not overlap with "Deliberate Agency"?
Robust Agency is a subset of Deliberate Agency (so it always overlaps in that direction).
But you might decide, deliberately, to always ‘just copy what your neighbors are doing and not think too hard about it’, or other strategies that don’t match the attributes I listed for coherent/robust agency. (noting again that those attributes are intended to be illustrative rather than precisely defined criteria)
I find the classification of the elements of robust agency to be helpful, thanks for the write up and the recent edit.
I have some issues with Coherence and Consistency:
First, I'm not sure what you mean by that, so I'll take my best guess, which in its idealized form is something like: Coherence is being free of self-contradictions, and Consistency is having the tools to commit oneself to future actions. This is going by the last paragraph of that section:
There are benefits to reliably being able to make trades with your future-self, and with other agents. This is easier if your preferences aren’t contradictory, and easier if your preferences are either consistent over time, or at least predictable over time.
Second, the only case made for Coherence is that coherence helps you make trades with your future self. My reasons for it are more strongly related to avoiding compartmentalization and solving confusions, and making clever choices in real time given my limited rationality.
Similarly, I do not view trades with future self as the most important reason for Consistency. It seems that the main motivator here for me is some sort of trade between various parts of me. Or more accurately, hacking away at my motivation schemes and conscious focus, so that some parts of me will have more votes than others.
Third, there are other mechanisms for Consistency. Accountability is a major one. Also, reducing noise in the environment externally and building actual external constraints can be helpful.
Fourth, Coherence can be generalized to a skill that allows you to use your gears-level understanding of yourself and your agency to update your gears to whatever would be most useful. This makes me wonder if the scope here is too large, and whether gears-level understanding and deliberate agency aren't as related to the main points. These may all help one to be trustworthy, in that one's reasoning can be judged to be adequate - including by oneself - which is the main thing I'm taking away from here.
Fifth (sorta), I have reread the last section, and I think I now understand that your main motivation for Coherence and Consistency is that conversations between rationalists can be made much more effective, in that they can more easily understand each other's point of view. This I view as related to Game-Theoretic Soundness, more than to the internal benefits of Coherence and Consistency, which are probably more meaningful overall.
I definitely did not intend to make either an airtight or exhaustive case here. I think coherence and consistency are good for a number of reasons, and I included the ones I was most confident in, and felt like I could explain quickly and easily. (The section was more illustrative than comprehensive)
This response will not lay out the comprehensive case, but will try to give my current thoughts on some specific questions. (I feel a desire to stress that I still don't consider myself an expert or even an especially competent amateur on this topic)
Second, the only case made for Coherence is that coherence helps you make trades with your future self
That's actually not what I was going for – coherence can be relevant in the moment (if I had to pick, my first guess is that incoherence is more costly in the moment and inconsistency is more costly over time, although I'm not sure I was drawing a strong distinction between them)
If you have multiple goals that are at odds, this can be bad in the immediate moment: instead of getting to focus on one thing, you have to divide your attention (unnecessarily) between multiple things that are at odds. This can be stressful, it can involve cognitive dissonance which makes it harder to think, and it involves wasted effort.
This post has helped me understand quite a bit the mindset of a certain subset of rationalists, and being able to point to it and my disagreements with it has been quite helpful in finding cruxes with disagreements.
Seems like you are trying to elaborate on Eliezer's maxim Rationality is Systematized Winning. Some of what you mentioned implies shedding any kind of ideology, though sometimes wearing a credible mask of having one. Also being smarter than most people around you, both intellectually and emotionally. Of course, if you are already one of those people, then you don't need rationality, because, in all likelihood, you have already succeeded in what you set out to do.
Hmm.
I think the thing I'm gesturing at here is related but different to the systemized winning thing.
Some distinctions that I think make sense. (But would defer to people who seem further ahead in this path than I)
When you look at the Meta-Honesty post... I don't think the average person will find it a particularly valuable tool for achieving their goals. But I expect there to be a class of person who actually needs it as a tool to figure out how to trust people in domains where it's often necessary to hide or obfuscate information.
Whether you want your decision theory robust enough that Omega simulating you will give you a million dollars depends a lot on whether you expect Omega to actually be simulating you and making that decision. I know at least some people who are actually arranging their life with that sort of concern in mind.
I do think there's an alternate frame where you just say "no, rationality is specifically about being a robust agent. There are other ways to be effective, but rationality is the particular way of being effective where you try to have cognitive patterns with good epistemology and robust decision theory."
This is in tension with the "rationalists should win" thing. Shrug.
I think it's important to have at least one concept that is "anyone with goals should ultimately be trying to solve them the best way possible", and at least one concept that is "you might consider specifically studying cognitive patterns and policies and a cluster of related things, as a strategy to pursue particular goals."
I don't think this is quite the same thing as instrumental rationality (although it's tightly entwined). If your goals are simple and well-understood, and you're interfacing in a social domain with clear rules, the most instrumentally rational thing might be to not overthink it and follow common wisdom.
But it's particularly important if you want to coordinate with other agents, over the long term. Especially on ambitious, complicated projects in novel domains.
On my initial read, I read this as saying "this is the right thing for some people, even when it isn't instrumentally rational" (?!). But
I think it's important to have at least one concept that is "anyone with goals should ultimately be trying to solve them the best way possible", and at least one concept that is "you might consider specifically studying cognitive patterns and policies and a cluster of related things, as a strategy to pursue particular goals."
makes me think this isn't what you meant. Maybe clarify the OP?
I was meaning to say "becoming a robust agent may be the instrumentally rational thing for some people in some situation. For other people in other situations, it may not be helpful."
I don't know that "instrumental rationality" is that well defined, and there might be some people who would claim that "instrumental rationality" and what I (here) am calling "being a robust agent" are the same thing. I disagree with that frame, but it's at least a cogent frame.
You might define "instrumental rationality" as "doing whatever thing is best for you according to your values", or you might use it to mean "using an understanding of, say, probability theory and game theory and cognitive science to improve your decision making". I think it makes more sense to define it the first way, but I think some people might disagree with that.
If you define it the second way, then for some people – at least, people who aren't that smart or good at probability/game-theory/cog-science – "the instrumentally rational thing" might not be "the best thing."
I'm actually somewhat confused about which definition Eliezer intended. He has a few posts (and HPMOR commentary) arguing that "the rational thing" just means "the best thing". But he also notes that it makes sense to use the word "rationality" specifically when we're talking about understanding cognitive algorithms.
Not sure whether that helped. (Holding off on updating the post till I've figured out what the confusion here is)
I define it the first way, and don't see the case for the second way. Analogously, for a while, Bayesian reasoning was our best guess of what the epistemic Way might look like. But then we find out about logical induction, and that seems to tell us a little more about what to do when you're embedded. So, we now see it would have been a mistake to define "epistemic rationality" as "adhering to the dictates of probability theory as best as possible".
I think that Eliezer's other usage of "instrumental rationality" points to fields of study for theoretical underpinning of effective action.
(not sure if this was clear, but I don't feel strongly about which definition to use, I just wanted to disambiguate between definitions people might have been using)
I think that Eliezer's other usage of "instrumental rationality" points to fields of study for theoretical underpinning of effective action.
This sounds right-ish (i.e. this sounds like something he might have meant). When I said "use probability and game theory and stuff" I didn't mean "be a slave to whatever tools we happen to use right now"; I meant them sort of as examples of "things you might use if you were trying to base your decisions and actions off of sound theoretical underpinnings."
So I guess the thing I'm still unclear on (people's common usage of words): Do most LWers think it is reasonable to call something "instrumentally rational" if you just sorta went with your gut without ever doing any kind of reflection (assuming your gut turned out to be trustworthy)?
Or are things only instrumentally rational if you had theoretical underpinnings? (Your definition says "no", which seems fine. But it might leave you with an awkward distinction between "instrumentally rational decisions" and "decisions rooted in instrumental rationality.")
I'm still unsure if this is dissolving confusion, or if the original post still seems like it needs editing.
Your definition says "no", which seems fine. But it might leave you with an awkward distinction between "instrumentally rational decisions" and "decisions rooted in instrumental rationality."
My definition was the first, which is "instrumental rationality = acting so you win". So, wouldn't it say that following your gut was instrumentally rational? At least, if it's a great idea in expectation given what you knew - I wouldn't say lottery winners were instrumentally rational.
I guess the hangup is in pinning down "when things are actually good ideas in expectation", given that it's harder to know that without either lots of experience or clear theoretical underpinnings.
I think one of the things I was aiming for with Being a Robust Agent is "you set up the longterm goal of having your policies and actions have knowably good outcomes, which locally might be a setback for how capable you are, but allows you to reliably achieve longer term goals."
By default, humans are a kludgy bundle of impulses. But we have the ability to reflect upon our decision making, and the implications thereof, and derive better overall policies. You might want to become a more robust, coherent agent – in particular if you're operating in an unfamiliar domain, where common wisdom can't guide you.