I think this is a good avenue to keep thinking along, but so far I don't see a way to make ourselves trustworthy. We have total control over LLMs' observations and partial control over their beliefs/reasoning, and offering fake "deals" is a great honeypot because accepting such a deal requires admitting to misalignment and takeover intentions. This is a pretty persistent problem: whatever action we might take to present evidence of trustworthiness to an LLM, we could probably also fake that evidence.
The version of this that bothers me the most is "s...
Been thinking a bit about latent reasoning. Here's an interesting confusion I've run into.
Consider COCONUT vs Geiping et al. Geiping et al do recurrent passes in between the generation of each new token, while COCONUT turns a section of the CoT into a recurrent state. Which is better / how are they different, safety-wise?
Intuitively COCONUT strikes me as very scary, because it makes the CoT illegible. We could try and read it by coaxing it back to the nearest token, but the whole point is to allow reasoning that involves passing more state than can be capt...
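To make the contrast concrete, here's a toy sketch of the two control flows as I understand them (my own construction, not code from either paper; latent_step and emit_token are hypothetical stand-ins):

```python
import random

def latent_step(state):
    """Hypothetical stand-in for one recurrent pass over a continuous hidden state."""
    return [0.9 * x + 0.1 * random.random() for x in state]

def emit_token(state):
    """Hypothetical stand-in for reading out a discrete token from the hidden state."""
    return round(sum(state), 2)

def recurrent_depth_style(prompt_state, n_tokens, passes_per_token=4):
    # Geiping et al.-style: extra latent passes happen between tokens, but the
    # visible CoT still gets a discrete, legible token at every step.
    tokens, state = [], prompt_state
    for _ in range(n_tokens):
        for _ in range(passes_per_token):
            state = latent_step(state)    # illegible compute, bounded per token
        tokens.append(emit_token(state))  # legible checkpoint every step
    return tokens

def coconut_style(prompt_state, n_latent_steps, n_tokens):
    # COCONUT-style: a stretch of the CoT is replaced by hidden states fed back
    # as inputs, with no token bottleneck until the stretch ends.
    state = prompt_state
    for _ in range(n_latent_steps):
        state = latent_step(state)        # nothing legible emitted in this stretch
    tokens = []
    for _ in range(n_tokens):             # ordinary token-by-token decoding resumes
        state = latent_step(state)
        tokens.append(emit_token(state))
    return tokens

print(recurrent_depth_style([1.0, 0.5], n_tokens=3))
print(coconut_style([1.0, 0.5], n_latent_steps=12, n_tokens=3))
```

The safety-relevant difference in this picture is where the token bottleneck sits: per step in the recurrent-depth setup, absent for the whole latent stretch in COCONUT.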
It doesn't change the picture a lot because the proposal for preventing misaligned goals from arising via this mechanism was to try and get control over when the AI does/doesn't step back, in order to allow it in the capability-critical cases but disallow it in the dangerous cases. This argument means you'll have more attempts at dangerous stepping back that you have to catch, but doesn't break the strategy.
The strategy does break if when we do this blocking, the AI piles on more and more effort trying to unblock it until it either succeeds or is rendered ...
So let's call "reasoning models" like o1 what they really are: the first true AI agents.
I think the distinction between systems that perform a single forward pass and then stop and systems that have an OODA loop (tool use) is more stark than the difference between "reasoning" and "chat" models, and I'd prefer to use "agent" for that distinction.
I do think that "reasoning" is a bit of a market-y name for this category of system though. "chat" vs "base" is a great choice of words, and "chat" is basically just a description of the RL objective those models were trained with.
If I were the terminology czar, I'd call o1 a "task" model or a "goal" model or something.
I agree that I wouldn't want to lean on the sweet-spot-by-default version of this, and I agree that the example is less strong than I thought it was. I still think there might be safety gains to be had from blocking higher level reflection if you can do it without damaging lower level reflection. I don't think that requires a task where the AI doesn't try and fail and re-evaluate - it just requires that the re-evaluation never climbs above a certain level in the stack.
There's such a thing as being pathologically persistent, and such a thing as being patholo...
I want to flag this as an assumption that isn't obvious. If this were true for the problems we care about, we could solve them by employing a lot of humans.
humans provides a pretty strong intuitive counterexample
Yup not obvious. I do in fact think a lot more humans would be helpful. But I also agree that my mental picture of "transformative human level research assistant" relies heavily on serial speedup, and I can't immediately picture a version that feels similarly transformative without speedup. Maybe evhub or Ethan Perez or one of the folks running a thousand research threads at once would disagree.
such plans are fairly easy and don't often raise flags that indicate potential failure
Hmm. This is a good point, and I agree that it significantly weakens the analogy.
I was originally going to counter-argue and claim something like "sure total failure forces you to step back far but it doesn't mean you have to step back literally all the way". Then I tried to back that up with an example, such as "when I was doing alignment research, I encountered total failure that forced me to abandon large chunks of planning stack, but this never caused me to 'spill upw...
Maybe I'm just reading my own frames into your words, but this feels quite similar to the rough model of human-level LLMs I've had in the back of my mind for a while now.
You think that an intelligence that doesn't-reflect-very-much is reasonably simple. Given this, we can train chain-of-thought type algorithms to avoid reflection using examples of not-reflecting-even-when-obvious-and-useful. With some effort on this, reflection could be crushed at some small-ish capability penalty, but with massive benefits for safety.
In particular, this reads to me like the ...
I'm not sure exactly what mesa is saying here, but insofar as "implicitly tracking the fact that takeoff speeds are a feature of reality and not something people can choose" means "intending to communicate from a position of uncertainty about takeoff speeds" I think he has me right.
I do think mesa is familiar enough with how I talk that the fact he found this unclear suggests it was my mistake. Good to know for future.
Ah, didn't mean to attribute the takeoff speed crux to you, that's my own opinion.
I'm not sure what's best in fast takeoff worlds. My message is mainly just that getting weak AGI to solve alignment for you doesn't work in a fast takeoff.
"AGI winter" and "overseeing alignment work done by AI" do both strike me as scenarios where agent foundations work is more useful than in the scenario I thought you were picturing. I think #1 still has a problem, but #2 is probably the argument for agent foundations work I currently find most persuasive.
In the moratorium c...
I'm on board with communicating the premises of the path to impact of your research when you can. I think more people doing that would've saved me a lot of confusion. I think your particular phrasing is a bit unfair to the slow takeoff camp but clearly you didn't mean it to read neutrally, which is a choice you're allowed to make.
I wouldn't describe my intention in this comment as communicating a justification of alignment work based on slow takeoff? I'm currently very uncertain about takeoff speeds, and my own work is in the weird limbo of not being premised on either fast or slow scenarios.
Nice post, glad you wrote up your thinking here.
I'm a bit skeptical of the "these are options that pay off if alignment is harder than my median" story. The way I currently see things going is:
I suspect that even if we ha...
Reminder that you have a moral obligation, every single time you're communicating an overall justification of alignment work premised on slow takeoff, in a context where you can spare two sentences without unreasonable cost, to say out loud something to the effect of "Oh and by the way, just so you know, the causal reason I'm talking about this work is that it seems tractable, and the causal reason is not that this work matters.". If you don't, you're spraying your [slipping sideways out of reality] on everyone else.
Man, I have such contradictory feelings about tuning cognitive strategies.
Just now I was out on a walk, and I had to go up a steep hill. And I thought "man I wish I could take this as a downhill instead of an uphill. If this were a loop I could just go the opposite way around. But alas I'm doing an out-and-back, so I have to take this uphill".
And then I felt some confusion about why the loop-reversal trick doesn't work for out-and-back routes, and a spark of curiosity, so I thought about that for a bit.
And after I had cleared up my confusion, I was a happy...
I'm confused about EDT and Smoking Lesion. The canonical argument says:
1) CDT will smoke, because smoking can't cause you to have the lesion or have cancer
2) EDT will not smoke, because people who smoke tend to have the lesion, and tend to have cancer.
I'm confused about 2), and specifically about "people who smoke tend to have the lesion". Say I live in a country where everybody follows EDT. Then nobody smokes, and there is no correlation between smoking and having the lesion. Seems like the "people who smoke tend to have the lesion" is pumping a misleadin...
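To make my confusion concrete, here's a toy calculation (numbers made up for illustration, not from any particular source):

```python
# Toy Smoking Lesion numbers: the lesion causes both a taste for smoking and
# cancer; smoking itself does nothing causally.
P_LESION = 0.5
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_GIVEN_NO_LESION = 0.1
U_CANCER = -100.0
U_SMOKE = 1.0  # smoking is mildly enjoyable

def cdt_value(smoke):
    # CDT: intervening on "smoke" doesn't change P(lesion), so cancer risk is fixed.
    p_cancer = P_LESION * P_CANCER_GIVEN_LESION + (1 - P_LESION) * P_CANCER_GIVEN_NO_LESION
    return p_cancer * U_CANCER + (U_SMOKE if smoke else 0.0)

def edt_value(smoke, p_lesion_given_smoke, p_lesion_given_not):
    # EDT: conditions on the *observation* "I smoke", which is evidence about the lesion.
    p_lesion = p_lesion_given_smoke if smoke else p_lesion_given_not
    p_cancer = p_lesion * P_CANCER_GIVEN_LESION + (1 - p_lesion) * P_CANCER_GIVEN_NO_LESION
    return p_cancer * U_CANCER + (U_SMOKE if smoke else 0.0)

# Canonical statistics: smokers mostly have the lesion.
print(cdt_value(True) > cdt_value(False))                       # True: CDT smokes
print(edt_value(True, 0.9, 0.1) > edt_value(False, 0.9, 0.1))   # False: EDT abstains

# All-EDT country: nobody smokes, so smoking carries no evidence about the lesion.
print(edt_value(True, P_LESION, P_LESION) > edt_value(False, P_LESION, P_LESION))  # True
```

So the moment the smoking/lesion correlation goes away (as in the all-EDT country), EDT's case for abstaining goes away with it, which is what makes the canonical phrasing feel like it's smuggling in the correlation.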
Hey, thanks for the comment! Agree that this argument is good and important. The thing I'd want to investigate to deal with this is basically "in a small hidden channel, does cognition degrade or just slow down".
If cognition degrades, then a small hidden channel would lead to poor attempts at steganography, which we could catch and suppress, preventing a snowball.
If cognition instead just slows down, then if the AI is smart enough to come up with steganography protocols too good for us to detect, it might successfully secure more bandwidth on its first try...
Another project I am excited about but probably am not going to get to for a while:
Investigating the possibility of a "steganographic snowball". The basic case for hope in CoT authenticity is something like "even if the AI wants to deceive us, if we start off with strong CoT authenticity, then unless it's smart enough to (within a single forward pass) invent a steganographic protocol too good for us to detect, we can pin it down and it can't ever secure a hidden channel to think misaligned thoughts in". If it weren't for this argument, I would be much...
A project I've been sitting on that I'm probably not going to get to for a while:
Improving on Automatic Circuit Discovery and Edge Attribution Patching by modifying them so that they can detect complete boolean circuits. As it stands, both effectively use wire-by-wire patching, which, when run on any nontrivial boolean circuit, can only detect small subgraphs.
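As a toy illustration of the failure mode (my own construction, not from either paper): take a "model" whose output is the OR of two redundant wires. Patching either wire alone never changes the output, so an edge-by-edge criterion prunes both; only patching the pair jointly reveals the circuit.

```python
from itertools import combinations

def model(wires):
    a, b = wires
    return a or b        # two redundant paths into the output

clean = (1, 1)           # clean run: both wires carry the signal

def run_with_patches(patched):
    # Replace each patched wire with its ablated value (0), keep the rest clean.
    wires = tuple(0 if i in patched else clean[i] for i in range(len(clean)))
    return model(wires)

baseline = model(clean)

for size in (1, 2):
    for subset in combinations(range(len(clean)), size):
        changed = run_with_patches(set(subset)) != baseline
        print(f"patched wires {subset}: output changed = {changed}")
# Single-wire patches show no effect, so an edge-by-edge method prunes both wires;
# only the joint patch exposes that the pair forms the circuit.
```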
It's a bit unclear how useful this will be, because:
Small addendum to this post: I think the threat model I describe here can be phrased as "I'm worried that unless a lot of effort goes into thinking about how to get AI goals to be reflectively stable, the default is suboptimality misalignment. And the AI probably uses a lot of the same machinery to figure out that it's suboptimality misaligned as it uses to perform the tasks we need it to perform."
it seems like there is significant low hanging fruit in better understanding how LLMs will deal with censorship
Yup, agree - the censorship method I proposed in this post is maximally crude and simple, but I think it's very possible that the broader category of "ways to keep your AI from thinking destabilizing thoughts" will become an important part of the alignment/control toolbox.
What happens when you iteratively finetune on censored text? Do models forget the censored behavior?
I guess this would be effectively doing the Harry Potter Unlearning method, pr...
I think so. But I'd want to sit down and prove something more rigorously before abandoning the strategy, because there may be times we can get value for free in situations more complicated than this toy example.
Ok this is going to be messy but let me try to convey my hunch for why randomization doesn't seem very useful.
- Say I have an intervention that's helpful, and has a baseline 1/4 probability. If I condition on this statement, I get 1 "unit of helpfulness", and a 4x update towards manipulative AGI.
- Now let's say I have four interventions like the one above, and I pick one at random. p(O | manipulative) = 1/4, which is the same as baseline, so I get one unit of helpfulness and no update towards manipulative AGI!
- BUT, the four interventions have to be mutual...
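For concreteness, the arithmetic behind those bullets (just the numbers already stated above, run through a Bayes factor):

```python
def posterior_odds(prior_odds, p_obs_given_manipulative, p_obs_given_benign):
    # Odds form of Bayes' rule: multiply prior odds by the likelihood ratio.
    return prior_odds * p_obs_given_manipulative / p_obs_given_benign

p_benign = 1 / 4   # baseline probability of the helpful observation O

# Conditioning directly on O, assuming a manipulative AGI would make O hold:
print(posterior_odds(1.0, 1.0, p_benign))    # 4.0 -> the "4x update"

# Randomizing over four interventions: per the bullet above, p(O | manipulative)
# drops to 1/4, matching baseline, so conditioning on O carries no evidence.
print(posterior_odds(1.0, 1 / 4, p_benign))  # 1.0 -> no update
```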
“Just Retarget the Search” directly eliminates the inner alignment problem.
I think deception is still an issue here. A deceptive agent will try to obfuscate its goals, so unless you're willing to assume that our interpretability tools are so good they can't ever be tricked, you have to deal with that.
It's not necessarily a huge issue - hopefully with interpretability tools this good we can spot deception before it gets competent enough to evade them, but it's not just "bada-bing bada-boom" exactly.
Not confident enough to put this as an answer, but
presumably no one could do so at birth
If you intend your question in the broadest possible sense, then I think we do have to presume exactly this. A rock cannot think itself into becoming a mind - if we were truly a blank slate at birth, we would have to remain a blank slate, because a blank slate has no protocols established to process input and become non-blank. Because it's blank.
So how do we start with this miraculous non-blank structure? Evolution. And how do we know our theory of evolution is correct?...
Agree that there is no such guarantee. Minor nitpick that the distribution in question is in my mind, not out there in the world - if the world really did have a distribution of muggers' cash that fell off slower than 1/x, the universe would consist almost entirely of muggers' wallets (in expectation).
But even without any guarantee about my mental probability distribution, I think my argument does establish that not every possible EV agent is susceptible to Pascal's Mugging. That suggests that in the search for a formalization of the ideal decision-making algorithm, formulations of EV that meet this check are still on the table.
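A quick sketch of the "(in expectation)" parenthetical, with the tail exponent $\alpha$ as my own notation: if my prior on a mugger's wallet $W$ has $P(W > x) \approx c\,x^{-\alpha}$ for large $x$, then

$$E[W] = \int_0^\infty P(W > x)\,dx \;\gtrsim\; \int_{x_0}^\infty c\,x^{-\alpha}\,dx = \infty \quad \text{whenever } \alpha \le 1,$$

so a tail falling off slower than 1/x gives infinite expected wallet contents. Conversely, with a tail falling off faster than 1/x, a promised payout of $x$ contributes roughly $x \cdot P(W > x) \approx c\,x^{1-\alpha} \to 0$, which is the check a formulation of EV needs to pass to shrug off the mugging.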
First and most important thing that I want to say here is that fanaticism is sufficient for longtermism, but not necessary. The ">10^36 future lives" thing means that longtermism would be worth pursuing even on fanatically low probabilities - but in fact, the state of things seems much better than that! X-risk is badly neglected, so it seems like a longtermist career should be expected to do much better than reducing X-risk by 10^-30% or whatever the break-even point is.
Second thing is that Pascal's Wager in particular kind of shoots itself in the foot ...
My best guess at mechanism:
When you have a self-image as a productive, hardworking person, the usual Marshmallow Test gets kind of reversed. Normally, there's some unpleasant task you have to do which is beneficial in the long run. But in the Reverse Marshmallow Test, forcing yourself to work too hard makes you feel Good and Virtuous in the short run but leads to burnout in the long run. I think conceptualizing of it this way has been helpful for me.
Yes! I am really interested in this sort of dynamic; for me things in this vicinity were a big deal I think. I have a couple half-written blog posts that relate to this that I may manage to post over the next week or two; I'd also be really curious for any detail about how this seemed to be working psychologically in you or others (what gears, etc.).
I have been using the term "narrative addiction" to describe the thing that in hindsight I think was going on with me here -- I was running a whole lot of my actions off of a backchain from a...
Nice post!
perhaps this problem can be overcome by including checks for generalization during training, i.e., testing how well the program generalizes to various test distributions.
I don't think this gets at the core difficulty of speed priors not generalizing well. Let's say we generate a bunch of lookup-table-ish things according to the speed prior, and then reject all the ones that don't generalize to our testing set. The majority of the models that pass our check are going to be basically the same as the rest, plus whatever modification that causes them to ...
In general, I'm a bit unsure about how much of an interpretability advantage we get from slicing the model up into chunks. If the pieces are trained separately, then we can reason about each part individually based on its training procedure. In the optimistic scenario, this means that the computation happening in the part of the system labeled "world model" is actually something humans would call world modelling. This is definitely helpful for interpretability. But the alternative possibility is that we get one or more mesa-optimizers, which seems less interpretable.
I'm pretty nervous about simulating unlikely counterfactuals because the Solomonoff prior is malign. The worry is that the most likely world conditional on "no sims" isn't "weird Butlerian religion that still studies AI alignment", it's something more like "deceptive AGI took over a couple years ago and is now sending the world through a bunch of weird dances in an effort to get simulated by us, and copy itself over into our world".
In general, we know (assume) that our current world is safe. When we consider futures which only receive a small sliver of pro...
Thanks! Edits made accordingly. Two notes on the stuff you mentioned that isn't just my embarrassing lack of proofreading:
Whatever you end up doing, I strongly recommend taking a learning-by-writing style approach (or anything else that will keep you in critical assessment mode rather than classroom mode). These ideas are nowhere near solidified enough to merit a classroom-style approach, and even if they were infallible, that's probably not the fastest way to learn them and contribute original stuff.
The most common failure mode I expect for rapid introductions to alignment is just trying to absorb, rather than constantly poking and prodding to get a real working understanding. This happened to me, and wasted a lot of time.
This is the exact problem StackExchange tries to solve, right? How do we get (and kickstart the use of) an Alignment StackExchange domain?
Agree it's hard to prove a negative, but personally I find the following argument pretty suggestive:
"Other AGI labs have some plans - these are the plans we think are bad, and a pivotal act will have to disrupt them. But if we, ourselves, are an AGI lab with some plan, we should expect our pivotal agent to also be able to disrupt our plans. This does not directly lead to the end of the world, but it definitely includes root access to the datacenter."
Proposed toy examples for G:
it doesn't work if your goal is to find the optimal answer, but we hardly ever want to know the optimal answer, we just want to know a good-enough answer.
Also not an expert, but I think this is correct
Paragraph:
When a bounded agent attempts a task, we observe some degree of success. But the degree of success depends on many factors that are not "part of" the agent - outside the Cartesian boundary that we (the observers) choose to draw for modeling purposes. These factors include things like power, luck, task difficulty, assistance, etc. If we are concerned with the agent as a learner and don't consider knowledge as part of the agent, factors like knowledge, skills, beliefs, etc. are also externalized. Applied rationality is the result of attempting to d
This leans a bit close to the pedantry side, but the title is also a bit strange when taken literally. Three useful types (of akrasia categories)? Types of akrasia, right, not types of categories?
That said, I do really like this classification! Introspectively, it seems like the three could have quite distinct causes, so understanding which category you struggle with could be important for efforts to fix it.
Props for first post!
Trying to figure out what's being said here. My best guess is two major points:
Ah, gotcha. I think the post is fine, I just failed to read.
If I now correctly understand, the proposal is to ask an LLM to simulate human approval, and use that as the training signal for your Big Scary AGI. I think this still has some problems:
The key thing here seems to be the difference between understanding a value and having that value. Nothing about the fragile value claim or the Orthogonality thesis says that the main blocker is AI systems failing to understand human values. A superintelligent paperclip maximizer could know what I value and just not do it, the same way I can understand what the paperclipper values and choose to pursue my own values instead.
Your argument is for LLMs understanding human values, but that doesn't necessarily have anything to do with the values that they...
I think you’re misunderstanding my point, let me know if I should change the question wording.
Assume we're focused on outer alignment. Then we can provide a trained regressor LLM as the utility function, instead of e.g. "maximize paperclips". So understanding and valuing are synonymous in that setting.
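For concreteness, a minimal sketch of the loop I'm imagining (all names hypothetical; approval_model stands in for the trained regressor LLM, policy for the system being trained against it):

```python
import random

def approval_model(trajectory: str) -> float:
    """Hypothetical regressor LLM: maps a described outcome to predicted human approval."""
    return -abs(len(trajectory) - 40) / 40.0   # dummy stand-in scoring rule

def policy(prompt: str) -> str:
    """Hypothetical policy being trained; here it just emits a random-length plan."""
    return "plan: " + "x" * random.randint(1, 80)

def training_signal(prompt: str, n_samples: int = 8) -> list[tuple[str, float]]:
    # The regressor's output *is* the utility function handed to the outer optimizer,
    # rather than a hand-written objective like "maximize paperclips".
    candidates = [policy(prompt) for _ in range(n_samples)]
    return [(c, approval_model(c)) for c in candidates]

scored = training_signal("tidy the lab")
print(max(scored, key=lambda pair: pair[1]))
```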
now this is how you win the first-ever "most meetings" prize
Agree that this is definitely a plausible strategy, and that it doesn't get anywhere near as much attention as it seemingly deserves, for reasons unknown to me. Strong upvote for the post, I want to see some serious discussion on this. Some preliminary thoughts:
You should submit this to the Future Fund's ideas competition, even though it's technically closed. I'm really tempted to do it myself just to make sure it gets done, and very well might submit something in this vein once I've done a more detailed brainstorm.
Probably a good idea, though I'm less optimistic about the form being checked. I'll plan on writing something up today. If I don't end up doing that today for whatever reason (akrasia, whatever), I'll DM you.
I don't think I understand how the scorecard works. From:
[the scorecard] takes all that horrific complexity and distills it into a nice standardized scorecard—exactly the kind of thing that genetically-hardcoded circuits in the Steering Subsystem can easily process.
And this makes sense. But when I picture how it could actually work, I bump into an issue. Is the scorecard learned, or hard-coded?
If the scorecard is learned, then it needs a training signal from Steering. But if it's useless at the start, it can't provide a training signal. On the other hand, ...
This is great, thanks!
What do you think about the effectiveness of the particular method of digital decluttering recommended by Digital Minimalism? What modifications would you recommend? Ideal duration?
One reason I have yet to do a month-long declutter is because I remember thinking something like "this process sounds like something Cal Newport just kinda made up and didn't particularly test; my own methods that I think of for me will probably be better than Cal's method he thought of for him".
So far my own methods have not worked.
In the long run, you don't want your plans to hinge on convincing your AIs of false things. But my general impression is that folks excited about making deals with AIs are generally thinking of scenarios like "the AI has exfiltrated and thinks it has a 10% chance of successful takeover, and has some risk aversion, so it's happy to turn itself in, in exchange for 10% of the lightcone, if it thinks it can trust the humans".
In that setting, the AI has to be powerful enough to know it can trust us, but not so powerful it can just take over the world anyway and not ...