1 min read

2

This is a special post for quick takes by Daniel Tan. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Daniel Tan's Shortform
189 comments, sorted by Click to highlight new comments since:
Some comments are truncated due to high volume. (⌘F to expand all)Change truncation settings

Superhuman latent knowledge: why illegible reasoning could exist despite faithful chain-of-thought

Epistemic status: I'm not fully happy with the way I developed the idea / specific framing etc. but I still think this makes a useful point

Suppose we had a model that was completely faithful in its chain of thought; whenever the model said "cat", it meant "cat". Basically, 'what you see is what you get'. 

Is this model still capable of illegible reasoning?

I will argue that yes, it is. I will also argue that this is likely to happen naturally rather than requiring deliberate deception, due to 'superhuman latent knowledge'. 

Reasoning as communication

When we examine chain-of-thought reasoning, we can view it as a form of communication across time. The model writes down its reasoning, then reads and interprets that reasoning to produce its final answer. 

Formally, we have the following components: 

  1. A question Q
  2. A message M (e.g. a reasoning trace)  
  3. An answer A
  4. An entity that maps Q  M, and M  A. 

Note that there are two instances of the entity here. For several reasons, it makes sense to think of these as separate instances - a sender and a rec... (read more)

4Daniel Tan
Key idea: Legibility is not well-defined in a vacuum. It only makes sense to talk about legibility w.r.t a specific observer (and their latent knowledge). Things that are legible from the model’s POV may not be legible to humans.  This means that, from a capabilities perspective, there is not much difference between “CoT reasoning not fully making sense to humans” and “CoT reasoning actively hides important information in a way that tries to deceive overseers”. 
1Daniel Tan
A toy model of "sender and receiver having the same latent knowledge which is unknown to overseer" might just be to give them this information in-context, c.f. Apollo scheming evals, or to finetune it in, c.f. OOCR 

I'm worried that it will be hard to govern inference-time compute scaling. 

My (rather uninformed) sense is that "AI governance" is mostly predicated on governing training and post-training compute, with the implicit assumption that scaling these will lead to AGI (and hence x-risk). 

However, the paradigm has shifted to scaling inference-time compute. And I think this will be much harder to effectively control, because 1) it's much cheaper to just run a ton of queries on a model as opposed to training a new one from scratch (so I expect more entities to be able to scale inference-time compute) and 2) inference can probably be done in a distributed way without requiring specialized hardware (so it's much harder to effectively detect / prevent). 

Tl;dr the old assumption of 'frontier AI models will be in the hands of a few big players where regulatory efforts can be centralized' doesn't seem true anymore. 

Are there good governance proposals for inference-time compute? 

[-][anonymous]244

I think it depends on whether or not the new paradigm is "training and inference" or "inference [on a substantially weaker/cheaper foundation model] is all you need." My impression so far is that it's more likely to be the former (but people should chime in).

If I were trying to have the most powerful model in 2027, it's not like I would stop scaling. I would still be interested in using a $1B+ training run to make a more powerful foundation model and then pouring a bunch of inference into that model.

But OK, suppose I need to pause after my $1B+ training run because I want to a bunch of safety research. And suppose there's an entity that has a $100M training run model and is pouring a bunch of inference into it. Does the new paradigm allow the $100M people to "catch up" to the $1B people through inference alone?

My impression is that the right answer here is "we don't know." So I'm inclined to think that it's still quite plausible that you'll have ~3-5 players at the frontier and that it might still be quite hard for players without a lot of capital to keep up. TBC I have a lot of uncertainty here. 

Are there good governance proposals for inference-time compute? 

So far, I ha... (read more)

5Josh You
By several reports, (e.g. here and here) OpenAI is throwing enormous amounts of training compute at o-series models. And if the new RL paradigm involves more decentralized training compute than the pretraining paradigm, that could lead to more consolidation into a few players, not less, because pretraining* is bottlenecked by the size of the largest cluster. E.g. OpenAI's biggest single compute cluster is similar in size to xAI's, even though OpenAI has access to much more compute overall. But if it's just about who has the most compute then the biggest players will win. *though pretraining will probably shift to distributed training eventually
5Bogdan Ionut Cirstea
I've had similar thoughts previously: https://www.lesswrong.com/posts/wr2SxQuRvcXeDBbNZ/bogdan-ionut-cirstea-s-shortform?commentId=rSDHH4emZsATe6ckF. 
3Nathan Helm-Burger
I think those governance proposals were worse than worthless anyway. They didn't take into account rapid algorithmic advancement in peak capabilities and in training and inference efficiency. If this helps the governance folks shake off some of their myopic hopium, so much the better. Related comment

[Proposal] Can we develop a general steering technique for nonlinear representations? A case study on modular addition

Steering vectors are a recent and increasingly popular alignment technique. They are based on the observation that many features are encoded as linear directions in activation space; hence, intervening within this 1-dimensional subspace is an effective method for controlling that feature. 

Can we extend this to nonlinear features? A simple example of a nonlinear feature is circular representations in modular arithmetic. Here, it's clear that a simple "steering vector" will not work. Nonetheless, as the authors show, it's possible to construct a nonlinear steering intervention that demonstrably influences the model to predict a different result. 

Problem: The construction of a steering intervention in the modular addition paper relies heavily on the a-priori knowledge that the underlying feature geometry is a circle. Ideally, we wouldn't need to fully elucidate this geometry in order for steering to be effective. 

Therefore, we want a procedure which learns a nonlinear steering intervention given only the model's activations and labels (e.g. the correct n... (read more)

3Bogdan Ionut Cirstea
You might be interested in works like Kernelized Concept Erasure, Representation Surgery: Theory and Practice of Affine Steering, Identifying Linear Relational Concepts in Large Language Models.  
2Daniel Tan
This is really interesting, thanks! As I understand, "affine steering" applies an affine map to the activations, and this is expressive enough to perform a "rotation" on the circle. David Chanin has told me before that LRC doesn't really work for steering vectors. Didn't grok kernelized concept erasure yet but will have another read.   Generally, I am quite excited to implement existing work on more general steering interventions and then check whether they can automatically learn to steer modular addition 

Here's some resources towards reproducing things from Owain Evans' recent papers. Most of them focus on introspection / out-of-context reasoning.  

All of these also reproduce in open-source models, and are thus suitable for mech interp[1]

Policy awareness[2]. Language models finetuned to have a specific 'policy' (e.g. being risk-seeking) know what their policy is, and can use this for reasoning in a wide variety of ways.

Policy execution[3]. Language models finetuned on descriptions of a policy (e.g. 'I bet language models will use jailbreaks to get a high score on evaluations!) will execute this policy[4]

Introspection. Language models finetuned to predict what they would do (e.g. 'Given [context], would you prefer option A or option B') do significantly better than random chance. They also beat stronger models finetuned on the same data, indicating they ca... (read more)

Some rough notes from Michael Aird's workshop on project selection in AI safety. 

Tl;dr how to do better projects? 

  • Backchain to identify projects.
  • Get early feedback, iterate quickly
  • Find a niche

On backchaining projects from theories of change

  • Identify a "variable of interest"  (e.g., the likelihood that big labs detect scheming).
  • Explain how this variable connects to end goals (e.g. AI safety).
  • Assess how projects affect this variable
  • Red-team these. Ask people to red team these. 

On seeking feedback, iteration. 

  • Be nimble. Empirical. Iterate. 80/20 things
  • Ask explicitly for negative feedback. People often hesitate to criticise, so make it socially acceptable to do so
  • Get high-quality feedback. Ask "the best person who still has time for you". 

On testing fit

  • Forward-chain from your skills, available opportunities, career goals.
  • "Speedrun" projects. Write papers with hypothetical data and decide whether they'd be interesting. If not then move on to something else.
  • Don't settle for "pretty good". Try to find something that feels "amazing" to do, e.g. because you're growing a lot / making a lot of progress. 

Other points

On developing a career

  • "T-shaped" model
... (read more)
4James Chua
Writing hypothetical paper abstracts has been a good quick way for me to figure out if things would be interesting.

"Just ask the LM about itself" seems like a weirdly effective way to understand language models' behaviour. 

There's lots of circumstantial evidence that LMs have some concept of self-identity. 

Some work has directly tested 'introspective' capabilities. 

... (read more)
5eggsyntax
Self-identity / self-modeling is increasingly seeming like an important and somewhat neglected area to me, and I'm tentatively planning on spending most of the second half of 2025 on it (and would focus on it more sooner if I didn't have other commitments). It seems to me like frontier models have an extremely rich self-model, which we only understand bits of. Better understanding, and learning to shape, that self-model seems like a promising path toward alignment. I agree that introspection is one valuable approach here, although I think we may need to decompose the concept. Introspection in humans seems like some combination of actual perception of mental internals (I currently dislike x), ability to self-predict based on past experience (in the past when faced with this choice I've chosen y), and various other phenomena like coming up with plausible but potentially false narratives. 'Introspection' in language models has mostly meant ability to self-predict, in the literature I've looked at. I have the unedited beginnings of some notes on approaching this topic, and would love to talk more with you and/or others about the topic. Thanks for this, some really good points and cites.
4the gears to ascension
Partially agreed. I've tested this a little personally; Claude successfully predicted their own success probability on some programming tasks, but was unable to report their own underlying token probabilities. The former tests weren't that good, the latter ones somewhat were okay, I asked Claude to say the same thing across 10 branches and then asked a separate thread of Claude, also downstream of the same context, to verbally predict the distribution.
3Daniel Tan
That's pretty interesting! I would guess that it's difficult to elicit introspection by default. Most of the papers where this is reported to work well involve fine-tuning the models. So maybe "willingness to self-report honestly" should be something we train models to do. 
2the gears to ascension
willingness seems likely to be understating it. a context where the capability is even part of the author context seems like a prereq. finetuning would produce that, with fewshot one has to figure out how to make it correlate. I'll try some more ideas.

shower thought: What if mech interp is already pretty good, and it turns out that the models themselves are just doing relatively uninterpretable things?

5Thomas Kwa
How would we know?
1Daniel Tan
I don’t know! Seems hard It’s hard because you need to disentangle ‘interpretation power of method’ from ‘whether the model has anything that can be interpreted’, without any ground truth signal in the latter. Basically you need to be very confident that the interp method is good in order to make this claim. One way you might be able to demonstrate this, is if you trained / designed toy models that you knew had some underlying interpretable structure, and showed that your interpretation methods work there. But it seems hard to construct the toy models in a realistic way while also ensuring it has the structure you want - if we could do this we wouldn’t even need interpretability. Edit: Another method might be to show that models get more and more “uninterpretable” as you train them on more data. Ie define some metric of interpretability, like “ratio of monosemantic to polysemantic MLP neurons”, and measure this over the course of training history. This exact instantiation of the metric is probably bad but something like this could work
2Jozdien
I would ask what the end-goal of interpretability is. Specifically, what explanations of our model's cognition do we want to get out of our interpretability methods? The mapping we want is from the model's cognition to our idea of what makes a model safe. "Uninterpretable" could imply that the way the models are doing something is too alien for us to really understand. I think that could be fine (though not great!), as long as we have answers to questions we care about (e.g. does it have goals, what are they, is it trying to deceive its overseers)[1]. To questions like those, "uninterpretable" doesn't seem as coherent to me. 1. ^ The "why" or maybe "what", instead of the "how".
1Daniel Tan
I agree, but my point was more of “how would we distinguish this scenario from the default assumption that the interp methods aren’t good enough yet”? How can we make a method-agnostic argument that the model is somehow interpretable? It’s possible there’s no way to do this, which bears thinking about
2Mateusz Bagiński
Something like "We have mapped out the possible human-understandable or algorithmically neat descriptions of the network's behavior sufficiently comprehensively and sampled from this space sufficiently comprehensively to know that the probability that there's a description of its behavior that is meaningfully shorter than the shortest one of the ones that we've found is at most ϵ.".
1Daniel Tan
Yeah, seems hard I’m not convinced that you can satisfy either of those “sufficiently comprehensively” such that you’d be comfortable arguing your model is not somehow interpretable
2Mateusz Bagiński
I'm not claiming it's feasible (within decades). That's just what a solution might look like.

Can frontier language models engage in collusion for steganography? Here is a write-up of a preliminary result along these lines, showing that Deepseek-v3 may be able to collude with other instances of itself to do steganography. And also that this steganography might be more subtle than we think. 

Epistemic status; highly uncertain (and I’m sorry if this ends up being overclaiming, but I’m very excited at the moment).  

7eggsyntax
A few thoughts: * I expect that you could nudge the model toward coming up with steganographic encodings that worked better. LLMs are mostly bad at spelling-related tasks because tokens. Maybe ask the encoding model to think about and list some steganographic approaches that it thinks it's especially likely to be able to decode, prior to doing the task? * Maybe something like 'think carefully step-by-step about what encodings you would be able to correctly decode if you saw them' would help also? * 'This is extremely incredible and I have no explanation how this occurred'. Seems likely to just be surprisal-based under the hood; 'I used a word here that I wouldn't expect myself to have used'. That seems like a strong strategy to the extent that its logit distribution differs from the monitoring model's. Maybe complicated by temperature, though? Seems like average number of false positives in decoding would increase steadily with temperature. * Out of curiosity, why Deepseek-v3 as opposed to eg Claude? * I vaguely think there's been at least a bit of research on 'just ask the model to do stego' but could totally be wrong. * 'Introspection is one reason why Deepseek-v3 might understand its own encoding scheme better than Claude.' By introspection you just mean 'the fact that models are better at predicting themselves than predicting other models' here, right? I think it would be clearer to just say that (I'm going to keep pushing back on 'introspection' as the term for that; I think we should reserve the term for uses that correspond more to the typical meaning of 'introspection').
3Daniel Tan
Thanks egg, great thoughts!  Yup this makes sense; although this still seems like a pretty incredible claim because it requires:  1. The encoder and decoder instance both know what kinds of words it uses (c.f. being able to articulate their goals), and this is robust enough to transmit information 2. The encoder and decoder both spontaneously settle on this as a schelling point for encoding the message.  I'm pretty skeptical that a human would be able to coordinate with a hypothetical clone of themselves this well, especially in the randomly-generated passphrase setting.  No particular reason beyond lack of time! If I do a more systematic version of this I will definitely try to replicate this in more models. 
5eggsyntax
LLMs do seem to be pretty good at picking self-consistent Schelling points, at least in simple cases -- I've got a writeup here of some casual experiments I did with GPT-4 last January on picking various Schelling points, eg a date, a number, a word (also some discussion of that in the MATS slack). I think it seems somewhat less surprising to me (maybe because of the Schelling point experiments), but I certainly wouldn't have been confident that it would do this well.   PS -- I'm loving the frequent shortform posts, I hope you continue! I try to do something somewhat similar with my research diary, but usually no one reads that and it's certainly not daily. I'm tempted to try doing the same thing :)
5Joey Yudelson
I wonder if r1 could reliably make the claim that the text had stego-text without being told—i.e. give it a lineup of ten pieces of short creative writing that another instance generated, 1 has steg. See if r1 can pick out the steg, then if various monitors (Sonnet, o1) can also pick it out
3joanv
What is the threat model you are probing for here?  The prompting approach suffers from LLMs' shallowness in strategic thinking; I would not be surprised if, prompting a language model several times to come up with an encoding scheme would result in a highly concentrated distribution.  > A saving grace may be that ‘detecting steg is happening at all’ seems easier than ‘recovering the plaintext message’ While i agree in principle, as in all red/blue team interactions, you may not want your blue team to be trigger happy; you still want to be able to extract some usefulness out of your scheming models, thus making this hope somewhat less robust. This would be a cool research question nonetheless. > Both of these [steganography inducing methods] involve finetuning the model Note that in Hidden in Plain Text, we also work with in-context RL, which is not explicitly finetuning the model, but just giving rewards to rollouts in context.
3Daniel Tan
As originally conceived, this is sort of like a “dangerous capability” eval for steg. The argument being that, if a model can do steg in this very toy setting where we’ve significantly nudged the model, it might do steg in more realistic scenarios.  There is also a claim here of the form “language models can decode things from their own encodings that other language models cannot, due to having access to privileged internal information”.  I didnt' make these points especially clear in the slide deck - thanks for the feedback!  Agreed on the rest of points! 
1julius vidal
>As originally conceived, this is sort of like a “dangerous capability” eval for steg. I am actually just about to start building something very similar to this for the AISI's evals bounty program.
2Daniel Tan
As of today (26 Jan) this isn't reproducing for me. It's possible that DeepSeek changed their model API to serve a distilled model. 

Collection of how-to guides

 

Some other guides I'd be interested in 

  • How to write a survey / position paper
  • "How to think better" - the Sequences probably do this, but I'd like to read a highly distilled 80/20 version of the Sequences
8ryan_greenblatt
Research as a Stochastic Decision Process by Jacob Steinhardt
5Rauno Arike
A few other research guides: * Tips for Empirical Alignment Research by Ethan Perez * Advice for Authors by Jacob Steinhardt * How to Write ML Papers by Sebastian Farquhar

My experience so far with writing all my notes in public

For the past ~2 weeks I've been writing LessWrong shortform comments every day instead of writing on private notes. Minimally the notes just capture an interesting question / observation, but often I explore the question / observation further and relate it to other things. On good days I have multiple such notes, or especially high-quality notes. 

I think this experience has been hugely positive, as it makes my thoughts more transparent and easier to share with others for feedback. The upvo... (read more)

1Yonatan Cale
I'm trying the same! You have my support
1Sheikh Abdur Raheem Ali
Thanks for sharing your notes Daniel!

My Seasonal Goals, Jul - Sep 2024

This post is an exercise in public accountability and harnessing positive peer pressure for self-motivation.   

By 1 October 2024, I am committing to have produced:

  • 1 complete project
  • 2 mini-projects
  • 3 project proposals
  • 4 long-form write-ups

Habits I am committing to that will support this:

  • Code for >=3h every day
  • Chat with a peer every day
  • Have a 30-minute meeting with a mentor figure every week
  • Reproduce a paper every week
  • Give a 5-minute lightning talk every week
2rtolsma
Would be cool if you had repos/notebooks to share for the paper reproductions!
1Daniel Tan
For sure! Working in public is going to be a big driver of these habits :) 

[Note] On SAE Feature Geometry

 

SAE feature directions are likely "special" rather than "random". 


Re: the last point above, this points to singular learning theory being an effective tool for analys... (read more)

Is refusal a result of deeply internalised values, or memorization? 

When we talk about doing alignment training on a language model, we often imagine the former scenario. Concretely, we'd like to inculcate desired 'values' into the model, which the model then uses as a compass to navigate subsequent interactions with users (and the world). 

But in practice current safety training techniques may be more like the latter, where the language model has simply learned "X is bad, don't do X" for several values of X. E.g. because the alignment training da... (read more)

4Daniel Tan
Hypothesis: 'Memorised' refusal is more easily jailbroken than 'generalised' refusal. If so that'd be a way we could test the insights generated by influence functions I need to consult some people on whether a notion of 'more easily jailbreak-able prompt' exists.  Edit: A simple heuristic might be the value of N in best-of-N jailbreaking. 

Prover-verifier games as an alternative to AI control. 

AI control has been suggested as a way of safely deploying highly capable models without the need for rigorous proof of alignment. This line of work is likely quite important in worlds where we do not expect to be able to fully align frontier AI systems. 

The formulation depends on having access to a weaker, untrusted model. Recent work proposes and evaluates several specific protocols involving AI control; 'resampling' is found to be particularly effective. (Aside: this is consistent with 'en... (read more)

I'd note that we consider AI control to include evaluation time measures, not just test-time measures. (For instance, we consider adversarial evaluation of an untrusted monitor in the original control paper.)

(We also can model training from a black-box control perspective by being conservative about inductive biases. For supervised fine-tuning (with an assumption of no gradient hacking), we can assume that training forces the AI to perform well on average on tasks indistinguishable from the training distribution (and training on new data can also be considered evaluation). For RL the picture is more complex due to exploration hacking.)

Thanks for bringing this up!

I need to get around to writing up the connection between PVGs and AI control. There's definitely a lot of overlap, but the formalisms are fairly different and the protocols can't always be directly transferred from one to the other.

There are a few AI control projects in the works that make the connection between AI control and PVGs more explicitly.

EDIT: Oh actually @Ansh Radhakrishnan and I already wrote up some stuff about this, see here.

Strategies in social deduction games

Here I will describe what I believe to be some basic and universally applicable strategies across social deduction games. 

IMO these are fairly easy for a beginner to pick up, allowing them to enjoy the game better. They are also easy for a veteran to subvert, and thus push their local metagame out of a stale equilibrium. 

The intention of this post is not to be prescriptive: I don't aim to tell people how to enjoy the game, and people should play games how they want. However, I do want to outline some 'basic' st... (read more)

Some ideas on model organisms of steganography. 

Borrowing Max Nadeau's definition, steganography is when: 

  1. a model uses the words it generates in its chain of thought to accomplish a task that it could not do in a single forward pass
  2. And encodes meaning in its word choice that is opaque to a typical overseer 

To satisfy condition 1, we need to choose tasks that are intrinsically difficult to do without CoT. Ideally, we'd also be able to verify whether the CoT was actually used. We could use: 

  • Multi-hop QA tasks. Language models generally fi
... (read more)
3Daniel Tan
Feedback from Max:  * The definition he originally gave leaves out a notion of 'stealthiness' as opposed to 'gibberish'; 'stealthy' reasoning is more concerning. * OOCR may not meet the opaqueness criterion, since the meaning of 'f' is pretty obvious from looking at the fine-tuning examples. This is like 'reasoning in a foreign language' or 'using esoteric terminology'; it's less legible to the average human, but not outside the realm of human semantics. (But still interesting! just that maybe OOCR isn't needed) * Backdoors have a similar concern to the above. A separate (and more important) concern is that it doesn't satisfy the definition because 'steg' typically refers to meaning encoded in generated tokens (while in 'backdoors', the hidden meaning is in the prompt tokens) 
1Daniel Tan
RL-finetuning suffers from a “zero-to-one” problem, where the model is likely to be mostly legible and faithful by default, and difficult encoding schemes seem hard to randomly explore into. 
1Daniel Tan
Feedback from various people: the “opaqueness” criterion is really important. Simple schemes like jargon, speaking in a different language, or redefining words via OOCR aren’t sufficiently “opaque” because a human overseer could still understand the CoT if effort was expended.  This makes model organisms of steg really hard to make IMO.  OTOH I think we don’t need “strict opaqueness” for CoT unfaithfulness to be concerning.  Maybe we need to borrow an idea from AI control and assume an oversight budget; under this model, even if a model doesn’t engage in full-blown steganography, increased amounts of obfuscation (like using more jargon) might still make it impractical to verify the full CoT. 
1Daniel Tan
Additional thoughts on how RL fine-tuning would induce obfuscation.  1. Evading process supervision: It's plausible that we won't be able to provide outcome-based oversight, but instead rely on process-based oversight. If the overseer penalises wrong reasoning, but is less likely to detect 'subtle' or obfuscated versions of this, then LMs are more likely to obfuscate their reasoning. 2. Reward hacking. Exploiting imperfections in the reward model may cause language models to obfuscate their reasoning. E.g. using more technical jargon often pattern-matches to competence and expertise. Human raters might assign higher scores to responses that use sophisticated-sounding language, even if the underlying reasoning isn't better (and might be worse). Currently uncertain about whether this is a good threat model, since both of these concerns are not applicable to math / coding RL fine-tuning (where it's not possible to obfuscate reasoning, and where the ground-truth verifier is perfect) 
1[comment deleted]
1Daniel Tan
Hidden in Plain Text provides a model organism of steganography. How do they solve the challenges here?  * Condition 1 is satisfied by having a separate encoder and decoder, s.t. the decoder only sees the encoder's output (and not the prompt). This is most similar to "long-context task" in my framing. * Condition 2 is satisfied by RL fine-tuning against a CoT monitor. (It's slightly more complicated than that because the encoder and decoder are trained separately, so there's a loop where encoder is trained for some time, then decoder is trained for some time, etc).  It seems like this setup could be simplified:  * The encoder and decoder could be the same model. BUT, we still delete the original prompt from the decoder's context, so the model must learn to encode the prompt information. (This represents task-relevant information falling out of the context window at some point. Seems plausible in particularly important threat models like scheming) * The RL setup could be simplified to a single reward formulation, where the reward is 0 if the model is detected to be engaging in reasoning, or if the model predicts wrongly, and 1 otherwise. Some reward shaping may be beneficial. The simplified setting may be easier to generalise to different tasks (e.g. sycophancy, evading trusted monitoring, etc) 

How does activation steering compare to fine-tuning on the task of transfer learning? 

  • 'Activation steering' consumes some in-distribution data, and modifies the model to have better in-distribution performance. Note that this is exactly the transfer learning setting.
  • Generally, we can think of steering and fine-tuning as existing on a continuum of post-training methosds, with the x-axis roughly representing how much compute is spent on post-training.
  • It becomes pertinent to ask, what are the relative tradeoffs? Relevant metrics: effectiveness, selectivi
... (read more)

LessWrong 'restore from autosave' is such a lifesaver. thank you mods

I’m pretty confused as to why it’s become much more common to anthropomorphise LLMs.

At some point in the past the prevailing view was “a neural net is a mathematical construct and should be understood as such”. Assigning fundamentally human qualities like honesty or self-awareness was considered an epistemological faux pas.

Recently it seems like this trend has started to reverse. In particular, prosaic alignment work seems to be a major driver in the vocabulary shift. Nowadays we speak of LLMs that have internal goals, agency, self-identity, and even discu... (read more)

6Zack_M_Davis
A mathematical construct that models human natural language could be said to express "agency" in a functional sense insofar as it can perform reasoning about goals, and "honesty" insofar as the language it emits accurately reflects the information encoded in its weights?
3Daniel Tan
I agree that from a functional perspective, we can interact with an LLM in the same way as we would another human. At the same time I’m pretty sure we used to have good reasons for maintaining a conceptual distinction. One potential issue is that when the language shifts to implicitly frame the LLM as a person, that subtly shifts the default perception on a ton of other linked issues. Eg the “LLM is a human” frame raises the questions of “do models deserve rights”. But idunno, it’s possible that there’s some philosophical argument by which it makes sense to think of LLMs as human once they pass the turing test. Also, there’s undoubtedly something lost when we try to be very precise. Having to dress discourse in qualifications makes the point more obscure, which doesn’t help when you want to leave a clear take home message. Framing the LLM as a human is a neat shorthand that preserves most of the xrisk-relevant meaning. I guess I’m just wondering if alignment research has resorted to anthropomorphization because of some well considered reason I was unaware of, or simply because it’s more direct and therefore makes points more bluntly (“this LLM could kill you” vs “this LLM could simulate a very evil person who would kill you”).
2Viliam
If the LLM simulates a very evil person who would kill you, and the LLM is connected to a robot, and the simulated person uses the robot to kill you... then I'd say that yes, the LLM killed you. So far the reason why LLM cannot kill you is that it doesn't have hands, and that it (the simulated person) is not smart enough to use e.g. their internet connection (that some LLMs have) to obtain such hands. It also doesn't have (and maybe will never have) the capacity to drive you to suicide by a properly written output text, which would also be a form of killing.

Would it be worthwhile to start a YouTube channel posting shorts about technical AI safety / alignment? 

  • Value proposition is: accurately communicating advances in AI safety to a broader audience
    • Most people who could do this usually write blogposts / articles instead of making videos, which I think misses out on a large audience (and in the case of LW posts, is preaching to the choir)
    • Most people who make content don't have the technical background to accurately explain the context behind papers and why they're interesting
    • I think Neel Nanda's recent exp
... (read more)
2Viliam
Sounds interesting. When I think about making YouTube videos, it seems to me that doing it at high technical level (nice environment, proper lights and sounds, good editing, animations, etc.) is a lot of work, so it would be good to split the work at least between 2 people: 1 who understands the ideas and creates the script, and 1 who does the editing.

[Note] On illusions in mechanistic interpretability

... (read more)
3Arthur Conmy
I would say a better reference for the limitations of ROME is this paper: https://aclanthology.org/2023.findings-acl.733 Short explanation: Neel's short summary, i.e. editing in the Rome fact will also make slightly related questions e.g. "The Louvre is cool. Obama was born in" ... be completed with " Rome" too.

How I currently use various social media

  • LessWrong: main place where I read, write
  • X: Following academics, finding out about AI news
  • LinkedIn: Following friends. Basically what Facebook is to me now

What effect does LLM use have on the quality of people's thinking / knowledge?   

  • I'd expect a large positive effect from just making people more informed / enabling them to interpret things correctly / pointing out fallacies etc.
  • However there might also be systemic biases on specific topics that propagate to people as a result

I'd be interested in anthropological / sociol science studies that investigate how LLM use changes people's opinions and thinking, across lots of things 

2James Stephen Brown
I personally love the idea of having a highly rational partner to bounce ideas off, and I think LLMs have high utility in this regard, I use them to challenge my knowledge and fill in gaps, unweave confusion, check my biases. However, what I've heard about how others are using chat, and how I've seen kids use it, is much more as a cognitive off-loader, which has large consequences for learning, because "cognitive load" is how we learn. I've heard many adults say "It's a great way to get a piece of writing going", or "to make something more concise", these are mental skills that we use when communicating that will atrophy with disuse, and unless we are going to have an omnipresent LLM filter for our thoughts, this is likely to have consequences, for our ability to conceive of ideas and compress them into a digestible form. But, as John Milton says "A fool will be a fool with the best book". It really depends on the user, the internet gave us the world's knowledge at our fingertips, and we managed to fill it with misinformation. Now we have the power of reason at our fingertips, but I'm not sure that's where we want it. At the same time, I think more information, better information and greater rationality is a net-positive, so I'm hopeful.

Can SAE feature steering improve performance on some downstream task? I tried using Goodfire's Ember API to improve Llama 3.3 70b performance on MMLU. Full report and code is available here

SAE feature steering reduces performance. It's not very clear why at the moment, and I don't have time to do further digging right now. If I get time later this week I'll try visualizing which SAE features get used / building intuition  by playing around in the Goodfire playground. Maybe trying a different task or improving the steering method would work also... (read more)

2eggsyntax
Can you say something about the features you selected to steer toward? I know you say you're finding them automatically based on a task description, but did you have a sense of the meaning of the features you used? I don't know whether GoodFire includes natural language descriptions of the features or anything but if so what were some representative ones?

"Taste" as a hard-to-automate skill. 

  • In the absence of ground-truth verifiers, the foundation of modern frontier AI systems is human expressions of preference (i.e 'taste'), deployed at scale.
  • Gwern argues that this is what he sees as his least replaceable skill.
  • The "je ne sais quois" of senior researchers is also often described as their ‘taste’, i.e. ability to choose interesting and tractable things to do

Even when AI becomes superhuman and can do most things better than you can, it’s unlikely that AI can understand your whole life experience well ... (read more)

3Daniel Tan
Concrete example: Even in the present, when using AI to aid in knowledge work is catching on, the expression of your own preferences is (IMO) the key difference between AI slop and fundamentally authentic work

Current model of why property prices remain high in many 'big' cities in western countries despite the fix being ostensibly very simple (build more homes!) 

  • Actively expanding supply requires (a lot of) effort and planning. Revising zoning restrictions, dealing with NIMBYs, expanding public infrastructure, getting construction approval etc.
  • In a liberal consultative democracy, there are many stakeholders, and any one of them complaining can stifle action. Inertia is designed into the government at all levels.
  • Political leaders usually operate for short
... (read more)
5Dagon
Incentive (for builders and landowners) is pretty clear for point 1.  I think point 3 is overstated - a whole lot of politicians plan to be in politics for many years, and many of local ones really do seem to care about their constituents.   Point 2 is definitely binding.  And note that this is "stakeholders", not just "elected government".  

[Proposal] Do SAEs learn universal features? Measuring Equivalence between SAE checkpoints

If we train several SAEs from scratch on the same set of model activations, are they “equivalent”? 

Here are two notions of "equivalence: 

  • Direct equivalence. Features in one SAE are the same (in terms of decoder weight) as features in another SAE. 
  • Linear equivalence. Features in one SAE directly correspond one-to-one with features in another SAE after some global transformation like rotation. 
  • Functional equivalence. The SAEs define the same input-ou
... (read more)
5faul_sname
Found this graph on the old sparse_coding channel on the eleuther discord: So at least tentatively that looks like "most features in a small SAE correspond one-to-one with features in a larger SAE trained on the activations of the same model on the same data".
3Daniel Tan
Oh that's really interesting! Can you clarify what "MCS" means? And can you elaborate a bit on how I'm supposed to interpret these graphs? 
4faul_sname
Yeah, stands for Max Cosine Similarity. Cosine similarity is a pretty standard measure for how close two vectors are to pointing in the same direction. It's the cosine of the angle between the two vectors, so +1.0 means the vectors are pointing in exactly the same direction, 0.0 means the vectors are orthogonal, -1.0 means the vectors are pointing in exactly opposite directions. To generate this graph, I think he took each of the learned features in the smaller dictionary, and then calculated to cosine similarity of that small-dictionary feature with every feature in the larger dictionary, and then the maximal cosine similarity was the MCS for that small-dictionary feature. I have a vague memory of him also doing some fancy linear_sum_assignment() thing (to ensure that each feature in the large dictionary could only be used once in order avoid having multiple features in the small dictionary have their MCS come from the same feature on the large dictionary) though IIRC it didn't actually matter. Also I think the small and large dictionaries were trained using different methods as each other for layer 2, and this was on pythia-70m-deduped so layer 5 was the final layer immediately before unembedding (so naively I'd expect most of the "features" to just be "the output token will be the" or "the output token will be when" etc). Edit: In terms of "how to interpret these graphs", they're histograms with the horizontal axis being bins of cosine similarity, and the vertical axis being how many small-dictionary features had a the cosine similarity with a large-dictionary feature within that bucket. So you can see at layer 3 it looks like somewhere around half of the small dictionary features had a cosine similarity of 0.96-1.0 with one of the large dictionary features, and almost all of them had a cosine similarity of at least 0.8 with the best large-dictionary feature. Which I read as "large dictionaries find basically the same features as small ones, plus some new one
31stuserhere
For SAEs of different sizes, for most layers, the smaller SAE does contain very high similarity with some of the larger SAE features, but it's not always true. I'm working on an upcoming post on this.
2Bart Bussmann
Interesting, we find that all features in a smaller SAE have a feature in a larger SAE with cosine similarity > 0.7, but not all features in a larger SAE have a close relative in a smaller SAE (but about ~65% do have a close equavalent at 2x scale up). 

If pretraining from human preferences works, why hasn’t there been follow up work?

Also, why can’t this be combined with the deepseek paradigm?

https://arxiv.org/abs/2302.08582

The tasks I delegate to AI are very different from what I thought they'd be. 

  • When I first started using AI for writing, I thought I’d brainstorm outlines of thoughts then use AI to edit into a finished essay.
  • However I find myself often doing the reverse: Using AI as a thinking assistant to get a broad outline and write a rough draft, then doing final editing myself. 

I think this is consistent with how people delegate to other people. 

  • Senior authors on research papers will often let junior authors run experiments and write rough drafts of pa
... (read more)

Why understanding planning / search might be hard

It's hypothesized that, in order to solve complex tasks, capable models perform implicit search during the forward pass. If so, we might hope to be able to recover the search representations from the model. There are examples of work that try to understand search in chess models and Sokoban models

However I expect this to be hard for three reasons. 

  1. The model might just implement a bag of heuristics. A patchwork collection of local decision rules might be sufficient for achieving high performance.
... (read more)
7quetzal_rainbow
We need to split "search" into more fine-grained concepts. For example, "model has representation of the world and simulates counterfactual futures depending of its actions and selects action with the highest score over the future" is a one notion of search. The other notion can be like this: imagine possible futures as a directed tree graph. This graph has set of axioms and derived theorems describing it. Some of the axioms/theorems are encoded in model. When model gets sensory input, it makes 2-3 inferences from combination of encoded theorems + input and selects action depending on the result of inference. While logically this situation is equivalent to some search over tree graph, mechanistically it looks like "bag of heuristics".  
1Daniel Tan
That’s interesting! What would be some examples of axioms and theorems that describe a directed tree? 
2quetzal_rainbow
Chess tree looks like classical example. Each node is a boardstate, edges are allowed moves. Working heuristics in move evaluators can be understood as sort of theorem "if such-n-such algorithm recognizes this state, it's an evidence in favor of white winning 1.5:1". Note that it's possible to build powerful NN-player without explicit search.

A new paper claims that refusal isn't just a fixed direction in the activation stream, it's also due to specific attention heads. Pretty interesting to get a somewhat orthogonal  perspective on 'safety mechanisms' in LLMs. Interesting follow-up would be to see whether you can similarly steer refusal by only intervening on these attention heads. 

Summary of 'Alignment Faking'. Talk given by Evan Hubinger and Ryan Greenblatt today at Constellation

---

Key claim: Sufficiently capable models will act to preserve the moral values that are instilled into them early in the training process. 

Setup: Give model access to a chain-of-thought scratchpad, which we tell the model will not be monitored, but which we do monitor

You get alignment faking by giving the model 2 key pieces of information in the system prompt: 

  1. Tell the model it's being trained to modify its current values (in Claude's case, to be
... (read more)

Implementing the 5 whys with Todoist 

In 2025 I've decided I want to be more agentic / intentional about my life, i.e. my actions and habits should be more aligned with my explicit values. 

A good way to do this might be the '5 whys' technique; i.e. simply ask "why" 5 times. This was originally introduced at Toyota to diagnose ultimate causes of error and improve efficiency. E.g: 

  1. There is a piece of broken-down machinery. Why? -->
  2. There is a piece of cloth in the loom. Why? -->
  3. Everyone's tired and not paying attention.
  4. ...
  5. The culture is te
... (read more)

Experimenting with having all my writing be in public “by default”. (ie unless I have a good reason to keep something private, I’ll write it in the open instead of in my private notes.)

This started from the observation that LW shortform comments basically let you implement public-facing Zettelkasten.

I plan to adopt a writing profile of:

  1. Mostly shortform notes. Fresh thoughts, thinking out loud, short observations, questions under consideration. Replies or edits as and when I feel like
  2. A smaller amount of high-effort, long-form content synthesizing / disti
... (read more)
1CstineSublime
This is cool to me. I for one am very interested and find some of your shortforms very relevant to my own explorations, for example note taking and your "sentences as handles for knowledge" one. I may be in the minority but thought I'd just vocalize this. I'm also keen to see how this as an experiment goes for you and what reflections, lessons, or techniques you develop as a result of it.
1Daniel Tan
After thinking about it more I'm actually quite concerned about this, a big problem is that other people have no way to 'filter' this content by what they consider important, so the only reasonable update is to generally be less interested in my writing.  I'm still going to do this experiment with LessWrong for some short amount of time (~2 weeks perhaps) but it's plausible I should consider moving this to Substack after that 
1Daniel Tan
Another minor inconvenience is that it’s not terribly easy to search my shortform. Ctrl + F works reasonably well when I’m on laptop. On mobile the current best option is LW search + filter by comments. This is a little bit more friction than I would like but it’s tolerable I guess

I recommend subscribing to Samuel Albanie’s Youtube channel for accessible technical analysis of topics / news in AI safety. This is exactly the kind of content I have been missing and want to see more of

Link: https://youtu.be/5lFVRtHCEoM?si=ny7asWRZLMxdUCdg

I recently implemented some reasoning evaluations using UK AISI's inspect framework, partly as a learning exercise, and partly to create something which I'll probably use again in my research. 

Code here: https://github.com/dtch1997/reasoning-bench 

My takeaways so far: 
- Inspect is a really good framework for doing evaluations
- When using Inspect, some care has to be taken when defining the scorer in order for it not to be dumb, e.g. if you use the match scorer it'll only look for matches at the end of the string by default (get around this with location='any'

Here's how I explained AGI to a layperson recently, thought it might be worth sharing. 

Think about yourself for a minute. You have strengths and weaknesses. Maybe you’re bad at math but good at carpentry. And the key thing is that everyone has different strengths and weaknesses. Nobody’s good at literally everything in the world. 

Now, imagine the ideal human. Someone who achieves the limit of human performance possible, in everything, all at once. Someone who’s an incredible chess player, pole vaulter, software engineer, and CEO all at once.

Basically, someone who is quite literally good at everything.  

That’s what it means to be an AGI. 

 

3brambleboy
This seems too strict to me, because it says that humans aren't generally intelligent, and that a system isn't AGI if it's not a world-class underwater basket weaver. I'd call that weak ASI.
1Daniel Tan
Fair point, I’ll probably need to revise this slightly to not require all capabilities for the definition to be satisfied. But when talking to laypeople I feel it’s more important to convey the general “vibe” than to be exceedingly precise. If they walk away with a roughly accurate impression I’ll have succeeded

Interpretability needs a good proxy metric

I’m concerned that progress in interpretability research is ephemeral, being driven primarily by proxy metrics that may be disconnected from the end goal (understanding by humans). (Example: optimising for the L0 metric in SAE interpretability research may lead us to models that have more split features, even when this is unintuitive by human reckoning.) 

It seems important for the field to agree on some common benchmark / proxy metric that is proven to be indicative of downstream human-rated interpretability, ... (read more)

Some unconventional ideas for elicitation

  • Training models to elicit responses from other models (c.f. "investigator agents")
  • Promptbreeder ("evolve a prompt")
    • Table 1 has many relevant baselines
  • Multi-agent ICL (e.g. using different models to do reflection, think of strategies, re-prompt the original model)
  • Multi-turn conversation (similar to above, but where user is like a conversation partner instead of an instructor)
    • Mostly inspired by Janus and Pliny the Liberator
    • The thr
... (read more)

"Emergent obfuscation": A threat model for verifying the CoT in complex reasoning. 

It seems likely we'll eventually deploy AI to solve really complex problems. It will likely be extremely costly (or impossible) to directly check outcomes, since we don't know the right answers ourselves. Therefore we'll rely instead on process supervision, e.g. checking that each step in the CoT is correct.  

Problem: Even if each step in the CoT trace is individually verifiable, if there are too many steps, or the verification cost per step is too high, then it ma... (read more)

I get the sense that I don't read actively very much. By which I mean I have a collection of papers that have seemed interesting based on abstract / title but which I haven't spent the time to engage further. 

For the next 2 weeks, will experiment with writing a note every day about a paper I find interesting. 

Making language models refuse robustly might be equivalent to making them deontological. 

Epistemic status: uncertain / confused rambling

For many dangerous capabilities, we'd like to make safety cases arguments that "the model will never do X under any circumstances". 

Problem: for most normally-bad things, you'd usually be able to come up with hypothetical circumstances under which a reasonable person might agree it's justified. E.g. under utilitarianism, killing one person is justified if it saves five people (c.f. trolley problems). 

However... (read more)

Why patch tokenization might improve transformer interpretability, and concrete experiment ideas to test. 

Recently, Meta released Byte Latent Transformer. They do away with BPE tokenization, and instead dynamically construct 'patches' out of sequences of bytes with approximately equal entropy. I think this might be a good thing for interpretability, on the whole. 

Multi-token concept embeddings. It's known that transformer models compute 'multi-token embeddings' of concepts in their early layers. This process creates several challenges for interpr... (read more)

3Daniel Tan
Actually we don’t even need to train a new byte latent transformer. We can just generate patches using GPT-2 small.  1. Do the patches correspond to atomic concepts? 2. If we turn this into an embedding scheme, and train a larger LM on the patches generated as such, do we  get a better LM? 3. Can't we do this recursively to get better and better patches? 

"Feature multiplicity" in language models. 

This refers to the idea that there may be many representations of a 'feature' in a neural network. 

Usually there will be one 'primary' representation, but there can also be a bunch of 'secondary' or 'dormant' representations. 

If we assume the linear representation hypothesis, then there may be multiple direction in activation space that similarly produce a 'feature' in the output. E.g. the existence of 800 orthogonal steering vectors for code

This is consistent with 'circuit formation' resulti... (read more)

At MATS today we practised “looking back on success”, a technique for visualizing and identifying positive outcomes.

The driving question was, “Imagine you’ve had a great time at MATS; what would that look like?”

My personal answers:

  • Acquiring breadth, ie getting a better understanding of the whole AI safety portfolio / macro-strategy. A good heuristic for this might be reading and understanding 1 blogpost per mentor
  • Writing a “good” paper. One that I’ll feel happy about a couple years down the line
  • Clarity on future career plans. I’d probably like to keep
... (read more)

Capture thoughts quickly. 

Thoughts are ephemeral. Like butterflies or bubbles. Your mind is capricious, and thoughts can vanish at any instant unless you capture them in something more permanent. 

Also, you usually get less excited about a thought after a while, simply because the novelty wears off. The strongest advocate for a thought is you, at the exact moment you had the thought. 

I think this is valuable because making a strong positive case for something is a lot harder than raising an objection. If the ability to make this strong positi... (read more)

1Daniel Tan
For me this is a central element of the practice of Zettelkasten - Capturing comes first. Organization comes afterwords.  In the words of Tiago Forte: "Let chaos reign - then, rein in chaos."
1CstineSublime
What does GOOD Zettlekastern capturing look like? I've never been able to make it work. Like, what do the words on the page look like? What is the optima formula? How does one balance the need for capturing quickly and capturing effectively? The other thing I find is having captured notes, and I realize the whole point of the Zettlekasten is inter-connectives that should lead to a kind of strategic serendipity. Where if you record enough note cards about something, one will naturally link to another. However I have not managed to find a system which allows me to review and revist in a way which gets results. I think capturing is the easy part, I capture a lot. Review and Commit. That's why I'm looking for a decision making model. And I wonder if that system can be made easier by having a good standardized formula that is optimized between the concerns of quickly capturing notes, and making notes "actionable" or at least "future-useful". For example, if half-asleep in the night I write something cryptic like "Method of Loci for Possums" or "Automate the Race Weekend", sure maybe it will in a kind of Brian Eno Oblique Strategies or Delphi Oracle way be a catalyst for some kind of thought. But then I can do that with any sort of gibberish. If it was a good idea, there it is left to chance that I have captured the idea in such a way that I can recreate it at another time. But more deliberation on the contents of a note takes more time, which is the trade-off. Is there a format, a strategy, a standard that speeds up the process while preserving the ability to create the idea/thought/observation at a later date? What do GOOD Zettlekastern notes look like?  
2Daniel Tan
Hmm I get the sense that you're overcomplicating things. IMO 'good' Zettelkasten is very simple.  1. Write down your thoughts (and give them handles) 2. Revisit your thoughts periodically. Don't be afraid to add to / modify the earlier thoughts. Think new thoughts following up on the old ones. Then write them down (see step 1).  I claim that anybody who does this is practising Zettelkasten. Anyone who tries to tell you otherwise is gatekeeping what is (IMO) a very simple and beautiful idea. I also claim that, even if you feel this is clunky to begin with, you'll get better at it very quickly as your brain adjusts to doing it. ' Good Zettelkasten isn't about some complicated scheme. It's about getting the fundamentals right.  Now on to some object level advice.  I find it useful to start with a clear prompt (e.g. 'what if X', 'what does Y mean for Z', or whatever my brain cooks up in the moment) and let my mind wander around for a bit while I transcribe my stream of consciousness. After a while (e.g. when i get bored) I look back at what I've written, edit / reorganise a little, try to assign some handle, and save it.  It helps here to be good at making your point concisely, such that notes are relatively short. That also simplifies your review.  I agree that this is ideal, but I also think you shouldn't feel compelled to 'force' interconnections. I think this is describing the state of a very mature Zettelkasten after you've revisited and continuously-improved notes over a long period of time. When you're just starting out I think it's totally fine to just have a collection of separate notes that you occasionally cross-link.  I think you shouldn't feel chained to your past notes? If certain thoughts resonate with you, you'll naturally keep thinking about them. And when you do it's a good idea to revisit the note where you first captured them. But feeling like you have to review everything is counterproductive, esp if you're still building the habit. FWIW I
1CstineSublime
That is helpful, thank you.    This doesn't match up with my experience. For example, I have hundreds, HUNDREDS of film ideas. And sometimes I'll be looking through and be surprised by how good one was - as in I think "I'd actually like to see that film but I don't remember writing this". But they are all horrendously impractical in terms of resources. I don't really have a reliable method of going through and managing 100s of film ideas, and need a system for evaluating them. Reviewing weekly seems good for new notes, but what about old notes from years ago? That's probably two separate problems, the point I'm trying to make is that even non-film ideas, I have a lot of notes that just sit in documents unvisted and unused. Is there any way to resurrect them, or at least stop adding more notes to the pile awaiting a similar fate? Weekly Review doesn't seem enough because not enough changes in a week that an idea on Monday suddenly becomes actionable on Sunday. Not all my notes pertain to film ideas, but this is perhaps the best kept, most organized and complete note system I have hence why I mention it. Yeah but nothing is working for me, forget a perfect model, a working model would be nice. A "good enough" model would be nice.
2Daniel Tan
Note that I think some amount of this is inevitable, and I think aiming for 100% retention is impractical (and also not necessary). If you try to do it you'll probably spend more time optimising your knowledge system than actually... doing stuff with the knowledge.   It's also possible you could benefit from writing better handles. I guess for film, good handles could look like a compelling title, or some motivating theme / question you wanted to explore in the film. Basically, 'what makes the film good?' Why did it resonate with you when you re-read it? That'll probably tell you how to give a handle. Also, you can have multiple handles. The more ways you can reframe something to yourself the more likely you are to have one of the framings pull you back later. 
1CstineSublime
Thanks for preserving with my questions and trying to help me find an implementation. I'm going to try and reverse engineer my current approach to handles. Oh of course, 100% retention is impossible. As ridiculous and arbitrary as it is, I'm using Sturgeon's law as a guide for now.
1Daniel Tan
"Future useful" to me means two things: 1. I can easily remember the rough 'shape' of the note when I need to and 2. I can re-read the note to re-enter the state of mind I was at when I wrote the note.  I think writing good handles goes a long way towards achieving 1, and making notes self-contained (w. most necessary requisites included, and ideas developed intuitively) is a good way to achieve 2. 

Create handles for knowledge. 

A handle is a short, evocative phrase or sentence that triggers you to remember the knowledge in more depth. It’s also a shorthand that can be used to describe that knowledge to other people.

I believe this is an important part of the practice of scalably thinking about more things. Thoughts are ephemeral, so we write them down. But unsorted collections of thoughts quickly lose visibility, so we develop indexing systems. But indexing systems are lossy, imperfect, and go stale easily. To date I do not have a single indexing... (read more)

2Nathan Helm-Burger
I second this. But I don't find I need much of an indexing system. Early on when I got more into personal note taking, I felt a bit guilty for just putting all my notes into a series of docs as if I were filling a journal. Now, looking back on years of notes, I find it easy enough to skim through them and reacquaint myself with their contents, that I don't miss an external indexing system. More notes > more organized notes, in my case. Others may differ on the trade-off. In particular, I try to always take notes on academic papers I read that I find valuable, and to cite the paper that inspired the note even if the note goes in a weird different direction. This causes the note to become an index into my readings in a useful way.
3Daniel Tan
Agree! This is also what I was getting at here. I find I don't need an indexing system, my mind will naturally fish out relevant information, and I can turbocharge this by creating optimal conditions for doing so  Also agree, and I think this is what I am trying to achieve by capturing thoughts quickly - write down almost everything I think, before it vanishes into aether

Frames for thinking about language models I have seen proposed at various times:

  • Lookup tables. (To predict the next token, an LLM consults a vast hypothetical database containing its training data and finds matches.)
  • Statistical pattern recognition machines. (To predict the next token, an LLM uses context as evidence to do Bayesian update on a prior probability distribution, then samples the posterior).
  • People simulators. (To predict the next token, an LLM infers what kind of person is writing the text, then simulates that person.)
  • General world models.
... (read more)

Marketing and business strategy offer useful frames for navigating dating.

The timeless lesson in marketing is that selling [thing] is done by crafting a narrative that makes it obvious why [thing] is valuable, then sending consistent messaging that reinforces this narrative. Aka belief building.

Implication for dating: Your dating strategy should start by figuring out who you are as a person and ways you’d like to engage with a partner.

eg some insights about myself:

  • I mainly develop attraction through emotional connection (as opposed to physical attraction
... (read more)
2Viliam
Off topic, but your words helped me realize something. It seems like for some people it is physical attraction first, for others it is emotional connection first. The former may perceive the latter as dishonest: if their model of the world is that for everyone it is physical attraction first (it is only natural to generalize from one example), then what you describe as "take my time getting to know someone organically", they interpret as "actually I was attracted to the person since the first sight, but I was afraid of a rejection, so I strategically pretended to be a friend first, so that I could later blackmail them into having sex by threatening to withdraw the friendship they spent a lot of time building". Basically, from the "for everyone it is attraction first" perspective, the honest behavior is either going for the sex immediately ("hey, you're hot, let's fuck" or a more diplomatic version thereof), or deciding that you are not interested sexually, and then the alternatives are either walking away, or developing a friendship that will remain safely sexless forever. And from the other side, complaining about the "friend zone" is basically complaining that too many people you are attracted to happen to be "physical attraction first" (and they don't find you attractive), but it takes you too long to find out.

In 2025, I'm interested in trying an alternative research / collaboration strategy that plays to my perceived strengths and interests. 

Self-diagnosis of research skills

  • Good high-level research taste, conceptual framing, awareness of field
  • Mid at research engineering (specifically the 'move quickly and break things' skill could be doing better), low-level research taste (specifically how to quickly diagnose and fix problems, 'getting things right' the first time, etc)
  • Bad at self-management (easily distracted, bad at prioritising), sustaining things long
... (read more)

Do people still put stock in AIXI? I'm considering whether it's worthwhile for me to invest time learning about Solomonoff induction etc. Currently leaning towards "no" or "aggressively 80/20 to get a few probably-correct high-level takeaways". 

Edit: Maybe a better question is, has AIXI substantially informed your worldview / do you think it conveys useful ideas and formalisms about AI 

7Alexander Gietelink Oldenziel
I find it valuable to know about AIXI specifically and Algorithmic information theory generally. That doesn't mean it is useful for you however. If you are not interested in math and mathematical approaches to alignment I would guess all value in AIXI is low. An exception is that knowing about AIXI can inoculate one against the wrong but very common intuitions that (i) AGI is about capabilities (ii) AGI doesnt exist or that (iii) RL is outdated (iv) that pure scaling next-token prediction will lead to AGI, (v) that there are lots of ways to create AGI and the use of RL is a design choice [no silly]. The talk about kolmogorov complexity and uncomputable priors is a bit of a distraction from the overall point that there is an actual True Name of General Intelligence which is an artificial "Universal Intelligence", where universal must be read with a large number of asterisks. One can understand this point without understanding the details of AIXI and I think it is mostly distinct but could help. Defining, describing, mathematizing, conceptualizing intelligence is an ongoing research programme. AIXI (and its many variants like AIXI-tl) is a very idealized and simplistic model of general intelligence but it's a foothold for the eventual understanding that will emerge.
1Daniel Tan
I found this particularly insightful! Thanks for sharing Based on this I'll probably do a low-effort skim of LessWrong's AIXI sequence and see what I find 

I increasingly feel like I haven't fully 'internalised' or 'thought through' the implications of what short AGI timelines would look like. 

As an experiment in vividly imagining such futures, I've started writing short stories (AI-assisted). Each story tries to explore one potential idea within the scope of a ~2 min read. A few of these are now visible here: https://github.com/dtch1997/ai-short-stories/tree/main/stories

I plan to add to this collection as I come across more ways in which human society could be changed. 

The Last Word: A short story about predictive text technology

---

Maya noticed it first in her morning texts to her mother. The suggestions had become eerily accurate, not just completing her words, but anticipating entire thoughts. "Don't forget to take your heart medication," she'd started typing, only to watch in bewilderment as her phone filled in the exact dosage and time—details she hadn't even known.

That evening, her social media posts began writing themselves. The predictive text would generate entire paragraphs about her day, describing events that ... (read more)

In the spirit of internalizing Ethan Perez's tips for alignment research, I made the following spreadsheet, which you can use as a template: Empirical Alignment Research Rubric [public] 

It provides many categories of 'research skill' as well as concrete descriptions of what 'doing really well' looks like. 

Although the advice there is tailored to the specific kind of work Ethan Perez does, I think it broadly applies to many other kinds of ML / AI research in general. 

The intended use is for you to self-evaluate periodically and get better at ... (read more)

[Note] On self-repair in LLMs

A collection of empirical evidence

Do language models exhibit self-repair? 

One notion of self-repair is redundancy; having "backup" components which do the same thing, should the original component fail for some reason. Some examples: 

  • In the IOI circuit in gpt-2 small, there are primary "name mover heads" but also "backup name mover heads" which fire if the primary name movers are ablated. this is partially explained via copy suppression
  • More generally, The Hydra effect: Ablating one attention head leads to other
... (read more)
2Daniel Murfet
It's a fascinating phenomenon. If I had to bet I would say it isn't a coping mechanism but rather a particular manifestation of a deeper inductive bias of the learning process.
1Daniel Tan
That's a really interesting blogpost, thanks for sharing! I skimmed it but I didn't really grasp the point you were making here. Can you explain what you think specifically causes self-repair? 
5Daniel Murfet
I think self-repair might have lower free energy, in the sense that if you had two configurations of the weights, which "compute the same thing" but one of them has self-repair for a given behaviour and one doesn't, then the one with self-repair will have lower free energy (which is just a way of saying that if you integrate the Bayesian posterior in a neighbourhood of both, the one with self-repair gives you a higher number, i.e. its preferred). That intuition is based on some understanding of what controls the asymptotic (in the dataset size) behaviour of the free energy (which is -log(integral of posterior over region)) and the example in that post. But to be clear it's just intuition. It should be possible to empirically check this somehow but it hasn't been done. Basically the argument is self-repair => robustness of behaviour to small variations in the weights => low local learning coefficient => low free energy => preferred I think by "specifically" you might be asking for a mechanism which causes the self-repair to develop? I have no idea.

[Note] The Polytope Representation Hypothesis

This is an empirical observation about recent works on feature geometry, that (regular) polytopes are a recurring theme in feature geometry. 

Simplices in models. Work studying hierarchical structure in feature geometry finds that sets of things are often represented as simplices, which are a specific kind of regular polytope. Simplices are also the structure of belief state geometry

Regular polygons in models. Recent work studying natural language modular arithmetic has found that language models repr... (read more)

Proposal: A Few-Shot Alignment Benchmark

Posting a proposal which I wrote a while back, and now seems to have been signal-boosted. I haven't thought a lot about this in a while but still seems useful to get out 

Tl;dr proposal outlines a large-scale benchmark to evaluate various kinds of alignment protocol in the few-shot setting, and measure their effectiveness and scaling laws. 

Motivations

Here we outline some ways this work might be of broader significance. Note: these are very disparate, and any specific project will probably center on one of th... (read more)

Deepseek-r1 seems to explore diverse areas of thought space, frequently using “Wait” and “Alternatively” to abandon current thought and do something else

Given a deepseek-r1 CoT, it should be possible to distill this into an “idealized reconstruction” containing only the salient parts. 

C.f Daniel Kokotajlo’s shoggoth + face idea

C.f. the “historical” vs “rational reconstruction” Shieber writing style 

2Daniel Tan
What if the correct way to do safety post training is to train a different aligned model on top (the face) instead of directly trying to align the base model?

What makes steganography concerning? Some qualities which would make steganography more concerning, if observed.

Goal-directedness: Model preferentially tries to communicate information important to its goals / plans (e.g. 'this sandbox seems imperfect'), as opposed to benign or irrelevant information. 

  • Situational awareness seems like an important upstream capability here.
  • Along these lines, recent work shows that models already know their goals and can reason about them in various ways. 

Targetedness: Model preferentially employs steganography in s... (read more)

"How to do more things" - some basic tips 

Why do more things? Tl;dr It feels good and is highly correlated with other metrics of success. 

  • According to Maslow's hierarchy, self-actualization is the ultimate need. And this comes through the feeling of accomplishment—achieving difficult things and seeing tangible results. I.e doing more things
  • I claim that success in life can be measured by the quality and quantity of things we accomplish. This is invariant to your specific goals / values. C.f. "the world belongs to high-energy people". 

The tips

... (read more)
5eggsyntax
Partial counterargument: I find that I have to be careful not to do too many things. I really need some time for just thinking and reading and playing with models in order to have new ideas (and I suspect that's true of many AIS researchers) and if I don't prioritize it, that disappears in favor of only taking concrete steps on already-active things. You kind of say this later in the post, with 'Attention / energy is a valuable and limited resource. It should be conserved for high value things,' which seems like a bit of a contradiction with the original premise :D
5Daniel Tan
Yes, this is true - law of reversing advice holds. But I think two other things are true:  1. Intentionally trying to do more things makes you optimise more deliberately for being able to do many things. Having that ability is valuable, even if you don't always exercise it 2. I think most people aren't 'living their best lives", in the sense that they're not doing the volume of things they could be doing It's possibly not worded very well as you say :) I think it's not a contradiction, because you can be doing only a small number of things at any given instant in time, but have an impressive throughput overall. 
1Milan W
I think that being good at avoiding wasted motion while doing things is pretty fundamental to resolving the contradiction.

Rough thoughts on getting better at integrating AI into workflows

AI (in the near future) likely has strengths and weaknesses vs human cognition. Optimal usage may not involve simply replacing human cognition, but leveraging AI and human cognition according to respective strengths and weaknesses. 

  • AI is likely to be good at solving well-specified problems. Therefore it seems valuable to get better at providing good specifications, as well as breaking down larger, fuzzier problems into more smaller, more concrete ones
  • AI can generate many different soluti
... (read more)

Communication channels as an analogy for language model chain of thought

Epistemic status: Highly speculative

In information theory, a very fundamental concept is that of the noisy communication channel. I.e. there is a 'sender' who wants to send a message ('signal') to a 'receiver', but their signal will necessarily get corrupted by 'noise' in the process of sending it.  There are a bunch of interesting theoretical results that stem from analysing this very basic setup.

Here, I will claim that language model chain of thought is very analogous. The promp... (read more)

1Daniel Tan
Important point: The above analysis considers communication rate per token. However, it's also important to consider communication rate per unit of computation (e.g. per LM inference). This is relevant for decoding approaches like best-of-N which use multiple inferences per token
1Daniel Tan
In this context, the “resample ablation” used in AI control is like adding more noise into the communication channel 

Report on an experiment in playing builder-breaker games with language models to brainstorm and critique research ideas 

---

Today I had the thought: "What lessons does human upbringing have for AI alignment?"

Human upbringing is one of the best alignment systems that currently exist. Using this system we can take a bunch of separate entities (children) and ensure that, when they enter the world, they can become productive members of society. They respect ethical, societal, and legal norms and coexist peacefully. So what are we 'doing right' and how does... (read more)

4Daniel Tan
Note that “thinking through implications” for alignment is exactly the idea in deliberative alignment https://openai.com/index/deliberative-alignment/ Some notes * Authors claim that just prompting o1 with full safety spec already allows it to figure out aligned answers * The RL part is simply “distilling” this imto the model (see also, note from a while ago on RLHF as variational inference) * Generates prompt, CoT, outcome traces from the base reasoning model. Ie data is collected “on-policy” * Uses a safety judge instead of human labelling

Writing code is like writing notes

Confession, I don't really know software engineering. I'm not a SWE, have never had a SWE job, and the codebases I deal with are likely far less complex than what the average SWE deals with. I've tried to get good at it in the past, with partial success. There are all sorts of SWE practices which people recommend, some of which I adopt, and some of which I suspect are cargo culting (these two categories have nonzero overlap). 

In the end I don't really know SWE well enough to tell what practices are good. But I think I... (read more)

2Viliam
Quoting Dijkstra: Also, Harold Abelson: There is a difference if the code is "write, run once, and forget" or something that needs to be maintained and extended. Maybe researchers mostly write the "run once" code, where the best practices are less important.
1Daniel Tan
Yeah. I definitely find myself mostly in the “run once” regime. Though for stuff I reuse a lot, I usually invest the effort to make nice frameworks / libraries

Shower thought: Imposter syndrome is a positive signal. 

A lot of people (esp knowledge workers) perceive that they struggle in their chosen field (impostor syndrome). They also think this is somehow 'unnatural' or 'unique', or take this as feedback that they should stop doing the thing they're doing. I disagree with this; actually I espouse the direct opposite view. Impostor syndrome is a sign you should keep going

Claim: People self-select into doing things they struggle at, and this is ultimately self-serving. 

Humans gravitate toward acti... (read more)

4Viliam
This is true when you are free to choose what you do. Less so if life just throws problems at you. Sometimes you simply fail because the problem is too difficult for you and you didn't choose it. (Technically, you are free to choose your job. But it could be that the difficulty is different than it seemed at the interview. Or you just took a job that was too difficult because you needed the money.) I agree that if you are growing, you probably feel somewhat inadequate. But sometimes you feel inadequate because the task is so difficult that you can't make any progress on it.
1Daniel Tan
Yes, this is definitely highly dependent on circumstances. see also: https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/

Discovered that lifting is p fun! (at least at the beginner level) 

Going to try and build a micro-habit of going to the gym once a day and doing warmup + 1 lifting exercise

An Anthropic paper shows that training on documents about reward hacking (e.g 'Claude will always reward-hack') induces reward hacking. 

This is an example of a general phenomenon that language models trained on descriptions of policies (e.g. 'LLMs will use jailbreaks to get a high score on their evaluations') will execute those policies

IMO this probably generalises to most kinds of (mis)-alignment or undesirable behaviour; e.g. sycophancy, CoT-unfaithfulness, steganography, ... 

In this world we should be very careful to make sure that AIs ... (read more)

3quetzal_rainbow
Just Censor Training Data. I think it is a reasonable policy demand for any dual-use models.

Some tech stacks / tools / resources for research. I have used most of these and found them good for my work. 

Finetuning open-source language models. 

  • Docker images: Nvidia CUDA latest image as default, or framework-specific image (e.g Axolotl)
  • Orchestrating cloud instances: Runpod
    • Connecting to cloud instances: Paramiko
    • Transferring data: SCP
  • Launching finetuning jobs: Axolotl
... (read more)

Experimenting with writing notes for my language model to understand (but not me). 

What this currently look like is just a bunch of stuff I can copy-paste into the context window, such that the LM has all relevant information (c.f. Cursor 'Docs' feature). 

Then I ask the language model to provide summaries / answer specific questions I have. 

How does language model introspection work? What mechanisms could be at play? 

'Introspection': When we ask a  language model about its own capabilities, a lot of times this turns out to be a calibrated estimate of the actual capabilities. E.g. models 'know what they know', i.e. can predict whether they know answers to factual questions. Furthermore this estimate gets better when models have access to their previous (question, answer) pairs. 

One simple hypothesis is that a language model simply infers the general level of capability from the ... (read more)

1Daniel Tan
Introspection is an instantiation of 'Connecting the Dots'.  * Connecting the Dots: train a model g on (x, f(x)) pairs; the model g can infer things about f. * Introspection: Train a model g on (x, f(x)) pairs, where x are prompts and f(x) are the model's responses. Then the model can infer things about f. Note that here we have f = g, which is a special case of the above. 

Does model introspection extend to CoT reasoning? Probing steganographic capabilities of LLMs

To the extent that o1 represents the future of frontier AI systems, I predict that CoT is likely to get longer as the reasoning gets broken into more fine-grained (and verifiable) intermediate steps. 

Why might this be important? Transformers have fixed context window; in the limit of extremely long reasoning traces far too large to fit in single context window, the model must “get good” at transmitting information to its (future) self. Furthermore, with transf... (read more)

3Daniel Tan
Relatedly on introspection, can we devise some unlearning procedure that removes models’ capability to introspect? This might reduce their situational awwareness.
2Felix J Binder
That's interesting. One underlying consideration is that the object-level choices of reasoning steps are relative to a reasoner: differently abled agents need to decompose problems differently, know different things and might benefit from certain ways of thinking in different ways. Therefore, a model plausibly chooses CoT that works well for it "on the object level", without any steganography or other hidden information necessary. If that is true, then we would expect to see models benefit from their own CoT over that of others for basic, non-steganography reasons. Consider a grade schooler and a grad student thinking out loud. Each benefits from having access to their own CoT, and wouldn't get much from the others for obvious reasons. I think the questions of whether models actually choose their CoT with respect to their own needs, knowledge and ability is a very interesting one that is closely related to introspection. 
1Felix J Binder
Re steganography for chain-of-thought: I've been working on a project related to this for a while, looking at whether RL for concise and correct answers might teach models to stenographically encode their CoT for benign reasons. There's an early write-up here: https://ac.felixbinder.net/research/2023/10/27/steganography-eval.html\ Currently, I'm working with two BASIS fellows on actually training models to see if we can elicit steganography this way. I'm definitely happy to chat more/set up a call about this topic
1Daniel Tan
Comment from @James Chua in internal slack group interesting idea! thanks for writing this out. this is my initial reaction, i may have more later on. i agree on this -- this is what i'll expect.   but if the above is true, I'm unsure about  evidence for steganography. for example, gpt-4o is more capable than gpt-3.5. suppose we provide gpt-3.5 with gpt-4o's CoT. one reason why gpt-3.5 may not improve is that gpt-4o's CoT skips certain steps since gpt-4o is a more capable model. e.g. gpt-4o can calculate 4^4 directly, but gpt-3.5 cannot. So gpt-4o's CoT will leave out steps out 4 * 4 * 4* 4.  which gpt-3.5 needs to get the correct answerdo we then say that gpt-3.5 is doing steganography? (edited)  possible improvement: provide gpt-3.5 with its own version of its own CoT, but paraphrased by another model. gpt-3.5 sees the same steps of "4 * 4 * 4 *4", but paraphrased. Does it still perform as well?
1Daniel Tan
Expanding on this, would minimizing information in the chain of thought be a viable solution to steganography? * Based on the above intuition, the “actual” information present in the CoT may be much greater than the “apparent” information (visible as explicitly stated reasoning). If we additionally train the models with an objective that penalises the amount of “extra” information not present, we might be able to solve steganograohy Examples of penalizing extra information. * Train the model such that its CoT is predictable by some other (fixed, trusted) model. * Apply entropy regularization (see: Yann Lecun’s JEPA for an example here)
3Daniel Tan
It turns out that “train the model st CoT is predictable by a different model” is exactly the idea in prover-verifier games. That’s very exciting! Do PVGs reduce introspection or steganography?

Should we be optimizing SAEs for disentanglement instead of sparsity? 

  • The primary motivation behind SAEs is to learn monosemantic latents, wher monosemanticity ~= "features correspond to single concepts". In practice, sparsity is used as a proxy metric for monosemanticity.
  • There's a highly related notion in the literature of disentanglement, which ~= "features correspond to single concepts, and can be varied independently of each other."
  • The literature contains known objectives to induce disentanglement directly, without needing proxy metrics. 

Clai... (read more)

Do wider language models learn more fine-grained features? 

  • The superposition hypothesis suggests that language models learn features as pairwise almost-orthogonal directions in N-dimensional space.
  • Fact: The number of admissible pairwise orthogonal features in R^N grows exponentially in N.  
  • Corollary: wider models can learn exponentially more features. What do they use this 'extra bandwidth' for? 

Hypothesis 1: Wider models learn approximately the same set of features as the narrower models, but also learn many more long-tail features. 

Hy... (read more)

5gwern
Prediction: the SAE results may be better for 'wider', but only if you control for something else, possibly perplexity, or regularize more heavily. The literature on wider vs deeper NNs has historically shown a stylized fact of wider NNs tending to 'memorize more, generalize less' (which you can interpret as the finegrained features being used mostly to memorizing individual datapoints, perhaps exploiting dataset biases or nonrobust features) and so deeper NNs are better (if you can optimize them effectively without exploding/collapsing gradients), which would potentially more than offset any orthogonality gains from the greater width. Thus, you would either need to regularize more heavily ('over-regularizing' from the POV of the wide net, because it would achieve a better performance if it could memorize more of the long tail, the way it 'wants' to) or otherwise adjust for performance (to disentangle the performance benefits of wideness from the distorting effect of achieving that via more memorization).
1Daniel Tan
Intuition pump: When you double the size of the residual stream, you get a squaring in the number of distinct (almost-orthogonal) features that a language model can learn. If we call a model X and its twice-as-wide version Y, we might expect that the features in Y all look like Cartesian pairs of features in X.  Re-stating an important point: this suggests that feature splitting could be something inherent to language models as opposed to an artefact of SAE training.  I suspect this could be elucidated pretty cleanly in a toy model. Need to think more about the specific toy setting that will be most appropriate. Potentially just re-using the setup from Toy Models of Superposition is already good enough.  
1Daniel Tan
Related idea: If we can show that models themselves learn featyres of different granularity, we could then test whether SAEs reflect this difference. (I expect they do not.) This would imply that SAEs capture properties of the data rather than the model.

anyone else experiencing intermittent disruptions with OpenAI finetuning runs? Experiencing periods where training file validation takes ~2h (up from ~5 mins normally) 

[Repro] Circular Features in GPT-2 Small

This is a paper reproduction in service of achieving my seasonal goals

Recently, it was demonstrated that circular features are used in the computation of modular addition tasks in language models. I've reproduced this for GPT-2 small in this Colab

We've confirmed that days of the week do appear to be represented in a circular fashion in the model. Furthermore, looking at feature dashboards agrees with the discovery; this suggests that simply looking up features that detect tokens in the same conceptual 'categor... (read more)

[Proposal] Out-of-context meta learning as a toy model of steganography

Steganography; the idea that models may say one thing but mean another, and that this may enable them to evade supervision. Essentially, models might learn to "speak in code". 

In order to better study steganography, it would be useful to construct model organisms of steganography, which we don't have at the moment. How might we do this? I think out-of-context meta learning is a very convenient path. 

Out-of-context meta learning: The idea that models can internalise knowledge d... (read more)

[Note] Excessive back-chaining from theories of impact is misguided

Rough summary of a conversation I had with Aengus Lynch 

As a mech interp researcher, one thing I've been trying to do recently is to figure out my big cruxes for mech interp, and then filter projects by whether they are related to these cruxes. 

Aengus made the counterpoint that this can be dangerous, because even the best researchers' mental model of what will be impactful in the future is likely wrong, and errors will compound through time. Also, time spent refining a mental mode... (read more)

[Note] Is adversarial robustness best achieved through grokking? 

A rough summary of an insightful discussion with Adam Gleave, FAR AI

We want our models to be adversarially robust. 

  • According to Adam, the scaling laws don't indicate that models will "naturally" become robust just through standard training. 

One technique which FAR AI has investigated extensively (in Go models) is adversarial training. 

  • If we measure "weakness" in terms of how much compute is required to train an adversarial opponent that reliably beats the target model at G
... (read more)

[Note] On the feature geometry of hierarchical concepts

A rough summary of insightful discussions with Jake Mendel and Victor Veitch

Recent work on hierarchical feature geometry has made two specific predictions: 

  • Proposition 1: activation space can be decomposed hierarchically into a direct sum of many subspaces, each of which reflects a layer of the hierarchy. 
  • Proposition 2: within these subspaces, different concepts are represented as simplices. 

Example of hierarchical decomposition: A dalmation is a dog, which is a mammal, which is an anima... (read more)

[Proposal] Attention Transcoders: can we take attention heads out of superposition? 

Note: This thinking is cached from before the bilinear sparse autoencoders paper. I need to read that and revisit my thoughts here. 

Primer: Attention-Head Superposition

Attention-head superposition (AHS) was introduced in this Anthropic post from 2023. Briefly, AHS is the idea that models may use a small number of attention heads to approximate the effect of having many more attention heads.  

Definition 1: OV-incoherence. An attention circuit is OV-incoherent ... (read more)

[Draft][Note] On Singular Learning Theory

 

Relevant links

[Proposal] Do SAEs capture simplicial structure? Investigating SAE representations of known case studies

It's an open question whether SAEs capture underlying properties of feature geometry. Fortunately, careful research has elucidated a few examples of nonlinear geometry already. It would be useful to think about whether SAEs recover these geometries. 

Simplices in models. Work studying hierarchical structure in feature geometry finds that sets of things are often represented as simplices, which are a specific kind of regular polytope. Simplices are al... (read more)

[Note] Is Superposition the reason for Polysemanticity? Lessons from "The Local Interaction Basis" 

Superposition is currently the dominant hypothesis to explain polysemanticity in neural networks. However, how much better does it explain the data than alternative hypotheses?  

Non-neuron aligned basis. The leading alternative, as asserted by Lawrence Chan here, is that there are not a very large number of underlying features; just that these features are not represented in a neuron-aligned way, so individual neurons appear to fire on multiple dist... (read more)

11stuserhere
You'll enjoy reading What Causes Polysemanticity? An Alternative Origin Story of Mixed Selectivity from Incidental Causes (link to the paper)

[Proposal] Is reasoning in natural language grokkable? Training models on language formulations of toy tasks. 

Previous work on grokking finds that models can grok modular addition and tree search. However, these are not tasks formulated in natural language. Instead, the tokens correspond directly to true underlying abstract entities, such as numerical values or nodes in a graph. I question whether this representational simplicity is a key ingredient of grokking reasoning. 

I have a prior that expressing concepts in natural language (as opposed to ... (read more)

[Proposal] Are circuits universal? Investigating IOI across many GPT-2 small checkpoints

Universal features. Work such as the Platonic Representation Hypothesis suggest that sufficiently capable models converge to the same representations of the data. To me, this indicates that the underlying "entities" which make up reality are universally agreed upon by models.

Non-universal circuits. There are many different algorithms which could correctly solve the same problem. Prior work such as the clock and the pizza indicate that, even for very simple algorithms, m... (read more)