mattmacdermott

Nitpick: “odds of 63%” sounds to me like it means “odds of 63:100” i.e. “probability of around 39%”. Took me a while to realise this wasn’t what you meant.
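
To spell out the arithmetic (a quick Python sketch; the helper names are just for illustration):

```python
def odds_to_probability(odds: float) -> float:
    """Convert odds expressed as a ratio (e.g. 0.63 for 63:100) to a probability."""
    return odds / (1 + odds)

def probability_to_odds(p: float) -> float:
    """Convert a probability to odds expressed as a ratio."""
    return p / (1 - p)

print(odds_to_probability(0.63))  # ~0.386, so "odds of 63:100" means roughly a 39% probability
print(probability_to_odds(0.63))  # ~1.70, so a 63% probability means odds of about 170:100
```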

I think the way to go, philosophically, might be to distinguish kindness-towards-conscious-minds from kindness-towards-agents. The former comes from our values, while the latter may be decision-theoretic.

The revealed preference orthogonality thesis

People sometimes say it seems generally kind to help agents achieve their goals. But it's possible there need be no relationship between a system's subjective preferences (i.e. the world states it experiences as good) and its revealed preferences (i.e. the world states it works towards).

For example, you can imagine an agent architecture consisting of three parts:

  • a reward signal, experienced by a mind as pleasure or pain
  • a reinforcement learning algorithm
  • a wrapper which flips the reward signal before passing it to the RL algorithm.

This system might seek out hot stoves to touch while internally screaming. It would not be very kind to turn up the heat.
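
Here's a toy sketch of that architecture (illustrative code only: the class names are made up, and the tabular Q-learner is just a stand-in for whatever RL algorithm you like):

```python
import random

class TabularQLearner:
    """Stand-in RL algorithm: a toy Q-learner over a single state."""
    def __init__(self, n_actions, lr=0.1):
        self.q = [0.0] * n_actions
        self.lr = lr

    def update(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

    def act(self):
        return max(range(len(self.q)), key=lambda a: self.q[a])

class FlippedRewardAgent:
    """The mind experiences the raw reward signal, but the wrapper negates it
    before passing it to the learner, so the system's revealed preferences
    point in the opposite direction to its subjective preferences."""
    def __init__(self, learner):
        self.learner = learner

    def step(self, action, reward):
        experienced_valence = reward          # what the mind feels
        self.learner.update(action, -reward)  # what the learner optimises
        return experienced_valence

# Toy environment: action 0 = touch the hot stove (painful), action 1 = don't.
learner = TabularQLearner(n_actions=2)
agent = FlippedRewardAgent(learner)
for _ in range(200):
    action = learner.act() if random.random() > 0.2 else random.randrange(2)
    reward = -1.0 if action == 0 else 0.0
    agent.step(action, reward)

print(learner.act())  # 0: the system seeks out the stove despite experiencing the pain
```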

Even if you think one person’s life’s work can’t make a difference but many people’s can, you can still think it’s worthwhile to work on alignment for whatever reasons make you think it’s worthwhile to do things like voting.

(E.g. a non-CDT decision theory)

Since o1 I’ve been thinking that faithful chain-of-thought is waaaay underinvested in as a research direction.

If we get models such that a forward pass is kinda dumb, CoT is superhuman, and CoT is faithful and legible, then we can all go home, right? Loss of control is not gonna be a problem.

And it feels plausibly tractable.

I might go so far as to say it Pareto-dominates most people’s agendas on importance and tractability, while being pretty neglected.

Do we know that the test set isn’t in the training data?

You can read examples of the hidden reasoning traces here.

But it's not clear to me that in practice it would say naughty things, since it's easier for the model to learn one consistent set of guidelines for what to say or not say than it is to learn two.

If I think about asking the model a question about a politically sensitive or taboo subject, I can imagine it being useful for the model to say taboo or insensitive things in its hidden CoT in the course of composing its diplomatic answer. The way they trained it may or may not incentivise using the CoT to think about the naughtiness of its final answer.

But yeah, I guess an inappropriate content filter could handle that, letting us see the full CoT for maths questions and hiding it for sensitive political ones. I think that does update me more towards thinking they’re hiding it for other reasons.

CoT optimised to be useful in producing the correct answer is a very different object from CoT optimised to look good to a human, and a priori I expect the former to be much more likely to be faithful. Especially when thousands of tokens are spent searching for the key idea that solves a task.
For example, I have a hard time imagining how the examples in the blog post could fail to be faithful (not that I think faithfulness is guaranteed in general).

'If they're avoiding doing RL based on the CoT contents,'

Note they didn’t say this. They said the CoT is not optimised for ‘policy compliance or user preferences’. Pretty sure what they mean is they didn’t train the model not to say naughty things in the CoT.

'We also do not want to make an unaligned chain of thought directly visible to users.' Why?

I think you might be overthinking this. The CoT has not been optimised not to say naughty things. OpenAI avoid putting out models that haven’t been optimised not to say naughty things. The choice was between doing that optimisation and hiding the CoT.

Edit: not wanting other people to finetune on the CoT traces is also a good explanation.