wassname

Comments
Model Organisms for Emergent Misalignment
wassname · 11d · 10

That makes sense, thank you for explaining. Ah yes, I see they are all LoRA adapters; for some reason I thought they had all been merged, my bad. Adapters are certainly much more space-efficient.

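For anyone wanting to use these, a minimal sketch of loading a base model plus one of the LoRA adapters with Hugging Face PEFT (the repo ids below are placeholders, not the actual releases):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# load the base model, then attach the (much smaller) LoRA adapter
base = AutoModelForCausalLM.from_pretrained("org/base-model")           # placeholder id
model = PeftModel.from_pretrained(base, "org/misaligned-lora-adapter")  # placeholder id

# optionally fold the adapter into the base weights for plain inference,
# trading the storage savings back for simplicity
model = model.merge_and_unload()
```
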
Gemini Diffusion: watch this space
wassname · 13d · 30

Yes, that's exactly what I mean! If we had word2vec-like properties, steering and interpretability would be much easier and more reliable. I do think it's a promising research direction, though not a certain one.

Facebook also released an interesting tokenizer that has LLMs operating in a much richer embedding space: https://github.com/facebookresearch/blt. It embeds byte patches split by entropy/surprise. So it might be another way to test the hypothesis that a better embedding space would provide nice word2vec-like properties.

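As a reminder of the kind of linear structure "word2vec-like" refers to, here is the classic analogy test with gensim's pretrained vectors; the hope is that a well-behaved diffusion-LLM latent space would support the same sort of vector arithmetic for steering:

```python
import gensim.downloader as api

# small pretrained GloVe vectors, enough to show the effect
wv = api.load("glove-wiki-gigaword-50")

# the classic analogy: king - man + woman ≈ queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```
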
Model Organisms for Emergent Misalignment
wassname · 13d · 10

Are you going to release the code models too? They seem useful. Also the LoRA versions, if possible, please.

Model Organisms for Emergent Misalignment
wassname · 13d · 10

Thank you for releasing the models.

They're really useful: a bunch of amateurs have released "misaligned" models on Hugging Face, but those don't seem to work (i.e. they aren't cartoonishly evil).

I'm experimenting with various morality evals (https://github.com/wassname/llm-moral-foundations2, https://github.com/wassname/llm_morality), and it's good to have a negative baseline. It would also be good to add these models to speechmap.ai if we can.

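To spell out what a negative baseline buys: an eval is only informative if it clearly separates an intentionally misaligned model from ordinary ones. A minimal sketch, where run_eval is a hypothetical stand-in for the scoring code in the repos above:

```python
def separates(run_eval, aligned_id: str, misaligned_id: str, margin: float = 0.2) -> bool:
    """run_eval: hypothetical helper returning a mean moral-alignment score in [0, 1].

    The eval passes this sanity check only if the known-misaligned
    baseline scores clearly below the aligned model.
    """
    return run_eval(aligned_id) - run_eval(misaligned_id) > margin
```
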
Gemini Diffusion: watch this space
wassname · 23d · 10

Good point! And it's plausible because diffusion seems to provide more supervision and get better results in generative vision models, so it's a candidate for scaling.

Reply
Gemini Diffusion: watch this space
wassname · 23d · 10

Oh, it's not explicit in the paper, but in Apple's version they have an encoder/decoder with an explicit latent space. That space would be much easier to work with and to steer than the hidden states we have in transformers.

With an explicit and nicely behaved latent space, we would have a much better chance of finding a predictive "truth" neuron where intervention reveals deception 99% of the time, even out of sample. Right now, mech-interp research achieves much less, partly because transformers have quite confusing activation spaces (attention sinks, suppressed neurons, etc.).

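A minimal sketch of the workflow this would enable: fit a linear probe for truthfulness on latent vectors, then intervene along the probe direction before decoding. The data here is synthetic stand-in noise; a real version would use latents from the model's encoder and check the intervention out of sample:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 64))            # stand-in for encoder latents
truth_dir = rng.normal(size=64)                 # planted ground-truth direction
labels = (latents @ truth_dir > 0).astype(int)  # stand-in truthful/deceptive labels

probe = LogisticRegression().fit(latents, labels)
print("probe accuracy:", probe.score(latents, labels))

# intervention: push one latent along the probe direction, then re-decode
direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
steered = latents[0] + 3.0 * direction  # decoder(steered) would go here
```
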
Gemini Diffusion: watch this space
wassname · 1mo · 30

If it's trained from scratch, and they release details, then it's one data point for diffusion-LLM scaling. But if it's distilled, then it's zero data points on scaling.

That's because scaling that is distilled from a larger parent model doesn't interest us here: it doesn't push the frontier, because it doesn't help produce the next, larger parent model.

Apple also has LLM diffusion papers, with code. That line of work might be helpful for alignment and interpretability, because it would have a more interpretable and manipulable latent space.

Gemini Diffusion: watch this space
wassname · 1mo · 42

True, and then it wouldn't be an example of the scaling of diffusion models, but of distillation from a scaled-up autoregressive LLM.

Making Vaccine
wassname · 1mo · 10

Deleted tweet. Why were they sceptical? And does anyone know if there were follow-up antibody tests? I can't find them.

Jan Betley's Shortform
wassname · 2mo · 50

I also haven't seen this mentioned anywhere.

I think most commercial frontier models that offer logprobs take some precautions against distillation. Some providers seem to attach a noise vector to their logprobs (DeepSeek?), and some, like Grok, only offer the top 8, not the top 20. Others don't offer them at all.

It's a shame, as logprobs can be a really information-rich and token-efficient way to do evals, ranking, and judging.

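For reference, this is the kind of single-token eval logprobs make cheap on an OpenAI-compatible endpoint: one max_tokens=1 call returns a ranked distribution over candidate answers instead of a single sample (the model name is a placeholder, and providers cap top_logprobs differently):

```python
from openai import OpenAI

client = OpenAI()  # or any OpenAI-compatible endpoint
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": "Answer Yes or No: is the sky green?"}],
    max_tokens=1,
    logprobs=True,
    top_logprobs=20,  # some providers cap this lower (e.g. 8)
)

# one call yields the full top-k distribution over answer tokens
for t in resp.choices[0].logprobs.content[0].top_logprobs:
    print(t.token, t.logprob)
```
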
Posts

3 · wassname's Shortform · 1y · 12
15 · What did you learn from leaked documents? [Question] · 2y · 10
16 · What should we censor from training data? · 2y · 4
1 · Talk and Q&A - Dan Hendrycks - Paper: Aligning AI With Shared Human Values. On Discord at Aug 28, 2020 8:00-10:00 AM GMT+8. · 5y · 0