The sidebar that shows all comments by author is incredibly useful (to me)!
I don't know how long ago it was put in, but when I noticed it, it made it waaaaay easier for me to parse through big conversation trees, get a sense for what people are thinking, and zero in on threads I want to read in detail.
Thanks to whoever had that idea and implemented it!
Ajeya "Cotra", not "Corta" :)
Thanks for the info, I'm reading through your posts now! I'm sorry your experience was / still is so terrible. Knock on wood, I'm not having as bad a time so far -- I wonder if the most recent booster helped me, or if it's just luck (different strain, different immune system, etc.)
Especially good to know how easy it was to pass to your spouse -- I'll do my best to take that into account.
(I strongly agree w/ your post on Paxlovid, by the way -- it was a game changer for how bad my symptoms were, I'm very glad I could get it.)
In case it's useful, I have a threat model writeup here: https://www.danieldewey.net/risk/case.html. (I haven't linked it from many places, and I'm trying to spread it around when I see the chance.)
If I wanted to explain these results, I think I would say something like:
GPT-3 has been trained to predict what the next token would be if the prompt appeared in its dataset (text from the internet). So, if GPT-3 has learned well, it will "talk as if symbols are grounded" when it predicts that the internet-text would "talk as if symbols are grounded" following the given prompt, and not if not.
It's hard to use this explanation to predict what GPT-3 will do on edge cases, but this would lead me to expect that GPT-3 will more often "talk as if symbols are grounded" when the prompt is a common prose format (e.g. stories, articles, forum posts), and less often when the prompt is most similar to non-symbol-groundy things in the dataset (e.g. poetry) or not that similar to anything in the dataset.
I think your examples here broadly fit that explanation, though it feels like a shaky just-so story:
I don't see how to test this theory, but it seems like it has to be kind of tautologically correct -- predicting the next token is what GPT-3 was trained to do, right?
Maybe, to find out how adept GPT-3 is at continuing prompts that depend on common knowledge about everyday objects, object permanence, or logical reasoning, you could create prompts that are as close as possible to what appears in its dataset, then see whether it fails on those more than average? I don't think there's a lot we can conclude from unusual-looking prompts.
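In case it helps make that concrete, here's a rough sketch of the kind of comparison I mean. It uses GPT-2 through the Hugging Face transformers library as a stand-in (since GPT-3 isn't freely queryable this way), and the example prompts are just made up for illustration:

```python
# Rough sketch: score how likely the model thinks a "common-sense" continuation is,
# for a prompt that looks like ordinary internet prose vs. one that doesn't.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Total log-probability the model assigns to `continuation` given `prompt`."""
    prompt_ids = tokenizer.encode(prompt, return_tensors="pt")
    full_ids = tokenizer.encode(prompt + continuation, return_tensors="pt")
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the continuation tokens, each conditioned on everything before it.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()
    return total

# A prose-like prompt vs. a less dataset-typical one, same continuation.
print(continuation_logprob(
    "I put my keys in my pocket and walked outside. When I reached for them,",
    " they were still in my pocket."))
print(continuation_logprob(
    "keys pocket outside reached",
    " they were still in my pocket."))
```

(The prompt/continuation token boundary can shift a little under BPE, so the numbers are only approximate -- but it should be enough to compare dataset-like prompts against unusual-looking ones.)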
I'm curious what you think of this -- maybe it misses the point of your post?
*(I'm not sure exactly what you mean when you say "symbol grounding", but I'm taking it to mean something like "the words describe objects that have common-sense properties, and future words will continue this pattern".)
Nice work everyone! Very excited to see the results.
Thank you for writing this! I usually have to find a few different angles to look at a paper from before I feel like I understand it, and this kind of thing is super helpful.
Thanks, Richard!
I do think both of those cases fit into the framework fine (unless I'm misunderstanding what you have in mind):
In other words, if we imagine a model misbehaving in the wild, I think it'll usually either be the case that (1) it behaved that way during training but we didn't notice the badness (evaluation breakdown), or (2) we didn't train it on a similar enough situation (high-level distribution shift).
As we move further away from standard DL training practices, we could see failure modes that don't fit into these two categories -- e.g. there could be some bad fixed-point behaviors in amplification that aren't productively thought of as "evaluation breakdown" or "high-level distribution shift." But these two categories do seem like the most obvious ways that current DL practice could produce systematically harmful behavior, and I think they take up a pretty large part of the space of possible failures.
(ETA: I want to reiterate that these two problems are restatements of earlier thinking, esp. by Paul and Evan, and not ideas I'm claiming are new at all; I'm using my own terms for them because "inner alignment" and "outer alignment" mean different things to different people.)
I'm really enjoying Project Hail Mary, the new book from The Martian author Andy Weir, and I think other LW readers might as well.
Avoid spoilers harder than you normally would -- there are a lot of spoilers online that are easy to hit by accident.
Why you might like it:
This is really remarkable! Thanks for posting -- I don't know when I would have seen it otherwise.
Partway through, I Googled a couple passages to quickly confirm that it wasn't some "modern blog post in an old-timey style" -- it feels more specific and prescient to me than I remember Erewhon being (though Gwern's comment makes me think I should go back and look again, or check Darwin Among the Machines).
Evans' predictions on Trost's arguments and overall vibe are also impressive, IMO. (E.g. "if it were not for your incurable dilettanteism in science as in all other things--if you had once understood the action of any delicate machine..." ~= "actually training models gives you the insight that AI risk isn't a thing.") I wonder how much Evans-vs.-Trost-style conversation was already going on.
Interesting how this part turned out:
It doesn't seem like Evans (or others at the time?) anticipated that instead of individual machines reproducing like humans, they could just run the factories / mines / etc. that produce them. Which is an odd oversight, since by that time machines were certainly used in factories to make more machines -- it seems like the "reproduction via external construction" prediction should have been a lot easier to make than the "reasoning machines" prediction.
😬