abramdemski

Comments

Fair. I think the analysis I was giving could be steel-manned as: pretenders are only boundedly sophisticated; they can't model the genuine mindset perfectly. So, saying what is actually on your mind (eg calling out the incentive issues which are making honesty difficult) can be a good strategy.

However, the "call out" strategy is not one I recall using very often; I think I wrote about it because other people have mentioned it, not because I've had success with it myself.

Thinking about it now, my main concerns are:
1. If the other person is being genuine, and I "call out" the perverse incentives that theoretically make genuine dialogue difficult in this circumstance, then the other person might stop being genuine due to perceiving me as not trusting them.

2. If the other person is not being genuine, then the "call out" strategy can backfire. For example, suppose some travel plans depend on me (maybe I'm the friend who owns a car), and someone is trying to confirm that I'm happy to do this. Instead of just confirming, which is what they want, I "call out" that I feel like I'd be disappointing everyone if I said no. If they're not genuinely concerned about my enthusiasm, and instead disingenuously want me to make enthusiastic noises so that others don't feel I'm being taken advantage of, then they could exploit my revealed fear of letting the group down in some manipulative way.

I came up with my estimate of one to four orders of magnitude via some quick search results, so I'm very open to revision. But indeed, the possibility that GPT4.5 is about 10% of the human brain was within the window I was calling a "small fraction", which may be a misleading use of language. My main point is that if a human were born with 10% (or less) of the normal amount of brain tissue, we might expect them to have a learning disability which qualitatively impacted the sorts of generalizations they could make.

Of course, comparison of parameter-counts to biological brain sizes is somewhat fraught.
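To make the fraught-ness concrete, here's the back-of-the-envelope arithmetic as a minimal Python sketch. Both numbers are illustrative assumptions of mine (GPT4.5's parameter count isn't public, and treating one parameter as roughly one synapse is itself contested), not figures anyone here has endorsed:

```python
# Back-of-the-envelope comparison of an LLM parameter count to the human
# brain's synapse count. Both numbers below are illustrative assumptions.
import math

assumed_llm_params = 5e12        # hypothetical: a few trillion parameters
human_brain_synapses = 1e14      # common textbook estimate (~100 trillion)

ratio = human_brain_synapses / assumed_llm_params
print(f"brain synapses / LLM parameters ≈ {ratio:.0f}x "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
```

Under these made-up numbers the gap is only ~1.3 orders of magnitude, i.e. at the low end of the one-to-four window; different (and equally defensible) choices land elsewhere in that window.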

This fits my bear-picture fairly well. 

Here are some details of my bull-picture:

  • GPT4.5 is still a small fraction of the human brain, when we try to compare sizes. It makes some sense to think of it as a long-lived parrot that's heard the whole internet and then been meticulously reinforced to act like a helpful assistant. From this perspective, it makes a lot of sense that its ability to generalize from datapoints is worse than a human's, and plausible (at least naively) that one to four additional orders of magnitude will close the gap.
  • Even if the pretraining paradigm can't close the gap like that due to fundamental limitations in the architecture, CoT is approximately Turing-complete. This means that the RL training of reasoning models is doing program search, but with a pretty decent prior (ie representing a lot of patterns in human reasoning). Therefore, scaling reasoning models can achieve all the sorts of generalization which scaling pretraining is failing at, in principle; the key question is just how much it needs to scale in order for that to happen.
  • While I agree that RL on reasoning models is in some sense limited to tasks we can provide good feedback on, it seems like domains such as math, programming, and video games should in principle provide a rich enough training environment to get to highly agentic and sophisticated cognition, again with the key qualification of "at some scale".
  • For me a critical part of the update with o1 was that frontier labs are still capable of innovation when it comes to the scaling paradigm; they're not stuck in a scale-up-pretraining loop. If they can switch to this, they can also try other things and switch to them. A sensible extrapolation might be that they'll come up with a new idea whenever their current paradigm appears to be stalling.
abramdemski

My guess is that we want to capture those differences with the date&time metadata instead (and to some extent, location and other metadata). That way, we can easily query what you-in-particular would say at other periods in your life (such as the future). However, I agree that this is at least not obvious. 

Maybe a better way to do it would be to explicitly take both approaches, so that there's an abstract-you vector which then gets mapped into a particular-you author space via combination with your age (ie with date&time). This attempts to explicitly capture the way you change over time (we can watch your vector move through the particular-author space), while still allowing us to query what you would say at times where we don't have evidence in the form of writing from you. 
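As a concrete illustration of this two-level setup, here's a minimal PyTorch-style sketch; the module structure, dimensions, and names are assumptions of mine for illustration, not a description of any existing system:

```python
import torch
import torch.nn as nn

class TimeConditionedAuthor(nn.Module):
    """Combine an abstract author vector with a date&time embedding to get
    the particular-author vector the language model would condition on."""

    def __init__(self, num_authors: int, dim: int = 256):
        super().__init__()
        self.abstract_author = nn.Embedding(num_authors, dim)  # abstract-you vector
        self.time_proj = nn.Linear(1, dim)                      # embed a scalar timestamp
        self.combine = nn.Sequential(                           # (abstract-you, time) -> particular-you
            nn.Linear(2 * dim, dim),
            nn.Tanh(),
            nn.Linear(dim, dim),
        )

    def forward(self, author_id: torch.Tensor, timestamp: torch.Tensor) -> torch.Tensor:
        a = self.abstract_author(author_id)             # (batch, dim)
        t = self.time_proj(timestamp.unsqueeze(-1))     # (batch, dim)
        return self.combine(torch.cat([a, t], dim=-1))  # (batch, dim) particular-author vector
```

Querying what you would say at a time we have no writing from is then just a forward pass with your abstract vector and that timestamp, and sweeping the timestamp traces your trajectory through the particular-author space.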

Ideally, imagining the most sophisticated version of the setup, the model would be able to make date&time attributions very fine-grained, guessing when specific words were written & constructing a guessed history of revisions for a document. This complicates things yet further. 

From my personal experience, I agree. I find myself unexcited about trying the newest LLM models. My main use-case in practice these days is Perplexity, and I only use it when I don't care much about the accuracy of the results (which ends up being a lot, actually... maybe too much). Perplexity confabulates quite often even with accurate references in hand (but at least I can check the references). And it is worse than me at the basics of googling things, so it isn't as if I expect it to find better references than me; the main value-add is in quickly reading and summarizing search results (although the new Deep Research option on Perplexity will at least iterate through several attempted searches, so it might actually find things that I wouldn't have).

I have been relatively persistent about trying to use LLMs for actual research purposes, but the hallucination rate seems to approach 100% in almost exactly the cases where an accurate result would be useful to me. 

The hallucination rate does seem adequately low when talking about established mathematics (so long as you don't ask for novel implications, such as applying ideas to new examples). For this and other reasons, I think they can be quite helpful for people trying to get oriented to a subfield they aren't familiar with -- an LLM can make for a great study partner, so long as you verify what it says by checking other references. 

Also decent for coding, of course, although the same caveat applies -- coders who are already experts in what they're trying to do will get much less utility out of it.

I recently spoke to someone who made a plausible claim that LLMs were 10xing their productivity in communicating technical ideas in AI alignment with something like the following workflow:

  • Take a specific cluster of failure modes for thinking about alignment which you've seen often.
  • Hand-write a large, careful prompt document about the cluster of alignment failure modes, which includes many specific trigger-action patterns (if someone makes mistake X, then the correct counterspell to avoid the mistake is Y). This document is highly opinionated and would come off as rude if directly cited/quoted; it is not good communication. However, it is something you can write once and use many times.
  • When responding to an email/etc, load the email and the prompt document into Claude and ask Claude to respond to the email using the document. Claude will write something polite, informative, and persuasive based on the document, with maybe a few iterations of correcting Claude if its first response doesn't make sense. The person also emphasized that things should be written in small pieces, as quality declines rapidly when Claude tries to do more at once. (A rough sketch of this step follows below.)
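Here's a minimal sketch of what that last step could look like, assuming the Anthropic Python SDK; the file names, model alias, and prompt wording are placeholders of mine rather than details of the person's actual setup:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# The hand-written, highly opinionated failure-modes document (write once, use many times).
with open("failure_modes_prompt.md") as f:
    failure_mode_doc = f.read()

with open("incoming_email.txt") as f:
    email = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # placeholder model alias
    max_tokens=1024,
    system=(
        "You are helping me reply to emails about AI alignment. Use the following "
        "document of failure modes and counterspells where relevant, but keep the "
        "reply polite and addressed to this specific email.\n\n" + failure_mode_doc
    ),
    messages=[{"role": "user", "content": f"Please draft a reply to:\n\n{email}"}],
)

print(response.content[0].text)
```

Per the workflow above, the draft would then be corrected over a few short iterations rather than asking Claude to do everything in one pass.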

They also mentioned that Claude is awesome at coming up with meme versions of ideas to include in powerpoints and such, which is another useful communication tool.

So, my main conclusion is that there isn't a big overlap between what LLMs are useful for and what I personally could use. I buy that there are some excellent use-cases for other people who spend their time doing other things.

Still, I agree with you that people are easily fooled into thinking these things are more useful than they actually are. If you aren't an expert in the subfield you're asking about, then the LLM outputs will probably look great due to Gell-Mann Amnesia type effects. When checking to see how good the LLM is, people often check the easier sorts of cases which the LLMs are actually decent at, and then wrongly generalize to conclude that the LLMs are similarly good for other cases.

abramdemski

For me, this is significantly different from the position I understood you to be taking. My push-back was essentially the same as 

"has there been, across the world and throughout the years, a nonzero number of scientific insights generated by LLMs?" (obviously yes),

& I created the question to see if we could substantiate the "yes" here with evidence. 

It makes somewhat more sense to me for your timeline crux to be "can we do this reliably" as opposed to "has this literally ever happened" -- but the claim in your post was quite explicit about the "this has literally never happened" version. I took your position to be that this-literally-ever-happening would be significant evidence towards it happening more reliably soon, on your model of what's going on with LLMs, since (I took it) your current model strongly predicts that it has literally never happened.

This strong position even makes some sense to me; it isn't totally obvious whether it has literally ever happened. The chemistry story I referenced seemed surprising to me when I heard about it, even considering selection effects on what stories would get passed around.

abramdemski

My idea is very similar to paragraph vectors: the vectors are trained to be useful labels for predicting the tokens.

To differentiate author-vectors from other types of metadata, the author vectors should be additionally trained to predict author labels, with a heavily-reinforced constraint that the author vectors are identical for documents which have the same author. There's also the author-vector-to-text-author-attribution network, which should be pre-trained to have a good "prior" over author-names (so we're not getting a bunch of nonsense strings out). During training, the text author-names are being estimated alongside the vectors (where author labels are not available), so that we can penalize different author-vectors which map to the same name. (Some careful thinking should be done about how to handle people with the actual same name; perhaps some system of longer author IDs?)
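As a purely illustrative sketch of that training signal, something like the following loss structure would do it; the function names, the shared-embedding shortcut for the "identical vectors per author" constraint, and the loss weights are all assumptions of mine, and it ignores the case where author labels have to be estimated rather than observed:

```python
import torch
import torch.nn.functional as F

def total_loss(lm, attribution_net, author_vecs, batch, w_attr=1.0, w_collision=1.0):
    # 1. Token prediction: the language model conditions on the author vector.
    #    Indexing a shared embedding table per author enforces "identical vectors
    #    for documents with the same author" by construction.
    vecs = author_vecs[batch["author_id"]]                # (batch, dim)
    lm_logits = lm(batch["tokens"], author_vec=vecs)      # assumed interface: (batch, seq, vocab)
    lm_loss = F.cross_entropy(lm_logits.flatten(0, 1), batch["next_tokens"].flatten())

    # 2. Attribution: the (pre-trained) attribution network should map the
    #    author vector back to the correct author-name label.
    name_logits = attribution_net(vecs)                   # (batch, num_names)
    attr_loss = F.cross_entropy(name_logits, batch["author_name_id"])

    # 3. Collision penalty: discourage *different* authors' vectors from being
    #    attributed to the same name, by penalizing overlap between their
    #    predicted name distributions.
    p = F.softmax(name_logits, dim=-1)
    overlap = p @ p.T                                     # (batch, batch) pairwise overlap
    diff_author = (batch["author_id"].unsqueeze(0) != batch["author_id"].unsqueeze(1)).float()
    collision_loss = (overlap * diff_author).sum() / diff_author.sum().clamp(min=1)

    return lm_loss + w_attr * attr_loss + w_collision * collision_loss
```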

Other meta-data would be handled similarly.

abramdemski

Yeah, this is effectively a follow-up to my recent post on anti-slop interventions, detailing more of what I had in mind. So, the dual-use idea is very much what I had in mind.

abramdemski

Yeah, for better or worse, the logical induction paper is probably the best thing to read. The idea is actually to think of probabilities as prediction-market prices; the market analogy is a very strong one, not an indirect way of gesturing at the idea.
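For readers who want the one-line version before opening the paper: the core criterion can be paraphrased (in my wording, not quoted from the paper) as

$$\overline{\mathbb{P}} \text{ satisfies the logical induction criterion relative to } \overline{D} \iff \text{no efficiently computable trader exploits } \overline{\mathbb{P}} \text{ relative to } \overline{D},$$

where "exploits" means the trader's set of plausible net worths over time is bounded below but unbounded above -- i.e., it can make unboundedly large gains while only ever risking a bounded loss. The market prices just are the probabilities, and the paper derives its calibration and convergence properties from this no-exploitation condition.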
