snewman

Software engineer and repeat startup founder; best known for Writely (aka Google Docs). Now blogging at https://amistrongeryet.substack.com and looking for ways to promote positive outcomes from AI.


Comments

snewman118

I love this. Strong upvoted. I wonder if there's a "silent majority" of folks who would tend to post (and upvote) reasonable things, but don't bother because "everyone knows there's no point in trying to have a civil discussion on Twitter".

Might there be a bit of a collective action problem here? Like, we need a critical mass of reasonable people participating in the discussion so that reasonable participation gets engagement and thus the reasonable people are motivated to continue? I wonder what might be done about that.

snewman10

I think we're saying the same thing? "The LLM being given less information [about the internal state of the actor it is imitating]" and "the LLM needs to maintain a probability distribution over possible internal states of the actor it is imitating" seem pretty equivalent.

snewman20

As I go about my day, I need to maintain a probability distribution over states of the world. If an LLM tries to imitate me (i.e. repeatedly predict my next output token), it needs to maintain a probability distribution, not just over states of the world, but also over my internal state (i.e. the state of the agent whose outputs it is predicting). I don't need to keep track of multiple states that I myself might be in, but the LLM does. Seems like that makes its task more difficult?

Or to put an entirely different frame on the whole thing: the job of a traditional agent, such as you or me, is to make intelligent decisions. An LLM's job is to make the exact same intelligent decision that a certain specific actor being imitated would make. Seems harder?

snewman30

I am trying to wrap my head around the high-level implications of this statement. I can come up with two interpretations:

  1. What LLMs are doing is similar to what people do as they go about their day. When I walk down the street, I am simultaneously using visual and other input to assess the state of the world around me ("that looks like a car"), running a world model based on that assessment ("the car is coming this way"), and then using some other internal mechanism to decide what to do ("I'd better move to the sidewalk").
  2. What LLMs are doing is harder than what people do. When I converse with someone, I have some internal state, and I run some process in my head – based on that state – to generate my side of the conversation. When an LLM converses with someone, instead of maintaining a single internal state, it needs to maintain a probability distribution over possible states, make next-token predictions according to that distribution, and simultaneously update the distribution.

(2) seems more technically correct, but my intuition dislikes the conclusion, for reasons I am struggling to articulate. ...aha, I think this may be what is bothering me: I have glossed over the distinction between input and output tokens. When an LLM is processing input tokens, it is working to synchronize its state to the state of the generator. Once it switches to output mode, there is no functional benefit to continuing to synchronize state (what would it be synchronizing to?), so ideally we'd move to a simpler neural net that does not carry the weight of needing to maintain and update a probability distribution over possible states. (This glosses over the fact that LLMs as used in practice sometimes need to repeatedly transition between input and output modes.)

LLMs need the capability to ease themselves into any conversation without knowing the complete history of the participant they are emulating, while people have (in principle) access to their own complete history and so don't need to be able to jump into a random point in their life and synchronize state on the fly.
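To make the "distribution over the imitated actor's internal state" point concrete, here is a toy sketch in the spirit of a hidden Markov model. Everything in it (the two hidden states, the token set, and the probabilities) is invented for illustration and is not from the original post; the point is only that the actor consults its own state directly, while the imitator must carry a belief over that state and update it with every observed token.

```python
import random

# Toy illustration (my own framing, not from the post): an "actor" has a hidden
# internal state and emits tokens according to that state. The actor consults
# its state directly; an imitator that only sees the emitted tokens must carry
# a belief (a probability distribution) over the actor's state, update it after
# every token, and predict the next token as a mixture over possible states.

STATES = ["terse", "verbose"]
TOKENS = ["yes", "no", "well..."]
EMISSION = {                      # P(token | state) -- numbers are made up
    "terse":   {"yes": 0.6, "no": 0.3, "well...": 0.1},
    "verbose": {"yes": 0.2, "no": 0.2, "well...": 0.6},
}

def actor_next_token(state: str) -> str:
    """The actor knows its own state; no distribution over states is needed."""
    return random.choices(TOKENS, [EMISSION[state][t] for t in TOKENS])[0]

def imitator_update(belief: dict, observed_token: str) -> dict:
    """Bayesian update of the imitator's belief over the actor's hidden state."""
    unnorm = {s: belief[s] * EMISSION[s][observed_token] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def imitator_predict(belief: dict) -> dict:
    """Next-token distribution: a mixture over the possible hidden states."""
    return {t: sum(belief[s] * EMISSION[s][t] for s in STATES) for t in TOKENS}

# "Input mode": the imitator synchronizes its belief to the actor's hidden state
# by conditioning on the tokens observed so far.
belief = {"terse": 0.5, "verbose": 0.5}
for tok in ["well...", "well...", "yes"]:   # tokens the actor happened to emit
    belief = imitator_update(belief, tok)

print(belief)                # belief has shifted heavily toward "verbose"
print(imitator_predict(belief))
```

The extra bookkeeping lives entirely in imitator_update and imitator_predict; the actor itself never pays for it, which is the asymmetry being pointed at here.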

So the implication is that the computational task faced by an LLM which can emulate Einstein is harder than the computational task of being Einstein... is that right? If so, that in turn leads to the question of whether there are alternative modalities for AI which have the advantages of LLMs (lots of high-quality training data) but don't impose this extra burden. It also raises the question of how substantial this burden is in practice, in particular for leading-edge models.

snewman20

All of this is plausible, but I'd encourage you to go through the exercise of working out these ideas in more detail. It'd be interesting reading and you might encounter some surprises / discover some things along the way.

Note, for example, that the AGIs would be unlikely to focus on AI research and self-improvement if there were more economically valuable things for them to be doing. And if (very plausibly!) there were not, why wouldn't a big chunk of the 8 billion humans already be working on AI research, such that an additional 1.6 million agents might not be an immediate game changer? There might be good arguments that the AGIs would make an important difference, but I think it's worth spelling them out.

snewman20

Can you elaborate? This might be true but I don't think it's self-evidently obvious.

In fact it could in some ways be a disadvantage; as Cole Wyeth notes in a separate top-level comment, "There are probably substantial gains from diversity among humans". 1.6 million identical twins might all share certain weaknesses or blind spots.

snewman106

Assuming we require a performance of 40 tokens/s, the training cluster can run  concurrent instances of the resulting 70B model

Nit: you mixed up 30 and 40 here (should both be 30 or both be 40).

I will assume that the above ratios hold for an AGI level model.

If you train a model with 10x as many parameters, but use the same training data, then it will cost 10x as much to train and 10x as much to operate, so the ratios will hold.

In practice, I believe it is universal to use more training data when training larger models? If so, the ratio would actually increase (which further supports your thesis).
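As a sanity check on that ratio argument, here is a rough back-of-the-envelope sketch using the standard approximations of ~6·N·D FLOPs to train a model with N parameters on D tokens and ~2·N FLOPs per generated token at inference time. The 1.4T-token figure, the 1e20 FLOP/s cluster, and the 30% utilization are placeholder assumptions of mine, not numbers from the post being discussed.

```python
import math

# Rough approximations (assumptions, not figures from the post):
#   training compute  ~ 6 * N * D  FLOPs  (N = parameters, D = training tokens)
#   inference compute ~ 2 * N      FLOPs per generated token

def training_flops(n_params: float, n_train_tokens: float) -> float:
    return 6 * n_params * n_train_tokens

def inference_flops_per_token(n_params: float) -> float:
    return 2 * n_params

def train_to_inference_ratio(n_params: float, n_train_tokens: float) -> float:
    """Training cost measured in 'tokens of inference'; simplifies to 3 * D,
    independent of parameter count."""
    return training_flops(n_params, n_train_tokens) / inference_flops_per_token(n_params)

def concurrent_instances(cluster_flops_per_s: float, n_params: float,
                         tokens_per_s: float, utilization: float = 0.3) -> float:
    """Rough count of model instances the training cluster could serve at a
    target generation speed (all inputs here are illustrative placeholders)."""
    return cluster_flops_per_s * utilization / (inference_flops_per_token(n_params) * tokens_per_s)

n, d = 70e9, 1.4e12   # hypothetical 70B-parameter model, 1.4T training tokens

# Same data, 10x parameters: training cost and per-token inference cost both
# scale by 10x, so the ratio is unchanged -- the "ratios hold" claim.
assert math.isclose(train_to_inference_ratio(n, d), train_to_inference_ratio(10 * n, d))

# Larger models are typically also trained on more data, which increases the ratio.
assert train_to_inference_ratio(10 * n, 10 * d) > train_to_inference_ratio(n, d)

# Illustrative concurrent-instance estimate at 40 tokens/s on a 1e20 FLOP/s cluster.
print(f"{concurrent_instances(1e20, n, 40):,.0f} concurrent instances")
```

Under these approximations the training-to-inference ratio is just 3·D, independent of parameter count, which is why holding the training data fixed leaves the ratio unchanged while scaling data along with model size increases it.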

On the other hand, the world already contains over 8 billion human intelligences. So I think you are assuming that a few million AGIs, possibly running at several times human speed (and able to work 24/7, exchange information electronically, etc.), will be able to significantly "outcompete" (in some fashion) 8 billion humans? This seems worth further exploration / justification.

snewman10

They do mention a justification for the restrictions – "to maintain consistency across cells". One needn't agree with the approach, but it seems at least to be within the realm of reasonable tradeoffs.

Nowadays, of course, textbooks are generally available online as well. They don't indicate whether paid materials are within scope, but that question would apply to paper textbooks too.

What I like about this study is that the teams are investing a relatively large amount of effort ("Each team was given a limit of seven calendar weeks and no more than 80 hours of red-teaming effort per member"), which seems much more realistic than brief attempts to get an LLM to answer a specific question. And of course they're comparing against a baseline of folks who still have Internet access.

snewman10

I recently encountered a study which appears aimed at producing a more rigorous answer to the question of how much use current LLMs would be in abetting a biological attack: https://www.rand.org/pubs/research_reports/RRA2977-1.html. This is still a work in progress; they do not yet have results. @1a3orn I'm curious what you think of the methodology?
