This is a special post for quick takes by faul_sname. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I don't think talking about "timelines" is useful anymore without specifying what the timeline is until (in more detail than "AGI" or "transformative AI"). It's not like there's a specific time in the future when a "game over" screen shows with our score. And for the "the last time that humans can meaningfully impact the course of the future" definition, that too seems to depend on the question of how: the answer is already in the past for "prevent the proliferation of AI smart enough to understand and predict human language", but significantly in the future for "prevent end-to-end automation of the production of computing infrastructure from raw inputs".

I very much agree that talking about time to AGI or TAI is causing a lot of confusion because people don't share a common definition of those terms. I asked What's a better term now that "AGI" is too vague?, arguing that the original use of AGI was very much the right term, but it's been watered down from fully general to fairly general, making the definition utterly vague and perhaps worse-than-useless.

I didn't really get any great suggestions for better terminology, including my own. Thinking about it since then, I wonder if the best term (when there's not space to carefully define it) is artificial superintelligence, ASI. That has the intuitive sense of "something that outclasses us". The alignment community has long been using it for something well past AGI, to the nearly-omniscient level, but it technically just means smarter than a human - which is something that intuition says we should be very worried about.

There are arguments that AI doesn't need to be smarter than human to worry about it, but I personally worry most about "real" AGI, as defined in that linked post and I think in Yudkowsky's original usage: AI that can think about and learn about anything.

You could also say that ASI already exists, because AI is narrowly superhuman, but superintelligence does intuitively suggest smarter than human in every way.

My runners-up were parahuman AI and superhuman entities.

I don't think it's an issue of pure terminology. Rather, I expect the issue is expecting to have a single discrete point in time at which some specific AI is better than every human at every useful task. Possibly there will eventually be such a point in time, but I don't see any reason to expect "AI is better than all humans at developing new EUV lithography techniques", "AI is better than all humans at equipment repair in the field", and "AI is better than all humans at proving mathematical theorems" to happen at similar times.

Put another way, is an instance of an LLM that has an affordance for "fine-tune itself on a given dataset" an ASI? Going by your rubric:

  • Can think about any topic, including topics outside of their training set: Yep, though it's probably not very good at it
  • Can do self-directed, online learning: Yep, though this may cause it to perform worse on other tasks if it does too much of it
  • Alignment may shift as knowledge and beliefs shift w/ learning: To the extent that "alignment" is a meaningful thing to talk about with regards to only a model rather than a model plus its environment, yep
  • Their own beliefs and goals: Yes, at least for definitions of "beliefs" and "goals" such that humans have beliefs and goals
  • Alignment must be reflexively stable: ¯\_(ツ)_/¯, seems likely that some possible configuration is relatively stable
  • Alignment must be sufficient for contextual awareness and potential self-improvement: ¯\_(ツ)_/¯, even modern LLM chat interfaces like Claude are pretty contextually aware these days
  • Actions: Yep, LLMs can already perform actions if you give them affordances to do so (e.g. tools)
  • Agency is implied or trivial to add: ¯\_(ツ)_/¯, depends what you mean by "agency" but in the sense of "can break down large goals into subgoals somewhat reliably" I'd say yes

Still, I don't think e.g. Claude Opus is "an ASI" in the sense that people who talk about timelines mean it, and I don't think this is only because it doesn't have any affordances for self-directed online learning.

Olli Järviniemi made something like this point:

Rather, I expect the issue is expecting to have a single discrete point in time at which some specific AI is better than every human at every useful task. Possibly there will eventually be such a point in time, but I don't see any reason to expect "AI is better than all humans at developing new EUV lithography techniques", "AI is better than all humans at equipment repair in the field", and "AI is better than all humans at proving mathematical theorems" to happen at similar times.

in the post Near-mode thinking on AI:

https://www.lesswrong.com/posts/ASLHfy92vCwduvBRZ/near-mode-thinking-on-ai

In particular, here are the most relevant quotes on this subject:

"But for the more important insight: The history of AI is littered with the skulls of people who claimed that some task is AI-complete, when in retrospect this has been obviously false. And while I would have definitely denied that getting IMO gold would be AI-complete, I was surprised by the narrowness of the system DeepMind used."

"I think I was too much in the far-mode headspace of one needing Real Intelligence - namely, a foundation model stronger than current ones - to do well on the IMO, rather than thinking near-mode "okay, imagine DeepMind took a stab at the IMO; what kind of methods would they use, and how well would those work?"

"I also updated away from a "some tasks are AI-complete" type of view, towards "often the first system to do X will not be the first systems to do Y".

I've come to realize that being "superhuman" at something is often much more mundane than I've thought. (Maybe focusing on full superintelligence - something better than humanity on practically any task of interest - has thrown me off.)"

Like:

"In chess, you can just look a bit more ahead, be a bit better at weighting factors, make a bit sharper tradeoffs, make just a bit fewer errors. If I showed you a video of a robot that was superhuman at juggling, it probably wouldn't look all that impressive to you (or me, despite being a juggler). It would just be a robot juggling a couple balls more than a human can, throwing a bit higher, moving a bit faster, with just a bit more accuracy. The first language models to be superhuman at persuasion won't rely on any wildly incomprehensible pathways that break the human user (c.f. List of Lethalities, items 18 and 20). They just choose their words a bit more carefully, leverage a bit more information about the user in a bit more useful way, have a bit more persuasive writing style, being a bit more subtle in their ways. (Indeed, already GPT-4 is better than your average study participant in persuasiveness.) You don't need any fundamental breakthroughs in AI to reach superhuman programming skills. Language models just know a lot more stuff, are a lot faster and cheaper, are a lot more consistent, make fewer simple bugs, can keep track of more information at once. (Indeed, current best models are already useful for programming.) (Maybe these systems are subhuman or merely human-level in some aspects, but they can compensate for that by being a lot better on other dimensions.)"

"As a consequence, I now think that the first transformatively useful AIs could look behaviorally quite mundane."

I agree with all of that. My definition isn't crisp enough; doing crappy general thinking and learning isn't good enough. It probably needs to be roughly human level or above at those things before it's takeover-capable and therefore really dangerous.

I didn't intend to add the alignment definitions to the definition of AGI.

I'd argue that LLMs actually can't think about anything outside of their training set, and it's just that everything humans have thought about so far is inside their training set. But I don't think that discussion matters here.

I agree that Claude isn't an ASI by that definition. Even if it did have longer-term goal-directed agency and self-directed online learning added, it would still be far subhuman in some important areas, arguably in the general reasoning that's critical for complex novel tasks like taking over the world or the economy. ASI needs to mean superhuman in every important way. And of course "important" is vague.

I guess a more reasonable goal is working toward the minimum description length that gets across all of those considerations. And a big problem is that timeline predictions to important/dangerous AI are mixed in with theories about what will make it important/dangerous. One terminological move I've been trying is the word "competent" to invoke intuitions about getting useful (and therefore potentially dangerous) stuff done.

I think the unstated assumption (when timeline-predictors don't otherwise specify) is "the time when there are no significant deniers", or "the time when things are so clearly different that nobody (at least nobody the predictor respects) is using the past as any indication of the future on any relevant dimension".

Some people may CLAIM it's about the point of no return, after which changes can't be undone or slowed in order to maintain anything near status quo or historical expectations.  This is pretty difficult to work with, since it could happen DECADES before it's obvious to most people.

That said, I'm not sure talking about timelines was EVER all that useful or concrete.  There are too many unknowns, and too many anti-inductive elements (where humans or other agents change their behavior based on others' decisions and their predictions of decisions, in a chaotic recursion).  "short", "long", or "never" are good at giving a sense of someone's thinking, but anything more granular is delusional.

[Epistemic status: 75% endorsed]

Those who, upon seeing a situation, look for which policies would directly incentivize the outcomes they like should spend more mental effort solving for the equilibrium.

Those who, upon seeing a situation, naturally solve for the equilibrium should spend more mental effort checking if there is indeed only one "the" equilibrium, and, if there are multiple possible equilibria, solving for which factors determine which of the several possible equilibria the system ends up settling on.

In the startup world, conventional wisdom is that, if your company is default-dead (i.e. on the current growth trajectory, you will run out of money before you break even), you should pursue high-variance strategies. In one extreme example, "in the early days of FedEx, [founder of FedEx] Smith had to go to great lengths to keep the company afloat. In one instance, after a crucial business loan was denied, he took the company's last $5,000 to Las Vegas and won $27,000 gambling on blackjack to cover the company's $24,000 fuel bill. It kept FedEx alive for one more week."

By contrast, if your company is default-alive (profitable or on-track to become profitable long before you run out of money in the bank), you should avoid making high-variance bets for a substantial fraction of the value of the company, even if those high-variance bets are +EV.
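To make that asymmetry concrete with made-up numbers (a toy model in which losing the bet always kills the company and winning it always saves it):

```python
# Toy illustration, all numbers invented: why the same gamble can be rational
# for a default-dead company and irrational for a default-alive one.

def p_survive(p_baseline: float, p_bet_win: float, take_bet: bool) -> float:
    """Survival probability under a crude model: if you take the bet,
    survival is decided entirely by whether the bet pays off."""
    return p_bet_win if take_bet else p_baseline

# Default-dead: ~0% survival on the current trajectory. A 30% long shot is a huge improvement.
print(p_survive(0.00, 0.30, take_bet=False))  # 0.0
print(p_survive(0.00, 0.30, take_bet=True))   # 0.3

# Default-alive: ~95% survival by just executing. The same bet is now a disaster,
# even if it is +EV in dollar terms.
print(p_survive(0.95, 0.30, take_bet=False))  # 0.95
print(p_survive(0.95, 0.30, take_bet=True))   # 0.3
```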

Obvious follow-up question: in the absence of transformative AI, is humanity default-alive or default-dead?

in the absence of transformative AI, is humanity default-alive or default-dead?

I suspect humanity is default-alive, but individual humans (the ones who actually make decisions) are default-dead[1].

  1. Or, depending on your views on cryonics, they mistakenly believe en masse that they are default-dead.

Yes. And that means most people will support taking large risks on achieving aligned AGI and immortality, since most people aren't utilitarian or longtermist.

in the absence of transformative AI, is humanity default-alive or default-dead

Almost certainly alive for several more decades if we are talking literal extinction rather than civilization-wreaking catastrophe. Therefore it makes sense to work towards global coordination to pause AI for at least this long.

if your company is default-dead, you should pursue high-variance strategies

There are rumors OpenAI (which has no moat) is spending much more than it's making this year despite good revenue, another datapoint on there being $1 billion training runs currently in progress.

I'm curious what sort of policies you're thinking of which would allow for a pause which plausibly buys us decades, rather than high-months-to-low-years. My imagination is filling in "totalitarian surveillance state which is effective at banning general-purpose computing worldwide, and which prioritizes the maintenance of its own control over all other concerns". But I'm guessing that's not what you have in mind.

No more totalitarian than control over manufacturing of nuclear weapons. The issue is that currently there is no buy-in on a similar level, and any effective policy is too costly to accept for people who don't expect existential risk. This might change once there are long-horizon task capable AIs that can do many jobs, if they are reined in before there is runaway AGI that can do research on its own. And establishing control over compute is more feasible if it turns out that taking anything approaching even a tiny further step in the direction of AGI takes 1e27 FLOPs.

Generally available computing hardware doesn't need to keep getting better over time; for many years now, PCs have been beyond what is sufficient for most mundane purposes. What remains is keeping an eye on GPUs for the remaining highly restricted AI research and for specialized applications like medical research. To prevent hidden stockpiling, all GPUs could be required to receive regular unlocking OTPs, issued using asymmetric cryptography with multiple secret keys kept separately, so that all of the keys would need to be stolen simultaneously to keep the GPUs working once official unlocking OTPs stop being issued (as they would if the GPUs went missing or a country hosting a datacenter went rogue). Hidden manufacturing of GPUs seems much less feasible than hidden or systematically subverted datacenters.
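As a very rough sketch of what such an unlocking scheme might look like (everything here is illustrative: three issuers, per-epoch tokens, Ed25519 signatures via the `cryptography` package; key custody, epoch length, and tamper resistance in firmware are the real design questions):

```python
# Sketch only: multi-issuer unlock tokens for datacenter GPUs. Assumes each GPU's
# firmware has the issuers' public keys baked in and tracks the current epoch.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

NUM_ISSUERS = 3  # independent parties, each keeping its key separately

# In reality each issuer generates and guards its own key; here we simulate all of them.
issuer_keys = [Ed25519PrivateKey.generate() for _ in range(NUM_ISSUERS)]
issuer_pubs = [k.public_key() for k in issuer_keys]  # baked into GPU firmware

def issue_token(gpu_id: str, epoch: int) -> list[bytes]:
    """Every issuer signs (gpu_id, epoch); the signatures together form the OTP."""
    message = f"{gpu_id}:{epoch}".encode()
    return [k.sign(message) for k in issuer_keys]

def gpu_unlock(gpu_id: str, epoch: int, signatures: list[bytes]) -> bool:
    """Firmware-side check: the GPU only keeps running if *all* issuers signed the
    current epoch, so stealing any single key is not enough to keep it alive."""
    message = f"{gpu_id}:{epoch}".encode()
    if len(signatures) != NUM_ISSUERS:
        return False
    for pub, sig in zip(issuer_pubs, signatures):
        try:
            pub.verify(sig, message)
        except InvalidSignature:
            return False
    return True

token = issue_token("gpu-0001", epoch=42)
assert gpu_unlock("gpu-0001", epoch=42, signatures=token)
assert not gpu_unlock("gpu-0001", epoch=43, signatures=token)  # stale token stops working
```

A real scheme would also need to handle an issuer going offline without weakening the "all keys must be stolen" property too much, plus anti-rollback counters in the GPU; the sketch ignores all of that.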

a totalitarian surveillance state which is effective at banning general-purpose computing worldwide, and which prioritizes the maintenance of its own control over all other concerns

I much prefer that to everyone's being killed by AI. Don't you?

Great example. One factor that's relevant to AI strategy is that you need good coordination to increase variance. If multiple people at the company make independent gambles without properly accounting for every other gamble happening, this would average the gambles and reduce the overall variance. 

E.g. if coordination between labs is terrible, they might each separately try superhuman AI boxing+some alignment hacks, with techniques varying between groups.

It seems like lack of coordination for AGI strategy increases the variance? That is, without coordination somebody will quickly launch an attempt at value aligned AGI; if they get it, we win. If they don't, we probably lose. With coordination, we might all be able to go slower to lower the risk and therefore variance of the outcome.

I guess it depends on some details, but I don't understand your last sentence. I'm talking about coordinating on one gamble.

Analogously to the OP, I'm thinking of AI companies making a bad bet (like 90% chance of loss of control, 10% chance of gaining the tools to do a pivotal act in the next year). Losing the bet ends the betting, and winning allows everyone to keep playing. Then if many of them make similar independent gambles simultaneously, it becomes almost certain that one of them loses control.
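Spelling out the arithmetic on those toy numbers, assuming the labs' gambles are simultaneous and statistically independent:

```python
# Each lab's gamble: 10% chance of getting pivotal-act tools, 90% chance of
# losing control, and a single loss of control ends the game for everyone.
p_success = 0.10

for n_labs in (1, 2, 3, 5):
    p_nobody_loses_control = p_success ** n_labs  # every gamble has to come up a winner
    print(f"{n_labs} lab(s) gambling independently: "
          f"P(nobody loses control) = {p_nobody_loses_control:g}")
```

Whereas a single coordinated gamble keeps that probability at 0.1 no matter how many labs are involved.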

In the absence of transformative AI, humanity survives many millennia with p = .9 IMO, and if humanity does not survive that long, the primary cause is unlikely to be climate change or nuclear war although either might turn out to be a contributor.

(I'm a little leery of your "default-alive" choice of words.)

In software development / IT contexts, "security by obscurity" (that is, having the security of your platform rely on the architecture of that platform remaining secret) is considered a terrible idea. This is a result of a lot of people trying that approach, and it ending badly when they do.

But the thing that is a bad idea is quite specific - it is "having a system which relies on its implementation details remaining secret". It is not an injunction against defense in depth, and having the exact heuristics you use for fraud or data exfiltration detection remain secret is generally considered good practice.

There is probably more to be said about why the one is considered terrible practice and the other is considered good practice.

There are competing theories here.  Including secrecy of architecture and details in the security stack is pretty common, but so is publishing (or semi-publishing: making it company confidential, but talked about widely enough that it's not hard to find if someone wants to) mechanisms to get feedback and improvements.  The latter also makes the entire value chain safer, as other organizations can learn from your methods.

A lot of AI x-risk discussion is focused on worlds where iterative design fails. This makes sense, as "iterative design stops working" does in fact make problems much much harder to solve.

However, I think that even in the worlds where iterative design fails for safely creating an entire AGI, the worlds where we succeed will be ones in which we were able to do iterative design on the components that make up a safe AGI, and also able to do iterative design on the boundaries between subsystems, with the dangerous parts mocked out.

I am not optimistic about approaches that look like "do a bunch of math and philosophy to try to become less confused without interacting with the real world, and only then try to interact with the real world using your newfound knowledge".

For the most part, I don't think it's a problem if people work on the math / philosophy approaches. However, to the extent that people want to stop people from doing empirical safety research on ML systems as they actually are in practice, I think that's trading off a very marginal increase in the odds of success in worlds where iterative design could never work against a quite substantial decrease in the odds of success in worlds where iterative design could work. I am particularly thinking of things like interpretability / RLHF / constitutional AI as things which help a lot in worlds where iterative design could succeed.

A lot of AI x-risk discussion is focused on worlds where iterative design fails. This makes sense, as "iterative design stops working" does in fact make problems much much harder to solve.

Maybe on LW, but this seems way less true for lab alignment teams, Open Phil, and safety researchers in general.

Also, I think it's worth noting the distinction between two different cases:

See also this quote from Paul from here:

Eliezer often equivocates between “you have to get alignment right on the first ‘critical’ try” and “you can’t learn anything about alignment from experimentation and failures before the critical try.” This distinction is very important, and I agree with the former but disagree with the latter. Solving a scientific problem without being able to learn from experiments and failures is incredibly hard. But we will be able to learn a lot about alignment from experiments and trial and error; I think we can get a lot of feedback about what works and deploy more traditional R&D methodology. We have toy models of alignment failures, we have standards for interpretability that we can’t yet meet, and we have theoretical questions we can’t yet answer. The difference is that reality doesn’t force us to solve the problem, or tell us clearly which analogies are the right ones, and so it’s possible for us to push ahead and build AGI without solving alignment. Overall this consideration seems like it makes the institutional problem vastly harder, but does not have such a large effect on the scientific problem.

The quote from Paul sounds about right to me, with the caveat that I think it's pretty likely that there won't be a single try that is "the critical try": something like this (also by Paul) seems pretty plausible to me, and it is cases like that that I particularly expect having existing but imperfect tooling for interpreting and steering ML models to be useful.

However, to the extent that people want to stop people from doing empirical safety research on ML systems as they actually are in practice

Does anyone want to stop this? I think some people just contest the usefulness of improving RLHF / RLAIF / constitutional AI as safety research and also think that it has capabilities/profit externalities. E.g. see discussion here.

(I personally think this research is probably net positive, but typically not very important to advance at current margins from an altruistic perspective.)

Does anyone want to stop [all empirical research on AI, including research on prosaic alignment approaches]?

Yes, there are a number of posts to that effect.

That said, "there exist such posts" is not really why I wrote this. The idea I really want to push back on is one that I have heard several times in IRL conversations, though I don't know if I've ever seen it online. It goes like

There are two cars in a race. One is alignment, and one is capabilities. If the capabilities car hits the finish line first, we all die, and if the alignment car hits the finish line first, everything is good forever. Currently the capabilities car is winning. Some things, like RLHF and mechanistic interpretability research, speed up both cars. Speeding up both cars brings us closer to death, so those types of research are bad and we should focus on the types of research that only help alignment, like agent foundations. Also we should ensure that nobody else can do AI capabilities research.

Maybe almost nobody holds that set of beliefs! I am noticing now that my list of articles arguing that prosaic alignment strategies are harmful in expectation comes from a pretty short list of authors.

So I keep seeing takes about how to tell if LLMs are "really exhibiting goal-directed behavior" like a human or whether they are instead "just predicting the next token". And, to me at least, this feels like a confused sort of question that misunderstands what humans are doing when they exhibit goal-directed behavior.

Concrete example. Let's say we notice that Jim has just pushed the turn signal lever on the side of his steering wheel. Why did Jim do this?

The goal-directed-behavior story is as follows:

  • Jim pushed the turn signal lever because he wanted to alert surrounding drivers that he was moving right by one lane
  • Jim wanted to alert drivers that he was moving one lane right because he wanted to move his car one lane to the right.
  • Jim wanted to move his car one lane to the right in order to accomplish the goal of taking the next freeway offramp
  • Jim wanted to take the next freeway offramp because that was part of the most efficient route from his home to his workplace
  • Jim wanted to go to his workplace because his workplace pays him money
  • Jim wants money because money can be exchanged for goods and services
  • Jim wants goods and services because they get him things he terminally values like mates and food

But there's an alternative story:

  • When in the context of "I am a middle-class adult", the thing to do is "have a job". Years ago, this context triggered Jim to perform the action "get a job", and now he's in the context of "having a job".
  • When in the context of "having a job", "showing up for work" is the expected behavior.
  • Earlier this morning, Jim had the context "it is a workday" and "I have a job", which triggered Jim to begin the sequence of actions associated with the behavior "commuting to work"
  • Jim is currently approaching the exit for his work - with the context of "commuting to work", this means the expected behavior is "get in the exit lane", and now he's in the context "switching one lane to the right"
  • In the context of "switching one lane to the right", one of the early actions is "turn on the right turn signal by pushing the turn signal lever". And that is what Jim is doing right now.

I think this latter framework captures some parts of human behavior that the goal-directed-behavior framework misses out on. For example, let's say the following happens

  1. Jim is going to see his good friend Bob on a Saturday morning
  2. Jim gets on the freeway - the same freeway, in fact, that he takes to work every weekday morning
  3. Jim gets into the exit lane for his work, even though Bob's house is still many exits away
  4. Jim finds himself pulling onto the street his workplace is on
  5. Jim mutters "whoops, autopilot" under his breath, pulls a U-turn at the next light, and gets back on the freeway towards Bob's house

This sequence of actions is pretty nonsensical from a goal-directed-behavior perspective, but is perfectly sensible if Jim's behavior here is driven by contextual heuristics like "when it's morning and I'm next to my work's freeway offramp, I get off the freeway".

Note that I'm not saying "humans never exhibit goal-directed behavior".

Instead, I'm saying that "take a goal, and come up with a plan to achieve that goal, and execute that plan" is, itself, just one of the many contextually-activated behaviors humans exhibit.
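A loose sketch of the distinction (every context label and behavior here is invented for illustration): behavior is a lookup keyed on the current context, and explicit plan-toward-a-goal is just one more entry in the table.

```python
# Illustrative only: contextually-triggered behaviors, with goal-directed
# planning as just another behavior that certain contexts activate.

def plan_and_execute(goal: str) -> str:
    return f"decompose '{goal}' into subgoals and work through them"

CONTEXT_TO_BEHAVIOR = {
    "morning, on the freeway, approaching my work's offramp": lambda: "get in the exit lane",
    "switching one lane to the right": lambda: "push the turn signal lever",
    "novel problem, no cached response": lambda: plan_and_execute("figure out what to do"),
}

def act(context: str) -> str:
    behavior = CONTEXT_TO_BEHAVIOR.get(context)
    return behavior() if behavior else "keep doing what I was doing"

# The "whoops, autopilot" failure: the context match fires even though Jim's
# actual goal (visiting Bob) doesn't call for this behavior.
print(act("morning, on the freeway, approaching my work's offramp"))
```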

I see no particular reason that an LLM couldn't learn to figure out when it's in a context like "the current context appears to be in the execute-the-next-step-of-the-plan stage of such-and-such goal-directed-behavior task", and produce the appropriate output token for that context.

Is it possible to determine whether a feature (in the SAE sense of "a single direction in activation space") exists for a given set of changes in output logits?

Let's say I have a feature from a learned dictionary on some specific layer of some transformer-based LLM. I can run a whole bunch of inputs through the LLM, either adding that feature to the activations at that layer (in the manner of Golden Gate Claude) or ablating that direction from the outputs at that layer. That will have some impact on the output logits.

Now I have a collection of (input token sequence, output logit delta) pairs. Can I, from that set, find the feature direction which produces those approximate output logit deltas by gradient descent?

If yes, could the same method be used to determine which features in a learned dictionary trained on one LLM exist in a completely different LLM that uses the same tokenizer?
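Here is a self-contained toy version of the optimization I have in mind. The model's response to adding a vector at the chosen layer is stood in for by a random per-input linear map; in a real experiment each prediction would instead be an actual forward pass with the candidate direction added via a hook, as described above.

```python
# Toy sketch: recover a single direction from (input, output-logit-delta) pairs
# by gradient descent. The per-input "effect of adding a vector at the layer"
# is faked with a random Jacobian J[i]; in practice this is a hooked forward pass.
import torch

torch.manual_seed(0)
d_model, vocab, n_examples = 64, 512, 200

J = torch.randn(n_examples, vocab, d_model) / d_model**0.5   # fake per-input responses
true_feature = torch.randn(d_model)                          # the direction we pretend caused the deltas
observed_deltas = torch.einsum('ivd,d->iv', J, true_feature)  # the collected logit deltas

direction = torch.zeros(d_model, requires_grad=True)
opt = torch.optim.Adam([direction], lr=0.05)
for step in range(300):
    pred = torch.einsum('ivd,d->iv', J, direction)
    loss = torch.nn.functional.mse_loss(pred, observed_deltas)
    opt.zero_grad()
    loss.backward()
    opt.step()

cosine = torch.nn.functional.cosine_similarity(direction, true_feature, dim=0)
print(f"cosine similarity between recovered and true direction: {cosine.item():.3f}")
```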

I imagine someone has already investigated this question, but I'm not sure what search terms to use to find it. The obvious search terms like "sparse autoencoder cross model" or "Cross-model feature alignment in transformers" don't turn up a ton, although they turn up the somewhat relevant paper Text-To-Concept (and Back) via Cross-Model Alignment.

Wait, I think I am overthinking this by a lot, and the thing I want is in the literature under terms like "classifier" and "linear regression".

I've heard that an "agent" is that which "robustly optimizes" some metric in a wide variety of environments. I notice that I am confused about what the word "robustly" means in that context.

Does anyone have a concrete example of an existing system which is unambiguously an agent by that definition?

In this context, 'robustly' means that even with small changes to the system (such as moving the agent or the goal to a different location in a maze) the agent still achieves the goal. If you think of the system state as a location in a phase space, this could look like a large "basin of attraction" of initial states that all converge to the goal state.
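A tiny illustration of that usage, using gradient descent on a one-dimensional bowl as the "system": varied starting points and a mid-trajectory shove all end up at the same goal state, which is the sense in which the optimization is robust.

```python
# Gradient descent on f(x) = x^2: the basin of attraction of the minimum at 0 is
# the whole real line, so perturbing the initial condition or the trajectory
# doesn't change where the system ends up.
import random

def step(x: float, lr: float = 0.1) -> float:
    return x - lr * 2 * x  # one gradient step on f(x) = x^2

random.seed(0)
for trial in range(3):
    x = random.uniform(-10.0, 10.0)         # different initial condition each trial
    for t in range(200):
        x = step(x)
        if t == 50:
            x += random.uniform(-1.0, 1.0)  # shove the state partway through
    print(f"trial {trial}: final state = {x:.2e}")  # all land very near 0
```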

If we take a marble and a bowl, and we place the marble at any point in the bowl, it will tend to roll towards the middle of the bowl. In this case "phase space" and "physical space" map very closely to each other, and the "basin of attraction" is quite literally a basin. Still, I don't think most people would consider the marble to be an "agent" that "robustly optimizes for the goal of being in the bottom of the bowl".

However, while I've got a lot of concrete examples of things which are definitely not agents (like the above) or "maybe kinda agent-like but definitely not central" (e.g. a minmaxing tic-tac-toe program that finds the optimal move by exploring the full game tree, an E. coli bacterium which uses run-and-tumble motion to increase the fraction of the time it spends in favorable environments, or a person setting and then achieving career goals), I don't think I have a crisp central example of a thing that exists in the real world that is definitely an agent.

I think I found a place where my intuitions about "clusters in thingspace" / "carving thingspace at the joints" / "adversarial robustness" may have been misleading me.

Historically, when I thought of "clusters in thing-space", my mental image was of a bunch of widely-spaced points in some high-dimensional space, with wide gulfs between the clusters. In my mental model, if we were to get a large enough sample size that the clusters approached one another, the thresholds which carve those clusters apart would be nice clean lines, like this.

[image: well-separated clusters carved apart by clean, evenly spaced boundaries]

In this model, an ML model trained on these clusters might fit to a set of boundaries which is not equally far from each cluster (after all, there is no bonus reduction in loss for more robust perfect classification). So in my mind the ground truth would be something like the above image, whereas what the non-robust model learned would be something more like the below:

[image: the same clusters, with the learned boundaries sitting closer to some clusters than to others]

But even if we observe clusters in thing-space, why should we expect the boundaries between them to be "nice"? It's entirely plausible to me that the actual ground truth is something more like this

[image: three interleaved basins of attraction with fractal boundaries]

That is the actual ground truth for the categorization problem of "which of the three complex roots will iteration of the Euler Method converge on, given each starting point". And in terms of real-world problems, we see the recent and excellent paper The boundary of neural network trainability is fractal.
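For concreteness, here's a small sketch that classifies starting points by which root a root-finding iteration converges to. It uses Newton's method on z³ − 1 = 0 as a stand-in (rather than the exact Euler-style iteration behind the image above), since that's the classic example where the three basins have fractal boundaries.

```python
# Classify each starting point in the complex plane by which of the three cube
# roots of 1 Newton's method converges to. Near the basin boundaries, arbitrarily
# small changes in the starting point flip the answer.
import numpy as np

roots = np.array([1.0 + 0j, -0.5 + 0.8660254j, -0.5 - 0.8660254j])

def classify(z0: complex, iters: int = 50) -> int:
    """Index of the root reached from z0, or -1 if the iteration doesn't settle."""
    z = z0
    for _ in range(iters):
        if abs(z) < 1e-12:                  # derivative ~0, Newton step undefined
            return -1
        z = z - (z**3 - 1) / (3 * z**2)     # Newton step for z^3 - 1 = 0
    dists = np.abs(roots - z)
    return int(np.argmin(dists)) if dists.min() < 1e-3 else -1

# Crude ASCII rendering of the basins on a small grid; bump the resolution
# (and plot it) to see the fractal structure of the boundaries.
coords = np.linspace(-1.5, 1.5, 41)
for y in reversed(coords):
    print("".join(".ABC"[1 + classify(complex(x, y))] for x in coords))
```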