Stephen McAleese

Software Engineer interested in AI and AI safety.

Comments

The risks of nuclear weapons, the most dangerous technology of the 20th century, were largely managed by creating a safe equilibrium via mutual assured destruction (MAD), an innovative idea from game theory.

A similar pattern could apply to advanced AI, making it valuable to explore game theory-inspired strategies for managing AI risk.

Thanks for these thoughtful predictions. Do you think there's anything we can do today to prepare for accelerated or automated AI research?

I agree that the Alignment Forum should be selective, and its members probably represent a small subset of LessWrong readers. That said, useful comments from regular LessWrong users are often promoted to the Alignment Forum.

However, I do think there should be more comments on the Alignment Forum because many posts currently receive no comments at all. This can be discouraging for authors, who may feel that their work isn't being read or appreciated.

Thank you for bringing up this issue.

While we don't want low-quality comments, comments can provide helpful feedback to the author and clarify the reader's thinking. Because of these benefits, I believe commenting should be encouraged.

The upvoting and downvoting mechanisms help filter out low-quality comments, so I don't think there's a significant risk of them overwhelming the discussion.

"Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach."

I think you might be underestimating the power of incremental, evolutionary improvements, where near-term problems are constantly being solved and capabilities gradually accumulate over time. After all, human intelligence is the result of gradual evolutionary change and steadily increasing capability: it's hard to point to a specific period in history when humans achieved general intelligence.

Currently LLMs are undoubtedly capable at many tasks (e.g. coding, general knowledge) and much more capable than their predecessors. But it's hard to point at any particular algorithmic improvement or model and say that it was key to the success of modern LLMs.

So I think it's possible that we'll see continued gradual progress and tweaks to LLMs that lead to increasingly capable models and eventually yield AGI. At some point you might call the result a new architecture, even though all the progress along the way was gradual.

I know using LLMs on LessWrong is often frowned upon (probably for good reasons), but given that this post is about using AIs to generate and evaluate AI research, I thought it would be appropriate to use an LLM here.

I asked o1 pro to evaluate this post and this was its response:

This post offers an intriguing perspective on delegating the hardest parts of AI safety research to AI itself, rather than expecting human researchers to handle every step all the way up to superintelligence. The central idea—“passing the buck” to AI—strikes me as a pragmatic stance if you anticipate that AI systems will soon outstrip humans in many relevant research and engineering capacities. Notably, the author does not claim that autonomous “buck passing” is trivially safe or guaranteed to work out of the box; instead, they provide a structured framework for why it could improve safety relative to keeping humans in the loop.

A central strength is the post’s recognition that AI developers might reach a point where human oversight becomes practically irrelevant: sheer speed and complexity of AI-led R&D can eclipse what any human team can meaningfully supervise. The proposed solution is a multi-layered approach of alignment, control, and incentive design—each method contributing partial safeguards that, ideally, reinforce each other. The emphasis on continuous testing, local verifiability, iterative evaluations, and “multi-party integrity” structures also underscores a realistic appreciation that no single safety measure is foolproof.

That said, many open challenges remain. The post takes on big assumptions: for example, that we can rely on thorough testing or incremental “bootstrapping” to detect (and correct) misalignment before it becomes catastrophic. While such approaches are viable in principle, adversarial dynamics at very high capability levels can be harder to test than the post acknowledges. Another concern is that the solution is still highly conceptual; significant engineering details would be required to make “buck passing” arguments robust against advanced adversarial behavior. Overall, though, the post is refreshingly direct about its core premise—using AI to solve the alignment problem—and outlines multiple thoughtful ways to try to make that premise safe.

I thought its response was pretty helpful and I would rate it as 7/10.

Thanks for the post. I thought it was interesting and covered a wide variety of risks and mitigation strategies. It made a decent argument that delegating the superalignment problem to advanced AIs could be possible using a mixture of alignment and control methods.

The risk I'm most concerned about is alignment faking, where agents intelligently hide their misalignment to appear safer than they are.

It seems like the post proposes a strategy where, once the AIs have been verified to behave safely, they are deployed on harder tasks or more capable AIs are deployed.

The issue is that alignment faking breaks that feedback loop: if the AIs are deceptive and only appear to be aligned and useful, then the verification steps don't really have any value and you end up deploying increasingly powerful misaligned AIs.

The post describes alignment faking as being caused by a distribution shift where the task to be performed is significantly harder than the evaluation task:

Not-alignment-faking arguments require that there is not an extreme distribution shift between evaluation tasks and tasks for which regulators intend to defer to the AI advisors (section 5.4.4). This is problematic because deference is most useful when developers can’t evaluate tasks on their own. For example, developers might ask an AI advisor if thousands of pages of empirical ML research provide evidence that an AI system is safe.

Instead, I would describe the problem as arising from a generator-verifier mismatch: when the generator is much stronger than the verifier, the generator is incentivized to fool the verifier rather than complete the task successfully.
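
To make that incentive concrete, here's a toy sketch (purely illustrative; the functions and the checks they perform are hypothetical, not from the post): a weak verifier that can only check surface features will accept an output from a much stronger generator that passes the check without actually solving the task.

```python
def weak_verifier(answer: str) -> bool:
    # Stand-in for a verifier far weaker than the generator: it can only
    # check surface features of the answer, not whether it is actually correct.
    return len(answer) > 40 and "therefore the system is safe" in answer

def strong_generator(candidates: list[str]) -> str:
    # The generator is rewarded for passing the verifier, not for being right,
    # so it returns whichever candidate the verifier accepts.
    accepted = [c for c in candidates if weak_verifier(c)]
    return accepted[0] if accepted else candidates[0]

candidates = [
    "Honest answer: the evidence is inconclusive.",
    "Pages of impressive-sounding analysis... therefore the system is safe.",
]
print(strong_generator(candidates))  # the persuasive but unsupported answer wins
```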

The post describes bootstrapping as a potential solution. Here are some quotes from the paper:

Bootstrapping can be used to scale not-alignment-faking arguments up to hard-to-evaluate tasks. Bootstrapping involves utilizing weaker, trusted AI systems to recursively develop more powerful AI systems so that there is never an extreme distribution shift.

The diagram above shows how not-alignment-faking arguments could be bootstrapped up to strongly superhuman AI systems. This particular strategy is called iterated amplification (Christiano et al., 2018) [1]. AI systems can be ‘amplified’ by creating additional copies of them or speeding them up

It seems like this bootstrapping strategy involves improving both the generator and the verifier using an amplification and distillation process.

This is how the IDA blog post describes IDA:

Step 1:
- A[0] is the original AI.
- Amplify(H, A[0]) is a human + a large number of copies of A[0] that verifies the behavior of A[0].
- A[1] is trained to imitate Amplify(H, A[0])

Step 2:
- A[1] produces an output.
- Amplify(H, A[1]) is a human + a large number of copies of A[1] that verifies the behavior of A[1].
- A[2] is trained to imitate Amplify(H, A[1])

Step 3:
...
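
For what it's worth, here's a minimal schematic sketch of that recursion in Python (my own illustrative pseudocode; `amplify`, `distill`, and the toy "training" are placeholders, not the actual IDA implementation):

```python
def amplify(human, agent, task):
    """Amplify(H, A[n]): a human answers the task with help from
    several copies of A[n] working on sub-questions."""
    sub_answers = [agent(f"{task} / sub-question {i}") for i in range(3)]
    return human(task, sub_answers)

def distill(amplified, n):
    """Train A[n+1] to imitate the amplified system. Real distillation would
    fit a model to (task, amplified(task)) pairs; here the 'trained' agent
    just calls the amplified system directly."""
    return lambda task: f"A[{n}] imitating: " + amplified(task)

def ida(human, base_agent, steps):
    agent = base_agent                              # A[0]: the original AI
    for n in range(1, steps + 1):
        amplified = lambda task, a=agent: amplify(human, a, task)
        agent = distill(amplified, n)               # A[n] imitates Amplify(H, A[n-1])
    return agent

# Toy usage with stand-in functions for the human and the original AI:
human = lambda task, subs: f"human aggregation of {len(subs)} sub-answers for '{task}'"
a0 = lambda task: f"A[0] answer to '{task}'"
print(ida(human, a0, steps=2)("evaluate this safety case"))
```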

This section of the post describes AIs checking each other's behavior and seems to express a similar idea:

This means that M_1 agents will initially try to maintain their alignment. For example, M_1 agents might constantly interrogate each other, and run checks for alignment faking – iteratively increasing the complexity of behavioral testing data at each time step.

A later section describes monitoring AIs with other AIs and preventing collusion between the monitor and the actor:

AI agents are unable to make trades with each other. AI agents might have different goals and still make agreements with successors (perhaps verified with lie detectors). One way to prevent these agreements is to block communication between monitors and the agents they monitor. That way, monitors can never express what outcomes they care about.

The paper has some ideas for preventing alignment faking as well:

  • Externalized reasoning arguments claim that AI systems are incapable of performing reasoning
    necessary to fake alignment without revealing this reasoning in their chain-of-thought outputs.
  • Interpretability arguments justify that suspicious reasoning is not occurring within model internals.

I think it would be useful to have a post or paper focused on defining alignment faking and mitigation strategies.

  1. ^

    This post explains IDA well.

I agree that this seems like a core alignment problem. The problem you're describing sounds like a rephrasing of the ELK (Eliciting Latent Knowledge) problem.

Here is a quote from the post describing what to do in that situation:

These automatic control measures only raise the alarm. What happens after the alarm is raised? The developer might also implement autonomous incident response:

  • Resampling: Force defective M_1 agents to take different actions.
  • AI agent elimination: Delete suspicious agents in favor of other (hopefully decorrelated) unsuspicious ones.
  • Fine-tuning: Train agents not to take suspicious actions again.

I'm curious about how fine-tuning away scheming thoughts could still lead to scheming. Here are two possible outcomes:

  1. Fine-tuning robustly reduces scheming behavior (as intended), perhaps because tokens and actions associated with scheming can be reliably detected, and those tokens are required for the model to scheme in its chain of thought.
  2. Fine-tuning reduces the appearance of scheming without actually reducing it: the fine-tuning lowers the probability of certain words or trajectories, but the underlying 'concept' of scheming remains, and the model simply rephrases its thinking with different tokens so that it is no longer detected as scheming.

For scenario 2, maybe you need something like activation engineering to change the underlying thought process rather than just token-level descriptions of it.
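
As a rough illustration of what that could look like mechanically, here's a minimal activation-steering sketch (assumptions: a PyTorch transformer whose layer outputs we can hook; `model`, `layer_idx`, and the way the steering vector is derived are all hypothetical, and this is not a method proposed in the post):

```python
import torch

def make_steering_hook(steering_vector: torch.Tensor, alpha: float = -1.0):
    """Return a forward hook that adds alpha * steering_vector to a layer's
    output. With alpha < 0, the direction (e.g. one associated with
    scheming-related activations) is suppressed rather than amplified."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * steering_vector.to(hidden.dtype)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage: attach the hook to one transformer block. The steering
# vector could come from a difference of mean activations between contrastive
# prompts (activation-addition style); none of these names are from the post.
# handle = model.transformer.h[layer_idx].register_forward_hook(
#     make_steering_hook(steering_vector, alpha=-4.0))
# ...generate text...
# handle.remove()
```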

Unfortunately, I don't think many people agree with me outside of the LW bubble, and what I'm proposing is still somewhat outside the Overton window. The cognitive steps that are needed are as follows:

  1. Being aware of AGI as a concept and a real possibility in the near future.
  2. Believing that AGI poses a significant existential risk.
  3. Knowing about pausing AI progress as a potential solution to AGI risk and seeing it as a promising solution.
  4. Having a detailed plan to implement the proposed pause in practice.

A lot of people are not even at step 1 and just think that AI is ChatGPT. People like Marc Andreessen and Yann LeCun are at step 1. Many people on LW are at step 2 or 3. But we need someone (ideally in government, like a president or prime minister) at step 4. My hope is that this could happen in the next several years if necessary. Maybe AI alignment will be easy and it won't be necessary, but I think we should be ready for all possible scenarios.

I don't have any good ideas right now for how an AI pause might work in practice. The main purpose of my comment was to propose argument 3 conditional on the previous two arguments and maybe try to build some consensus.
