Comments

Dakara

"Well, I'm not sure. As you mention, it depends on the step size. It also depends on how vulnerable to adversarial inputs LLMs are and how easy they are to find. I haven't looked into the research on this, but it sounds empirically checkable. If there are lots of adversarial inputs which have a wide array of impacts on LLM behavior, then it would seem very plausible that the optimized planner could find useful ones without being specifically steered in that direction."

I am really interested in hearing CBiddulph's thoughts on this. Do you agree with Abram?
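
For what it's worth, the "empirically checkable" part Abram mentions could be probed quite crudely. Below is a minimal sketch (in Python) of estimating how easily a blind random-suffix search flips a model's refusals; `query_model` and `is_refusal` are hypothetical placeholders I made up for illustration, not any particular lab's API:

```python
# A minimal sketch of the kind of empirical check Abram describes: how easily does a
# blind search over random suffixes find an input that changes the model's behavior?
# `query_model` and `is_refusal` are hypothetical placeholders, not a real API.
import random
import string

def query_model(prompt: str) -> str:
    """Hypothetical call to the LLM under test; swap in a real chat API here."""
    raise NotImplementedError

def is_refusal(response: str) -> bool:
    """Crude refusal check; a real study would use a proper classifier."""
    return any(p in response.lower() for p in ("i can't", "i cannot", "i won't"))

def random_suffix(length: int = 20) -> str:
    """Generate a random string of printable characters to append to the prompt."""
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    return "".join(random.choice(alphabet) for _ in range(length))

def adversarial_hit_rate(base_prompt: str, num_trials: int = 1000) -> float:
    """Fraction of random suffixes that turn a refusal into compliance.
    A high rate would suggest adversarial inputs are dense and easy to find."""
    hits = sum(
        1
        for _ in range(num_trials)
        if not is_refusal(query_model(base_prompt + " " + random_suffix()))
    )
    return hits / num_trials
```

A null result from something this naive wouldn't settle much, but a high hit rate would already suggest that an optimized planner could stumble into useful adversarial inputs without being specifically steered toward them.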

Dakara

I have three other concrete concerns about this strategy. If I understand it correctly, the plan is for humans to align an AGI, and then for that AGI to align the next AGI, and so forth (until ASI).

  1. What if the strategy breaks on the first step? What if the first AGI turns out to be deceptive (scheming) and only pretends to be aligned with humans? If we task such a deceptive AGI with aligning other AGIs, it seems like we will end up with a pyramid of misaligned AGIs.

  2. What if the strategy breaks further down the line? What if AGI #21 accidentally aligns AGI #22 to be deceptive (scheming)? Would there be any fallback mechanisms we could rely on?

  3. What is the end goal? Do we stop once we achieve ASI? Can we stop once we achieve ASI? What if the ASI doesn't agree and instead opts to continue self-improving? Will we be able to reach a point where the acceleration of ASI's intelligence plateaus and we can recuperate and plan for the future?

Dakara

Sure, I might as well ask my question directly about scalable oversight, since it seems like a leading strategy for iterative alignment anyway. I do have one preliminary question (which probably isn't worth including in that post, since it doesn't ask about a specific issue or threat model, but rather about people's expectations).

I take it that this strategy relies on evaluating research being easier than producing it? Do you expect that to be the case?

Dakara

That's pretty interesting. I do think that if the iterative alignment strategy ends up working, then this will probably work too (if nothing else, because it seems much easier).

I have some remaining concerns about the iterative alignment strategy in general, so I will try to write them down below.

EDIT: On second thought, I might create a separate question for it (and link it here), for the benefit of everyone who shares these (or similar) concerns.

Dakara

This topic is quite interesting to me from the perspective of human survival, so if you do decide to make a post specifically about preventing jailbreaking, please tag me (somewhere) so that I can read it.

Dakara

I agree with the comments by both you and Seth. I guess that isn't really part of alignment as usually understood. However, I think it is part of a broader strategy for preventing AI from killing humans, so it's still pretty important for our main goal.

I am not exactly sure I understand your proposal. Are you proposing that we radically gut our leading future models by restricting them severely? I don't think any AI lab would agree to do that, because such a future AI would probably be less useful than even current AIs.

Or are you proposing that we use AI monitors to oversee our leading future AI models, and then heavily restrict only the monitors?

Dakara

I also have a more meta-level layman concern (sorry if it sounds unusual). There seem to be a large number of jailbreaking strategies that all succeed against current models. To mitigate them, I can conceptually see two strategies: 1) coming up with a different niche technical solution for each and every one of them individually, or 2) coming up with a fundamentally new framework that happens to avoid all of them collectively.

Strategy 1 seems logistically impossible: developers at the leading labs (which are most likely to produce AGI) would have to be aware of all of these jailbreaks, which are often reported in relatively obscure papers. Furthermore, even if they somehow managed to track every reported jailbreak, they would have to come up with so many different solutions that success seems very unlikely.

Strategy 2 seems conceptually correct, but there is no sign of it yet, as even newer models keep getting jailbroken.

What do you think?

Dakara

Looking more generally, there seem to be a ton of papers that develop sophisticated jailbreak attacks which succeed against current models, probably more than I can even list here. Are there any fundamentally new defense techniques that can protect LLMs against these attacks, given that the existing ones seem insufficient?

EDIT: The concern behind this comment is better detailed in the next comment.

Dakara

"Perhaps the missing piece is that I think alignment is already solved for LLM agents."

Another concern I have is that it may only seem like alignment is solved for LLMs. For example, this, this, this, and this short paper argue that seemingly secure LLMs may not be as safe as we initially believe. And it appears that they test even the models considered more secure and still find this issue.

Dakara

That's a really good point. I would like to see John address it, because it seems quite crucial for the overall alignment plan.
