All of Simon Möller's Comments + Replies

I flat out do not believe them. Even if Llama-2 was unusually good, the idea that you can identify most unsafe requests with only a 0.05% false positive rate is absurd.


Given the quote in the post, this is not really what they claim. They say (bold mine):

However, false refusal is overall rare—approximately 0.05%—**on the helpfulness dataset**

So on that dataset I assume it might be true, although "in the wild" it's not.
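To make the base rate concrete (the request count here is illustrative, not a number from the paper):

```python
fpr = 0.0005              # 0.05% false refusal rate, as reported
benign_requests = 2_000   # illustrative volume of benign requests
print(fpr * benign_requests)  # 1.0 -> about one false refusal per 2,000 benign requests
```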

Which brings us back to the central paradox: If the thesis that you need advanced systems to do real alignment work is true, why should we think that cutting edge systems are themselves currently sufficiently advanced for this task?


I really like this framing and question.

My model of Anthropic says their answer would be: We don't know exactly which techniques will keep working for how long, or how fast capabilities will evolve. So we will continuously build frontier models and align them.

This assumes at least a chance that we could iteratively work our way through this. I...

I fully agree. I tried using ChatGPT for some coaching, but kept it high level and in areas where I wouldn't be too bothered if it showed up on the internet.

I think using the API, rather than ChatGPT, is better. See e.g. https://techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/: 

Starting today, OpenAI says that it won’t use any data submitted through its API for “service improvements,” including AI model training, unless a customer or organization opts in. In addition, the co

...
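For the API route, a minimal sketch using the openai Python client (model name and prompts are just placeholders, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Unlike the ChatGPT web interface, data sent through the API is not
# used for model training by default (per the policy quoted above).
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a thoughtful personal coach."},
        {"role": "user", "content": "Help me reflect on my week: ..."},
    ],
)
print(response.choices[0].message.content)
```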
Solenoid_Entity
There are a few Obsidian plugins that do similar stuff using LLMs (they purport to read your notes and help you something something). I'm thinking of mocking something up over the next week or so that does this 'diary questions' thing in a more interactive way, via the API, from inside Obsidian.

Couple of years? I think we are talking about months here. I guess the biggest bottleneck would be to get all notes into the LLM context. But I doubt you really need that. I think you can probably guess a few important notes for what you are currently working on and add those as context.
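A rough sketch of that "guess a few important notes" idea (the vault path and the word-overlap heuristic are made up for illustration; a real plugin would do something smarter, e.g. embeddings):

```python
from pathlib import Path

VAULT = Path("~/Obsidian/MyVault").expanduser()  # hypothetical vault location

def guess_relevant_notes(query: str, max_notes: int = 3) -> list[str]:
    """Crude relevance heuristic: count query-word occurrences per note."""
    words = set(query.lower().split())
    scored = []
    for note in VAULT.glob("**/*.md"):
        text = note.read_text(encoding="utf-8", errors="ignore")
        score = sum(text.lower().count(w) for w in words)
        if score:
            scored.append((score, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:max_notes]]

query = "What should I focus on in my current project?"
context = "\n\n---\n\n".join(guess_relevant_notes(query))
prompt = f"My notes:\n{context}\n\nQuestion: {query}"
# `prompt` can then go into an API call like the one sketched above.
```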

"Human-level AGI" is not a useful concept (any more). I think many people equate human-level AGI and AGI (per definition) as a system (or a combination of systems) that can accomplish any (cognitive) task at least as well as a human.

That's reasonable, but having "human-level" in that term seems misleading to me. It anchors us to the idea that the system will be "somewhat like a human", which it won't be. So let's drop the qualifier and just talk about AGI.

Comparing artificial intelligence to human intelligence was somewhat meaningful when we were far a...

This post is great. Strongly upvoted. I just spent a day or so thinking about OpenAI's plan and reading other people's critiques. This post does a great job of pointing out problems with the plan at what I think is the right level of detail. The tone also seems unusually constructive.

Upvoted since I like how literally you went through the plan. I think we need to think about and criticize both the literal version of the plan and the way it intersects with reality.


The methods you are trying are all known to fail at sufficiently high levels of intelligence. But if these are your only ideas, it is possible they get you far enough for GPT-5 to output a better idea.

To me this seems like a key point that many other critiques, which focus on specific details, are missing.