This is a special post for quick takes by Kabir Kumar.

btw, thoughts on this for 'the alignment problem'?
"A robust, generalizable, scalable,  method to make an AI model which will do set [A] of things as much as it can and not do set [B] of things as much as it can, where you can freely change [A] and [B]"

Freely changing an AGI's goals is corrigibility, which is a huge advantage if you can get it. See Max Harms' corrigibility sequence and my "instruction-following AGI is easier...."

The question is how to reliably get such a thing. Goalcrafting is one part of the problem, and I agree that those are good goals; the other and larger part is technical alignment, getting those desired goals to really work that way in the particular first AGI we get.

Yup, those are hard. Was just thinking of a definition for the alignment problem, since I've not really seen any good ones.

I'd say you're addressing the question of goalcrafting or selecting alignment targets.

I think you've got the right answer for technical alignment goals, but the question remains of which humans would control that AGI. See my "if we solve alignment, do we all die anyway" for the problems with that scenario.

Spoiler alert: we do all die anyway if really selfish people get control of AGIs. And selfish people tend to work harder at getting power.

But I do think your goal definition is a good alignment target for the technical work. I don't think there's a better one. I do prefer instruction following or corrigibility by the definitions in the posts I linked above because they're less rigid, but they're both very similar to your definition.

I pretty much agree. I prefer rigid definitions because they're less ambiguous to test and more robust to deception. And this field has a lot of deception.

give better names to actual formal math things, jesus christ.