ARC-AGI is a diverse artificial dataset that aims to test general intelligence. It's sort of like an IQ test that's played out on rectangular grids.

Last month, @ryan_greenblatt proposed an approach that used GPT-4o to generate about 8000 Python programs per task. It then selected the programs that worked on the given "training" examples, and ran them to solve the "test" query.

His approach achieved 72% accuracy on the subset of the benchmark where humans have been measured at 85% accuracy.

I have an idea for a relatively cheap improvement on top of this approach. I don't have time to work on it myself, but I hope someone else runs with it; hence this post.

The motivation for this idea is Ryan's note:

[GPT-4o] makes simple mistakes like off-by-one errors extremely often

My idea is to try to fix them automatically. I call it Program Dithering. You go through the generated Python programs and perturb the integer constants in each, one at a time, and perhaps several at a time.

Thus, if you try two perturbations at a time, a program that looks like this

x = 7
...
y = x + 3

can become

x = 8
...
y = x + 2

etc., generating a potentially large number of candidate programs without any extra GPT-4o calls. One could also consider perturbing array indexing locations in a similar way.
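To make this concrete, here is a minimal sketch of the perturbation step in Python, using the standard ast module (assumes Python 3.9+ for ast.unparse; dither is a hypothetical helper, not code from Ryan's pipeline):

import ast
import copy
import itertools

def dither(source: str, max_simultaneous: int = 2):
    """Yield variants of `source` with integer constants nudged by +/-1.

    Tries every combination of up to `max_simultaneous` constants at once;
    no attempt is made here to tame the combinatorial blowup.
    """
    def is_int(node):
        # Exclude bools, which are also instances of int.
        return isinstance(node, ast.Constant) and type(node.value) is int

    tree = ast.parse(source)
    n = sum(is_int(node) for node in ast.walk(tree))
    for k in range(1, max_simultaneous + 1):
        for locs in itertools.combinations(range(n), k):
            for signs in itertools.product((1, -1), repeat=k):
                variant = copy.deepcopy(tree)
                # ast.walk order is deterministic, so indices line up
                # between the original tree and the copy.
                consts = [m for m in ast.walk(variant) if is_int(m)]
                for loc, sign in zip(locs, signs):
                    consts[loc].value += sign
                yield ast.unparse(variant)

On the two-constant example above this yields eight candidates: each constant nudged up or down on its own, plus the four joint perturbations.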

If off-by-one errors are extremely common, Program Dithering could fix some or many of them, and improve the overall accuracy. 

Off-by-one errors seem like a general flaw, so fixing them should not be "overfitting" the benchmark.

Generalizations:

If there are other simple mistakes that GPT-4o tends to make, e.g. swapping array indexes, one can extend the approach to try to fix those as well (sketched below).

Other tasks, like what AlphaCode does, might find this useful too.
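For the index-swapping case, one possible transform (again only a sketch; it assumes numpy-style grid[i, j] subscripts and handles nothing else):

import ast
import copy

def dither_index_swaps(source: str):
    """Yield variants of `source` with one 2-D subscript's indices
    swapped, e.g. grid[i, j] -> grid[j, i]."""
    def is_pair_subscript(node):
        return (isinstance(node, ast.Subscript)
                and isinstance(node.slice, ast.Tuple)
                and len(node.slice.elts) == 2)

    tree = ast.parse(source)
    n = sum(is_pair_subscript(node) for node in ast.walk(tree))
    for target in range(n):
        variant = copy.deepcopy(tree)
        subs = [m for m in ast.walk(variant) if is_pair_subscript(m)]
        subs[target].slice.elts.reverse()
        yield ast.unparse(variant)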

Comments:

Seems like a reasonable idea. To implement this, I'd have to look more carefully at exactly what types of mistakes GPT-4o makes to calibrate what should/shouldn't be dithered. (Additional programs are cheap, but you can easily get a combinatorial explosion with this sort of thing.)

(I'm not currently working on ARC-AGI methods and I might not ever return to this, so don't count on me trying this!)

Can I ask why you "might not ever return to this"? I've recently discovered the ARC challenge, just went through Chollet's "On the Measure of Intelligence", and I'm tempted to go deeper into the rabbit hole. Just curious whether your motivations for not returning are strictly personal or if you think this is a wild goose chase.

If you have $n$ locations that you want to perturb, then if you try a single off-by-one perturbation at a time, this adds $2n$ programs. With two at a time, this adds a further $4\binom{n}{2} = 2n(n-1)$ programs.

There's a possible optimization where you only try this on tasks for which no unperturbed program was found (under 28% of tasks, given the 72% baseline).


EDIT: Ironically, I made an off-by-one error, which Program Dithering would have fixed: the two-at-a-time count should be $2n(n-1)$.
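Restating those counts as code, a trivial helper (hypothetical, just to check the arithmetic):

from math import comb

def num_variants(n: int, k: int) -> int:
    """Programs added by perturbing exactly k of n integer locations by +/-1."""
    return comb(n, k) * 2 ** k

# For n = 10 locations:
# num_variants(10, 1) == 20   # 2n
# num_variants(10, 2) == 180  # 2n(n-1)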

I think I'd try having the 'dithering' done by an LLM also... giving it a prompt like: 'This code might have an off-by-one error. Can you suggest possibilities for values that might need correcting?'

@ryan_greenblatt's approach also asks GPT-4o to improve its previous guesses.

These calls are expensive though.

The idea of Program Dithering is to generate many candidate programs cheaply.

Agree overall, but you might be able to use a notably cheaper model (e.g. GPT-3.5) to dither.

If GPT-4o made the off-by-one error, is it reasonable to expect GPT-3.5 to spot it?

No, but it doesn't need to spot errors, just note places which could plausibly be bugs.
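A sketch of that flagging step with the OpenAI Python client (the model name, prompt wording, and output format are all assumptions, and a real implementation would need more defensive parsing):

from openai import OpenAI

client = OpenAI()

def flag_suspect_constants(source: str) -> list[int]:
    """Ask a cheap model which integer constants might be off by one.

    Returns 0-based indices into the program's integer constants,
    in source order.
    """
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "This code might have an off-by-one error. List the integer "
                "constants that could plausibly be wrong, as a comma-separated "
                "list of 0-based indices in source order, and nothing else.\n\n"
                + source
            ),
        }],
    )
    text = resp.choices[0].message.content
    return [int(s) for s in text.split(",") if s.strip().isdigit()]

Only the flagged locations would then be dithered, which also helps with the combinatorial explosion mentioned above.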

A variation on this:

Any expression should be considered for replacement by a slightly bigger or smaller one. For example

z = f(x**2 * y)

should be replaceable by

z = f((x**2 - 1) * y)

The generated programs are quite short. So I would guess that this multiplies their number by 100-1000, if you consider one perturbation at a time.
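A sketch of this expression-level variant, again with the ast module (hypothetical; it crudely treats every binary operation as a candidate expression):

import ast

class ExprShifter(ast.NodeTransformer):
    """Replace the target-th binary-op sub-expression e with e - 1 or e + 1."""

    def __init__(self, target: int, delta: int):
        self.target, self.delta, self.count = target, delta, -1

    def visit_BinOp(self, node: ast.BinOp) -> ast.AST:
        self.generic_visit(node)  # children are counted before parents
        self.count += 1
        if self.count == self.target:
            op = ast.Sub() if self.delta < 0 else ast.Add()
            return ast.BinOp(left=node, op=op,
                             right=ast.Constant(value=abs(self.delta)))
        return node

def dither_expressions(source: str):
    """Yield variants of `source` with one sub-expression shifted by +/-1."""
    n_exprs = sum(isinstance(n, ast.BinOp) for n in ast.walk(ast.parse(source)))
    for target in range(n_exprs):
        for delta in (-1, 1):
            tree = ExprShifter(target, delta).visit(ast.parse(source))
            ast.fix_missing_locations(tree)
            yield ast.unparse(tree)

On z = f(x ** 2 * y) this yields, among others, z = f((x ** 2 - 1) * y), matching the example above.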