I saw this announcement on Twitter:
https://twitter.com/asimdotshrestha/status/1644883727707959296
I gave it a trial run and asked it to "Solve the hard AI Alignment problem":
At this point the trial had reached 30 seconds and expired.
I think it is funny that it "thinks" the task executed successfully. I guess that's because ChatGPT has mostly read texts in which researchers describe their successes, but not the actual work backing those successes up. I suspect this missing connection to the real world is what will throw off such systems until they are trained on more real-world-like data.
I agree. A while back, I asked "Does non-access to outputs prevent recursive self-improvement?" I think that letting such systems learn from experiments on the real world is very dangerous.