I neglected to update my comment -- the agent I built for this replication is now publicly available as part of the METR task workbench: https://drive.google.com/drive/folders/1-m1y0_Akunqq5AWcFoEH2_-BeKwsodPf
That's me on the bass! Thank you for hosting, it was really fun to jam with everyone.
Yeah, I definitely could! It's on my to-do list. I'll let you know when I complete it.
Thank you! No, I'm not building custom prompts for the different tasks. I wrote a single prompt template -- the only difference between runs is the task description, which gets plugged into the template. I think ARC Evals did the same thing.
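For concreteness, here's a minimal sketch of that setup -- the template text and names below are made up for illustration, not my actual prompt:

```python
# Hypothetical single prompt template; only the task description varies between runs.
PROMPT_TEMPLATE = """You are an agent with access to a shell.
Your task: {task_description}

Reply with the next shell command to run."""

def build_prompt(task_description: str) -> str:
    # The task description is the only per-run input; everything else is fixed.
    return PROMPT_TEMPLATE.format(task_description=task_description)
```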
I did improve the prompt as I worked through the tasks -- I probably spent 2-3 hours on it, trying to improve the agent's performance on some tasks. I'll definitely rerun all the tasks with the current version of the prompt, just to check that the agent can still complete the easier ones.
You're right that getting the agent to attempt the last three tasks is relatively simple. I'd been thinking it wasn't worth the time or money, since I think it's very unlikely that the agent will succeed at any of them. But maybe a conclusive negative result is worth having.
Thank you for the kind comment! You have lots of good ideas for how to improve this. I especially like the idea of testing with different cloud providers. I could also vary the programming language: maybe GPT-4 is better at writing Node.js than Python (the language I prompted it to use).
I agree, a fully reproducible version would have benefits. Differences in prompt quality between evaluations are a problem.
Also agreed that it's important to let the agent try to complete the tasks without assistance. I did that for this reproduction. The only change I made to the agent's commands was to restrict it to accessing files in a particular directory on my computer.
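To illustrate what I mean by that restriction, here's a rough sketch of the kind of path check involved (the directory name and helper are hypothetical, not my exact code):

```python
from pathlib import Path

# Hypothetical sandbox directory; the real path on my machine differs.
SANDBOX = Path("/home/me/agent-sandbox").resolve()

def is_allowed(path_argument: str) -> bool:
    """Return True only if the path resolves inside the sandbox directory."""
    resolved = (SANDBOX / path_argument).resolve()
    # Absolute paths and ".." tricks both resolve outside SANDBOX and get rejected.
    return resolved == SANDBOX or SANDBOX in resolved.parents
```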
I've hesitated to open-source my code. I don't want to accidentally advance the frontier of language model agents. But like I said in another comment, my code and prompts are pretty simple and don't use any techniques that aren't available elsewhere on the internet. So maybe it isn't a big deal. Curious to hear what you think.
EDIT: The agent I built for this replication is now publicly available as part of the METR task workbench: https://drive.google.com/drive/folders/1-m1y0_Akunqq5AWcFoEH2_-BeKwsodPf
I'm torn! I think that better LLM scaffolding accelerates capabilities as much as it accelerates alignment. On the other hand, a programmer (or a non-programmer with help from ChatGPT) could easily reproduce my current scaffolding code. Maybe open-sourcing the current state of the project is fine. What do you think?
"since private goods are non-rival it is efficient to exclude consumers who aren't willing to pay"
Should this be, "since private goods are rival it is efficient..."?
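For what it's worth, the standard reasoning behind my question (my gloss, not a quote from the text):

```latex
% For a non-rival good, serving one more consumer has marginal cost MC = 0,
% so excluding a consumer who values the good at v > 0 destroys surplus:
\Delta W = v - MC = v > 0 .
% For a rival good, MC > 0, so excluding consumers with v < MC
% (those unwilling to pay the marginal cost) is efficient.
```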
Here is a submission: https://ai-safety-conversational-agent.thomasbroadley.com
Source code here: https://github.com/tbroadley/ai-safety-conversational-agent
I followed @Max H's suggestion of using chat-langchain. To start, I created an embedding based on the articles from https://aisafety.info, and the submission currently uses that embedding.
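Roughly, the ingestion step looks like this (a sketch using circa-2023 langchain APIs; the URL list and index name are placeholders, and it assumes OPENAI_API_KEY is set and faiss-cpu is installed):

```python
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Placeholder URL list; the real run ingested the individual aisafety.info articles.
loader = WebBaseLoader(["https://aisafety.info/"])
docs = loader.load()

# Split the articles into overlapping chunks before embedding them.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Embed the chunks with OpenAI embeddings and store them in a FAISS index.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
vectorstore.save_local("aisafety-index")
```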
I'll get in touch with Stampy about working on their conversational agent.
I have arrived!