This is a linkpost for https://www.adept.ai/act

ACT-1 can take a high-level user request and execute it. The user simply types a command into the text box and ACT-1 does the rest. In this example, fulfilling a single goal requires repeatedly taking actions and making observations over a long time horizon.

...

While we’re excited that these systems can transform what people can do on a computer, we clearly see that they have the potential to cause harm if misused or misaligned with user preferences. Our goal is to build a company with large-scale human feedback at the center — models will be evaluated on how well they satisfy user preferences, and we will iteratively evaluate how well this is working as our product becomes more sophisticated and load-bearing.

Daniel's commentary: To be clear, this is very much just a cool demo; as far as I can tell it's not much better than WebGPT, and it's not at all surprising that this level of capability is possible.
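For concreteness, the basic shape of what's being demoed is an action/observation loop: the model repeatedly looks at the current page state and picks the next UI action until the goal is met. Here is a minimal sketch of such a loop; the `Browser` and `Policy` interfaces, the action vocabulary, and the toy goal are all invented for illustration and are not taken from Adept:

```python
# A minimal sketch of a long-horizon action/observation loop.
# Not Adept's code: Browser, Policy, and the action vocabulary
# are invented here for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "done"
    target: str = ""   # element selector or text payload

class Browser:
    """Stand-in for an environment wrapper around a real browser."""
    def observe(self) -> str:
        # A real system would return the live DOM or a screenshot.
        return "<serialized page state>"

    def execute(self, action: Action) -> None:
        print(f"executing {action.kind} on {action.target!r}")

class Policy:
    """Stand-in for a model mapping (goal, observation) -> next action."""
    def __init__(self) -> None:
        self.steps = 0

    def next_action(self, goal: str, observation: str) -> Action:
        # A real model would condition on the goal and observation;
        # this toy policy just acts twice, then stops.
        self.steps += 1
        return Action("done") if self.steps > 2 else Action("click", "#next")

def run(goal: str, env: Browser, policy: Policy, max_steps: int = 50) -> None:
    # Observe, act, repeat, until the policy signals completion
    # or the step budget runs out.
    for _ in range(max_steps):
        action = policy.next_action(goal, env.observe())
        if action.kind == "done":
            return
        env.execute(action)

run("find a 4-bedroom house in Houston", Browser(), Policy())
```

The point of the sketch is just that "one command" from the user fans out into many model calls, one per UI step, which is where both the capability and the misuse surface come from.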

4 comments

I think the main difference is that it has access to the user's entire computer (or at least their browser), which should imply far more opportunities for malicious actions, right?

We need to reopen the debate on AI boxing and ask whether this is something we want to oppose. I wrote about this in the basic AI safety questions thread.

How should this affect one's decision to specialize in UI design versus other areas of software engineering? Will there be fewer GUIs in the future, or will the "audience" simply cease to be humans?

IMO, one probably shouldn't be specializing in UI design at the moment. Then again, other areas of software engineering might not be any better. That said, most of what's driving my advice here comes from my background views on AI timelines and not from Adept specifically.