Something I've noticed recently is that a lot of AI governance proposals seem to require a specific model of AI's role in society, especially around misalignment or misuse concerns.

Are there AI policies that could be robustly net-positive without being tied to a specific scenario?

Nobody has answered this?

Centralizing compute into known locations. This would have minimal effect on the development of advanced AI (it adds a little latency to robotics applications, since the racks of compute driving a robot would sit in a nearby data center, with only a little compute onboard the robot) but would have a protective effect in worst-case scenarios.
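To get a feel for how small that latency cost is, here is a rough back-of-the-envelope sketch. The distances, fiber speed, and fixed network overhead are illustrative assumptions, not measurements from any real deployment.

```python
# Rough estimate of the extra control-loop latency from moving a robot's
# compute into a nearby data center. All figures are illustrative assumptions.

SPEED_OF_LIGHT_FIBER_KM_S = 200_000  # ~2/3 of c, typical for optical fiber

def added_round_trip_ms(distance_km: float, network_overhead_ms: float = 1.0) -> float:
    """Round-trip propagation delay plus a fixed switching/serialization overhead."""
    propagation_ms = 2 * distance_km / SPEED_OF_LIGHT_FIBER_KM_S * 1000
    return propagation_ms + network_overhead_ms

for km in (1, 10, 50):
    print(f"{km:>3} km away: ~{added_round_trip_ms(km):.2f} ms added per control tick")
```

Even at 50 km the added round trip is on the order of a millisecond, which is small compared to typical robot control-loop periods of tens of milliseconds.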

Suspect there is an out-of-control ASI out there? Send people in person to each location with a paper list and check what each cluster is really doing using out-of-band methods (e.g., plug a monitor and keyboard into the actual hardware and inspect it at the hypervisor or privileged-OS layer).
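The core of that audit is reconciling the paper registry against what inspectors actually observe on site. A toy sketch of that reconciliation step, with hypothetical cluster names and workloads:

```python
# Toy sketch of the audit reconciliation step: compare the paper registry of
# known compute clusters against what inspectors find on site.
# All cluster names and workload labels here are hypothetical.

registry = {"dc-east-1": "climate sim", "dc-east-2": "LLM training", "dc-west-1": "idle"}
observed = {"dc-east-1": "climate sim", "dc-east-2": "unknown workload", "dc-west-1": "idle"}

# Clusters found on site but not on the paper list, and vice versa.
unregistered = observed.keys() - registry.keys()
missing = registry.keys() - observed.keys()

# Clusters whose observed workload doesn't match the declared one.
mismatched = {name for name in registry.keys() & observed.keys()
              if registry[name] != observed[name]}

print("unregistered clusters:", sorted(unregistered))
print("missing clusters:", sorted(missing))
print("workload mismatches:", sorted(mismatched))
```

Anything in the mismatch or unregistered sets is what you'd escalate to a hands-on hardware inspection.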

Or just start pulling breakers.