After reading Pope and Belrose's work, a picture has solidified in my mind of lots of good, aligned ASIs already building nanosystems and better computing infrastructure. In that world, any accidentally or deliberately created misaligned AI would have no chance of long-term competitive existence against the established ASIs. Yet such a misaligned AI might still be able to destroy the world via nanosystems, since we wouldn't yet trust the existing AIs with the herculean task of protecting nature against invasive nanospecies and the like. Byrnes voiced similar concerns in his point 1 against Pope & Belrose.

Assuming AIs don't soon come up with even better crypto/decentralization solutions: I hadn't considered that the objection that smart contracts are too complicated (and thus insecure) might no longer hold once AI assistants and AI-driven cyberdefense scale up. ZK in particular seems like a natural language for AIs.