An AI DAO is an interesting thing to specify.
The Ethereum blockchain as a whole runs a single virtual machine executing on the order of 350,000 instructions per second. In other words, even if someone very rich threw enough Ether at their AI to outbid everyone else for gas, the AI would be running on a computer roughly 10,000x less powerful than a Raspberry Pi.
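As a back-of-envelope check of that comparison (both figures are rough assumptions taken from this comment, not measurements; a Raspberry Pi is assumed here to do on the order of 3.5 billion instructions per second):

```python
# Rough throughput comparison; both numbers are illustrative assumptions.
evm_ips = 350_000          # assumed whole-chain EVM throughput, instructions/sec
raspberry_pi_ips = 3.5e9   # assumed Raspberry Pi throughput, instructions/sec

ratio = raspberry_pi_ips / evm_ips
print(f"A Raspberry Pi is roughly {ratio:,.0f}x faster than the whole chain")
# → A Raspberry Pi is roughly 10,000x faster than the whole chain
```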
A blockchain replaces one computer doing an Add instruction with many computers all running cryptographic protocols to check that none of the others are cheating. It comes with one heck of a performance penalty. I would expect that making an AI run on that little compute is at the very least much harder than making an AGI that uses a more reasonable amount of compute.
So let's say the AI is actually running on a desktop in the programmer's house. It's given unrestricted internet access. The programmers might tell someone what they are planning to do, or what they have done. If the AI is smart and unaligned, it won't make its existence as an unaligned AI obvious, though there is a chance the AI will give itself away while it's still fairly dumb. (Probably not: most things a dumb AGI can do online, a trolling human can do. Even if it went on LessWrong and asked "Hi, I'm a young dumb AI, how can I take over the world?", we still wouldn't realize it was an actual AI.)
So in this scenario, we probably don't get strong evidence that the AI exists until it is too late to do anything. Although it's possible that someone from here calls the developer and says "I'm concerned about the safety of your AI design, could you turn it off?" That might happen if the design was posted somewhere prominent. But in that case, someone else will run the same code next week.
What people like Eliezer are aiming for is a scenario where they (or someone who listened to them) make an AGI aligned to the best interests of humanity. Somehow or other, that AI stops anyone else from making an AGI (and probably does a bunch of other things). Nanomachines that melt all GPUs have been suggested.
I specified a "blockchain", not Ethereum specifically. Assume we are using a third-generation or later blockchain, and that the oracle problem has been solved. The heavy computation could be outsourced off the blockchain, and only minimal core circuits would run on the DAO. If the particular universe we inhabit is structured so that AI strength is proportional to computational power (and large language model scaling laws seem to suggest this is the case), then the war between friendly and unfriendly AI becomes a game where the first move wins. Once an unfriendly AI ...
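The off-chain/on-chain split described above can be sketched very roughly as follows. All names here are hypothetical and no real chain or oracle API is assumed; the on-chain "core" only verifies a claimed result against a commitment, and deciding which commitment to trust is exactly the oracle problem this scenario assumes solved.

```python
import hashlib

def off_chain_compute(task: bytes) -> tuple[bytes, bytes]:
    """Heavy work done off-chain; returns (result, commitment).
    The byte-reversal is a stand-in for an expensive computation."""
    result = task[::-1]
    commitment = hashlib.sha256(result).digest()
    return result, commitment

def on_chain_verify(result: bytes, commitment: bytes) -> bool:
    """A cheap integrity check the DAO's minimal core could afford on-chain."""
    return hashlib.sha256(result).digest() == commitment

task = b"plan-step"
result, commitment = off_chain_compute(task)
print(on_chain_verify(result, commitment))           # honest result passes
print(on_chain_verify(b"tampered", commitment))      # tampered result fails
```

The design point is that the on-chain circuit does a constant amount of work (one hash) regardless of how expensive the off-chain computation was, which is what makes the DAO's compute budget survivable.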
The scenario is simple: some unassuming programmer creates a DAO on a blockchain that is the seed AI, with the single purpose of gaining political, economic, and military power to create a new world order with the DAO at the top of the proverbial food chain. The question becomes: what do the bloggers/posters of LessWrong.org actually DO to stop the AI DAO?