The cryptocurrency Ethereum is mentioned here occasionally, and I'm not surprised to see an overlap in interests from that sphere. Vitalik Buterin recently published a blog post discussing how smart contracts could be used to enforce superrationality in the real world, and in which cases that is actually feasible.

An advanced DAO (decentralized/distributed autonomous organization), the way Vitalik imagines it, is a pretty believable candidate for an uncontrolled seed AI, so I'm not sure Eliezer and co. share Vitalik's apparent enthusiasm regarding the convergence of these two sets of ideas.

I don't think so. They're running on the blockchain, which slows them down. The primary decision-making mechanisms for them are going to be basically the same as those used by existing organizations, like democracy, prediction markets, etc. Unless you think your bank or government is going to become a seed AI, there's not that much more to DAOs.

They're running on the blockchain, which slows them down.

They can follow the advice of any off-the-blockchain computational process if that is to their advantage. They can even audit this advice, so that they don't lose their autonomy. For example, Probabilistically Checkable Proofs are designed for exactly that setup: a slow system that has to cooperate with an untrusted but faster one. There's the obvious NP case, in which the answer from Merlin (the AI) can be easily verified by Arthur (the blockchain). But the classic IP = PSPACE result says that this kind of cooperation can work in much more general settings.
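To make the NP case concrete, here is a minimal sketch in plain Python (nothing blockchain-specific; `merlin_factor` and `arthur_verify` are illustrative names, not any real API): the fast, untrusted prover does the expensive search, and the slow, trusted verifier only performs one cheap check.

```python
def merlin_factor(n):
    """Untrusted fast process: finds a nontrivial factor by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no nontrivial factor found (n is prime)

def arthur_verify(n, claimed):
    """Trusted slow process: checking a claimed factor costs one division."""
    return claimed is not None and 1 < claimed < n and n % claimed == 0

n = 1_000_003 * 999_983          # composite by construction
answer = merlin_factor(n)        # expensive search, done "off-chain"
assert arthur_verify(n, answer)  # cheap check, feasible "on-chain"
```

The asymmetry is the whole point: verification stays cheap no matter how much work the prover sank into finding the answer.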

The primary decision-making mechanisms for them are going to be basically the same as those used by existing organizations, like democracy, prediction markets, etc.

These are just the typical use cases proposed today. In principle, their decision-making mechanism can be anything whatsoever, and we can expect that there will be many of them competing for resources.
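As a sketch of that point (ordinary Python, not real contract code; the `DAO` class and `plurality_vote` function are hypothetical), the decision mechanism can be treated as a pluggable parameter:

```python
from typing import Callable, List

class DAO:
    """Toy model: an organization whose decision rule is just a parameter."""
    def __init__(self, decide: Callable[[List[str]], str]):
        self.decide = decide  # could be any mechanism whatsoever

    def act(self, proposals: List[str]) -> str:
        return self.decide(proposals)

def plurality_vote(proposals: List[str]) -> str:
    votes = {"upgrade": 3, "pay dividend": 1}  # stand-in for real ballots
    return max(proposals, key=lambda p: votes.get(p, 0))

dao = DAO(plurality_vote)  # could as easily be an auction or a learned policy
print(dao.act(["upgrade", "pay dividend"]))  # -> upgrade
```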

The thing that I think makes them interesting from an FAI perspective is the "autonomous" part. They can buy and sell and build stuff. They have agency, they can be very intelligent, and they are not human.

...Okay, that sounded a bit too sensationalist, so let me clarify. Personally, I am much more optimistic about UFAI issues than MIRI or the median LW user. I don't actually argue that DAOs are dangerous. What I argue is that if someone is interested in how very smart, autonomous computational processes could arise in the future, this possible path might be worth investigating a bit.

An unusual feature of an AI of this form is its speed: while the off-the-blockchain subprocesses can run at normal speed, the blockchain itself is, IIRC, optimistically going to have a block time of 12 seconds. This means you couldn't have a realtime conversation with the AI as a whole, nor could it drive a car, for instance, although a subprocess might be able to handle those tasks. Overall, it would perhaps be more like a superintelligent ant colony.
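For a rough sense of scale (taking the 12-second figure above at face value):

```python
BLOCK_TIME_S = 12                # optimistic block time cited above
turns = 5                        # sequential on-chain decisions in a short exchange
print(turns * BLOCK_TIME_S, "s") # -> 60 s: a five-turn "conversation" takes a minute
```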

That very much depends on what you choose to regard as the 'true nature' of the AI. In other words, we're flirting with the reification fallacy by regarding the AI as a whole as 'living on the blockchain', or even as being 'driven' by the blockchain. It's important to keep in mind what makes the blockchain important to such an AI and to its autonomy. This, I believe, is always the financial aspect. The on-blockchain process is autonomous precisely because it can directly control resources; it loses autonomy insofar as its control of resources no longer fulfils its goals. If you wish, you can consider the part of the AI which verifies correct computation and interfaces with 'financial reality' to be its real locus of selfhood, but bear in mind that even the goal-description/fulfilment logic can be zero-knowledge-proofed and so exist off-chain. From my perspective, the on-chain component of such an AI looks a lot more like a combination of robotic arm and error-checking module.
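A sketch of that "robotic arm plus error-checking module" picture, assuming some proof system is available (the `OnChainModule` class and `verify_proof` hook are illustrative, not a real contract interface):

```python
class OnChainModule:
    """Holds resources; acts only when an off-chain proof checks out."""
    def __init__(self, balance: int, verify_proof):
        self.balance = balance            # resources it directly controls
        self.verify_proof = verify_proof  # error-checking: its only "judgment"

    def execute(self, payment: int, recipient: str, proof) -> bool:
        # The goal logic that produced `proof` can live entirely off-chain;
        # the chain's job is just verification plus moving the resources.
        if payment <= self.balance and self.verify_proof(payment, recipient, proof):
            self.balance -= payment       # the "robotic arm" moves
            return True
        return False

# Toy usage with a trivial stand-in verifier:
arm = OnChainModule(100, verify_proof=lambda amt, to, proof: proof == "ok")
assert arm.execute(40, "recipient-address", "ok") and arm.balance == 60
```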

[anonymous]

I've definitely seen people around here think their government is going to become a seed AI, though the way they put it is that they're concerned that the NSA might make one.

Why the NSA rather than DARPA or some other government agency? Is it just because the NSA is rather unpopular at the moment, so the halo effect leads people to decide it might destroy the world?

gjm

More likely because the NSA is widely believed to have a huge amount of computing power at its disposal.

Why the NSA

Because they have a great deal of resources and very little oversight.

There's a difference between "become" and "produce".