Wei_Dai comments on Are smart contracts AI-complete? - Less Wrong Discussion
Security is one problem with smart contracts, but lack of applications is another one. AI may make the security problem worse, but it's needed for many potential applications of smart contracts. For example, suppose I want to pay someone to build a website for me that is standards conforming, informative, and aesthetically pleasing. Without an AI that can make human-like judgements, to create a smart contract where "the code is the contract", I'd have to mathematically define each of those adjectives, which would be impossibly difficult or many orders of magnitude more costly than just building the website.
The solution to this would be to have each of the contracting parties provide evidence to the AI, which could include digitally signed (authenticated) data from third parties (security camera operators, shipping companies, etc.), and have the AI make judgments about them the same way a human judge would.
If you're going to rely on signed data from third parties, then you're still trusting third parties.
In a dozen or so lines of code you could create a system that collects signed and weighted opinions from a collection of individuals or organisations, making it simple to encode arbitration (e.g., does the delivery company say it delivered the package?).
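A minimal sketch of what such a system might look like, assuming a fixed set of arbiters with pre-agreed weights and a simple majority threshold (all names and weights here are made up for illustration). Real deployments would use public-key signatures; HMAC stands in so the example runs with only the standard library:

```python
import hmac, hashlib

# Hypothetical arbiters: id -> (shared secret, voting weight).
ARBITERS = {
    "delivery_co": (b"secret-1", 0.5),
    "camera_op":   (b"secret-2", 0.3),
    "inspector":   (b"secret-3", 0.2),
}

def sign(secret: bytes, claim: str) -> str:
    """Stand-in for a digital signature over the claim text."""
    return hmac.new(secret, claim.encode(), hashlib.sha256).hexdigest()

def settle(claim: str, opinions: dict) -> bool:
    """Sum the weights of arbiters whose signature on the claim
    verifies; the contract pays out if the weighted total of
    affirming opinions exceeds a 50% threshold."""
    total = 0.0
    for arbiter, signature in opinions.items():
        secret, weight = ARBITERS[arbiter]
        if hmac.compare_digest(signature, sign(secret, claim)):
            total += weight
    return total > 0.5

claim = "package delivered: yes"
opinions = {
    "delivery_co": sign(ARBITERS["delivery_co"][0], claim),
    "camera_op":   sign(ARBITERS["camera_op"][0], claim),
}
print(settle(claim, opinions))  # 0.5 + 0.3 > 0.5 -> True
```

Note that this only relocates the judgment problem: the contract is simple, but the arbiters themselves must still be trusted to attest honestly.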
You're just kicking the trust can down the road.
On the other hand, it's unlikely we'll see any reasonably smart AIs built from anything less than millions of lines of code (or code and data), and a flaw anywhere in them could destroy the security of the whole system.
This is not a great use for AI until we 1: actually have notable AI, and 2: have formally verified the code that makes it up, which is a far larger undertaking.