HungryHobo comments on Are smart contracts AI-complete? - Less Wrong Discussion

11 Post author: Stuart_Armstrong 22 June 2016 02:08PM


Comment author: HungryHobo 24 June 2016 09:48:54AM 5 points

AI is complex. Complexity means bugs. Bugs in smart contracts are exactly what you need to avoid.

What is needed most is mathematically proven code.

For certain contract types you're going to need some way of confirming that, say, physical goods have been delivered, but you gain nothing by adding AI to the mix.

Without AI you have a switch someone has to toggle or some other signal that someone might hack. With AI you just have some other input stream that someone might tamper with. Either way you need to accept information into the system somehow and it may not be accurate. AI does not solve the problem. It just adds complexity which makes mistakes more likely.

When all you have is a hammer, everything looks like a nail; when all you have is AI theory, everything looks like a problem to throw AI at.

Comment author: Wei_Dai 25 June 2016 05:49:26AM 4 points

AI is complex. Complexity means bugs. Bugs in smart contracts are exactly what you need to avoid.

Security is one problem with smart contracts, but lack of applications is another one. AI may make the security problem worse, but it's needed for many potential applications of smart contracts. For example, suppose I want to pay someone to build a website for me that is standards conforming, informative, and aesthetically pleasing. Without an AI that can make human-like judgements, to create a smart contract where "the code is the contract", I'd have to mathematically define each of those adjectives, which would be impossibly difficult or many orders of magnitude more costly than just building the website.

With AI you just have some other input stream that someone might tamper with.

The solution to this would be to have each of the contracting parties provide evidence to the AI, which could include digitally signed (authenticated) data from third parties (security camera operators, shipping companies, etc.), and have the AI make judgments about them the same way a human judge would.
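The authenticated-evidence step described above can be sketched briefly. This is only an illustration, not a real protocol: it uses HMAC with pre-shared keys as a stand-in for the digital signatures the comment envisions (a deployed system would use asymmetric signatures such as Ed25519, so third parties need only publish public keys), and the party names and keys are invented.

```python
import hmac
import hashlib

# Hypothetical pre-shared keys for trusted third parties.
THIRD_PARTY_KEYS = {
    "shipping_co": b"secret-key-shipping",
    "camera_op": b"secret-key-camera",
}

def sign_evidence(party: str, message: bytes) -> str:
    """What a third party runs to authenticate a statement it makes."""
    return hmac.new(THIRD_PARTY_KEYS[party], message, hashlib.sha256).hexdigest()

def verify_evidence(party: str, message: bytes, tag: str) -> bool:
    """What the contract runs before admitting evidence for judgment."""
    key = THIRD_PARTY_KEYS.get(party)
    if key is None:
        return False  # unknown party: reject the evidence outright
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = b"package 42 delivered 2016-06-24"
tag = sign_evidence("shipping_co", msg)
assert verify_evidence("shipping_co", msg, tag)        # genuine evidence passes
assert not verify_evidence("shipping_co", b"package 42 lost", tag)  # tampered data fails
```

Only evidence that verifies would be passed on to the AI (or any other arbiter) for judgment; everything else is discarded before it can influence the contract.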

Comment author: HungryHobo 27 June 2016 11:54:53AM 0 points

If you're going to rely on signed data from third parties, then you're still trusting third parties.

In a dozen or so lines of code you could create a system that collects signed, weighted opinions from a collection of individuals or organisations, making arbitration simple to encode (e.g. does the delivery company say it delivered the item?).
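The weighted-opinion aggregation described here really is short to encode. A minimal sketch, with made-up party names and weights, and signature checking omitted for brevity:

```python
# Hypothetical weights reflecting how much each party's word counts.
WEIGHTS = {"delivery_co": 0.5, "buyer": 0.25, "seller": 0.25}

def settle(opinions: dict, threshold: float = 0.5) -> bool:
    """Release payment if the weighted 'delivered' vote exceeds the threshold.

    `opinions` maps party name -> whether that party says the goods arrived.
    Parties that stay silent simply contribute no weight.
    """
    score = sum(WEIGHTS[p] for p, delivered in opinions.items() if delivered)
    return score > threshold

# Delivery company and buyer both confirm: 0.5 + 0.25 = 0.75 > 0.5, so pay out.
assert settle({"delivery_co": True, "buyer": True, "seller": False})
# Only the seller claims delivery: 0.25, so funds stay locked.
assert not settle({"seller": True})
```

The point stands either way: whether the inputs feed a weighted vote like this or an AI, the contract is only as trustworthy as the parties supplying the inputs.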

You're just kicking the trust can down the road.

On the other hand, it's unlikely we'll see any reasonably smart AIs built from anything less than millions of lines of code (or code and data), and a flaw anywhere in them could destroy the security of the whole system.

This is not a great use for AI until we (1) actually have notable AI and (2) have proven the code that makes it up, which is a far larger undertaking.