Comment author: Wei_Dai 25 June 2016 05:49:26AM 4 points [-]

AI is complex. Complexity means bugs. Bugs in smart contracts are exactly what you need to avoid.

Security is one problem with smart contracts, but lack of applications is another one. AI may make the security problem worse, but it's needed for many potential applications of smart contracts. For example, suppose I want to pay someone to build a website for me that is standards conforming, informative, and aesthetically pleasing. Without an AI that can make human-like judgements, to create a smart contract where "the code is the contract", I'd have to mathematically define each of those adjectives, which would be impossibly difficult or many orders of magnitude more costly than just building the website.

With AI you just have some other input stream that someone might tamper with.

The solution to this would be to have each of the contracting parties provide evidence to the AI, which could include digitally signed (authenticated) data from third parties (security camera operators, shipping companies, etc.), and have the AI make judgments about them the same way a human judge would.

Comment author: HungryHobo 27 June 2016 11:54:53AM 0 points [-]

If you're going to rely on signed data from third parties then you're still trusting third parties.

In a dozen or so lines of code you could create a system that collects signed and weighted opinions from a collection of individuals or organisations, making it simple to encode arbitration (does the delivery company say they delivered it, etc.).
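As a rough illustration of the scheme described above (a sketch only: the arbiter names, keys, and weights are made up, and HMAC stands in for whatever real signature scheme would be used):

```python
import hmac
import hashlib

# Hypothetical arbiters with pre-shared keys and trust weights assigned at setup.
ARBITERS = {
    "shipping_co": {"key": b"key-1", "weight": 2},
    "camera_op":   {"key": b"key-2", "weight": 1},
    "inspector":   {"key": b"key-3", "weight": 1},
}

def sign(key: bytes, verdict: str) -> str:
    """An arbiter signs its verdict (e.g. "delivered" / "not_delivered")."""
    return hmac.new(key, verdict.encode(), hashlib.sha256).hexdigest()

def decide(votes: dict) -> bool:
    """Count correctly-signed votes by weight; pay out on a weighted majority of "delivered"."""
    total = sum(a["weight"] for a in ARBITERS.values())
    yes = 0
    for name, (verdict, sig) in votes.items():
        arb = ARBITERS.get(name)
        # Only count votes whose signature verifies against the arbiter's key.
        if arb and hmac.compare_digest(sig, sign(arb["key"], verdict)):
            if verdict == "delivered":
                yes += arb["weight"]
    return yes * 2 > total

votes = {
    "shipping_co": ("delivered", sign(b"key-1", "delivered")),
    "camera_op":   ("delivered", sign(b"key-2", "delivered")),
    "inspector":   ("not_delivered", sign(b"key-3", "not_delivered")),
}
print(decide(votes))  # weighted yes = 3 of 4 -> True
```

Note that this makes the trust point explicit: the whole decision reduces to whose keys and weights you hard-coded at the start.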

You're just kicking the trust can down the road.

On the other hand it's unlikely we'll see any reasonably smart AIs built with anything less than millions of lines of code (or code and data), and flaws anywhere in them could destroy the security of the whole system.

This is not a great use for AI until we 1) actually have notable AI and 2) have formally proven the code that makes it up, which is a far larger undertaking.

Comment author: HungryHobo 24 June 2016 03:41:27PM *  5 points [-]

OK, for some context here, I think a lot of people are getting hung up on the words "contract" or "smart contract".

If we want to talk about it intelligently it helps to taboo the term "contract" or else people get terribly terribly confused like in some of the existing comments.

I'm going to say "independent program" instead of "smart contract" for clarity.

Ethereum allows for the existence of independent programs which can hold and transfer money. The rules have to be hardcoded into them when you create them. They cannot be changed once launched.

For example, if you wanted to create a prize fund for people who've factored a large number, you could create an independent program which accepts a set of numbers, decides whether they're the prime factors of that number, and if so transfers the prize fund to the first person to submit them.

Years later, someone factors the number, connects and gets their payment.
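The payout rule in that example fits in a few lines. This is just an ordinary program standing in for the on-chain code, to show how mechanical the check is:

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for a sketch, far too slow for cryptographic-size numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def claim_is_valid(target: int, factors: list[int]) -> bool:
    """Pay out only if every submitted number is prime and their product is the target."""
    product = 1
    for f in factors:
        if not is_prime(f):
            return False
        product *= f
    return product == target

print(claim_is_valid(15, [3, 5]))   # True
print(claim_is_valid(15, [1, 15]))  # False: 1 and 15 aren't prime
```

Once logic like this is locked in, there is no appeals process: the program pays whoever first satisfies the predicate, and nobody else.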

You might be a thousand years dead but the independent program is still in the system and has control of the funds.

Depending on how you write it, you may not even be able to retrieve the funds yourself without solving the problem.

It doesn't matter if a court orders you to pay out because they have a law declaring pi to be exactly three and someone has come forward with a kooky proof for their "factored" number. If it doesn't match the rules of the program there's nothing you or the court can do.

If you've not given yourself control at the start it will sit there until the end of time until someone actually factors the number.

Or perhaps you set up the program badly and it accepts numbers which aren't the factors as correct, paying out to someone who hasn't factored anything.

It is not a legal contract which says "I will give money to the person who solves this problem"; it's a piece of code which, when followed, may give the money in its control to someone who connects to it, depending on the rules programmed into it.

Some funds have been set up controlled by independent programs and their "about" pages tend to say something along the lines of "you're agreeing to the code being followed, anything we write here is just our best try at explaining what the code does, here is the full source code, if you're happy with this then you're free to give control of some funds to this code or a copy of this code"

Comment author: tsathoggua 22 June 2016 11:26:28PM 0 points [-]

In the specific case of the project known as 'TheDAO', the terms of service does indeed waive all legal rights and says that whatever the computer program says supersedes all human-world stuff.

I may have missed it, but that is not at all what the link you posted says. It has a waiver of liability against third parties (basically the DAO operation). It does not say that you cannot have liability between two parties subject to a contract, or even seem to mention anything about dispute resolution.

Also, I would like to point out that you CANNOT have a contract that requires an illegal act. For instance, you cannot create a contract that says "Person A waives all legal recourse against Person B if Person B murders them." The act of murder is still illegal even if both parties agree to it.

Finally, the TOS for DAO is not the contract, it is merely the TOS for using the service. So the individual contracts between two people are going to override that.

Comment author: HungryHobo 24 June 2016 03:10:56PM *  0 points [-]

You're still conflating the term "smart contract" and the idea of a legal contract.

That's like conflating "observer" in physics with a human staring at you, or hearing someone talk about a daemon on their server and imagining a red-skinned monster from hell perched on the server.

Imagine someone says

"This is a river, if you throw your money in it will end up somewhere, we call the currents a 'water contract', the only difference to a normal river is that we've got the paperwork signed such that this doesn't count as littering"

It does indeed end up somewhere and you're really really unhappy about where it ends up.

Who do you think you're going to take to court and for what contract?

Comment author: Lumifer 23 June 2016 02:20:18AM 2 points [-]

But you probably could achieve substantially the same effect.

That depends. Courts can, and on occasion do, rewrite contracts (or refuse to enforce them) because they consider the contract to be inequitable or, in simpler terms, unjust.

Comment author: HungryHobo 24 June 2016 02:56:51PM *  1 point [-]

Though unfortunately for them, once launched, the particular type of smart contract in question enforces itself (since it handles the transfers itself), and rewriting it isn't really possible without destroying the entire system. So the court isn't being asked for help enforcing the contract, and a ruling asking to change it is about as enforceable as a ruling declaring the moon an unlicensed aircraft that must cease flight immediately, unless you can get your hands on both parties and physically force them to make adjustments using new transactions.

It's complicated even more by the fact that contracts can themselves be recipients/actors within this system.

Comment author: HungryHobo 24 June 2016 09:48:54AM 5 points [-]

AI is complex. Complexity means bugs. Bugs in smart contracts are exactly what you need to avoid.

What is needed the most is mathematically proving code.

For certain contract types you're going to need some way of confirming that, say, physical goods have been delivered, but you gain nothing by adding AI to the mix.

Without AI you have a switch someone has to toggle or some other signal that someone might hack. With AI you just have some other input stream that someone might tamper with. Either way you need to accept information into the system somehow and it may not be accurate. AI does not solve the problem. It just adds complexity which makes mistakes more likely.

When all you have is a hammer, everything looks like a nail; when all you have is AI theories, everything looks like a problem to throw AI at.

Comment author: SquirrelInHell 01 June 2016 04:38:29AM 1 point [-]

The first story I saw on the main page was "The Metropolitan Man".

I thought "OK, I'll give it 10 minutes, see how far it can get".

After 1.5 minutes the story displayed a total lack of understanding of middle-school level physics.

Superman prevented an accident by flying into the space between two cars and stopping them with his hands. However, deceleration over a distance equal to the length of one human arm is no less lethal than over the (approximately equal) length of an average crumple zone.
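The back-of-envelope numbers behind that objection, assuming a city-street impact speed of about 50 km/h and a stopping distance of roughly one arm length (both figures are my own assumptions, not from the story):

```python
v = 14.0   # assumed impact speed, m/s (~50 km/h)
d = 0.6    # assumed stopping distance, m (roughly one arm length)
g = 9.81   # standard gravity, m/s^2

# Constant deceleration from v to rest over distance d: v^2 = 2*a*d
a = v ** 2 / (2 * d)
print(round(a, 1), round(a / g, 1))  # ~163.3 m/s^2, ~16.6 g
```

That's the same order of deceleration as a crash into a crumple zone of similar length, so stopping the cars with his hands saves nobody inside them.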

I mean, I don't want to poop on the party, but seriously?

Comment author: HungryHobo 02 June 2016 01:12:27PM *  5 points [-]

Sure, and the traditional plot line where Superman grabs a plummeting jet would actually lead to the jet tearing like tissue paper around wherever he grabbed it.

A certain level of "ok superman has a small physics-free bubble around him" needs to be granted if you want to do anything with superman.

A lot of ink has been spilled by geeks trying to come up with self-consistent systems under which superman could do what he regularly does in the stories.

Comment author: Lumifer 27 May 2016 02:25:16PM 3 points [-]

extremely financially secure with lots of reserves

With a couple of hundred thousand dollars? They don't make you financially independent (defined as "don't have to work"), you can't even buy an apartment in SF or NYC, etc...

Comment author: HungryHobo 27 May 2016 04:30:56PM *  1 point [-]

Yeah, but I don't want to buy an apartment in New York.

Again, I didn't say utility goes to zero. It just drops off dramatically. The difference between 0 and 250K is far bigger in terms of utility than the difference between 250K and 500K. You still can't buy a New York apartment, and having 500K is better than having only 250K, but in terms of how it changes your life the first increment is far more significant.

Comment author: Lumifer 26 May 2016 03:14:58PM 3 points [-]

anything more than a couple hundred K provides little utility at all

How do you know?

Comment author: HungryHobo 27 May 2016 10:22:46AM 1 point [-]

Because at that point I'm tapdancing on the top of Maslow's Hierarchy of Needs, extremely financially secure with lots of reserves.

It doesn't go to zero, but it's like the difference between the utility of an extra portion of truffle dessert when I'm already stuffed vs the utility of a few bags of Plumpy'Nut when I have a starving child.

Comment author: entirelyuseless 26 May 2016 01:32:49PM 0 points [-]

I don't think this works out, if you think you are agreeing with Villiam. Suppose your net worth is $20,000. Then the utility increase represented by $100 is going to be [proportional to] 0.00498. On the other hand, the utility increase represented by $10,000 is going to be [proportional to] 0.40546. That is, $10,000 will be 81 times as valuable as $100.

In other words, it is less than 100 times as valuable. But not by that much, and certainly not by enough to explain the degree to which people prefer the certain $100.

Comment author: HungryHobo 26 May 2016 02:38:33PM 0 points [-]

Using your net worth as part of the calculation doesn't feel right.

Even if my net worth is quite high much of that may be inaccessible to me short term.

If I have 100,000 in liquid cash then 100 has lower utility to me than if I have 100,000 in something non-liquid like a house and no cash.

Comment author: Viliam 26 May 2016 10:21:55AM *  5 points [-]

Utility is approximately the logarithm of money. Pretend otherwise, and you will get results that go against the intuition, duh.

Utility is linear to money only if we take such a small part of the logarithmic curve that it is more or less linear at the given interval. But this is something you cannot extrapolate to situations where the part of the logarithmic curve is significantly curved. Two examples of linearity:

1) You are a millionaire, so you more or less don't give a fuck about getting or not getting $1000. In such case you can treat small money as linear and choose B. If you are not a millionaire, imagine that it is about certainty of 24¢ versus 25% chance of $1.

2) You are an effective altruist and you want to donate all the money to a charity that saves human lives. If $1000 is very small compared with the charity budget, we can treat the number of human lives saved as a linear function of extra money given. (See: Circular Altruism.)

Comment author: HungryHobo 26 May 2016 12:41:07PM 0 points [-]

Yep, for me 100 dollars provides a nice chunk of utility. 10000 does not provide 100 times as much utility and anything more than a couple hundred K provides little utility at all.

In theory that 1.6 billion Powerball lottery had a (barely) positive expected return (depending on how taxes work out), and thus rationalists should throw money at it, but in reality a certainty of 1 dollar is better than a 1-in-a-billion chance of getting 1.6 billion. (I know these numbers aren't exact.)
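The rough arithmetic behind the "barely positive" claim, using the advertised $1.6B jackpot and the published Powerball jackpot odds of about 1 in 292 million, and ignoring taxes, the lump-sum discount, split jackpots, and the smaller prizes:

```python
ticket = 2.0            # ticket price, dollars
jackpot = 1.6e9         # advertised jackpot, dollars
odds = 1 / 292_201_338  # approximate Powerball jackpot odds

ev = jackpot * odds - ticket  # expected profit per ticket under these assumptions
print(round(ev, 2))           # ~3.48: positive on paper
```

In practice taxes, taking the lump sum, and the chance of splitting the jackpot push the real expected value well below this and usually negative, which is part of why "depending on how taxes work out" is doing real work in the sentence above.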
