Kaj_Sotala comments on AlphaGo versus Lee Sedol - Less Wrong

17 Post author: gjm 09 March 2016 12:22PM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (183)

You are viewing a single comment's thread. Show more comments above.

Comment author: Kaj_Sotala 18 March 2016 11:02:34AM 1 point [-]

have separate risk management systems which impose constraints and limits on what they can do.

If those risk management systems are themselves software, that doesn't really change the overall picture.

Yes, but that has nothing to do with AI:

If we're talking about "would companies place AI systems in a role where those systems could cost the company lots of money if they malfunctioned", then examples of AI systems having been placed in roles where they cost the company a lot of money have everything to do with the discussion.

Comment author: Lumifer 18 March 2016 03:01:59PM -1 points [-]

If those risk management systems are themselves software, that doesn't really change the overall picture.

It does because the issue is complexity and opaqueness. A simple gatekeeper filter along the lines of

 if (trade.size > gazillion) { reject(trade) }

is not an "AI system".

Comment author: Torchlight_Crimson 19 March 2016 12:52:48AM 1 point [-]

In which case the AI splits the transaction into 2 transactions, each just below a gazillion.

Comment author: Lumifer 21 March 2016 02:50:48PM 0 points [-]

I'm talking about contemporary-level-of-technology trading systems, not about future malicious AIs.

Comment author: Crownless_Prince 21 March 2016 11:58:56PM 1 point [-]

So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.

Comment author: Lumifer 22 March 2016 12:07:42AM -1 points [-]

So? An opaque neural net would quickly learn how to get around trade size restrictions if given the proper motivations.

At which point the humans running this NN will notice that it likes to go around risk control measures and will... persuade it that it's a bad idea.

It's not like no one is looking at the trades it's doing.

Comment author: Crownless_Prince 22 March 2016 12:16:58AM 1 point [-]

At which point the humans running this NN will notice that it likes to go around risk control measures and will... persuade it that it's a bad idea.

How? By instituting more complex control measures? Then you're back to the problem Kaj mentioned above.

Comment author: Lumifer 22 March 2016 04:59:08PM -1 points [-]

How?

In the usual way. Contemporary trading systems are not black boxes full of elven magic. They are models, that is, a bunch of code and some data. If the model doesn't do what you want it to do, you stick your hands in there and twiddle the doohickeys until it stops outputting twaddle.

Besides, in most trading systems the sophisticated part ("AI") is an oracle. Typically it outputs predictions (e.g. of prices of financial assets) and its utility function is some loss function on the difference between the prediction and the actual. It has no concept of trades, or dollars, or position limits.

Translating these predictions into trades is usually quite straightforward.