one where AI systems are trusted with enormous sums of money
Kinda. They are carefully watched and have separate risk management systems which impose constraints and limits on what they can do.
E.g., one company apparently lost $440 million in less than an hour due to a glitch in their trading software.
Yes, but that has nothing to do with AI: "To err is human, but to really screw up you need a computer". Besides, there are equivalent human errors of equivalent magnitude, such as fat-finger mistakes that inadvertently add a few zeros to a trade.
Great work!
A clarifying question - is this more of a "here are the changes that we're going to make unless people find serious problems with them" kind of document (implying that ~everything in it will be implemented), or more of a "here are changes that we think seem the most promising, later on we'll decide which ones we'll actually implement" type of document (implying that only some limited subset will be implemented)?