Agreed, this is a good point. Here are some thoughts my contrarian comment generator had in response to this:
It's also not a particularly lucrative place to apply the upper end of powerful agent intelligence. While ultimately everything boils down to algorithmic trading, the most lucrative trades are made by starting an actual company around a world-changing product. As a trader, the agent would often want to start such a company to make more money, and that action isn't available until you go far enough down the diminishing-returns curve of world modeling that the agent can plan through moves as indirect as manipulating a stock price to communicate.
Also, high-frequency trading is not likely to use heavy ML any time soon, due to strict latency constraints, and longer-term trading competes against human traders who, while imperfect, are still some of the least inadequate people in the world at predicting the future.
These are interesting contrarian comments.
Regarding ML in high-frequency trading, I'm not sure there is a significant impediment. What one would do there (and maybe someone already does?) is use ML to control the parameters of, and ultimately to design from scratch, the algorithms that do the trading itself, so that the ML runs with high latency in the background while the algorithms operate in real time; see the sketch below.
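For concreteness, here is a minimal sketch of that split, with all names and numbers hypothetical: a latency-critical loop trades with fixed parameters, while a slow background job (standing in for an ML model) periodically refits and swaps in new parameters.

```python
# Minimal sketch (all names hypothetical) of the split described above:
# a fast, dumb execution loop runs in real time with fixed parameters,
# while a slow ML job refits those parameters in the background.

import threading
import time
import random

class ParamStore:
    """Thread-safe holder for the trading algorithm's tunable parameters."""
    def __init__(self):
        self._lock = threading.Lock()
        self._params = {"spread": 0.02, "max_position": 100}

    def get(self):
        with self._lock:
            return dict(self._params)

    def update(self, new_params):
        with self._lock:
            self._params.update(new_params)

def fast_trading_loop(params: ParamStore, ticks: int):
    """Latency-critical loop: simple fixed rules, no ML in the hot path."""
    for _ in range(ticks):
        p = params.get()
        mid = 100 + random.gauss(0, 1)           # stand-in for a market feed
        bid = mid - p["spread"] / 2
        ask = mid + p["spread"] / 2
        # ... place/cancel orders here, bounded by p["max_position"] ...
        time.sleep(0.001)                        # stand-in for tick cadence

def slow_ml_tuner(params: ParamStore, rounds: int):
    """High-latency background job: refit parameters from recent data."""
    for _ in range(rounds):
        time.sleep(0.5)                          # stand-in for model training
        fitted_spread = 0.01 + random.random() * 0.03  # stand-in for an ML fit
        params.update({"spread": fitted_spread})

if __name__ == "__main__":
    store = ParamStore()
    tuner = threading.Thread(target=slow_ml_tuner, args=(store, 4), daemon=True)
    tuner.start()
    fast_trading_loop(store, ticks=3000)
```

The point of the design is that nothing in the hot path waits on the ML; the model's output only ever arrives as a parameter update.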
A similar idea was presented in the r/ControlProblem subreddit a few days ago.
A quote from a longer post: "Instead of using the "seed AI" analogy, I want to describe a different scenario where, paradoxically, it is not the core intelligence that drives improvement, but external economic incentives. One can think of such an AI as a kind of economic system.
At its core, it is pretty dumb, but it happens to work very well in a complex environment. Instead of having intelligence as its core, it outsources the parts of problem solving requiring intelligence to third parties, which use the system as a trading platform".
I commented there and would add here again that it looks like Bitcoin, which even outsources intelligence for ASIC building, and I could imagine it creating incentives for some miners to buy weapons to protect their network. In the end, a fully automated ascending economy could emerge from Bitcoin, and I would not be surprised if the universe ended up tiled by miners. The idea of Bitcoin as a paperclipper has started to appear often, fueled by its growth as an electricity consumer.
Wasn't this averted by there being a finite supply of potential bitcoins, such that eventually miners only receive what the transaction senders are willing to pay?
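For concreteness, a minimal sketch of the issuance schedule behind that claim (helper names are mine): the block subsidy halves every 210,000 blocks, so total issuance converges just below 21 million BTC, after which miners are paid only from transaction fees.

```python
# Minimal sketch (hypothetical helper names) of Bitcoin's issuance schedule:
# the block subsidy halves every 210,000 blocks, so total issuance converges
# to just under 21 million BTC, after which miners earn only transaction fees.

HALVING_INTERVAL = 210_000          # blocks between subsidy halvings
INITIAL_SUBSIDY_SATS = 50 * 10**8   # 50 BTC, in satoshis (1 BTC = 1e8 sats)

def block_subsidy_sats(height: int) -> int:
    """Subsidy (in satoshis) paid to the miner of the block at `height`."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0
    return INITIAL_SUBSIDY_SATS >> halvings  # integer halving, as in Bitcoin Core

def total_supply_sats() -> int:
    """Sum of all subsidies ever paid: converges below 21 million BTC."""
    return sum(block_subsidy_sats(h * HALVING_INTERVAL) * HALVING_INTERVAL
               for h in range(64))

if __name__ == "__main__":
    print(block_subsidy_sats(0) / 1e8)        # 50.0 BTC
    print(block_subsidy_sats(840_000) / 1e8)  # 3.125 BTC (4th halving)
    print(total_supply_sats() / 1e8)          # ~20,999,999.98 BTC
```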
In Bitcoin's case probably yes, but there are other cryptocurrencies, which now consume around half of all mining electricity. It is a good example of a case where the initial "AI" has some anti-paperclipping properties but its counterfactual copies don't.
I suspect this observation is far from original, but I didn't see it explicitly said anywhere, so it seemed worthwhile spelling it out.
The paperclip maximizer is a popular example of unfriendly AI, but it's not the most realistic one (obviously it wasn't meant to be). It might be useful to think about which applications of AI are the most realistic examples, i.e. which applications are both likely to use state-of-the-art AI and especially prone to failure modes (which is not to say that other applications are not dangerous). In particular, if AI risk considerations ever make it into policy, such an analysis is one thing that might help inform it.
One application that stands out is algorithmic trading. Consider the following: