I think it makes sense to keep the term "Friendly AI" for "stably self-modifying AGI which optimizes for humane values."
Narrow AI safety work — for extant systems and near-term future systems — goes by many different names. We (at MIRI) have touched on that work in several posts and interviews, e.g.:
I am aware that Yudkowsky considers the two main aspects of FAI theory to be (a) formalising the maths required for an agent to self-modify without losing its values, and (b) being able to correctly infer optimal values from humans. These two aims seem quite separate from most work in narrow AI, which involves optimising for a single task.
You seem to call pretty much all sorts of software which makes some decisions "narrow AI". I don't think that's a useful use of the term.
To use your example, most of HFT is basically a very short-term statistical model coupled with an execution engine. It's neither magic, nor AI.
Which is why it frankly ought to be stopped. It doesn't add any informational value to the prices of stocks, nor does it raise any capital for investment. What it does is allow people with expensive computers in very expensive locations to divert a good chunk of the return from other people's investments into their own pockets by gaming the system. That sort of thing is not good for the economy, the social order or, well, anyone other than the people with a technological spigot hammered into the veins of the stock market.
Which is why it frankly ought to be stopped.
I disagree.
It doesn't add any informational value to the prices of stocks, nor does it raise any capital for investment.
That's not why it's useful. It's useful because it provides liquidity and reduces the costs of trading.
to divert a good chunk of the return from other people's investments into their own pockets by gaming the system
I don't think this statement is true.
That sort of thing is not good for ... the social order
8-/ Lots of things (like questioning authority) are not good for the social order. I don't consider that a compelling argument.
That's not why it's useful. It's useful because it provides liquidity and reduces the costs of trading.
Absent other people getting their trades completed slightly ahead of you, getting your trades completed in a millisecond instead of a second is that valuable? I'm not being rhetorical; I know very little about finance. What processes in the rest of the economy happen fast enough to make millisecond trading worthwhile?
I would have guessed a failure to solve a co-ordination problem. That is, at one time trades were executed on the timescale of minutes (or maybe even days or weeks, once upon a time), and at every point in time since, there has been a marginal advantage to getting your trades done a little faster than everyone else. At some point the costs of HFT outweighed the liquidity benefits, but no-one (alone) was in a position to back out without losing; the end result being major engineering projects aimed at shaving milliseconds off network propagation delays, and flash crashes.
I can imagine an alternative universe where, at the point when trade times got down under a second, everyone got together and said "look, this could get silly", and agreed that exchanges should collect trades arriving in 1-second buckets and execute them in a randomly permuted order. (Or does something like that not work for some obvious reason?)
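The bucket-and-shuffle idea above can be sketched in a few lines. This is a toy model only, with made-up names and numbers; real exchanges would need to handle order types, ties, and clock synchronisation, none of which is addressed here:

```python
import random

def execute_in_batches(orders, bucket_seconds=1.0):
    """Group orders into fixed-width time buckets and execute each
    bucket in a randomly permuted order, so that arriving a few
    milliseconds earlier confers no advantage within a bucket.

    `orders` is a list of (timestamp_seconds, order_id) pairs.
    Returns the order_ids in execution order.
    """
    buckets = {}
    for ts, oid in orders:
        buckets.setdefault(int(ts // bucket_seconds), []).append(oid)

    execution_order = []
    for key in sorted(buckets):       # buckets still execute in time order
        batch = buckets[key][:]
        random.shuffle(batch)         # within a bucket, arrival time is irrelevant
        execution_order.extend(batch)
    return execution_order

# Two orders 5 ms apart land in the same bucket, so their relative
# execution order is random rather than first-come-first-served;
# the order arriving after the 1-second boundary always executes last.
orders = [(0.100, "slow_trader"), (0.095, "hft_trader"), (1.200, "later_order")]
print(execute_in_batches(orders))
```

Under this scheme the incentive to shave milliseconds off propagation delay disappears, since nothing inside a bucket depends on arrival time.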
(Also, I would guess that HFT does not divert "a good chunk" of the return from other people's investments; if it were more than a sliver, I suspect the co-ordination problem would have been solved.)
getting your trades completed in a millisecond instead of a second is that valuable?
The benefit to the small investor is not really faster execution -- it is lower bid-ask spread and lower trading costs in general.
For example there was a recent "natural experiment" in Canada (emphasis mine):
...in a recent natural experiment set off by Canada’s stock market regulators. In April 2012 they limited the activity of high-frequency traders by increasing the fees on market messages sent by all broker-dealers, such as trades, order submissions and cancellations. This affected high-frequency traders the most, since they issue many more messages than other traders.
The effect, as measured by a group of Canadian academics, was swift and startling. The number of messages sent to the Toronto Stock Exchange dropped by 30 percent, and the bid-ask spread rose by 9 percent, an indicator of lower liquidity and higher transaction costs.
But the effects were not evenly distributed among investors. Retail investors, who tend to place more limit orders — i.e., orders to buy or sell stocks at fixed prices — experienced lower intraday returns. Institutional investors, who placed more market orders, buying and selling at whatever the market price happened to be, did better. In other words, the less high-frequency trading, the worse the small investors did.
…In a paper published last year, Terry Hendershott of Berkeley, Jonathan Brogaard of the University of Washington and Ryan Riordan of the University of Ontario Institute of Technology concluded that, “Over all, HFTs facilitate price efficiency by trading in the direction of permanent price changes and in the opposite direction of transitory price errors, both on average and on the highest volatility days.”
(source)
Here are some informed opinions on HFT: here, here, and here. If you want a more sceptical, but still informed opinion, here's an example.
invisiblegirlfriend.com and invisibleboyfriend.com are not self-aware computers, but they include computers pretending to be human.
There are a few humans that beat all other humans at chess. There are a few machines that beat those humans. Those machines and all the humans are beaten by human/computer teams, where the computer does what it does best (running through many options) and so does the human (making the final choice among machine-suggested moves). The above services seem like this: a human/computer team designed to play a (social) game well.
I'd place this in the arena of immediate and real world friendly AI research.
I have no affiliation with these commercial services.
Self-driving cars may someday be put into a position where they have to choose between one of two victims to crash into. This puts the programmers of self-driving cars (or even of sophisticated crash-avoidance systems for cars that aren't fully autonomous) in the position of having to solve a real-life version of the trolley problem.
Much of the glamor and attention paid toward Friendly AI is focused on the misty-future event of a super-intelligent general AI, and how we can prevent it from repurposing our atoms to better run Quake 2. Until very recently, that was the full breadth of the field in my mind. I recently realized that dumber, narrow AI is a real thing today, helpfully choosing advertisements for me and running my 401K. As such, making automated programs safe to let loose on the real world is not just a problem to solve as a favor for the people of tomorrow, but something with immediate real-world advantages that has indeed already been going on for quite some time. Veterans in the field surely already understand this, so this post is directed at people like me, with only a passing, casual understanding of the point of Friendly AI research, and outlines an argument that the field may be useful right now, even if you believe that an evil AI overlord is not on the list of things to worry about in the next 40 years.
Let's look at the stock market. High-Frequency Trading is the practice of using computer programs to make fast trades constantly throughout the day, and it accounts for more than half of all equity trades in the US. So the economy today is already in the hands of a bunch of very narrow AIs buying and selling to each other. And as you may or may not already know, this has already caused problems. In the "2010 Flash Crash", the Dow Jones suddenly and mysteriously took a massive plummet, only to mostly recover within a few minutes. The reasons for this were of course complicated, but it boiled down to a couple of red flags triggering in numerous programs, setting off a cascade of wacky trades.
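The cascade dynamic can be illustrated with a toy model: each automated trader dumps its position once the price falls below its stop-loss threshold, and every sale pushes the price down further, which can trip the next trader's threshold. All numbers here are invented for illustration and bear no relation to the actual 2010 event:

```python
def simulate_cascade(price, thresholds, sell_impact=0.02):
    """Toy flash-crash model. `thresholds` gives each trader's
    stop-loss level; a trader sells once the price falls below it,
    and each sale knocks `sell_impact` (2% here) off the price.
    Returns the price history as the cascade unfolds.
    """
    triggered = set()
    history = [price]
    changed = True
    while changed:
        changed = False
        for i, level in enumerate(thresholds):
            if i not in triggered and price < level:
                triggered.add(i)
                price *= (1 - sell_impact)   # this sale depresses the price...
                history.append(price)
                changed = True               # ...possibly tripping other thresholds
    return history

# A small dip to 99 trips the first stop-loss at 100; that sale drops
# the price below 98, tripping the next, and so on down the chain.
print(simulate_cascade(price=99.0, thresholds=[100, 98, 96, 94]))
```

The point of the sketch is only that each program behaves sensibly in isolation, yet the interaction produces a drop far larger than the initial dip.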
The long-term damage was not catastrophic to society at large (though I'm sure a couple of fortunes were made and lost that day), but it illustrates the need for safety measures as we hand over more and more responsibility and power to processes that require little human input. It may be a long while before anyone makes true general AI, but adaptive city traffic-light systems are entirely plausible in the coming years.
To me, Friendly AI isn't solely about making a human-like intelligence that doesn't hurt us; we also need techniques for testing automated programs, predicting how they will act when let loose on the world, and anticipating how they'll behave when faced with unpredictable situations. Indeed, when framed like that, it looks less like a field for "the singularitarian cultists at LW", and more like a narrow-but-important specialty in which quite a bit of money might be made.
After all, I want my self-driving car.
(To the actual researchers in FAI – I'm sorry if I'm stretching the field's definition to include more than it does or should. If so, please correct me.)