I would like to raise a discussion topic in the spirit of trying to quantify risk from uncontrolled / unsupervised software.
What is the maximum autonomy that has been granted to an algorithm according to your best estimates? What is the likely trend in the future?
The estimates could be in terms of money, human lives, processes, etc.
Another estimate could be the time it takes for a human to enter the process and say "This isn't right."
A high-speed trading algorithm has a lot of money on the line, but a drone might have lives on the line.
Many business processes could be affected by data coming in via an API from a system built under slightly different assumptions, with potentially catastrophic results, e.g. http://en.wikipedia.org/wiki/2010_Flash_Crash
The reason this topic might be worth researching is that it is a relatively easy-to-communicate risk of AGI. Many people may hold the implicit assumption that whatever software is deployed in the real world, there are humans in the loop to counterbalance it. For them, empirical evidence that they are mistaken about the autonomy granted to present-day software may shift their beliefs.
EDIT : formatting
I had recently heard about them, and I've linked an article below that shows a demo of one. They do exist, although I can't give you specifics on how common they are, and they probably aren't currently used for most deliveries. I'd imagine a local pizza place has less automation, but that will definitely vary in extent and in how fast they need to produce pizzas.
http://www.huffingtonpost.com/2012/06/13/pizza-vending-machine-lets-pizza_n_1593115.html