I haven't given much thought to the concept of automation and computer-induced unemployment. Others at the FHI have been looking into it in more detail - see Carl Frey and Michael Osborne's "The Future of Employment", which estimated the degree of automatability of 70 hand-chosen professions, and extended these results to other occupations using O*NET, an online service developed for the US Department of Labor that describes the key features of an occupation as a standardised, measurable set of variables.
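Mechanically, that extrapolation can be thought of as a supervised-learning problem: treat the hand-assessed occupations as training labels and the O*NET variables as features, then score the remaining occupations. Here is a minimal sketch of that idea in Python - the data is synthetic and the feature set invented, so it illustrates the shape of the method rather than the paper's actual pipeline:

```python
# Sketch of a Frey/Osborne-style extrapolation: fit a classifier on a small
# hand-labelled set of occupations, then score the rest via O*NET features.
# All data here is synthetic; the features are illustrative placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)

# Pretend O*NET gives each occupation a vector of standardised scores,
# e.g. (manual dexterity, social perceptiveness, originality).
n_features = 3
labelled_X = rng.random((70, n_features))       # 70 hand-assessed occupations
labelled_y = (labelled_X[:, 0] > labelled_X[:, 2]).astype(int)  # 1 = automatable

unlabelled_X = rng.random((600, n_features))    # remaining occupations

clf = GaussianProcessClassifier().fit(labelled_X, labelled_y)
p_automatable = clf.predict_proba(unlabelled_X)[:, 1]
print(p_automatable[:5])  # estimated probability of automatability
```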
The reason I haven't been looking at it much is that AI-unemployment has considerably less impact than AI-superintelligence, and is thus a less important use of time. However, if automation does cause mass unemployment, then advocating for AI safety will happen in a very different context to the current one. Much will depend on how that mass unemployment problem is dealt with, what lessons are learnt, and the views of whoever is most powerful in society. Off the top of my head, I can sketch four scenarios, according to whether AI risk goes up or down and whether the unemployment problem is satisfactorily "solved" or not:
| AI risk \ Unemployment | Problem solved | Problem unsolved |
|---|---|---|
| Risk reduced | With good practice in dealing with AI problems, people and organisations are willing and able to address the big issues. | The world is very conscious of the misery that unrestricted AI research can cause, and very wary of future disruptions. Those at the top want to hang on to their gains, and they are the ones with the most control over AI and automation research. |
| Risk increased | Having dealt with the easier automation problems in a particular way (e.g. taxation), people underestimate the risk and expect the same solutions to work. | Society is locked into a bitter conflict between those benefiting from automation and those losing out, and superintelligence is seen through the same prism. Those who profited from automation are the most powerful, and decide to push ahead. |
But of course the situation is far more complicated, with many different possible permutations, and no guarantee that the same approach will be used across the planet. And the division into four boxes shouldn't fool us into thinking the scenarios are of comparable probability - more research is (really) needed.
Here's a video on AI job automation, intended to be accessible to a nontechnical audience, but still interesting:
http://qz.com/250154/still-think-robots-cant-do-your-job-this-video-may-change-your-mind/
This video is a polished example of the Luddite fallacy.
Are things “different this time”? As long as we haven't created godlike AI, humans will still have a comparative advantage in something: comparative advantage doesn't require being better at any task in absolute terms, only that opportunity costs differ.
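For what it's worth, the comparative advantage point is pure arithmetic: even an AI that is absolutely better at every task gains from leaving some tasks to humans, so long as opportunity costs differ. A toy two-task example (all numbers invented):

```python
# Toy comparative-advantage arithmetic (all numbers invented).
# The AI is absolutely better at both tasks, but its opportunity cost
# of doing task B is higher than the human's, so specialisation still pays.

ai_output    = {"A": 100, "B": 10}   # units per hour
human_output = {"A": 2,   "B": 1}    # units per hour

# Opportunity cost of producing one unit of B, measured in forgone units of A.
ai_cost_of_B    = ai_output["A"] / ai_output["B"]        # 10 units of A
human_cost_of_B = human_output["A"] / human_output["B"]  # 2 units of A

# The human gives up less A per unit of B, so the human has the
# comparative advantage in B despite being worse at it absolutely.
assert human_cost_of_B < ai_cost_of_B
print(f"AI forgoes {ai_cost_of_B} A per B; human forgoes {human_cost_of_B} A per B")
```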