Mechanical Engineering magazine (paywalled until next month) and the Financial Times, among others, recently reviewed the book Race Against the Machine by economists Erik Brynjolfsson and Andrew McAfee. The FT reviewer writes:
Pattern recognition, the authors think, will quickly allow machines to branch out further. Computers will soon drive more safely than humans, a fact Google has demonstrated by allowing one to take out a Toyota Prius for a 1,000-mile spin. Truck and taxi drivers should be worried – but then so should medical professionals, lawyers and accountants; all of their jobs are at risk too. The outcome is a nightmarish but worryingly convincing vision of a future in which an ever-decreasing circle of professions is immune from robotic encirclement.
And ME magazine quotes McAfee in an interview:
Once computers get better than people, you don't have to hire people to do that job any more. That doesn't mean that people can't find work. There will always be an amount of work to do, but they won't like the wages they are offered.
Both reviewers also hint that McAfee and Brynjolfsson offer a partial explanation of the "jobless recovery", but either the book's argument is weak or the reviewers do a poor job summarizing it. Such a purported explanation might be the main attraction for most readers, but I'm more interested in the longer-term picture. Whether as the "nightmarish vision" of the future mentioned in the FT review or as McAfee's simpler point about wages, this might be a good hook to get the general public thinking about the long-term consequences of AI.
Is that a good idea? Should sleeping general publics be left to lie? There seems to be significant reluctance among many LessWrongers to stir the public, but have we ever hashed out the reasons for and against? Please describe any non-obvious reasons on either side.
Well, so is large-scale primate extermination leaving an empty husk of a planet.
The question is not so much whether the primates exist in the future, but what exists in the future and whether it's something we should prefer to exist. I accept that there probably exists some X such that I prefer (X + no humans) to (humans), but it certainly isn't true that for all X I prefer that.
So whether bringing that curtain down on dead-end primate dramas is something I would celebrate depends an awful lot on the nature of our "mind children."
OK, but if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren't smarter humans generally more benevolent toward animals than stupider humans are? Why shouldn't this hold for AIs? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species de...