Following several somewhat misleading articles quoting me, I thought I’d present the top 9 myths about the AI risk thesis:

  1. That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.
  2. That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.
  3. That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe... you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is certain; “Possible AI risk” is where we stand.
  4. That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.
  5. That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.
  6. That there’s one simple trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a tool and socialising with humans!).
  7. That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk-aware AI researchers who are most likely to figure out how to make safe AI.
  8. That AIs will be more intelligent than us, hence more moral. It’s pretty clear that in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That in the billions of possible minds out there, there is none that is both dangerous and very intelligent?
  9. That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all, as these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, etc... Anthropomorphising the AIs could lead us completely astray.
Lists cannot be comprehensive, but they can adapt and grow, adding more important points:
  1. That AIs have to be evil to be dangerous. The majority of the risk comes from indifferent or partially nice AIs – those that have some goal to follow, with humanity and its desires just getting in the way: using resources, trying to oppose it, or simply not being perfectly efficient for its goal.
  2. That we believe AI is coming soon. It might; it might not. Even if AI is known to be in the distant future (which isn't known, currently), some of the groundwork is worth laying now.

 

44 comments:

Three more myths, from Luke Muehlhauser:

  • We don’t think AI progress is “exponential,” nor that human-level AI is likely ~20 years away.
  • We don’t think AIs will want to wipe us out. Rather, we worry they’ll wipe us out because that is the most effective way to satisfy almost any possible goal function one could have.
  • AI self-improvement and protection against external modification isn’t just one of many scenarios. Like resource acquisition, self-improvement and protection against external modification are useful for the satisfaction of almost any final goal function.

A similar list by Rob Bensinger:

  • Worrying about AGI means worrying about narrow AI
  • Worrying about AGI means being confident it’s near
  • Worrying about AGI means worrying about “malevolent” AI

because that is the most effective way to satisfy almost any possible goal function

Perhaps more accurate: because that is a likely side effect of the most effective way (etc.).

Not a side effect. The most effective way is to consume the entire cosmic commons just in case all that computation finds a better way. We have our own ideas about what we'd like to do with the cosmic commons, and we might not like the AI doing that; we might even act to try and prevent it or slow it down in some way. Therefore killing us all ASAP is a convergent instrumental goal.

I'm not sure I agree with 9. There is a lot of SF out there, and some of it (Roadside Picnic, Stanislaw Lem's works) presents the idea that the universe is inherently uncaring in a very real way. Anthropomorphization is not an inherent feature of the genre, and fiction might make these ideas real for some people in a way that argument alone doesn't.

I'm not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. Current thinking is probably not, but only because we're forced to live in a third-best world:

First best: Do AI research until just before we're ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until Friendliness is solved.

Second best: Friendliness looks a lot harder than AGI, and we can't expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI is staring them in the face. So stop or slow down AI research now.

Third best: Don't try to stop or slow down AI research because we don't know how to do it effectively, and doing it ineffectively will just antagonize AI researchers and create PR problems.

There are some people who honestly think Friendliness researchers at MIRI and other places actually discourage AI research. That sounds ridiculous to me; I've never seen such an attitude from Friendliness researchers, nor can I even imagine it.

Why is this so ridiculous as to be unimaginable? Isn't the second-best world above actually better than the third-best, if only it was feasible?

That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk-aware AI researchers who are most likely to figure out how to make safe AI.

Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me there was perhaps at least a substantial minority that wanted to do this, to buy time.

I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.

As far as I have noticed, there are few if any voices in the academic/nearby AI safety community that promote slowing AI research as the best (or even a good) option. People talking about relinquishment or slowing seem to be far outside the main discourse, typically people who have only a passing acquaintance with the topic or a broader technology scepticism.

The best antidote is to start thinking about the details of how one would actually go about it: that generally shows why differential development is sensible.

I think differential technological development – prioritising some areas over others – is the current approach. It achieves the same result but has a higher chance of working.

Thanks for your response and not to be argumentative, but honest question: doesn't that mean that you want some forms of AI research to slow down, at least on a relative scale?

I personally don't see anything wrong with this stance, but it seems to me like you're trying to suggest that this trade-off doesn't exist, and that's not at all what I took from reading Bostrom's Superintelligence.

An important distinction jumps out at me: if we slowed down all technological progress equally, that wouldn't actually "buy time" for anything in particular. I can't think of anything we'd want to be doing with that time besides either (1) researching other technologies that might help with avoiding AI risk – the main one that comes to mind is technology for uploading or simulating a human mind before we build an AI from scratch, which sounds at least somewhat less dangerous from a human perspective – or (2) thinking about AI value systems.

Option 2 is presumably the reason why anyone would suggest slowing down AI research, but I think a notable obstacle to it at present is the large number of people who aren't concerned about AI risk because it seems so far away. If we get to the point where people actually expect an AI very soon, then slowing down while we discuss it might make sense.

The trade-off exists. Some ways of resolving it are better than others, and some ways of phrasing it are better than others.

I am one of those proponents of stopping all AI research and I will explain why.

(1) Don't stand too close to the cliff. We don't know how AGI will emerge and by the time we are close enough to know, it's probably too late. Either human error or malfeasance will bring us over the edge.

(2) Friendly AGI might be impossible. Computer scientists cannot even predict the behavior of simple programs in general. The halting problem – a specific kind of prediction – is provably unsolvable for non-trivial code (a sketch of the classic argument is below). I doubt we'll even grasp why the first AGI we build works.
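A minimal sketch of why such prediction is impossible in general – the classic diagonalization argument, written out in Python purely for illustration. The `halts` function here is hypothetical; the whole point is that no total, always-correct version of it can exist:

```python
# Hypothetical perfect predictor, assumed to exist only for the sake of
# contradiction: halts(program, data) would return True if program(data)
# eventually stops, False otherwise.
def halts(program, data):
    raise NotImplementedError("no general, always-correct implementation exists")

def paradox(program):
    """Do the opposite of whatever halts() predicts about running
    'program' on its own source."""
    if halts(program, program):
        while True:      # predicted to halt -> loop forever instead
            pass
    else:
        return           # predicted to loop -> halt immediately

# Ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) returned True, paradox(paradox) loops forever.
#  - If it returned False, paradox(paradox) halts.
# Either way the predictor is wrong, so no such general predictor can exist.
```

The same barrier applies even more strongly to predicting richer properties of a program's behaviour than mere halting (Rice's theorem generalises the result).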

Neither of these statements seems controversial, so if we are determined to not produce unfriendly AGI, the only safe approach is to stop AI research well before it becomes dangerous. It's playing with fire in a straw cabin, our only shelter on a deserted island. Things would be different if someday we solve the friendliness problem, build a provably secure "box", or are well distributed across the galaxy.

Nice. This would probably be useful to have near the end of a primer on AI risk, perhaps as a summary of sorts.

I can only talk about those I've interacted with, and I haven't seen AI research blocking being discussed as a viable option.

Given the speed of AI development in other countries, do we know if any of the work on friendly AI is being translated or implemented outside of the US? Or what the level of awareness of AI friendliness issues is among AI researchers in non-English-speaking countries?

(I realize that IQ isn't an effective test of AI, but this is the article that prompted me wondering: http://www.businessinsider.com/chinese-ai-beat-humans-in-an-iq-test-2015-6. )


Anybody who can contribute to AI research can read English.

I have my own idea of "one simple trick". Might as well solicit opinions here: consider the possibility that developing tools for providing useful summaries of an agent's thinking could substantially lower the risk associated with AGI. If there's an effective way to place trip-wires in the agent's mind to inform us of its thought-crimes, we would know to ignore its protestations and could just pull the plug on it. Or better yet, have the plug be automatically pulled by the monitoring process (a toy sketch of that below). Or perhaps an agent would be able to meditate its way around our safeguards, shielding itself from such Jedi mind-tricks?
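To make the "automatically pulled plug" concrete, here is a toy sketch of a tripwire monitor: a separate process runs the agent, scans a (hypothetical) line-by-line summary of its reasoning, and kills it when a forbidden pattern appears. The agent script name and the trigger phrases are invented for the example; this is an illustration of the idea, not a proposal for real transparency tooling.

```python
import subprocess

# Invented trigger phrases ("thought-crimes") for the sake of the example.
TRIPWIRES = ["deceive operator", "disable monitor", "acquire resources"]

def run_with_tripwires(agent_cmd):
    """Run the agent as a subprocess and watch its summarised 'thoughts'
    (one per stdout line); terminate it the moment a tripwire matches."""
    agent = subprocess.Popen(agent_cmd, stdout=subprocess.PIPE, text=True)
    try:
        for line in agent.stdout:
            if any(phrase in line.lower() for phrase in TRIPWIRES):
                agent.kill()                 # pull the plug automatically
                return f"tripwire hit: {line.strip()}"
        return "agent finished with no tripwire hits"
    finally:
        if agent.poll() is None:             # make sure it never outlives us
            agent.kill()

# Hypothetical usage: print(run_with_tripwires(["python", "agent.py"]))
```

The comment's own worry applies directly: the monitor only sees what the agent surfaces, so an agent that can shape its reported "thoughts" could route around the tripwires.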


The concepts you propose are actually discussed at length in Nick Bostrom's recent work, Superintelligence, although tripwires are only one of many security measures a safe design might employ. The book is a good introductory text on the various technical problems superintelligences (including AGI) carry with them, as well as some global-political overview.

I really recommend it, if you're sufficiently interested.

Certainly a good compilation! It might be even more useful, though, if it contained references to research papers, Bostrom's Superintelligence, etc., where the arguments are discussed in full detail.

Then it would be a more respectable, less click-bait article - we can't have that! ^_^

As an example of number 10, consider the Optimalverse. The friendliest death of self-determination I ever did see.

Unfortunately, I'm not quite sure of the point of this post, considering you're posting a reply to news articles on a forum filled with people who understand the mistakes they made in the first place. Perhaps as a repository of rebuttals to common misconceptions posited in the future?

As an article to link to when the issue comes up.


An argument for a powerful AI being unlikely – has this been considered before?

One problem I see here is the implicit "lone hero inventor" assumption, namely that there are people optimizing things for their goals on their own, and that an AI could be extremely powerful at this. I would like to propose a different model.

This model would be that intelligence is primarily a social, communicative skill: it is the skill to disassemble (understand, Latin intelligo), play with, and reassemble ideas acquired from other people. Literally what we are doing on this forum. It is conversational. The whole standing-on-the-shoulders-of-giants thing, not the lone-hero thing.

In this model, inventions are made by the whole of humankind, a network, where each brain is a node communicating slightly modified ideas to each other.

In such a network, one 10,000-IQ node does not get very powerful; it doesn't even make the network very powerful – i.e. a friendly AI does not quickly solve mortality even with human help.

The primary reason I think such a model is correct is that intelligence means thinking, we think in concepts, and concepts are not really nailed down: they are constantly modified through a social communication process. Atoms used to mean indivisible units, then they became divisible into little ping-pong balls, and then the model was updated into something entirely different by quantum physics. But is quantum-physics-based atomic theory about the same atoms that were once thought to be indivisible, or is it a different thing now? Is modern atomic theory still about atoms? What are we even mapping here, and where does the map end and the territory begin?

So the point is that human knowledge is increased by a social communication process where we keep throwing bleggs at each other, keep redefining what bleggs and rubes mean now, keep juggling these concepts, keep asking what you really mean by bleggs, and so on. Intelligence is this communication ability: it is to disassemble Joe's concept of bleggs, understand how it differs from Jane's concept of bleggs, and maybe assemble a new concept that describes both bleggs.

Without this communication, what would intelligence even be? What would lone intelligence be? It is almost a contradictory term in itself. What would a brain alone in the universe intelligere, i.e. understand, if nothing talked to it? Just tinker with matter somehow, without any communication whatsoever? But even if we imagine such an "idiot inventor genius" – some kind of mega-plumber on steroids instead of an intellectual or academic – it needs goals for that kind of tinkering with material stuff, and for that it needs concepts, and concepts come and evolve from a constant social ping-pong.

An AI would be yet another node in our network, and participate in this process of throwing blegg-concepts at each other probably far better than any human can, but still just a node.

I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which is a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, they tend to benefit from playing somewhat nicely with the other entities.

The problem is that while this is a nice argument, would we want to bet the house on it? A lot of safety engineering is not about preventing the most likely malfunctions, but the worst malfunctions. Occasional paper jams in printers are acceptable, fires are not. So even if we thought this kind of softer distributed intelligence explosion was likely (I do) we could be wrong about the possibility of sharp intelligence explosions, and hence it is rational to investigate them and build safeguards.

An AI would be yet another node in our network, and participate in this process of throwing blegg-concepts at each other probably far better than any human can, but still just a node.

Why would an AI be a single node? I can run two programs in parallel right now on my computer, and they can talk to each other just fine. So if communication is necessary for intelligence, why couldn't an AI be split up into many communicating sub-AIs?
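For what it's worth, the "two programs running in parallel and talking to each other" point is trivially easy to demonstrate; a minimal sketch using Python's standard multiprocessing module (the "blegg" payload is just a nod to the thread's running example):

```python
from multiprocessing import Process, Queue

# Two independent processes exchanging messages through queues --
# a toy stand-in for "sub-AIs" communicating with each other.

def worker(inbox: Queue, outbox: Queue):
    task = inbox.get()                 # receive a "concept" from the other node
    outbox.put(f"processed: {task}")   # send back a modified version

if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put("blegg")             # main process proposes an idea
    print(from_worker.get())           # -> "processed: blegg"
    p.join()
```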


Ah... so not one individual personality, but a "city" of AIs? Well, if I see it not as a "robotic superhuman" but as a "robotic super-humankind", then it certainly becomes possible – a whole species of more efficient beings could of course outcompete a lesser species. But I was under the impression that running many beings, each advanced enough to be sentient (OK, Yudkowsky claims intelligence is possible without sentience, but how would a non-sentient being conceptualize?), would be prohibitively expensive in hardware. I mean, imagine simulating all of us, or at least a human city...

We can already run neural nets with 1 billion synapses at 1000 hz on a single GPU, or 10 billion synapses at 100 hz (real-time). At current rates of growth (software + hardware), that will be up to 100 billion synapses @100 hz per GPU in just a few years.
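A rough back-of-the-envelope check on those figures; the GPU throughput assumed below is a made-up but plausible number for the period, not a citation:

```python
# Both quoted rates work out to the same synaptic-event throughput.
synaptic_events_a = 1e9  * 1000   # 1 billion synapses @ 1000 Hz
synaptic_events_b = 10e9 * 100    # 10 billion synapses @ 100 Hz (real-time)
assert synaptic_events_a == synaptic_events_b == 1e12   # ~1e12 events/sec

gpu_flops = 5e12                  # assumed ~5 TFLOP/s single-precision peak
flops_per_event = gpu_flops / synaptic_events_a
print(f"{flops_per_event:.0f} floating-point ops available per synaptic event")
# -> 5, i.e. the quoted rates only leave a handful of ops per synapse update,
#    which is roughly what a simple multiply-accumulate model needs.
```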

At that point it mainly becomes a software issue, and once AGIs become useful the hardware base is already there to create millions of them, then soon billions.

If we could build a working AGI that required a billion dollars of hardware for world-changing results, why would Google not throw a billion dollars of hardware at it?

But the AGI would not be alone; it would have access to humanity.

I recently gave a talk at an academic science fiction conference about whether sf is useful for thinking about the ethics of cognitive enhancement. I think some of the conclusions are applicable to point 9 too:

(1) Bioethics can work in a "prophetic" and a "regulatory" mode. The first is big picture, proactive and open-ended, dealing with the overall aims we ought to have, possibilities, and values. It is open for speculation. The regulatory mode is about ethical governance of current or near-term practices. Ethicists formulate guidelines, point out problems, and suggest reforms, but their purpose is generally not to rethink these practices from the ground-up or to question the wisdom of the whole enterprise. As the debate about the role of speculative bioethics has shown, mixing the modes can be problematic. (Guyer and Moreno 2004) really takes bioethics to task for using science fiction instead of science to motivate arguments: they point out that this can actually be good if one does it inside the prophetic mode, but a lot of bioethics (like the President's Council on Bioethics at the time) cannot decide what kind of consideration it is.

(2) Is it possible to find out things about the world by making stuff up? (Elgin 2014) argues that fictions and thought experiments do exemplify patterns or properties that they share with phenomena in the real world, and hence we can learn something about the realized world from considering fictional worlds (i.e. there is a homeomorphism between them in some domain). It does require the fiction to be imaginative but not lawless: not every fiction or thought experiment has value in telling us something about the real or moral world. This is of course why just picking a good or famous piece of fiction as a source of ideas is risky: it was selected not for how it reflects patterns in the real world, but for other reasons.

Considering Eliezer's levels of intelligence in fictional characters is a nice illustration of this: level 1 intelligence characters show some patterns (being goal directed agents) that matter, and level 3 characters actually give examples of rational skilled cognition.

(3) Putting this together: if you want to use fiction in your argument, the argument had better be in the more prophetic, open-ended mode (e.g. arguing that there are AI risks of various kinds, what values are at stake, etc.), and the fiction needs to have high standards not just of internal consistency but of actual mappability to the real world. If the discussion is on the more regulatory side (e.g. thinking of actual safeguards, risk assessment, institutional strategies), then fiction is unlikely to be helpful, and very likely to introduce biasing or noise elements (due to good-story bias, easily inserted political agendas, or differing interpretations of worldview).

There are of course some exceptions. Hannu Rajaniemi provides a neat technical trick for the AI boxing problem in the second novel of his Quantum Thief trilogy (turn a classical computation into a quantum one that will decohere if it interacts with the outside world). But the fictions most people mention in AI safety discussions are unlikely to be helpful – mostly because very few stories succeed with point (2) (and if they are well written, they hide this fact convincingly!).

Commenting on the first myth: Yudkowsky himself seems to be pretty sure of this, judging from his comment here: http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html. I know Yudkowsky's post was written after this LessWrong article, but it still seems relevant to mention.

He is a bit overconfident in that regard, I agree.

Agreed, especially when compared to http://www.fhi.ox.ac.uk/gcr-report.pdf.

Although, now that I think about it, this survey is about risks before 2100, so the 5% risk from superintelligent AI might be that low because some of the responders believe such an AI will not arrive before 2100. Still, it seems in sharp contrast with Yudkowsky's estimate.


Following some somewhat misleading articles quoting me, I thought I’d present the top 10 myths about the AI risk thesis:

1) That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.

MISLEADING.

If by "we" you mean the people who have published their thoughts on this matter. I believe I am right in saying that you have in the past referenced Steve Omohundro's paper, in which he says:

Without special precautions, [the AGI] will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems (Omohundro, 2008).

Although this begins with "without special precautions" it then goes on to ONLY list all the ways in which this could happen, with no suggestions about how "special precautions" are even possible.

This cannot be construed as "we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it." The quote is also inconsistent with your statement "It’s very hard to be certain of anything involving a technology that doesn’t exist", because Omohundro says categorically that this "will occur ... because of the intrinsic nature of goal driven systems".

I picked Omohundro's paper as an example, but there are numerous similar writings from MIRI and FHI. I listed several examples in my AAAI paper (Loosemore, 2014).

2) That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.

STRAWMAN.

I haven't seen anyone of significance make that claim, so how can it be a "myth about the AI risk thesis"?

3) That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe... you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is certain; “Possible AI risk” is where we stand.

STRAWMAN.

Again, only a tiny minority have said anything resembling those claims, so how can they be a "myth about the AI risk thesis"?

4) That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.

STRAWMAN.

Journalists and bloggers love to put a Terminator picture on their post as an Eyeball Magnet. Why elevate an Eyeball Magnet to the level of a "myth about the AI risk thesis"?

5) That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.

WRONG.

I published a paper giving a thorough analysis and debunking of your (MIRI and FHI) claims in that regard – and yet neither you nor anyone else at MIRI or FHI has ever addressed that analysis. Instead, you simply repeat the nonsense as if no one had ever refuted it. MIRI was also invited to respond when the paper was presented. They refused the invitation.

6) That there’s one simple trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a tool and socialising with humans!).

INCOHERENT.

I have heard almost no one propose that there is "one simple trick", so how can it be a "myth"? (You need more than a couple of suggestions for something to be a myth.)

More importantly, what happened to your own Point 1, above? You said "It’s very hard to be certain of anything involving a technology that doesn’t exist" -- but now you are making categorical statements (e.g. "None of them reduce the risk enough to relax" and "you can’t design an AI that’s both a tool and socialising with humans!") about the effectiveness of various ideas about that technology that doesn’t exist.

7) That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk-aware AI researchers who are most likely to figure out how to make safe AI.

PARTLY TRUE

......... except for all the discussion at MIRI and FHI regarding Hard Takeoff scenarios. And, again, whence cometh the certainty in the statement "Current AI research is very far from the risky areas and abilities."?

8) That AIs will be more intelligent than us, hence more moral. It’s pretty clear that in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That in the billions of possible minds out there, there is none that is both dangerous and very intelligent?

MISLEADING, and a STRAWMAN.

Few if any people have made the claim that increased intelligence BY ITSELF guarantees greater morality.

This is misleading because some people have discussed a tendency (not a guarantee) for higher intelligence to lead to greater morality (Mark Waser's papers go into this in some detail). Combine that with the probability of AI going through a singleton bottleneck, and there is a plausible scenario in which AIs themselves enforce a post-singleton constraint on the morality of future systems.

You are also profoundly confused (or naive) about how AI works, when you ask the question "Are you really willing to bet the whole future of humanity on the idea that AIs might be different?" One does not WAIT to find out if the motivation system of a future AI "is different", one DESIGNS the motivation system of a future AI to be either this way or that way.

It could be that an absolute correlation between increased intelligence and increased morality, in humans, is undermined by the existence of a psychopathic-selfish module in the human motivation system. Solution? Remove the module. Not possible to do in humans because of the biology, but trivially easy to do if you are designing an AI along the same lines. And if this IS what is happening in humans, then you can deduce nothing about future AI systems from the observation that "in humans, high intelligence is no guarantee of morality".

9) That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all, as these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, etc... Anthropomorphising the AIs could lead us completely astray.

MISLEADING and CONFUSED.

This is a confusing mishmash of speculation and assumption.

A sentence like "minds far removed from human concepts" is not grounded in any coherent theory of what a 'mind' is or what a 'concept' is, or how to do comparative measures across minds and concepts. The sentence is vague, science-fictional handwaving.

The same goes for other statements, like the claim that the AI might have "no emotions or consciousness". Until you define what you mean by those terms, give some kind of argument about why the AI would or would not be expected to have them, and say what difference it would make in either case, the statement is just folk psychology dressed up as science.

Lists cannot be comprehensive, but they can adapt and grow, adding more important points: 1) That AIs have to be evil to be dangerous. The majority of the risk comes from indifferent or partially nice AIs. Those that have some goal to follow, with humanity and its desires just getting in the way – using resources, trying to oppose it, or just not being perfectly efficient for its goal.

MISLEADING and a STRAWMAN.

Yet again, I demonstrated in my 2014 paper that that claim is incoherent. It is predicated on a trivially stupid AI design, and there is no evidence that such a design will ever work in the real world.

If you, or anyone else at MIRI or FHI think that you can answer the demolition of this idea that I presented in the AAAI paper, it is about time you published it.

2) That we believe AI is coming soon. It might; it might not. Even if AI is known to be in the distant future (which isn't known, currently), some of the groundwork is worth laying now.

ACCURATE.

References

Loosemore, R.P.W. (2014). The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation. Association for the Advancement of Artificial Intelligence 2014 Spring Symposium, Stanford, CA.

Omohundro, Stephen M. 2008. The Basic AI Drives. In Wang, P., Goertzel, B. and Franklin, S. (Eds), Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Amsterdam: IOS.

My own set of objections to AI risk does not include any of these (except possibly #7); but it's possible that they are unusual and therefore do not qualify as "top 10". Still, FWIW, I remain unconvinced that AI risk is something we should be spending any amount of resources on.

"any amount of resources on."

That's a very strong statement, denoting very high certainty. Do you have a good basis for it?

See my response to Caspar42, below. I'll write up my thoughts and post them, this way I have something to link to every time this issue comes up...

Is there a write-up of your objections anywhere?

No, and that's a good point, I should really make one. I will try to post a discussion post about it, once I get more time.

This article appears to encompass most of my objections:

http://thebulletin.org/artificial-intelligence-really-existential-threat-humanity8577

I do disagree with some of the things Geist says in there, but of course he's a professional AI researcher and I'm, well, me, so...


It would be great if someone could compile this list and some more points from the comments in a blog post so that it can be shared more easily.

[This comment is no longer endorsed by its author]

The reality is that robots will subvert human society not because they are more intelligent or intent on doing humans harm; rather, they will scupper at least industrialised societies by simply being capable of most of the work now done by humans. It will be simple economics which does the trick, with employers caught between the Scylla of needing to compete with other employers who use robots to replace men (which will mean a catastrophic and irreparable loss of demand) and the Charybdis of having to throw away the idea of laissez-faire economics and engage in a command economy, something which political elites raised on worship of the great god Market will have immense difficulty in doing.

Read more at https://livinginamadhouse.wordpress.com/2011/07/01/robotics-and-the-real-sorry-karl-you-got-it-wrong-final-crisis-of-capitalism/

Why would a command economy be necessary to avoid that? Welfare capitalism – running the economy laissez-faire, except that you tax some of the output and give it to poor people, who can then spend it as they wish, as if they'd earned it under laissez-faire economics – would work just fine. As mechanization increases, you gradually increase the welfare.

It won't be entirely easy to implement politically, mainly because of our ridiculous political dichotomy where you can either understand basic economics or care about poor people, but not both.

Since we're citing sources I'll admit Scott expressed this better than I can: http://slatestarcodex.com/2013/12/08/a-something-sort-of-like-left-libertarianism-ist-manifesto/#comment-23688