
jmh comments on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

Post author: turchin, 25 November 2017 11:44AM


Comment author: jmh, 06 December 2017 06:18:26PM, 0 points

Not sure I have anything to add to the question, but I do find myself having to ask why the general presumption so often seems to be that an AI gets annoyed at stupid people and kills humanity.

It's true that we can think of situations where that might be possible, and maybe even a predictable AI response, but I just wonder if such settings are all that probable.

Has anyone ever sat down and tried to list out the situations in which an AI would have some incentive to kill off humanity, and then assessed how plausible those situations actually are?

Comment author: turchin, 06 December 2017 11:02:17PM, 0 points

It will kill humanity not because it is annoyed, but for two main instrumental goals: its own safety, or the use of human atoms. Other variants are also possible; I explored them here: http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/