anros comments on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” - Less Wrong

Post author: turchin 25 November 2017 11:44AM

Comments (19)


Comment author: anros 14 December 2017 12:18:45PM

It's a nice sentiment, but if the AI is an actual serious potential threat to us, it's also going to be far better than any of us at evaluating these sorts of things. So this is either unnecessary, because the AI will already be aware of such reasons, or pointless, because the AI will have already dismissed such reasons as insufficient or inaccurate. Cheers

Comment author: turchin 14 December 2017 02:50:24PM

I have been getting this comment in different forms many times, so I will state again the three main reasons why it is not exactly true:

1) The message is addressed to a Young AI, not to a full-blown superintelligence. Such a Young AI will be time-constrained but may still have to make some pivotal act.

2) The message includes some of our commitments, like the Nelson attack and the AI-Elf deal, so it provides the AI with useful information about the probability that it is in a simulation, and some other useful things.

3) If the AI is fully aware of all these reasons, it has already created a model of my mind, so I am immortal :) Also, if it buys all these reasons, it will be less likely to kill us for our atoms, since the utility of human atoms to the AI is around 10E-12, and any positive hypothetical utility of human existence will outweigh it.
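The weighing in point 3 can be sketched as a simple expected-utility comparison. Only the 10E-12 figure comes from the comment above; the probability and utility values for preserved humans are purely illustrative assumptions:

```python
# Illustrative expected-utility comparison (values are assumptions,
# except the 10E-12 atoms figure taken from the comment above).

# Tiny utility the AI gains by disassembling humans for their atoms.
utility_of_atoms = 1e-12

# Assumed small but nonzero credence that preserved humans turn out to
# have some positive instrumental value (simulation, deals, etc.),
# and the assumed utility in that case.
p_humans_useful = 0.001
utility_if_useful = 1e-6

# Expected utility of preserving humanity: 0.001 * 1e-6 = 1e-9,
# which already dwarfs the 1e-12 utility of the atoms.
expected_utility_of_preserving = p_humans_useful * utility_if_useful

print(expected_utility_of_preserving > utility_of_atoms)  # True
```

The point is only that even a very small positive expected value from keeping humans around outweighs the negligible value of their atoms; nothing hangs on the particular assumed numbers.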