
turchin comments on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

Post author: turchin 25 November 2017 11:44AM

Comments (14)

You are viewing a single comment's thread.

Comment author: tukabel 26 November 2017 03:58:14PM 0 points

Looks like the tide is shifting from the strong "engineering" stance ("We will design it to be friendly.") through the "philosophical" approach ("There are good reasons to be friendly.")... towards the inevitable resignation ("Please, be friendly.").

These "firendly AI" debates are not dissimilar to the medieval monks violently arguing about the number of angels on a needletip (or their "friendliness" - there are fallen "singletons" too). They also started strongly (Our GOD rules.) through philosophical (There are good reasons for God.) up to nowadays resignation (Please, do not forget our god or... we'll have no jobs.)

Comment author: turchin 26 November 2017 04:39:34PM 0 points

I think a lot of people are still working on other aspects of AI safety, like value alignment and containment. This approach is just the last line of defence.

Comment author: Luke_A_Somers 27 November 2017 05:11:10PM 1 point

See my first post on this site.

Comment author: turchin 27 November 2017 05:27:17PM 0 points

Thanks! It is nice that we came to the wording "young AI" more or less independently. I tried to steelman this approach as much as possible and to collect all the ideas about messaging a future AI. I will add your post to the bibliographic section.