
Comment author: Luke_A_Somers 27 November 2017 05:11:10PM 1 point

See: my first post on this site.

Comment author: turchin 27 November 2017 05:27:17PM 0 points

Thanks! It is nice that we arrived at the wording "young AI" more or less independently. I tried to steelman this approach as strongly as possible and to collect all existing ideas about messaging to a future AI. I will add your post to the bibliographic section.

Comment author: tukabel 26 November 2017 03:58:14PM 0 points

Looks like the tide is shifting from the strong "engineering" stance (We will design it friendly.) through the "philosophical" approach (There are good reasons to be friendly.) towards the inevitable resignation (Please, be friendly.).

These "friendly AI" debates are not dissimilar to medieval monks violently arguing about the number of angels on the tip of a needle (or about their "friendliness" - there are fallen "singletons" too). They also started strongly (Our GOD rules.), moved through the philosophical stage (There are good reasons for God.), and arrived at today's resignation (Please, do not forget our god or... we'll have no jobs.).

Comment author: turchin 26 November 2017 04:39:34PM 0 points

I think a lot of people are still working on other aspects of AI safety, like value alignment and containment. This approach is just the last line of defence.

[Link] Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

0 turchin 25 November 2017 11:44AM
Comment author: J_Thomas_Moros 13 November 2017 03:23:49PM 2 points

I am not going to sign up with some random site. If you are the author, post a copy that doesn't require signup.

Comment author: turchin 13 November 2017 03:42:20PM 0 points

Thanks, it was not clear to me that it was not visible to non-members.

New link on Google Drive, with commenting open: https://docs.google.com/document/d/15D71qhhY-ZsAY7syzZsr1lKopTODbdeXVPElaPaIqyA/edit

Comment author: turchin 13 November 2017 11:29:26AM *  0 points

This is our accepted chapter in the edited volume "AI Safety and Security" (Roman Yampolskiy, ed.), CRC Press, forthcoming in 2018.

[Link] Military AI as a Convergent Goal of Self-Improving AI

0 turchin 13 November 2017 11:25AM
Comment author: turchin 07 November 2017 01:20:03PM 0 points

Memetic hazard: a dangerous habit.

Comment author: turchin 04 November 2017 10:57:58AM *  0 points

I have an unpublished text on the topic; I will put a draft online in the next couple of weeks and submit it to the competition. I will add the URL here when it is ready.

Update: My entry is here: https://www.lesserwrong.com/posts/CDWsjQr8KDuj69fTJ/message-to-any-future-ai-there-are-several-instrumental

Comment author: Elo 02 November 2017 04:39:44AM 1 point

We are in the process of transitioning. The new site is missing some features and is being iterated on every day. EY is writing there, as are a few others. www.lesserwrong.com

It will eventually replace this site, but not before full functionality is available over there.

Comment author: turchin 02 November 2017 11:01:13AM *  1 point

Will the posts here be deleted, or will their URLs change? I have some useful URLs here that are linked in published scientific articles, so if the site is taken down those links will break; I hope that will not happen.

Comment author: turchin 20 October 2017 10:04:21AM *  0 points

I solved lucid dreaming around a year ago, after finding that megadosing galantamine before sleep (16 mg) will almost surely produce lucid dreams and out-of-body experiences. (Warning: unpleasant side effects and risks.)

But taking 8 mg in the middle of the night (as recommended everywhere) doesn't work for me.
