
jacob_cannell comments on Crazy Ideas Thread - October 2015 - Less Wrong Discussion

7 Post author: Gunnar_Zarncke 06 October 2015 10:38PM




Comment author: turchin 11 October 2015 05:59:58AM *  1 point [-]

I just published a simulation map, in which I conclude that I most likely live in a one-person me-simulation of the period near AI creation. In fact, there are two possible variants: 1. This is a simulation of Eliezer's life, and I am just one of thousands of people simulated within it in enough detail to be conscious observers. 2. It is a me-simulation only, in which I am the sole genuinely simulated observer, and the others are p-zombies and simplified models.

Hypothesis 2 is favoured by something like a power law in the simulation world, which says that simpler and cheaper simulations are more abundant (e.g. there are more novels than movies in our world). But if it is true, I should be doing something really important in FAI or other x-risk topics. I have done many things, like a map of x-risk prevention, but that is not enough to warrant being simulated.

The simulation map:

http://lesswrong.com/r/discussion/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

Comment author: jacob_cannell 28 October 2015 03:21:12AM 1 point [-]

This is a simulation of Eliezer's life,

I'm surprised you think he actually has a high chance of creating AGI.

Comment author: turchin 28 October 2015 08:07:23AM 1 point [-]

EY was only an example here. There are now so many players in the field that AGI will probably be created by someone else. It also seems that he is not working on coding AI himself.