jacob_cannell comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM

Comment author: jacob_cannell 27 July 2015 06:21:45PM

> narrow enough that it doesn't exclude a motivation to sim humans

Most UFAI will have convergent instrumental reasons to sim at least some humans, if only as a component of simulating the universe in general for better prediction/understanding.

FAI shares that same small motivation, plus the more direct end goal of creating enormous numbers of sims to satisfy humans' highly convergent desire for an afterlife to exist. The creation of an immortal afterlife is the single most important defining characteristic of FAI. Humans have spent a huge amount of time thinking and debating about what kinds of gods should/could exist, and afterlife/immortality is the number one concern - and transhumanists are certainly no exception.