
Vaniver comments on Open thread, Apr. 01 - Apr. 05, 2015 - Less Wrong Discussion

5 Post author: MrMind 31 March 2015 10:06AM




Comment author: Vaniver 31 March 2015 01:10:31PM

LW and related blogs have basically spoiled fantasy fiction for me. Does anyone else have an experience like this? How do you overcome it?

That which can be destroyed by the truth...

I am not the first one to notice that the all-improving Philosopher's Stone could not exist in principle, because improvement is a mental category and not something real, right?

To some extent, the "value-aligned agents" problem, formerly known as "friendly AI," boils down to: how would we actually check our "improvement-map" for validity, and how would we create agents that enforce that improvement-map on reality, rather than something else?
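The worry about an unvalidated improvement-map can be made concrete with a toy sketch (everything here, the functions and the numbers, is a hypothetical illustration, not anything from the comment): an agent that greedily optimizes a mis-specified proxy for value will happily pick an action the proxy scores highly even when the intended value function disagrees.

```python
# Toy illustration of a proxy "improvement-map" diverging from intended value.
# Both value functions and the candidate set are made up for this sketch.

def true_value(x):
    # What we actually care about: being close to 3.
    return -(x - 3) ** 2

def proxy_value(x):
    # Our imperfect "improvement-map" of the same quantity.
    # Mis-specified: it also rewards large x.
    return -(x - 3) ** 2 + 5 * x

candidates = range(0, 11)
chosen = max(candidates, key=proxy_value)  # what the agent enforces
best = max(candidates, key=true_value)     # what we intended

print(chosen, best)  # → 5 3
```

The agent's pick (5) and the intended optimum (3) come apart even though the proxy contains the true objective as a term, which is one way to read the "check the improvement-map for validity" demand: the check has to happen before handing the map to an optimizer.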