Vaniver comments on Open thread, Apr. 01 - Apr. 05, 2015 - Less Wrong Discussion
That which can be destroyed by the truth...
To some extent the "value aligned agents" problem, formerly known as "friendly AI," boils down to: how would we actually check our 'improvement-map' for validity, and how would we create agents that actually enforce that improvement-map on reality, rather than something else?