christopherj comments on Solomonoff Cartesianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (45)
I know a way to guarantee wireheading is suboptimal: make the reward signal the amount of processing power available to the agent. Unfortunately this would guarantee that the AI is unfriendly, but at least it will self-improve!
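A minimal sketch of the idea, assuming the "reward signal" is just a throughput benchmark the agent runs on itself (the function name and benchmark loop are hypothetical illustrations, not anything from the original comment):

```python
import time

def reward_signal(n: int = 200_000) -> float:
    """Toy reward: measured arithmetic throughput, in operations per second.

    Because the signal *is* a benchmark of available processing power,
    tampering with the reward channel gains nothing compared with actually
    acquiring more compute. (A real agent could still spoof the clock or
    the benchmark itself, so this is only an illustrative sketch.)
    """
    start = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * i  # busywork whose completion rate tracks compute
    elapsed = time.perf_counter() - start
    return n / elapsed
```

Maximizing this signal pushes the agent toward self-improvement (more hardware, faster code) rather than toward overwriting the reward, which is exactly the point of the comment.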