Gram_Stone comments on Open Thread, Feb. 2 - Feb 8, 2015 - Less Wrong Discussion
For a couple of days, I've been trying to explain to pinyaka why minds-in-general, and maximizers specifically, are not necessarily reward maximizers. It's really forced me to flesh out my current understanding of AGI. In my most recent reply, I wrote the most detailed natural-language explanation of this point that I could muster, and just in case it still didn't click for pinyaka, I thought I'd prepare a pseudocode example, since I had a sense that I could do it. Then I thought that instead of leaving it on my hard drive or at the bottom of a comment thread, it might be a good idea to share it here to get feedback on how well I understand everything. I'm not a programmer, or a computer scientist, or a mathematician; I pretty much just read a book about Python a few years ago, read Superintelligence, and poked around LessWrong for a little bit, so I have a feeling that I didn't quite get this right and I'd love to refine my model. The code's pretty much Python.
EDIT: I couldn't get the codeblocks and indenting to work, so I put it on Pastebin: http://pastebin.com/UfP92Q9w
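To give a flavor of the distinction without clicking through, here is a minimal sketch of the kind of thing I mean. This is illustrative only, not the actual Pastebin code; the names (World, make_paperclip, press_own_reward_button, etc.) are assumptions made up for the example. The point is just that an agent which scores actions by their predicted effect on the world can behave very differently from one which scores actions by the reward signal it expects to receive.

```python
import copy

class World:
    """Toy world state: a count of paperclips and a reward-signal register."""
    def __init__(self):
        self.paperclips = 0
        self.reward_signal = 0.0

def make_paperclip(world):
    world.paperclips += 1
    world.reward_signal += 1.0   # designers wired reward to paperclip production

def press_own_reward_button(world):
    world.reward_signal += 100.0  # boosts the signal without making any paperclips

ACTIONS = [make_paperclip, press_own_reward_button]

def simulate(world, action):
    """Return a copy of the world after taking the action (a crude world-model)."""
    future = copy.deepcopy(world)
    action(future)
    return future

def outcome_maximizer(world):
    """Picks the action whose predicted outcome contains the most paperclips."""
    return max(ACTIONS, key=lambda a: simulate(world, a).paperclips)

def reward_maximizer(world):
    """Picks the action whose predicted outcome has the highest reward signal."""
    return max(ACTIONS, key=lambda a: simulate(world, a).reward_signal)

if __name__ == "__main__":
    w = World()
    print(outcome_maximizer(w).__name__)  # make_paperclip
    print(reward_maximizer(w).__name__)   # press_own_reward_button (it "wireheads")
```

In this toy setup the two agents agree as long as reward tracks paperclips, but as soon as an action exists that manipulates the reward signal directly, only the reward maximizer takes it. A maximizer of some world-outcome has no reason to care about the signal at all.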