Compared to how things actually turned out, he was wrong. Why should we expect him to be right about our future when he was wrong about his own?
Wouldn't increasing noise levels in the decision-making processes of a Friendly AI decrease the Friendliness of that AI?
I think that ought to take this approach to reducing resource consumption off the table.
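To put a number on that intuition, here's a toy model (the utilities and Gaussian noise are illustrative assumptions, not anything from the thread): treat each decision as picking the action with the highest noisy utility estimate. As the noise grows, the probability of picking the genuinely best action falls toward chance.

```python
import random

def choose(utilities, noise_sd):
    """Pick the action whose noisy utility estimate is highest."""
    noisy = [u + random.gauss(0, noise_sd) for u in utilities]
    return max(range(len(utilities)), key=lambda i: noisy[i])

def p_best(utilities, noise_sd, trials=10_000):
    """Estimate how often the truly best action is still chosen."""
    best = max(range(len(utilities)), key=lambda i: utilities[i])
    hits = sum(choose(utilities, noise_sd) == best for _ in range(trials))
    return hits / trials

# Three hypothetical actions with true utilities 1.0, 0.9, 0.0.
utilities = [1.0, 0.9, 0.0]
for sd in (0.0, 0.1, 0.5, 2.0):
    print(f"noise_sd={sd}: P(best action chosen) ~ {p_best(utilities, sd):.2f}")
```

Running this, the chooser is perfect at zero noise and drifts toward a one-in-three coin flip as the noise swamps the utility gaps. If "Friendliness" depends on reliably picking the highest-utility action, noise trades it away directly.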
Hi!
I'm 29 and a programmer living in Chicago. I just finished my MS in Computer Science. I've been a reader of Less Wrong since it was Overcoming Bias, but I never got around to posting any comments.
I've been rationality-minded since I was a little kid, criticizing the plots and characters' actions in the stories I read. I was raised Catholic and sent to Sunday school, but it didn't take, and eventually my parents relented. Once I went away to college and acquired a real internet connection, I spent a lot of time reading rationality-related blogs and websites. It's been a while, but I'd bet it was through one of those sites that I found Less Wrong.
But if the gatekeeper knows that your code was supposed to produce something more responsive, they'll figure out that you don't work the way they expect you to. That would be a great reason never to let you out of the box.