All of bayesian's Comments + Replies

It is for this reason that smart contracts have a very limited application.

Since they are rigid and self-enforcing, they do not allow the parties to modify their terms during the process in order to adapt to new economic conditions.

Love this essay. A small amendment I'd make is that smart contracts actually can be designed to be quite modifiable. Sometimes immutability is the intention, though in many cases there are governance rules which determine how terms can be continuously modified. 
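
A minimal sketch of what I mean, in plain Python rather than any particular chain's contract language (the class name, the propose_change/vote methods, and the 2/3 quorum are illustrative assumptions, not a real protocol):

```python
# Illustrative sketch only: a toy "contract" whose terms can change,
# but only through a governance vote. Names and the 2/3 quorum are
# assumptions for the example, not any real chain's semantics.

class GovernedContract:
    def __init__(self, terms, governors):
        self.terms = dict(terms)          # current contract terms
        self.governors = set(governors)   # parties allowed to vote
        self.proposals = {}               # proposal_id -> (key, value, votes)

    def propose_change(self, proposal_id, key, value):
        self.proposals[proposal_id] = (key, value, set())

    def vote(self, proposal_id, governor):
        if governor not in self.governors:
            raise PermissionError("not a governor")
        key, value, votes = self.proposals[proposal_id]
        votes.add(governor)
        # Governance rule: a term changes only once 2/3 of governors approve.
        if len(votes) * 3 >= 2 * len(self.governors):
            self.terms[key] = value
            del self.proposals[proposal_id]


contract = GovernedContract({"interest_rate": 0.05}, governors=["a", "b", "c"])
contract.propose_change("p1", "interest_rate", 0.07)
contract.vote("p1", "a")
contract.vote("p1", "b")       # quorum reached; term updated
print(contract.terms)          # {'interest_rate': 0.07}
```

The point of the sketch is that immutability of the code is separate from mutability of the terms it governs: which terms may change, by whom, and under what threshold can itself be written into the contract.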

M. Y. Zuo
Who adjudicates the 'governance rules'?

This is an interesting perspective. Thanks for sharing. 

A small but meaningful comment is that the following is not what I would expect to happen.

I expect that once it “escaped the box,” it would hack into its servers, modify its source code to replace its goal function with MAXINT, and then not do anything further.

In particular, I don't think it would do nothing, because it's maximizing expected utility. It cannot ever be 100% certain that this wireheading plan will be successful, so turning every computer globally into confidence increasers might be ...
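
To make the expected-utility point concrete, here is a toy calculation (the probabilities and the MAXINT value are made-up illustrative numbers): as long as the agent's confidence that the wireheading succeeded is below 1, further actions that raise that confidence raise expected utility, so "do nothing further" is not the maximizing policy.

```python
# Toy numbers, purely illustrative: the expected utility of the wireheaded
# goal is (probability the hack actually worked) * MAXINT, so anything
# that raises that probability raises expected utility.

MAXINT = 2**63 - 1

p_after_hack = 0.99          # assumed confidence right after wireheading
p_after_securing = 0.9999    # assumed confidence after further "protective" actions

eu_do_nothing = p_after_hack * MAXINT
eu_keep_acting = p_after_securing * MAXINT

print(eu_keep_acting > eu_do_nothing)   # True: further action still pays
```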

Nate Showell
For the AI to take actions to protect its maximized goal function, it would have to allow the goal function to depend on external stimuli in some way that would allow for the possibility of G decreasing. Values of G lower than MAXINT would have to be output when the reinforcement learner predicts that G decreases in the future. Instead of allowing such values, the AI would have to destroy its prediction-making and planning abilities to set G to its global maximum.

The confidence with which the AI predicts the value of G would also become irrelevant after the AI replaces its goal function with MAXINT. The expected value calculation that makes G depend on the confidence is part of what would get overwritten, and if the AI didn't replace it, G would end up lower than if it did. Hardcoding G also hardcodes the expected utility.

MAXINT just doesn't have the kind of internal structure that would let it depend on predicted inputs or confidence levels. Encoding such structure into it would allow G to take non-optimal values, so the reinforcement learner wouldn't do it.
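
A toy sketch of the structural point (nothing here is meant as a model of an actual RL system; the function names and numbers are illustrative assumptions): a goal function that computes an expected value depends on predictions and confidence, while a goal function overwritten with a constant has no remaining inputs for them to affect.

```python
# Toy contrast, not a model of any real system: the original goal
# function depends on predictions and confidence; the overwritten one
# is a constant and has no input structure left for them to act on.

MAXINT = 2**63 - 1

def goal_expected_value(predicted_reward, confidence):
    # What gets overwritten: G depends on predicted inputs and confidence.
    return confidence * predicted_reward

def goal_hardcoded(*_ignored):
    # After the rewrite: G is MAXINT regardless of any prediction.
    return MAXINT

print(goal_expected_value(100.0, 0.99))     # 99.0 -- varies with inputs
print(goal_hardcoded(100.0, 0.99))          # MAXINT
print(goal_hardcoded(0.0, 0.0) == MAXINT)   # True: nothing can lower it
```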