Tyrrell_McAllister comments on Explicit Optimization of Global Strategy (Fixing a Bug in UDT1) - Less Wrong

Post author: Wei_Dai 19 February 2010 01:30AM


Comment author: Tyrrell_McAllister 20 February 2010 02:25:30PM 0 points

> If an agent has no such tendency, and expects this kind of problem, then it will aspire to develop a similar tendency.

This sounds like your decision theory is "Decide to use the best decision theory."

I guess there's an analogy to people whose solution to the hard problems that humanity faces is "Build a superintelligent AI that will solve those hard problems."

Comment author: timtyler 20 February 2010 04:03:02PM 0 points

Not really: provided you make decisions deterministically, you should be OK in this example. Agents inclined towards randomization might have problems with it, but I am not advocating randomization.
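A minimal sketch of the point timtyler is making, not taken from the thread: in a symmetric coordination game, two copies of the same deterministic decision procedure, given the same input, necessarily choose alike, while copies that randomize independently miscoordinate about half the time. The game, payoffs, and function names below are hypothetical illustrations, not anything from the original post.

```python
import random

# Hypothetical coordination game: two copies of the same agent each
# pick "A" or "B"; they win iff their picks match.

def deterministic_agent(options):
    # A fixed rule (here: lexicographically first option).
    # Two copies running the same rule on the same input must agree.
    return min(options)

def randomizing_agent(options):
    # Independent randomization: two copies agree only by chance.
    return random.choice(options)

def coordination_rate(agent, trials=10_000):
    options = ["A", "B"]
    wins = sum(agent(options) == agent(options) for _ in range(trials))
    return wins / trials

print("deterministic:", coordination_rate(deterministic_agent))  # always 1.0
print("randomizing: ", coordination_rate(randomizing_agent))     # about 0.5
```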