Tyrrell_McAllister comments on Explicit Optimization of Global Strategy (Fixing a Bug in UDT1) - Less Wrong
This sounds like your decision theory is "Decide to use the best decision theory."
I guess there's an analogy to people whose solution to the hard problems that humanity faces is "Build a superintelligent AI that will solve those hard problems."
Not really - provided you make decisions deterministically, you should be OK in this example. Agents inclined towards randomization might have problems with it, but I am not advocating that.
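Purely as an illustration (not part of either comment), here is a minimal sketch of the "explicit optimization of global strategy" idea the post's title refers to, under assumed toy inputs: the agent deterministically selects one complete input-to-output mapping, scored as a whole, rather than choosing an output for its current input in isolation. The names INPUTS, OUTPUTS, and expected_utility, and the payoff values, are hypothetical placeholders, not the post's actual example.

```python
from itertools import product

INPUTS = ["A", "B"]   # observations the agent (or its copies) might receive
OUTPUTS = [0, 1]      # actions the agent might take

def expected_utility(policy):
    """Toy world model: score a complete input->output mapping.

    The payoff depends on the combination of outputs across inputs,
    so a policy must be evaluated globally rather than per input.
    This coordination payoff is purely illustrative.
    """
    return 10 if policy["A"] != policy["B"] else 0

def best_global_strategy():
    # Enumerate every deterministic mapping from inputs to outputs
    # and keep the one with the highest expected utility.
    candidates = [dict(zip(INPUTS, outs))
                  for outs in product(OUTPUTS, repeat=len(INPUTS))]
    return max(candidates, key=expected_utility)

if __name__ == "__main__":
    policy = best_global_strategy()
    print(policy)  # e.g. {'A': 0, 'B': 1}, one of the coordinating mappings
```

Because the chosen strategy is a single deterministic mapping fixed in advance, an agent reasoning this way makes the same choice in every copy or continuation, which is the sense in which deterministic decision-making "should be OK" in the example above, whereas an agent that randomizes per input cannot be summarized by one such mapping.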