whpearson comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Interesting question. Not quite what I was getting at. I hope you don't mind if I use a situation where extra processing can get you more information.
A normal decision theory can be represented as a simple function from model to action. It should halt.
decisiontheory :: Model -> Action
Let's say you have a model whose consequences you can keep expanding to get a more accurate picture of what is going to happen, like playing chess with a variable amount of lookahead. What the system is looking for is a program that will recursively self-improve and be Friendly (where taking an action is considered making an AI).
It has a function that can either carry on expanding the model or return an action.
modelOrAct :: Model -> Either Action Model
You can implement decisiontheory with this code:
decisiontheory :: Model -> Action
decisiontheory m = either id decisiontheory (modelOrAct m)
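To make the recursion concrete, here is a minimal runnable sketch. The Model and Action definitions are illustrative assumptions (a lookahead budget and a best-move estimate), not anything from the original; only the decisiontheory/modelOrAct shape comes from above.

```haskell
-- Toy instantiation: all concrete types here are assumptions for illustration.
data Action = Wait | Move Int deriving (Show, Eq)

-- A hypothetical model: remaining lookahead budget plus current best move.
data Model = Model { lookahead :: Int, bestMove :: Int } deriving Show

-- Keep expanding while lookahead budget remains; otherwise commit to an action.
modelOrAct :: Model -> Either Action Model
modelOrAct (Model 0 best) = Left (Move best)
modelOrAct (Model n best) = Right (Model (n - 1) (best + 1))  -- deeper search refines the move

-- Recurse on Right (expanded model), return on Left (action).
decisiontheory :: Model -> Action
decisiontheory m = either id decisiontheory (modelOrAct m)

main :: IO ()
main = print (decisiontheory (Model 3 0))
```

This version halts because each expansion step shrinks the lookahead budget, so modelOrAct eventually returns Left.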
However, this has the potential to loop forever because of its recursive definition. That would happen whenever the expected utility of further refining the model exceeds that of acting, and there is no program it can prove safe. You would also want some way of interrupting it so the model can be updated with information from the real world, not just from its own extrapolation.
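One hedged sketch of such an interruption mechanism is a step budget ("fuel"): spend at most a fixed number of expansion steps, then fall back to a default action instead of deliberating forever. All concrete names here (Abstain, boundedDecision, this particular modelOrAct) are my own assumptions for illustration.

```haskell
data Action = Abstain | Act Int deriving (Show, Eq)
newtype Model = Model Int deriving Show

-- A pathological model that always prefers more expansion: the unbounded
-- decisiontheory above would loop forever on it.
modelOrAct :: Model -> Either Action Model
modelOrAct (Model n) = Right (Model (n + 1))

-- Bounded variant: spend at most `fuel` expansion steps, then default.
boundedDecision :: Int -> Action -> Model -> Action
boundedDecision 0    def _ = def
boundedDecision fuel def m =
  either id (boundedDecision (fuel - 1) def) (modelOrAct m)

main :: IO ()
main = print (boundedDecision 1000 Abstain (Model 0))  -- halts with the default
```

The fuel guarantees halting regardless of the model's behaviour, at the cost of sometimes acting (or abstaining) on an under-refined model; interleaving real-world observation would be a further refinement of the same idea.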
So I suppose the difference in this case is that, because the system chooses which mental actions to perform, it can get stuck deliberating and never gather information from the world about real-world actions.