RobinHanson comments on What Program Are You? - Less Wrong
Comments (42)
Don't these kinds of considerations apply to any decision theory? Don't they all suppose that you're given some kind of carving-up of the world into various things with various manipulable properties? Don't they all suppose that you have some kind of identity criteria for saying when things are "the same", and for partitioning up events to assign payoffs to them? Is any decision theory responsible for dictating what your initial carving-up of the world should be?
I think that TDT and UDT assume that the agent, for whatever reason, starts out with a given decomposition of itself into program and data. If it had started with a different decomposition, it would have been a different agent, and so, unsurprisingly, might have made different decisions.
Ordinary Causal Decision Theory does not depend on a carving of agents into programs and data.
My understanding is that TDT and UDT are supposed to be used by an agent that we design. In all likelihood, we will have decomposed the agent into program and data in the process of designing it. When the agent starts to use the decision theory, it can take that decomposition as given.
This consideration applies to ourselves, insofar as we have a hand in designing ourselves.
Reading this statement, it comes across as quite objectionable. I think this is because dividing something into program and data seems like it cannot be done in a non-arbitrary manner: many programming languages don't distinguish between code and data, and a universal Turing machine must at some point interpret its input as a program.
Perhaps one could have a special "how to write a program" decision theory, but that would not be a general decision theory applicable to all other decisions.
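The point that languages blur the code/data line can be made concrete with a small sketch in Python (the string and numbers are illustrative): the same bytes are inert data until the interpreter is asked to run them, at which point they become program.

```python
# A string is plain data: we can measure it, slice it, store it.
source = "lambda x: x * 2"
print(len(source))   # 15, treated as data

# The same bytes become a program once handed to the interpreter.
double = eval(source)  # eval turns the data into executable code
print(double(21))      # 42, treated as program
```

Which description applies ("data" or "program") depends on what we do with the value, not on any intrinsic property of the value itself.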
Isn't this like criticizing Bayesianism because it doesn't tell you how to select your initial prior? For practical purposes, that doesn't matter because you already have a prior; and once you have a prior, Bayesianism is enough to go on from there.
Similarly, you already decompose at least some part of yourself into program and data (don't you?). This is enough for that part of yourself to work with these decision theories. And using them, you can proceed to decide how to decompose the rest of yourself, or even to reflect on the original decomposition and choose a new one.
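The Bayesian analogy above can be sketched in a few lines (the hypotheses and numbers are made up for illustration): wherever the prior comes from, once you have one, the update rule carries you forward.

```python
# Minimal Bayesian update: posterior is proportional to likelihood * prior.
# Hypotheses: a coin is fair, or biased toward heads (70%).
prior = {"fair": 0.5, "biased": 0.5}       # starting point, taken as given
likelihood = {"fair": 0.5, "biased": 0.7}  # P(heads | hypothesis)

# Observe one heads; reweight each hypothesis and renormalize.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # the biased hypothesis gains probability mass
```

Nothing in the update step asks where `prior` came from; that question can be investigated separately without blocking the use of the machinery.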
The following is slightly tongue-in-cheek, but I don't normally place a stable boundary between program and data in myself; I revise it depending on purpose. Here is one view I find useful sometimes:
Nope, I'm all program. What you would call data is just programming in languages weaker than Turing-complete ones. I can rewrite my programming and do meta-analysis on it.
The information streaming into my eyes is a program whose effects I can't predict: it could make me flinch, or it could change the conceptual way I see the world. The visual system is just an interpreter for the programming carried by the optical signals.
"Prior" is like a get-out-of-jail-free card. Whenever the solution to some problem turns out to conveniently depend on an unknown probability distribution, you can investigate further, or you can say "prior" and stop there. For example, the naive Bayesian answer to game theory would be "just optimize based on your prior over the enemy's actions", which would block the route to discovering Nash equilibria.
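The game-theory point can be illustrated with matching pennies (a sketch; the payoffs and prior values are the standard toy example, everything else is illustrative): optimizing against any fixed prior yields a pure strategy, while the Nash equilibrium is a mixed strategy that a prior-only analysis would never single out.

```python
# Matching pennies from the matcher's point of view:
# payoff[my_move][their_move] is +1 if the moves match, -1 otherwise.
payoff = {"H": {"H": 1, "T": -1}, "T": {"H": -1, "T": 1}}

def best_response(prior_heads):
    """Naive Bayesian play: maximize expected payoff against a fixed prior."""
    ev = {m: prior_heads * payoff[m]["H"] + (1 - prior_heads) * payoff[m]["T"]
          for m in ("H", "T")}
    return max(ev, key=ev.get)

# Against any lopsided prior, the best response is a pure strategy...
print(best_response(0.6))  # "H"
print(best_response(0.3))  # "T"
# ...but the Nash equilibrium is the mixed strategy (0.5, 0.5), chosen because
# it makes the opponent indifferent, a consideration the fixed-prior
# optimization never raises.
```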
It's true that it's worthwhile to investigate where priors ought to come from. My point is only that you can still put Bayesianism to work even before you've made such investigations.