An Open Thread: a place for things foolishly April, and other assorted discussions.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.
Consider "syntactic preference" as an order on an agent's strategies (externally observable possible behaviors, in the mathematical sense, independently of what we can actually arrange to observe), where the agent is software running on an ordinary computer. This is "ontological boxing", a way of abstracting away any unknown physics. This syntactic order can then be given an interpretation, as in logic/model theory: for example, by placing the "agent program" in an environment of all possible "world programs", and restating the order on the agent's possible strategies in terms of possible outcomes for the world programs (as an order on sets of outcomes over all world programs), depending on the agent.
This way, we first factor the real world out of the problem, leaving only the syntactic backbone of preference, and then reintroduce a controllable version of the world, in the form of any convenient mathematical structure: an interpretation of the syntactic preference. The question of whether the model world is "actually the real world", and whether it reflects all possible features of the real world, is sidestepped.
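A minimal toy sketch of this construction, with all names and the particular worlds hypothetical: a "strategy" is a function from observations to actions, a "world program" maps a strategy to an outcome, and the syntactic order on strategies is interpreted by comparing the outcome profiles strategies induce across all world programs.

```python
from typing import Callable, Dict

Strategy = Callable[[int], int]    # observation -> action
World = Callable[[Strategy], int]  # strategy -> outcome

# A tiny "model" environment: two toy world programs (stand-ins for
# the space of all possible world programs).
worlds: Dict[str, World] = {
    "echo":   lambda s: s(0),      # outcome is the action at observation 0
    "double": lambda s: 2 * s(1),  # outcome depends on observation 1
}

def outcome_profile(s: Strategy) -> Dict[str, int]:
    """Interpret a strategy as the outcomes it produces in every world."""
    return {name: w(s) for name, w in worlds.items()}

def preferred(a: Strategy, b: Strategy) -> bool:
    """One possible interpreted order: a is preferred to b if its
    outcome is at least as large in every world (a Pareto-style,
    hence only partial, order)."""
    pa, pb = outcome_profile(a), outcome_profile(b)
    return all(pa[k] >= pb[k] for k in worlds)

always_one = lambda obs: 1
always_zero = lambda obs: 0
```

The point of the sketch is only that the order on strategies is stated entirely in terms of outcome profiles over the world programs, never in terms of a privileged "real" world.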
Thanks (and upvoted) for this explanation of your current approach. I think it's definitely worth exploring, but I currently see at least two major problems.
The first is that my preferences seem to have a logical dependency on the ultimate nature of reality. For example, I currently think reality is just "all possible mathematical structures", but I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly. What would happen if you tried to use your idea to extract my preferences before...