Roko comments on Open Thread: February 2010, part 2 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think mind copying technology may be a better illustration of the subjective anticipation problem than MW QM, but I agree that it's a good example of the ontology problem. BTW, do you have a reference for where the ontology problem was first stated, in case I need to reference it in the future?
Thanks for the pointer, but I think the argument you gave in that post is wrong. You argued that an agent smaller than the universe has to represent its goals using an approximate ontology (and therefore would later have to rephrase its goals relative to more accurate ontologies). But such an agent can represent its goals/preferences in compressed form instead of using an approximate ontology. With such compressed preferences, it may not have the computational resources to determine with certainty which course of action best satisfies its preferences, but that is just a standard logical uncertainty problem.
I think the ontology problem is a real problem, but it may just be a one-time problem, where we or an AI have to translate our fuzzy human preferences into some well-defined form, instead of a problem that all agents must face over and over again.
The reason I think it can just be a one-time shock is that we can extend our preferences to cover all possible mathematical structures. (I talked about this in Towards a New Decision Theory.) Then, no matter what kind of universe we turn out to live in, whichever theory of quantum gravity turns out to be correct, the structure of the universe will correspond to some mathematical structure which we will have well-defined preferences over.
I addressed this issue a bit in that post as well. Are you not convinced that rational decision-making is possible in Tegmark's Level IV Multiverse?
The next few posts on my blog are going to be basically about approaching this problem (and given the occasion, I may as well commit to writing the first post today).
You should read [*] to get a better idea of why I see "preference over all mathematical structures" as a bad call. We can't say what "all mathematical structures" means; any given foundation covers only a portion of what we could invent. Like the real world, the mathematics we might someday encounter can only be completely defined by the process of discovering it (though if you capture that process, you may need nothing else).
--
[*] S. Awodey (2004). "An Answer to Hellman's Question: 'Does Category Theory Provide a Framework for Mathematical Structuralism?'". Philosophia Mathematica 12(1):54-64.
Hope to finish it today... Though I won't talk about philosophy of mathematics in this sub-series, I'm just going to reduce the ontological confusion about preference and laws of physics to a (still somewhat philosophical, but taking place in a comfortably formal setting) question of static analysis of computer programs.
Great to hear. Looking forward to reading it.
Yes, talking about "preference over all mathematical structures" does gloss over some problems in the philosophy of mathematics, and I am sympathetic to anti-foundationalist views like Awodey's.
Also, in general I agree with Roko on the need for an AI that can do philosophy better than any human, so in this thread I was mostly picking a nit with a specific argument that he had.
(I was going to remind you about the missing post, but I see Roko already did. :)
I disagree on the first part, and agree on the second part.
Yes, and that's enough for rational decision making. I'm not really sure why you're not seeing that...
There is a deep analogy between how you can't change the laws of physics (the contents of reality, apart from acting lawfully within it) and how you can't change your own program. It's not a delusion unless it can be reached by mistake. A theist can't be right to act as if a deity exists unless his program (brain) is such that this is the correct way to act, and he can't change his mind to make it right, because it's impossible to change one's program; one can only act according to it.
I agree that it's ugly to think of the weights as a pretense on how real certain parts of reality are. That's why I think it may be better to think of them as representing how much you care about various parts of reality. (For the benefit of other readers, I talked about this in What Are Probabilities, Anyway?.)
Actually, I haven't completely given up the idea that there is some objective notion of how real, or how important, various parts of reality are. It's hard to escape the intuition that some parts of math are just easier to reach or find than others, in a way that does not depend on how human minds work.
I believe even personal identity falls into this category. Many moral intuitions work with the the-me-in-the-future object, as marked on the map. To follow these intuitions, it is very important for us to have a good idea of where the-me-in-the-future is, that is, to have a good map of this thing. In weird thought experiments with copying, this epistemic step breaks down: if there are multiple future copies, there is no single the-me-in-the-future pattern to point to. As a result, the moral intuitions that work indirectly through this mark on the map get confused and start giving wrong answers as well. This can readily be observed, for example, in the preference inconsistency over time expected in such thought experiments (you precommit to teleporting-with-delay, but then the copy that is to be destroyed starts complaining).
Personal identity is, in general, a wrong epistemic question asked by our moral intuition. Only if preference is expressed in terms of the territory (or rather, in a form flexible enough to follow all possible developments), including the parts currently represented in moral intuition via the-me-in-the-future object, will the confusion about expectations and anthropic thought experiments go away.
I invented it sometime around the dawn of time, don't know if Marcello did in advance or not.
Actually, I don't know if I could have claimed to invent it, there may be science fiction priors.