Wei_Dai comments on Open Thread: February 2010, part 2 - Less Wrong

Post author: CronoDAS 16 February 2010 08:29AM


Comment author: Wei_Dai 17 February 2010 05:57:00PM 2 points

I wish you had written this a few weeks earlier, because it's perfect as a link for the "their associated difficulties and dangers" phrase in my "Complexity of Value != Complexity of Outcome" post.

Please consider upgrading this comment to a post, perhaps with some links and additional explanations. For example, what is the ontology problem in ethics?

Comment deleted 17 February 2010 06:30:53PM
Comment author: MichaelVassar 19 February 2010 04:58:53PM 1 point

In practice, I find that this is never a problem. You usually rest your values on some intuitively obvious part of whatever originally caused you to create the concepts in question.

Comment deleted 19 February 2010 09:31:49PM
Comment author: Wei_Dai 19 February 2010 09:41:14PM 0 points

I think mind copying technology may be a better illustration of the subjective anticipation problem than MW QM, but I agree that it's a good example of the ontology problem. BTW, do you have a reference for where the ontology problem was first stated, in case I need to reference it in the future?

Comment deleted 20 February 2010 11:58:31PM
Comment author: Wei_Dai 21 February 2010 02:04:26AM 1 point

Thanks for the pointer, but I think the argument you gave in that post is wrong. You argued that an agent smaller than the universe has to represent its goals using an approximate ontology (and therefore would have to later re-phrase its goals relative to more accurate ontologies). But such an agent can represent its goals/preferences in compressed form, instead of using an approximate ontology. With such compressed preferences, it may not have the computational resources to determine with certainty which course of action best satisfies its preferences, but that is just a standard logical uncertainty problem.

I think the ontology problem is a real problem, but it may just be a one-time problem, where we or an AI have to translate our fuzzy human preferences into some well-defined form, instead of a problem that all agents must face over and over again.

Comment deleted 21 February 2010 02:36:17AM
Comment author: Wei_Dai 21 February 2010 03:23:02AM 3 points

The reason I think it can just be a one-time shock is that we can extend our preferences to cover all possible mathematical structures. (I talked about this in Towards a New Decision Theory.) Then, no matter what kind of universe we turn out to live in, whichever theory of quantum gravity turns out to be correct, the structure of the universe will correspond to some mathematical structure which we will have well-defined preferences over.

perhaps some kind of ultimate ensemble theory already has [eroded any rational decision-making].

I addressed this issue a bit in that post as well. Are you not convinced that rational decision-making is possible in Tegmark's Level IV Multiverse?

Comment author: Vladimir_Nesov 21 February 2010 10:00:34AM 3 points

The next few posts on my blog are going to be basically about approaching this problem (and given the occasion, I may as well commit to writing the first post today).

You should read [*] to get a better idea of why I see "preference over all mathematical structures" as a bad call. We can't say what "all mathematical structures" means; any given foundation covers only a portion of what we could invent. Like the real world, the mathematics we might someday encounter can only be completely defined by the process of discovery (but if you capture this process, you may need nothing else).

--
[*] S. Awodey (2004). "An Answer to Hellman's Question: Does Category Theory Provide a Framework for Mathematical Structuralism?" Philosophia Mathematica 12(1):54-64.

Comment deleted 22 February 2010 05:38:47PM
Comment author: Vladimir_Nesov 21 February 2010 10:12:51AM 1 point

In reality, problems of the form where you discover that your preferences are stated in terms of an incorrect ontology (e.g. souls, or anticipated future experience) are where this really bites.

I believe even personal identity falls under this category. A lot of moral intuitions work with the-me-in-the-future object, as marked in the map. To follow these intuitions, it is very important for us to have a good idea of where the-me-in-the-future is, to have a good map of this thing. When you get to weird thought experiments with copying, this epistemic step breaks down, because if there are multiple future copies, there is no longer a single the-me-in-the-future pattern to be found. As a result, moral intuitions that work indirectly through this mark on the map get confused and start giving the wrong answers as well. This can be readily observed, for example, from the preferential inconsistency over time expected in such thought experiments (you precommit to teleporting-with-delay, but then your copy that is to be destroyed starts complaining).

Personal identity is (in general) a wrong epistemic question asked by our moral intuition. Only if preference is expressed in terms of the territory (or rather, in a form flexible enough to follow all possible developments), including the parts currently represented in moral intuition as the-me-in-the-future object in the territory, will the confusion with expectations and anthropic thought experiments go away.

Comment author: Eliezer_Yudkowsky 21 February 2010 12:36:38AM 1 point

I invented it sometime around the dawn of time; I don't know whether Marcello did in advance or not.

Actually, I don't know if I could have claimed to invent it, there may be science fiction priors.