nyan_sandwich comments on Pascal's Muggle: Infinitesimal Priors and Strong Evidence

Post author: Eliezer_Yudkowsky 08 May 2013 12:43AM

Comment author: [deleted] 08 May 2013 06:50:48PM 4 points

Why is decision/probability theory allowed to constrain the space of "physical" models? It seems that the proper theory should not depend on metaphysical assumptions.

If they are starting to require uncertain metaphysical assumptions, I think that counts as "not working together".

Comment author: Jack 10 May 2013 12:21:41AM 3 points

Metaphysical assumptions are one thing: this one involves normative assumptions. There is zero reason to think we evolved values that can make any sense at all of saving 3^^^3 people. The software we shipped with cannot take numbers like that in its domain. That we can think up thought experiments that confuse our ethical intuitions was always incredibly likely. Coming up with kludgey methods of making decisions that give intuitively correct answers to the thought experiments while preserving normal normative reasoning, and then, from there, concluding something about what the universe must be like, is a really odd epistemic position to take.

Comment author: shminux 08 May 2013 08:04:27PM 0 points

It seems that the proper theory should not depend on metaphysical assumptions.

That's the part that starts grating on me. Especially when Eliezer mentions Tegmark Level IV with a straight face. I assume that I do not grok his meaning in fullness. If he means what I think he means, it would be a great disappointment.

Comment author: TimS 08 May 2013 08:24:29PM 9 points

shminux,

It's just a fact that you endorse a very different theory of "reality" than Eliezer. Why disguise your reasonable disagreement with him by claiming that you don't understand him?

You talk like you don't notice when highly-qualified-physicist shminux is talking and when average-armchair-philosopher shminux is talking.

This is annoying to me in particular because physicist shminux knows a lot more than I do, and I should pay attention to what he says in order to be less wrong, while philosopher shminux is not entitled to the same weight. So I'd like some markers of which one is talking.

Comment author: shminux 08 May 2013 09:18:09PM 7 points

I thought I was pretty clear re the "markers of which one is talking". But let me recap.

Eliezer has thought about metaethics, decision theories and AI design for a much, much longer time, and much, much more seriously, than I have. I can see that when I read what he writes about issues I have not even thought of. While I cannot tell whether it is correct, I can certainly tell that there is a fair amount of learning I still have to do if I want to say anything interesting. This is the same feeling I used to get (and still get on occasion) when talking with an expert in, say, General Relativity, before I learned the subject in sufficient depth.

Now that I have some expertise in the area, I see the situation from the other side as well. I can often recognize a standard amateurish argument before the person making it has finished. I often know exactly what implicit false premises lead to this argument, because I had been there myself. If I am lucky, I can successfully point out the problematic assumptions to the amateur in question, provided I can simplify the explanation to the proper level. If so, the reaction I get is "that's so cool... so deep... I'll go and ponder it. Thank you, Master!", the same thing I used to feel when hearing an expert answer my amateurish questions.

As far as Eliezer's area of expertise is concerned, I am on the wrong side of the gulf. Thus I am happy to learn what I can from him in this area and be gratified if my humble suggestions prove useful on occasion.

I am much more skeptical about his forays into Quantum Mechanics, Relativity and some other areas of physics I have more than a passing familiarity with. I do not get the feeling that what he says is "deep", and only occasionally that it is "interesting". Hence I am happy to discount his musings about MWI as amateurish.

There is a grey area between the two, which could be thought of as philosophy of science. While I am far from an expert in the area, I have put in a fair amount of effort to understand where the leading edge is. What I find is warring camps of hand-waving "experts" with few interesting insights and no way to convince the rival school of anything. The interesting insights mostly happen in something more properly called math, linguistics or cognitive science, not philosophy proper. There is none of the feeling of awe you get from listening to a true expert in a certain field. Expert physicists who venture into philosophy, like Tegmark and Page, quickly lose their aura of expertise and seem mere mortals with little or no advantage over other amateurs.

When Eliezer talks about something metaphysical related to MWI and Tegmark IV, or any kind of anthropics, I suspect that he is out of his depth, because he sounds like it. However, knowing that he is an expert in a somewhat related area makes me think that I may well have missed something important, so I give him the benefit of the doubt and try to figure out what I may have missed. If the only difference is that I "endorse a very different theory of 'reality' than Eliezer", and if this is indeed only a matter of endorsement, with no way to tell experimentally who is right, now or in the far future, then his "theory of reality" becomes much less relevant to me and therefore much less interesting. Oh, and here I don't mean realism vs instrumentalism; I mean falsifiable models of the "real external world", as opposed to anything Everett-like or Barbour-like.

Comment author: Eliezer_Yudkowsky 08 May 2013 10:03:05PM 6 points

Even if the field X is confused, to confidently dismiss subtheory Y you must know something confidently about Y from within this confusion, such as that Y is inconsistent or nonreductionist or something. I often occupy this mental state myself, but I'm aware that it's 'arrogant' and that I'm setting myself above everyone in field X who does think Y is plausible. For example, I am arrogant with respect to respected but elderly physicists who think single-world interpretations of QM are plausible, or anyone who thinks our confusion about the ultimate nature of reality can keep the God subtheory in the running. Our admitted confusion does not permit that particular answer to remain plausible.

I don't think anyone I take seriously would deny that the field of anthropics / magical-reality-fluid is confused. What do you think you know, from within this confusion, about the existence of all computable processes, or of all logical theories with models, that makes that hypothesis obviously impermitted? In case it's not clear, I wasn't endorsing Tegmark Level IV as the obvious truth the way I consider MWI obvious, nor yet endorsing it at all; rather, I was pointing out that with some further specification, a version of T4 could provide a model in which frequencies would go as the probabilities assigned by the complexity+leverage penalty, which would not necessarily make it true. It is not clear to me what epistemic state you could occupy from which this would justly disappoint you in me, unless you considered T4 obviously forbidden even from within our confusion. And of course I'm fine with your being arrogant about that, so long as you realize you're being arrogant and so long as you have the epistemological firepower to back it up.
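
To put the leverage-penalty idea in concrete form, here is a minimal sketch in Python; the 1/N penalty is the core of the idea as described in the post, but the function and all the numbers below are illustrative assumptions, not anything specified in the thread:

```python
# Minimal sketch of the complexity+leverage penalty (illustrative, not
# Eliezer's exact formalism): a hypothesis that puts you in a unique
# position to affect N people has its prior multiplied by roughly 1/N,
# since in a world of N people at most a few can occupy that position.

def penalized_expected_utility(prior, people_affected, utility_per_person):
    """Expected utility after applying a 1/N leverage penalty."""
    leverage_penalty = 1.0 / people_affected
    return prior * leverage_penalty * (people_affected * utility_per_person)

# A mugger's offer to save 3^^^3 lives: the 1/N penalty cancels the N
# lives at stake, so the result stays bounded by the ordinary prior no
# matter how large N gets. (1e100 stands in for 3^^^3, which would
# overflow a float.)
N = 1e100
print(penalized_expected_utility(prior=1e-10, people_affected=N,
                                 utility_per_person=1.0))  # ~1e-10
```

The cancellation is what connects to the frequency claim above: in a Tegmark-IV-style ensemble, the fraction of observers who actually occupy such a high-leverage position would go as 1/N, matching the penalized probability.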

Comment author: shminux 08 May 2013 11:34:54PM 0 points

Even if the field X is confused, to confidently dismiss subtheory Y you must know something confidently about Y from within this confusion, such as that Y is inconsistent or nonreductionist or something.

Maybe I was unclear. I don't dismiss Y=TL4 as wrong; I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities. I agree that I am "arrogant" here, in the sense that I discount the opinion of a smart and popular MIT prof as misguided. The postulate "mathematical existence = physical existence" raises a category-error exception for me, as one is, in your words, logic, and the other is physics. In fact, I don't understand why we should privilege math to begin with. Maybe the universe indeed does not run on math (man, I still chuckle every time I recall that omake). Maybe the trouble we have with understanding the world is that we rely on math too much (sorry, getting too Chopra here). Maybe the matrix lord was a sloppy programmer whose bugs and self-contradictory assumptions manifest themselves to us as black hole singularities, which are hidden from view only because the code maintainers did a passable job of acting on the QA reports. There are many ideas which are just as pretty and just as unjustifiable as TL4.

I don't pretend to fully grok the "complexity+leverage penalty" idea, except to say that your dark energy example makes me think less of it, as it seems to rely on a consideration I find dubious: that any model which, if accurate, could affect gazillions of people in the far future is extremely unlikely, despite being the currently best map available. Is it arrogant? Probably. Is it wrong? Not unless you prove the alternative right.

Comment author: endoself 09 May 2013 02:14:08AM 4 points

Maybe I was unclear. I don't dismiss Y=TL4 as wrong, I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities.

He's not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV; he's saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) that it is the sort of decision procedure that emerges reasonably naturally, and is thus a more reasonable hypothesis than if we didn't know it comes up naturally like that.

It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.

¹ The word 'and' isn't really correct here. It's very likely that EY means at least one of (a) and (b), and possibly both.

Comment author: Eliezer_Yudkowsky 09 May 2013 05:35:04AM 0 points

(Yep. More (a) than (b); it still feels pretty unnatural to me.)

Comment author: shminux 09 May 2013 08:39:28PM 1 point

Huh. This whole exchange makes me more certain that I am missing something crucial, but reading and dissecting it repeatedly does not seem to help. And apparently it's not an issue of not knowing enough math. I guess the mental block I can't get over is "why TL4?". Or maybe "what other mental constructs could one use in place of TL4 to make a similar argument?"

Maybe paper-machine or someone else on #lesswrong will be able to clarify this.

Comment author: [deleted] 09 May 2013 09:20:49PM 1 point

Hey, don't look at me. I'm with you on "Existence of T4 is untestable, therefore boring."

Comment author: Eliezer_Yudkowsky 09 May 2013 10:36:18PM 1 point

Or maybe "what other mental constructs could one use in place of TL4 to make a similar argument?"

Have you got one?

Comment author: shminux 09 May 2013 05:15:04AM 0 points

It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.

You are right, I am out of my depth math-wise. Maybe that's why I can't see the relevance of an untestable theory to AI design.

Comment author: wedrifid 09 May 2013 02:08:15PM 5 points

Maybe that's why I can't see the relevance of an untestable theory to AI design.

It seems to be the problem that is relevant to AI design. How does an expected utility maximising agent handle edge cases and infinitesimals given logical uncertainty and bounded capabilities? If you get that wrong, then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then rests on its suitability for use in an AI design (or, conceivably, on the implications if an AI constructed and used said model).
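
A toy sketch of that failure mode, with made-up numbers of my own (nothing here is from the thread): under naive unbounded expected-utility maximisation, any offer with a large enough claimed payoff swamps every ordinary option, however small its probability.

```python
# Toy illustration (made-up numbers) of the edge case at issue: a naive
# expected-utility maximiser with unbounded utilities lets an
# arbitrarily implausible offer dominate every ordinary option.

options = {
    "pay the mugger $5": (1e-50, 1e60),  # (probability, claimed utility)
    "keep the $5":       (1.0, 5.0),
}

def naive_best(opts):
    """Pick the option with the highest raw expected utility."""
    return max(opts, key=lambda name: opts[name][0] * opts[name][1])

print(naive_best(options))  # "pay the mugger $5": 1e-50 * 1e60 = 1e10 >> 5
```

Whatever fix one adopts (a leverage penalty, bounded utilities, or something else) has to close this exploit without crippling ordinary decision-making, which is the design problem being pointed at here.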

Comment author: Eliezer_Yudkowsky 09 May 2013 08:10:51PM 1 point

(Also yep.)

Comment author: homunq 12 May 2013 10:36:26AM 1 point

TL4, or at least (TL4 + some measure theory that gives calculable and sensible answers), is not entirely unfalsifiable. For instance, it predicts that a random observer (you) should live in a very "big" universe. Since we have plausible reasons to believe TL0-TL3 (or at least I think we do), and since I have a very hard time imagining specific laws of physics that give "bigger" causal webs than you get from TL0-TL3, I count that as some weak evidence for TL4: it could have been falsified but wasn't.

It seems plausible that that's the only evidence we'll ever get regarding TL4. If so, I'm not sure that either of the terms "testable" or "untestable" apply. "Testable" means "susceptible to reproducible experiment"; "untestable" means "unsusceptible to experiment"; so what do you call something in between, which is susceptible only to limited and irreproducible evidence? Quasitestable?

Of course, you could still perhaps say "I ignore it as only quasitestable and therefore useless for justifying anything interesting".

Comment author: Kawoomba 08 May 2013 08:12:29PM 2 points

If he means what I think he means, it would be a great disappointment.

'Splain yo'self.

Comment author: shminux 08 May 2013 09:32:10PM 0 points

See my reply to TimS.

Comment author: Eliezer_Yudkowsky 08 May 2013 07:34:48PM 0 points

I'm not familiar with any certain metaphysical assumptions. And the constraint here is along the lines of "things converge", where it is at least plausible that reality has to converge too. (Small edit made to final paragraphs to reflect this.)