This post is a summary of the different positions expressed in the comments to my previous post and elsewhere on LW. The central issue turned out to be assigning "probabilities" to individual theories within an equivalence class of theories that yield identical predictions. Presumably we must prefer shorter theories to their longer versions even when they are equivalent. For example, is "physics as we know it" more probable than "Odin created physics as we know it"? Is the Hamiltonian formulation of classical mechanics a priori more probable than the Lagrangian formulation? Is the definition of reals via Dedekind cuts "truer" than the definition via binary expansions? And are these all really the same question in disguise?

One attractive answer, given by shokwave, says that our intuitive concept of "complexity penalty" for theories is really an incomplete formalization of "conjunction penalty". Theories that require additional premises are less likely to be true, according to the eternal laws of probability. Adding premises like "Odin created everything" makes a theory less probable and also happens to make it longer; this is the entire reason why we intuitively agree with Occam's Razor in penalizing longer theories. Unfortunately, this answer seems to be based on a concept of "truth" granted from above - but what do differing degrees of truth actually mean, when two theories make exactly the same predictions?
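As a toy illustration of the conjunction penalty (the credences below are invented, purely to show the arithmetic):

```python
# Illustrative numbers only -- these credences are made up, not measured.
# The conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A) for any extra premise B.
p_physics = 0.9                    # credence in "physics as we know it"
p_odin_given_physics = 0.01        # credence in the added Odin premise, given physics
p_conjunction = p_physics * p_odin_given_physics

assert p_conjunction <= p_physics  # the longer theory can never be more probable
print(round(p_conjunction, 3))     # 0.009
```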

Another intriguing answer came from JGWeissman. Apparently, as we learn new physics, we tend to discard inconvenient versions of old formalisms. So electromagnetic potentials turn out to be "more true" than electromagnetic fields because they carry over to quantum mechanics much better. I like this answer because it seems to be very well-informed! But what shall we do after we discover all of physics, and still have multiple equivalent formalisms - do we have any reason to believe simplicity will still work as a deciding factor? And the question remains, which definition of real numbers is "correct" after all?

Eliezer, bless him, decided to take a more naive view. He merely pointed out that our intuitive concept of "truth" does seem to distinguish between "physics" and "God created physics", so if our current formalization of "truth" fails to tell them apart, the flaw lies with the formalism rather than with us. I have a lot of sympathy for this answer as well, but it looks rather like a mystery to be solved. I never expected to become entangled in a controversy over the notion of truth on LW, of all places!

A final and most intriguing answer of all came from saturn, who alluded to a position held by Eliezer and sharpened by Nesov. After thinking it over for a while, I generated a good contender for the most confused argument ever expressed on LW. Namely, I'm going to completely ignore the is-ought distinction and use morality to prove the "strong" version of Occam's Razor - that shorter theories are more "likely" than equivalent longer versions. You ready? Here goes:

Imagine you have the option to put a human being in a sealed box where they will be tortured for 50 years and then incinerated. No observational evidence will ever leave the box. (For added certainty, fling the box away at near lightspeed and let the expansion of the universe ensure that you can never reach it.) Now consider the following physical theory: as soon as you seal the box, our laws of physics will make a localized exception and the victim will spontaneously vanish from the box. This theory makes exactly the same observational predictions as your current best theory of physics, so it lies in the same equivalence class and you should give it the same credence. If you're still reluctant to push the button, it looks like you already are a believer in the "strong Occam's Razor" saying simpler theories without local exceptions are "more true". QED.

It's not clear what, if anything, the above argument proves. It probably has no consequences in reality, because no matter how seductive it sounds, skipping over the is-ought distinction is not permitted. But it makes for a nice koan to meditate on weird matters like "probability as preference" (due to Nesov and Wei Dai) and other mysteries we haven't solved yet.

ETA: Hal Finney pointed out that the UDT approach - assuming that you live in many branches of the "Solomonoff multiverse" at once, weighted by simplicity, and reducing everything to decision problems in the obvious way - dissolves our mystery nicely and logically, at the cost of abandoning approximate concepts like "truth" and "degree of belief". It agrees with our intuition in advising you to avoid torturing people in closed boxes, and more generally in all questions about moral consequences of the "implied invisible". And it nicely skips over all the tangled issues of "actual" vs "potential" predictions, etc. I'm a little embarrassed at not having noticed the connection earlier. Now can we find any other good solutions, or is Wei's idea the only game in town?


gets out the ladder and climbs up to the scoreboard

5 posts without a tasteless and unnecessary torture reference

replaces the 5 with a 0

climbs back down

Years ago, before coming up with even crazier ideas, Wei Dai invented a concept that I named UDASSA. One way to think of the idea is that the universe actually consists of an infinite number of Universal Turing Machines running all possible programs. Some of these programs "simulate" or even "create" virtual universes with conscious entities in them. We are those entities.

Generally, different programs can produce the same output; and even programs that produce different output can have identical subsets of their output that may include conscious entities. So we live in more than one program's output. There is no meaning to the question of what program our observable universe is actually running. We are present in the outputs of all programs that can produce our experiences, including the Odin one.

Probability enters the picture if we consider that a UTM program of n bits is being run in 1/2^n of the UTMs (because 1/2^n of all infinite bit strings will start with that n bit string). That means that most of our instances are present in the outputs of relatively short programs. The Odin program is much longer (we will assume) than one without him, so the overwhelming majority of our copies are in universes without Odin. Probabilistically, we can bet that it's overwhelmingly likely that Odin does not exist.
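A minimal sketch of this weighting, with made-up program lengths (nobody knows the actual minimal description length of physics):

```python
from fractions import Fraction

# Each n-bit program is a prefix of fraction 2**-n of all infinite bit strings,
# so it runs on that fraction of the UTMs. The lengths here are invented.
def weight(n_bits):
    return Fraction(1, 2 ** n_bits)

physics_bits = 10_000   # hypothetical length of a plain-physics program
odin_bits = 10_100      # the same program with an "Odin" preamble bolted on

# Share of our copies living in Odin universes, relative to Odin-free ones:
ratio = weight(odin_bits) / weight(physics_bits)
assert ratio == Fraction(1, 2 ** 100)   # roughly 8e-31 -- bet against Odin
```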

6DanielVarga13y
This is a cool theory, but it is probably equivalent to another, less cool theory that yields identical predictions and does not reference infinite virtual universes. :)
5paulfchristiano13y
Although it postulates the existence of infinitely many inaccessible universes, it may be simpler than equivalent theories which imply only a single universe. I feel like this is an argument we've seen before, but with more hilarious self-referentiality.
0khafra13y
Perhaps in The Finale of the Ultimate Meta Mega Crossover?
0DanielVarga13y
If I am not mistaken, it is a bit more formalized version of Greg Egan's Dust Theory.
0paulfchristiano13y
I was actually referring to the (slightly superficial) similarity to the MWI vs. collapse discussion that indirectly prompted this post.
2cousin_it13y
Yep, I already arrived at that answer elsewhere in the thread. It's very nice and consistent and fits very well with UDT (Wei Dai's current "crazy" idea). There still remains the mystery of where our "subjective" probabilities come from, and the mystery of why everything doesn't explode into chaos, but our current mystery becomes solved, IMO. To give a recent quote from Wei: "There are copies of me all over math".
1red7513y
Should we stop at UDASSA? Can we consider a universe that consists of a continuum of UDASSAs, each running some (infinite) subset of the set of all possible programs?
2red7513y
If anyone is interested: this extension doesn't seem to lead to anything of interest. If we map the continuum of UDASSA multiverses into [0;1), then the Lebesgue measure of the set of multiverses which run a particular program is 1/2. Let the binary number 0.b1 b2 ... bn ... represent multiverse M if for all n: (bn=1 iff M runs program number n, and bn=0 otherwise). It is easy to see that the image of the set of multiverses which run program number n is a collection of intervals [(2i-1)/2^n; 2i/2^n) for i=1..2^(n-1). Thus its Lebesgue measure is 2^(n-1)/2^n = 1/2.
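For the curious, the measure claim is easy to check numerically, writing the dyadic intervals on which the n-th binary digit equals 1 as [(2i-1)/2^n; 2i/2^n) for i=1..2^(n-1):

```python
from fractions import Fraction

# Check: the set of reals in [0,1) whose n-th binary digit is 1 is the union
# of the intervals [(2i-1)/2**n, 2i/2**n) for i = 1..2**(n-1); its total
# Lebesgue measure is 2**(n-1) intervals of length 2**-n each, i.e. 1/2.
def measure_bit_n_is_one(n):
    intervals = [(Fraction(2 * i - 1, 2 ** n), Fraction(2 * i, 2 ** n))
                 for i in range(1, 2 ** (n - 1) + 1)]
    return sum(hi - lo for lo, hi in intervals)

assert all(measure_bit_n_is_one(n) == Fraction(1, 2) for n in range(1, 11))
```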

If two theories imply different invisibles, they shouldn't be considered equivalent. That no evidence can tell them apart, yet they are still not equal, is explained by their having different priors. But if two theories are logically (agent-provably, rather) equivalent, this is different, as the invisibles they imply and the priors measuring them are also the same.

0Will_Sawin13y
Can a theory be proved logically equivalent to a theory with more, or fewer, morally valuable agents?

The thing about positivism is it pretends to be a down-to-earth common-sense philosophy, and then the more you think about it the more it turns into this crazy surrealist madhouse. So we can't measure parallel universes and there's no fact of the matter as to whether they exist. But people in parallel universes can measure them, but there's no fact of the matter whether these people exist, and there's a fact of the matter whether these universes exist if and only if these people exist to measure them, so there's no fact of the matter whether there is a fac...

0cousin_it13y
I'm fine with throwing away positivism, as long as we find something viable to replace it with. If you think yielding identical observations does not make two theories equivalent, then what is your criterion for equivalence of theories? Or are all theories different and incompatible, so only one definition of real numbers can ever be "true"? This looks like replacing one surrealist madhouse with another.
2ata13y
I could accept that two theories are equivalent if they yield identical observations to every possible observer, everywhere, or better yet, if they yield identical output for any given input if implemented as programs. If you write a program which simulates the laws of physics, and then you write another program which simulates "Odin" calling a function that simulates the laws of physics and doing nothing else, then I would accept that they represent equivalent theories, if they really do always result in the exact same (or isomorphic) output under every circumstance. (Though an Odin that impotent or constrained would be more of a weird programming mistake than a god.) But if the two programs don't systematically produce equivalent output for equivalent input, then they are not equivalent programs, even if none of the agents being simulated can tell the difference.
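The two programs ata describes can be mocked up in a few lines; `physics` below is an arbitrary placeholder dynamics, not a real simulation:

```python
# Toy mock-up of ata's two programs. "physics" stands in for whatever
# simulation we would actually run.
def physics(state):
    return state + 1              # placeholder dynamics

def odin_physics(state):
    def odin(world_rule, s):
        return world_rule(s)      # "Odin" calls physics and does nothing else
    return odin(physics, state)

# Same output on every input we can feed them => equivalent theories on
# ata's criterion (a real proof would quantify over all possible inputs).
assert all(physics(s) == odin_physics(s) for s in range(10_000))
```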

This theory makes exactly the same observational predictions as your current best theory of physics, so it lies in the same equivalence class and you should give it the same credence.

You're blurring an important distinction between two types of equivalence:

  • Empirical equivalence, where two program-theories give the same predictions on all currently known empirical observations.
  • Formal equivalence, where two program-theories give identical predictions on all theoretically possible configurations, and this can be proved mathematically.
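A toy pair of program-theories shows the gap between the two notions (the finite loop below is a stand-in for a mathematical proof, which would have to cover all inputs):

```python
# Two "theories" that agree on every observation made so far,
# but not on all theoretically possible configurations.
def theory_a(x):
    return x % 7

def theory_b(x):
    return x % 7 if x < 10 ** 6 else 0   # deviates only in unobserved regimes

observations = range(1000)               # everything tested to date
empirically_equivalent = all(theory_a(x) == theory_b(x) for x in observations)
formally_equivalent = all(theory_a(x) == theory_b(x) for x in range(10 ** 6 + 7))

assert empirically_equivalent and not formally_equivalent
```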

If two theories ...

Boxes proofed against all direct and indirect observation, potential for observation mixed with concrete practicality of such observation, strictly-worse choices, morality... one would be hard-pressed to muddle your thought experiment more than that.

Let's try to make it a little more straightforward: assume that there exists a certain amount of physical space which falls outside our past light cone. Do you think it is equally likely that it contains galaxies and that it contains unicorns? More importantly, do you think the preceding question means anything?

0cousin_it13y
In my example, as in many others, morality/utility is necessary for reducing questions about "beliefs" to questions about decisions. (Similar to how adding payoffs to the Sleeping Beauty problem clarifies matters a lot, and how naively talking about probabilities in the Absent-Minded Driver introduces a time loop.) In your formulation I may legitimately withhold judgment about unicorns - say the question is as meaningless as asking whether integers "are" a subset of reals, or a distinct set - because it doesn't affect my future utility either way. In my formulation you can't wiggle out as easily.
0NihilCredo13y
[Edited out - I need to think this over a little longer]
4cousin_it13y
I thought about your questions some more, and stumbled upon a perspective that makes them all meaningful - yes, even the one about defining the real numbers. You have to imagine yourself living in a sort of "Solomonoff multiverse" that runs a weighted mix of all possible programs, and act as if to maximize your expected utility over that whole multiverse. Never mind "truth" or "degrees of belief" at all! If Omega comes to you and asks whether an inaccessible region of space contains galaxies or unicorns, bravely answer "galaxies" because it wins you more cookies weighted by universe-weight - simpler programs have more of it. This seems to be the coherent position that many commenters are groping toward...
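A crude sketch of that bet, with invented program lengths standing in for universe-weights:

```python
# Sketch of "answer to maximize cookies weighted by universe-weight".
# The program lengths (and hence weights) below are invented for illustration.
worlds = {
    "galaxies": 2.0 ** -50,    # short program  => large universe-weight
    "unicorns": 2.0 ** -500,   # long program   => tiny universe-weight
}

def expected_cookies(answer):
    # Omega pays 1 cookie in each world whose contents match your answer.
    return sum(w for contents, w in worlds.items() if contents == answer)

best = max(worlds, key=expected_cookies)
assert best == "galaxies"      # bravely answer "galaxies"
```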

I'm skeptical of the idea that the hypothesis "Odin created physics as we know it" would actually make no additional predictions over the hypothesis "physics as we know it". I'm tempted to say that as a last resort we could distinguish between them by generating situations like "Omega asks you to bet on whether Odin exists, and will evaluate you using a logarithmic scoring rule and will penalize you that many utilons", though at this point maybe it is unjustified to invoke Omega without explaining how she knows these things.

But what do you think of some of th...

2shokwave13y
As the originator of that hypothesis, the idea I had in mind was that there are two theories: physics as we know it, and Odin created physics as we know it. Scientists hold the first theory, and Odinists hold the second. The theories predict exactly the same things, so that Odinists and scientists have the same answers to problems, but the Odinists' theory lets them say that Odin exists - and they happen to have a book that starts with "If Odin exists, then..." and goes on to detail how they should live their lives. The scientists have a great interest in showing that the second theory is wrong, because the Odinists are otherwise justified in their pillaging of the scientists' home towns. But the Odinists are clever folk, and say we shouldn't expect to see anything different from the world as we know it, because Odin is all-powerful at world-creation. Honestly, I should have picked Loki.
1ata13y
Are you defining Odin's role and behaviour such that he is guaranteed not to actually do anything that impinges on reality beyond creating the laws of physics that we already know? Or is it just that he hasn't interfered with anything so far, or hasn't interfered with anything in such a way that anything we can observe is different? (Edit: I ask because any claims about metaethics that depend on Odin's existence would seem to require raising him to the level of a causally-active component of the theory rather than an arbitrary and unfalsifiable epiphenomenon.)
0shokwave13y
I am defining him as being an arbitrary and unfalsifiable epiphenomenon everywhere excepting that he was causally active in the creation of the book that details the ethical lives Odinists ought to live. Basically, he hasn't interfered with anything in such a way that anything we could ever observe is different, except he wrote a book about it. It's clear to me that anyone could choose to reject Odinism, but it's not clear what arguments other than a strong Occam's razor could convince a sufficiently reasonable and de-biased (i.e. genuinely truth-seeking) Odinist to give up their belief.

no matter how seductive it sounds, skipping over the is-ought distinction is not permitted

Yeah, some of us are still not convinced on that one.

Speaking of which, does anyone actually have something resembling a proof of this? People just seem to cast it about flippantly.

0Jack13y
So what Hume was talking about when he addressed this is just that people sometimes come to "is" conclusions on the basis of "ought" statements, and "ought" statements on the basis of "is" statements. Hume makes the point that no rule in deductive logic renders this move valid. You would have to defend some introduction rule for "ought". Or, I guess, throw out deductive logic. That said, cousin_it's argument can be saved with a rather uncontroversial premise: the reason we don't want to send this person adrift is that we believe "he will continue to be tortured even though we aren't observing him." This seems uncontroversial; my problems with the argument are that a) I'm not sure the hypothetical successfully renders the case "unobservable", and b) I'm not sure our evolved moral intuitions are equipped to rule meaningfully on such events.
3Perplexed13y
That is the first (Hume's) half of the argument. The second half is G.E. Moore's "open question" argument, which tries to show that you can't come up with a valid introduction rule for "ought" by the obvious trick of defining "ought" in terms of simple concepts that don't already involve morality.

The irony here is that Hume is remembered for the "is/ought" thing even though he immediately proceeded to provide an account of "ought" in terms of "is". The way he did it is to break morality into two parts. The first part might be called the "moral instinct". This is a real feature of human nature; it exists; it can be examined; it is something that lives entirely in the world of "is". Of course, no one who thinks that there is something "spiritual" or "supernatural" about morality is particularly bothered by the fact that moral instincts are completely natural entities made out of "is" stuff. They maintain that there is a second part to morality - call it "true morality" - and that the "moral instinct" is just an imperfect guide to "true morality". It is the "true morality" that owns the verb "ought", and hence it cannot be reduced to "is".

Hume is perfectly happy to have the distinction made between "moral instincts" and "true morality". He just disagrees that "true morality" is on any kind of higher plane. According to Hume, when you look closely, you will find that true morality, the ideal toward which our moral instincts tend, is nothing other than enlightened rational self-interest, together with a certain amount of social convention - both of which can quite easily be reduced to "is". So, I'm claiming that Hume made the first part of the argument precisely because he intended to define "ought" in terms of "is". But Moore came along later, didn't buy Hume's definition, and came up with the "open question" argument to 'prove' that no one else could define "ought" either.
2Will_Sawin13y
Isn't the problem that ought already has a definition? "ought" is defined as "that stuff that you should do" This definition sounds circular because it is. I can't physically point to an ought like I can an apple, but "ought" is a concept all human beings have, separate from learning language. "is" is actually another example of this. So the reason you can't define ought is the same reason that you can't define an apple as those red roundish things and then define an apple as a being capable of flight. We can define new words, like Hume-ought, Utilitarian-ought, Eliezer-ought, based on what various people or schools of thought say those words mean. But "ought=Hume-ought" or whatever is not a definition, it's a statement of moral fact, and you can't prove it unless you take a statement of moral fact as an assumption.
2Perplexed13y
In a sense, that is exactly the point that Moore is making with the "open question" argument. But the situation is a bit more complicated. The stuff you should do can be further broken down into "stuff you should do for your own sake" and "stuff you should do for moral reasons". I.e. "ought" splits into two words - a practical-ought and a moral-ought. Now, one way of looking at what Hume did is to say that he simply defined moral-ought as practical-ought. A dubious procedure, as you point out. But another way of looking at what he did is that he analyzed the concept of 'moral-ought' and discovered a piece of it that seems to have been misclassified. That piece really should be classified as a variety of 'practical-ought'. And then, having gotten away with it once, he goes on to do it again and again until there is nothing left of an independent 'moral-ought'. Dissolved away. What's more, if you are not strongly pre-committed to defending the notion of an independent moral 'ought', the argument can be rather convincing. And as a supplementary incentive, notice that by dissolving and relocating the moral 'ought' in this way, Hume has solved the second key question about morality: "Now that I know how I morally ought to behave, what reason do I have to behave as I morally ought to behave?" Hume's answer: "Because 'moral ought' is just a special case of 'practical ought'."
0thomblake13y
Despite being a fellow-traveler in these areas, I had no idea Hume actually laid out all these pieces. I'll have to go read some more Hume. I tend to defend it as a straightforward application of Sidgwick's definition of ethics coupled with the actual English meaning of 'should', but clearly a good argument preceding that by a century or two would be even better.
2Perplexed13y
Try this

Is the Hamiltonian formulation of classical mechanics a priori more probable than the Lagrangian formulation?

They are both derivable from the same source, Newtonian mechanics plus ZF set theory. They are equivalent and therefore equally probable.

The shortest possible version of all these mutually equivalent theories is the measure of how (equally) probable they are.

My favourite justification of Occam's razor is that even if two theories are equivalent in their explicit predictions, the simpler one is usually more likely to inspire correct generalisations. The reason may be that the more complicated a theory is, the more arbitrary constraints it puts on our thinking, and those constraints can prevent us from seeing the correct, more general theory. For example, some versions of aether theory can be made equivalent to special relativity, but the assumptions of absolute space and time make it nearly impossible to discover something equivalent to general relativity, starting from aether.

I personally think Occam's razor is more about describing what you know. If two theories are equally good in their explanatory value, but one has some extra bells and whistles added on, you have to ask what basis you have for deciding to prefer the bells and whistles over the no bells and whistles version.

Since both theories are in fact equally good in their predictions, you have no grounds for preferring one over the other. You are in fact ignorant of which theory is the correct one. However, the simplest one is the one that comes closest to describing th...

Adding premises like "Odin created everything" makes a theory less probable and also happens to make it longer; this is the entire reason why we intuitively agree with Occam's Razor in penalizing longer theories. Unfortunately, this answer seems to be based on a concept of "truth" granted from above - but what do differing degrees of truth actually mean, when two theories make exactly the same predictions?

and

Imagine you have the option to put a human being in a sealed box where they will be tortured for 50 years and then incinerate

...
2cousin_it13y
A prediction that's impossible to test is a contradiction in terms. Show me any unfalsifiable theory, and I'll invent some predictions that follow from it; they will just be "impossible to test".
3Matt_Simpson13y
Ok, so don't call the existence of Odin or what's happening inside the box "predictions." Then I'll rephrase my question: Why do we only care about "predictions" and not "everything a theory says about reality?" Clearly all three pairs of theories I mentioned above say different things about reality even if it is impossible in some sense to observe this difference. (I'll add to this later, but I'm pressed for time currently) edit: nothing to add, actually
1cousin_it13y
How can we distinguish statements that are "about reality" from statements that aren't, if we just threw away the criteria of predictive power and verification?
2Matt_Simpson13y
How about counterfactual predictive power and verification? If I could observe the inside of that box, then I could see a difference between the two theories. I realize this opens a potential can of worms, i.e., what sort of counterfactuals are we allowed to consider? But in any case, this is how I've understood the basic idea of falsifiability. Compare to Yvain's logs of the universe idea. (He's doing something different with it, I know)
0komponisto13y
...and this is why Popperian falsificationism is wrong! There aren't any "unfalsifiable" theories, though there may be unintelligible theories.
0[anonymous]13y
I disagree, since prediction != theory. It is certainly possible to have a theory (e.g. Freud's ideas about the ego and superego) that makes no predictions. In the comment above, cousin_it is correct in that "unfalsifiable prediction" is a contradiction, but "unfalsifiable theory" is not. It just means that the theory is not well-formed and does not pay rent.
0komponisto13y
Though cousin_it will have to speak for himself, I believe he was specifically disagreeing with this when he wrote:

Theories that require additional premises are less likely to be true, according to the eternal laws of probability ... Unfortunately, this answer seems to be based on a concept of "truth" granted from above - but what do differing degrees of truth actually mean, when two theories make exactly the same predictions?

Reading this and going back to my post to work out what I was thinking, I have a sort-of clarification for the issue in the quote. The original argument was that, before experiencing the universe, all premises are a priori equally lik...

I wonder if this can't be considered more pragmatically? There was a passage in the MIT Encyclopedia of Cognitive Sciences in the Logic entry that seems relevant:

Johnson-Laird and Byrne (1991) have argued that postulating more imagelike MENTAL MODELS make better predictions about the way people actually reason. Their proposal, applied to our sample argument, might well help to explain the difference in difficulty in the various inferences mentioned earlier, because it is easier to visualize “some people” and “at least three people” than it is to visualiz

...
1[anonymous]13y
I was thinking along a similar line: given that computation has costs and memory is limited, making the best possible predictions with given resources requires the computationally least expensive route. Assuming that generating a mathematical model is (at least on average) more difficult for more complex theories, wasting time creating (ultimately equivalent) models that must incorporate epiphenomenal concepts leads to practically worse predictions. So not using the strong Occam's razor would lead to worse results. And since we have brought moral issues along: not using the best possible way would even be morally bad, as we would lose important information for optimizing our moral behavior, because we could not look as far into the future and would have less accurate predictions at our disposal, given our limited resources. ETA: The difference from your post above is mainly that this holds true for a perfect Bayesian superintelligence as well, and should be invariant across different computational substrates.

What if Tegmark's multiverse is true? All the equivalent formulations of reality would "exist" as mathematical structures, and if there's nothing to differentiate between them, it seems that all we can do is point to appropriate equivalence class in which "we" exist.

However, the unreachable-tortured-man scenario suggests that it may be useful to split that class anyway. I don't know much about the Solomonoff prior - does it make sense to build a probability distribution over the equivalence class and ask what the probability mass is of the part that contains the man?

Theories that require additional premises are less likely to be true, according to the eternal laws of probability. Adding premises like "Odin created everything" makes a theory less probable and also happens to make it longer; this is the entire reason why we intuitively agree with Occam's Razor in penalizing longer theories. Unfortunately, this answer seems to be based on a concept of "truth" granted from above -

Not to me it doesn't. (Though I may not understand what you mean by "truth" here.) Bayesian probability theory...

Another intriguing answer came from JGWeissman. Apparently, as we learn new physics, we tend to discard inconvenient versions of old formalisms. So electromagnetic potentials turn out to be "more true" than electromagnetic fields because they carry over to quantum mechanics much better. I like this answer because it seems to be very well-informed!

I don't like this explanation - while potentials are useful calculation tools both macroscopically and quantum mechanically, fields have unique values whereas potentials have non-unique values. It's no...

2wnoise13y
You can just as easily move to a different mathematical structure where the gauge is "modded out", a "torsor". Similarly, in quantum mechanics where the phase of the wavefunction has no physical significance, rather than working with the vectors of a Hilbert space, we work with rays (though calculational rules in practice reduce to vectors). There are methods of gaugeless quantization but I'm not familiar with them, though I'd definitely like to learn. (I'd hope they'd get around some of the problems I've had with QFT foundations, though that's probably a forlorn hope.)
1Sniffnoy13y
Immediate thought: Why not just regard the potentials as actual elements of a quotient space? :)
0Perplexed13y
Are you familiar with the Aharonov-Bohm effect? My understanding is that it is a phenomenon which, in some sense, shows that the EM potential is a "real thing", not just a mathematical artifact.
0Vaniver13y
I am and your understanding is correct for most applications. I don't think it matters for this question, as my understanding is that the operative factor behind the Aharonov-Bohm effect is the nonlocality of wavefunctions.* Because wavefunctions are nonlocal, the potential formulation is staggeringly simpler than a force formulation. (The potentials are more real in the sense that the only people who do calculations with forces are imaginary!) You still have gauge freedom with the Aharonov-Bohm effect- if you adjust the four-potential everywhere, all it does is adjust the phase everywhere, and all you can measure are phase differences. Although, that highlights an inconsistency: if I'm willing to accept wavefunctions as real, despite their phase freedom, then I should be willing to accept potentials are real, despite their gauge freedom. I'm going to think this one over, but barring any further thoughts it looks like that's enough to change my mind. *I could be wrong: I have enough physics training to speculate on these issues, but not to conclude. [edit] It also helps that Feynman, who certainly knows more about this than I do, sees the potentials as more real (I suppose this means 'fundamental'?) than the fields.
wnoise:
Heh. It gets worse. Typically one is taught that the wavefunction is defined only up to a global phase. You might have thought that the difference in phase between two places would at least be well defined. This is true so long as you stick to one reference frame: a Galilean boost will preserve the magnitude everywhere, but add a different phase everywhere.
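A quick numerical sketch of this point (the wavefunction and phase factor below are made up for illustration): a position-dependent phase of the kind a boost introduces preserves the magnitude at every point, but not the phase difference between two points.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)
psi = np.exp(-x**2) * np.exp(1j * 3.0 * x)  # some wavefunction on a grid

# A position-dependent phase factor: magnitude at every point is preserved...
k = 2.0
psi_boosted = np.exp(1j * k * x) * psi
print(np.allclose(np.abs(psi_boosted), np.abs(psi)))  # True

# ...but the phase *difference* between two points is not.
d_before = np.angle(psi[4]) - np.angle(psi[0])
d_after = np.angle(psi_boosted[4]) - np.angle(psi_boosted[0])
print(np.isclose(d_before, d_after))  # False
```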

Isn't this comment a shorter version of the post Belief in the Implied Invisible?

I just added to the post.

Your thought experiment of the person in the sealed torture box ignores the question of what evidence I have that such a box exists, and what evidence I have that the physical theory you've outlined is true within the thought experiment.

The fact that a theory makes the same predictions as some other theory is irrelevant if I don't have good reason for thinking the theory might be true in the first place. The problem with "Odin created physics" is that I have no good reasons to believe in the existence of Norse gods and that the universe w... (read more)

khafra:
Factoring the genesis of a theory into its likelihood sounds a lot like counting the stopping condition of repeated trials.
anonym:
I meant something closer to determining whether the process by which a theory was created was a rational process based on evidence. "Odin created physics" is clearly not in that category, and neither is the torture box hypothesis.

I like this argument. But in this case I think there's another argument that doesn't rely on morality so much.

Your belief that the two theories in question will always make the same predictions is conditional on the box being perfectly sealed, and the universe continuing to expand forever. There's a small chance that these things are not true, and if that turns out to be the case, you may or may not expect to see the guy again, depending on what physical theory you believe in.

I think Matt Simpson is getting at this when he talks about counterfactual predic... (read more)

In other words: you can't make a theory more or less probable just by expressing it differently, with more or fewer words.

Only the shortest known formulation counts.
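One toy way to make "only the shortest known formulation counts" precise is a Solomonoff-style description-length prior (the lengths below are made up for illustration): if each formulation of length L bits gets weight 2^-L, a theory's total weight over all of its equivalent formulations is dominated by the shortest one, so adding longer restatements barely moves it.

```python
# Toy description-length prior: each formulation of length L bits
# gets weight 2**-L; a theory's total weight sums over all of its
# known equivalent formulations.
def theory_weight(lengths):
    return sum(2.0 ** -L for L in lengths)

short_only = theory_weight([20])
with_longer = theory_weight([20, 35, 50])

# Adding longer equivalent formulations barely moves the total:
# the shortest one dominates.
print(with_longer / short_only)  # ~1.00003
```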

cousin_it:
I wasn't trying to make this point. Since the last post I've updated my position and got rid of most of my certainty. Now it's all a mystery to me.
Thomas:
An intellectually honest man. That's enough for a start.

Doesn't the human inside qualify as an observer? For all we know, WE outside the box could be the ones tortured for 50 years and then incinerated once the button is pushed.

It probably has no consequences in reality, because no matter how seductive it sounds, skipping over the is-ought distinction is not permitted.

What, we can't just assume for now that torture is bad without getting into metaethics?

cousin_it:
We can assume that, but we can't make the conclusion about Occam's Razor that my argument makes. There's a mistake in it somewhere. A statement like "torture is bad" can never imply a statement like "this physical or mathematical theory is true"; the world doesn't work like that.
ata:
Of course it can't imply it, but it can test whether you actually believe it. The bit that says "If you're still reluctant to push the button, it looks like you already are a believer in the 'strong Occam's Razor' saying simpler theories without local exceptions are 'more true'" sounds fine to me. Then the only question is, in the long run and outside the context of weird hypotheticals, whether this kind of thinking wins more than it loses. But we can reframe it not to talk about morality, and keep things on the "is" side of the divide. Suppose you are a paperclip maximizer, and imagine you have a sealed box with 50 paperclips in it. You have a machine with a button which, if pressed, will create 5 paperclips and give them to you, and will vaporize the contents of the box, while not visibly affecting the box or anything external to it. Consider the following physical theory: Right after you sealed the box, our laws of physics will make a temporary exception and will immediately teleport the paperclips to the core of a distant planet, where they will be safe and intact indefinitely. Given that this makes the same observational predictions as our current understanding of the laws of physics, would pressing the button be the paperclip-maximizing thing to do? If I were a paperclip maximizer, I would not press the button. If that means accepting the "strong Occam's Razor", so be it.
cousin_it:
This is begging the question. The answer depends on the implementation of the maximizer. Of course, if you have a "strong Occamian" prior, you imagine a paperclip maximizer based on that!
ata:
Okay, but... what decision actually maximizes paperclips? The world where the 50 paperclips have been teleported to safety may be indistinguishable, from the agent's perspective, from the world where the laws of physics went on working as they usually do, but... I guess I'm having trouble imagining holding an epistemology where those are considered equivalent worlds rather than just equivalent states of knowledge. That seems like it's starting to get into ontological relativism. Suppose you've just pressed the button. You're you, not a paperclip maximizer; you don't care about paperclips, you just wanted to see what happens, because you have another device: it has one button, and an LED. If you press the button, the LED will light up if and only if the paperclips were teleported to safety due to a previously unknown law of physics. You press the button. The light turns on. How surprised are you?
JGWeissman:
And a paperclipper with an anti-Occamian prior that does push the button is revealing a different answer to the supposedly meaningless question. Either way, it is assigning utility to stuff it cannot observe, and this shows that questions about the implied invisible, about the differences in theories with no observable differences, can be important.
[anonymous]:
With all due respect, you don't know that. It depends on the implementation of the paperclip maximizer, and how to "properly" implement it is exactly the issue we're discussing here.

I don't think that argument is even valid. After all, I have the option of putting a human in a box. If I do, one hypothesis states that the human will be tortured and then killed. The other hypothesis states that the human will "vanish"; it's not precisely clear what "vanish" means here, but I'm going to assume that since this state is supposed to be identical in my experience to the state in the first hypothesis, the human will no longer exist. (Alternative explanations, such as the human being transported to another universe which I ... (read more)

cousin_it:
I don't think you're addressing the core of the argument. Even if you don't actually press the button, how much disutility you assign to pressing it depends on your beliefs. If you think the action will cause 50 years of torture, you're a believer in the "strong Occam's Razor" and the proof is complete.
Will_Sawin:
A simple fix is to have the button-pressing also prevent, say, 45 years of observable torture. That gets you more complicated ethics, but that may be a sacrifice worth making to put the zero point between the two.
[anonymous]:

Very nice summary!