Comment author: James_Miller 22 August 2016 07:32:21PM 6 points

Excellent. My personal theory is that the universe is fine-tuned both for life and for the Fermi paradox, with a late great filter. Across the multiverse, most lifeforms like us will exist in such universes, because without a great filter, intelligent life would quickly turn into something outside our reference class, consume all the resources of its universe, and so make that universe inhospitable to life in our reference class.

Comment author: torekp 29 August 2016 10:23:30PM 1 point

Can you please clarify "our reference class"? And are you using some form of Self-Sampling Assumption?

Comment author: MockTurtle 18 August 2016 12:34:06PM 1 point

Even though it's been quite a few years since I attended any quantum mechanics courses, I did do a talk as an undergraduate on this very experiment, so I'm hoping that what I write below will not be complete rubbish. I'll quickly go through the double slit experiment, and then try to explain what's happening in the delayed choice quantum eraser and why it happens. Disclaimer: I know (or knew) the maths, but our professors did not go to great lengths explaining what 'really' happens, let alone what happens according to the MWI, so my explanation comes from my understanding of the maths and my admittedly more shoddy understanding of the MWI. So take the following with a grain of salt, and I would welcome comments and corrections from better informed people! (Also, the names for the different detectors in the delayed choice explanation are taken from the Wikipedia article.)

In the normal double slit experiment, letting through one photon at a time, the slit through which the photon went cannot be determined, as the world-state when the photon has landed could have come from either trajectory (so it's still within the same Everett branch), and so both paths of the photon were able to interfere, affecting where it landed. As more photons are sent through, we see evidence of this through the interference pattern created. However, if we measure which slit the photon goes through, the world states when the photon lands are different for each slit the photon went through (in one branch, a measurement exists which says it went through slit A, and in the other, through slit B). Because the end world states are different, the two branch-versions of the photon did not interfere with each other. I think of it like this: starting at a world state at point A, and ending at a world state at point B, if multiple paths of a photon could have led from A to B, then the different paths could interfere with each other. In the case where the slit the photon went through is known, the different paths could not both lead to the same world state (B), and so existed in separate Everett branches, unable to interfere with each other.

Now, with the delayed choice: the key is to resist the temptation to take the state "signal photon has landed, but idler photon has yet to land" as point B in my above analogy. If you did, you'd see that the world state can be reached by the photon going through either slit, and so interference inside this single branch must have occurred. But time doesn't work that way, it turns out: the true final world states are those that take into account where the idler photon went. And so we see that the world state where the idler photon landed in D1 or D2 could have been reached regardless of which slit the photon went through, and so both on D0 (for those photons) and on D1/D2, we end up seeing interference patterns, as we're still within a single branch, so to speak (when it comes to this limited interaction, that is). Whereas in the case where the idler photon reaches D3, that world state could only have been reached by the photon going through one particular slit, and so the trajectory of the photon did not interfere with any other trajectory (since the other trajectory led to a world state where the idler photon was detected at D4, a separate branch).

So going back to my point A/B analogy: imagine three world states A, B, and C as points on a page, and let STRAIGHT lines represent different hypothetical paths a photon could take. If two paths lead from point A to point B, the lines lie on top of each other, meaning a single branch, and the paths interfere. But if one path leads to point B and the other to point C, the lines are not on top of each other, the paths go into different branches, and so they do not interfere.
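The branch logic described above can be sketched numerically. This is a toy calculation with made-up amplitudes, not a model of the actual apparatus: when two paths end in the same world state you add amplitudes before squaring (the cross term is the interference), and when they end in distinguishable world states you add the probabilities of each branch.

```python
import cmath

# Toy amplitudes for the photon reaching one screen position via
# slit A or slit B (illustrative values, not from a real setup).
amp_A = 0.5 * cmath.exp(1j * 0.0)  # path through slit A
amp_B = 0.5 * cmath.exp(1j * 2.0)  # path through slit B, extra phase

# Paths end in the SAME world state: sum amplitudes, then square.
# The result depends on the relative phase -- interference.
p_same_branch = abs(amp_A + amp_B) ** 2

# Which-path info recorded, so the paths end in DIFFERENT world
# states: no cross term, just add the branch probabilities.
p_separate_branches = abs(amp_A) ** 2 + abs(amp_B) ** 2

print(p_same_branch)        # varies with the phase difference
print(p_separate_branches)  # always 0.5, phase-independent
```

Sweeping the phase in `amp_B` moves `p_same_branch` between 0 and 1 (the fringes), while `p_separate_branches` stays flat, which is the washed-out pattern you get once the slit is known.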

Comment author: torekp 28 August 2016 05:02:17PM 0 points

Belated thanks to you and MrMind, these answers were very helpful.

Comment author: torekp 16 August 2016 12:52:48AM 2 points

Can someone sketch me the Many-Worlds version of what happens in the delayed choice quantum eraser experiment? Does a last-minute choice to preserve or erase the which-path information affect which "worlds" decohere "away from" the experimenter? If so, how does that go, in broad outline? If not, what?

Comment author: turchin 12 July 2016 09:26:02PM 0 points

"Superintelligence cannot be contained: Lessons from Computability Theory" http://arxiv.org/pdf/1607.00913.pdf

"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible."

Comment author: torekp 13 July 2016 12:51:46AM 0 points

Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world

What is the notion of "includes" here? Edit: from pp 4-5:

This means that a superintelligent machine could simulate the behavior of an arbitrary Turing machine on arbitrary input, and hence for our purpose the superintelligent machine is a (possibly identical) super-set of the Turing machines. Indeed, quoting Turing, “a man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine”

Comment author: ESRogs 14 May 2016 08:11:05AM 1 point

What would it mean for our universe not to be exhausted by its mathematical properties? Isn't whether a property seems mathematical just a function of how precisely you've described it?

Comment author: torekp 15 May 2016 03:45:11PM 1 point

Let's start with an example: my length-in-meters, along the major axis, rounded to the nearest integer, is 2. In this statement "2", "rounded to nearest integer", and "major axis" are clearly mathematical; while "length-in-meters" and "my (me)" are not obviously mathematical. The question is how to cash out these terms or properties into mathematics.

We could try to find a mathematical feature that defines "length-in-meters", but how is that supposed to work? We could talk about the distance light travels in 1 / 299,792,458 seconds, but now we've introduced both "seconds" and "light". The problem (if you consider non-mathematical language a problem) just seems to be getting worse.

Additionally, if every apparently non-mathematical concept is just disguised mathematics, then for any given real world object, there is a mathematical structure that maps to that object and no other object. That seems implausible. Possibly analogous, in some way I can't put my finger on: the Ugly Duckling theorem.

Comment author: entirelyuseless 02 May 2016 12:54:02PM 1 point

On the object level, I think you are almost completely wrong.

You say, "There is not one culpable atom in the universe." This is true, but your implied conclusion, that there are no culpable persons in the universe, is false. Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

But if there are agents in the universe, and there are, then there can be good and bad agents there, just as there are good and bad apples in the universe.

Richard Chappell, I think, has used Singer's own argument against him. Suppose you are jogging somewhere in order to make a donation to a foreign charity. The expected number of lives saved by your donation is 3. On the way, you witness a young child drowning in a river. You have a choice: continue on, which saves 2 lives overall relative to stopping (3 from the donation, minus the child); or save the child, which loses 2 lives overall relative to continuing.

Everyone knows that the right choice here is to save the child, and that the utilitarian choice is wrong.

The utilitarian error is this: it is asking, "what actions will have the most beneficial effects?" But that is the wrong question. The right question is, "What is the right thing to do?"

(Edit: there is another inconsistency in your way of thinking. If you assume there is no culpability in the universe because atoms are not culpable, neither is it worthwhile to save human lives, because there are no atoms in the universe that are worth bothering about.)

Comment author: torekp 06 May 2016 01:28:14AM 1 point

Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

This. I call the inference "no X at the microlevel, therefore no such thing as X" the Cherry Pion fallacy. (As in: no cherry pions implies no cherry pie.) Of course, broadly speaking it's an instance of the fallacy of composition, but this variety seems to be more tempting than most, so it merits its own moniker.

It's a shame. The OP begins with some great questions, and goes on to consider relevant observations like

When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent.

But from there, the obvious move is one of charitable interpretation, saying, Hey! Responsibility is declared in these sorts of situations, when an agent has caused an event that wouldn't have happened without her, so maybe, "responsibility" means something like "the agent caused an event that wouldn't have happened without her". Then one could find counterexamples to this first formulation, and come up with a new formulation that got the new (and old) examples right ... and so on.

Comment author: torekp 17 April 2016 02:15:13PM 0 points

Go through a Venn diagram explanation of Bayes's Theorem. Not necessarily the formula, but just a graphical representation of updating on evidence. Draw attention to how the probability of H is distributed between E and not-E. Point out that if the probability of H doesn't go down upon the discovery of not-E, it can't possibly go up upon the discovery of E.

This has the advantage of showing the requirement of falsifiability to be an extreme case of a more powerful general principle.

This could be supplemental to some of the great suggestions by your other commenters.
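The point about E and not-E can be checked with the law of total probability: P(H) = P(H|E)P(E) + P(H|not-E)P(1-E's complement), so the prior is a weighted average of the two posteriors and must lie strictly between them. The numbers below are made up for illustration.

```python
# Law of total probability: P(H) = P(H|E)P(E) + P(H|~E)P(~E).
# Since the prior is a weighted average of the two posteriors,
# if seeing not-E would not lower P(H), seeing E cannot raise it.
p_E = 0.3
p_H_given_E = 0.9
p_H_given_notE = 0.2

p_H = p_H_given_E * p_E + p_H_given_notE * (1 - p_E)
print(p_H)  # ~0.41, strictly between 0.2 and 0.9
```

If you tried to set both posteriors above the prior, the weighted average would exceed the prior, a contradiction: that is the "conservation of expected evidence" behind the falsifiability point.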

Comment author: Gram_Stone 02 April 2016 08:23:51PM 1 point

Thanks for all of this, I wasn't aware of any of these things.

The poll question takes the Axiom to be a normative principle, not a day to day recipe for every decision.

This may sound nitpicky, but poll questions don't take anything to be anything; people do. I wonder if your results won't be skewed by the people who actually make the mistake that you didn't make but that I thought you made, or ignored by people like me who think they know more and that the question is silly but who actually know less and don't understand the question. I almost skipped the poll entirely, and would never have read your wonderful comment. Maybe you could add some elaboration in the OP, or suggest that voters read this thread? Not sure.

Comment author: torekp 03 April 2016 12:27:38AM 1 point

Sure, if there were more people answering the poll, there'd probably be some that took the Axiom of Independence, and/or expected utility theory, in the way you worried about. It's a fair point. But so far I'm the only skeptical vote.

Comment author: Gram_Stone 02 April 2016 01:32:25PM 0 points

So, I think that this is actually a loaded question that may result from a common misconception about the thrust of Eliezer's arguments when he juxtaposes normative decision theory with empirical observations about human behavior. If your question is implicitly about normative decision theory, then yes, conformance to the Axiom of Independence is a requirement of rationality.

But it's clear that humans cannot do the math of probability theory and decision theory in real time, and that they were created in a very particular environment that is not very similar to the skeletal reality that normative decision agents inhabit. This is why we have things like framing effects and risk aversion (the example in the Allais paradox): you make a scale for the situation you're in because it lets you do a cheap approximation of the normative approach, or you pick the certainty over the uncertainty, because most biological creatures have to worry about ruin. This also means that you have different scales in different situations, even trivially different ones, so if we looked at you as a normative agent, you would have inconsistent preferences.

Obviously we can't get through the day without framing effects, but it seems to help to have an idea of the psychological reasons why we take normatively stupid bets sometimes, and to be able to decide when to rely on framing effects and risk aversion as tractable, helpful heuristics, and when to throw them out and do something that scares your jury-rigged brain but is probably a good idea anyway. And it cannot hurt to know how to do this: if you knew how to evaluate situations and decide whether or not to use a heuristic like risk aversion, you could always just choose the strategy you would have used if you didn't know how to do that.
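For concreteness, here are the Allais gambles in their common textbook form (the posts under discussion may use different payoffs). Many people pick the sure thing in the first pair but the long shot in the second, even though the two pairs differ only by a common 0.89-probability component, which is exactly what the Axiom of Independence forbids.

```python
def ev(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# Payoffs in millions of dollars (standard textbook numbers).
g1a = [(1.00, 1.0)]                            # $1M for certain
g1b = [(0.89, 1.0), (0.10, 5.0), (0.01, 0.0)]  # small chance of nothing
g2a = [(0.11, 1.0), (0.89, 0.0)]
g2b = [(0.10, 5.0), (0.90, 0.0)]

print(ev(g1a), ev(g1b))  # 1.0 vs ~1.39
print(ev(g2a), ev(g2b))  # ~0.11 vs ~0.50

# Pair 2 is pair 1 with the common 0.89 chance of $1M replaced by a
# common 0.89 chance of $0, so Independence says your choice between
# the remaining components should not flip between the pairs.
```

Choosing g1a and g2b is the typical "paradoxical" pattern: risk aversion near certainty dominates in the first pair, expected value in the second.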

Comment author: torekp 02 April 2016 07:34:19PM 2 points

The poll question takes the Axiom to be a normative principle, not a day to day recipe for every decision. I agree that the case for it as a normative principle is better than taking it as a prescription. I just don't think it's a completely convincing case.

I agree with Wei Dai's remark that

the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)

If a Dutchman throws a book at you - duck! You don't need to be the sort of agent to whom expected utility theory applies.

The deep reason why utility theory fails to be required by rationality, is that there is no general separability between the decision process itself and the "outcomes" that agents care about. I'm putting "outcomes" in scare quotes because the term strongly suggests that what matters is the destination, not the journey (where the journey includes the decision process and its features such as risk).

There are many particular occasions, at least for many agents (including me), on which there is such separability. That's why I find expected utility theory useful. But rationally required? Not so much.

Here's a toy version of the journey/destination problem. (I think I'm borrowing from Kaj Sotala, who probably said it better, but I can't find the original.) Suppose I sell my convertible Monday for $5000 and buy an SUV for $5010. On Tuesday I sell the SUV for $5000 and buy a Harley for $5010. On Wednesday I sell the Harley for $5000 and buy the original convertible back for $5010. Oh no, I've been money pumped! Except, wait - I got to drive a different vehicle each day, something that I enjoy. I'm out $30, but that might be a small price to pay for the privilege. This example doesn't involve risk per se, but does illustrate the care needed to avoid defining "outcomes" in such a way as to avoid begging questions against an agent's values.
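The accounting in the vehicle story is simple enough to write down. The $20-per-day enjoyment figure is an assumed number, invented for illustration; the point is only that once journey value enters the ledger, the "pump" can come out positive.

```python
swap_cost = 10           # sell for $5000, buy back for $5010, each day
days = 3
enjoyment_per_day = 20   # hypothetical subjective value of the variety

net_dollars = -swap_cost * days
net_value = net_dollars + enjoyment_per_day * days
print(net_dollars)  # -30: the classic "money pumped" verdict
print(net_value)    # 30: better off once the journey counts
```

An observer who defines outcomes purely by end-of-week bank balance and garage contents sees an irrational agent; an observer who includes the rides sees a bargain.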

Comment author: torekp 01 April 2016 10:33:48PM 1 point

So after reading the Allais paradox posts or being otherwise familiar with the topic, what do lesswrongers think?

