Did you notice that the year listed is 2014? I think that's a mistake.

Oh. I can imagine a distribution that looks like that. It would have been helpful if he had given us all the numbers. Perhaps he does in this blog post, but I got confused part way through and couldn't make it to the end.

Would it look like this?

There must be something I'm missing here. The previous post pretty definitively proved to me that the no-communication clause must be false.

Consider the latter two experiments in the last post:

A transmitted 20°, B transmitted 40°: 5.8%

A transmitted 0°, B transmitted 40°: 20.7%

Let's say I'm on Planet A and my friend is on Planet B, and we are both constantly receiving entangled pairs of photons from some satellite stationed between us. I'm filtering my photons on Planet A at 20°, and my friend on Planet B is filtering his at 40°. He observes a 5.8% chance that his photons are transmitted, in accordance with the experiment. I want to send him a signal faster than light, so I turn my filter to 0°. He should now observe that his photons have a 20.7% chance of being transmitted.

It takes some statistical analysis before he can determine that the signal has really been sent, but the important part is that the time to send the message depends not on the distance but on the number of particles sent. Given a sufficient distance and enough particles, it should be faster than light, right?
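
(For what it's worth, the two percentages quoted above are consistent with a joint-transmission probability of (1/2) * sin^2(angle_A - angle_B) for the entangled pair. That formula is my guess at where the numbers come from, not something stated in the post, but here's a quick check:)

    # Quick check of where the quoted percentages come from, assuming (my
    # assumption, not stated above) that the joint-transmission probability
    # for the entangled pair is (1/2) * sin^2(angle_A - angle_B).
    import math

    def joint_transmission(angle_a_deg, angle_b_deg):
        """Probability that both photons of a pair are transmitted."""
        delta = math.radians(angle_a_deg - angle_b_deg)
        return 0.5 * math.sin(delta) ** 2

    print(joint_transmission(20, 40))  # ~0.058 -> the quoted 5.8%
    print(joint_transmission(0, 40))   # ~0.207 -> the quoted 20.7%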

There is so much wrong with this example that I don't know where to start.

You make up a hypothetical person who dies because she doesn't heed an explicit warning that says "if you do this, you will die". Then you make several ridiculous claims about this hypothetical person:

1) You claim this event will happen, with absolute certainty.

2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.

3) You claim this event is a tragedy.

I disagree with all of these, and I will challenge them individually. But first, the meta-claim of this argument is that I am supposed to consider compromises that I don't even believe in. Why would I ever do that? Suppose that the downside of a policy decision is "fewer people will go to heaven". If you are not religious, this sounds like a ridiculous, nonsensical downside, and thus no downside at all. And where do you draw the line on perceived downsides anyway? Do you allow people to just make up metaphysical, superstitious downsides, and then proceed to weigh those as well? Because that seems like a waste of time to me. Perhaps you do weigh those possibilities but assign them so low a probability that they effectively disappear; clearly, though, your opponent doesn't assign them the same probabilities you do. So you have to take the argument to the place where the real disagreements occur. Which leads me to these three claims.

1) You claim this event will happen, with absolute certainty.

1 is not a probability. Besides, the original article mentions safeguards that should reduce the probability that this event ever happens. The types of safeguards depend on your hypothetical person, of course. Let's say your hypothetical person is drunk. The clerk could give a breathalyzer test. Maybe your hypothetical person isn't aware of the warnings. The clerk could read them off at the checkout. Maybe the person doesn't listen or understand. The clerk could quiz them on the content just read to ensure it sinks in.

But then, I guess the real point of the article is that the hypothetical person doesn't believe the warnings, which brings us to:

2) You claim this event occurs because this individual has low intelligence, and that it is unfair because a person does not choose to be born intelligent.

Receiving a warning explicitly stating "if you do this, you will die" is hardly a mental puzzle. Is this really even a measure of intelligence? This seems like a stretch.

Bleach is sold at normal stores, without any restrictions. If you drink it, you could die. Many people have heard this warning. Do people disbelieve it? Do they risk testing the hypothesis on themselves? Why would anyone risk death like this? I am genuinely curious as to how this can be related to intelligence. Someone please explain this to me.

Generally if someone drinks bleach, it is because they believed the warning and wanted to die. Is this a tragedy? Should we ban bleach? This brings me to:

3) You claim this event is a tragedy.

Is it really?

People are hardly a valuable resource right now. In fact, there are either too many of us, or there will be soon. If one person dies, everyone else gets more space and resources. It's kind of like your article on dust specks vs. torture, except that a suicidal person selects themselves, rather than being randomly selected. Unless you apply some argument about determinism and say that a person doesn't choose to be born suicidal (or choose to lead a life whose circumstances would lead anyone to be suicidal, etc.).

Should a person be allowed to commit suicide? If we prevent them from doing so, are we infringing on their rights? Or are they infringing on their own rights? I don't really know. I do know and love some amazing people who have committed suicide, and I wish I could have prevented it. This is a real complication to this issue for me, because I value different people differently: I'd gladly allow many people I've never met to die if it would save one person I love. But I understand that other people don't value the same people I do, so this feeling is not easy to translate into general policies.

Is evolution not fair? If we decide to prop up every unfit individual and prevent every suicide, genetic evolution becomes severely neutered. We can't really adapt to our environment if we don't let it select from us. Thus it would be to our genetic benefit to allow people to die, as it would eventually select out whatever genes caused them to do this. But then, some safety nets seem reasonable. We wouldn't consider banning glasses in order to select for better vision. We need to strike some sort of balance here, though, and not waste too many resources propping up individuals who will only multiply their cost to everyone with future generations of their genes and memes. I think that, currently, the balance is set at the point where it simply costs too much cash to keep someone alive, though we will gladly provide all people with a certain amount of food and shelter. The specific amount provided is under constant debate.

So, are we obligated to protect every random individual ever born? Is it a tragedy if anyone dies? I think that's debatable. It isn't a definite downside. In fact, it could even be an upside.

Debates can easily appear one-sided, for each side. For example, some people believe that if you follow a particular conduct in life, you will go to heaven. To these people, any policy decision that results in sending fewer people to heaven is a tragedy. But to people who don't believe in heaven, this downside does not exist.

This is not just an arbitrary example. This shows up all the time in US politics. Until people can agree on whether or not heaven exists, how can any of these debates not seem one-sided?

I think it's a good thing to do this. It is analogous to science.

If you're a good reasoner and you encounter evidence that conflicts with one of your beliefs, you update that belief.

Likewise, if you want to update someone else's belief, you can present evidence that conflicts with it in hopes they will be a good reasoner and update their belief.

This would not be so effective if you just told them your conclusion flat out, because that would look like just another belief you are trying to force upon them.
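
A minimal sketch of the kind of update I have in mind, in Bayesian terms (all numbers are made up purely for illustration):

    # Minimal Bayesian update, purely illustrative (all numbers are made up).
    # A "good reasoner" holds a belief with some prior probability, sees
    # evidence that is more likely if the belief is false, and shifts the
    # belief accordingly.
    prior = 0.8               # P(belief)
    p_e_if_true = 0.2         # P(evidence | belief)
    p_e_if_false = 0.7        # P(evidence | not belief)

    posterior = (p_e_if_true * prior) / (
        p_e_if_true * prior + p_e_if_false * (1 - prior)
    )
    print(posterior)  # ~0.53: the conflicting evidence pulls the belief down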

What's this "the Matrix" everyone in this thread is talking about? The movie? The idea that we're all in a computer simulation?

Btw, as for causality loops, Feynman describes antimatter as "just like regular matter, only traveling backwards in time", which means if we allow for time travel, we've just reduced the number of types of particles in our description of reality by half =].

Anthropomorphizing animals is justified based on the degree of similarity between their brains and ours. For example, we know that the parts of our brain we have found to be responsible for strong emotions are also present in reptiles, so we might assume that reptiles also have strong emotions. Mammals are more similar to us, so we feel a greater moral obligation toward them.

I'm with you. You have to look at the outcomes, otherwise you end up running into the same logical blinders that make Quantum Mechanics hard to accept.

After reading some of the Quantum Mechanics sequence, I am more willing to believe in Omega's omniscience. Quantum mechanics allows multiple timelines that lead to the same outcome to interfere and simply never happen, even if they would have been probable in classical mechanics. Perhaps all timelines leading to the outcome where one-boxing does not yield money happen to interfere. Even if you take a more literal interpretation of the problem statement, where it is your own mind that determines the box's content, your mind is made of particles, which could conceivably affect the universe's configuration.

Are you implying that the presence of a detector instead of an obstacle changes what the other detectors detect, or not?

The text is unclear here:

Detector 1 goes off half the time and Detector 2 goes off half the time.

Does "half the time" mean "half the time that any detector goes off", or "half the time you shoot a photon"? I would expect that, with the obstacle in place, half the time you shoot a photon no detector would go off, because the first mirror would deflect it into an obstacle. Seeing no detector go off is distinct and observable, so I don't see any way it could be eliminated as a possibility like the other case described here where two possible timelines that lead to the same world interfere and cancel out. So I would assume Eliezer means "half the time that any detector goes off". If so, I'd like to see the text updated to be more clear about this.
