Comment author: PhilGoetz 17 December 2017 06:26:40PM 0 points

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.

You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.

When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will." You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can't be that one.

People in the LW community don't usually do that. I see sloppy statements claiming that humans "should" one-box, based on a presumption that they have no free will. That's making a claim within a paradigm while rejecting the paradigm. It makes no sense.

Consider what Eliezer says about coin flips:

We've previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.

The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophically realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a "false" statement. It correctly specifies the distribution of possibilities given the information available to the mind making the probability assessment. I think that is what Eliezer is trying to say there.
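
To make the nominalist reading concrete, here is a minimal Python sketch (an illustration of the point above, not code from Eliezer or anyone in this thread): the coin object has a definite face, while the 0.5 lives in each observer's information state and changes the moment that observer looks, even though the coin itself never changes.

```python
import random

class Coin:
    """The coin itself is either heads or tails; it has no 'probability' attribute."""
    def __init__(self):
        self.face = random.choice(["heads", "tails"])

class Observer:
    """Credence lives in the observer's information state, not in the coin."""
    def __init__(self):
        self.credence_heads = 0.5  # ignorance about the coin, not a fact about the coin

    def look_at(self, coin):
        # Seeing the coin moves the observer's credence to certainty,
        # even though nothing about the coin has changed.
        self.credence_heads = 1.0 if coin.face == "heads" else 0.0

coin = Coin()
alice, bob = Observer(), Observer()
alice.look_at(coin)
print(coin.face, alice.credence_heads, bob.credence_heads)
# Alice and Bob now assign different probabilities to the same coin,
# because "coin.probability" was never a property of the coin at all.
```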

"Free will" is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can't fully simulate your own brain within your own brain; you can't demand that we use the territory as our map.

Comment author: ike 18 December 2017 12:33:37AM 0 points

It's not just the one post, it's the whole sequence of related posts.

It's hard for me to summarize it all and do it justice, but it disagrees with the way you're framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of using "should" notions even in a deterministic world, which is the position you reject. I don't really want to argue the whole thing from scratch, but that is where our disagreement would lie.

Comment author: PhilGoetz 15 December 2017 07:48:31PM 0 points

"This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think."

It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as if you could by force of will violate your programming.

To ask what choice a deterministic entity should make presupposes both that it does, and does not, have choice. Presupposing a contradiction means STOP: your reasoning has crashed, and you can prove any conclusion if you continue.

Comment author: ike 16 December 2017 10:00:40PM 0 points

Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts and have some disagreement with them?

Comment author: PhilGoetz 15 December 2017 07:12:03PM 0 points

The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don't think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn't many-worlds imply that every game with Omega creates worlds in which Omega is wrong?

If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If you believe you should one-box if Omega can perfectly predict your actions, but two-box otherwise, then you are better off trying to two-box: you've already agreed that you should two-box if Omega can't perfectly predict your actions, and if Omega can, you won't be able to two-box unless Omega already predicted that you would, so it won't hurt to try to two-box.
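
For concreteness, here is a sketch of the expected-value table the two sides are arguing over how to use, assuming the conventional Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; these are the standard numbers, not anything stated in this thread) and leaving Omega's accuracy as a free parameter. The dominance argument above amounts to the claim that, once the prediction is already fixed, this table should not be read as action-guiding.

```python
def expected_value(one_box: bool, accuracy: float,
                   big: int = 1_000_000, small: int = 1_000) -> float:
    """Expected payoff if Omega predicts your actual choice with probability `accuracy`."""
    if one_box:
        # The opaque box is full iff Omega correctly predicted one-boxing.
        return accuracy * big
    # Two-boxing: you always get the small box; the big box is full only if Omega erred.
    return small + (1 - accuracy) * big

for p in (0.5, 0.9, 0.999, 1.0):
    print(p, expected_value(True, p), expected_value(False, p))
# At accuracy 0.5 (a coin-flipping "predictor") two-boxing comes out ahead;
# with a near-perfect predictor one-boxing does. The dispute above is over
# whether this calculation is even well-posed for a deterministic agent,
# not over the arithmetic itself.
```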

Comment author: ike 15 December 2017 07:37:17PM 0 points

"If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless."

This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

Re QM: sometimes I've seen it stipulated that the world in which the scenario happens is deterministic. It's entirely possible that the amount of noise generated by QM isn't enough to affect your choice (aside from a very unlikely "your brain has a couple of bits changed randomly in exactly the right way to change your choice", but that should be too unlikely, by many orders of magnitude, to matter in any expected utility calculation).
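
To put rough numbers on the orders-of-magnitude claim (the figures below are invented purely for illustration), even a generous estimate of the chance that quantum noise flips the decision shifts the expected-value comparison by an amount that is negligible next to the stakes:

```python
p_flip = 1e-15        # assumed, made-up probability that QM noise reverses your decision
payoff_gap = 999_000  # roughly the spread between Newcomb outcomes, used only for scale
max_ev_shift = p_flip * payoff_gap
print(f"{max_ev_shift:.1e}")  # about 1.0e-09: negligible next to the stakes of the decision
```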

Comment author: ike 15 December 2017 07:04:37PM 0 points

What part of physics implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions?

Comment author: ike 08 April 2017 03:35:57PM 0 points

An evolved system is complex and dynamic, and can lose its stability. A created system is presumed to be static and always stable, so Christians don't consider LUC to be an issue with respect to the environment.

The distinction here would be that a created system's complexity is designed to be stable even with changes, not that it isn't complex and dynamic.

Comment author: ike 16 March 2017 07:52:03PM 2 points

If you don't know the current time, you obviously can't reason as if you did. If we were in a simulation, we wouldn't know the time in the outside world.

Reasoning of the sort "X people exist in state A at time t, and Y people exist in state B at time t, therefore I have an X:Y odds ratio of being in state A compared to state B" only works if you know you're in time t.

If you carefully explicate what information each person being asked to make a decision has, I'm pretty sure your argument would fall apart. You definitely aren't being explicit enough now about whether the people in your toy scenario know what timeslice they're in.
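
A toy numerical version of that point, with populations invented purely for illustration: the X:Y ratio gives the right odds only to someone who knows which timeslice they occupy; someone who doesn't has to pool people across timeslices.

```python
# Toy populations, invented for illustration.
# Timeslice t1: 10 people in state A, 0 in state B.
# Timeslice t2: 1 person in state A, 9 in state B.
populations = {
    "t1": {"A": 10, "B": 0},
    "t2": {"A": 1, "B": 9},
}

def odds_given_time(t):
    """A:B odds for someone who knows they are in timeslice t."""
    return populations[t]["A"], populations[t]["B"]

def odds_time_unknown():
    """A:B odds for someone with no idea which timeslice they are in,
    treating every person across both timeslices as equally likely to be them."""
    a = sum(p["A"] for p in populations.values())
    b = sum(p["B"] for p in populations.values())
    return a, b

print(odds_given_time("t1"))  # (10, 0): certain of A if you know it's t1
print(odds_given_time("t2"))  # (1, 9): strong odds of B if you know it's t2
print(odds_time_unknown())    # (11, 9): different from both of the above
```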

Comment author: ike 16 March 2017 07:53:11PM 0 points

The "directly relevant information" is the information you know, and not any information you don't know.

If you want to construct a bet, do it among all possibly existing people that, as far as they know, could be each other. So any information that one person knows at the time of the bet, everyone else also knows.

If you don't know the time, then the bet is among all similarly situated people who also don't know the time, which may be people in the future.

Comment author: ike 20 February 2017 03:49:32AM 0 points

Has anyone rolled the die more than once? If not, it's hard to see how the market could converge on that outcome unless everybody who's betting saw a 3 (even a single person who saw something different should drive the price downward). Therefore, it depends on how many people saw rolls, and you should update as if you've seen as many 3s as other people have bet.

You should bet on six if your probability is still higher than 10%.

If the prediction market caused others to update previously, then it's more complicated. Probably you should assume it reflects all available information, and therefore that exactly one 3 was seen. Ultimately there's no good answer, because there's Knightian uncertainty in markets.
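
The 10% figure reads like a simple expected-value cutoff. A minimal sketch, on the assumption (not stated in the comment) that 10% is the market price of a contract paying $1 if the roll is a six:

```python
def expected_profit(p_six: float, price: float = 0.10, payout: float = 1.00) -> float:
    """Expected profit per contract on 'six', given the assumed price and payout."""
    return p_six * payout - price

for p in (0.05, 0.10, 1 / 6, 0.30):
    print(round(p, 3), round(expected_profit(p), 4))
# Buying the contract has positive expectation exactly when your probability of a six
# exceeds the 10% price, which is why you would bet on six only if your probability is
# still higher than 10% after updating on what the market already reflects.
```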
