entirelyuseless comments on Rationality Quotes Thread September 2015 - Less Wrong

3 Post author: elharo 02 September 2015 09:25AM


Comment author: CCC 07 October 2015 11:51:22AM 0 points [-]

I did think I had a good argument for free will (given the existence of God), but TheAncientGeek has punctured that. (I had a second argument as well, but I'm waiting to see whether TheAncientGeek has any comment on that one).

Aside from that, all I've really got is that:

(a) What I do feels like free will; that may be an illusion. (b) What other people do is consistent enough to suggest that their actions are being guided by individual, similarly free-willed minds.

...both of which are fairly weak evidence, if anything.

Comment author: entirelyuseless 07 October 2015 03:09:58PM *  0 points [-]

It's clear that if you put someone in very similar situations and ask them to make a choice, over time they will converge to making a certain choice a certain percentage of the time. That could easily be the same percentage of the time that would be predicted by deterministic physics plus e.g. quantum uncertainty, so I don't see any reason in principle why your account of free will could not be consistent with everything happening according to the laws of physics, if there is randomness in the laws of physics.
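The convergence claim here can be illustrated with a toy simulation (the 70% figure, the two-option setup, and all names are illustrative assumptions, not anything from the thread): an agent whose choices follow a fixed underlying probability, as "deterministic physics plus quantum uncertainty" would predict, shows a choice frequency that settles toward that probability over repeated trials.

```python
import random

def simulate_choices(p_choose_a, n_trials, seed=0):
    """Simulate an agent who picks option A with a fixed probability
    p_choose_a on each trial; return the running frequency of A."""
    rng = random.Random(seed)
    count_a = 0
    freqs = []
    for t in range(1, n_trials + 1):
        if rng.random() < p_choose_a:
            count_a += 1
        freqs.append(count_a / t)
    return freqs

freqs = simulate_choices(p_choose_a=0.7, n_trials=100_000)
# By the law of large numbers, the running frequency drifts toward 0.7.
```

This only shows that *if* choices behave like draws from a fixed distribution, the percentages converge; whether real people behave that way is exactly what is disputed below.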

As for the feeling, if a deterministic chess computer had feelings, it would have to have the feeling that it could make any move it wanted, because if it didn't feel that way, it couldn't consider all the possibilities, and it can't decide on a move without considering all the possibilities. This doesn't prevent chess computers from being deterministic, so it might not prevent you from having a feeling like that, even if your actions are in fact deterministic.
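The chess-computer point can be sketched in a few lines (the evaluation function is a placeholder of my own, not a real engine): the selector genuinely considers every legal move, yet its output is a pure function of the position, so identical inputs always produce the identical move.

```python
def choose_move(position, legal_moves, evaluate):
    """Deterministically pick the move with the highest evaluated
    winning odds. Every legal move is considered, yet the result is a
    pure function of the inputs."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves:          # "it could make any move it wanted"...
        score = evaluate(position, move)
        if score > best_score:        # ...but the argmax is forced
            best_move, best_score = move, score
    return best_move

# Toy evaluation: prefer moves later in the list.
moves = ["e4", "d4", "Nf3"]
pick = choose_move("start", moves, lambda pos, m: moves.index(m))
# Rerunning with the same inputs always yields the same pick.
```

Considering all possibilities and being determined are compatible: the loop over `legal_moves` is precisely what makes the deterministic argmax possible.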

Comment author: Lumifer 07 October 2015 03:53:00PM 1 point [-]

It's clear that if you put someone in very similar situations and ask them to make a choice, over time they will converge to making a certain choice a certain percentage of the time.

No, it's not clear at all. If you ask me to make choices in similar situations, first I might humor you, then I'll get bored and start fucking around with the system, and then I'll get really bored and stop cooperating with you. There won't be much of a convergence over time.

The abstraction is not the territory.

Comment author: JDR 07 October 2015 04:56:54PM 1 point [-]

then I'll get bored and start fucking around with the system, and then I'll get really bored and stop cooperating with you. There won't be much of a convergence over time.

The problem with that model seems to be that as time goes on, the situation you are put in becomes increasingly dissimilar to the original one, just because we've added memories of having had to make this choice x number of times before. If we could run the experiment so that you always felt like it was the first time you were in this situation - perhaps by putting the same kind of decision in different contexts, spreading the trials out over time, and adding various distractions - do you think you'd still deviate in the same way?

I know I'm going back from territory to less practical abstraction here, but I think this kind of difficult-to-collect data would be more revealing for this question.

Comment author: Lumifer 07 October 2015 05:10:47PM *  1 point [-]

If we could run the experiment so that

Most of my point is that you cannot. Among other things, I change over time.

As a practical example, I drink beer. Various kinds. My beer preferences do not converge over time. Instead, they wander over different styles, different hoppiness/maltiness/etc., even different breweries. I have no idea what kind of beer I will like in, say, a year, but it will probably be different from what I like now.

Showing that something works in a toy model does not show that the same thing works in actual reality.

Comment author: JDR 07 October 2015 05:58:09PM 0 points [-]

Sure, I totally agree with you - in real life, we can't really put a person in exactly the same situation twice. If we could, this whole free will argument would be a lot easier to solve.

That said, I do think the toy models are useful. Pretending we can do this experiment gives an answer to the problem I've never managed to pick a hole in (and tbh getting other people's input on it is the hidden motivation for entering this discussion):

If we could let you choose a beer, then rewind the universe - including all particles, forces, and known and unknown elements of cognition anyone might postulate, such as souls and deities - back to its starting position, then let it go again, there are really only two things that could happen: 1) you choose the same beer, because that's what the universe was leading up to, or 2) you choose a different beer, despite the fact that all parameters of the universe, known and unknown, are the same.

The first outcome would suggest determinism; the second randomness, or at least independence from all variables which we consider "self" such as personality, memory and perhaps souls and things, since they were all rewound with the universe. I'd be really interested to hear of any third option anyone can think of!
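The dichotomy above can be made concrete with a toy model (the beer names, the string standing in for the "complete state of the universe", and the use of a hash as a stand-in for physical law are all my own illustrative assumptions): a deterministic choice is a pure function of the full state, so every rewind reproduces it, while a random choice ignores the state, so rewinding constrains nothing.

```python
import hashlib
import random

BEERS = ["stout", "pilsner", "IPA"]

def deterministic_choice(universe_state):
    # Outcome is a pure function of the complete state: every
    # rewind to the same state reproduces the same beer.
    digest = hashlib.sha256(universe_state.encode()).hexdigest()
    return BEERS[int(digest, 16) % len(BEERS)]

def random_choice(universe_state):
    # Outcome is independent of the state: the parameters are
    # identical on every rewind, yet the beer can differ.
    return random.choice(BEERS)

state = "all particles, forces, memories, souls..."
replays_det = {deterministic_choice(state) for _ in range(5)}
replays_rnd = {random_choice(state) for _ in range(50)}
# replays_det contains exactly one beer; replays_rnd almost surely several.
```

Note that option 2 in the sketch is randomness by construction: whatever varies between replays cannot depend on anything that was rewound, which is the "independence from everything we call 'self'" point.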

As you say, showing this in a toy model isn't the same as showing it in actual reality; but when the actual experiment is impossible, one is arguing about abstract concepts anyway, and one has a lot of difficulty imagining outcomes not encompassed in the model, I'm not sure we can do much better.

Comment author: Lumifer 07 October 2015 06:27:18PM 0 points [-]

Pretending we can do this experiment gives an answer to the problem

Within the toy model, yes. In actual reality, you still don't know.

I'd be really interested to hear of any third option anyone can think of!

The trivial third option is to drink wine :-P

On a bit more serious note, if you set up the problem so that the outcomes are X and not-X, there could be no third option.

Comment author: entirelyuseless 07 October 2015 06:34:40PM 0 points [-]

I suspect that if we take the average of e.g. the bitterness of the beers that you have been drinking, it has already converged to an average, and future developments will probably not change that average much, even if there are some years when you drink sweet beers and some years when you drink bitter beers.

Comment author: Lumifer 07 October 2015 06:41:07PM 0 points [-]

I suspect that if we take the average of e.g. the bitterness of the beers that you have been drinking, it has already converged to an average

Empirically speaking, you are wrong.

Comment author: entirelyuseless 07 October 2015 06:51:43PM 0 points [-]

Perhaps, although I don't see how you can know that unless you have been making measurements, or unless it has definitely been going in the direction of getting more and more sweet, or more and more bitter.

In any case, since beer does not differ an infinite amount in sweetness and bitterness, it won't be easy to stop that average from converging sooner or later.
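The bounded-average intuition can be sketched as follows (the IBU-like 0..100 bitterness scale and the independent draws are my own illustrative assumptions): for wandering preferences modeled as independent draws from a bounded range, the running mean settles down even though individual choices keep jumping around. It should be said that a sequence that drifts systematically - as Lumifer describes - need not converge this way; the sketch only shows the case where the wandering is random.

```python
import random

def running_mean(samples):
    """Return the running mean after each successive sample."""
    total, means = 0.0, []
    for i, x in enumerate(samples, 1):
        total += x
        means.append(total / i)
    return means

rng = random.Random(42)
# Bitterness drawn from a bounded range (an IBU-like 0..100 scale).
bitterness = [rng.uniform(0, 100) for _ in range(50_000)]
means = running_mean(bitterness)
# Individual draws swing across the whole range, but the running
# mean settles near the distribution mean of 50.
```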

Comment author: CCC 08 October 2015 09:23:21AM 0 points [-]

As for the feeling, if a deterministic chess computer had feelings, it would have to have the feeling that it could make any move it wanted, because if it didn't feel that way, it couldn't consider all the possibilities, and it can't decide on a move without considering all the possibilities.

...I'm not seeing this. It can consider all the possibilities even if it knows that it must play the possibility with the highest odds of winning - in fact, knowing that means that it must consider all the possibilities in order to calculate those odds, surely?

Comment author: entirelyuseless 08 October 2015 12:37:12PM 1 point [-]

Even if it knows that it must play the move with the highest odds of winning, as far as it knows when it starts considering, that could be any of the moves.

But yes, that would be knowledge that its move is objectively deterministic. This would not necessarily prevent it from feeling like it could make any move it wanted, just as people who believe themselves subject to deterministic physics still feel like they can do whatever they want.

But the chess computer doesn't have to know what is determining its moves, in which case it will be even more likely to feel that it can make whatever move it wants.

Comment author: CCC 12 October 2015 10:55:26AM 0 points [-]

Well, yes, feeling like it has freedom doesn't really prevent it from not having freedom; but I don't see how the feeling of freedom makes any difference at all. Why shouldn't the chess computer feel constrained?

Comment author: entirelyuseless 12 October 2015 01:29:13PM 1 point [-]

I agree that the feeling doesn't make any difference. That's what I'm saying: whether it feels constrained or not, it may or may not be deterministic. Those are two different things. The same is true for us.