Ganapati comments on Diseased thinking: dissolving questions about disease - Less Wrong

236 Post author: Yvain 30 May 2010 09:16PM


Comment author: cousin_it 07 June 2010 08:27:43AM 2 points

Yep, your view is confused.

So where does choice enter the equation, including the optimising function for the choice and the consequences?

The optimizing function is implemented in your biology, which is implemented in physics.

Comment author: Ganapati 08 June 2010 07:48:05AM 0 points

In other words, the 'choices' you make are not really choices but already predetermined. You didn't really choose to be a determinist; you were programmed to select it once you encountered it.

Comment author: cousin_it 08 June 2010 11:53:42AM 2 points

Yep, kind of. But your view of determinism is too depressing :-)

My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one. Like a program that receives an array as input and finds the maximum element in it, the output is "predetermined", but it's still useful. Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.
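The array analogy above can be made concrete with a minimal sketch (the function name `pick_max` is illustrative, not from the thread):

```python
def pick_max(options):
    """Deterministically return the largest element of a non-empty list.

    The output is fully determined by the input, yet the function is
    still useful: it reliably selects the best option by its criterion.
    """
    best = options[0]
    for x in options[1:]:
        if x > best:
            best = x
    return best

# The same input always yields the same "choice" -- predetermined, but valid.
print(pick_max([3, 1, 4, 1, 5]))  # prints 5
```

The point of the analogy: calling the output "predetermined" doesn't make the program's selection wrong or useless, any more than determinism makes a worldview-selecting mind wrong.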

Comment author: Ganapati 09 June 2010 08:54:41AM -2 points

My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one.

You couldn't possibly know that! Someone programmed to pick the mystical worldview would feel exactly the same and would have been programmed not to recognise his/her own programming too :-)

Like a program that receives an array as input and finds the maximum element in it, the output is "predetermined", but it's still useful.

Of course the output is useful, for the programmer, if any :-)

Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.

Regardless of what someone has been programmed to pick, the 'feelings' don't seem to be any different.

Comment author: cousin_it 09 June 2010 09:51:12AM 2 points

If my common sense is invalid and just my imagination, then how in the world do I manage to program computers successfully? That seems to be the most objective test there is, unless you believe all computers are in a conspiracy to deceive humans.

Comment author: Ganapati 13 June 2010 07:53:43AM 0 points

Just to clarify, in a deterministic universe, there are no "invalid" or "wrong" things. Everything just is. Every belief and action is just as valid as any other because that is exactly how each of them has been determined to be.

Comment author: cousin_it 13 June 2010 09:46:35AM 3 points

No, this belief of yours is wrong. A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
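The calculator example can be sketched directly (both functions are illustrative stand-ins, not anyone's actual code):

```python
def add_correct(a, b):
    """A correct deterministic implementation of addition."""
    return a + b

def add_buggy(a, b):
    """An incorrect deterministic implementation: off by one."""
    return a + b + 1

# Both programs are equally deterministic, but only one meets the
# specification of addition -- determinism doesn't erase correctness.
print(add_correct(2, 2))  # prints 4
print(add_buggy(2, 2))    # prints 5
```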

Comment author: Ganapati 13 June 2010 02:26:14PM 0 points

A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.

Sure it can. But it is possible to declare one of them as valid only because you are outside of both and you have a notion of what the result should be.

But to avoid the confusion over the use of words I will restate what I said earlier slightly differently.

In a deterministic universe, neither member of a pair of opposites like valid/invalid, right/wrong, true/false, etc. has more significance than the other. Everything just is. Every belief and action is just as significant as any other because that is exactly how each of them has been determined to be.

Comment author: cousin_it 14 June 2010 01:30:25PM 1 point

I thought about your argument a bit and I think I understand it better now. Let's unpack it.

First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent's belief is correct. So no need to be outside the world to define "correctness".

Another matter is verifying the correctness of beliefs if you're within the world. You seem to argue that a verifier can't trust its own conclusion if it knows itself to be a deterministic program. This is debatable - it depends on how you define "trust" - but let's provisionally accept this. From this you somehow conclude that the world and your mind must be in fact non-deterministic. To me this doesn't follow. Could you explain?

Comment author: Ganapati 12 June 2010 06:24:59AM 0 points

I program computers successfully too :-)

Comment author: Vladimir_Nesov 08 June 2010 03:15:15PM 1 point

the 'choices' you make are not really choices, but already predetermined

The only way that choices can be made is by being predetermined (by your decision-making algorithm). Paraphrasing the familiar wordplay, choices that are not predetermined refer to decisions that cannot be made, while the real choices, that can actually be made, are predetermined.

Comment author: Blueberry 12 June 2010 05:00:52PM 1 point

I like this phrasing; it makes things very clear. Are you alluding to this quote, or something else?

Comment author: Vladimir_Nesov 12 June 2010 05:33:00PM 0 points

Yes.

Comment author: Ganapati 09 June 2010 08:43:51AM 0 points

Of course! Since all the choices of all the actors are predetermined, so is the future. So what exactly would be the "purpose" of acting as if the future were not already determined and we can choose an optimising function based on the possible consequences of different actions?

Comment author: Vladimir_Nesov 09 June 2010 10:49:50AM 4 points

Since the consequences are determined by your algorithm, whatever your algorithm will do, will actually happen. Thus, the algorithm can contemplate what would be the consequences of alternative choices and make the choice it likes most. The consideration of alternatives is part of the decision-making algorithm, which gives it the property of consistently picking goal-optimizing decisions. Only these goal-optimizing decisions actually get made, but the process of considering alternatives is how they get computed.
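The structure described here, a deterministic algorithm that nonetheless considers alternatives, can be sketched as a toy model (all names and the scoring function are hypothetical, chosen purely for illustration):

```python
def decide(options, predict_outcome, utility):
    """Deterministic decision-making: simulate each alternative,
    score the predicted consequence, and return the best option.

    The consideration of alternatives is part of the algorithm;
    only the highest-scoring choice is ever actually made.
    """
    return max(options, key=lambda o: utility(predict_outcome(o)))

# Hypothetical toy model: choosing how many hours to study.
outcome = lambda hours: min(100, 50 + 10 * hours)  # predicted exam score
choice = decide([0, 1, 2, 3], outcome, utility=lambda score: score)
print(choice)  # prints 3: the goal-optimizing option
```

The output is fully determined by the inputs, yet the loop over alternatives is exactly how that determined output gets computed, which is the point of the comment above.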

Comment author: Ganapati 12 June 2010 06:14:21AM -1 points

Sure. So consequentialism is the name for the process that happens in every programmed entity, making it useless to distinguish between two different approaches.

Comment author: RobinZ 09 June 2010 12:15:22PM 2 points

In a deterministic universe, the future is logically implied by the present - but you're in the present. The future isn't fated: if, counterfactually, you did something else, then the laws of physics would imply very different events as a consequence. And it isn't predictable: even ignoring computational limits, if you make any error, however unmeasurably small, in guessing the current state, your prediction will quickly diverge from reality. The future is just logically consistent with the present.
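The claim that an unmeasurable error in the current state quickly destroys prediction can be illustrated with the logistic map, a standard deterministic chaotic system (a sketch, not part of the original comment):

```python
r = 4.0                    # fully chaotic regime of the logistic map
x, y = 0.4, 0.4 + 1e-12    # identical states up to an unmeasurable error

# Evolve both trajectories under the same deterministic law.
max_gap = 0.0
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

# The initial error of 1e-12 roughly doubles each step, so within a few
# dozen steps the two "predictions" differ by order 1.
print(max_gap)
```

Both trajectories obey identical deterministic laws; determinism alone does not buy predictability from inside the system.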

Comment author: Ganapati 12 June 2010 05:52:00AM 0 points

if, counterfactually, you did something else, ...

How could it happen? Each component of the system is programmed to react in a predetermined way to the inputs it receives from the rest of the system. The inputs are predetermined, as is the processing algorithm. How can you or I do anything that we have not been preprogrammed to do?

Consider an isolated system with no biological agents involved. It may contain preprogrammed computers. Would you or would you not expect the future evolution of the system to be completely determined? If you would expect its future to be completely determined, why would things change when the system, such as ours, contains biological agents? If you do not expect the future of the system to be completely determined, why not?

Comment author: RobinZ 12 June 2010 01:49:53PM 1 point

I said "counterfactual". Let me use an archetypal example of a free-will hypothetical and query your response:

Suppose that there are two worlds, A and A', which are at a certain time indistinguishable in every measurable way. They differ, however, and differ most strongly in the nature of a particular person, Alice, who lives in A versus the nature of her analogue in A', whom we shall call Alice' for convenience.

In the two worlds at the time at which A and A' are indistinguishable, Alice and Alice' are entering a restaurant. They are greeted by a server, seated, and given menus, and the attention of both Alice and Alice' rapidly settles upon two items: the fettucini alfredo and the eggplant parmesan. As it happens, the previously-indistinguishable differences between Alice and Alice' are such that Alice orders fettucini alfredo and Alice' orders eggplant parmesan.

What dishes will Alice and Alice' receive?

I'm off to the market, now - I'll post the followup in a moment.

Comment author: RobinZ 12 June 2010 03:13:06PM 0 points

Now: I imagine most people would say that Alice would receive the fettucini and Alice' the eggplant. I will proceed on this assumption.

Now suppose that Alice and Alice' are switched at the moment they entered the restaurant. Neither Alice nor Alice' notice any change. Nobody else notices any change, either. In fact, insofar as anyone in universe A (now containing Alice') and universe A' (now containing Alice) can tell, nothing has happened.

After the switch, Alice' and Alice are seated, open their menus, and pick their orders. What dishes will Alice' and Alice receive?

Comment author: Blueberry 12 June 2010 04:47:42PM 3 points

I'm missing the point of this hypothetical. The situation you described is impossible in a deterministic universe. Since we're assuming A and A' are identical at the beginning, what Alice and Alice' order is determined from that initial state. The divergence has already occurred once the two Alices order different things: why does it matter what the waiter brings them?

I'm not sure exactly how these universes would work: it seems to be a dualistic one. Before the Alices order, A and A' are physically identical, but the Alices have different "souls" that can somehow magically change the physical makeup of the universe in strangely predictable ways. The different nature of Alice and Alice' has changed the way two identical sets of atoms move around.

If this applies to the waiter as well, we can't predict what he'll decide to bring Alice: for all we know he may turn into a leopard, because that's his nature.

Comment author: RobinZ 12 June 2010 05:02:30PM 0 points

The requirement is not that there is no divergence, but that the divergence is small enough that no-one could notice the difference. Sure, if a superintelligent AI did a molecular-level scan five minutes before the hypothetical started it would be able to tell that there was a switch, but no such being was there.

And the point of the hypothetical is that the question "what if, counterfactually, Alice ordered the eggplant?" is meaningful - it corresponds to physically switching the molecular formation of Alice with that of Alice' at the appropriate moment.