cousin_it comments on Diseased thinking: dissolving questions about disease - Less Wrong

236 Post author: Yvain 30 May 2010 09:16PM


Comment author: Ganapati 08 June 2010 07:48:05AM 0 points [-]

In other words, the 'choices' you make are not really choices but are already predetermined. You didn't really choose to be a determinist; you were programmed to select it once you encountered it.

Comment author: cousin_it 08 June 2010 11:53:42AM *  2 points [-]

Yep, kind of. But your view of determinism is too depressing :-)

My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one. Like a program that receives an array as input and finds its maximum element: the output is "predetermined", but it's still useful. Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.
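[Editor's note: the analogy above can be made concrete with a short sketch of my own, not from the original comment. The function is fully deterministic, and its output is entirely fixed by its input, yet it still does useful work.]

```python
# A deterministic "chooser": given any input array, it always
# selects the same element, the maximum. The output is fully
# determined by the input, yet the program is still useful.
def find_max(options):
    best = options[0]
    for x in options[1:]:
        if x > best:
            best = x
    return best

print(find_max([3, 1, 4, 1, 5]))  # prints 5
```

The point of the analogy: "predetermined" and "useless" are different properties; the program's usefulness depends on whether it correctly implements the selection rule, not on whether its behavior was fixed in advance.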

Comment author: Ganapati 09 June 2010 08:54:41AM -2 points [-]

My program didn't know in advance what options it would be presented with, but it was programmed to select the option that makes the most sense, e.g. the determinist worldview rather than the mystical one.

You couldn't possibly know that! Someone programmed to pick the mystical worldview would feel exactly the same and would have been programmed not to recognise his/her own programming too :-)

Like a program that receives an array as input and finds the maximum element in it, the output is "predetermined", but it's still useful.

Of course the output is useful, for the programmer, if any :-)

Likewise, the worldview I chose was "predetermined", but that doesn't mean my choice is somehow "wrong" or "invalid", as long as my inner program actually implements valid common sense.

It appears that regardless of what someone has been programmed to pick, the 'feelings' are no different.

Comment author: cousin_it 09 June 2010 09:51:12AM 2 points [-]

If my common sense is invalid and just my imagination, then how in the world do I manage to program computers successfully? That seems to be the most objective test there is, unless you believe all computers are in a conspiracy to deceive humans.

Comment author: Ganapati 13 June 2010 07:53:43AM 0 points [-]

Just to clarify, in a deterministic universe, there are no "invalid" or "wrong" things. Everything just is. Every belief and action is just as valid as any other because that is exactly how each of them has been determined to be.

Comment author: cousin_it 13 June 2010 09:46:35AM *  3 points [-]

No, this belief of yours is wrong. A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.
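[Editor's note: the calculator example can be spelled out in a few lines; this sketch is my own illustration, not from the thread. Both functions below are equally deterministic, and both run without error, yet one is correct and the other is not.]

```python
# Two deterministic "calculators". Determinism alone does not
# make them equally valid: one implements addition correctly,
# the other contains an off-by-one bug.
def correct_add(a, b):
    return a + b

def buggy_add(a, b):
    return a + b + 1  # bug: returns 2+2=5

print(correct_add(2, 2))  # prints 4
print(buggy_add(2, 2))    # prints 5
```

The correctness of each implementation is judged against the specification of addition, which is available to agents inside the deterministic universe as much as to anyone outside it.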

Comment author: Ganapati 13 June 2010 02:26:14PM *  0 points [-]

A deterministic universe can contain a correct implementation of a calculator that returns 2+2=4 or an incorrect one that returns 2+2=5.

Sure it can. But it is possible to declare one of them valid only because you are outside of both and have a notion of what the result should be.

But to avoid the confusion over the use of words I will restate what I said earlier slightly differently.

In a deterministic universe, neither member of a pair of opposites like valid/invalid, right/wrong, true/false etc. has more significance than the other. Everything just is. Every belief and action is just as significant as any other, because that is exactly how each of them has been determined to be.

Comment author: cousin_it 14 June 2010 01:30:25PM *  1 point [-]

I thought about your argument a bit and I think I understand it better now. Let's unpack it.

First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent's belief is correct. So no need to be outside the world to define "correctness".

Another matter is verifying the correctness of beliefs if you're within the world. You seem to argue that a verifier can't trust its own conclusion if it knows itself to be a deterministic program. This is debatable - it depends on how you define "trust" - but let's provisionally accept this. From this you somehow conclude that the world and your mind must be in fact non-deterministic. To me this doesn't follow. Could you explain?

Comment author: Ganapati 12 June 2010 06:24:59AM 0 points [-]

I program computers successfully too :-)