Comment author: DaveInNYC 21 October 2009 06:00:09PM *  6 points [-]

Kasparov competed against Deep Blue to steer the chessboard into a region where he won - knights and bishops were only his pawns

Were you trying to mix the literal and metaphorical here? Because I think that just his pawns were his pawns :)

In response to Timeless Control
Comment author: DaveInNYC 07 June 2008 04:52:21PM 0 points [-]

All the ideas expressed in this post, as well as the "timeless physics" one, seem amazingly obvious to me, and have for all of my adult life, and compared to a lot of OB posters, I am not that bright. Since I normally find many of Eliezer's posts extremely counterintuitive and/or hard to grasp, I've got to ask the question: am I missing something here? Is Eliezer saying something so mind-bogglingly out of this world that I do not even realize he is saying it?

In response to Thou Art Physics
Comment author: DaveInNYC 06 June 2008 06:27:54PM 3 points [-]

I know this is just reiterating what Caledonian and Ben Jones said, but to have a meaningful discussion on this subject you have to taboo "free will" and come up with a specific description of what you are trying to figure out. The most basic concept of free will is "being able to do what you desire to do," and that is not affected one whit by determinism, or MWI, or God knowing what you are going to do in advance, etc. I know there are a lot of other more sophisticated-sounding discussions regarding this ("ah, but can you choose to desire something else", etc), but I have yet to hear of a meaningful definition of "free will" that is affected at all by such things as MWI.

BTW, it drives me nuts when people say "well if we do not have free will, why punish criminals?" (or "we pretend free will exists so that we can punish criminals", etc). We punish criminals so that fewer crimes happen. Whether you think those criminals have "free will" has nothing to do with the results we get by punishing them.

In response to Timeless Identity
Comment author: DaveInNYC 03 June 2008 05:59:53PM 1 point [-]

I have been seriously considering cryonics; if the MWI is correct, I figure that even if there is a vanishingly small chance of it working, "I" will still wake up in one of the worlds where it does work. Then again, even if I do not sign up, there are plenty of worlds out there where I do. So signing up is less an attempt to live forever than an attempt to line up my current existence with the memory of the person who is revived, if that makes any sense. To put it another way, if there is a world where I procrastinate signing up until right before I die, the person who is revived will have 99.9% of the same memories as someone who did not sign up at all, so if I don't end up signing up I do not lose much.

FWIW, I sent an email to Alcor a while ago that was never responded to, which makes me wonder if they have their act together enough to preserve me for the long haul.

On a related note, is there much agreement on what is "possible" as far as MWI goes? For example, in a classical universe, if I know the position/momentum of every particle, I can predict the outcome of a coin flip with 1.0 probability. If we throw quantum events into the mix, how much does this change? I figure the answer should be somewhere between (0.5 + tiny number) and (1.0 - tiny number).

Comment author: DaveInNYC 30 May 2008 11:44:45PM 0 points [-]

If the reason for keeping it private is that he plans to do the trick with more people (and it doesn't work if you know the method in advance), then it makes sense. But otherwise, I don't see much of a difference between somebody thinking "there is no argument that would convince me to let him out" and "argument X would not convince me to let him out". In fact, the latter is more plausible anyway.

In any event, I am the type of guy who always tries to find out how a magic trick is done and then is always disappointed when he finds out. So I'm probably better off not knowing :)

Comment author: DaveInNYC 30 May 2008 08:54:50PM 0 points [-]

I always thought that the justification for not revealing the transcripts in the AI box experiment was pretty weak. As it is, I can claim that whatever method Eliezer used must have been effective for people more simple-minded than me; ignorance of the specifics of the method does not make it harder to make that claim. In fact, it makes it easier, as I can imagine Eli just said "pretty please" or whatever. In any event, the important point of the AI box exercise is that *someone* reasonably competent could be convinced to let the AI out, even if *I* couldn't be convinced.

One thing I would have liked to know is whether the subjects had a different opinion about the problem once they let the AI out. One would assume they did, but since all they said was "I let Eliezer out of the box", it is somewhat hard to tell.

Comment author: DaveInNYC 16 May 2008 09:50:02PM 2 points [-]

"ME" - I've noticed that people on this forum seem to label ANYTHING that has to do with conditional probability "Bayesian". I'm not quite sure why this is; I have a hard enough time figuring out the real difference between a "frequentist" and a "Bayesian", but reading some of these posts I get the feeling that "Bayesian" around here means "someone who knows basic logic".
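For what it's worth, the "Bayesian" label usually points at one specific piece of conditional probability, Bayes' theorem: P(H|E) = P(E|H)·P(H) / P(E). A minimal sketch in Python, using hypothetical numbers (a 99%-sensitive test with a 5% false-positive rate for a condition with a 1% base rate, chosen here purely for illustration):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
def posterior(p_e_given_h, p_h, p_e_given_not_h):
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# A positive result from a 99%-sensitive test with a 5% false-positive
# rate, when the base rate is only 1%:
p = posterior(0.99, 0.01, 0.05)
print(round(p, 3))  # about 0.167 - the base rate dominates
```

The point of the example is that "knowing basic logic" and "updating on the base rate" can come apart, which is arguably what the label is gesturing at.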

Comment author: DaveInNYC 15 May 2008 05:49:07PM 0 points [-]

While we are (sort of) on the topic of cryonics, who here is signed up for it? For those that are, what organization are you with, and are you going with the full-body plan, or just the brain? I'm considering Alcor's neuropreservation process.

Comment author: DaveInNYC 14 May 2008 07:57:17PM 2 points [-]

Caledonian - not sure if this is what was originally alluded to, but the Prisoner's Dilemma / Tragedy of the Commons scenario is one where agents acting in their own best interest get screwed. Of course, that is why we have governments in the first place (i.e. to get around those problems).

M - How do you figure Somalia is libertarian? Libertarianism requires a stable government (i.e. a monopoly on force) which Somalia definitely does not have.

H.A. - I don't think the point was that Libertarians are more scientific than others, but that Libertarianism and Science are similar in the sense that they put more faith in processes than in people.

Comment author: DaveInNYC 13 May 2008 02:46:03PM 4 points [-]

Eli - As you said in an earlier post, it is not the testability part of MWI that poses a problem for most people with a scientific viewpoint, it is the fact that MWI came after Collapse. So the core part of the scientific method - testability/falsifiability - gives no more weight to Collapse than to MWI.

As to the "Bayesian vs. Science" question (which is really a "Metaphysics vs. Science" question), I'll go with Science every time. The scientific method has trounced logical argument time and time again.

Even if there turn out to be cases where the "logical" answer to a problem is correct, who cares if it does not make any predictions? If it is not testable, then it also follows that you can't do anything useful with it, like cure cancer, or make better heroin.
