Comment author: JGWeissman 24 September 2009 06:38:38PM 0 points [-]

> For me this is a "no-brainer". Take box B, deposit it, and come back for more.

There is no opportunity to come back for more. Assume that when you take box B before taking box A, box A is removed.

Comment author: RickJS 25 September 2009 03:37:46AM 0 points [-]

Yes, I read about " ... disappears in a puff of smoke." I wasn't coming back for a measly $1K; I was coming back for another million! I'll see if they'll let me play again. Omega already KNOWS I'm greedy, so this won't come as a shock. He'll probably have told his team what to say when I try it.

" ... and come back for more." was meant to be funny.

Anyway, this still doesn't answer my questions about "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars."

Someone please answer my questions! Thanks!

Comment author: eirenicon 22 September 2009 12:57:33AM 0 points [-]

> That's what the physical evidence says.

What the physical evidence says is that the boxes are there, the money is there, and Omega is gone. So what does your choice affect, and when?

Comment author: RickJS 24 September 2009 05:10:02PM 1 point [-]

Well, I mulled that over for a while, and I can't see any way that contributes to answering my questions.

As to " ... what does your choice effect and when?", I suppose there are common causes starting before Omega loaded the boxes, that affect both Omega's choices and mine. For example, the machinery of my brain. No backwards-in-time is required.

Comment author: Eliezer_Yudkowsky 19 August 2009 03:22:16PM 1 point [-]

This is the crippleware version of TDT that pure CDT agents self-modify to. It's crippleware because if you self-modify at 7:00pm you'll two-box against an Omega who saw your code at 6:59pm.

Comment author: RickJS 22 September 2009 12:28:53AM 0 points [-]

In Eliezer's article on Newcomb's problem, he says, "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. " Such evidence from previous players fails to appear in some problem descriptions, including Wikipedia's.

For me this is a "no-brainer". Take box B, deposit it, and come back for more. That's what the physical evidence says. Any philosopher who says "Taking BOTH boxes is the rational action" occurs to me as an absolute fool in the face of the evidence. (But I've never understood non-mathematical philosophy anyway, so I may be a poor judge.)
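
To make that concrete, here is a minimal sketch of the arithmetic: treat the 100 observed plays as evidence that Omega predicts with some accuracy p. The dollar payoffs come from the problem statement; everything else below is an assumption.

    # Expected payoffs in Newcomb's problem when Omega predicts your
    # choice with accuracy p. The $1K / $1M payoffs are from the problem
    # statement; reading the 100 observed plays as evidence of a high p
    # is the assumption being made.
    def expected_value(p, one_box):
        if one_box:
            # Box B holds $1M iff Omega predicted one-boxing.
            return p * 1_000_000
        # Two-boxing always collects box A's $1K, plus $1M in the case
        # where Omega wrongly predicted one-boxing.
        return 1_000 + (1 - p) * 1_000_000

    for p in (0.5, 0.9, 0.99):
        print(p, expected_value(p, True), expected_value(p, False))
    # One-boxing comes out ahead for any p above roughly 0.5005.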

Clarifying (NOT rhetorical) questions:

Have I just cheated, so that "it's not the Newcomb Problem anymore?"

When you fellows say a certain decision theory "two-boxes", do those theory calculations take the previous-play evidence into account or not?

Thanks for your time and attention.

Comment author: RickJS 21 September 2009 12:03:49AM 3 points [-]

LessWrong.com sends the user's password in the clear (as reported by ZoneAlarm Extreme Security 8).

Please consider warning people that this is so.
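
For anyone wanting to spot-check a report like this, here is a rough sketch; the URL and the form-scanning heuristic are illustrative only, and no substitute for capturing the actual traffic.

    # Rough check of whether a page's login form would submit credentials
    # over HTTPS. The URL is hypothetical; a real audit would inspect the
    # traffic itself (e.g., with a packet capture).
    import re
    import urllib.request

    url = "http://lesswrong.com/"  # illustrative page holding the login form
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    for action in re.findall(r'<form[^>]*action="([^"]*)"', html):
        # A relative action inherits the page's scheme, so on an http://
        # page anything not explicitly https:// travels in the clear.
        secure = action.startswith("https://")
        print(action or "(relative)", "-> HTTPS" if secure else "-> cleartext")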

Comment author: Jack 12 September 2009 06:47:46PM -1 points [-]

Maybe some Homo Sapiens would survive; humanity wouldn't. Are the human animals in 1984 "people"? After Winston Smith dies, is there any humanity left?

I can envision a time when less freedom and more authority is necessary for our survival. But a god-like totalitarian pretty much comes out where extinction does in my utility function.

Comment author: RickJS 19 September 2009 11:41:34PM *  1 point [-]

Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:

  • [Totalitarian rule... ] ... [is] ... the best way to destroy humanity (as in cause and effect).
  • OR maybe you meant: wishing ... [is] ... the best way to destroy humanity.

It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function".

Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?

Comment author: Wei_Dai 11 September 2009 07:25:08PM 6 points [-]

> What do you recommend I do about my preachy style?

I suggest trying to determine your true confidence on each statement you write, and use the appropriate language to convey the amount of uncertainty you have about its truth.

If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don't just issue a blanket disclaimer like "All of that is IN MY OPINION."
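
To make "adjust your calibration" concrete, here is a minimal sketch; the logged claims below are invented for illustration.

    # Minimal calibration check: record (stated confidence, turned out true)
    # for your claims, then compare stated confidence with the actual hit
    # rate in each bucket. The sample records are invented.
    from collections import defaultdict

    records = [(0.9, True), (0.9, True), (0.9, False),
               (0.6, True), (0.6, False), (0.6, False)]

    buckets = defaultdict(list)
    for stated, outcome in records:
        buckets[stated].append(outcome)

    for stated in sorted(buckets):
        outcomes = buckets[stated]
        hit_rate = sum(outcomes) / len(outcomes)
        # If you say "90%" but score well below that, soften your language.
        print(f"stated {stated:.0%}: right {hit_rate:.0%} over {len(outcomes)} claims")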

Comment author: RickJS 19 September 2009 11:01:55PM 3 points [-]

OK.

Actually, I'm going to restrict myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share.

Thanks.

Comment author: RickJS 12 September 2009 03:47:36AM *  3 points [-]

HOMEWORK REPORT

With some trepidation! I'm intensely aware I don't know enough.

"Why do I believe I have free will? It's the simplest explanation!" (Nothing in neurobiology is simple. I replace Occam's Razor with a metaphysical growth restriction: Root causes should not be increased without dire necessity).

OK, that was flip. To be more serious:

Considering just one side of the debate, I ask: "What cognitive architecture would give me an experience of uncaused, doing-whatever-I-want, free-as-a-bird Capricious Action that is so strong that I just can't experience (be present to) being a fairly deterministic machine?"

Cutting it down to a bare minimum: I imagine that I have a Decision Module (DM) that receives input from sensory-processing modules and suggested-action modules at its "boundary", so those inputs are distinguishable from the neuron-firings inside the boundary: the ones that make up the DM itself. IMO, there is no way for those internal neuron firings to be presented to the input ports. I guess that there is no provision for the DM to sense anything about its own machinery.

By dubious analogy: a Turing machine looks at its own tape; it doesn't look at the action table that determines its next action, nor can it modify that table.
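
As a toy version of that analogy (the particular machine below, a unary incrementer, is invented for illustration): the action table is a constant that the machine's own tape operations can neither read nor rewrite.

    # A Turing machine whose action table is private: the machine reads
    # and writes its tape, but nothing on the tape exposes or alters the
    # table -- loosely like a decision module that can't inspect its own
    # machinery.
    ACTION_TABLE = {  # (state, symbol) -> (write, move, next_state)
        ("scan", "1"): ("1", +1, "scan"),
        ("scan", "_"): ("1", +1, "halt"),
    }

    def run(tape, state="scan", head=0):
        while state != "halt":
            symbol = tape.get(head, "_")
            write, move, state = ACTION_TABLE[(state, symbol)]
            tape[head] = write
            head += move
        return tape

    print(run({0: "1", 1: "1"}))  # unary 2 becomes unary 3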

To a first approximation, no matter what notion of cause and effect I get, I just can't see any cause for my own decisions. Even if somebody asks, "Why did you stay and fight?", I'm just stuck with "It seemed like a good idea at the time!"

And these days, it seems to me that culture, the environment a child grows up within, is just full of the accouterments of free will: make the right choice, reward & punishment, shame, blame, accountability, "Why did you write on the wall? How could you be so STUPID!!?!!", "God won't tempt you beyond your ability to resist." etc.

Being a machine, I'm not well equipped to overcome all that on the strength of mere evidence and reason.

Now I'll start reading The Solution, and see if I was in the right ball park, or even the right continent.

Thanks for listening.

Comment author: Ron_Hardin 10 March 2008 01:14:45AM 0 points [-]

A=A is not a tautology.

Usually the first A is taken broadly and the second A narrowly.

The second, as they say, carries a pregnancy.

Comment author: RickJS 12 September 2009 02:36:37AM *  0 points [-]

META: thread parser failed?

It sounds like these posts should have been a sub-thread instead of all being attached to the original article:

09 March 2008 11:05:11PM
09 March 2008 11:33:14PM
10 March 2008 01:14:45AM

Also, see the mitchell porter2 - Z. M. Davis - Frank Hirsch - James Blair - Unknown discussion below.

Comment author: Vladimir_Nesov 11 September 2009 08:34:32AM *  2 points [-]

This only makes it worse, because you can't excuse a signal. (See rationalization; signals are shallow.)

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

Comment author: RickJS 11 September 2009 07:11:08PM 1 point [-]

Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:

> This only makes it worse, because you can't excuse a signal.

This only makes what worse? Does it make me sound more fanatical?

Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?

> Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

OK, I'll start with a prior of 10% that I am fanatical and/or caught in an affective death spiral.
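
Taking that 10% prior literally, the update would run as below; both likelihoods are invented purely to show the arithmetic.

    # Bayes' rule on a 10% prior of being fanatical, updating on the
    # observation "my writing reads as preachy". Both likelihoods are
    # assumed for illustration.
    prior = 0.10
    p_preachy_if_fanatical = 0.9  # assumed
    p_preachy_if_not = 0.3        # assumed

    posterior = (p_preachy_if_fanatical * prior) / (
        p_preachy_if_fanatical * prior + p_preachy_if_not * (1 - prior))
    print(f"posterior = {posterior:.0%}")  # 25%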

What do you recommend I do about my preachy style?

I appreciate your writings on LessWrong. I'm learning a lot.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment author: Jack 09 September 2009 05:54:25PM *  4 points [-]

I can't help but think that those activities aren't going to do much to save humanity. I don't want to send you into an existential crisis or anything but maybe you should tune down your job description. "Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman. It might be affably egotistical for someone who does preventive counter-terrorism re: experimental bioweapons. "Saving Humanity from Homo Sapiens one academic conference at a time" doesn't really do it for me.

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

Comment author: RickJS 11 September 2009 06:32:45PM *  -1 points [-]

Jack wrote on 09 September 2009 05:54:25PM:

> Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.

Please explain to me how the destruction follows from the rule of a god-like totalitarian.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)
