Thanks, Eliezer!
This one was actually news to me. Purchasing fuzzies and utilons separately is more efficient, eh? Hmmm... now I get to rethink my actions.
I had deliberately terminated my donations to charities that seemed closer to "rescuing lost puppies". I had also given up personal volunteering. (I figured out {work - earn - donate} before I heard it here.) And now I'm really struggling with akrasia / procrastination / laziness / rebellion / escapism.
"You could, of course, reply that you don't trust selfish acts that are supposed to be other-benefiting as an "...
Thanks, Matt!
That's a nice educational post.
I want to pick a nit, not with you, but with Gigerenzer and " ... the conjunction fallacy can be mitigated by changing the wording of the question ... " Unfortunately, in real life, the problems come at you the way they come, and you need to learn to deal with them.
I say that rational thinking looks like this: pencil applied to paper. Or a spreadsheet or other decision support program in use. We can't do this stuff in our heads. At least I can't. Evolution didn't deliver arithmetic, much less rationali...
Thanks, Eliezer!
That's good stuff. I really relate to " ... the poisonous meme saying that someone who gives mere money must not care enough to get personally involved." That one runs on automatic in my head. It's just one of many ways my brain lies to me.
“Every time I spend money I feel like I'm losing hit points.” Now, I don’t know your personal situation, but I can certainly relate. My mother is a child of the Great Depression and lived her life out of a fear of poverty. She taught me to worship Bargain and Sale and to abhor “unnecessar...
Thanks, Eliezer!
As one of your supporters, I have sometimes been concerned that you are doing blog posts instead of working out the Friendly AI theory. Much more concerned than I show. I do try to hold it down to an occasional straight question, and hold myself back from telling you what to do. I give the hypothesis that I know better than you -50 dB at best.
This post is yet another glimpse into the Grand Strategy behind the strategy, and helps me dispel the fear from my less-than-rational mind.
I find it unsettling that " ... after years of bogging dow...
“If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes? ”
I don’t. Well, not Believe with a capital B. A few of them I will give 40 or even 60 decibels.
But I’m clear that my brain lies to me. Even my visual processor lies. (Have you ever been looking for your keys, looked right at them, and gone on looking?)
I hold my beliefs loosely. I’m coachable. Maybe even gullible. You can get me to believe some untruth, but I’ll let go of that easily when evidence appears.
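(In case the decibel talk is unfamiliar: I'm using Jaynes's convention, where evidence in decibels is 10 times the log10 of the odds. Here is a throwaway script of my own, purely illustrative, converting the figures I keep citing:)

```python
def db_to_probability(db: float) -> float:
    """Convert decibels of evidence (10 * log10 of the odds) to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# The figures mentioned above: -50 dB for my "I know better" hypothesis,
# and the 40 or 60 dB I give my better-supported beliefs.
for db in (-50, 40, 60):
    print(f"{db:+d} dB -> odds 10^{db / 10:g} : 1 -> p = {db_to_probability(db):.6f}")
```

So -50 dB is odds of one in a hundred thousand, and 60 dB is still a millionth shy of certainty.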
Thanks, Eliezer!
“Are there motives for seeking truth besides curiosity and pragmatism?”
I can think of several that have shown up in my life. I’m offering these for consideration, not claiming they are good or bad, pure or impure, etc. Some will doubtless overlap somewhat with each other and with the two already stated.
My reason for writing this is not to correct Eliezer. Rather, I want to expand on his distinction between prior information and prior probability. Pages 87-89 of Probability Theory: the Logic of Science by E. T. Jaynes (2004 reprint with corrections, ISBN 0 521 59271 2) are dense with important definitions and principles. The quotes below are from there, unless otherwise indicated.
Jaynes writes the fundamental law of inference as
P(H|DX) = P(H|X) P(D|HX) / P(D|X) (4.3)
which the reader may be more used to seeing as
P(H|D) = P(H) P(D|H) / P(D)
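To make the role of the prior information X concrete, here is a tiny worked example. The numbers are my own and purely illustrative: a test with an assumed 90% hit rate and a 5% false-positive rate, applied to a hypothesis with 1% prior probability given X.

```python
# Bayes' theorem in Jaynes's form (4.3): P(H|DX) = P(H|X) * P(D|HX) / P(D|X)
# X = prior information (silently present in every term), H = hypothesis, D = data.

p_H_given_X = 0.01        # P(H|X): prior probability of H, given background X
p_D_given_HX = 0.90       # P(D|HX): probability of the data if H is true
p_D_given_notHX = 0.05    # P(D|~HX): probability of the data if H is false

# P(D|X) by the law of total probability
p_D_given_X = p_D_given_HX * p_H_given_X + p_D_given_notHX * (1 - p_H_given_X)

# Posterior P(H|DX)
p_H_given_DX = p_H_given_X * p_D_given_HX / p_D_given_X
print(f"P(H|DX) = {p_H_given_DX:.4f}")  # 0.1538
```

Note that X never drops out: change the background information and P(H|X) changes with it. The short form P(H|D) merely suppresses X from the notation; it does not eliminate it.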
... E.T. Jaynes, Probability Theory: The Logic of Science
and make sure you get the "unofficial errata"
personality tests
Another test set is Gallup / Clifton StrengthsFinder 2.0 (http://www.strengthsfinder.com/113647/Homepage.aspx).
For me, the results were far more useful than the various "personality profiles" I have taken, sometimes at considerable cost to my employer.
"The CSF is an online measure of personal talent that identifies areas where an individual’s greatest potential for building strengths exists. ... The primary application of the CSF is as an evaluation that initiates a strengths-based development process in work and academ...
Yes, I read about " ... disappears in a puff of smoke." I wasn't coming back for a measly $1K, I was coming back for another million! I'll see if they'll let me play again. Omega already KNOWS I'm greedy, this won't come as a shock. He'll probably have told his team what to say when I try it.
" ... and come back for more." was meant to be funny.
Anyway, this still doesn't answer my questions about "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars."
Someone please answer my questions! Thanks!
Well, I mulled that over for a while, and I can't see any way that contributes to answering my questions.
As to " ... what does your choice effect and when?", I suppose there are common causes starting before Omega loaded the boxes, that affect both Omega's choices and mine. For example, the machinery of my brain. No backwards-in-time is required.
In Eliezer's article on Newcomb's problem, he says, "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. " Such evidence from previous players fails to appear in some problem descriptions, including Wikipedia's.
For me this is a "no-brainer". Take box B, deposit it, and come back for more. That's what the physical evidence says. Any philosopher who says "Tak...
Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:
It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function".
Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?
HOMEWORK REPORT
With some trepidation! I'm intensely aware I don't know enough.
"Why do I believe I have free will? It's the simplest explanation!" (Nothing in neurobiology is simple. I replace Occam's Razor with a metaphysical growth restriction: Root causes should not be increased without dire necessity).
OK, that was flip. To be more serious:
Considering just one side of the debate, I ask: "What cognitive architecture would give me an experience of uncaused, doing-whatever-I-want, free-as-a-bird Capricious Action that is so strong that...
META: thread parser failed?
It sounds like these posts should have been a sub-thread, instead of all being attached to the original article:
09 March 2008 11:05:11PM
09 March 2008 11:33:14PM
10 March 2008 01:14:45AM
Also, see the mitchell porter2 - Z. M. Davis - Frank Hirsch - James Blair - Unknown discussion below.
Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:
This only makes it worse, because you can't excuse a signal.
This only makes what worse? Does it make me sound more fanatical?
Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?
Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.
OK I'll start with a prior = 10% that I am f...
What do you recommend I do about my preachy style?
I suggest trying to determine your true confidence in each statement you write, and using appropriate language to convey the amount of uncertainty you have about its truth.
If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don't just issue a blanket disclaimer like "All of that is IN MY OPINION."
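(One concrete way to act on that advice is to keep a tally. A toy sketch of my own, with made-up records, checking whether stated confidence matches observed frequency:)

```python
# Bucket claims by the confidence you stated, then compare the stated
# confidence to the observed frequency of being right in each bucket.

from collections import defaultdict

# (confidence you stated, whether the claim turned out true): toy data
records = [(0.9, True), (0.9, True), (0.9, False), (0.6, True),
           (0.6, False), (0.6, False), (0.99, True), (0.99, True)]

buckets = defaultdict(list)
for stated, outcome in records:
    buckets[stated].append(outcome)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> observed {observed:.0%} over {len(outcomes)} claims")
```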
Jack wrote on 09 September 2009 05:54:25PM:
Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.
I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.
Please explain to me how the destruction follows from the rule of a god-like totalitarian.
Thank you for your time and attention.
With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)
Jack wrote on 09 September 2009 05:54:25PM:
I can't help but think that those activities aren't going to do much to save humanity.
I hear that. I wasn't clear. I apologise.
I DON'T KNOW what I can do to turn humanity's course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference.
"Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman.
... but not acceptable from a mere man who cares, eh?
(Oh, all ...
I've been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying "In my opinion" in front of each one. I do have a standard boilerplate that I mean to put at the beginning of each missive:
First, please read this caveat: Please do not accept anything I say as True.
Ever.
I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say,...
Ah. Thanks! I think I get that.
But maybe I just think I do. I thought I understood that narrow part of Wei Dai's post on a problem that maybe defeats TDT. I had no idea that compassion had already been considered and compensated for, removing it from consideration. And that this is such common shared knowledge here in the LessWrong community that it need not be mentioned.
I have a lot to learn. I now see I was very arrogant to think I could contribute here. I should read the archives & wiki before I post. I apologize.
<<Begins to compute an estimated time to de-lurk. They collectively write several times faster than I can read, even if I don't slow down to mull it over. Hmmm... >>
Inorite? What is that?
I suspect I'm not smart enough to play on this site. I'm quite unsure I can even parse your sentence correctly, and I can't imagine a reason to adjust the external payoff matrices (they were given by Wei Dai; that is the original problem I'm discussing) so the internal payoff matrices match something. I'm baffled.
Mostly, I study. I also go to a few conferences (I'll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them; try agreeing for at least a while):
BRAVO, Eliezer! Huzzah! It's about time!
I don't know if you have succeeded in becoming a full rationalist, but I know I haven't! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I'm built on evolved wet ware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddy seat with the toy steering wheel.
I haven't been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. ...
Wei_Dai wrote on 19 August 2009 07:08:23AM:
... Omega's AIs will reason as follows: "I have 1/2 chance of playing against a TDT, and 1/2 chance of playing against a CDT. If I play C, then my opponent will play C if it's a TDT, and D if it's a CDT ...
That seems to violate the secrecy assumptions of the Prisoner's Dilemma! I thought each prisoner had to commit to his action before learning what the other one did. What am I missing?
Thanks!
About that report link (http://lesswrong.com/ ???): It doesn't say what it's going to do, what it is for (hate speech, strong language, advocating the overthrow, trolling, disagreeing with me...), nor does it give me a chance to explain.
Consider (think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".)
Consider that you are using "we" and "self" as a pointer that jumps from one set to another moment by moment. Here is a list of some sets that may be confounded together here; see how many others you can think of:
These United States (see the Constitution)
the people residing in that set
citizens who vote
citizens with a peculiar ...
That's a terrible focus on punishment. Read "Don't Shoot the Dog" by Karen Pryor and learn about behavior shaping through positive rewards.