All of RickJS's Comments + Replies

That's a terrible focus on punishment. Read "Don't Shoot the Dog" by Karen Pryor and learn about behavior shaping through positive rewards.

I agree that "think for yourself" is important. That includes updating on the words of the smart thinkers who read a lot of the relevant material. In which category I include Zvi, Eliezer, Nate Soares, Stuart Armstrong, Anders Sandberg, Stuart Russell, Rohin Shah, Paul Christiano, and on and on.

I will say that .PDF format is end-user hostile.

Thanks, Eliezer!

This one was actually news to me. Separately is more efficient, eh? Hmmm... now I get to rethink my actions.

I had deliberately terminated my donations to charities that seemed closer to "rescuing lost puppies". I had also given up personal volunteering (I figured out {work - earn - donate} before I heard it here.) And now I'm really struggling with akrasia / procrastination / laziness / rebellion / escapism.

"You could, of course, reply that you don't trust selfish acts that are supposed to be other-benefiting as an "... (read more)

Thanks, Matt!

That's a nice educational post.

I want to pick a nit, not with you, but with Gigerenzer and " ... the conjunction fallacy can be mitigated by changing the wording of the question ... " Unfortunately, in real life, the problems come at you the way they do, and you need to learn to deal with them.

I say that rational thinking looks like this: pencil applied to paper. Or a spreadsheet or other decision support program in use. We can't do this stuff in our heads. At least I can't. Evolution didn't deliver arithmetic, much less rationali... (read more)

1realitygrill
I thought that a major point of the heuristics and biases program, at least for economics, was that they were systematic and in a sense "baked-in" as defaults. If these errors are artifacts of tweaks/wording, then that really undermines the hope of theoretical extension. The value of this kind of knowledge becomes lopsided towards marketers, magicians, and others trying to manipulate or trick people more effectively.

On the other hand, I think the idea of using the error data as clues as to neural architecture and functioning is great! It seems that neuroscience-clustered research is focused mostly bottom-up and rarely takes inspiration from the other direction.

This raises an interesting point. We can do arithmetic in our heads, some of us more spectacularly than others. Do you mean to say that there is no way to employ/train our brains to do rational thinking more effectively and intuitively? I had always hoped that we could at least shape our intuition enough to give us a sense for situations where it would be better to calculate - though it's costly and slower. We do not always have our tools (although I guess in the future this is less and less likely).

Thanks, Eliezer!

That's good stuff. I really relate to " ... the poisonous meme saying that someone who gives mere money must not care enough to get personally involved." That one runs on automatic in my head. It's just one of many ways my brain lies to me.

“Every time I spend money I feel like I'm losing hit points. ” Now, I don’t know your personal situation, and I can certainly relate. My mother is a child of the Great Depression and lived her life out of a fear of poverty. She taught me to worship Bargain and Sale and to abhor “unnecessar... (read more)

Thanks, Eliezer!

As one of your supporters, I have been sometimes concerned that you are doing blog posts instead of working out the Friendly AI theory. Much more concerned than I show. I do try to hold it down to an occasional straight question, and hold myself back from telling you what to do. The hypothesis that I know better than you is at least -50dB.

This post is yet another glimpse into the Grand Strategy behind the strategy, and helps me dispel the fear from my less-than-rational mind.

I find it unsettling that " ... after years of bogging dow... (read more)

“If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes? ”

I don’t. Well, not like Believe. Some few of them I will give 40 or even 60 decibels.

But I’m clear that my brain lies to me. Even my visual processor lies. (Have you ever been looking for your keys, looked right at them, and gone on looking?)

I hold my beliefs loosely. I’m coachable. Maybe even gullible. You can get me to believe some untruth, but I’ll let go of that easily when evidence appears.

Thanks, Eliezer!

“Are there motives for seeking truth besides curiosity and pragmatism?”

I can think of several that have showed up in my life. I’m offering these for consideration, but not claiming these are good or bad, pure or impure etc. Some will doubtless overlap somewhat with each other and the ones stated.

  1. As a weapon. Use it to win arguments (sometimes the point of an argument is to WIN, never mind learning the truth. I've got automatic competitiveness I need to keep on a short leash). Use it to win bar room bets. Acquire knowledge about the
... (read more)

My reason for writing this is not to correct Eliezer. Rather, I want to expand on his distinction between prior information and prior probability. Pages 87-89 of Probability Theory: The Logic of Science by E. T. Jaynes (2004 reprint with corrections, ISBN 0 521 59271 2) are dense with important definitions and principles. The quotes below are from there, unless otherwise indicated.

Jaynes writes the fundamental law of inference as

  P(H|DX) = P(H|X) P(D|HX) / P(D|X)         (4.3)

which the reader may be more used to seeing as

 P(H|D) = P(H) P(D|H) / P(D)
... (read more)
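As a minimal numeric sketch of (4.3) - the probabilities below are made-up illustrative values, not from Jaynes, and the function name is just for this example:

    # Posterior P(H|DX) from the prior P(H|X) and the likelihoods P(D|HX) and P(D|~H X),
    # expanding P(D|X) with the sum rule. Numbers are illustrative only.
    def posterior(p_h, p_d_given_h, p_d_given_not_h):
        p_d = p_h * p_d_given_h + (1 - p_h) * p_d_given_not_h   # P(D|X)
        return p_h * p_d_given_h / p_d                          # P(H|DX)

    print(posterior(0.01, 0.95, 0.05))  # ~0.161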

personality tests

Another test set is Gallup / Clifton StrengthsFinder 2.0 (http://www.strengthsfinder.com/113647/Homepage.aspx).

For me, the results were far more useful than the various "personality profiles" I have taken, sometimes at considerable cost to my employer.

"The CSF is an online measure of personal talent that identifies areas where an individual’s greatest potential for building strengths exists. ... The primary application of the CSF is as an evaluation that initiates a strengths-based development process in work and academ... (read more)

Yes, I read about " ... disappears in a puff of smoke." I wasn't coming back for a measly $1K, I was coming back for another million! I'll see if they'll let me play again. Omega already KNOWS I'm greedy, this won't come as a shock. He'll probably have told his team what to say when I try it.

" ... and come back for more." was meant to be funny.

Anyway, this still doesn't answer my questions about "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars."

Someone please answer my questions! Thanks!

0Johnicholas
The problem needs lots of little hypotheses about Omega. In general, you can create these hypotheses for yourself, using the principle of the "Least Convenient Possible World" (http://lesswrong.com/lw/2k/the_least_convenient_possible_world/) or, from philosophy/argumentation theory, the "Principle of Charity" (http://philosophy.lander.edu/intro/charity.shtml). In your case, I think you need to add at least two helper assumptions - Omega's prediction abilities are trustworthy, and Omega's offer will never be repeated - not for you, not for anyone.

Well, I mulled that over for a while, and I can't see any way that contributes to answering my questions.

As to " ... what does your choice effect and when?", I suppose there are common causes starting before Omega loaded the boxes, that affect both Omega's choices and mine. For example, the machinery of my brain. No backwards-in-time is required.

In Eliezer's article on Newcomb's problem, he says, "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. " Such evidence from previous players fails to appear in some problem descriptions, including Wikipedia's.

For me this is a "no-brainer". Take box B, deposit it, and come back for more. That's what the physical evidence says. Any philosopher who says "Tak... (read more)

0JGWeissman
There is no opportunity to come back for more. Assume that when you take box B before taking box A, box A is removed.
-1eirenicon
What the physical evidence says is that the boxes are there, the money is there, and Omega is gone. So what does your choice affect and when?

LessWrong.com sends the user's password in the clear (as reported by ZoneAlarm Extreme Security 8).

Please consider warning people that this is so.

Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:

  • [Totalitarian rule... ] ... [is] ... the best way to destroy humanity, (as in cause and effect.)
  • OR maybe you meant: wishing ... [is] ... the best way to destroy humanity

It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function".

Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?

OK.

Actually, I'm going to restrain myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share.

Thanks.

HOMEWORK REPORT

With some trepidation! I'm intensely aware I don't know enough.

"Why do I believe I have free will? It's the simplest explanation!" (Nothing in neurobiology is simple. I replace Occam's Razor with a metaphysical growth restriction: Root causes should not be increased without dire necessity).

OK, that was flip. To be more serious:

Considering just one side of the debate, I ask: "What cognitive architecture would give me an experience of uncaused, doing-whatever-I-want, free-as-a-bird Capricious Action that is so strong that... (read more)

-1Purged Deviator
[comment removed by author]
2StefanW
There is some confusion about the meaning of free will. I can decide freely whether to drink a coffee or a tea, but you will see me always choosing the coffee. Am I free to choose? Really?

I'm free to choose whether to use my bicycle to go to work, or take the bus. Well - it's raining. Let's take the bus. A bloody moron stole my bike - now I'm not free to choose, I'm forced to take the bus. There are inner and outer conditions which influence my decision. I'm not free at the traffic light, but if I take the risk of paying the penalty, I'm free again. Maybe I internalized outer pressure in a way that I can't distinguish it from inner wishes, bias or fear and animus.

The second problem is that, as a model of our brain, we look at it as if it were a machine or a computer. We know there are neurons firing, and when we face our decision-making process that way, it becomes something foreign to us - we don't see it as part of ourselves, like we see our feet in action while walking. If you told somebody that he isn't walking, that it's his feet which walk, everybody would laugh. Yes - the feet are part of him. He cannot walk without his feet. And firing neurons are the same thing as thinking. The process of thinking is this machine in our head in action. It's your machine - it's you! And mine is mine, and it's me. So we don't fall into the trap of a distinction between 'me' and 'my thoughts, my brain, some neurons, firing'.

And we know that there are inner and outer influences on our decisions. We have a history, which influences whether we like the idea of going by bus or by bicycle. There are some stronger and some not so strong influences, and maybe millions, so the process of making a decision is too complex to predict in all cases. I know I drank coffee for the last 20 years and not tea - but on the other hand, if there is a strong influence, I might drink tea tomorrow. Mainly a disruption of my habits. I might get forced to do something I don't

META: thread parser failed?

It sounds like these posts should have been a sub-thread instead of all being attached to the original article:

09 March 2008 11:05:11PM
09 March 2008 11:33:14PM
10 March 2008 01:14:45AM

Also, see the mitchell porter2 - Z. M. Davis - Frank Hirsch - James Blair - Unknown discussion below.

4Z_M_Davis
Eliezer's posts (including comments) from before March were ported from the old, nonthreaded Overcoming Bias: that's why there are no threads and no sorting option.

Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:

This only makes it worse, because you can't excuse a signal.

This only makes what worse? Does it make me sound more fanatical?

Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

OK I'll start with a prior = 10% that I am f... (read more)

What do you recommend I do about my preachy style?

I suggest trying to determine your true confidence on each statement you write, and using appropriate language to convey the amount of uncertainty you have about its truth.

If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don't just issue a blanket disclaimer like "All of that is IN MY OPINION."

Jack wrote on 09 September 2009 05:54:25PM:

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.

Please explain to me how the destruction follows from the rule of a god-like totalitarian.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

-2Jack
Maybe some Homo Sapiens would survive, humanity wouldn't. Are the human animals in 1984 "people"? After Winston Smith dies is there any humanity left? I can envision a time when less freedom and more authority is necessary for our survival. But a god-like totalitarian pretty much comes out where extinction does in my utility function.

Jack wrote on 09 September 2009 05:54:25PM:

I can't help but think that those activities aren't going to do much to save humanity.

I hear that. I wasn't clear. I apologise.

I DON'T KNOW what I can do to turn humanity's course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference.

"Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman.

... but not acceptable from a mere man who cares, eh?

(Oh, all ... (read more)

I've been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying "In my opinion" in front of each one. I do have a standard boilerplate that I am to put at the beginning of each missive:

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say,... (read more)

1Vladimir_Nesov
This only makes it worse, because you can't excuse a signal. (See rationalization, signals are shallow). Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

Ah. Thanks! I think I get that.

But maybe I just think I do. I thought I understood that narrow part of Wei Dai's post on a problem that maybe defeats TDT. I had no idea that compassion had already been considered and compensated out of consideration. And that's such common shared knowledge here in the LessWrong community that it need not be mentioned.

I have a lot to learn. I now see I was very arrogant to think I could contribute here. I should read the archives & wiki before I post. I apologize.

<<Begins to compute an estimated time to de-lurk. They collectively write several times faster than I can read, even if I don't slow down to mull it over. Hmmm... >>

Thanks for the clarification.

I guess I won't be posting articles to LessWrong, as I have no clue what I'm doing wrong such that I get more downvotes than upvotes.

I would like some clarification on "LW doesn't register negative karma right now." Does that mean

  • my negative points are GONE, or
  • they are hiding and still need to be paid off before I can get a positive score?

Thanks

1thomblake
I believe they stick around invisibly. Your karma should always be dynamically the sum of upvotes and downvotes you've received.
0[anonymous]
They are hiding. To see your actual karma score, try to down-vote me and a little message will appear.

Inorite? What is that?

I suspect I'm not smart enough to play on this site. I'm quite unsure I can even parse your sentence correctly, and I can't imagine a reason to adjust the external payoff matrices (they were given by Wei Dai, that is the original problem I'm discussing) so the internal payoff matrices match something. I'm baffled.

0thomblake
See Cyan's comment below. Do not be dispirited by lolspeak.

Also, the reason to adjust the payoff matrices in the original problem is so that your 'internal' payoff matrices match those of Wei Dai's problem, or to put it another way, consider the problem in the least convenient possible world. Basically, the prisoner's dilemma is still there if you take the problem to be in utilons, which take into account things like your 'compassion' (in this case, valuing the reward given to the other person). I can't quite figure out what your formula for discounting is above, so let me simplify... It would be remiss for me to not do the math, though it is not my forte.

Suppose the matrix represents jelly beans for you or the opponent, each worth 1 utilon. Further suppose that you get .25 utilons for each jelly bean the opponent gets, due to your 'compassion'. Now take this payoff matrix (in jellybeans):

  375/500   -150/600
  600/0     75/100

Which becomes, in your 'internal' matrix (in utilons):

  500/500   0/600
  600/0     100/100

Now cooperation is dominated by defection for the 'compassionate' person. Someone please note if my numbers don't work out - it's early here.
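As a minimal Python sketch of that conversion (the matrix values and the 0.25 compassion weight are the ones assumed in the comment above; the function name is purely for illustration):

    # Convert the jelly-bean payoff matrix into the row player's 'internal' utilon matrix:
    # 1 utilon per own jelly bean plus 0.25 utilons per jelly bean the opponent gets.
    def internal_payoffs(jellybean_matrix, compassion=0.25):
        return [[(mine + compassion * theirs, theirs) for (mine, theirs) in row]
                for row in jellybean_matrix]

    # Rows: my move (C, D); columns: opponent's move (C, D); cells: (my beans, their beans).
    external = [[(375, 500), (-150, 600)],
                [(600, 0), (75, 100)]]

    for row in internal_payoffs(external):
        print(row)
    # [(500.0, 500), (0.0, 600)]
    # [(600.0, 0), (100.0, 100)]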
2Cyan
"inorite".
-3thomblake
inorite?! Of course, this might still be muddy if you recast the payoff matrix in utilons, or (to abstract away less) adjust the "external" payoff matrices so that the "internal" payoff matrices match those of the original problem.
8SilasBarta
While a good question, Eliezer_Yudkowsky has already thoroughly answered it in The True Prisoner's Dilemma. His point there is, the values in the matrix are supposed to represent the participants' utility, rather than jail time, which accounts for your compassion for your friend. If it were simply prison sentences, your reasoning would apply, which is why EY says the true Prisoner's Dilemma requires convoluted, unusual scenarios, and why normal presentations of the PD don't make the situation clear.
-2Furcas
That Prisoner A is completely and utterly selfish is part of the Prisoner's Dilemma. If the prisoner's not selfish, it's not the Prisoner's Dilemma anymore. EDIT: Of course, this is only true if the numbers in the matrix represent years spent in jail, not utilons.

Mostly, I study. I also go to a few conferences (I'll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them. Try agreeing for at least a while.):

  • "There is no safety in assuring we have a power switch on a super-intelligence. That would be power at a whole new level. That's pretty much Absolute Power and would bring out the innate corruption / corruptibility / self-interest in just about anybody."
  • "We need Somebody to take the dangerous
... (read more)
5Jack
I can't help but think that those activities aren't going to do much to save humanity. I don't want to send you into an existential crisis or anything but maybe you should tune down your job description. "Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman. It might be affably egotistical for someone who does preventive counter-terrorism re: experimental bioweapons. "Saving Humanity from Homo Sapiens one academic conference at a time" doesn't really do it for me. Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.
6thomblake
I'm not sure what this was supposed to add, especially with emphasis. Whose opinion would we think it is?

BRAVO, Eliezer! Huzzah! It's about time!

I don't know if you have succeeded in becoming a full rationalist, but I know I haven't! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I'm built on evolved wetware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddy seat with the toy steering wheel.

I haven't been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. ... (read more)

2Psy-Kosh
Cool! Just curious... What do you do for 25 hours a week to save humanity from itself?
5Vladimir_Nesov
It's incomprehensible. Try debugging individual ideas first, written up more carefully.

Wei_Dai wrote on 19 August 2009 07:08:23AM :

... Omega's AIs will reason as follows: "I have 1/2 chance of playing against a TDT, and 1/2 chance of playing against a CDT. If I play C, then my opponent will play C if it's a TDT, and D if it's a CDT ...

That seems to violate the secrecy assumptions of the Prisoner's Dilemma problem! I thought each prisoner has to commit to his action before learning what the other one did. What am I missing?

Thanks!

About that report link (http://lesswrong.com/ ???): It doesn't say what it's going to do, what it is for (hate speech, strong language, advocating the overthrow, trolling, disagreeing with me...), nor does it give me a chance to explain.

2thomblake
Indeed - another thing for the nonexistent FAQ. As for the URL, it uses (inline) javascript and doesn't actually care about the href - a really stupid design decision, since it bounces you around (sends you back to /) if you have javascript off. But then, much of the site is kind of broken in that case.

I don't see a way to send my new article to the mods. When I'm done editing in my drafts folder, then what?

1dariusp
We made a recent change such that when you are creating or editing an article the "post to" selection always shows LessWrong. If you don't yet have enough karma to post to LessWrong, it will be grayed out and have a message next to it explaining why. Old versions of IE unfortunately don't honor the grayed-out option. In this case you can select LessWrong, but submitting will inform you that you can't yet submit to the LessWrong category (of course you can always save to your drafts).
1Vladimir_Nesov
To publish an article, you need to have at least 20 points of Karma. Granted, this rule should be placed somewhere visible to the newcomers. It doesn't seem to be on the About page.

Terminology. Try to be consistent. "Liked" and "Vote Up": pick one and stick with it. IMHO

1thomblake
For those who don't get this one right away, if you check your user page, you see links to 'liked' and 'disliked' as categories of posts that you voted up or down. Since this doesn't seem to quite match the semantics of voting, the names of the categories on the user pages should be changed.

How about a basic Users' Guide, and include a link to it right in the top links bar?

Consider (think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".)

Consider that you are using "we" and "self" as a pointer that jumps from one set to another moment by moment. Here is a list of some sets that may be confounded together here; see how many others you can think of.

  • These United States (see the Constitution)
  • the people residing in that set
  • citizens who vote
  • citizens with a peculiar ... (read more)

1thomblake
There's a "Help" link below / next to the comment box, and it respects much of the MarkDown standard. To put a single line break at the end of the line, just end the line with two spaces. Paragraph breaks are created by a blank line in-between lines of text.