Rationality Quotes April 2012

4 Post author: Oscar_Cunningham 03 April 2012 12:42AM

Here's the new thread for posting quotes, with the usual rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LW/OB
  • No more than 5 quotes per person per monthly thread, please.

Comments (858)

Comment author: NancyLebovitz 29 April 2012 01:31:51PM 4 points [-]

In recent years, I've come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.

--Kip W

Comment author: Vulture 28 April 2012 03:04:31AM 4 points [-]

Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn't function if it weren't so.

-Mark Rosenfelder (http://zompist.com/chance.htm)

Comment author: bojangles 27 April 2012 06:49:41PM *  2 points [-]

I stopped being afraid because I read the truth. And that's the scientifical truth which is much better. You shouldn't let poets lie to you.

-- Bjork

Comment author: [deleted] 27 April 2012 07:39:39AM 8 points [-]

Generally when I see write-ups of statistical results, I immediately go to the original source. The fact is that the media is liable to simply shade and color the results to suit their own pat narrative. That’s just human nature.

--Razib Khan, source

Comment author: iwdw 24 April 2012 03:48:57PM 7 points [-]

The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."

Comment author: asparisi 20 April 2012 07:23:30PM 6 points [-]

"If you had a choice between the ability to detect falsehood and the ability to discover truth, which would you take? There was a time when I thought they were different ways of saying the same thing, but I no longer believe that. Most of my relatives, for example, are almost as good at seeing through subterfuge as they are at perpetrating it. I'm not at all sure, though, that they care much about truth. On the other hand, I'd always felt there was something noble, special, and honorable about seeking truth..."

-- Merlin, in Roger Zelazny's Sign of Chaos

Comment author: chaosmosis 18 April 2012 05:29:45PM *  8 points [-]

"When I was young I shoved my ignorance in people's faces. They beat me with sticks. By the time I was forty my blunt instrument had been honed to a fine cutting point for me. If you hide your ignorance, no one will hit you and you'll never learn."

-- Ray Bradbury, Fahrenheit 451

I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread, for those of you interested in viewing the carnage; it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.

Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.

Comment author: MixedNuts 20 April 2012 05:48:27PM *  25 points [-]

Tips for dealing with people with big egos:

  • Don't insult anyone, ever. If Wagner posts, either say "Hmm, why do you believe Mendelssohn's music to be derivative?" or silently downvote, but don't call him an antisemitic piece of shit.
  • Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
  • Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
  • Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
  • Stick closely to the question and do not involve the personalities of debaters.
  • Exception to the above: it's okay to pass judgement on a personality trait if it's a compliment. If you can't always avoid insulting people, occasionally complimenting them can help.
  • A lot of things are insults. You will slip up. This won't make people dislike you.
  • If you know what a polite and friendly tone is, have one.
  • If someone isn't polite and friendly, it means you need to be more polite and friendly.
  • If they're being very rude and mean and it's getting annoying, you can gently mention it. Still make the rest of your post polite and friendly and about the question.
  • If the "polite and about the question" part is empty, don't post.
  • If you have insulted someone in a thread - either more than once, or once and people are still hostile despite you being extra nice afterwards - people will keep being hostile in the thread and you should probably walk away from it.
  • If hostility in a thread is leaking into your mood, walk away from the whole site for a little while.
  • When you post in another thread, people will not hold any grudges against you from previous threads. Sorry for your epic quest, but we don't have much against you right now.
  • Apologies (rather than silence) are a good idea if you were clearly in the wrong and not overly tempted to add "but".

On politeness:

  • Some politeness norms are stupid and harmful and wrong, like "You must not criticize even if explicitly asked to" or "Disagreement is impolite". Fortunately, we don't have these here.
  • Some are good, like not insulting people. Insulting messages get across poorly. This happens even when people ignore the insult to answer the substance, because the message is overloaded.
  • Some are mostly local communication protocols that help but can be costly to constrain your message around. It's okay to drop them if you can't bear the cost.
  • Some are about fostering personal liking between people. They're worthwhile to people who want that and noise to people who don't.
  • Taking pains to be polite is training wheels. People who are good with words can say precisely and concisely what they mean in a completely neutral tone. People who aren't are injecting lots of accidental interpersonal content, so we need to make it harmless explicitly.

People who are exempted:

  • The aforementioned people, who will never accidentally insult anyone;
  • People whose contribution is so incredibly awesome that it compensates for being insufferable; I know of a few but none on LessWrong;
  • wedrifid, who is somehow capable of pleasant interaction while being a complete jerk.

Comment author: komponisto 22 April 2012 08:36:58PM 5 points [-]

wedrifid, who is somehow capable of pleasant interaction while being a complete jerk

Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).

In saying this, I don't know whether I'm expanding on your point or disagreeing with it.

Comment author: Wei_Dai 24 April 2012 05:50:04AM 3 points [-]

I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.

Comment author: wedrifid 24 April 2012 06:45:45AM 3 points [-]

I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.

That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!

Comment author: wedrifid 22 April 2012 08:51:56PM *  4 points [-]

Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).

I appreciate your kind words komponisto! You inspire me to live up to them.

Comment author: TheOtherDave 20 April 2012 07:12:29PM *  6 points [-]

I'll add to this that actually paying attention to wedrifid is instructive here.

My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks, and
3) honoring the obligations of implicit social alliances when an ally is attacked.

I endorse this and have been trying to get better about #3 myself.

Comment author: Wei_Dai 20 April 2012 08:53:15PM 11 points [-]

The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?

Comment author: TheOtherDave 20 April 2012 11:10:33PM 7 points [-]

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it's failing to reflect on whether I endorse A. If I do neither, then the community doesn't degenerate into tribal warfare, it degenerates into chaos.

Admittedly, chaos can be more fun, but I don't really endorse it.

All of that said, I do recognize that explicitly talking about "social alliances" (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn't help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).

(I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.)

Comment author: wedrifid 21 April 2012 06:05:17AM 16 points [-]

I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.

Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.

Comment author: TheOtherDave 21 April 2012 03:20:55PM 3 points [-]

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.)
I blame the stroke, though.

Comment author: wedrifid 21 April 2012 05:54:06PM 7 points [-]

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.) I blame the stroke, though.

Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.

Comment author: TheOtherDave 21 April 2012 06:56:08PM 7 points [-]

It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.

Comment author: Wei_Dai 21 April 2012 12:43:34AM *  4 points [-]

if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.

Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?

Comment author: TheOtherDave 21 April 2012 01:06:07AM 2 points [-]

Is establishing yourself as a reliable ally an instrumental or terminal goal for you?

Instrumental.

If the former, what advantages does it bring in a group blog / discussion forum like this one?

Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don't know how I could begin to itemize it.
To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.

The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally.

Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.

If you mean to say further that it doesn't affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo "ally."

People's estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don't engage in discussion, because I no longer trust that they will engage reliably.

Are you hoping to establish other kinds of alliances here?

Not that I can think of, but honestly this question bewilders me, so it's possible that you're asking about something I'm not even considering. What kind of alliances do you have in mind?

Comment author: Wei_Dai 22 April 2012 02:19:03AM 1 point [-]

To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals. Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.

It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid, with "being a good discussion partner" a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)

What kind of alliances do you have in mind?

I didn't have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be for example that you're looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.

Comment author: wedrifid 22 April 2012 02:22:44PM 2 points [-]

It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid

This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn't seem to be a very accurate description of reality. A lot of information - and information I consider important at that - can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive 'timidity' can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you're not timid seems to be a mistake.

In my own experience - from back when I was timid in the extreme - the sort of "sticking up for", jumping to the defense against (unfair or undesirable) aggression is one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.

(This is the impression I have of wedrifid, for example.)

Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.

Comment author: Wei_Dai 22 April 2012 07:46:27PM 2 points [-]

Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.

I find that my brain doesn't automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven't found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.

I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I'd be curious to find out.

Comment author: wedrifid 21 April 2012 05:46:48AM *  1 point [-]

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I really like your illustration here. To the extent that this is what you were trying to convey by "3)" in your analysis of wedrifid's style then I endorse it. I wouldn't have used the "alliances" description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I'm happy with it as a simple model.

Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of "Sam", "Pat" and "A". In particular there are many behaviors "A" whose execution will immediately place the victim of said behavior into the role of "ally that I am obliged to support".

Comment author: TheOtherDave 21 April 2012 03:03:47PM *  2 points [-]

Yeah, agreed about the distracting phrasing. I find it's a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.

Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.

Also, if you have an articulable model of how you make those judgments I'd be interested, especially if it uses more socially acceptable language than mine does.

Edit: Also, I'm really curious as to the reasoning of whoever downvoted that. I commit to preserving that person's anonymity if they PM me about their reasoning.

Comment author: MixedNuts 20 April 2012 07:29:28PM 8 points [-]

Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.

Comment author: TheOtherDave 20 April 2012 07:42:42PM 4 points [-]

Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.

Comment author: thomblake 18 April 2012 06:06:15PM *  8 points [-]

Plus, I like the idea of losing so much karma in one day and then eventually earning it all back

This discussion is off-topic for the "Rationality Quotes" thread, but...

If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:

Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.

My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.

Comment author: chaosmosis 18 April 2012 06:18:30PM *  1 point [-]

No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.

Maybe I'll eventually write something like that. Not yet.

Comment author: DSimon 18 April 2012 10:52:19PM *  10 points [-]

It needs to be a drawn out and painful and embarrassing process.

Oh, you want a Quest, not a goal. :-)

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.

Comment author: orthonormal 22 April 2012 06:48:41PM 5 points [-]

Try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

I nominate this as the Less Wrong Summer Challenge, for everybody.

(One modification I'd make: it shouldn't necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)

Comment author: gRR 22 April 2012 07:16:39PM 1 point [-]

And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.

Comment author: Bugmaster 18 April 2012 10:54:46PM 1 point [-]

You just need a reasonably friendly tone. I have a bunch of karma, and I haven't posted any articles yet (though I'm working on it).

Comment author: DSimon 18 April 2012 10:56:15PM 2 points [-]

Indeed, that would work if karma were merely the goal. But chaosmosis expressed a desire for a "painful and embarrassing process", meaning that the ante and risk must be higher.

Comment author: David_Gerard 18 April 2012 11:23:28PM 5 points [-]

One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.

Comment author: DSimon 18 April 2012 11:38:44PM *  16 points [-]

I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.

Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.

Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.

Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)

Comment author: [deleted] 19 April 2012 05:07:55PM 1 point [-]

You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution.

Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.

Comment author: RichardKennaway 22 April 2012 07:15:27PM 5 points [-]

That's like getting a black belt in karate by buying one from the martial arts shop. It isn't karmawhoring unless you're getting karma from real people who really thought your comments worth upvoting.

Comment author: [deleted] 23 April 2012 06:55:47PM 1 point [-]

“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?

Comment author: RichardKennaway 23 April 2012 07:14:54PM *  5 points [-]

It is good to have one's comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute's reward -- money -- is of some actual use.

Comment author: Strange7 21 April 2012 07:25:11AM 5 points [-]

Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.

Comment author: Bugmaster 19 April 2012 05:29:21PM 1 point [-]

This would indeed count as "minimal contribution", but still sounds like a lot of work...

Comment author: Dias 19 April 2012 07:58:43AM 5 points [-]

Have some third party (or several) that LW would trust hold on to it in secret.

Nitpick: cryptography solves this much more neatly.

Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.

Comment author: ciphergoth 19 April 2012 12:31:03PM 4 points [-]

Factorization doesn't enter into it - to precommit to a message that you will later reveal publicly, publish a hash of the (salted) message.
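As a minimal sketch of the salted-hash commitment ciphergoth describes (not anyone's actual implementation; the function names and message are invented for illustration), the scheme might look like this:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Produce a commitment to a message.

    Returns (salt, commitment); publish only the commitment now,
    and reveal the message and salt later.
    """
    salt = secrets.token_bytes(16)  # random salt blocks guessing of short messages
    commitment = hashlib.sha256(salt + message).digest()
    return salt, commitment

def verify(message: bytes, salt: bytes, commitment: bytes) -> bool:
    """Check a revealed (message, salt) pair against the published commitment."""
    return hashlib.sha256(salt + message).digest() == commitment

# Commit now, reveal later.
salt, c = commit(b"my karma strategy")
assert verify(b"my karma strategy", salt, c)       # honest reveal checks out
assert not verify(b"a different strategy", salt, c)  # substituted message fails
```

The salt matters because a short or guessable message ("I will post lots") could otherwise be brute-forced from its bare hash before the reveal.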

Comment author: wedrifid 19 April 2012 08:29:12AM *  1 point [-]

Nitpick: cryptography solves this much more neatly.

But somewhat less transparently. The cryptographic solution still requires that an encrypted message be made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is still to use trusted parties, but give them only the encrypted strategy (or a hash thereof).

Comment author: David_Gerard 18 April 2012 11:41:45PM 3 points [-]

My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)

Comment author: TheOtherDave 19 April 2012 12:18:18AM 5 points [-]

IME, per-comment EV is way higher in the HP:MoR discussion threads.

Comment author: David_Gerard 19 April 2012 07:03:12AM 2 points [-]

It so is. Karmawhoring in those is easy.

This suggests measuring posts for comment EV.

Comment author: Hul-Gil 19 April 2012 07:20:26AM *  3 points [-]

This suggests measuring posts for comment EV.

Now that is an interesting concept. I like where this subthread is going.

Interesting comparisons to other systems involving currency come to mind.

EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear as though they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--

...okay, perhaps some sleep is in order first.

Comment author: David_Gerard 19 April 2012 02:25:03PM 1 point [-]

And then, we begin to sell LW karma for Bitcoins, and--

It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.

Comment author: [deleted] 18 April 2012 05:33:06PM 7 points [-]

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos.

This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.

Comment author: [deleted] 18 April 2012 07:54:07PM *  2 points [-]

And it is actually a rationality problem.

You mean to the extent that any problem at all is a rationality problem, or something else?

Comment author: [deleted] 18 April 2012 10:28:32PM 2 points [-]

It's a bias, as far as I'm concerned, and something that needs to be overcome. People with egos can be right, but if one can't deal with the fact that they're either right or wrong regardless of their egotism, then one is that much slower to update.

Comment author: wedrifid 18 April 2012 07:10:23PM 2 points [-]

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across.

It is what we would call an "instrumental rationality" problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos... which you seem to be taking steps towards now!

Comment author: [deleted] 18 April 2012 12:44:24PM 8 points [-]

A weak man is not as happy as that same man would be if he were strong. This reality is offensive to some people who would like the intellectual or spiritual to take precedence. It is instructive to see what happens to these very people as their squat strength goes up.

-- Mark Rippetoe, Starting Strength

Comment author: Manfred 22 April 2012 05:21:14AM 5 points [-]

Sample: men who come to this guy to get stronger, I assume?

Comment author: Nornagest 22 April 2012 06:37:02AM 3 points [-]

Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I've heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he's filtering those out properly.

Comment author: HonoreDB 16 April 2012 03:26:14PM 8 points [-]

That's right, Emotion. Go ahead, put Reason out of the way! That's great! Fine! ...for Hitler.

--1943 Disney cartoon

Comment author: Bill_McGrath 16 April 2012 09:58:03AM 5 points [-]

Using an elementary accounting text and with the help of an accountant friend, I began. For me, a composer, accounting had always been the symbol of ultimate boredom. But a surprise awaited me: Accounting is just a simple, practical tool for measuring resources, so as to better allocate and use them. In fact, I quickly realized that basic accounting concepts had a utility far beyond finance. Resources are almost always limited; one must constantly weigh costs and benefits to make enlightened decisions.

--Alan Belkin From the Stock Market to Music, via the Theory of Evolution

This was just the first bit that stood out as LW-relevant; he also briefly mentions cognitive bias and touches on the possible benefits of cognitive science to the arts.

Comment author: [deleted] 16 April 2012 05:49:36AM 10 points [-]

The fundamental rule of political analysis from the point of view of psychology is: follow the sacredness, and around it is a ring of motivated ignorance.

--Jonathan Haidt, source

Comment author: Multiheaded 16 April 2012 12:07:56PM *  7 points [-]

He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don't pay enough attention to it (and refuse to acknowledge their own sacred/profane areas).

I have more to say about his values theory, I'll post some thoughts later.

UPD: I wrote a little something, now I'm just gonna ask Konkvistador whether he thinks it's neutral enough or too political for LW.

Comment author: [deleted] 16 April 2012 03:03:56PM *  2 points [-]

Please make sure you do. I suspect it will be interesting. :)

Comment author: lukeprog 15 April 2012 01:30:09PM 8 points [-]

Every intelligent ghost must contain a machine.

Aaron Sloman

Comment author: Klevador 14 April 2012 04:48:48AM *  12 points [-]

Any collocation of persons, no matter how numerous, how scant, how even their homogeneity, how firmly they profess common doctrine, will presently reveal themselves to consist of smaller groups espousing variant versions of the common creed; and these sub-groups will manifest sub-sub-groups, and so to the final limit of the single individual, and even in this single person conflicting tendencies will express themselves.

— Jack Vance, The Languages of Pao

Comment author: [deleted] 17 April 2012 10:52:27AM 7 points [-]

Shorter version:

Quot homines, tot sententiae (as many people, so many opinions)

-- Terence, Phormio

Comment author: MixedNuts 20 April 2012 05:52:14PM 3 points [-]

My favorite:

Two {people, rabbis, economists}, three opinions.

Comment author: Random832 13 April 2012 08:41:37PM *  19 points [-]

The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...

[Vimes] distrusted the kind of person who'd take one look at another man and say in a lordly voice to his companion, "Ah, my dear sir, I can tell you nothing except that he is a left-handed stonemason who has spent some years in the merchant navy and has recently fallen on hard times," and then unroll a lot of supercilious commentary about calluses and stance and the state of a man's boots, when exactly the same comments could apply to a man who was wearing his old clothes because he'd been doing a spot of home bricklaying for a new barbecue pit, and had been tattooed once when he was drunk and seventeen and in fact got seasick on a wet pavement. What arrogance! What an insult to the rich and chaotic variety of the human experience!

-- Terry Pratchett, Feet of Clay

Comment author: RobinZ 14 April 2012 04:13:39AM 10 points [-]

Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:

Encyclopedia Brown? What a hack! To this day, I occasionally reach into my left pocket for my keys with my right hand, just to prove that little brat wrong.

Comment author: tut 14 April 2012 09:27:31AM 2 points [-]

Sounds like Vimes doesn't like Sherlock Holmes much.

Comment author: Multiheaded 14 April 2012 09:45:41AM 1 point [-]

Gee, you think?

Comment author: tut 14 April 2012 11:31:51AM *  1 point [-]

Well, the quote made me think of this. Now that I looked up that post I notice that it is downvoted, so perhaps it isn't relevant. But the behavior that Vimes expresses distrust of in the Pratchett quote is pretty much the exact behavior that is used to show off how intelligent/perceptive Holmes is, and which the poster wants to use as an example for rationalists.

Comment author: ChristianKl 13 April 2012 01:39:22PM *  4 points [-]

If it can fool ten thousand users all at once (which ought to be dead simple, just add more servers), does that make it ten thousand times more human than Alan Turing?

Bruce Sterling

Comment author: maia 12 April 2012 05:22:24PM 10 points [-]

Suppose you know a golfer's score on day 1 and are asked to predict his score on day 2. You expect the golfer to retain the same level of talent on the second day, so your best guesses will be "above average" for the [better-scoring] player and "below average" for the [worse-scoring] player. Luck, of course, is a different matter. Since you have no way of predicting the golfers' luck on the second (or any) day, your best guess must be that it will be average, neither good nor bad. This means that in the absence of any other information, your best guess about the players' score on day 2 should not be a repeat of their performance on day 1. ...

The best predicted performance on day 2 is more moderate, closer to the average than the evidence on which it is based (the score on day 1). This is why the pattern is called regression to the mean. The more extreme the original score, the more regression we expect, because an extremely good score suggests a very lucky day. The regressive prediction is reasonable, but its accuracy is not guaranteed. A few of the golfers who scored 66 on day 1 will do even better on the second day, if their luck improves. Most will do worse, because their luck will no longer be above average.

Now let us go against the time arrow. Arrange the players by their performance on day 2 and look at their performance on day 1. You will find precisely the same pattern of regression to the mean. ... The fact that you observe regression when you predict an early event from a later event should help convince you that regression does not have a causal explanation.

  • Daniel Kahneman, Thinking, Fast and Slow
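
Kahneman's pattern is easy to verify numerically. Below is a hypothetical simulation (not from the book): each golfer's score is a fixed skill level plus independent daily luck, and we look at how the best day-1 scorers fare on day 2.

```python
import random

random.seed(0)

# Hypothetical model: score = stable skill + independent daily luck.
skills = [random.gauss(72, 3) for _ in range(10000)]
day1 = [s + random.gauss(0, 3) for s in skills]
day2 = [s + random.gauss(0, 3) for s in skills]

# Take the best scorers on day 1 (the lowest 5%; in golf, lower is better).
cutoff = sorted(day1)[len(day1) // 20]
best = [i for i, s in enumerate(day1) if s <= cutoff]

mean_d1 = sum(day1[i] for i in best) / len(best)
mean_d2 = sum(day2[i] for i in best) / len(best)
print(mean_d1, mean_d2)  # the day-2 mean sits between the day-1 mean and 72
```

Running the time arrow backwards (grouping on day 2 and averaging day 1) produces the same pattern, as the quote says, because the model is symmetric in the two days.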
Comment author: CronoDAS 13 April 2012 08:34:52AM *  4 points [-]

If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can't predict the direction in which regression to the mean will occur if your data set is a single point.

The following all have different answers:

I play a certain video game a lot. The last time I played it, my score was 39700. What's your best guess for my score the next time I play it?

(The answer is 39700; I'm probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.)

My friend and I both play a certain video game a lot. The last time I played it, my score was 39700. The last time my friend played it, his score was 32100. What's your best guess for my score the next time I play it?

(The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.)

I played a video game for the first time yesterday. My score was 39700, and higher scores are better than lower ones. What's your best guess for my score the next time I play it?

(The answer is some number higher than 39700, because I'm no longer an absolute beginner.)

Comment author: maia 12 April 2012 05:51:31PM 9 points [-]

A shortcut for making less-biased predictions, taking base averages into account.

Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"

Recall that the correlation between two measures - in the present case, reading age and GPA - is equal to the proportion of shared factors among their determinants. What is your best guess about that proportion? My most optimistic guess is about 30%. Assuming this estimate, we have all we need to produce an unbiased prediction. Here are the directions for how to get there in four simple steps:

  1. Start with an estimate of average GPA.
  2. Determine the GPA that matches your impression of the evidence.
  3. Estimate the correlation between your evidence and GPA.
  4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
  • Daniel Kahneman, Thinking, Fast and Slow
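
The four steps reduce to a one-line interpolation. A sketch (the function name and the 3.0 base average are my own illustration, not Kahneman's):

```python
def regressive_prediction(base_average, intuitive_match, correlation):
    """Kahneman's recipe: move from the base-rate average toward the
    intuitive estimate by a fraction equal to the correlation."""
    return base_average + correlation * (intuitive_match - base_average)

# Julie: suppose the average GPA is 3.0 and your impression of the
# evidence matches a 3.8; with correlation 0.30 the unbiased guess is
# 30% of the way from 3.0 to 3.8.
print(regressive_prediction(3.0, 3.8, 0.30))
```

At correlation 0 the prediction is just the base rate; at correlation 1 you keep your intuitive match unchanged.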
Comment author: [deleted] 12 April 2012 07:39:12AM 14 points [-]

The most fundamental form of human stupidity is forgetting what we were trying to do in the first place.

--Nietzsche

Comment author: Wei_Dai 10 April 2012 05:41:47PM 7 points [-]

当局者迷,旁观者清

Chinese proverb, meaning "the onlooker sees things more clearly", or literally, "the player is lost [in the game]; the onlooker is clear"

Comment author: RichardKennaway 11 April 2012 08:42:53AM 4 points [-]

In personal development workshops, the saying is, "the one with the mike in their hand is the last to see it." Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.

Comment author: [deleted] 10 April 2012 05:48:38PM *  11 points [-]

三人成虎

Chinese proverb, "three men make a tiger", referring to a semi-mythological event during the Warring States period:

According to the Warring States Records, or Zhan Guo Ce, before he left on a trip to the state of Zhao, Pang Cong asked the King of Wei whether he would hypothetically believe in one civilian's report that a tiger was roaming the markets in the capital city, to which the King replied no. Pang Cong asked what the King thought if two people reported the same thing, and the King said he would begin to wonder. Pang Cong then asked, "what if three people all claimed to have seen a tiger?" The King replied that he would believe in it. Pang Cong reminded the King that the notion of a live tiger in a crowded market was absurd, yet when repeated by numerous people, it seemed real. As a high-ranking official, Pang Cong had more than three opponents and critics; naturally, he urged the King to pay no attention to those who would spread rumors about him while he was away. "I understand," the King replied, and Pang Cong left for Zhao. Yet, slanderous talk took place. When Pang Cong returned to Wei, the King indeed stopped seeing him.

-- Wikipedia

Comment author: [deleted] 10 April 2012 07:05:17PM 4 points [-]

One day the last portrait of Rembrandt and the last bar of Mozart will have ceased to be — though possibly a colored canvas and a sheet of notes will remain — because the last eye and the last ear accessible to their message will have gone.

--Oswald Spengler, The Decline of the West

Comment author: [deleted] 13 April 2012 09:02:00PM 2 points [-]

That sounds deep, but it has nothing to do with rationality.

Comment author: [deleted] 14 April 2012 06:27:08AM 1 point [-]

Not really; for example, it is actually pretty clearly connected to fun theory.

Comment author: MixedNuts 09 April 2012 03:24:07PM *  12 points [-]

On specificity and sneaking in connotations; useful for the liberal-minded among us:

I think, with racism and sexism and 'isms' generally, there's a sort of confusion of terminology.

A "Racist1" is someone, who, like a majority of people in this society, has subconsciously internalized some negative attitudes about minority racial groups. If a Racist1 takes the Implicit Association Test, her score shows she's biased against black people, like the majority of people (of all races) who took the test. Chances are, whether you know it or not, you're a Racist1.

A "Racist2" is someone who's kind of an insensitive jerk about race. The kind of guy who calls Obama the "Food Stamp President." Someone you wouldn't want your sister dating.

A "Racist3" is a neo-Nazi. You can never be quite sure that one day he won't snap and kill someone. He's clearly a social deviant.

People use the word "Racist" for all three things, and I think that's the source of a lot of arguments. When people get accused of being racists, they evade responsibility by saying, "Hey, I'm not a Racist3!" when in fact you were only saying they were Racist1 or Racist2. But some of the responsibility is on the accusers too -- if you say "That Republican's a racist" with the implication of "a jerk" and then backtrack and change the meaning to "vulnerable to unconscious bias", then you're arguing in bad faith. Never mind that some laws and rules which were meant to protect people from Racist3's are in fact deployed against Racist2's.

-celandine13

Comment author: Vladimir_M 24 April 2012 07:30:01PM *  8 points [-]

How about:

  1. Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.

  2. Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)

Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.

Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?

Comment author: [deleted] 25 April 2012 09:02:49AM 3 points [-]

That (1) only makes sense if there is a “standard” definition of racist (and it's based on what people believe rather than/as well as what they do). The point of celandine13's comment was indeed that there's no such thing.

Comment author: [deleted] 25 April 2012 12:37:47AM 4 points [-]

Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people

The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here. And given that we're running on corrupted hardware, I suspect that someone who does try to “perform Bayesian inference that somehow involves probabilities conditioned on the race of a person” ends up subconsciously double-counting evidence, and therefore gets less accurate results than somebody who doesn't. (As for cases when the evidence from race is not so easy to screen off... well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.)

Comment author: Vaniver 26 April 2012 04:18:09AM 6 points [-]

well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.

I have seen accusations of racism as responses to people pointing that out.

Comment author: Eugine_Nier 26 April 2012 04:09:55AM 6 points [-]

Also, according to the U.S. Supreme Court, even if race is screened off, your actions can still be racist or something.

Comment author: Eugine_Nier 25 April 2012 07:59:51AM 5 points [-]

The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here.

In real life, you don't have the luxury of gathering forensic evidence on everyone you meet.

Comment author: [deleted] 25 April 2012 08:55:05AM *  3 points [-]

I'm not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.

Comment author: Eugine_Nier 26 April 2012 04:07:13AM *  5 points [-]

Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour.

Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.

Heck, even just knowing what their job is would screen off much of it.

There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.

Comment author: Multiheaded 07 May 2012 02:23:36PM *  3 points [-]

Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.

...

I do not require any “moral justification” for acting on the truth as it really is; truth is its own warrant. (A comment by him.)

I facepalmed. Really, Eric? Sorry, I don't think that a moral realist is perceptive enough to the nuances and ethical knots involved to be a judge on this issue. I don't know, he might be an excellent scientist, but it's extremely stupid to be so rash when you're attempting serious contrarianism.

But you reveal a confusion in your own thinking. It is not “treating other human beings as less-than-equal” to make rational decisions in risk situations; it is only that if you make decisions which are irrationally biased.

Yep, let's all try to overcome bias really really hard; there's only one solution, one desirable state, there's a straight road ahead of us; Kingdom of Rationality, here we come!

(Yvain, thank you a million times for that sobering post!)

Comment author: [deleted] 07 May 2012 01:54:21PM *  2 points [-]

Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.

You know, there are countries where the intentional homicide rate is smaller than in John Derbyshire's country by nearly an order of magnitude.

Heck, even just knowing what their job is would screen off much of it.

There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.

That thing doesn't exist in all countries. Plus, I think the reason why you don't see that many two-digit-IQ people among (say) physics professors is not that they don't make it, it's that they don't even consider doing that, so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.

Comment author: Eugine_Nier 08 May 2012 07:11:45AM 1 point [-]

so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.

That's not the point. The point is that the black physics professor is less smart than the Jewish physics professor.

Comment author: Vaniver 26 April 2012 04:19:45AM 3 points [-]

Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour.

What if verbal ability and quantitative ability are often decoupled?

Comment author: [deleted] 07 May 2012 01:43:31PM *  2 points [-]

I wasn't talking about "verbal ability" (which, to the extent that it can be found out in ten minutes, correlates more with where someone grew up than with IQ), but about what they say, e.g. their reaction to finding out that I'm a physics student (though for this particular example there are lots of confounding factors), or what kinds of activities they enjoy.

Comment author: Vaniver 07 May 2012 05:26:02PM *  4 points [-]

If you're able to drive the conversation like that, you can get information about IQ, and that information may have a larger impact than race. But to "screen off" evidence means making that evidence conditionally independent: once you know their level of interest in physics, race would give you no information about their IQ. That isn't the case.

Imagine that all races have Gaussian IQ distributions with the same standard deviation, but different means, and consider just the population of people whose IQs are above 132 ('geniuses' for this comment). In such a model, the mean IQ of black geniuses will be smaller than the mean IQ of white geniuses, which will be smaller than the mean IQ of Jewish geniuses, so even knowing a lower bound for IQ won't screen off the evidence provided by race!
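
Vaniver's point can be checked by simulation. A sketch with invented group means (100 and 110, purely illustrative) and a common standard deviation of 15:

```python
import random

random.seed(1)
SD, CUTOFF = 15, 132

def genius_mean(mu, n=200000):
    """Mean value among samples from N(mu, SD) that exceed CUTOFF."""
    tail = [x for x in (random.gauss(mu, SD) for _ in range(n)) if x > CUTOFF]
    return sum(tail) / len(tail)

# Conditioning on the same cutoff does not equalize the groups: the
# group with the higher overall mean also has the higher mean above 132,
# so the lower bound fails to screen off group membership.
print(genius_mean(100), genius_mean(110))
```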

Comment author: [deleted] 07 May 2012 06:01:10PM 2 points [-]

Huh, sure, if the likelihood is a reversed Heaviside step. If the likelihood is itself a Gaussian, then the posterior is a Gaussian whose mean is the weighted average of that of the prior and that of the likelihood, weighted by the inverse squared standard deviations. So even if the st.dev. of the likelihood was half that of the prior for each race, the difference in posterior means would shrink by five times.
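
The precision-weighted update described here is the standard Gaussian conjugate result; a sketch with illustrative numbers (prior sd 15, observation sd 7.5, i.e. half):

```python
def posterior(mu_prior, sd_prior, mu_obs, sd_obs):
    """Gaussian prior x Gaussian likelihood: precisions (1/sd^2) add,
    and the posterior mean is the precision-weighted average of the means."""
    p_prior, p_obs = 1 / sd_prior ** 2, 1 / sd_obs ** 2
    mean = (mu_prior * p_prior + mu_obs * p_obs) / (p_prior + p_obs)
    return mean, (p_prior + p_obs) ** -0.5

# Two group priors 10 points apart, the same observation, observation sd
# half the prior sd: the prior gets weight 1 vs 4, so the 10-point gap
# between posterior means shrinks to ~2, i.e. by five times as claimed.
m_a, _ = posterior(100, 15, 120, 7.5)
m_b, _ = posterior(110, 15, 120, 7.5)
print(m_b - m_a)
```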

Comment author: Vaniver 07 May 2012 06:31:36PM *  4 points [-]

Right: there's lots of information out there that will narrow your IQ estimate of someone else more than their race will, like that they're a professional physicist or a member of MENSA, but evidence only becomes worthless when it's independent of the quantity you're interested in given the other things you know.

Comment author: CaveJohnson 24 April 2012 04:19:21PM *  4 points [-]

This is missing Racist4:

Someone whose preferences result in disparate impact.

Comment author: BillyOblivion 17 April 2012 11:32:29AM 2 points [-]

So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not?

I would also really question the validity of the Implicit Association Test. It says "Your data suggest a slight implicit preference for White People compared to Black People", which, given that blacks have been severely under-represented in my social sub-culture (Punk/Goth) for the last 27 years, the school I graduated from (Art School), and my professional environments (IT) for the last 20 years, is probably not inaccurate.

However, it also says "Your data suggest a slight implicit preference for Herman Cain compared to Barack Obama." Which is nonsense. I have a STRONG preference for Herman Cain over Barack Obama.

Comment author: Manfred 17 April 2012 01:10:19PM *  1 point [-]

So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not?

Looks like we need more "racism"s :D A common definition of racism that reflects the intuitions you bring up is "racism is prejudice plus power" (e.g., here), which isn't very useful from a decision-making point of view but which is very useful when looking at racism as a functional thing experienced by some group.

Comment author: cousin_it 12 April 2012 09:18:03AM 6 points [-]

Where would someone like Steve Sailer fit in this classification?

Comment author: GLaDOS 24 April 2012 04:16:10PM *  3 points [-]

Indeed, as strange as it might sound (but not to those who know what he usually blogs about), Steve Sailer seems to genuinely like black people more than average, and I wouldn't be surprised at all if a test showed he wasn't biased against them, or was less biased than the average white American.

He also doesn't seem like a Racist2 from the vast majority of his writing, and painting him as a Racist3 is plainly absurd.

Comment author: Eugine_Nier 09 April 2012 05:58:20PM 3 points [-]

You left out one common definition.

A "Racist0" is someone who has accurate priors about the behavior of people of different races.

Also, I don't see why calling Obama the "Food Stamp President" or otherwise criticizing his economic policy makes one a jerk, much less a "Racist2", unless one already believes that all criticism of Obama is racist by definition.

Comment author: CronoDAS 13 April 2012 08:27:41AM 1 point [-]

Unfortunately, it seems to me that most of the information that "race" provides is screened off by various things that are only weakly correlated with race, and it also seems to me that our badly-designed hardware doesn't update very well upon learning these things. For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"; it's probably easier to deal with this by having inaccurate priors than by updating properly.

Comment author: steven0461 16 April 2012 12:07:16AM 4 points [-]

For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"

I'm not sure that what you have in mind here is screening, at least in the causal diagrams sense. If I'm not mistaken, learning that someone is a college graduate screens off race for the purpose of predicting the causal effects of college graduation, but it doesn't screen off race for the purpose of predicting causes of college graduation (such as intelligence) and their effects. You're right, though, that even in the latter case learning that someone is a college graduate decreases the size of the update from learning their race. (At least given realistic assumptions. If 99% of cyan people have IQ 80 and 1% have IQ 140, and 99% of magenta people have IQ 79 and 1% have IQ 240, learning that someone is a college graduate suddenly makes it much more informative to learn their race. But that's not the world we live in; it's just to illustrate the statistics.)
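
steven0461's cyan/magenta illustration can be made concrete. To compute anything I have to add one assumption he leaves implicit: say only people with IQ >= 130 graduate college. Then:

```python
# Made-up distributions from the comment above: (probability, IQ) pairs.
pop = {"cyan":    [(0.99, 80), (0.01, 140)],
       "magenta": [(0.99, 79), (0.01, 240)]}

def mean_iq(colour, graduates_only=False):
    """Mean IQ of a group, optionally conditioned on graduating
    (assumed here to mean IQ >= 130)."""
    groups = pop[colour]
    if graduates_only:
        groups = [(p, iq) for p, iq in groups if iq >= 130]
    total = sum(p for p, _ in groups)
    return sum(p * iq for p, iq in groups) / total

# Unconditionally the group means differ by only ~0.01 points...
print(mean_iq("cyan"), mean_iq("magenta"))
# ...but among graduates the gap is 100 points, so learning someone's
# colour becomes far more informative after learning they graduated.
print(mean_iq("cyan", True), mean_iq("magenta", True))
```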

Comment author: Eugine_Nier 14 April 2012 04:23:56AM 3 points [-]

Unfortunately, it seems to me that most of the information that "race" provides is screened off by various things that are only weakly correlated with race,

Which are generally much harder to observe.

For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"

Um, Affirmative Action. Also tail ends of distributions.

Comment author: grendelkhan 15 April 2012 03:04:24PM 1 point [-]

Um, Affirmative Action. Also tail ends of distributions.

I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one's performance. (Though I've heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.)

Additionally, it seems that there's a lot of 'different justification, same conclusion' with regards to claims about black people. For instance, "black people are inherently stupid and lazy" becomes "black people don't have to meet the same standards for education". The actual example I saw was that people subconsciously don't like to hire black people (the Chicago resume study) because they present a risk of an EEOC lawsuit. (The annual risk of being involved in an EEOC lawsuit is on the order of one in a million.)

Comment author: Eugine_Nier 15 April 2012 10:32:30PM 2 points [-]

Additionally, it seems that there's a lot of 'different justification, same conclusion' with regards to claims about black people.

I think it's more a case of same observations, different proposed mechanisms.

Comment author: Desrtopa 15 April 2012 03:29:06PM 4 points [-]

I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one's performance. (Though I've heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.)

A quick google search isn't giving me an actual percentage, but I believe that students who're admitted to and attend college, but do not graduate, are still significantly in the minority. Even those who barely made it in mostly graduate, if not necessarily with good GPAs.

Comment author: BillyOblivion 17 April 2012 11:52:31AM 1 point [-]

One of the criticisms of colleges engaging in "AA"-type policies is that they often put someone in a slightly higher-level school (say Berkeley rather than Davis) than they really should be in, one which, because of their background, they are unprepared for. Not necessarily intellectually (they could be very bright), but in terms of things like study skills and the like.

There is sufficient data to suggest this should be looked at more thoroughly. In general it is better for someone to graduate from a "lesser" school than to drop out of a better one.

Comment author: TimS 09 April 2012 06:10:45PM *  1 point [-]

I'm honestly confused. You don't see why calling Obama a "Food Stamp President" is different from criticizing his economic policy?

I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton - even from people who disagreed with their economic policies for the same reasons they disagree with Obama's economic policies.

Comment author: Eugine_Nier 09 April 2012 06:59:16PM 1 point [-]

I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton - even from people who disagreed with their economic policies for the same reasons they disagree with Obama's economic policies.

Well, Bill Clinton had saner economic policies, but otherwise I would predict that phrase, or something similar, being used against a white politician.

Comment author: TimS 09 April 2012 08:08:40PM 1 point [-]

You haven't answered my question:

Given the way that public welfare codes for both "lazy" and "black" in the United States, do you think that "Food Stamp President" has the same implications as some other critique of Obama's economic policies (in terms of whether the speaker intended to invoke Obama's race and whether the speaker judges Obama differently than some other politician with substantially identical positions)?

Comment author: Random832 10 April 2012 08:18:35PM 4 points [-]

"public welfare codes for both "lazy" and "black" in the United States"

Taking your word on that, what "other critique of Obama's economic policies" are you imagining that would not have the same implications, unless you mean one that ignores public welfare entirely in favor of focusing on some other economic issue instead?

Comment author: TimS 11 April 2012 12:53:16AM *  1 point [-]

A political opponent of Obama might say:

Basic economics says that what you pay for, you get more of. Therefore, when you extend long-term unemployment benefits, you get more long-term unemployment.

or

The current tax rate is too far to the right on the Laffer curve

or

The health insurance purchase mandate is unprecedented, UnAmerican, and unConstitutional

edit: or

People who pay no net income tax (because of low income and earned income tax credits) are drains on American society

(end edit)

without me thinking that the political opponent was intending to invoke Obama's race in some way. None of these are actual quotes, but I think they are coherent assertions that disagree with Obama's economic or legal philosophy. Edit: I feel confident I could find actual quotes of equivalent content.

Comment author: Random832 11 April 2012 12:54:44PM 1 point [-]

Of course, none of the ones you suggested are actually about public welfare, in the sense of the government providing supplemental income for people who are unable to get jobs to provide themselves adequate income. So what we have is not a code word, but rather a code issue.

Except the first one, but with how you framed it as "public welfare codes for..." I don't see how that one wouldn't have the same connotations.

Comment author: Eugine_Nier 10 April 2012 12:14:44AM 3 points [-]

Well, yes: by finding enough "code words" you can make any criticism of Obama racist.

Comment author: TheOtherDave 10 April 2012 01:03:18AM 1 point [-]

Yes, that's certainly true.

I'm really curious now, though. What's your opinion about the intended connotations of the phrase "food stamp President"? Do you think it's intended primarily as a way of describing Obama's economic policies? His commitment to preventing hunger? His fondness for individual welfare programs? Something else?

Or, if you think the intention varies depending on the user, what connotations do you think Gingrich intended to evoke with it?

Or, if you're unwilling to speculate as to Gingrich's motives, what connotations do you think it evokes in a typical resident of, say, Utah or North Dakota?

Comment author: TheOtherDave 09 April 2012 04:22:36PM 3 points [-]

...and also useful for those among us who don't identify as "liberal-minded."

Comment author: Oscar_Cunningham 09 April 2012 06:31:28PM 1 point [-]

Surely one of the definitions of "racist" should contain something about thinking that some races are better than others. Or is that covered under "neo-Nazi"?

Comment author: thomblake 10 April 2012 07:34:13PM *  3 points [-]

I'm pretty sure that's covered under Racist1. Note the word "negative".

Though it's odd that Racist1 specifically refers to "minorities". The entire suite seems to miss folks that favor a "minority" race.

Comment author: CaveJohnson 24 April 2012 04:25:03PM *  4 points [-]

Not really. It is perfectly possible to be explicitly aware of one's racial preferences and not be bothered by having them (at least no more than one is bothered by liking salty food or green parks), yet not be a Nazi or prone to violence.

Indeed, I think a good argument can be made not only that large numbers of such people lived in the 19th and 20th centuries, but that we probably have millions of them living today in, say, a place like Japan.

And that they are mostly pretty decent and ok people.

Edit: Sorry! I didn't see the later comments already covering this. :)

Comment author: gjm 12 April 2012 09:43:10PM 1 point [-]

Negative subconscious attitudes aren't the same thing as (though they might cause or be caused by) conscious opinions that such-and-such people are inferior in some way.

Comment author: thomblake 12 April 2012 09:44:36PM 3 points [-]

Ah yes - it's extra-weird that someone isn't allowed in that framework to have conscious racist opinions but not be a jerk about it.

Comment author: Normal_Anomaly 12 April 2012 10:53:14PM 1 point [-]

If one has conscious racist opinions, or is conscious that one has unconscious racist opinions (has taken the IAT but doesn't explicitly believe negative things about blacks) but doesn't act on them, it's probably because one doesn't endorse them. I'd class such a person as a Racist1.

Comment author: thomblake 12 April 2012 10:56:53PM 5 points [-]

I don't think not being an "insensitive jerk" is the same as not acting on one's opinions.

For example, if I think that people who can't do math shouldn't be programmers, and I make sure to screen applicants for math skills, that's acting on my opinions. If I make fun of people with poor math skills for not being able to get high-paying programmer jobs, that's being an insensitive jerk.

Comment author: Eugine_Nier 09 April 2012 07:41:11PM 4 points [-]

Depends on what you mean by "better". There's a difference between taking the data on race and IQ seriously, and wanting to commit genocide.

Comment author: TheOtherDave 09 April 2012 08:17:07PM 1 point [-]

(blink)

Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?

Comment author: Eugine_Nier 09 April 2012 08:40:02PM 3 points [-]

Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?

That's the question I was implicitly asking Oscar.

Comment author: wedrifid 09 April 2012 09:02:41PM 2 points [-]

Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?

Most obvious plausible available meaning for 'better' that fits: "Most satisfies my average utilitarian values".

(Yes, most brands of simple utilitarianism reduce to psychopathy - but since people still advocate them we can consider the meaning at least 'available'.)

Comment author: Stephanie_Cunnane 09 April 2012 02:15:40AM 12 points [-]

From this moment forward, remember this: What you do is infinitely more important than how you do it. Efficiency is still important, but it is useless unless applied to the right things.

-Tim Ferriss, The 4-Hour Workweek

Comment author: CronoDAS 13 April 2012 08:18:15AM 3 points [-]

There is nothing so useless as doing efficiently what should not be done at all.

-- Peter Drucker

(I've quoted this line several times before.)

Comment author: wedrifid 13 April 2012 09:12:00AM 3 points [-]

There is nothing so useless as doing efficiently what should not be done at all.

Sure there is. Doing inefficiently what should not be done at all is even more useless. At least if you do it efficiently you can go ahead and do something else sooner.

It seems to me that efficiency is just as useful doing things that should not be done as it is other times, for a fixed amount of doing stuff that shouldn't be done.

Comment author: thomblake 13 April 2012 03:05:44PM 7 points [-]

Depends on the kind of efficiency, I guess.

If someone is systematically murdering people for an hour, I'd prefer they not get as much murdering done as they could.

Comment author: atorm 09 April 2012 02:18:17AM 2 points [-]

There are two worlds: the world that is, and the world that should be. We live in one, and must create the other, if it is ever to be. -paraphrased from Jim Butcher's Turn Coat

Comment author: gwern 07 April 2012 05:47:58PM 7 points [-]

"The human understanding when it has once adopted an opinion draws all things else to support and agree with it.

And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusion may remain inviolate."

--Francis Bacon, Novum Organum (1620) <!-- 1905 (Ellis, R. & Spedding, J., Trans.). London: Routledge. -->

Comment author: NancyLebovitz 07 April 2012 03:25:35PM *  2 points [-]

Civil wars are bitter because

People make their recollections fit with their suffering.

---Thucydides

Found here.

Comment author: Multiheaded 06 April 2012 08:20:31PM *  18 points [-]

[Hitler] has grasped the falsity of the hedonistic attitude to life. Nearly all western thought since the last war, certainly all "progressive" thought, has assumed tacitly that human beings desire nothing beyond ease, security, and avoidance of pain. In such a view of life there is no room, for instance, for patriotism and the military virtues. The Socialist who finds his children playing with soldiers is usually upset, but he is never able to think of a substitute for the tin soldiers; tin pacifists somehow won’t do. Hitler, because in his own joyless mind he feels it with exceptional strength, knows that human beings don’t only want comfort, safety, short working-hours, hygiene, birth-control and, in general, common sense; they also, at least intermittently, want struggle and self-sacrifice, not to mention drums, flag and loyalty-parades.

However they may be as economic theories, Fascism and Nazism are psychologically far sounder than any hedonistic conception of life. The same is probably true of Stalin’s militarized version of Socialism. All three of the great dictators have enhanced their power by imposing intolerable burdens on their peoples. Whereas Socialism, and even capitalism in a grudging way, have said to people "I offer you a good time," Hitler has said to them "I offer you struggle, danger and death," and as a result a whole nation flings itself at his feet.

(George Orwell's review of Mein Kampf)

(well, we have videogames now, yet... we gotta make them better! more visceral!)

Comment author: Oligopsony 11 April 2012 05:44:17AM 3 points [-]

I don't see that that's true. Germany loved Hitler when he was giving them job security and easy victories and became much less popular once the struggle and danger and death arrived on the scene.

Comment author: Multiheaded 11 April 2012 12:59:11PM *  2 points [-]

They grumbled, but 95% of them obeyed, worked, killed and died up until the spring of 1945. A huge number of Germans certainly believed that sticking with the Nazis until the conflict's end was a much lesser evil compared to another national humiliation on the scale of Versailles. And look at the impressive use to which he and Goebbels put evaporative cooling of group beliefs to radicalize the faithful after the July plot. Purging a few malcontents led to a significant increase in zeal and loyalty even as things were getting visibly worse and worse.

Comment author: Rhwawn 06 April 2012 07:54:30PM 17 points [-]

By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race.

Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)

Comment author: Eugine_Nier 06 April 2012 09:55:07PM 8 points [-]

So the interesting and substantive question is not whether one thinks the fit will survive and thrive better than the unfit. They will. The interesting question is what the rules are that determine what is "fit."

-- David Henderson on Social Darwinism

Comment author: spqr0a1 05 April 2012 11:44:50PM *  6 points [-]

To prize every thing according to its real use ought to be the aim of a rational being. There are few things which can much conduce to happiness, and, therefore, few things to be ardently desired. He that looks upon the business and bustle of the world, with the philosophy with which Socrates surveyed the fair at Athens, will turn away at last with his exclamation, 'How many things are here which I do not want'.

--Samuel Johnson, The Adventurer, #119, December 25, 1753.

Comment author: Stabilizer 06 April 2012 01:33:18AM 3 points [-]

Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.

-C. Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852.

Comment author: arundelo 06 April 2012 01:05:02AM *  2 points [-]

Billings: [...] What do you think, Peters? What are the chances that this "jewpacabra" is real?

Peters: I'm estimating somewhere around point zero zero zero zero zero zero zero zero one percent.

Billings: (Sighs) We can't afford to take that chance. [...]

-- Trey Parker, Jewpacabra

(This is at about five minutes fifty seconds into the episode.)

Edit: Related Sequence post.

Comment author: Pavitra 05 April 2012 12:59:38PM 11 points [-]

In the real world things are very different. You just need to look around you. Nobody wants to die that way. People die of disease and accident. Death comes suddenly and there is no notion of good or bad. It leaves, not a dramatic feeling but great emptiness. When you lose someone you loved very much you feel this big empty space and think, 'If I had known this was coming I would have done things differently.'

Yoshinori Kitase

Comment author: gwern 07 April 2012 08:39:20PM *  1 point [-]

Context: Aeris dies. (Spoilers!)

Comment author: gRR 07 April 2012 09:34:32PM *  6 points [-]

It would be interesting to calculate the total utility of an author wantonly murdering a universally beloved character. May turn out to be quite a crime...

Comment author: Nornagest 12 April 2012 04:47:45AM 3 points [-]

Well, it's certainly not limited to killing off characters, but people have been writing about emotional release as a response to tragedy in drama for quite a long time. Generally it's thought of as a good thing, if not necessarily a pleasant one, and I'm inclined to agree with this analysis; people go into fiction looking for an emotional response, and the enduring popularity of tragic storytelling suggests that they aren't exclusively looking for emotions generally regarded as positive.

Content warnings pointing to what a work's going for might not be a bad idea from a utilitarian standpoint, though. I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.

Comment author: CronoDAS 13 April 2012 08:15:48AM 3 points [-]

I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.

I've had to leave the room because I get embarrassed just watching characters in that kind of show...

Comment author: Desrtopa 12 April 2012 04:23:33AM 3 points [-]

Well, one of my favorite authors is infamous for doing this, and I for one think his works are the better for it. It certainly hasn't prevented them from becoming very popular.

Comment author: Bugmaster 05 April 2012 05:48:37AM *  14 points [-]

-- So... if they've got armor on, it's a battle !
-- And who told you that ?
-- A knight...
-- How'd you know he was a knight ?
-- Well... that's 'cause... he'd got armor on ?
-- You don't have to be a knight to buy armor. Any idiot can buy armor.
-- How do you know ?
-- 'Cause I sold armor.

-Game of Thrones (TV show)

Comment author: Stephanie_Cunnane 05 April 2012 04:09:46AM 17 points [-]

I believe I am accurate in saying that educators too are interested in learnings which make a difference. Simple knowledge of facts has its value. To know who won the battle of Poltava, or when the umpteenth opus of Mozart was first performed, may win $64,000 or some other sum for the possessor of this information, but I believe educators in general are a little embarrassed by the assumption that the acquisition of such knowledge constitutes education. Speaking of this reminds me of a forceful statement made by a professor of agronomy in my freshman year in college. Whatever knowledge I gained in his course has departed completely, but I remember how, with World War I as his background, he was comparing factual knowledge with ammunition. He wound up his little discourse with the exhortation, "Don't be a damned ammunition wagon; be a rifle!"

-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)

Comment author: VKS 04 April 2012 10:23:55AM 32 points [-]

Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark, "Perhaps there are thoughts we cannot think," surprise you?

-- Richard Hamming

Comment author: majus 10 April 2012 11:31:02PM 5 points [-]

In Pinker's book "How the Mind Works" he asks the same question. His observation (as I recall) was that much of our apparently abstract logical ability comes from mapping abstractions like math onto evolved subsystems that served different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly onto any of those subsystems.

Comment author: Eliezer_Yudkowsky 04 April 2012 08:10:47PM 25 points [-]

It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.

Comment author: AspiringKnitter 05 April 2012 06:05:40AM 23 points [-]

If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:

I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?

Comment author: Viliam_Bur 05 April 2012 09:15:23AM *  31 points [-]

When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something another can't.

With computation it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about the relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in the worst case, it would be implemented as a simulation of another language running its native implementation.

There are some technical details, though. Simulating another program is slower and requires more memory than the original program. So it could be argued that on given hardware you could write a program in language X which uses all the memory and all available time, so it does not necessarily follow that you can write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in the available time a computer can do any finite number of computation steps, but not an infinite number of steps. The memory is also unlimited, but in finite time you can only manage to use a finite amount of it.)

So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others can not. (Then, there are other interesting levels of abstraction which care about time and space complexity of algorithms.)

Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from a programming language (e.g. in Pascal you forbid the "while" and "repeat" commands, and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have this potential, and the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple -- a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is write a Turing machine simulator in it, which is usually very simple.
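To make the universality point concrete, here is a minimal Turing machine simulator in Python. This is my own illustrative sketch, not anything from the comment above: the transition-table format and the example "bit-flipping" machine are assumptions chosen for brevity.

```python
# Minimal Turing machine simulator.
# delta maps (state, symbol) -> (new_state, write_symbol, move),
# where move is "L" or "R" and "_" is the blank symbol.
def run_tm(delta, tape, state="start", accept="halt", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            # Read back the visited portion of the tape, dropping blanks.
            return "".join(cells[i] for i in sorted(cells)).strip("_")
        state, write, move = delta[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit exceeded")

# Example machine: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Proving a new language universal then reduces to writing `run_tm` (or its equivalent) in that language.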

Now back to the original discussion... Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen and paper) we can simulate a computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that ability to compute isn't the same as ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.

Comment author: AspiringKnitter 05 April 2012 07:51:31PM 6 points [-]

Wow. That's really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)

Could you also explain why the HPMoR universe isn't Turing computable? The time-travel involved seems simple enough to me.

Comment author: thomblake 05 April 2012 08:57:48PM 7 points [-]

Not a complete answer, but here's commentary from a ffdn review of Chapter 14:

Kevin S. Van Horn
7/24/10 . chapter 14
Harry is jumping to conclusions when he tells McGonagall that the Time-Turner isn't even Turing computable. Time travel simulation is simply a matter of solving fixed-point equation f(x) = x. Here x is the information sent back in time, and f is a function that maps the information received from the future to the information that gets sent back in time. If a solution exists at all, you can find it to any desired degree of accuracy by simply enumerating all possible rational values of x until you find one that satisfies the equation. And if f is known to be both continuous and have a convex compact range, then the Brouwer fixed-point theorem guarantees that there will be a solution.

So the only way I can see that simulating the Time-Turner wouldn't be Turing computable would be if the physical laws of our universe give rise to fixed-point equations that have no solutions. But the existence of the Time-Turner then proves that the conditions leading to no solution can never arise.
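The enumeration strategy the review describes can be sketched in a few lines of Python. The particular function f and the finite search space here are hypothetical stand-ins for whatever the physics would actually dictate:

```python
# Toy illustration of the fixed-point view of time travel:
# f maps "information received from the future" to "information sent back".
# A consistent timeline is any x with f(x) == x; here we brute-force a
# small discrete space, as the review suggests.
def find_fixed_point(f, candidates):
    for x in candidates:
        if f(x) == x:
            return x
    return None  # no consistent timeline in this space

# Hypothetical example: the message is an integer mod 7, and causality
# transforms it as f(x) = (3*x + 4) % 7.
f = lambda x: (3 * x + 4) % 7
consistent = find_fixed_point(f, range(7))  # finds 5, since f(5) == 5
```

When `find_fixed_point` returns `None`, that corresponds to the "no solution" case in the review's last paragraph, which the in-story existence of the Time-Turner would rule out.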

Comment author: johnswentworth 09 April 2012 10:52:42PM 3 points [-]

There's also the problem of an infinite number of possible solutions.

Comment author: Nick_Tarleton 06 April 2012 02:04:48AM 7 points [-]

I got the impression that what "not Turing-computable" meant is that there's no way to only compute what 'actually happens'; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the 'false' timelines.

Comment author: tgb 10 April 2012 11:29:12PM 2 points [-]

Sounds rather like our own universe, really.

Comment author: AspiringKnitter 05 April 2012 11:14:25PM 2 points [-]

Ah. It's math.

:) Thanks.

Comment author: Nornagest 05 April 2012 06:42:10AM *  3 points [-]

A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you've got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there's a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.

Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it's related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman's Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the "Best Textbooks on Every Subject" thread to see if there's a consensus on another.

Comment author: jeremysalwen 05 April 2012 06:18:04AM 2 points [-]
Comment author: Vaniver 04 April 2012 08:35:43PM 6 points [-]

brains are Turing complete modulo the finite memory

What does that statement mean in the context of thoughts?

That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my "verbal manipulation" module to do formal logic, that doesn't mean I have a formal logic module.

Any defects in my ability to repurpose might be specific to me: I might able to think the thought "A-> B, ~A, therefore ~B" with the flavor of trueness, and another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.

Aren't there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?

Comment author: Eliezer_Yudkowsky 04 April 2012 09:33:35PM 14 points [-]

It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn't stupid.

Comment author: komponisto 05 April 2012 09:57:52PM 3 points [-]

[That human brains are Turing-complete] means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus.

It doesn't mean nothing; it means that people (like machines) can be taught to do things without understanding them.

(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. "Understanding that 1+1 = 2" is not the same thing as being able to output "2" to the query "1+1=".)

Comment author: Elithrion 05 April 2012 09:39:20PM 1 point [-]

I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers' parts), teaching skill, and time. I'm not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.

Comment author: DanArmak 23 April 2012 01:38:19PM 1 point [-]

I can't imagine how hard it is to learn to program if you don't instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don't. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.

I realize I must have learned the basics at some point, although I don't remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I'd call "learning" in other subjects I studied.

When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It's novel, but I understand it intuitively and in most cases quickly.

When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the "real thing", to accept that some things I could describe I couldn't duplicate by building them from scratch no matter how much time I had and what materials and tools. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.

And yet I've seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I've had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, it can be very difficult to learn.

Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it's easy for me to believe that - at the extreme - for many people elementary programming is impossible to learn, period. And the same should apply to math and any other "abstract" subject for which biologically normal people don't have dedicated thinking modules in their brains.

Comment author: David_Gerard 08 April 2012 09:12:38AM *  8 points [-]

I fear you're committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They're often highly intelligent (though of course the diagnosis is "intelligent elsewhere, unintelligent at maths"), good at words and social things, but literally unable to calculate 17+17 more accurately than "somewhere in the twenties or thirties" or "I have no idea" without machine assistance. I didn't believe it either until I saw it.

Comment author: Eliezer_Yudkowsky 05 April 2012 09:43:25PM 12 points [-]

Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn't get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don't think she would ever understand what was going on in matrix calculus, period, barring "teaching methods" that involve neural reprogramming or gain of additional hardware.

Comment author: NancyLebovitz 24 April 2012 07:45:18AM 1 point [-]

What was your impression of her intelligence otherwise?

Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.

Comment author: matt 13 April 2012 03:50:38AM *  14 points [-]

Your claim is too large for the evidence you present in support of it.

Teaching someone math who is not good at math is hard, but "will in all probability never understand matrix calculus"!? I don't think you're using the Try Harder.

Assume teaching is hard (list of weak evidence: it's a three-year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners; it's massively subject to the typical mind fallacy and most practitioners don't know that fallacy exists). That you, "in your youth" (without having studied teaching), "once" tutored a woman whom you couldn't teach very well… doesn't support any very strong conclusion.

It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I'm willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.

Comment author: DanArmak 23 April 2012 01:13:30PM 1 point [-]

list of weak evidence

Some of it is weak evidence for the hardness claim (3 years degree), some against (all the rest). Does that match what you meant?

Comment author: matt 24 April 2012 07:28:04AM *  1 point [-]

I'd intended a different meaning of "hard". On reflection your interpretation seems a very reasonable inference from what I wrote.

What I meant: Teaching is hard enough that you shouldn't expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won't take you far down the path to mastery.

(Thank you for your comment - it got me thinking.)

Comment author: wedrifid 13 April 2012 05:37:18AM 4 points [-]

humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners

What are the experiments that are generally ignored?

Comment author: Elithrion 05 April 2012 11:52:31PM 5 points [-]

No, I haven't, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you're describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it.

In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn't make it true, but I'm not sure on what grounds I should prefer the "impossibility" hypothesis to the "very very slow learning" hypothesis.

Comment author: Incorrect 05 April 2012 09:54:53PM *  5 points [-]

I can't imagine how hard it would be to learn math without the concept of referential transparency.

Comment author: thomblake 05 April 2012 02:36:48AM 1 point [-]

The belief that Turing-complete = understanding-complete is false. It just isn't stupid.

I'm not sure what you mean by understanding-complete, but remember that the Turing-complete system is both the operator and any machinery they are manipulating.

Comment author: Incorrect 05 April 2012 02:17:17AM 1 point [-]

So you are considering a man in a Chinese room to lack understanding?

Comment author: J_Taylor 05 April 2012 02:37:41AM 13 points [-]

Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)

Comment author: Will_Newsome 04 April 2012 09:13:02PM *  6 points [-]

Aren't there people who can hear sounds but not music?

FWIW I've read a study that says about 50% of people can't tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn't the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can't hear music.

Comment author: [deleted] 05 April 2012 04:05:43PM 16 points [-]

http://languagelog.ldc.upenn.edu/nll/?p=2074

It shocked the hell out of me, too.

Comment author: Dmytry 05 April 2012 04:55:31PM *  4 points [-]

This is weird. It is hard for me to hear the difference in the cadence, but it's crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference.

Maybe they lost something in retelling here? Made up new stimuli for which it doesn't work because of harmonics or something?

Or maybe it's just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, I'm not hearing what you're saying, I'm washing the dishes. Though I've no idea how well other people can hear while they're washing the dishes; maybe I care too much not to pretend to listen when I can't hear.

This needs proper study.

Comment author: Scottbert 09 April 2012 03:12:11PM 3 points [-]

Ditto for me -- The difference between the two chords is crystal clear, but in the cadence I can barely hear it.

I'm not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I've studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn't notice the difference at all. Freaky. I know how that post-doc felt when she couldn't hear the difference in the chords.

Comment author: arundelo 05 April 2012 11:22:03PM *  4 points [-]

The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).

Each of the following two recordings is a sequence of eight C major or C minor chords:

Each of the following two recordings is a sequence of eight "cadences" -- groups of four chords that are either

F B♭ C F

or

F B♭ Cminor F

Edit: Here's a listing of the chords in all four sound files.

Edit 2 (2012-Apr-22): I added another recording that contains these chords:

F B♭ C F
F B♭ Cmi F

repeated over and over, while the balance between the voices is varied, from "all voices roughly equal" to "only the second voice from the top audible". The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it's not foregrounded.
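For anyone who'd rather generate their own test stimuli than rely on a recording, here's a minimal sketch of how one might synthesize a C major versus C minor triad as pure sine waves. (Assumptions not in the comment above: standard A440 equal-tempered frequencies, 16-bit mono WAV output, and arbitrary file names.)

```python
import math
import struct
import wave

RATE = 44100      # samples per second
DURATION = 1.0    # seconds per chord

# Equal-tempered pitches, A440 tuning
C4, EB4, E4, G4 = 261.63, 311.13, 329.63, 392.00

def chord_samples(freqs, seconds=DURATION, rate=RATE):
    """Sum equal-amplitude sine waves, one per note, normalized to [-1, 1]."""
    n = int(seconds * rate)
    return [
        sum(math.sin(2 * math.pi * f * t / rate) for f in freqs) / len(freqs)
        for t in range(n)
    ]

def write_wav(path, samples, rate=RATE):
    """Write mono 16-bit PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        ))

# C major = C E G; C minor = C E-flat G (only the middle note differs)
major = chord_samples([C4, E4, G4])
minor = chord_samples([C4, EB4, G4])

write_wav("c_major.wav", major)
write_wav("c_minor.wav", minor)
```

The only difference between the two chords is the third (E vs. E♭), which is exactly the "second voice from the top" being foregrounded in the recordings above.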

Comment author: tgb 05 April 2012 06:25:54PM 1 point [-]

I am with you on easily telling the two apart in the original chords but being unable to reliably tell the difference in the cadence version.

Comment author: Stephanie_Cunnane 04 April 2012 03:27:55AM 32 points [-]

Another learning which cost me much to recognize, can be stated in four words. The facts are friendly.

It has interested me a great deal that most psychotherapists, especially the psychoanalysts, have steadily refused to make any scientific investigation of their therapy, or to permit others to do this. I can understand this reaction because I have felt it. Especially in our early investigations I can well remember the anxiety of waiting to see how the findings came out. Suppose our hypotheses were disproved! Suppose we were mistaken in our views! Suppose our opinions were not justified! At such times, as I look back, it seems to me that I regarded the facts as potential enemies, as possible bearers of disaster. I have perhaps been slow in coming to realize that the facts are always friendly. Every bit of evidence one can acquire, in any area, leads one that much closer to what is true. And being closer to the truth can never be a harmful or dangerous or unsatisfying thing. So while I still hate to readjust my thinking, still hate to give up old ways of perceiving and conceptualizing, yet at some deeper level I have, to a considerable degree, come to realize that these painful reorganizations are what is known as learning, and that though painful they always lead to a more satisfying because somewhat more accurate way of seeing life. Thus at the present time one of the most enticing areas for thought and speculation is an area where several of my pet ideas have not been upheld by the evidence, I feel if I can only puzzle my way through this problem that I will find a much more satisfying approximation to the truth. I feel sure the facts will be my friends.

-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)

Comment author: Document 10 May 2012 01:29:03AM 1 point [-]

Another learning which cost me much to recognize, can be stated in four words. The facts are friendly.

A while ago I saw a good post or quote on LW on the problem of confusing a phrase one uses to encapsulate an insight with the insight itself. Unfortunately I don't remember where.

Comment author: Dorikka 05 April 2012 04:54:55PM 7 points [-]

Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick

Comment author: wedrifid 05 April 2012 04:56:36PM 2 points [-]

Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick

Even then we could potentially nitpick even further, depending on what is meant by 'average'.

Comment author: Ezekiel 04 April 2012 11:45:46PM 1 point [-]

And being closer to the truth can never be a harmful or dangerous or unsatisfying thing.

Knowing about evolution is pretty cool, but I'd be a lot more satisfied if I could believe that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind (and that my nation - and, come to that, tribe - was even more pinnacle than everyone else).

Comment author: TheOtherDave 05 April 2012 12:10:39AM 1 point [-]

...and if it turned out that believing that particular falsehood didn't have consequences that left you less satisfied.

Comment author: Ezekiel 05 April 2012 10:03:08AM 4 points [-]

Okay, hypothetical: Dying human. They believed in God their entire life and have lived as basically decent according to their own ethics, and therefore think they're going to be blissing out for the rest of infinity. They will believe this for the next couple of minutes, and then stop existing.

Would you, given the opportunity, dispel their illusion?

Comment author: TheOtherDave 05 April 2012 01:43:04PM 2 points [-]

Depends on what I expected the result of doing so to be.

If I expected the result to be that they are more unhappy than they otherwise would be for the rest of their lives, with no other compensating benefit (which is certainly the conclusion your hypothetical encourages), then no, I wouldn't.

If I expected the result to be either that they are happier than they otherwise would be for the rest of their lives, or that there is some other compensating benefit to them knowing what will actually happen, then yes, I would.

Why do you ask?