Rationality Quotes April 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
In recent years, I've come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.
--Kip W
Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn't function if it weren't so.
-Mark Rosenfelder (http://zompist.com/chance.htm)
-- Bjork
--Razib Khan, source
-- Fahrenheit 451
I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.
Tips for dealing with people with big egos:
On politeness:
People who are exempted:
Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).
In saying this, I don't know whether I'm expanding on your point or disagreeing with it.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
I appreciate your kind words komponisto! You inspire me to live up to them.
I'll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks, and
3) honoring the obligations of implicit social alliances when an ally is attacked.
I endorse this and have been trying to get better about #3 myself.
The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it's failing to reflect on whether I endorse A. If I do neither, then the community doesn't degenerate into tribal warfare, it degenerates into chaos.
Admittedly, chaos can be more fun, but I don't really endorse it.
All of that said, I do recognize that explicitly talking about "social alliances" (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn't help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).
(I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.)
Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.)
I blame the stroke, though.
Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.
It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
Instrumental.
Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don't know how I could begin to itemize it.
To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.
Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.
If you mean to say further that it doesn't affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo "ally."
People's estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don't engage in discussion, because I no longer trust that they will engage reliably.
Not that I can think of, but honestly this question bewilders me, so it's possible that you're asking about something I'm not even considering. What kind of alliances do you have in mind?
It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid, with "being a good discussion partner" a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)
I didn't have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be for example that you're looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.
This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn't seem to be a very accurate description of reality. A lot of information - and information I consider important at that - can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive 'timidity' can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you're not timid seems to be a mistake.
In my own experience - from back when I was timid in the extreme - the sort of "sticking up for", jumping to the defense against (unfair or undesirable) aggression is one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.
Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.
I find that my brain doesn't automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven't found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.
I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I'd be curious to find out.
I really like your illustration here. To the extent that this is what you were trying to convey by "3)" in your analysis of wedrifid's style then I endorse it. I wouldn't have used the "alliances" description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I'm happy with it as a simple model.
Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of "Sam", "Pat" and "A". In particular there are many behaviors "A" that the execution of will immediately place the victim of said behavior into the role of "ally that I am obliged to support".
Yeah, agreed about the distracting phrasing. I find it's a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.
Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.
Also, if you have an articulable model of how you make those judgments I'd be interested, especially if it uses more socially acceptable language than mine does.
Edit: Also, I'm really curious as to the reasoning of whoever downvoted that. I commit to preserving that person's anonymity if they PM me about their reasoning.
Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.
Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.
This discussion is off-topic for the "Rationality Quotes" thread, but...
If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:
Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.
Maybe I'll eventually write something like that. Not yet.
Oh, you want a Quest, not a goal. :-)
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.
I nominate this as the Less Wrong Summer Challenge, for everybody.
(One modification I'd make: it shouldn't necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)
And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.
You just need a reasonably friendly tone. I have a bunch of karma, and I haven't posted any articles yet (though I'm working on it).
Indeed, that would work if karma was merely the goal. But chaosmosis expressed a desire for a "painful and embarrassing process", meaning that the ante and risk must be higher.
One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.
I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.
Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.
Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.
Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)
Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.
That's like getting a black belt in karate by buying one from the martial arts shop. It isn't karmawhoring unless you're getting karma from real people who really thought your comments worth upvoting.
“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?
It is good to have one's comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute's reward -- money -- is of some actual use.
Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.
This would indeed count as "minimal contribution", but still sounds like a lot of work...
Nitpick: cryptography solves this much more neatly.
Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.
Factorization doesn't enter into it - to precommit to a message that you will later reveal publicly, publish a hash of the (salted) message.
But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).
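The salted-hash commitment these comments describe is only a few lines in practice. A minimal sketch in Python (the strategy text is an arbitrary placeholder):

```python
import hashlib
import os

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit to a message: publish the digest now, keep (salt, message) secret."""
    salt = os.urandom(16)  # a random salt stops people brute-forcing short messages
    digest = hashlib.sha256(salt + message).digest()
    return digest, salt

def verify(digest: bytes, salt: bytes, message: bytes) -> bool:
    """Reveal step: anyone can recompute the hash and check that it matches."""
    return hashlib.sha256(salt + message).digest() == digest

digest, salt = commit(b"karma strategy: answer questions in old sequence threads")
# ...post the digest publicly, carry out the strategy, then reveal salt + message...
assert verify(digest, salt, b"karma strategy: answer questions in old sequence threads")
```

Publishing only the digest has no side effects beyond "this person committed to something", which is the point being made about transparency above.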
My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)
IME, per-comment EV is way higher in the HP:MoR discussion threads.
It so is. Karmawhoring in those is easy.
This suggests measuring posts for comment EV.
Now that is an interesting concept. I like where this subthread is going.
Interesting comparisons to other systems involving currency come to mind.
EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear that they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--
...okay, perhaps some sleep is in order first.
It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.
This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.
You mean to the extent that any problem at all is a rationality problem, or something else?
It's a bias, as far as I'm concerned, and something that needs to be overcome. People with egos can be right, but if one can't deal with the fact that they're either right or wrong regardless of their egotism, then one is that much slower to update.
It is what we would call an "instrumental rationality" problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos... which you seem to be taking steps towards now!
-- Mark Rippetoe, Starting Strength
Sample: men who come to this guy to get stronger, I assume?
Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I've heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he's filtering those out properly.
--1943 Disney cartoon
--Alan Belkin From the Stock Market to Music, via the Theory of Evolution
This was just the first bit that stood out as LW-relevant; he also briefly mentions cognitive bias and touches on the possible benefits of cognitive science to the arts.
--Jonathan Haidt, source
He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don't pay enough attention to it (and refuse to acknowledge their own sacred/profane areas).
I have more to say about his values theory, I'll post some thoughts later.
Update: I wrote a little something, now I'm just gonna ask Konkvistador whether he thinks it's neutral enough or too political for LW.
Please make sure you do. I suspect it will be interesting. :)
Aaron Sloman
— Jack Vance, The Languages of Pao
Shorter version:
-- Terence, Phormio
My favorite:
The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...
-- Terry Pratchett, Feet of Clay
Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:
Sounds like Vimes doesn't like Sherlock Holmes much.
Gee, you think?
Well, the quote made me think of this. Now that I looked up that post I notice that it is downvoted, so perhaps it isn't relevant. But the behavior that Vimes expresses distrust of in the Pratchett quote is pretty much the exact behavior that is used to show off how intelligent/perceptive Holmes is, and which the poster wants to use as an example for rationalists.
Bruce Sterling
If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can't predict the direction in which regression to the mean will occur if your data set is a single point.
The following all have different answers:
(The answer is 39700; I'm probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.)
(The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.)
(The answer is some number higher than 39700, because I'm no longer an absolute beginner.)
A shortcut for making less-biased predictions, taking base averages into account.
Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"
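The shortcut (the procedure Kahneman gives in Thinking, Fast and Slow) amounts to anchoring on the base average and moving toward the intuitive, intensity-matched guess only in proportion to the evidence's correlation with the outcome. A sketch in Python, with made-up numbers for the Julie problem (the 0.3 correlation is an assumption, not a measured value):

```python
def regressive_prediction(baseline, intuitive, correlation):
    """Start from the base average and move toward the intensity-matched
    guess only as far as the evidence's correlation with the outcome warrants."""
    return baseline + correlation * (intuitive - baseline)

# Julie's GPA: average senior GPA 3.0, intensity-matched guess from
# "read fluently at four" 3.8, assumed evidence-outcome correlation 0.3.
print(round(regressive_prediction(3.0, 3.8, 0.3), 2))  # 3.24
```

With zero correlation you just predict the base rate; with perfect correlation you keep the intuitive guess unchanged.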
--Nietzsche
Chinese proverb, meaning "the onlooker sees things more clearly", or literally, "the player lost, the spectator clear"
In personal development workshops, the saying is, "the one with the mike in their hand is the last to see it." Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.
Chinese proverb, "three men make a tiger", referring to a semi-mythological event during the Warring States period:
-- Wikipedia
--Oswald Spengler, The Decline of the West
That sounds deep, but it has nothing to do with rationality.
Not really, for example it is actually pretty clearly connected to fun theory.
On specificity and sneaking in connotations; useful for the liberal-minded among us:
-celandine13
How about:
Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.
Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)
Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.
Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?
That (1) only makes sense if there is a “standard” definition of racist (and it's based on what people believe rather than/as well as what they do). The point of the celandine13 post was indeed that there's no such thing.
The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here. And given that we're running on corrupted hardware, I suspect that someone who does try to perform “Bayesian inference that somehow involves probabilities conditioned on the race of a person” ends up subconsciously double-counting evidence and therefore ends up with less accurate results than somebody who doesn't. (As for cases when the evidence from race is not so easy to screen off... well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.)
I have seen accusations of racism as responses to people pointing that out.
Also, according to the U.S. Supreme Court even if race is screened off, your actions can still be racist or something.
In real life, you don't have the luxury of gathering forensic evidence on everyone you meet.
I'm not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.
Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.
There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.
...
I facepalmed. Really, Eric? Sorry, I don't think that a moral realist is perceptive enough to the nuances and ethical knots involved to be a judge on this issue. I don't know, he might be an excellent scientist, but it's extremely stupid to be so rash when you're attempting serious contrarianism.
Yep, let's all try to overcome bias really really hard; there's only one solution, one desirable state, there's a straight road ahead of us; Kingdom of Rationality, here we come!
(Yvain, thank you a million times for that sobering post!)
You know, there are countries where the intentional homicide rate is smaller than in John Derbyshire's country by nearly an order of magnitude.
That thing doesn't exist in all countries. Plus, I think the reason why you don't see that many two-digit-IQ people among (say) physics professors is not that they don't make it, it's that they don't even consider doing that, so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.
That's not the point. The point is that the black physics professor is less smart than the Jewish physics professor.
What if verbal ability and quantitative ability are often decoupled?
I wasn't talking about "verbal ability" (which, to the extent that can be found out in ten minutes, correlates more with where someone grew up than with IQ), but about what they say, e.g. their reaction to finding out that I'm a physics student (though for this particular example there are lots of confounding factors), or what kinds of activities they enjoy.
If you're able to drive the conversation like that, you can get information about IQ, and that information may have a larger impact than race. But to "screen off" evidence means making that evidence conditionally independent- once you knew their level of interest in physics, race would give you no information about their IQ. That isn't the case.
Imagine that all races have Gaussian IQ distributions with the same standard deviation, but different means, and consider just the population of people whose IQs are above 132 ('geniuses' for this comment). In such a model, the mean IQ of black geniuses will be smaller than the mean IQ of white geniuses which will be smaller than the mean IQ of Jewish geniuses- so even knowing a lower bound for IQ won't screen off the evidence provided by race!
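The truncation claim is easy to check numerically, using the closed form E[X | X > c] = mu + sigma * pdf(z) / P(X > c) for a normal distribution (the group means below are invented purely to illustrate the statistics, as in the comment's hypothetical):

```python
import math

def mean_above(mu, sigma, cutoff):
    """E[X | X > cutoff] for X ~ N(mu, sigma^2), via the inverse Mills ratio."""
    z = (cutoff - mu) / sigma
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(X > cutoff)
    return mu + sigma * pdf / tail

# Same sd, different hypothetical group means; everyone counted is above 132,
# yet the conditional means still differ, in the same order as the priors:
for mu in (95, 100, 105):
    print(mu, round(mean_above(mu, 15, 132), 1))
```

So conditioning on a lower bound narrows the gap between groups but does not eliminate it, which is exactly the "won't screen off" point.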
Huh, sure, if the likelihood is a reversed Heaviside step. If the likelihood is itself a Gaussian, then the posterior is a Gaussian whose mean is the weighted average of that of the prior and that of the likelihood, weighted by the inverse squared standard deviations. So even if the st.dev. of the likelihood was half that of the prior for each race, the difference in posterior means would shrink by five times.
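That precision-weighted update, and the factor-of-five shrinkage, can be verified directly. A sketch in Python (all numbers illustrative):

```python
def posterior_mean(prior_mu, prior_sd, obs, obs_sd):
    """Conjugate normal update: precision-weighted average of prior and observation."""
    w_prior, w_obs = 1 / prior_sd**2, 1 / obs_sd**2
    return (w_prior * prior_mu + w_obs * obs) / (w_prior + w_obs)

# Two group priors 15 points apart, same observation; the likelihood sd is
# half the prior sd, so the observation carries four times the weight:
a = posterior_mean(100, 15, 130, 7.5)
b = posterior_mean(115, 15, 130, 7.5)
print(b - a)  # the 15-point prior gap shrinks by a factor of five, to 3
```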
Right- there's lots of information out there that will narrow your IQ estimate of someone else more than their race will, like that they're a professional physicist or member of MENSA, but evidence only becomes worthless when it's independent of the quantity you're interested in given the other things you know.
This is missing Racist4:
Someone whose preferences result in disparate impact.
So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not?
I would also really question the validity of the Implicit Association Test. It says "Your data suggest a slight implicit preference for White People compared to Black People.", which given that blacks have been severely under-represented in my social sub-culture for the last 27 years (Punk/Goth), the school I graduated from (Art School), and my professional environments (IT) for the last 20 years is probably not inaccurate.
However, it also says "Your data suggest a slight implicit preference for Herman Cain compared to Barack Obama." Which is nonsense. I have a STRONG preference for Herman Cain over Barack Obama.
Looks like we need more "racism"s :D A common definition of racism that reflects the intuitions you bring up is "racism is prejudice plus power," (e.g., here) which isn't very useful from a decision-making point of view but which is very useful when looking at this racism as a functional thing experienced by some group.
Where would someone like Steve Sailer fit in this classification?
Indeed as strange as it might sound (but not to those who know what he usually blogs about) Steve Sailer seems to genuinely like black people more than average and I wouldn't be surprised at all if a test showed he wasn't biased against them or was less biased than the average white American.
He also doesn't seem like racist2 from the vast majority of his writing; painting him as racist3 is plain absurd.
You left out one common definition.
Also I don't see why calling Obama the "Food Stamp President" or otherwise criticizing his economic policy makes one a jerk, much less a "Racist2", unless one already believes that all criticism of Obama is racist by definition.
Unfortunately, it seems to me that most of the information that "race" provides is screened off by various things that are only weakly correlated with race, and it also seems to me that our badly-designed hardware doesn't update very well upon learning these things. For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"; it's probably easier to deal with this by having inaccurate priors than by updating properly.
I'm not sure that what you have in mind here is screening, at least in the causal diagrams sense. If I'm not mistaken, learning that someone is a college graduate screens off race for the purpose of predicting the causal effects of college graduation, but it doesn't screen off race for the purpose of predicting causes of college graduation (such as intelligence) and their effects. You're right, though, that even in the latter case learning that someone is a college graduate decreases the size of the update from learning their race. (At least given realistic assumptions. If 99% of cyan people have IQ 80 and 1% have IQ 140, and 99% of magenta people have IQ 79 and 1% have IQ 240, learning that someone is a college graduate suddenly makes it much more informative to learn their race. But that's not the world we live in; it's just to illustrate the statistics.)
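The parenthetical's toy numbers can be run directly; unconditionally the two groups are nearly indistinguishable, but conditioning on graduation (assuming, for illustration only, that just the rare high-IQ sliver graduates) makes group membership highly informative:

```python
def expected_iq(dist):
    """Mean of a discrete IQ distribution given as {iq: probability}."""
    return sum(p * iq for iq, p in dist.items())

cyan = {80: 0.99, 140: 0.01}     # toy numbers from the comment above
magenta = {79: 0.99, 240: 0.01}

# Unconditionally, group membership is nearly uninformative about IQ:
print(round(expected_iq(cyan) - expected_iq(magenta), 2))  # -0.01

# If only the high-IQ sliver graduates, the gap among graduates is
# 240 - 140 = 100 points:
print(240 - 140)
```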
Which are generally much harder to observe.
Um, Affirmative Action. Also tail ends of distributions.
I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one's performance. (Though I've heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.)
Additionally, it seems that there's a lot of 'different justification, same conclusion' with regards to claims about black people. For instance, "black people are inherently stupid and lazy" becomes "black people don't have to meet the same standards for education". The actual example I saw was that people subconsciously don't like to hire black people (the Chicago resume study) because they present a risk of an EEOC lawsuit. (The annual risk of being involved in an EEOC lawsuit is on the order of one in a million.)
I think it's more a case of same observations, different proposed mechanisms.
A quick google search isn't giving me an actual percentage, but I believe that students who're admitted to and attend college, but do not graduate, are still significantly in the minority. Even those who barely made it in mostly graduate, if not necessarily with good GPAs.
One of the criticisms of colleges engaging in "AA" type policies is that they often will put someone in a slightly higher level school (say Berkeley rather than Davis) than they really should be in and which because of their background they are unprepared for. Not necessarily intellectually--they could be very bright, but in terms of things like study skills and the like.
There is sufficient data to suggest this should be looked at more thoroughly. In general it is better for someone to graduate from a "lesser" school than to drop out of a better one.
I'm honestly confused. You don't see why calling Obama a "Food Stamp President" is different from criticizing his economic policy?
I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton - even from people who disagreed with their economic policies for the same reasons they disagree with Obama's economic policies.
Well, Bill Clinton had saner economic policies, but otherwise I would predict that phrase, or something similar, being used against a white politician.
You haven't answered my question:
Given the way that public welfare codes for both "lazy" and "black" in the United States, do you think that "Food Stamp President" has the same implications as some other critique of Obama's economic policies (in terms of whether the speaker intended to invoke Obama's race and whether the speaker judges Obama differently than some other politician with substantially identical positions)?
"public welfare codes for both "lazy" and "black" in the United States"
Taking your word on that, what "other critique of Obama's economic policies" are you imagining that would not have the same implications, unless you mean one that ignores public welfare entirely in favor of focusing on some other economic issue instead?
A political opponent of Obama might say:
or
or
edit: or
(end edit)
without me thinking that the political opponent was intending to invoke Obama's race in some way. None of these are actual quotes, but I think they are coherent assertions that disagree with Obama's economic or legal philosophy. Edit: I feel confident I could find actual quotes of equivalent content.
Of course, none of the ones you suggested are actually about public welfare, in the sense of the government providing supplemental income for people who are unable to get jobs to provide themselves adequate income. So what we have is not a code word, but rather a code issue.
Except the first one, but with how you framed it as "public welfare codes for..." I don't see how that one wouldn't have the same connotations.
Well, yes; by finding enough "code words" you can make any criticism of Obama racist.
Yes, that's certainly true.
I'm really curious now, though. What's your opinion about the intended connotations of the phrase "food stamp President"? Do you think it's intended primarily as a way of describing Obama's economic policies? His commitment to preventing hunger? His fondness for individual welfare programs? Something else?
Or, if you think the intention varies depending on the user, what connotations do you think Gingrich intended to evoke with it?
Or, if you're unwilling to speculate as to Gingrich's motives, what connotations do you think it evokes in a typical resident of, say, Utah or North Dakota?
...and also useful for those among us who don't identify as "liberal-minded."
Surely one of the definitions of "racist" should contain something about thinking that some races are better than others. Or is that covered under "neo-Nazi"?
I'm pretty sure that's covered under Racist1. Note the word "negative".
Though it's odd that Racist1 specifically refers to "minorities". The entire suite seems to miss folks that favor a "minority" race.
Not really. It is perfectly possible to be explicitly aware of one's racial preferences and not really be bothered by having them (at least no more than one is bothered by liking salty food or green parks), yet not be a Nazi or prone to violence.
Indeed, I think a good argument can be made not only that a large number of such people lived in the 19th and 20th centuries, but that we probably have millions of them living today in, say, a place like Japan.
And that they are mostly pretty decent and ok people.
Edit: Sorry! I didn't see the later comments already covering this. :)
Negative subconscious attitudes aren't the same thing as (though they might cause or be caused by) conscious opinions that such-and-such people are inferior in some way.
Ah yes - it's extra-weird that someone isn't allowed in that framework to have conscious racist opinions but not be a jerk about it.
If one has conscious racist opinions, or is conscious that one has unconscious racist opinions (has taken the IAT but doesn't explicitly believe negative things about blacks) but doesn't act on them, it's probably because one doesn't endorse them. I'd class such a person as a Racist1.
I don't think not being an "insensitive jerk" is the same as not acting on one's opinions.
For example, if I think that people who can't do math shouldn't be programmers, and I make sure to screen applicants for math skills, that's acting on my opinions. If I make fun of people with poor math skills for not being able to get high-paying programmer jobs, that's being an insensitive jerk.
Depends on what you mean by "better". There's a difference between taking the data on race and IQ seriously, and wanting to commit genocide.
(blink)
Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?
That's the question I was implicitly asking Oscar.
Most obvious plausible available meaning for 'better' that fits: "Most satisfies my average utilitarian values".
(Yes, most brands of simple utilitarianism reduce to psychopathy - but since people still advocate them we can consider the meaning at least 'available'.)
-Tim Ferriss, The 4-Hour Workweek
-- Peter Drucker
(I've quoted this line several times before.)
Sure there is. Doing inefficiently what should not be done at all is even more useless. At least if you do it efficiently you can go ahead and do something else sooner.
It seems to me that efficiency is just as useful doing things that should not be done as it is other times, for a fixed amount of doing stuff that shouldn't be done.
Depends on the kind of efficiency, I guess.
If someone is systematically murdering people for an hour, I'd prefer they not get as much murdering done as they could.
--Francis Bacon, Novum Organum (1620) <!-- 1905 (Ellis, R. & Spedding, J., Trans.). London: Routledge. -->
Civil wars are bitter because
---Thucydides
Found here.
(George Orwell's review of Mein Kampf)
(well, we have videogames now, yet... we gotta make them better! more visceral!)
I don't see that that's true. Germany loved Hitler when he was giving them job security and easy victories and became much less popular once the struggle and danger and death arrived on the scene.
They grumbled, but 95% of them obeyed, worked, killed and died up until the spring of 1945. A huge number of Germans certainly believed that sticking with the Nazis until the conflict's end was a much lesser evil compared to another national humiliation on the scale of Versailles. And look at the impressive use to which he and Goebbels put evaporative cooling of group beliefs to radicalize the faithful after the July plot: purging a few malcontents led to a significant increase in zeal and loyalty even as things were getting visibly worse and worse.
Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)
-- David Henderson on Social Darwinism
--Samuel Johnson, The Adventurer, #119, December 25, 1753.
-C. Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852.
-- Trey Parker, Jewpacabra
(This is at about five minutes fifty seconds into the episode.)
Edit: Related Sequence post.
Yoshinori Kitase
Context: Aeris dies. (Spoilers!)
It would be interesting to calculate the total utility of an author wantonly murdering a universally beloved character. May turn out to be quite a crime...
Well, it's certainly not limited to killing off characters, but people have been writing about emotional release as a response to tragedy in drama for quite a long time. Generally it's thought of as a good thing, if not necessarily a pleasant one, and I'm inclined to agree with this analysis; people go into fiction looking for an emotional response, and the enduring popularity of tragic storytelling suggests that they aren't exclusively looking for emotions generally regarded as positive.
Content warnings pointing to what a work's going for might not be a bad idea from a utilitarian standpoint, though. I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.
I've had to leave the room because I get embarrassed just watching characters in that kind of show...
Well, one of my favorite authors is infamous for doing this, and I for one think his works are the better for it. It certainly hasn't prevented them from becoming very popular.
-Game of Thrones (TV show)
-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)
In Pinker's book "How the Mind Works" he asks the same question. His observation (as I recall) was that much of our apparently abstract logical abilities are done by mapping abstractions like math onto evolved subsystems with different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly to any of those subsystems.
It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.
If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:
I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?
When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can't.
With computation it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in worst case, it would be implemented as a simulation of another language running its native implementation.
There are some technical details, though. Simulating another program is slower and requires more memory than the original program. So it could be argued that on a given hardware you could do a program in language X which uses all the memory and all available time, so it does not necessarily follow that you can do the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in available time a computer can do any finite number of computation steps; but it cannot do an infinite number of steps. The memory is also unlimited, but in a finite time you can only manage to use a finite amount of memory.)
So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others can not. (Then, there are other interesting levels of abstraction which care about time and space complexity of algorithms.)
Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from a programming language (e.g. in Pascal you forbid the "while" and "repeat" commands, and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have this potential, and you lose the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple -- a famous example is a Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is to write a Turing machine simulator, which is usually very simple.
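To make "write a Turing machine simulator" concrete, here is a minimal sketch in Python; the rule format and the example machine are my own illustrative choices, not a standard encoding:

```python
# A minimal Turing machine simulator. rules maps (state, symbol) to
# (symbol_to_write, head_move, next_state); "_" is the blank symbol.
def run(rules, tape, state="start", pos=0, max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flip, "1011"))  # -> 0100
```

Any language in which you can write a simulator like this one (and in which the tape can grow without a fixed bound) is universal in the sense described above.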
Now back to the original discussion... Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen and paper) we can simulate a computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that ability to compute isn't the same as ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
Wow. That's really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)
Could you also explain why the HPMoR universe isn't Turing computable? The time-travel involved seems simple enough to me.
Not a complete answer, but here's commentary from a ffdn review of Chapter 14:
There's also the problem of an infinite number of possible solutions.
I got the impression that what "not Turing-computable" meant is that there's no way to only compute what 'actually happens'; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the 'false' timelines.
Sounds rather like our own universe, really.
Ah. It's math.
:) Thanks.
A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you've got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there's a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.
Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it's related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman's Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the "Best Textbooks on Every Subject" thread to see if there's a consensus on another.
https://en.wikipedia.org/wiki/Turing_completeness
What does that statement mean in the context of thoughts?
That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my "verbal manipulation" module to do formal logic, that doesn't mean I have a formal logic module.
Any defects in my ability to repurpose might be specific to me: I might be able to think the thought "A-> B, ~A, therefore ~B" with the flavor of trueness, and another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.
Aren't there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?
It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn't stupid.
It doesn't mean nothing; it means that people (like machines) can be taught to do things without understanding them.
(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. "Understanding that 1+1 = 2" is not the same thing as being able to output "2" to the query "1+1=".)
I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers' parts), teaching skill, and time. I'm not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.
I can't imagine how hard it is to learn to program if you don't instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don't. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.
I realize I must have learned the basics at some point, although I don't remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I'd call "learning" in other subjects I studied.
When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It's novel, but I understand it intuitively and in most cases quickly.
When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the "real thing", and to accept that some things I could describe, I couldn't duplicate by building them from scratch, no matter how much time I had or what materials and tools. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.
And yet I've seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I've had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, it can be very difficult to learn.
Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it's easy for me to believe that - at the extreme - for many people elementary programming is impossible to learn, period. And the same should apply to math and any other "abstract" subject for which biologically normal people don't have dedicated thinking modules in their brains.
I fear you're committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They're often highly intelligent (though of course the diagnosis is "intelligent elsewhere, unintelligent at maths"), good at words and social things, but literally unable to calculate 17+17 more accurately than "somewhere in the twenties or thirties" or "I have no idea" without machine assistance. I didn't believe it either until I saw it.
Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn't get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don't think she would ever understand what was going on in matrix calculus, period, barring "teaching methods" that involve neural reprogramming or gain of additional hardware.
What was your impression of her intelligence otherwise?
Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.
Your claim is too large for the evidence you present in support of it.
Teaching someone math who is not good at math is hard, but "will in all probability never understand matrix calculus"!? I don't think you're using the Try Harder.
Assume teaching is hard (list of weak evidence: it's a three year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners; it's massively subject to the typical mind fallacy and most practitioners don't know that fallacy exists). That you, "in your youth" (without having studied teaching), "once" tutored a woman who you couldn't teach very well… doesn't support any very strong conclusion.
It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I'm willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.
Some of it is weak evidence for the hardness claim (3 years degree), some against (all the rest). Does that match what you meant?
I'd intended a different meaning of "hard". On reflection your interpretation seems a very reasonable inference from what I wrote.
What I meant: Teaching is hard enough that you shouldn't expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won't take you far down the path to mastery.
(Thank you for your comment - it got me thinking.)
What are the experiments that are generally ignored?
No, I haven't, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you're describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it.
In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn't make it true, but I'm not sure on what grounds I should prefer the "impossibility" hypothesis to the "very very slow learning" hypothesis.
I can't imagine how hard it would be to learn math without the concept of referential transparency.
I'm not sure what you mean by understanding-complete, but remember that the turing-complete system is both the operator and any machinery they are manipulating.
So you are considering a man in a Chinese room to lack understanding?
Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)
FWIW I've read a study that says about 50% of people can't tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn't the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can't hear music.
http://languagelog.ldc.upenn.edu/nll/?p=2074
It shocked the hell out of me, too.
This is weird. It is hard for me to hear the difference in the cadence, but crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference.
Maybe they lost something in the retelling here? Made up new stimuli for which it doesn't work because of harmonics or something?
Or maybe it's just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, "I'm not hearing what you're saying, I'm washing the dishes." Though I've no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don't hear.
This needs proper study.
Ditto for me -- The difference between the two chords is crystal clear, but in the cadence I can barely hear it.
I'm not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I've studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn't notice the difference at all. Freaky. I know how that post-doc felt when she couldn't hear the difference in the chords.
The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).
Each of the following two recordings is a sequence of eight C major or C minor chords:
Each of the following two recordings is a sequence of eight "cadences" -- groups of four chords that are either
F B♭ C F
or
F B♭ Cminor F
Edit: Here's a listing of the chords in all four sound files.
Edit 2 (2012-Apr-22): I added another recording that contains these chords:
repeated over and over, while the balance between the voices is varied, from "all voices roughly equal" to "only the second voice from the top audible". The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it's not foregrounded.
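For anyone following along without audio: the one-note difference between the major and minor chords above can also be shown numerically. A sketch under equal temperament (the A4 = 440 Hz tuning is a standard assumption, not something from the recordings, which use an acoustic instrument):

```python
# Equal-temperament frequencies, assuming the standard A4 = 440 Hz tuning.
A4 = 440.0

def freq(semitones_from_a4):
    # Each semitone multiplies the frequency by 2**(1/12).
    return A4 * 2 ** (semitones_from_a4 / 12)

C4 = freq(-9)  # roughly 261.63 Hz

# C major = C E G; C minor = C Eb G. Only the third differs.
c_major = {"C": C4, "E":  C4 * 2 ** (4 / 12), "G": C4 * 2 ** (7 / 12)}
c_minor = {"C": C4, "Eb": C4 * 2 ** (3 / 12), "G": C4 * 2 ** (7 / 12)}
```

The root and fifth are identical; the only change is the middle note, shifted down one semitone (a frequency ratio of 2^(1/12), about 5.9%). That single, fairly small shift is what the "happy/sad" distinction hangs on, which perhaps makes it less surprising that many listeners miss it in context.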
I am with you on easily telling the two apart in the original chords but being unable to reliably tell the difference in the cadence version.
-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)
A while ago I saw a good post or quote on LW on the problem of confusing a phrase one uses to encapsulate an insight with the insight itself. Unfortunately I don't remember where.
Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick
Even then we could potentially nitpick even further, depending on what is meant by 'average'.
Knowing about evolution is pretty cool, but I'd be a lot more satisfied if I could believe that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind (and that my nation - and, come to that, my tribe - was even more pinnacle than everyone else).
...and if it turned out that believing that particular falsehood didn't have consequences that left you less satisfied.
Okay, hypothetical: Dying human. They believed in God their entire life and have lived as basically decent according to their own ethics, and therefore think they're going to be blissing out for the rest of infinity. They will believe this for the next couple of minutes, and then stop existing.
Would you, given the opportunity, dispel their illusion?
Depends on what I expected the result of doing so to be.
If I expected the result to be that they are more unhappy than they otherwise would be for the rest of their lives with no other compensating benefit (which is certainly the conclusion your hypothetical encourages), then no I wouldn't.
If I expected the result to be either that they are happier than they otherwise would be for the rest of their lives, or that there is some other compensating benefit to them knowing what will actually happen, then yes I would.
Why do you ask?