Here's the new thread for posting quotes, with the usual rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LW/OB
  • No more than 5 quotes per person per monthly thread, please.

I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.

Paul Dirac

0Manfred12y
Excellent quote.

A few years into this book, I was diagnosed as diabetic and received a questionnaire in the mail. The insurance carrier stated that diabetics often suffer from depression and it was worried about me. One of the questions was “Do you think about death?” Yes, I do. “How often?” the company wanted to know. “Yearly? Monthly? Weekly? Daily?” And if daily, how many times per day? I dutifully wrote in, “About 70 times per day.” The next time I saw my internist, she told me the insurer had recommended psychotherapy for my severe depression. I explained to her why I thought about death all day—merely an occupational hazard—and she suggested getting therapy nonetheless. I thought, fine, it might help with the research.

The therapist found me tragically undepressed, and I asked her if she could help me design a new life that would maximize the few years that I had left. After all, one should have a different life strategy at sixty than at twenty. She asked why I thought I was going to die and why I had such a great fear of death. I said, I am going to die. It’s not a fear; it’s a reality. There must be some behavior that could be contraindicated for a man my age but other normally dangerous

... (read more)
6John_Maxwell12y
You're going to die. Or maybe not.
9Nisan12y
I like the first video, but I wish it ended at 4:20. It reminds me a lot of Ecclesiastes, which is a refreshingly honest essay about the meaning of life, with the moral "and therefore you should do what God wants you to do" tacked on at the end by an anonymous editor.

On counter-signaling, how not to do it:

US police investigated a parked car with a personalized plate reading "SMUGLER". They found the vehicle, packed with 24 lb (11 kg) of narcotics, parked near the Canadian border at a hotel named "The Smugglers' Inn." Police believed the trafficker thought that being so obvious would deter the authorities.

-- The Irish Independent, "News In Brief"

Maybe the guy had been reading too much Edgar Allan Poe? As a child, I loved "The Purloined Letter" and tried to play that trick on my sister - taking something from her and hiding it "in plain sight". Of course, she found it immediately.

ETA: it was a girl, not a guy.

5RobertLumley12y
I find it highly unlikely that this is the whole story. Surely the police are not licensed to investigate a car based solely on its vanity plate and where it was parked...

You are probably right that more information drew police attention to the car, but "near the border" gets one most of the way to legally justified. In the 1970s, the US Supreme Court explicitly approved a permanent checkpoint approximately 50 miles north of the Mexican border.

7RobertLumley12y
Well that's a rather depressing piece of law...

There are big differences between "a study" and "a good study" and "a published study" and "a study that's been independently confirmed" and "a study that's been independently confirmed a dozen times over." These differences are important; when a scientist says something, it's not the same as the Pope saying it. It's only when dozens and hundreds of scientists start saying the same thing that we should start telling people to guzzle red wine out of a fire hose.

Chris Bucholz

6soreff12y
Mostly agreed. If I were to stand on a soapbox and say "light with a wavelength of 523.4371 nm is visible to the human eye", it would fall into the category of an unsubstantiated claim by a single person. But it is implied by the general knowledge that the human visual range is from roughly 400 nm to roughly 700 nm, and that has been confirmed by anyone who has looked at a spectrum with even crude wavelength calibration.
2Document12y
Shouldn't that say that it is the same?

Another learning which cost me much to recognize, can be stated in four words. The facts are friendly.

It has interested me a great deal that most psychotherapists, especially the psychoanalysts, have steadily refused to make any scientific investigation of their therapy, or to permit others to do this. I can understand this reaction because I have felt it. Especially in our early investigations I can well remember the anxiety of waiting to see how the findings came out. Suppose our hypotheses were disproved! Suppose we were mistaken in our views! Suppose our opinions were not justified! At such times, as I look back, it seems to me that I regarded the facts as potential enemies, as possible bearers of disaster. I have perhaps been slow in coming to realize that the facts are always friendly. Every bit of evidence one can acquire, in any area, leads one that much closer to what is true. And being closer to the truth can never be a harmful or dangerous or unsatisfying thing. So while I still hate to readjust my thinking, still hate to give up old ways of perceiving and conceptualizing, yet at some deeper level I have, to a considerable degree, come to realize that these painful reor

... (read more)
9Dorikka12y
Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick
2wedrifid12y
Even then we could potentially nitpick even further, depending on what is meant by 'average'.
-2Stephanie_Cunnane12y
Excellent point.
2Document12y
A while ago I saw a good post or quote on LW on the problem of confusing a phrase one uses to encapsulate an insight with the insight itself. Unfortunately I don't remember where.
1Ezekiel12y
Knowing about evolution is pretty cool, but I'd be a lot more satisfied if I could believe that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind (and that my nation - and, come to that, tribe - was even more pinnacle than everyone else).
1TheOtherDave12y
...and if it turned out that believing that particular falsehood didn't have consequences that left you less satisfied.
6Ezekiel12y
Okay, hypothetical: Dying human. They believed in God their entire life and have lived as basically decent according to their own ethics, and therefore think they're going to be blissing out for the rest of infinity. They will believe this for the next couple of minutes, and then stop existing. Would you, given the opportunity, dispel their illusion?
3TheOtherDave12y
Depends on what I expected the result of doing so to be. If I expected the result to be that they are more unhappy than they otherwise would be for the rest of their lives with no other compensating benefit (which is certainly the conclusion your hypothetical encourages), then no I wouldn't. If I expected the result to be either that they are happier than they otherwise would be for the rest of their lives, or that there is some other compensating benefit to them knowing what will actually happen, then yes I would. Why do you ask?
4Ezekiel12y
Because this is (to my mind) an example of a situation where the facts aren't friendly and the truth is harmful - thus (hopefully) justifying my objection to the original quote.
3TheOtherDave12y
OK. Thanks for clarifying.
-3JulianMorrison12y
Dispel all their illusions, including the one that assigned negative utility to unavoidable dying. There are better things to do with 2 minutes than expecting fun you won't receive.
5Ben_Welchner12y
If you know of any illusions that give inevitably ceasing to exist negative utility to someone leading a positive-utility life, I would love to have them dispelled for me.
0JulianMorrison12y
Sorry for the slow reply. Hmm. I may be a bit biased because I don't really have a high valuation on being alive as such (which is to say utility[X] is nearly the same as utility[X and Julian is alive] for me, all other things being equal - it's why I am not signed up for cryonics). However I think that any utility calculus that negatively values the fun you're not going to have when inevitably dead is as silly as negatively valuing the fun you didn't get to have because said events preceded your birth, and you inevitably can't extend your life into the past. You get more chance to fulfil your values in the real world by making use of your 2 minutes than by anticipating values that are not going to happen. And I do very much place utility on my values being fulfilled in a real, rather than self deceptive way.
0TimS12y
Yes, the whole statement has an implicit "In the real world" premise. I'd be happy if I had a magic wand that could violate the second law of thermodynamics, but in the real world . . .
0Ezekiel12y
I wasn't clear. Believing that would make me happy even if it wasn't true. There's no reason to assume reality would be nice enough to only hand us facts that we find satisfying. If you happen to have a brain that finds the process of learning more satisfying than any possible falsehood, then that's great... But I don't think many people have that advantage.
4TimS12y
There's a substantial minority in the community that dislikes the Litany of Gendlin, so you have plenty of company here. But even granting the premise that believing true things conflicts with being happy, believing true things has been useful for achieving every other type of goal. So it seems like you are endorsing trading off achievement of other goals in order to maximize happiness. Without challenging your decision to adopt particular terminal values, I am unsure if your chosen tradeoff is sustainable.
2Ezekiel12y
I'm not endorsing that, for exactly the reason you said: knowing stuff, on average, will let you achieve your goals. The original quote, though, stated that the truth is "never unsatisfying", which seemed to me to be a false statement.
4TheOtherDave12y
You sound pretty confident that, if you believed that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind, and that your nation/tribe was even more pinnacle than everyone else, you would be happier than you are now. Can you clarify your reasons for believing that? I mean, I grew up with a lot of people who believe that, and as a class they didn't seem noticeably happier than the people who didn't, so I'm inclined to doubt it. But I'm convinceable.
1Ezekiel12y
You got me, since during the time I did believe that I was a lot less happy than I am now, because that falsehood was part of a whole set of falsehoods which led to annoying obligations. But I do distinctly remember being satisfied with knowing the ultimate goal of the universe and my place in it, and how realising the truth made me feel unsatisfied. The statement "the truth is never an unsatisfying thing" seems to be affect-heuristic reasoning: going from "truth is useful" to "truth is good" to "truth always feels good to know".
0TheOtherDave12y
Sure. To the extent that you're simply arguing that the initial quote overreaches, I'm not disagreeing with you. But you seemed to be making more positive claims about the value of ignorance.

Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark, "Perhaps there are thoughts we cannot think," surprise you?

-- Richard Hamming

It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.

If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:

I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?

When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can't.

With computation it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about the relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in the worst case, it would be implemented as a simulation of another language running its native implementation.

There are some technical details, though. Simulating another program is slower and requires more memory than the original program. So it could be argued that on a given hardware you could do a program in language X which uses all the memory and all available time, so it does not necessarily follow that you can do the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precise... (read more)
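A minimal sketch of that point in Python (the helper name run_bf and the fixed tape size are just illustration): an interpreter for Brainfuck, a deliberately tiny language that is nonetheless Turing complete given unbounded memory, hosted inside Python exactly as described above.

```python
# Sketch: Python simulating another (Turing-complete) language.
# Brainfuck has only eight commands, yet with an unbounded tape it can
# compute anything Python can -- here it runs, more slowly, inside Python.

def run_bf(program, inp=""):
    tape = [0] * 30000          # the simulated machine's memory (finite here,
    ptr = 0                     # echoing "modulo the finite memory")
    pc = 0
    out = []
    inp = list(inp)

    # Pre-compute matching bracket positions for the loops.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == ',':
            tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return "".join(out)

# The classic "Hello World!" program in the simulated language:
hello = ("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
         ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.")
print(run_bf(hello))
```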

9AspiringKnitter12y
Wow. That's really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :) Could you also explain why the HPMoR universe isn't Turing computable? The time-travel involved seems simple enough to me.
8thomblake12y
Not a complete answer, but here's commentary from a ffdn review of Chapter 14:
9Nick_Tarleton12y
I got the impression that what "not Turing-computable" meant is that there's no way to only compute what 'actually happens'; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the 'false' timelines.
1tgb12y
Sounds rather like our own universe, really.
4johnswentworth12y
There's also the problem of an infinite number of possible solutions.
0faul_sname12y
The number of solutions is finite but (very, very, mind-bogglingly) large.
2AspiringKnitter12y
Ah. It's math. :) Thanks.
4Nornagest12y
A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you've got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there's a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.

Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it's related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman's Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the "Best Textbooks on Every Subject" thread to see if there's a consensus on another.
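A bare-bones sketch of that model in Python (the transition table, a binary incrementer, is made up for illustration): the whole machine is just a lookup table over (state, symbol) pairs acting on a tape, here a dictionary standing in for the two-way-infinite tape.

```python
# Sketch of the abstract model: a Turing machine as a transition table
# over (state, symbol) pairs, acting on a tape that is unbounded in both
# directions (modeled as a dictionary from position to symbol).

from collections import defaultdict

def run_tm(rules, tape_str, start, accept, blank="_"):
    tape = defaultdict(lambda: blank, enumerate(tape_str))
    pos, state = 0, start
    while state != accept:
        symbol = tape[pos]
        new_symbol, move, new_state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
        state = new_state
    lo, hi = min(tape), max(tape)
    return "".join(tape[i] for i in range(lo, hi + 1)).strip(blank)

# Example machine (made up for illustration): increment a binary number.
# "right": walk to the rightmost digit; "carry": add 1, propagating carries.
rules = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
}

print(run_tm(rules, "1011", start="right", accept="done"))  # -> 1100
```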
0MarkusRamikin12y
Curious, does "memory space" mean something more than just "memory"?
8wedrifid12y
Just a little more specific. Some people may hear "memory" and associate it with, say, the duration of their memories rather than how much can be physically held. For example, when a human is said to have a 'really good memory' we don't tend to be trying to make a claim about the theoretical maximum amount of stuff they could remember.
4Nornagest12y
No, although either or both might be a little misleading depending on what connotations you attach to it: an idealized Turing machine stores all its state on a rewritable tape (or several tapes, but that's equivalent to the one-tape version) of symbols that's infinite in both directions. You could think of that as analogous to both memory and disk, or to whatever the system you're actually working with uses for storage.
0MarkusRamikin12y
Right, I know that. Was just curious why the extra verbiage in a post meant to explain something.
2Nornagest12y
Because it's late and I'm long-winded. I'll delete it.
3jeremysalwen12y
https://en.wikipedia.org/wiki/Turing_completeness

brains are Turing complete modulo the finite memory

What does that statement mean in the context of thoughts?

That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my "verbal manipulation" module to do formal logic, that doesn't mean I have a formal logic module.

Any defects in my ability to repurpose might be specific to me: I might be able to think the thought "A -> B, ~A, therefore ~B" with the flavor of trueness, and another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.

Aren't there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?

It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn't stupid.

5komponisto12y
It doesn't mean nothing; it means that people (like machines) can be taught to do things without understanding them. (They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. "Understanding that 1+1 = 2" is not the same thing as being able to output "2" to the query "1+1=".)
3Elithrion12y
I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers' parts), teaching skill, and time. I'm not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.

Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn't get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don't think she would ever understand what was going on in matrix calculus, period, barring "teaching methods" that involve neural reprogramming or gain of additional hardware.

Your claim is too large for the evidence you present in support of it.

Teaching someone math who is not good at math is hard, but "will in all probability never understand matrix calculus"!? I don't think you're using the Try Harder.

Assume teaching is hard (list of weak evidence: it's a three year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners; it's massively subject to the typical mind fallacy and most practitioners don't know that fallacy exists). That you, "in your youth" (without having studied teaching), "once" tutored a woman who you couldn't teach very well… doesn't support any very strong conclusion.

It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I'm willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.

7wedrifid12y
What are the experiments that are generally ignored?
2DanArmak12y
Some of it is weak evidence for the hardness claim (3 years degree), some against (all the rest). Does that match what you meant?
2matt12y
I'd intended a different meaning of "hard". On reflection your interpretation seems a very reasonable inference from what I wrote. What I meant: Teaching is hard enough that you shouldn't expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won't take you far down the path to mastery. (Thank you for your comment - it got me thinking.)
7Elithrion12y
No, I haven't, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you're describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it. In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn't make it true, but I'm not sure on what grounds I should prefer the "impossibility" hypothesis to the "very very slow learning" hypothesis.
5Incorrect12y
I can't imagine how hard it would be to learn math without the concept of referential transparency.
0MixedNuts12y
Not all that hard if that's the only sticking point. I acquired it quite late myself.
2NancyLebovitz12y
What was your impression of her intelligence otherwise? Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.
0DanArmak12y
This anecdote gives very little information on its own. Can you describe your experience teaching math to other people - the audience, the investment, the methods, the outcome? Do you have any idea whether that one woman eventually succeeded in learning some of what you couldn't teach her, and if so, how? (ETA: I do agree with the general argument about people who are not good at math. I'm only saying this particular story doesn't tell us much about that particular woman, because we don't know how good you are at teaching, etc.)

I fear you're committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They're often highly intelligent (though of course the diagnosis is "intelligent elsewhere, unintelligent at maths"), good at words and social things, but literally unable to calculate 17+17 more accurately than "somewhere in the twenties or thirties" or "I have no idea" without machine assistance. I didn't believe it either until I saw it.

0TheOtherDave12y
Do you find this harder to believe than, say, aphasia? I've never seen it, but I have no difficulty believing it.
0David_Gerard12y
Well, I certainly don't disbelieve in it now. I first saw it at eighteen, in first-year psychology, in the bit where they tried to beat basic statistics into our heads.
0DanArmak12y
I can't imagine how hard it is to learn to program if you don't instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don't. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.

I realize I must have learned the basics at some point, although I don't remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I'd call "learning" in other subjects I studied. When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It's novel, but I understand it intuitively and in most cases quickly.

When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the "real thing", to accept that some things I could describe I couldn't duplicate by building them from scratch no matter how much time I had and what materials and tools. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.

And yet I've seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I've had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, it can be very difficult to learn. Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it's easy for me to believe that - at the extreme - for many people elementary programming is impossible to learn, period. And the same should apply
3thomblake12y
I'm not sure what you mean by understanding-complete, but remember that the Turing-complete system is both the operator and any machinery they are manipulating.
2Incorrect12y
So you are considering a man in a Chinese room to lack understanding?

Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)

0Incorrect12y
But with a person it becomes a bit more complicated because it depends on what we are referring to when we say their name. I was trying to make an allusion to Blindsight.
0JulianMorrison12y
It means you could, in theory, run an AI on them (slowly).
6Will_Newsome12y
FWIW I've read a study that says about 50% of people can't tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn't the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can't hear music.
4Dmytry12y
This is weird. It is hard for me to hear the difference in the cadence, but crystal clear otherwise. In the cadence, the problem for me is that the notes are dragging on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference. Maybe they lost something in the retelling here? Made up new stimuli for which it doesn't work because of harmonics or something? Or maybe it's just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, I am not hearing what you're saying, I am washing the dishes. Though I've no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don't hear. This needs proper study.
6arundelo12y
The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords). Each of the following two recordings is a sequence of eight C major or C minor chords:

  • major-minor-1.mp3
  • major-minor-2.mp3

Each of the following two recordings is a sequence of eight "cadences" -- groups of four chords that are either F B♭ C F or F B♭ Cminor F:

  • cadences-1.mp3
  • cadences-2.mp3

Edit: Here's a listing of the chords in all four sound files.

Edit 2 (2012-Apr-22): I added another recording that contains these chords: F B♭ C F F B♭ Cmi F repeated over and over, while the balance between the voices is varied, from "all voices roughly equal" to "only the second voice from the top audible". The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it's not foregrounded.
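For anyone who wants to generate their own test stimuli rather than use the recordings above, here is a rough sketch in Python (standard library only; file names and helper names are made up): it synthesizes a C major and a C minor triad as bare sine waves, where the only difference is the middle note, E (~329.63 Hz) versus E-flat (~311.13 Hz).

```python
# Sketch (not the recordings linked above): synthesize a C major and a
# C minor triad as sine waves, so you can test the distinction yourself.

import math
import struct
import wave

RATE = 44100  # samples per second

def triad(freqs, seconds=1.5, amp=0.3):
    """Return raw 16-bit mono samples for the sum of the given sine waves."""
    n = int(RATE * seconds)
    samples = []
    for i in range(n):
        t = i / RATE
        v = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
        samples.append(int(amp * v * 32767))
    return samples

def write_wav(filename, samples):
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 16-bit
        w.setframerate(RATE)
        w.writeframes(struct.pack("<" + "h" * len(samples), *samples))

C, E_FLAT, E, G = 261.63, 311.13, 329.63, 392.00

write_wav("c_major.wav", triad([C, E, G]))        # C - E  - G
write_wav("c_minor.wav", triad([C, E_FLAT, G]))   # C - Eb - G
```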
2Scottbert12y
Ditto for me -- The difference between the two chords is crystal clear, but in the cadence I can barely hear it. I'm not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I've studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn't notice the difference at all. Freaky. I know how that post-doc felt when she couldn't hear the difference in the chords.
0arundelo12y
I added another recording. See "Edit 2" in this comment for an explanation.
0arundelo12y
Nope, the audio examples are all straightforward realizations of the corresponding music notation. (They are easy for me to tell apart.)
0Dmytry12y
Still, the notes drag on, the notes have harmonics, etc. It is not pure sine waves that abruptly stop and give time for the ear to 'clear' of afterimage-like sound. I hear the difference in the cadence, it's just that I totally can't believe it can possibly be clearer than just the one chord then another chord. I can tell apart just the two chords at much lower volume level and/or paying much less attention.
0tgb12y
I am with you on easily telling the two apart in the original chords but being unable to reliably tell the difference in the cadence version.
0arundelo12y
I've had between a dozen and two dozen music students over the years. (Guitar and bass guitar.) Some of them started out having trouble telling the difference between ascending and descending intervals. (In other words, some of them had bad ears.) All of them improved, and all of them, with practice, were able to hear me play something and play it back by ear. I'm sure there are some people who are neurologically unable to do this, but in general, it is a learnable skill. The cognitive fun! website has a musical interval exercise. Edit: One disadvantage to that exercise/game for people who aren't already familiar with the intervals is that it doesn't have you differentiate between major and minor intervals. (So if you select e.g. 2 and 8 as your intervals, you'll be hearing three different intervals, because some of the 2nds will be minor rather than major.) Sooner or later I'll write my own interval game!
3alex_zag_al12y
is this what you're looking for? http://www.musictheory.net/exercises/ear-interval
0arundelo12y
That's pretty cool. Are there keybindings?
0alex_zag_al12y
I don't know, doesn't look like it.
0wedrifid12y
Likewise.
3TheOtherDave12y
I was going to comment about how the individual chords were clearly different to my ear but the "stereotypical I-IV-V-I cadential sequences" were indistinguishable, precisely the reverse of the experience the Bell Labs post doc reportedly reported. Then I read the comments on the article and realized this is fairly common, so I deleted the comment. Then I decided to comment on it anyway. Now I have.
1wedrifid12y
I had to listen to that second part several times before I could pick up the difference too. They sound equivalent unless I concentrate.
1Dmytry12y
And me. I guess - as the most probable explanation - they just lost something crucial in the retelling. The notes drag on a fair bit in the second part. I can hear the difference if I really concentrate. But it's like a typo in the text, if the text was blurred.
0orthonormal12y
The second sequence sounded jarringly wrong to me, FWIW.
0khafra12y
At first, I found it unbelievable. Then, I remembered that I have imperfect perfect pitch: I learned both piano and french horn; the latter of which is transposed up a perfect fourth. Especially when I'm practicing regularly, I can usually name a note or simple chord when I hear it; but I'm often off by a perfect fourth. Introspecting on the difference between being right about a note and wrong about a note makes me believe people can confuse major and minor, but still enjoy music.
0Bluehawk12y
Might have something to do with the fact that happy/sad is neither an accurate nor an encompassing description of the uses of major/minor chords, unless you place a C major and a C or A minor directly next to each other. I for one find that when I try to tell the difference solely on that basis, I might as well flip a coin and my success rate would go down only slightly. When I come at it from other directions and ignore the emotive impact, my success rate is much higher. In short: Your conclusion doesn't follow from the evidence.
1Will_Newsome12y
I stated the evidence incorrectly, look at the uncle/aunt of your comment (if you haven't already) for the actual evidence.
0Bluehawk12y
Yeah, I spotted that after making my comment, but after that I wasn't sure whether you were citing the same source material or no. The actual evidence does say a lot more about how humans (don't?) perceive musical sounds. Thanks for clarifying, though.
0[anonymous]12y
I'm curious; 50% of what sample? total human population or USians or what?
0Dmytry12y
There's the halting problem, so here you go. There's also the thoughts that you'll never arrive at because your arriver at the thoughts won't reach them, even if you could think them if told of them.
5majus12y
In Pinker's book "How the Mind Works" he asks the same question. His observation (as I recall) was that much of our apparently abstract logical abilities are done by mapping abstractions like math onto evolved subsystems with different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly to any of those subsystems.
2MixedNuts12y
Because thoughts don't behave much like perceptions at all, so that wouldn't occur to us or convince us much once we hear it. Are there any thoughtlike things we don't get but can indirectly manipulate?

Extremely large numbers.

(among other things)

9Vaniver12y
Parity transforms as rotations in four-dimensional space.
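On one reading of that claim, a small numeric sketch (assuming numpy; the helper plane_rotation is just for illustration): the 3D parity map (x, y, z) -> (-x, -y, -z) cannot be produced by any 3D rotation, but it is the restriction to the hyperplane w = 0 of a perfectly ordinary 4D rotation.

```python
# Sketch: 3D parity as a 4D rotation.  Rotating 180 degrees in the x-w plane
# and 180 degrees in the y-z plane is a proper rotation of 4-space
# (determinant +1), yet on the hyperplane w = 0 it acts as (x,y,z) -> (-x,-y,-z).

import numpy as np

def plane_rotation(i, j, theta, dim=4):
    """Rotation by theta in the coordinate plane spanned by axes i and j."""
    R = np.eye(dim)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = -np.sin(theta)
    R[j, i] = np.sin(theta)
    return R

R = plane_rotation(0, 3, np.pi) @ plane_rotation(1, 2, np.pi)

print(np.round(np.linalg.det(R)))      # 1.0: a genuine rotation, not a reflection
p = np.array([2.0, 3.0, 5.0, 0.0])     # a 3D point, embedded with w = 0
print(np.round(R @ p, 10))             # [-2. -3. -5.  0.]: its parity image
```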
1TheOtherDave12y
Can you expand on what you mean by that? There are many ways in which thoughts behave quite a bit like perceptions, which is unsurprising since they are both examples of operations clusters of neurons can perform, which is a relatively narrow class of operations. Video games behave quite a bit like spreadsheets in a similar way. Of course, there are also many ways in which video games behave nothing at all like spreadsheets, and thoughts behave nothing like perceptions.
1MixedNuts12y
Naively speaking, if Alice can think a thought, she can just tell Bob, and he will. Dogs can't tell us what ultrasounds sound like, but that's for the same reason they can't tell us what regular sounds sound like.
1Eugine_Nier12y
That's assuming the thought can be expressed in language.
1TheOtherDave12y
Even if we posit that for every pair of humans X,Y if X thinks thought T then Y is capable of thinking T, it doesn't follow that for all possible Ts, X and Y are capable of thinking T. That is, whether Alice can think the thought in the first place is not clear.
1MixedNuts12y
If you limit yourself to humans, yes. But at least one mind has to be able to think a thought for that thought to exist.
0TheOtherDave12y
Ah, I thought you were limiting yourself to humans, given your example. If you're asserting that for every pair of cognitive systems X,Y (including animals, aliens, sufficiently sophisticated software, etc.) if X thinks thought T then Y is capable of thinking T, then we just disagree.
0MixedNuts12y
Yes, transmission of thoughts between sufficiently different minds breaks down, so we recover the possibility of thoughts that can be thought but not by us. But that's a sufficiently different reason from why there are sensations we can't perceive to show that the analogy is very shallow.
0[anonymous]12y
It would surprise me, since no one could ever give me an example. I'm not sure what kind of evidence could give me good reason to think that there are thoughts that I cannot think.
3Eugine_Nier12y
Try visualizing four spatial dimensions.

Just visualize n dimensions, and then set n = 4.

1Sabiola12y
You might as well tell me to 'just' grow wings and fly away...
4NancyLebovitz12y
I believe wnoise was making a joke-- one that I thought was moderately funny.
5Sabiola12y
I thought it might be, and if I'd read it elsewhere, I'd have been sure of it - but this is LessWrong, which is chock-full of hyperintelligent people whose abilities to do math, reason and visualize are close to superpowers from where I am. You people seriously intimidate me, you know. (Just because I feel you're so much out of my league, not for any other reason.)
5wnoise12y
It's a standard joke about mathematicians vs everybody else, and I intended it as such. I can do limited visualization in the 4th dimension (hypercubes and 5-cells (hypertetrahedra), not something as complicated as the 120-cell or even the 24-cell), but it's by extending from a 3-d visualization with math knowledge, rather than specializing n to 4.
0NancyLebovitz12y
For what it's worth, my ability to reason is fairly good in a very specific way-- sometimes I see the relevant thing quickly (and after LWers have been chewing on a problem and haven't seen it (sorry, no examples handy, I just remember the process)), but I'm not good at long chains of reasoning. Math and visualizing aren't my strong points.
8Nominull12y
Been there, done that. Advice to budding spatial-dimension visualizers: the fourth is the hardest, once you manage the fourth the next few are quite easy.
1tgb12y
Is this legit and if so can you elaborate? I bet I'm not the only one here who has tried and failed.
7Nominull12y
Well, I can elaborate, but I'm not sure how helpful it will be. "No one can be told what the matrix is" and that sort of thing. The basic idea is that it's the equivalent of the line rising out of the paper in two-dimensions, but in three dimensions instead. But that's not telling someone who has tried and failed anything they don't know, I'm sure. If you really want to be able to visualize higher-order spaces, my advice would be to work with them, do math and computer programming in higher-order spaces, and use that to build up physical intuitions of how things work in higher-order spaces. Once you have the physical intuitions it's easier for your brain to map them to something meaningful. Of course if your reason for wanting to be able to visualize 4D-space is because you want to use the visualization to give you physical intuitions about it that will be useful in math or computer programming, this is an ass-backward way of approaching the problem.
5sixes_and_sevens12y
Is it like having a complete n-dimensional construct in your head that you can view in its entirety? I can visualise 4-dimensional polyhedra, in much the same way I can draw non-planar graphs on a sheet of paper, but it's not what I imagine being able to visualise higher-dimensional objects to be like. I used to be into Rubik's Cube, and it's quite easy for me to visualise all six faces of a 3D cube at once, but when visualising, say, a 4-octahedron, the graph is easy to visualise, (or draw on a piece of paper, for that matter), but I can only "see" one perspective of the convex hull at a time, with the rest of it abstracted away.
4CronoDAS12y
Even better - play Snake in four spatial dimensions!
4Multiheaded12y
When I was 13 or so, my brains worked significantly better than they currently do, and I figured out an easy trick for that in a math class one day. Just assign a greyscale color value (from black to white) to each point! This is exactly like taking an ordinary map and coloring the hills a lighter shade and the low places a darker one. The only problem with that is it's still "3.5D", like the "2.5D" graphics engine of Doom, where there's only one Z-value to any point in the world so things can't be exactly above or below each other. To overcome this, you could theoretically imagine the 3D structure alternating between "levels" in the 4th dimension every second, so e.g. one second a 3D cube's left half is grey and its right half is white, indicating a surface "rising" in the 4th dimension, but every other second the right half changes to black while the left is still grey, showing a second surface which begins at the same place and "descends" in the 4th dimension. Voila, you have two 3D "surfaces" meeting at a 4D angle! With RGB color instead of greyscale, one could theoretically visualize 6 dimensions in such a way.
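A rough code sketch of that trick (assuming numpy and matplotlib; nothing here is from the comment itself): draw the 16 vertices of a tesseract in ordinary 3D and encode the fourth coordinate as a grayscale value.

```python
# Sketch: the vertices of a tesseract (4-cube) drawn in 3D, with the fourth
# coordinate w shown as a grayscale value, dark for w = 0 and light for w = 1.

import itertools
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)

# All 16 vertices of the unit tesseract: every (x, y, z, w) in {0, 1}^4.
verts = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

# Edges connect vertices that differ in exactly one coordinate.
# (Edges that differ only in w collapse onto a single point in this 3D view.)
for a, b in itertools.combinations(verts, 2):
    if np.sum(a != b) == 1:
        ax.plot(*zip(a[:3], b[:3]), color="lightgray", linewidth=0.5)

# Vertex shade = w coordinate (clamped away from pure white so it stays visible).
ax.scatter(verts[:, 0], verts[:, 1], verts[:, 2],
           c=verts[:, 3], cmap="gray", vmin=-0.3, vmax=1.3,
           s=80, edgecolors="black")

ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```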
3Eugine_Nier12y
Now, if only this let you rotate things through the 4th dimension.
8wnoise12y
Doing specific rotations by breaking it into steps is possible. Rotations by 90 degrees through the higher dimensions is doable with some effort -- it's just coordinate swapping after all. You can make checks that you got it right. Once you have this mastered, you can compose it with rotations that don't touch the higher dimensions. Then compose again with one of these 90 degree rotations, and you have an effective rotation through the higher dimensions. (Understanding the commutation relations for rotation helps in this breakdown, of course. If you can then go on to understanding how the infinitesimal rotations work, you've got the whole thing down.)
0wedrifid12y
I knew a guy who credibly claimed to be able to visualize 5 spatial dimensions. He is a genius math professor with 'autistic savant' tendencies. I certainly couldn't pull it off and I suspect that at my age it is too late for me to be trained without artificial hardware changes.
2Mitchell_Porter12y
The way I would do it for dimensions between d=4 and d=6 is to visualize a (d-3)-dimensional array of cubes. Then you remember that similarly positioned points, in the interior of cubes that are neighbors in the array, are near-neighbors in the extra dimensions (which correspond to the directions of the array). It's not a genuinely six-dimensional visualization, but it's a three-dimensional visualization onto which you can map six-dimensional properties. Then if you make an effort, you could learn how rotations, etc, map onto transformations of objects in the visualization. I would think that all claimed visualizations of four or more dimensions really amount to some comparable combinatorial scheme, backed up with some nonvisual rules of transformation and interpretation. ETA: I see similar ideas in this subthread.
0faul_sname12y
Am I allowed to use time/change dimensions? Because if so, the task is trivial (if computationally expensive).
0Eugine_Nier12y
Ok, now add a temporal dimension.
0faul_sname12y
Adding multiple temporal dimensions is effectively how I do it, so one more shouldn't be a problem*. I visualize a 3 dimensional object in a space with a reference point that can move in n perpendicular directions. As the point of reference moves through the space, the object's shape and size change. Example: to visualize a 5-dimensional sphere, I first visualize a 3 dimensional sphere that can move along a 1 dimensional line. As the point of reference reaches the three-dimensional sphere, a point appears, and this point grows into a full sized sphere at the middle, then shrinks back down to a point. I then add another degree of freedom perpendicular to the first line, and repeat the procedure. Rotations are still very hard for me to do, and become increasingly difficult with 5 or more dimensions. I think this is due to a very limited amount of short-term memory. As for my technique, I think it piggybacks on the ability to imagine multiple timelines simultaneously. So, alas, it's a matter of repurposing existing abilities, not constructing entirely new ones. *up to 7: 3 of space, 3 of observer-space, and 1 of time
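A small sketch of that cross-section idea in Python (the helper slice_radius is made up for illustration): fixing the extra coordinates of a 5-sphere of radius R leaves an ordinary sphere whose radius shrinks from R to nothing as the reference point moves away from the centre.

```python
# Sketch: the 3D "slice" of a higher-dimensional sphere.  Fixing the extra
# coordinates (u, v, ...) of x1^2 + ... + xn^2 = R^2 leaves an ordinary
# sphere of radius sqrt(R^2 - u^2 - v^2 - ...), or nothing at all.

import math

def slice_radius(R, *extra_coords):
    d2 = sum(c * c for c in extra_coords)
    return math.sqrt(R * R - d2) if d2 <= R * R else None  # None: no intersection

# Sweep the reference point along one extra axis of a unit 5-sphere:
for u in [0.0, 0.5, 0.9, 1.0, 1.1]:
    print(u, slice_radius(1.0, u, 0.0))
# the slice shrinks from the full sphere down to a point, then vanishes
```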
0[anonymous]12y
Either I can visualize them, and then they're thoughts I can think, or I can't visualize them, in which case the exercise doesn't help me.
0Eugine_Nier12y
If you can, replace 4 with N for sufficiently large N. If you can't, imagine a creature that evolved in a 4-dimensional universe. I find it unlikely that it would not be able to visualize 4 dimensions.
0[anonymous]12y
There's a pretty serious gap between the idea of a person evolved to visualize four dimensions and it being capable of thoughts I cannot think. This might be defensible, but if so only in the context of certain thoughts, something like qualitative ones. But the original quote was inferring from the fact that not everyone can see all the colors to the idea that there are thoughts we cannot think. If 'colors I can't see' are the only kinds of things we can defend as thoughts that I cannot think, then the original quote is trivial. So even if you can defend 4d visualizations as thoughts I cannot think, you'd have to extend your argument to something else. But I have a question in return: how would the belief that there are thoughts you cannot think modify your anticipations? What would that look like?
0Strange712y
By itself? Not much at all. The fun part is encountering another creature which can think those thoughts, then deducing the ability (and, being human, shortly thereafter finding some way to exploit it for personal gain) without being able to replicate the thoughts themselves.
0Richard_Kennaway12y
Hinton cubes. I haven't tried them though. ETA: Original source, online.
2Desrtopa12y
The existence of other signals your brain simply doesn't process doesn't shift your prior at all?
0[anonymous]12y
That doesn't seem strictly relevant. Other signals might lead me to believe that there are thoughts I don't think (but I accepted that already), not thoughts I can't think. How could I recognize such a thing as a thought? After all, while every thought is a brain signal, not every brain signal is a thought: animals have lots of brain signals, but no thoughts.
2LucasSloan12y
What is the difference between a thought you can't think and one you don't think?
0[anonymous]12y
Well, for example I don't think very much about soccer. There are thoughts about who the best soccer team is that I simply don't ever think. But I can think them. Another case: In two different senses of 'can', I can and can't understand Spanish. I can't understand it at the moment, but nevertheless Spanish sentences are in principle translatable into sentences I can understand. I also can't read Aztec hieroglyphs, and here the problem is more serious: no one knows how to read them. But nevertheless, insofar as we assume they are a form of language, we assume that we could translate them given the proper resources. To see something as translatable just is to see it as a language, and to see something as a language is to see it as translatable. Anything which is in principle untranslatable just isn't recognizable as a language. I think the point is analogous (and that's no accident) with thoughts. Any thought that I couldn't think by any means is something I cannot by any means recognize as a thought in the first place. All this is just a way of saying that the belief that there are thoughts you cannot think is one of those beliefs that could never modify your anticipations. That should be enough to discount it as a serious consideration.
0TheOtherDave12y
And yet, if I see two nonhuman life forms A1 and A2, both of which are performing something I classify as the same task but doing it differently, and A1 and A2 interact, after which they perform the task the same way, I would likely infer that thoughts had been exchanged between them, but I wouldn't be confident that the thoughts which had been exchanged were thoughts that could be translated to a form that I could understand.
0DanArmak12y
Alternative explanations include:

  • They exchanged genetic material, like bacteria, or outright code, like computer programs, which made them behave more similarly.
  • They are programs, one attacked the other, killed it and replaced its computational slot with a copy of itself.
  • A1 gave A2 a copy of its black-box decision maker which both now use to determine their behavior in this situation. However, neither of them understands the black box's decision algorithm on the level of their own conscious thoughts; and the black box itself is not sentient or alive and has no thoughts.
  • One of them observed the other was more efficient and is now emulating its behavior, but they didn't talk about it ("exchange thoughts"), just looked at one another.

These are, of course, not exhaustive. You could call some of these cases a kind of thought. Maybe to self-modifying programs, a blackbox executable algorithm counts as a thought; or maybe to beings who use the same information storage for genes and minds, lateral gene transfer counts as a thought. But this is really just a matter of defining what the word "thought" may refer to. I can define it to include executable undocumented Turing Machines, which I don't think humans like us can "think". Or you could define it as something that, after careful argument, reduces to "whatever humans can think and no more".
0TheOtherDave12y
Sure. Leaving aside what we properly attach the label "thought" to, the thing I'm talking about in this context is roughly speaking the executed computations that motivate behavior. In that sense I would accept many of these options as examples of the thing I was talking about, although option 2 in particular is primarily something else and thus somewhat misleading to talk about that way.
0[anonymous]12y
I think you're accepting and then withdrawing a premise here: you've identified them as interacting, and you've identified their interaction as being about the task at hand, and the ways of doing it, and the relative advantages of these ways. You've already done a lot of translation right there. So the set up of your problem assumes not only that you can translate their language, but that you in some part already have. All that's left, translation wise, is a question of precision.
3TheOtherDave12y
Sure, to some level of precision, I agree that I can think any thought that any other cognitive system, however alien, can think. There might be a mind so alien that the closest analogue to its thought process while contemplating some event that I can fathom is "Look at that, it's really interesting in some way," but I'll accept that this is in some part a translation and "all that's left" is a question of precision. But if you mean to suggest by that that what's left is somehow negligible, I strenuously disagree. Precision matters. If my dog and I are both contemplating a ball, and I am calculating the ratio between its volume and surface area, and my dog is wondering whether I'll throw it, we are on some level thinking the same thought ("Oh, look, a ball, it's interesting in some way") but to say that my dog therefore can understand what I'm thinking is so misleading as to be simply false. I consider it possible for cognitive systems to exist that have the same relationship to my mind in some event that my mind has to my dog's mind in that example.
0[anonymous]12y
Well, I don't think I even implied that the dog could understand what you're thinking. I don't think dogs can think at all. What I'm claiming is that for anything that can think (and thus entertain the idea of thoughts that cannot be thought), there are no thoughts that cannot be thought. The difference between you and your dog isn't just one of raw processing power. It's easy to imagine a vastly more powerful processor than a human brain that is nevertheless incapable of thought (I think Yud.'s suggestion for an FAI is such a being, given that he's explicit that it would not rise to the level of being a mechanical person). Once we agree that it's a point about precision, I would just say that this ground can always in principle be covered. Suppose the translation has gotten started, such that there is some set of thoughts at some level of precision that is translatable, call it A, and the terra incognita that remains, call it B. Given that the cognitive system you're trying to translate can itself translate between A and B (the aliens understand themselves perfectly), there should be nothing barring you from doing so as well. You might need extremely complex formulations of the material in A to capture anything in B, but this is allowed: we need some complex sentence to capture what the Germans mean by 'schadenfreude', but it would be wrong to think that because we don't have a single term which corresponds exactly, that we cannot translate or understand the term to just the same precision the Germans do.
2TheOtherDave12y
I accept that you don't consider dogs to have cognitive systems capable of having thoughts. I disagree. I suspect we don't disagree on the cognitive capabilities of dogs, but rather on what the label "thought" properly refers to. Perhaps we would do better to avoid the word "thought" altogether in this discussion in order to sidestep that communications failure. That said, I'm not exactly sure how to do that without getting really clunky, really fast. I'll give it a shot, though.

I certainly agree with you that if cognitive system B (for example, the mind of a German speaker) has a simple lexical item Lb (for example, the word "schadenfreude"),
...and Lb is related to some cognitive state Slb (for example, the thought /schadenfreude/) such that Slb = M(Lb) (which we ordinarily colloquially express by saying that a word means some specific thought),
...and cognitive system A (for example, the mind of an English speaker) lacks a simple lexical item La such that Slb = M(La) (for example, the state we'd ordinarily express by saying that English doesn't have a word for "schadenfreude")...
that we CANNOT conclude from this that A can't enter Slb, nor that there exists no Sla such that A can enter Sla and the difference between Sla and Slb is < N, where N is the threshold below which we'd be comfortable saying that Sla and Slb are "the same thought" despite incidental differences which may exist.

So far, so good, I think. This is essentially the same claim you made above about the fact that there is no English word analogous to "schadenfreude" not preventing an English speaker from thinking the thought /schadenfreude/.

In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.

Do you disagree with that? Or do you simply assert that if so, Sa and Sb aren't thoughts? Or s
0[anonymous]12y
I agree that this is an issue of what 'thoughts' are, though I'm not sure it's productive to sidestep the term, since if there's an interesting point to be found in the OP, it's one which involves claims about what a thought is. I'd like to disagree with that unqualifiedly, but I don't think I have the grounds to do so, so my disagreement is a qualified one. I would say that there is no state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can recognise Sa as a cognitive state. So without the last 'and such that', this would be a metaphysical claim that all cognitive systems are capable of entertaining all thoughts, barring uninteresting accidental interference (such as a lack of memory capacity, a lack of sufficient lifespan, etc.). I think this is true, but alas. With the qualification that 'B would not be able to recognise Sa as a cognitive state', this is a more modest epistemic claim, one which amounts to the claim that recognising something as a cognitive state is nothing other than entering that state to one degree of precision or another. This effectively marks out my opinion on your second assertion: for any Sa and any Sb, such that the difference between Sa and Sb cannot be < N, A (and/or B) cannot by any means recognise the difference as part of that cognitive state. All this is a way of saying that you could never have reason to think that there are thoughts that you cannot think. Nothing could give you evidence for this, so it's effectively a metaphysical speculation. Not only is evidence for such thoughts impossible, but evidence for the possibility of such thoughts is impossible.
0TheOtherDave12y
I'm not exactly sure what it means to recognize something as a cognitive state, but I do assert that there can exist a state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can believe that A is entering into a particular cognitive state whenever (and only when) A enters Sa. That ought to be equivalent, yes?

This seems to lead me back to your earlier assertion that if there's some shared "thought" at a very abstract level I and an alien mind can be said to share, then the remaining "terra incognita" between that and sharing the "thought" at a detailed level is necessarily something I can traverse. I just don't see any reason to expect that to be true.

I am as bewildered by that claim as if you had said to me that if there's some shared object that I and an alien can both perceive, then I can necessarily share the alien's perceptions. My response to that claim would be "No, not necessarily; if the alien's perceptions depend on sense organs or cognitive structures that I don't possess, for example, then I may not be able to share those perceptions even if I'm perceiving the same object." Similarly, my response to your claim is "No, not necessarily; if the alien's 'thought' depends on cognitive structures that I don't possess, for example, then I may not be able to share that 'thought'."

You suggest that because the aliens can understand one another's thoughts, it follows that I can understand the alien's thoughts, and I don't see how that's true either. So, I dunno... I'm pretty stumped here. From my perspective you're simply asserting the impossibility, and I cannot see how you arrive at that assertion.
0[anonymous]12y
Well, if the terra incognita has any relationship at all to the thoughts you do understand, such that the terra could be recognized as a part of or related to a cognitive state, then the terra is going to consist in stuff which bears inferential relations to what you do understand. These are relations you can necessarily traverse if the alien can traverse them. Add to that the fact that you've already assumed that the aliens largely share your world, that their beliefs are largely true, and that they are largely rational, and it becomes hard to see how you could justify the assertion at the top of your last post. And that assertion has, thus far, gone undefended.
0TheOtherDave12y
Well, I justify it by virtue of believing that my brain isn't some kind of abstract general-purpose thought-having or inferential-relationship-traversing device; it is a specific bit of machinery that evolved to perform specific functions in a particular environment, just like my digestive system, and I find it no more plausible that I can necessarily traverse an inferential relationship that an alien mind can traverse than that I can necessarily extract nutrients from a food source that an alien digestive system can digest. How do you justify your assertion that I can necessarily traverse an inferential relationship if an alien mind is capable of traversing it?
0[anonymous]12y
Well, your brain isn't that, but it's only a necessary, not a sufficient, condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.
0TheOtherDave12y
Sorry, I didn't follow that at all.
0[anonymous]12y
The source of your doubt seemed to be that you didn't think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device, we agree. But you do have such a device. A language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?
0TheOtherDave12y
Ah! OK, your comment now makes sense to me. Thanks. Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine. I'm glad we agree that my brain is not a gpirtd. But you seem to be asserting that English (for example) is a gpirtd. Can you expand on your reasons for believing that? I can see no justification for that claim, either. But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.
0[anonymous]12y
So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference relation traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn't to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.

I think it actually follows from this that language is also a general-purpose thought-having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we're foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn't a serious problem.

If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we're trying to understand. If we don't assume this, and to whatever extent we don't assume this, just to that extent we can't recognize the gap as conceptual or cognitive.

If an alien was reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, to that extent would we have to conclude that they are behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from their behavior that they are acting on reasons we don't have immediate access to, then just to the extent that we now view their behavior as rational, we now share that part of the world with them.

We can't decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can't decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at. This last claim is most persuasively argued, I think, by showing that any ex
0TheOtherDave12y
Re: your ETA... agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa. But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that. I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are "fundamentally remediable". Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED. I'm enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.
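To restate the two assertions compactly (a rough formalization only; the predicate Enter and the distance d are shorthand introduced here for the notions already in play, and N is the sameness threshold from earlier):

    % Enter(X, S): mind X can enter state S; d: the "difference" between two states.
    % Both symbols are shorthand added here, not notation from the thread.
    \exists S_a \;:\; \mathrm{Enter}(A, S_a) \,\wedge\, \neg\mathrm{Enter}(B, S_a)
    \exists S_a \;:\; \mathrm{Enter}(A, S_a) \,\wedge\, \forall S_b \,\big[\mathrm{Enter}(B, S_b) \rightarrow d(S_a, S_b) \ge N\big]

The first line is the weaker claim (some state of A is simply unreachable for B); the second is the stronger one (nothing B can reach even comes within N of it).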
0TheOtherDave12y
Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts. I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree). I've asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly? If so, I don't think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R. I agree that if I'm wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds. I see no reason to believe that, though. === * Except, you say, for defective cases like sign-language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you're referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don't believe English is either.)
0[anonymous]12y
Well, I'd like a little more from you: I'd like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn't go so far as suggesting the latter of the two claims. So do you think you can come up with such an example? If not, don't you think that counts powerfully against your reasons for thinking that such a situation is possible? This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.) It's extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.
0TheOtherDave12y
From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn't prove I'm incapable of it; it just means that I haven't yet succeeded. But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the posterior probability of P1.

If you're simply asserting that that probability can't ever reach zero, I agree completely. If you're asserting that that probability can't in practice ever reach epsilon, I mostly agree. If you're asserting that that probability can't in practice get lower than, say, .01, I disagree.

(ETA: In case this isn't clear, I mean here to propose "I repeatedly try to understand in detail the thought underlying A1 and A2's cooperation and I repeatedly fail" as an example of a reason to think that the thought in question is not one I can think.)
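A minimal numerical sketch of that kind of updating, just to make the shape of the claim concrete (the 0.5 prior and the 30% per-attempt success chance are illustrative assumptions, not anything measured):

    # Sketch: repeated failed attempts lower P(P1) without ever reaching zero.
    # The prior (0.5) and the per-attempt success chance given P1 (0.3) are assumed.

    def update_on_failure(p_p1, p_success_given_p1=0.3):
        # P(fail | P1) = 1 - 0.3, P(fail | not-P1) = 1.0
        p_fail = (1 - p_success_given_p1) * p_p1 + (1 - p_p1)
        return (1 - p_success_given_p1) * p_p1 / p_fail

    p = 0.5
    for attempt in range(1, 11):
        p = update_on_failure(p)
        print(f"P(P1) after {attempt} failed attempts: {p:.4f}")

Under these made-up numbers the probability after ten failures is about 0.03 -- low, but never zero, which is the distinction being drawn above.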
0[anonymous]12y
I think that overestimates my claim: suppose Dave were a propositional logic machine, and the A's were first order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let's just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.

That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn't think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.

So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational. If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them. And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave's opinion that the aliens are thinking is irrational, even if it is true. Thus, no one can ever be gi
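For concreteness, the standard sort of expressive gap that supposition leans on looks like this (a textbook illustration, not anything taken from the thread itself): a first-order sentence can quantify over a domain that need not be finite or fixed, and no single propositional formula built from finitely many atomic sentences is equivalent to it.

    % A first-order sentence a purely propositional reasoner cannot express in general:
    \forall x\, \exists y\; R(x, y)
    % Over a fixed two-element domain {a, b} it collapses to a propositional formula,
    % (R(a,a) \vee R(a,b)) \wedge (R(b,a) \vee R(b,b)),
    % but no one such formula covers every possible domain.

Observers who can follow the quantifiers could therefore see both what the "Aliens" are asserting and that "Dave" has no way to assert it, which is exactly the third-person standpoint the example relies on.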
0TheOtherDave12y
Supposing both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable. I'm not sure how we would determine experimentally that they were true, though. I wouldn't normally care, but you made such a point a moment ago about the importance of your claim being about what's knowable rather than about what's true that I'm not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions. Then I suppose we can safely ignore it for now. As I've already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I'm incapable of doing so. Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking? I'm willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of "world", "largely", and "relevant". Before I lean too heavily on any of that I'd want to clarify those words further, but I'm not sure it actually matters. I don't agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me "B is true," I have reason to think B is true but I don't know the content of B. The premise is false, but I agree that were it true your conclusion would follow.
0[anonymous]12y
This seems to be a crucial disagreement, so we should settle it first.

In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do. So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn't so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to be comfortable assessing.

All that said, you've got a lot of beliefs about what B is, without knowing the specifics. Essentially, your inference that B is true because Sam says that it is, is the belief that though you don't know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs. In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.

(ETA: I want to add how closely this example resembles your aliens example, both in the set up, and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking or that B is true, a great deal is assumed. I'm saying that you can either have these assumptions, but then my translation point follows, or you can deny the translation point, but then you can't have the assumptions necessary to set up your examples.)
0TheOtherDave12y
All right. Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes. Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don't interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible... I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I'm willing to go along with it for now. Agreed so far. No, I don't follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I'm not sure this matters to your argument. Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself. Yes, in the same ways. Something like this, yes. It is implicit in this example that I trust Sam to recognize if B is outside his competence to evaluate and report that fact if true, so it follows from his not having reported that that I'm confident it isn't true. Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies. Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report "B1 is true," the prior probability that I already know B1 is high. But this is of course in no sense guaranteed. For example, B might be "I'm wearing purple socks," in response to which Sam check
0[anonymous]12y
You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam's judgements work is knowing something about this judgement.

None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you "Dave, you don't know the content of B", you ought to reply "Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it's something you would judge to be true on the basis of a shared set of beliefs."

Your setup, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone's set of beliefs. Even if there's any distinction here (i.e. if we're foundationalists of some kind), it still doesn't follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.

So, I'm not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I'm saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.
0TheOtherDave12y
I hope we can agree that in common usage, it's unproblematic for me to say that I don't know what color your socks are. I don't, in fact, know what color your socks are. I don't even know that you're wearing socks. But, sure, I think it's more probable that your socks (if you're wearing them) are white than that they're purple, and that they probably aren't transparent, and that they probably aren't pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks. And, sure, if you're thinking "my socks are purple" and I'm thinking "Abrooks' socks probably aren't transparent," these kinds of knowledge aren't wholly unrelated to one another. But that doesn't mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other. Much as you think I'm drawing arbitrary distinctions, I think you're eliding over real distinctions.
0[anonymous]12y
Okay, so it sounds like we're agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample? If this is always true, then we should at least take this in support of my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought. If we're on the same page so far, then we've agreed that you can't recognise something as thought without assuming you can understand something about its content. Now the question remains, can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn't thought after all?
0TheOtherDave12y
Yes, my reasons for believing B are, in the very limited sense we're now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5). Yes, agreed that if I think something is thinking, I know something about the content of its thought. Further agreed that in the highly extended sense that you're using "understanding" -- the same sense that I can be said to "know" what color socks you're wearing -- I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought. So, OK... you've proven your point. I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
0[anonymous]12y
Oh, come on, this has been a very interesting discussion. And I don't take myself to have proven any sort of point. Basically, if we've agreed to all of the above, then we still have to address the original point about precision. Now, I don't have a very good argument here, for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing the content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Also, let's assume you have a lot of time, and all the resources you could want for pursuing the translation you want. So given an unlimited time, and full use of metaphor, hand gestures, extended and complex explanations in what terms you do manage to get out of the context, corrections of mistakes, etc. etc., I think you could cover any gap so long as you can take the first step. And so long as the thought isn't actually logically alien. This means that the failure to translate something should be taken not as evidence that it might be impossible, but as evidence that it is in fact possible to translate. After all, if you know enough to have reason to believe that you've failed, you have taken the first few steps already. As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don't know. I think that if we encountered such thought, we would pretty much only have reason to think that it's not thought. So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I've utterly failed to convince you, after all, I would take that as evidence against my point.
0TheOtherDave12y
My position on this hasn't changed, really. I would summarize your argument as "If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren't mutually intelligible in the general case, we can't recognize them as thinking." My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it's likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette. On average it's perfectly safe, but I wouldn't recommend playing.) My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can't think as I would be to hear of an alien stomach digesting foods I can't digest -- that is, not surprised at all. There's nothing magic about thought, it's just another thing we've evolved to be able to do. That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.) But when faced with a system where I have reason to believe is thinking and where all plausible efforts have failed, I am not justified in concluding that it isn't thinking after all, rather than concluding that its think
0[anonymous]12y
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we're organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don't think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other. So how can we fill out this reasoning?
0TheOtherDave12y
Yes, it does seem like an obvious connection to me. But, all right... For example, I observe that various alterations of the brain's structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain's structure constrains the kinds of thoughts it can think. And as I said, I consider the common reference class of evolved systems a source of useful information here as well. Incidentally, didn't you earlier agree that brains weren't general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually unintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
0[anonymous]12y
I don't think this is a good inference: it doesn't follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we've agreed that we're not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own. This is still my position. No, I don't consider that to be possible, though it's a matter of how broadly we construe 'thinking' and 'language'. But where thinking is the sort of thing that's involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say "there is nothing that thinks that cannot use language, and everything that can use language can to that extent think."
0TheOtherDave12y
As I said the last time this came up, I don't consider the line you want to draw on "for reasons of memory storage, etc" to be both well-defined and justified. More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, that there is some physical difference D between A and B that causes that functional difference, and whether D is in the category of "memory storage, etc." is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that's one hell of an additional condition. It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
0[anonymous]12y
Well, I take it for granted that you and I can think the same thought (say "It is sunny in Chicago"), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn't immediately mean that they cannot think the same thoughts. I expect you can think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine. So physical differences can matter, but among healthy brains, they almost always don't. No two English speakers have structurally identical brains, and yet we're all fully mutually intelligible. So we can't infer from physical differences to cognitive incompatibilities.

I asked you in my last post to give me some reasons for the inference from 'our brains are evolved systems' to 'we can have reason to believe that there are thoughts we cannot think' or 'there are thoughts we cannot think'. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?

Yes, I think so, though of course there wasn't a 'first thinker/language user'.
0TheOtherDave12y
This is another place where I want to avoid treating "Y is near enough to X for practical considerations" as equivalent to "Y is X" and then generalizing out from that to areas outside those practical considerations. I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to "It is sunny in Chicago" might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.

Sure, but why are you limiting the domain of discourse in this way? If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases. I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don't see why ignoring him is justified.

I would say rather that the relevant parts of two English speakers' brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility. As above, this is equivalent to what you said for practical considerations.

If you don't consider anything I've said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as "just a hunch" for purposes of this conversation.
0[anonymous]12y
The point isn't that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended. Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn't find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
0TheOtherDave12y
I don't know if you're missing anything. I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don't expect repeating myself to change that. If you genuinely don't consider them evidence at all, I expect repeating myself to be even less valuable.
0[anonymous]12y
I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more directly related evidence that we can assume that anything we could recognize as thinking is something we can think. Such that, on balance, I would be surprised to hear that there are such thoughts. It sounds like we've pretty much exhausted ourselves here, so thanks for the discussion.
0Desrtopa12y
Can you rotate four-dimensional solids in your head? Edit: it looks like I'm not the first to suggest this, but I'll add that since computers are capable not just of representing more than three spatial dimensions, but of tracking objects through them, these are probably "possible thoughts" even if no human can represent them mentally.
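A minimal sketch of what "tracking objects through more than three spatial dimensions" amounts to computationally (the rotation plane, the function name, and the sample vertex are all illustrative choices, not anything from a particular program):

    import numpy as np

    def rotation_4d_xw(theta):
        # Rotation by angle theta in the x-w plane of 4D space; y and z are untouched.
        c, s = np.cos(theta), np.sin(theta)
        return np.array([
            [c, 0, 0, -s],
            [0, 1, 0,  0],
            [0, 0, 1,  0],
            [s, 0, 0,  c],
        ])

    # One vertex of a tesseract, rotated a quarter turn in the x-w plane.
    vertex = np.array([1.0, 1.0, 1.0, 1.0])
    print(rotation_4d_xw(np.pi / 2) @ vertex)  # approximately [-1.  1.  1.  1.]

Nothing in the arithmetic cares that there are four coordinates rather than three; it's the visualization, not the computation, that humans lack.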
0[anonymous]12y
Well, suppose I'm colorblind from birth. I can't visualize green. Is this significantly different from the example of 4d rotations? If so, how? (ETA: after all, we can do all the math associated with 4d rotations, so we're not deficient in conceptualizing them, just in imagining them. Arguably, computers can't visualize them either. They just do the math and move on). If not, then is this the only kind of thought (i.e. visualizations, etc.) that we can defend as potentially unthinkable by us? If this is the only kind of thought thus defensible, then we've rendered the original quote trivial: it infers from the fact that it's possible to be unable to see a color that it's possible to be unable to think a thought. But if these kinds of visualizations are the only kinds of thoughts we might not be able to think, then the quote isn't saying anything.
0Desrtopa12y
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom? I'm not a physicist, but I have been taught that beyond the simplest atoms, the calculations become so difficult that we're unable to determine whether our quantum models actually predict the configurations we observe. In this case, we can't simply do the math and move on, because the math is too difficult. With our own mental hardware, it appears that we can neither visualize nor predict the behavior of particles on that scale, above a certain level of complexity, but that doesn't mean that a jupiter brain wouldn't be able to.
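One standard way to make "the calculations become so difficult" concrete (a back-of-the-envelope illustration, not a claim about any particular uranium computation): the joint state of interacting quantum particles lives in a space whose dimension grows exponentially with the number of particles.

    % For n two-level systems (e.g. electron spins), the joint state space has dimension
    \dim \mathcal{H} = 2^{n}
    % so roughly 100 interacting particles already require about
    2^{100} \approx 1.3 \times 10^{30}
    % complex amplitudes to describe exactly.

Exact simulation at that scale is out of reach for unaided working memory and for ordinary computers alike, which is why this looks like the sort of "possible thought" that only something like a jupiter brain could entertain in full detail.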
1[anonymous]12y
I'm not discounting qualia (that's its own discussion), I'm just saying that if these are the only kinds of thoughts which we can defend as being potentially unthinkable by us, then the original quote is trivial.

So one strategy you might take to defend thoughts we cannot think is this: thinking is or supervenes on a physical process, and thus it necessarily takes time. All human beings have a finite lifespan. Some thought could be formulated such that the act of thinking it with a human brain would take longer than any possible lifespan, or perhaps just an infinite amount of time. Therefore, there are thoughts we cannot think. I think this suggestion is basically the same as yours: what prevents us from thinking this thought is some limited resources, like memory or lifespan, or something like that. Similarly, I could suggest a language that is in principle untranslatable, just because all well-formed sentences and clauses in that language are long enough that we couldn't remember a whole one.

But it would be important to distinguish, in these cases, between two different kinds of unthinkability or untranslatability. Both the infinite (or just super complex) thoughts and the super long sentences are translatable into a language we can understand, in principle. There's nothing about those thoughts or sentences, or our thoughts or sentences, that makes them incompatible. The incompatibility arises from a fact about our biology. So in the same line, we could say that some alien species' language is untranslatable because they speak and write in some medium we don't have the technology to access. The problem there isn't with the language or the act of translation.

In sum, I think that this suggestion (and perhaps the original quote) trades on an equivocation between two different kinds of unthinkability. But if the only defensible kind of unthinkability is one on the basis of some accidental limitation of access or resources, then I can't see what's interesting ab
0Richard_Kennaway12y
For me, it merely brings it to the level of "interesting speculation". What observations would provide strong evidence that there be dragons? Other weak evidence that just leaves it at much the original level is the existence of anosognosia -- people with brain damage who appear to be unable to think certain thoughts about their affliction. But that doesn't prove anything about the healthy brain, any more than blindness proves the existence of invisible light. Some people seem unable to grok mathematics, but then, some people do. The question is whether, Turing-completeness aside, the best current human thinking is understanding-complete, subject only to resource limitation.
0BillyOblivion12y
So if Majus's post (on Pinker) is correct, and the underlying processing engine(s) (aka "the brain") determine the boundaries of what you can think about, then it is almost tautological that no one can give you an example, since to date almost all folks have a very similar underlying architecture.
0[anonymous]12y
So what I argued was that thoughts are by nature commensurable: it's just in the nature of thoughts that any thinking system can think any thought from any other thinking system. There are exceptions to this, but these exceptions are always on the basis of limited resources, like limited memory. So, an application of this view is that there are no incommensurable scientific schemes: we can in principle take any claim from any scientific paradigm and understand or test it in any other.
1BillyOblivion12y
All I argued was that if their thesis is correct, then unless you've had some very odd experiences, no one can give you an example because everyone you meet is similarly bounded. That is the limit of what my statement was intended to convey.

I don't know enough neurology, psychology, etc. to have a valid opinion, but I will note that we see at most 3 colors. We perceive many more. But any time we want to perceive, for example, the AM radio band we map it into a spectrum our eyes can handle, and as near as I can tell we "think" about it in the colors we perceive.

It is my understanding that there is some work in this area where certain parts of the brain handle certain types of work. Folks with certain types of injuries or anomalous structures are unable to process certain types of input, and unable to do certain kinds of work. This seems to indicate that while our brain, as currently constructed, is a fairly decent tool for working out the problems we have in front of us, there is some evidence that it is not a general purpose thinking machine.

(In one of those synchronicity thingies, my 5 year old just came up to me and showed me a picture of sound waves coming into an ear and molecules "traveling" into your nose.)

Westerners are fond of the saying ‘Life isn’t fair.’ Then, they end in snide triumphant: ‘So get used to it!’
What a cruel, sadistic notion to revel in! What a terrible, patriarchal response to a child’s budding sense of ethics. Announce to an Iroquois, ‘Life isn’t fair,’ and her response will be: ‘Then make it fair!’

Barbara Alice Mann

I agree with the necessity of making life more fair, and disagree with the connotational noble Pocahontas lecturing a sadistic western patriarch. (Note: the last three words are taken from the quote.)

Agree that that looks an awful lot like an abuse of the noble savage meme. Barbara Alice Mann appears to be an anthropologist and a Seneca, so that's at least two points where she should really know better -- then again, there's a long and more than somewhat suspect history of anthropologists using their research to make didactic points about Western society. (Margaret Mead, for example.)

Not sure I entirely agree re: fairness. "Life's not fair" seems to me to succinctly express the very important point that natural law and the fundamentals of game theory are invariant relative to egalitarian intuitions. This can't be changed, only worked around, and a response of "so make it fair" seems to dilute that point by implying that any failure of egalitarianism might ideally be traced to some corresponding failure of morality or foresight.

3Multiheaded12y
You are confusing "fairness" and egalitarianism. While everyone has their own definition of "fairness", it feels obvious to me that, even if you're correct about the cost of imposing reasonable egalitarianism being too high in any given situation, this does not absolve us from seeking some palliative measures to protect those left worst off by that situation. Reducing first the suffering of those who suffer most is an ok partial definition of fairness for me. Despite (or due to, I'm too sleepy to figure it out) considering myself an egalitarian, I would prefer a world where the most achieving 10% get 200 units of income (and the top 10% of them get 1000), the least achieving 10% get 2 units and everyone else gets 5-15 units (1 unit supporting the lifestyle of today's European blue-collar worker) to a world where the bottom 10% get 0.2 units and everyone else gets 25-50. Isn't that more or less the point of charity (aside from signaling)?
1Nornagest12y
I didn't say this. Actually, I'd consider it somewhat incoherent in the context of my argument: if imposing reasonable egalitarianism (whatever "reasonable" is) was too costly to be sustainable, it seems unlikely that we'd have developed intuitions calling for it. On the other hand, I suppose one possible scenario where that'd make sense would be if some of the emotional architecture driving our sense of equity evolved in the context of band-level societies, and if that architecture turned out to scale poorly -- but that's rather speculative, somewhat at odds with my sense of history, and in any case irrelevant to the point I was trying to make in the grandparent. Anyway, don't read too much into it. My point was about the relationship between the world and its mathematics and our anthropomorphic intuitions; I wasn't trying to make any sweeping generalizations about our behavior towards each other, except in the rather limited context of game theory and its various cultural consequences. I certainly wasn't trying to make any prescriptive statements about how charitable we should be.
0Multiheaded12y
Some of the local Right are likely to claim that we developed them just for the purpose of signaling, and that they're the worst thing EVAH when applied to reality. ;) (Please don't take this as a political attack, guys, my debate with you is philosophical. I just need a signifier for you.)
4[anonymous]12y
ominous theme music Well, someone certainly has been digging into the LessWrong equivalent of Sith holocrons. You are getting pretty good at integrating their mental tool kit. It has made your thinking clearer, made your positions stronger than would have been otherwise possible. Now, far be it from me to question such a search for knowledge. Indeed, I commend it. It is a path to great predictive power! You will find that as you continue your studies it can offer many useful heuristics that some would consider ... unthinkable.
1Multiheaded12y
You know, I was not wholly unprepared for this ideological predicament. Since I first became interested in Fascist-like ideas and the history of political conflict surrounding them (during high school), I've always had a hunch that "the enemy" is far wiser, more attractive and more insidious than most people who pretend to "common sense" believe. It is the radical Right themselves and the radical Left who oppose both them and mainstream liberalism (which is "common sense" to our age) that have a more realistic estimate of this conflict's importance. Even in spite of the fact that said Right has been hounded and suppressed since 1940, including, in a gentler way, by moderate conservatives eager to attain a more enlightened image. To quote again from Orwell's review of Mein Kampf: Of course, the above can't be applied to all such right-wing radicals without adjusting for their personal differences - e.g. Mencius criticizing idealism as the root of all evil both on the right and on the left, while himself possessing a less-than-obvious but very distinct sort of idealism [1] - but still. If exposed to today's political blogosphere, Orwell could undoubtedly have constructed similar respectful warnings for all his radical opponents he'd find solid. The people who dreaded and obsessed over "Fascism", and continue to do so to this day - as well as the contrarians who actually walk that path - have clearer vision than the complacent masses. That the idea is in retreat and on the decline does not affect its strict consistency, decent compatibility with human nature and inherent potential. Still, when all's said and done I view the situation as half a rational investigation and half a holy war (for a down-to-earth definition of "holy"); I don't currently feel any erosion in my values or see myself reneging at the end of it. Yet - and thank you for your compliment - I'm certainly eager to familiarize myself with as much of the other side's intellectual weaponry as it's possib
1Paul Crowley12y
I think that Robert Smith has a much wiser take on this: "The world is neither fair nor unfair"
2Eliezer Yudkowsky12y
The world is neither F nor ~F?

Unfair is the opposite of fair, not the logical complement. The moon is neither happy nor sad.

That is indeed possible if F is incoherent or has no referent. The assertion seems equivalent to "There's no such thing as fairness".

I'm confused because it was Eliezer who taught me this.

(P or ~P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P.

EDIT: I'm now resisting the temptation to tell Eliezer to "read the sequences".

Original parent says, "The world is neither fair nor unfair", meaning, "The world is neither deliberately fair nor deliberately unfair", and my comment was meant to be interpreted as replying, "Of course the world is unfair - if it's not fair, it must be unfair - and it doesn't matter that it's accidental rather than deliberate." Also to counteract the deep wisdom aura that "The world is neither fair nor unfair" gets from counterintuitively violating the (F \/ ~F) axiom schema.

7Paul Crowley12y
It matters hugely that it's not deliberately unfair. People get themselves into really awful psychological holes - in particular the lasting and highly destructive stain of bitterness - by noting that the world is not fair, and going on to adopt a mindset that it is deliberately unfair.
0wedrifid12y
It matters a lot (to those who are vulnerable to the particular kind of irrational bitterness in question) that the universe is not deliberately unfair. I took Eliezer's "it doesn't matter" to be the more specific claim "it does not matter to the question of whether the universe is unfair whether the unfairness present is deliberate or not-deliberate".
2Paul Crowley12y
Err, the "question of whether the universe is unfair" sounds a lot to me like the "question of whether the tree makes a sound". What query are we trying to hug here? I think what I call "unfairness" - something due to some agent - is something we can at least sometimes usefully respond by being pissed off, because the agent doesn't want us to be pissed off. But the Universe absolutely cannot care whether we're pissed off, and so putting it under the same category as eg discrimination engenders the wrong response.
3TheOtherDave12y
What makes being pissed off at an agent who treats me unfairly useful is not that the agent doesn't want me to be pissed off. In fact, I can sometimes be usefully pissed off at an unfair agent that is entirely indifferent to, or even unaware of, my existence. In much the same way, I can sometimes be usefully pissed off at a non-agent that behaves in ways that I would classify as "unfair" if an agent behaved that way. Admittedly, asking when it's useful to classify something as "unfair" is different from asking what things are in fact unfair. On the other hand, in practice the first of those seems most relevant to actual human behavior. The second seems to pretty quickly lead to either the answer "everything" (all processes result in output distributions that are not evenly distributed across some metric) or "nothing" (all processes are equally constrained and specified by physical law), and neither of those answers seems terribly relevant to what anyone means by the question.
2Paul Crowley12y
No, that fairness isn't a characteristic you can measure of the world. There's such a thing as fairness when it comes to eg dividing a cake between children.
-3A1987dM12y
“The world is fair” = world.fairness > 0 “The world is unfair” = world.fairness < 0 “The world is neither fair nor unfair” = world.fairness == 0, or something like this.

I didn't think I could remove the quote from that attitude about it very effectively without butchering it. I did lop off a subsequent sentence that made it worse.

6NancyLebovitz12y
Do people typically say "life isn't fair" about situations that people could choose to change?

Don't they usually say it about situations that they could choose to change, to people who don't have the choice?

7BlazeOrangeDeer12y
Exactly. In my experience the people who say "life isn't fair" are the main reason that it still isn't.
2Tyrrell_McAllister12y
How did you develop a sufficiently powerful causal model of "life" to establish this claim with such confidence?
6BlazeOrangeDeer12y
I mean that in almost all of the situations where I've heard that phrase used, it was used by someone who was being unfair and who couldn't be bothered to make a real excuse.
-1Tyrrell_McAllister12y
Okay, but that is a very different claim. It could be true even while most sources of unfairness in life are other things, not people who bother to say "life's not fair".
5TimS12y
I agree, it's usually used as an excuse not to try to change things.

Do people typically say "life isn't fair" about situations that people could choose to change?

Introspection tells me this statement usually gets trotted out when the cost of achieving fairness is too high to warrant serious consideration.

EDIT: Whoops, I just realised that my imagination only outputted situations involving adults. When imagining situations involving children I get the opposite of my original claim.

0Multiheaded12y
Could you give an example of such a situation where the cost of achieving "fairness" is indeed too high for you? Because I have a hunch that we differ not so much in our assessment of costs but in our notions of "fairness". Oh, and what is "Serious consideration"? Is a young man thinking of what route he should set his life upon and wanting to increase "fairness" doing more or less serious consideration than an adult thinking whether to give $500 to charity?
5NancyLebovitz12y
Current example: A friend of mine telling her very intelligent son that he has to do boring schoolwork because life isn't fair. It occurs to me to ask her whether a good gifted and talented program is available.
2Multiheaded12y
Hmm? I know I'm no-one to tell you those things and it might sound odd coming from a stranger, but... please try persuading her to attend to the kid's special needs somehow. Ideally, I believe, he should be learning what he loves plus things useful in any career like logic and social skills, with moderate challenge and in the company of like-minded peers... but really, any improvement over either the boredom of standard "education" or the strain of a Japanese-style cram school would be fine. It pains me to see smart children burning out, because it happened to me too.
5NancyLebovitz12y
I've talked with her. Her son is already in a Gifted and Talented program, but they're still expecting too much busy work from him-- he's good at learning things that he's interested in the first time he hears them, and doesn't need drilling. He's got two years more of high school to go. I've convinced her that it's worthwhile to work on convincing the school that they should modify the program into something that's better for him, and also that it's good for him to learn about advocacy as well as (instead of?) accommodation. I think she cares enough that this isn't going to fall off the to do list, but I'll ask again in a couple of months. Thanks for pushing about this.
2Multiheaded12y
Great. That's going to brighten up a very very shitty day I'm having, BTW. I got my father moderately angry and disappointed in me for an insubstantial reason (he's OK but kind of emotional and has annoying expectations), and then my mom phoned from work in tears to say that her cat electrocuted itself somehow. I have just got very high on coffee to numb emotion and am browsing LW right now until I can take a peek at reality again.
2CronoDAS12y
Me, I've burned out many times in school. Each time it happened, I was sent to psychiatrists as punishment.
0Jayson_Virissimo12y
I don't remember exactly what I imagined, but it was something like this:
0Multiheaded12y
Actually, I'd say that it could be a case where justice can assert itself... the boss is, barring unusual circumstances, going to lose out on a skilled worker and that could impact his business. (I mean, presumably the overly high cost of achieving fairness in that case would be passing a law telling employers how to make hiring decisions... but that idiot of a boss would benefit from such a law if the heuristics in it were good; now he's free to shoot himself in the foot!)
3Jayson_Virissimo12y
Bob is telling Alice that life isn't fair. Bob is Alice's friend; he is not the boss. Bob seems like he has Alice's interests in mind, since it is unlikely that Alice "doing something about it" would be worth it (such as confronting the boss, suing the company, picketing on the street outside the building, etc...). She is probably better off just continuing her job search. This is independent of whether or not Alice's decision is best for society as a whole.
2Multiheaded12y
Oh, that makes sense.
1taelor12y
The problem with saying that we should make life more fair is that life is often unfair with regard to our ability to make it more fair.

The automatic pursuit of fairness might lead to perverse incentives. I have in mind some (non-genetically related) family in Mexico who don't bother saving money for the future because their extended family and neighbours would expect them to pay for food and gifts if they happen to acquire "extra" cash. Perhaps this "Western" patriarchal peculiarity has some merit after all.

Is this really about fairness? Seems like different people agree that fairness is a good thing, but use different definitions of fairness. Or perhaps the word fairness is often used to mean "applause lights of my group".

For one person, fairness means "everyone has food to eat"; for another, fairness means "everyone pays for their own food". Then proponents of one definition accuse the others of not being fair -- the debate is framed as if the problem is not different definitions of fairness, but rather our group caring about fairness and the other group ignoring fairness; which of course means that we are morally right and they are morally wrong.

3Jayson_Virissimo12y
IDK, but I have heard people refer to fairness in similar situations, so I am merely adopting their usage. I agree. To a large degree the near universal preference for "fairness" in humans is illusory, because people mean mutually contradictory things by it. I believe "fairness" can be given a fairly rigorous definition (I have in mind people like Rawls), but the second you get explicit about it, people stop agreeing that it is such a good thing (and therefore, it loses its moral force as a human universal).
1Nornagest12y
One wonders whether food and gifts translate into status more or less effectively than whatever they might buy to that end in "Western" society would. Scare quotes because most of Mexico isn't much more or less Western than the US, all things considered.
1Jayson_Virissimo12y
Yeah, the scare quotes are because I dislike the use of "Western" to mean English-speaking cultures rather than the Greek-Latin-Arabic influenced cultures.
4John_Maxwell12y
I'm not convinced fairness is inherently valuable.

  • Envy is an unpleasant emotion that should probably be eliminated.

  • I like being part of egalitarian social groups, but I don't think status inequality has to follow inevitably from material inequality.
9Paul Crowley12y
I don't think that fairness is terminally valuable, but I think it has instrumental value.

Gene Hofstadt: You people. You think money is the answer to every problem.

Don Draper: No, just this particular problem.

Mad Men, "My Old Kentucky Home"

Another good one from Don Draper:

I hate to break it to you, but there is no big lie, there is no system, the universe is indifferent.

1JulianMorrison12y
This is mistaken because systems can and do assemble out of sufficiently similar people pursuing self interest in a way that ends up coordinated because their motivations are alike. Capitalism is the simplest and most obvious example of such a system, but I'd argue things like patriarchy and racism are similar.
4FiftyTwo12y
The point is the system doesn't have particular overriding goals, or central coordination, and isn't interested in you personally. In context, he was speaking to counter-culture people who thought the system was against them, in an ego-satisfying way that makes them feel significant. He counters that it is simply indifferent to them.

A faith which cannot survive collision with the truth is not worth many regrets.

Arthur C. Clarke

The trouble is, the most problematic kinds of faith can survive it just fine.

Which leads us to today's Umeshism: "Why are existing religions so troublesome? Because they're all false, so the only ones still around are the ones dangerous enough to survive collision with the truth."

1Multiheaded12y
I'm not sure if I can really call myself Gnostic, but if I can, mine's neither troublesome*, nor does it make any claims inconsistent with a sufficiently strong simulation hypothesis. -* (when e.g. Voegelin was complaining about "Gnostic" ideas of rearranging society, he was 1) obviously excluding any transformation he approved of, perhaps considering it "natural" and not dangerous meddling, and 2) blaming a fairly universal kind of radicalism correlated with all monotheistic or quasi-monotheistic worldviews; he's essentially privileging the hypothesis to vent about personality types he dislikes, and conservatives should really look at these things more objectively for the sake of their own values)
3Eugine_Nier12y
Um, no. He was complaining about attempts to rearrange society from the top down.
0Multiheaded12y
The problem is, hardly anyone else would describe a person who's actually in a position of power to do the rearranging - like e.g. Lenin - as "Gnostic"; he has certainly been known as a dreamer blind to reality, but as I pointed out that's a very general indictment. The way it's actually used throughout history, "Gnosticism" has the connotations of a monastic life and mystical pursuits, detached from daily life or outright fleeing from society; after all, no leader who actually left a noticeable mark on society has ever been called that. Many parallels have been drawn between Marxism/Fascism/transhumanism/etc and religious fundamentalism, but those parallels did not include a persecuted, non-populist and underground branch of a religion. The word has always been associated with "heresy", and a tendency that's imposing its own dogma & suppressing opposition is not called a "heresy". Voegelin should've introduced a new term for the category of people he wanted to indict instead of appropriating an unsuitable word.
5NancyLebovitz12y
That's very nice to say, but people are apt to find giving up some faiths very emotionally wrenching and socially costly (even if the faith isn't high status, a believer is likely to have a lot of relationships with people who are also believers). Now what?

The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...

[Vimes] distrusted the kind of person who'd take one look at another man and say in a lordly voice to his companion, "Ah, my dear sir, I can tell you nothing except that he is a left-handed stonemason who has spent some years in the merchant navy and has recently fallen on hard times," and then unroll a lot of supercilious commentary about calluses and stance and the state of a man's boots, when exactly the same comments could apply to a man who was wearing his old clothes because he'd been doing a spot of home bricklaying for a new barbecue pit, and had been tattooed once when he was drunk and seventeen and in fact got seasick on a wet pavement. What arrogance! What an insult to the rich and chaotic variety of the human experience!

-- Terry Pratchett, Feet of Clay

Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:

Encyclopedia Brown? What a hack! To this day, I occasionally reach into my left pocket for my keys with my right hand, just to prove that little brat wrong.

2tut12y
Sounds like Vimes doesn't like Sherlock Holmes much.
3Multiheaded12y
Gee, you think?
1tut12y
Well, the quote made me think of this. Now that I looked up that post I notice that it is downvoted, so perhaps it isn't relevant. But the behavior that Vimes expresses distrust of in the Pratchett quote is pretty much the exact behavior that is used to show off how intelligent/perceptive Holmes is, and which the poster wants to use as an example for rationalists.
0MixedNuts12y
It is relevant and obvious. I suppose it was downvoted for the latter.

"What really is the point of trying to teach anything to anybody?" This question seemed to provoke a murmur of sympathetic approval from up and down the table. Richard continued, "What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that's really the essence of programming. By the time you've sorted out a complicated idea into little steps that even a stupid machine can deal with, you've learned something about it yourself."

Douglas Adams, Dirk Gently's Holistic Detective Agency

What really matters is:–

  1. Always try to use the language so as to make quite clear what you mean and make sure your sentence couldn't mean anything else.

  2. Always prefer the plain direct word to the long, vague one. Don't implement promises, but keep them.

  3. Never use abstract nouns when concrete ones will do. If you mean "More people died" don't say "Mortality rose."

  4. In writing. Don't use adjectives which merely tell us how you want us to feel about the thing you are describing. I mean, instead of telling us a thing was "terrible," describe it so that we'll be terrified. Don't say it was "delightful"; make us say "delightful" when we've read the description. You see, all those words (horrifying, wonderful, hideous, exquisite) are only like saying to your readers, "Please will you do my job for me."

  5. Don't use words too big for the subject. Don't say "infinitely" when you mean "very"; otherwise you'll have no word left when you want to talk about something really infinite.

-- C. S. Lewis

‘I’m exactly in the position of the man who said, ‘I can believe the impossible, but not the improbable.’’

‘That’s what you call a paradox, isn’t it?’ asked the other.

‘It’s what I call common sense, properly understood,’ replied Father Brown. ‘It really is more natural to believe a preternatural story, that deals with things we don’t understand, than a natural story that contradicts things we do understand. Tell me that the great Mr Gladstone, in his last hours, was haunted by the ghost of Parnell, and I will be agnostic about it. But tell me that Mr Gladstone, when first presented to Queen Victoria, wore his hat in her drawing-room and slapped her on the back and offered her a cigar, and I am not agnostic at all. That is not impossible; it’s only incredible.’

-G. K. Chesterton, The Curse of the Golden Cross

"What was the Sherlock Holmes principle? 'Once you have discounted the impossible, then whatever remains, however improbable, must be the truth.'"

"I reject that entirely," said Dirk sharply. "The impossible often has a kind of integrity to it which the merely improbable lacks. How often have you been presented with an apparently rational explanation of something that works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, 'Yes, but he or she simply wouldn't do that.'"

"Well, it happened to me today, in fact," replied Kate.

"Ah, yes," said Dirk, slapping the table and making the glasses jump. "Your girl in the wheelchair -- a perfect example. The idea that she is somehow receiving yesterday's stock market prices apparently out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don't know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. We should therefore be very suspicious of it and all its specious rationality."

-- Douglas Adams. The Long Dark Tea-Time of the Soul (1988) p.169

I can't find the quote easily (it's somewhere in God, No!), but Penn Jillette has said that one aspect of magic tricks is the magician putting in more work to set them up than anyone sane would expect.

I'm moderately sure that he's overestimating how clearly the vast majority of people think about what's needed to make a magic trick work.

His partner Teller says the same thing here:

Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don't hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can't cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.

Edit: That trick is 19 minutes and 50 seconds into this video.

0TheOtherDave12y
It's not clear to me that clear thought on the part of the audience is necessary to make that statement true.
0Alejandro112y
Yes, exactly the same idea. Partial versions of your quote have been posted twice in LW already, and might have inspired me to post the Chesterton prior version, but I liked seeing the context for the Adams one that you provide.
3CronoDAS12y
Out of context, the quote makes much less sense; the specific example illustrates the point much better than the abstract description does. Just for fun, which of the following extremely improbable events do you think is more likely to happen first:

1) The winning Mega Millions jackpot combination is 1-2-3-4-5-6 (Note that there are 175,711,536 possible combinations, and drawings are held twice a week.)

2) The Pope makes a public statement announcing his conversion to Islam (and isn't joking).
8Alejandro112y
Assuming that the 123456 winning must occur by legit random drawing (not a prank or a bug of some kind that is biased towards such a simple result) then I'd go for the Pope story as more likely to happen any given day in the present. After all, there have been historically many examples of highly ranked members of groups who sincerely defect to opposing groups, starting with St. Paul. But I confess I'm not very sure about this, and I'm too sleepy to think about the problem rigorously. In the form you posed the question ("which is more likely to happen first") it is much more difficult to answer because I'd have to evaluate how likely are institutions such as the lottery and the Catholic Church to persist in their current form for centuries or millennia.
4CronoDAS12y
Good point.
2A1987dM12y
It'd be even more fun if you replaced "1-2-3-4-5-6" with "14-17-26-51-55-36". (Whenever I play lotteries I always choose combinations like 1-2-3-4-5-6, and I love to see the shocked faces of the people I tell, tell them that it's no less likely than any other combination but it's at least easier to remember, and see their perplexed faces for the couple seconds it takes them to realize I'm right. Someone told me that if such a combination ever won they'd immediately think of me. (Now that I think about it, choosing a Schelling point does have the disadvantage that should I win, I'd have to split the jackpot with more people, but I don't think that's ever gonna happen anyway.)) Dunno how you would count the (overwhelmingly likely) case where both Mega Millions and the papacy cease to exist without either of those events happening first, but let's pretend you said "more likely to happen in the next 10 years"... Event 1 ought to happen 0.6 times per million years on average; I dunno about the probability per unit time for Event 2, but it's likely about two orders of magnitude larger.
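For what it's worth, the arithmetic behind that 0.6-per-million-years figure checks out; a quick back-of-envelope sketch in Python, assuming two drawings per week and 175,711,536 equally likely combinations (both numbers taken from the comments above):

```python
# Back-of-envelope check of "Event 1 ought to happen 0.6 times per million years".
combinations = 175_711_536           # possible Mega Millions combinations (from the comment)
drawings_per_year = 2 * 52           # two drawings a week, roughly 104 a year
p_specific = 1 / combinations        # chance a single drawing is any one fixed combination

expected_per_million_years = p_specific * drawings_per_year * 1_000_000
print(round(expected_per_million_years, 2))  # ~0.59
```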
2Cyan12y
Aren't you choosing an anti-Schelling point? It seems to me that people avoid playing low Kolmogorov-complexity lottery numbers because of a sense that they're not random enough -- exactly the fallacious intuition that prompts the shocked faces you enjoy.

Choosing something that's "too obvious" out of a large search space can work if you're playing against a small number of competitors, but when there are millions of people involved, not only are some of them going to un-ironically choose "1-2-3-4-5-6", but more than one person will choose it for the same reason it appeals to you.

0Cyan12y
Thank you for that insightful observation. Just to follow up, army1987's actual choice is: So whether this choice is Schelling or anti-Schelling depends on reference sets that are quite fuzzy on the specified information, to wit, the set of non-random-seeming selections and (the proportion of players in) the set of people who play them.
6A1987dM12y
I still think many more people pick any given low Kolmogorov-complexity combination than any given high Kolmogorov-complexity combination, if anything because there are fewer of the former. If 0.1% of the people picked 01-02-03-04-05 / 06 and 99.9% of the people picked a combination from http://www.random.org/quick-pick/ (and discarded it should it look ‘not random enough’), there'd still be 175 thousand times as many people picking 01-02-03-04-05 / 06 as 33-39-50-54-58 / 23. (Likewise, the fact that the most common password is password doesn't necessarily mean that there are lots of idiots: it could mean that 0.01% of the people pick it and 99.99% pick one of more than 9,999 more complicated passwords. Not that I'm actually that optimistic.)
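The "175 thousand times" ratio follows directly from those assumed percentages; a small sketch (the 0.1%/99.9% split is the commenter's hypothetical, not data):

```python
# Ratio of players on the "simple" combination vs. any one specific random-looking one,
# under the hypothetical 0.1% / 99.9% split described above.
combinations = 175_711_536
share_simple = 0.001                   # 0.1% deliberately pick 01-02-03-04-05 / 06
share_random = 0.999                   # 99.9% take a uniformly random quick-pick
share_one_random_combo = share_random / combinations  # expected share on one specific combo

ratio = share_simple / share_one_random_combo
print(f"{ratio:,.0f}")  # ~175,887 -- i.e. about 175 thousand
```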
0wedrifid12y
With this in mind I think I would choose combinations that match the pattern /[3-9][0-9][3-9][0-9][1-6][0-9]/. Six digit numbers look too much like dates!
1sixes_and_sevens12y
1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery. That makes it considerably more likely to be reported as the outcome to a lottery, even if it's not more likely to be the outcome of a stochastic method of selecting numbers. After seeing quite a few examples, I've recently become very sensitive to comparisons of an abstract idea of something with an objective something, as if they were on equal footing. Your question explicitly says the Pope conversion is a legitimate non-shenanigans event, while not making the same claim of the lottery result. Was that intentional?
0CronoDAS12y
No, I just didn't think of it. (Assume that I meant that, if someone happens to have bought a 1-2-3-4-5-6 ticket, they would indeed be able to claim the top prize.)
2sixes_and_sevens12y
I might not have worded that very clearly. You said that the Pope was definitely not joking, (or replaced by a prankster in a pope suit), but left it open as to whether the lottery result was actually a legitimate sequence of numbers drawn randomly from a lottery machine, or somehow engineered to happen. In that sense, you're comparing a very definite unlikely event (the Pope actually converting to Islam) to a nominally unlikely event (1-2-3-4-5-6 coming up as the lottery results, for some reason that may or may not be a legitimate random draw). Was that intentional?
0CronoDAS12y
No, but if someone successfully manages to rig the lottery to come up 1-2-3-4-5-6, and doesn't get caught, I'd count that as an instance. Similarly, if the reason the Pope issued the public statement was that his brother was being held hostage or something, and he recants after he's rescued, that's good enough, too; I just wanted to rule out things like April Fools jokes, or off-the-cuff sarcastic remarks.
-1APMason12y
I don't think that's true. If you were going to tamper with the lottery, isn't your most likely motive that you want to win it? Why, then, set it up in such a way that you have to share the prize with the thousands of other people who play those numbers?
1sixes_and_sevens12y
I specified "overt tampering" rather than "covert tampering". If you wanted to choose a result that would draw suspicion, 1-2-3-4-5-6 strikes me as the most obvious candidate.
0A1987dM12y
Why would anyone want to do that? (I'm sure that any reason for that would be much more likely than 1 in 175 million, but still I can't think of it.)
8sixes_and_sevens12y
The three most obvious answers (to my mind) are:

1) to demonstrate your Big Angelic Powers
2) to discredit the lottery organisers
3) as a prank / because you can
0[anonymous]12y
The former will happen about once every couple of million years on average, so I'd say the latter is more likely by at least a factor of 100.

The ghost of Parnell is Far, the presentation to the Queen is Near?

1Alejandro112y
Perhaps. I had thought of the quote in the context of a distinction between epistemic/Bayesian probability and physical possibility or probability. For us (though perhaps not for Father Brown) the ghost story is physically impossible, it contradicts the basic laws of reality, while the presentation story does not. (In terms of the MWI we might say that there is a branch of the wavefunction where Gladstone offered the Queen a cigar, but none where a ghost appeared to him.) However, we might very well be justified in assigning the ghost story a higher epistemic probability, because we have more underlying uncertainty about (to use your words) Far concepts like the possibility of ghosts than about Near ones like how Gladstone would have behaved in front of the Queen.
2cousin_it12y
I seem to instinctively assign the ghost story a lower probability. The lesson of the quote might still be valid, can you come up with an example that would work for me?
0Alejandro112y
Sure. Take one mathematical fact which the mathematical community accepts as true, but which has a complicated proof only recently published and checked. Surely your epistemic probability that there is a mistake in the proof and the theorem is false should be larger than the epistemic probability of the Gladstone story (if you are not convinced, add more outrageous details to it, like Gladstone telling the Queen "What's up, Vic?"). But according to your current beliefs, in the actual world the theorem is necessarily true and its negation impossible, while the Gladstone story is possible in the MWI sense.
0[anonymous]12y
Whuh? I have logical uncertainty about the theorem.

[Hitler] has grasped the falsity of the hedonistic attitude to life. Nearly all western thought since the last war, certainly all "progressive" thought, has assumed tacitly that human beings desire nothing beyond ease, security, and avoidance of pain. In such a view of life there is no room, for instance, for patriotism and the military virtues. The Socialist who finds his children playing with soldiers is usually upset, but he is never able to think of a substitute for the tin soldiers; tin pacifists somehow won’t do. Hitler, because in his own joyless mind he feels it with exceptional strength, knows that human beings don’t only want comfort, safety, short working-hours, hygiene, birth-control and, in general, common sense; they also, at least intermittently, want struggle and self-sacrifice, not to mention drums, flag and loyalty-parades.

However they may be as economic theories, Fascism and Nazism are psychologically far sounder than any hedonistic conception of life. The same is probably true of Stalin’s militarized version of Socialism. All three of the great dictators have enhanced their power by imposing intolerable burdens on their peoples. Whereas Socialism, an

... (read more)
5Oligopsony12y
I don't see that that's true. Germany loved Hitler when he was giving them job security and easy victories; he became much less popular once the struggle and danger and death arrived on the scene.
3Multiheaded12y
They grumbled, but 95% of them obeyed, worked, killed and died up until the spring of 1945. A huge number of Germans certainly believed that sticking with the Nazis until the conflict's end was a much lesser evil compared to another national humiliation on the scale of Versailles. And look at the impressive use to which he and Goebbels put evaporative cooling of group beliefs to radicalize the faithful after the July plot. Purging a few malcontents led to a significant increase in zeal and loyalty even as things were getting visibly worse and worse.
0FiftyTwo12y
Full review here:
0Multiheaded12y
There's a pretty good and complete archive of all things by St. George at orwell.ru, by the way. As a pleasant exercise, I'm going to go through the Russian translations over there and see if I can correct anything.

On politics as the mind-killer:

We’re at the point where people are morally certain about the empirical facts of what happened between Trayvon Martin and George Zimmerman on the basis of their general political worldviews. This isn’t exactly surprising—we are tribal creatures who like master narratives—but it feels as though it’s gotten more pronounced recently, and it’s almost certainly making us all stupider.

-- Julian Sanchez (the whole post is worth reading)

6RobertLumley12y
Does anyone know the exact quote to which he is referring here?

We've reached the point where the weather is political, and so are third person pronouns.

5Multiheaded12y
Well, third-person pronouns were always political - it's just that only the last century's shift in values and ideological attitudes has allowed the spread of gender-neutral pronouns. Before that the issue was taken to be completely one-sided.
2hairyfigment12y
Conversely, evolution does not count as "political" here because we all belong to one camp. (Posted from Louisiana.)
5RobertLumley12y
I think it's this but I'm not sure:

Tell that to Socrates.

6FiftyTwo12y
Given that they supposedly drowned people for discussing irrational numbers, that seems false.
0ec42912y
Sorry to have to tell you this, but Pythagoras of Samos probably didn't even exist. More generally, essentially everything you're likely to have read about the Pythagoreans (except for some of their wacky cultish beliefs about chickens) is false, especially the stuff about irrationals. The Pythagoreans were an orphic cult, who (to the best of our knowledge) had no effect whatsoever on mainstream Greek mathematics or philosophy.
3fubarobfusco12y
Source?
0ec42912y
Well, my source is Dr Bursill-Hall's History of Mathematics lectures at Cambridge; I presume his source is 'the literature'. Sorry I can't give you a better source than that.
5[anonymous]12y
Can anyone confirm this? Preferably with citation?
-5CronoDAS12y
1MixedNuts12y
Wait, is there any actual disagreement about what happened? I'm reading older Julian Sanchez posts, but the only point of disagreement seems to be "Once Zimmerman confronted Martin with a gun, did Martin try to disarm him before getting shot?". None of what I've read considers the question relevant; they base their judgements on already known facts such as "someone shot someone else then was let free rather than have a judge decide whether it counted as self-defense".
7TimS12y
There's substantial disagreement about the facts. For example, someone was heard yelling for help, but no one agrees whether that was Zimmerman or Martin. I can talk about Stand-Your-Ground laws and their apparent effect in this case, but I don't want to drone on.
6[anonymous]12y
There is the minor matter of people trying very hard to spin and misrepresent events. At this point I can't help but link to this very relevant Aurini talk on the subject.
6CaveJohnson12y
Thank you for the link! Checking out some of his other videos and links I found this podcast on the topic rather interesting commentary. Especially the summary of facts starting at the 23 minute mark.
2David Althaus12y
Link doesn't work. Here is a new one.
1CaveJohnson12y
Thank you! Fixed the link to match yours.
1[anonymous]12y
Yes I listened to that podcast as well. I am much more confident that Zimmerman was not the attacker than I was about the innocence of Amanda Knox. His instant demonization and near lynching (people putting out a dead or alive bounty) seems a very troubling development for American society.
0CharlieSheen12y
More justice for Trayvon I guess.

"Muad’Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It is shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad‘Dib knew that every experience carries its lesson"

Frank Herbert, Dune

It took me years to learn not to feel afraid due to a perceived status threat when I was having a hard time figuring something out.

A good way to make it hard for me to learn something is to tell me that how quickly I understand it is an indicator of my intellectual aptitude.

Interesting article about a study on this effect:

Dweck’s researchers then gave all the fifth-graders a final round of tests that were engineered to be as easy as the first round. Those who had been praised for their effort significantly improved on their first score—by about 30 percent. Those who’d been told they were smart did worse than they had at the very beginning—by about 20 percent.

Dweck had suspected that praise could backfire, but even she was surprised by the magnitude of the effect. “Emphasizing effort gives a child a variable that they can control,” she explains. “They come to see themselves as in control of their success. Emphasizing natural intelligence takes it out of the child’s control, and it provides no good recipe for responding to a failure.”

This seems like a more complicated explanation than the data supports. It seems simpler, and equally justified, to say that praising effort leads to more effort, which is a good thing on tasks where more effort yields greater success.

I would be interested to see a variation on this study where the second-round problems were engineered to require breaking of established first-round mental sets in order to solve them. What effect does praising effort after the first round have in this case?

Perhaps it leads to more effort, which may be counterproductive for those sorts of problems, and thereby lead to less success than emphasizing intelligence. Or, perhaps not. I'm not making a confident prediction here, but I'd consider a praising-effort-yields-greater-success result more surprising (and thus more informative) in that scenario than the original one.

7Spurlock12y
I agree that the data doesn't really distinguish this explanation from the effect John Maxwell described, mainly I just linked it because the circumstances seemed reminiscent and I thought he might find it interesting. It's worth noting though that these aren't competing explanations: your interpretation focuses on explaining the success of the "effort" group, and the other focuses on the failure of the "intelligence" group. To help decide which hypothesis accounts for most of the difference, there should really have been a control group that was just told "well done" or something. Whichever group diverged the most from the control, that group would be the one where the choice of praise had the greatest effect.
0matt12y
I think the universe is not usually engineered to perversely punish effort. Extra effort may sometimes be counterproductive… but I think most people I know fail more often for too little effort than for too much. "Use the Try Harder, Luke" is usually good advice.
2TheOtherDave12y
I agree, so if you intended this as a counterpoint, it seems to follow that I have inconsistent beliefs. If so, can you expand?
5matt12y
I'm inferring more than you said, which isn't making it easy for anyone to understand me. Sorry about that. If you think your comment discusses an edge case, and that it's a good general practice to praise/reward effort rather than intelligence, then we are in agreement and this conversation should probably end. If you think it's a good general practice to spend the cognitive effort required to scan the world for situations where each type of praise/reward would most help… then I think we're disagreeing. Long comment following - summary at bottom. Dweck's work sounded a strong chord for me. I was an intelligent kid often praised for my intelligence, and often very scared that I would be discovered not to be as intelligent as everyone seemed to think I was (because the world was full of stuff that I wasn't immediately good at). I therefore avoided many pursuits that I thought would lead others to discover their previous overestimate of my innate, fixed intelligence. I think there are many children and adults who live in that place (I think that, for example, there is a lot of evidence in Eliezer's writing that he has a fixed conception of intelligence (eg. http://lesswrong.com/lw/bdo/rationality_quotes_april_2012/68n2). I also think that praise of my intelligence in my youth had a strong influence on my forming that model (fixed intelligence, not being good at something immediately is evidence that you're not as clever as they thought). After reading Dweck's work I've tried hard to alter my model of the universe. Innate intelligence obviously varies between individuals… but that's not very helpful or important to me, and spending time thinking about it doesn't help me much. As an individual with whatever innate capacity I have I benefit much more by considering the very significant impact my efforts have on what I can understand and what I can achieve. Anyone I meet who praises me for my (innate, fixed) intelligence undermines my efforts to focus on what I can ch
0TheOtherDave12y
Well, what my comment discusses is a potential direction of research, and makes some predictions about the results of that, and isn't really about application at all. As far as application goes, I agree that it's a good general practice to praise/reward effort rather than intelligence. Also to reward effort rather than strength, dexterity, attractiveness, and various other attributes. More generally, I think it's a good practice to reward behaviors rather than attributes. Rewarding behaviors gets me more of those behaviors. Rewarding attributes gets me nothing predictable.
1Eugine_Nier12y
There's something to be said for rewarding results instead of effort to teach people to make sure they are actually trying rather than trying to try.
-1TheOtherDave12y
Better results than fixed attributes, certainly. No objection to rewarding results as well. My primary concern with rewarding results instead is that it seems to create the incentive to only tackle problems I'm confident I can succeed at.
7undermind12y
I've seen this study cited a lot; it's extremely relevant to smart self- and other-improvement. But there are various possible interpretations of the results, besides what the authors came up with... Also, how much has this study been replicated? I'd like to see a top-level post about it.

I believe I am accurate in saying that educators too are interested in learnings which make a difference. Simple knowledge of facts has its value. To know who won the battle of Poltava, or when the umpteenth opus of Mozart was first performed, may win $64,000 or some other sum for the possessor of this information, but I believe educators in general are a little embarrassed by the assumption that the acquisition of such knowledge constitutes education. Speaking of this reminds me of a forceful statement made by a professor of agronomy in my freshman year in college. Whatever knowledge I gained in his course has departed completely, but I remember how, with World War I as his background, he was comparing factual knowledge with ammunition. He wound up his little discourse with the exhortation, "Don't be a damned ammunition wagon; be a rifle!"

-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)

I know a lot of scientists as well as laymen are scornful of philosophy - perhaps understandably so. Reading academic philosophy journals often makes my heart sink too. But without exception, we all share philosophical background assumptions and presuppositions. The penalty of not doing philosophy isn't to transcend it, but simply to give bad philosophical arguments a free pass.

David Pearce

8Jayson_Virissimo12y
This is analogous to my main worry as someone who considers himself a part of the anti-metaphysical tradition (like Hume, the Logical Positivists, and to an extent Less Wrongers): what if by avoiding metaphysics I am simply doing bad metaphysics.
4VKS12y
As an experiment, replace 'metaphysics' and 'metaphysical' with 'theology' and 'theological' or 'spirituality' and 'spiritual'. Then the confusion is obvious. Unless I don't understand what you mean by metaphysics, and just have all those terms bunched up in my head for no reason, which is also possible.
5Viliam_Bur12y
Yes. There is a difference between speaking imprecisely because we don't know (yet) how to express it better, and speaking things unrelated to reality. The former is worth doing, because a good approximation can be better than nothing, and it can help us to avoid worse approximations.
1VKS12y
Well, but what is it that is meant by metaphysics? I've heard the word many times, seen its use, and I still don't know what I'm supposed to do with it. Ok, so now I've read the Wikipedia article, and now I'm unconvinced that when people use the term they mean what it says they mean. I know at least some people who definitely used "metaphysical" in the sense of "spiritual". What do you mean by metaphysics? Also unconvinced that it has any reason to be thought of as a single subject. I get the impression that the only reason these topics are together is that they feel "big". But I will grant you that given Wiki's definition of metaphysics, there is no reason to think that it is in principle incapable of providing useful works. I revise my position to state that arguments should not be dismissed because they are metaphysical, but rather because they are bad. Furthermore, I suspect that "metaphysics" is just a bad category, and should, as much as possible, be expunged from one's thinking.
4Jayson_Virissimo12y
We may be moving too fast when we expunge metaphysics from our web-of-belief. Say you believe that all beliefs should pay rent in anticipated experiences. What experiences do you anticipate only because you hold this belief? If there aren't any, then this seems awfully like a metaphysical belief. In other words, it might not be feasible to avoid metaphysics completely. Even if my specific example fails, the metaphysicians claim to have some that succeed. Studying metaphysics has been on my to-do list for a long time (if only to be secure in my belief that we don't need to bother with it), but for some reason I never actually do it.
6Will_Newsome12y
(LessWrong implicitly assumes certain metaphysics pretty often, e.g. when they talk about "simulation", "measure", "reality fluid", and so on; it seems to me that "anthropics" is a place where experience meets metaphysics. My preferred metaphysic for anthropics comes from decision theory, and my intuitions about decision theory come to a small extent from theological metaphysics and to a larger extent from theoretical computer science, e.g. algorithmic probability theory, which I figured is a metaphysic for the same reason that monadology is a metaphysic. ISTM that even if metaphysics aren't as fundamental as they pretend to be, they're still useful and perhaps necessary for organizing our experiences and intuitions so as to predict/understand prospective/counterfactual experiences in highly unusual circumstances (e.g. simulations).)

When some Lesswrong-users use 'metaphysics', they mean other people's metaphysics. This is much like how some Christians use the term 'religion'.

3Will_Newsome12y
Hm... one rationale for such a designation might be: "A 'metaphysic' is a model that is at least one level of abstraction/generalization higher than my most abstract/general model; people who use different models than me seem to have higher-level models than I deem justified given their limited evidence; thus those higher-level models are metaphysical." Or something? I should think about this more.
9J_Taylor12y
Your theory is much nicer than mine. Mine essentially amounts to people believing "I understand reality, your beliefs are scientifically justified, he endorses metaphysical hogwash." Further, at least since the days of the Vienna Circle, some scientifically-minded individuals have used 'metaphysics' as a slur. (I mean, at least some of the Logical Positivists seriously claimed that metaphysical terms were nonsense, that is, having neither truth-value nor meaning.) I have read Yudkowsky discuss matters of qualia and free will. This site contains metaphysics, straight up. I assume that anyone who dismisses metaphysics is either dismissing folk-usage of the term or is taking too much pride in their models of reality (that latter part does somewhat match your stipulative explanation.) (Oh, I'm not sure if your joke was intentional, but I still think it is funny that some possible humans would reject metaphysics for being 'models' which are too 'abstract', 'of higher-level', and not 'justified' given the current 'evidence'.)
0TheOtherDave12y
Agreed that Will's theory is nicer than yours. That said, with emphasis on "some," I think yours is true. Although the Christians I know are far more likely to use "religion" to refer to Christianity. (Still more so are the Catholics I know inclined to use "religion" to refer to Catholicism.)
0J_Taylor12y
I was just referring to some Protestants who will share such statements as "Christianity isn't a religion, it's a relationship" or "I hate religion too. That's why I believe in Jesus." Of course, most Protestants do not do this.
0TheOtherDave12y
Ah, I see. The Christians I know are more prone to statements like "Religion is important, because it teaches people about the importance of Jesus' love."
-2Will_Newsome12y
Just came across a comment by Deogolwulf in response to a comment on one of Mencius Moldbug's posts: Oh, snap!
7TheOtherDave12y
I couldn't find the original on a quick Google, but: Which is to say, believing that something can be entirely explained in terms of something else doesn't absolve me from the need to deal with it. Even if I and the bull and my preference to remain alive can all be entirely captured by the sufficiently precise specification of a set of quarks, it doesn't follow that there exists no such person, no such bull, or no such preference.
6Will_Newsome12y
The argument was a meta-level undermining argument supporting the necessity of metaphysical reasoning (of the exact sort that you're engaging in in your comment);—it wasn't an argument about the merits of reductionism. That would likely have been clearer had I included more context; my apologies.
0TheOtherDave12y
(nods) Context is often useful, agreed. Also, metaphysical reasoning is often necessary, agreed. Sadly, I often find it necessary in response to metaphysical reasoning introduced to situations without a clear sense of what it's achieving and whether that end can be achieved without it. In this sense it's rather like lawyers. Not that I'm advocating eliminating all the lawyers, not even a little. Lawyers are useful. They're even useful for things other than defending oneself from other lawyers. But I've also seen situations made worse because one party brought in a lawyer without a clear understanding of the costs and benefits of involving lawyers in that situation. I suspect that a clear understanding of the costs and benefits of metaphysical reasoning is equally useful.
0Bugmaster12y
Where is that quote from, out of curiosity ?
2TheOtherDave12y
If I could remember that, I probably could have found it on Google in the first place.
0Bugmaster12y
...fair enough. I tried looking on Google, and couldn't find it either. Perhaps your quote is original enough for you to claim authorship :-/
0TheOtherDave12y
Perhaps? I'm fairly sure I read it somewhere, but my memory is unreliable.
0J_Taylor12y
Deogolwulf is the sort of fellow who uses 'proposition' while obviously meaning 'statement'. Also, some of the first paragraph is pure unreflective sophistry. Still, the second half: Following this epistemic attack, I am imagining Deogolwulf holding up a mirror to TGGP's face and stating "No, TGGP, you are the metaphysics."
0Eugine_Nier12y
I think part of the problem is different senses of the word "reduce". Consider the following statements:

1) All things ultimately reduce to quarks (nitpick: and leptons).
2) Quarks and leptons ultimately reduce to quantum wave functions.
3) Quantum wave functions ultimately reduce to mathematics.
4) All mathematics ultimately reduces to the ZFC axioms.

Notice that all these statements are true (I'm not quite sure about the first one) for slightly different values of "reduces".
0VKS12y
What?
4J_Taylor12y
When someone on Lesswrong uses the term 'simulation', they are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B). (This particular subject often falls under the part of metaphysics known as ontology.) The same applies to usage of most terms.
1VKS12y
Correct me if I'm wrong, but "They are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B)." and "They are probably making some implicit claims about what it means for some object(A) to be a simulation of some other object(B)" mean exactly the same thing.
0J_Taylor12y
They do happen to mean the same thing. This is because the question "What does it mean for some y to be an x?" is a metaphysical question. "They are probably making some aesthetic claim about why object(A) is more beautiful than object(B)" and "They are probably making some claim about why object(A) is more beautiful than object(B)" also mean the same thing.
0TheOtherDave12y
Come to that, they both probably mean the same thing as "They are probably making some implicit claims about how some object(B) differs from some other object (A) it simulates," which eliminates the reference to meaning as well.
1fubarobfusco12y
Well, that's a "should" statement, so we cash it out in terms of desirable outcomes, e.g.:

  • People who spend more time elaborating on their non-anticipatory beliefs will not get as much benefit from doing so as people who spend more time updating anticipatory beliefs.

  • If two people (or groups, or disciplines) ostensibly aim at the same goals, and deploy similar amounts of resources and effort; but one focuses its efforts with anticipation-controlling beliefs while the other relies on non-anticipation-controlling beliefs, then the former will achieve the goals more than the latter. (Examples could be found in charities with the goal of saving lives; or in martial arts schools with the goal of winning fights.)
0Incorrect12y
Where Recursive Justification Hits Bottom - EY Can you give any examples of modern metaphysics being useful?
6thomblake12y
Ontology begat early AI, which begat object-oriented programming.
0Viliam_Bur12y
I anticipate experiencing more efficient thinking, because I will have to remember less and think about fewer topics, while achieving the same results. What do you anticipate experiencing after studying metaphysics (besides being able to signal deep wisdom)?
3Will_Newsome12y
I anticipate understanding the abstract nature of justification, thus allowing me to devise better-justified institutions. I anticipate understanding cosmology and its role in justification, thus allowing me to understand how to transcend the contingent/universal duality of justification. I anticipate understanding infinities and their actuality/non-actuality and thus what role infinities play in justification. I anticipate graving new values on new tables with the knowledge gleaned from a greater understanding of justification—I anticipate seeing what both epistemology and morality are special cases and approximations of, and I anticipate using my knowledge of that higher-level structure to create new values. And so on.
1VKS12y
You might be better off studying mathematics, then.
0Will_Newsome12y
That too, yes. Algorithmic probability is an example of a field that is pretty mathematical and pretty metaphysical. It's the intellectual descendant of Leibniz's monadology. Computationalism is a mathematical metaphysic.
0VKS12y
If you would be so kind as to try and tell me what you mean by "metaphysic", I would be much less confused.
0Will_Newsome12y
By "metaphysic" I mean a high-level model for phenomena or concepts that you can't immediately falsify because, though the model explains all of the phenomena you are aware of, the model is also very general. E.g., if you look at a computer processor you can say "ah, it is performing a computation", and this constrains your anticipations quite a bit; but if you look at a desk or a chair and say "ah, it is performing a computation", then you've gotten into metaphysical territory: you can abstract away the concept of computation and apply it to basically everything, but it's unclear whether or not doing so means that computation is very fundamental, or if you're just overapplying a contingent model. Sometimes when theorizing it's necessary to choose a certain metaphysic: e.g., I will say that I am an instance of a computation, and thus that a computer could make an exact simulation of me and I would exist twice as much, thus making me less surprised to find myself as me rather than someone else. Now, such a line of reasoning requires quite a few metaphysical assumptions—assumptions about the generalizability of certain models that we're not sure do or don't break down—but metaphysical speculation is the best we can do because we don't have a way of simulating people or switching conscious experience flows with other people. That's one possible explanation of "metaphysic"/"metaphysics", but honestly I should look into the relevant metaphilosophy—it's very possible that my explanation is essentially wrong or misleading in some way.
2VKS12y
Why would generality be opposed to falsifiability? Wouldn't having a model be more general lead to easier falsifiability, given that the model should apply more broadly? In order to tell whether something is performing a computation, you try to find some way to get the object to exhibit the computation it is (allegedly) making. So -- if I understand correctly -- then a model is metaphysical, in the things you write, if applying it to a particular phenomenon requires an interpretation step which may or may not be known to be possible. How does this differ from any other model, except that you're allowing yourself to be sloppy with it? If you just replace "metaphysic" by "model", "metaphysical assumptions" by "assumptions about our models and their applicability", "metaphysical speculation" by "speculations based on our models", I think the things you're trying to say become clearer. If a bit less fancy-sounding. If the thing I understood is the thing you tried to say.
-1Will_Newsome12y
I could replace all my uses of the word "metaphysical" with "sloppily-general", I guess, but I'm not sure it has quite the right connotations, and "metaphysical" is already the standard terminology. "Metaphysical" is vague in a somewhat precise way that "sloppily-general" isn't. I appreciate the general need for down-to-earth language, but I also don't want to consent to the norm of encouraging people to take pains to write in such a way as to be understood by the greatest common factor of readers.
0VKS12y
"X is a metaphysic" becomes "X is somehow a model (of something), but I'm not sure how". "Y is metaphysical" becomes "Y is about or related to a model (somehow)". I assume my understanding is correct, since you didn't correct it. "sloppily-general" is then indeed kind of far from the intended meaning, but that's just because it's a terrible coinage. Elsewhere, somebody posted a link to the Stanford Encyclopedia of Philosophy's definition of metaphysics. They say right in the intro that they haven't found a good way to define it. The Wikipedia article on metaphysics's body implies a different definition than its opening paragraph. In common parlance, it's used for some vague spiritualish thing. And your definition is different from all of these. Do you think that the term could reasonably be expected to be understood the way you intended it to? "Metaphysical" isn't vague in a somewhat precise way. It isn't even evocative, as its convoluted etymology prevents even that. It's just vague and used by philosophers. The greatest common factor of readers isn't even here. The point is more to be understood by readers at all. Don't make your writing more obscure than it needs to be. Hard concepts are hard enough as is, without making the fricking idea of "somehow a model" worth 3 hours' worth of discussion.
-2Will_Newsome12y
Sorry, I was just too lazy to correct it. Still too lazy.
2VKS12y
I give up. Good night.
-2VKS12y
Metaphysics can't even be a thing in a web of belief! It's more a box for a bunch of things, with a tag that says "Ooo". Unless you want to define it otherwise, or I'm more confused than I think I am. So the category only makes sense if you want to use it to describe your feelings for some given subject. Why would that be a good way to frame a field of study? That's what I suspect is problem with metaphysics; not the things in the box, which are arbitrary, rather that the box messes up your filing system.
2J_Taylor12y
Metaphysics, as a category, has its constituents determined by the contingent events of history. The same could be said for the categories of philosophy and art. As such, 'metaphysics' is a convenient bucket whose constituents do not necessarily have similarities in structure. At best, I think one could say that they have a Wittgensteinian family-resemblance. However, I am only defending the academic usage of the term. (More information here.) The folk usage seems to hold that metaphysics is "somewhere between "crystal healing" and "tree hugging" in the Dewey decimal system."
0VKS12y
Well that at least makes some sense. I was noticing that Wiki's definition and the definition implied by its examples were in conflict. I don't particularly see why the metaphysics bucket is convenient, though. Is there any point in discussing metaphysics as anything other than a cultural phenomenon among philosophers?
0J_Taylor12y
Unless you are a cladist, 'reptile' is a bucket which contains crocodiles, lizards, and turtles, but does not contain birds and mammals. The word is still sometimes useful for communication. It depends on your goals. I do not generally recommend it, however.
0VKS12y
My claim was not about the general lack of utility of buckets. Briefly, the reptile bucket is useful because reptiles are similar to one another, and thus having a way to refer to them all is handy. There is apparently no such justification for "metaphysics", except in the sense that its contents are related by history. But this clearly isn't the use you want to make of this bucket.
0J_Taylor12y
The word 'similar' is often frustratingly vague. However, crocodiles and birds share a more recent common ancestor than crocodiles and turtles. The word is nonetheless used. I do agree with you that it is frustrating that the word's usage is historically determined.
-2VKS12y
Well then the term reptile is somewhat deceptive in evolutionary biology, and based more on some consensus about appearance. Fine. Whatever. The point is that the word metaphysics isn't evocative in that way or any way, except in the context of its historical usage. As such, it cannot inform us in any way about any subject that isn't the phenomenon of its acceptance as a field, and is not even a useful subject heading, being a hodgepodge. We can choose whether to continue to use it, and I don't see why we should.
2J_Taylor12y
Within the field of philosophy, the usage is a fairly normal term, much like 'reptile' or 'sex' are normal terms for most people. Much of my vocabulary comes from that field and I am most comfortable using its terms. 'Metaphysics' is one of many problematic terms which are evocative to me, because I understand how these terms are used. Asking someone who studies philosophy to stop using 'metaphysics' is like asking someone who studies biology to stop using 'species'. However, it is your prerogative to use whatever terms you prefer. I am sure that we are both trying to be pragmatic.
1Viliam_Bur12y
Conventional usage seems to be: speaking about deep intangible topics. Which is a bad category, because it contains: abstract thinking + supernatural claims + complicated nonsense; especially the parts good for signalling wisdom.
0thomblake12y
It's a bit confusing in part because of its strange etymology. Originally, "meta" was used in the sense of "after", since "metaphysics" was the unnamed book that came after "physics" in the standard ordering of Aristotle's works. Later scholars accidentally connected that to something like our current usage of "meta", and a somewhat arbitrary field was born.

Pedantry and mastery are opposite attitudes toward rules. To apply a rule to the letter, rigidly, unquestioningly, in cases where it fits and in cases where it does not fit, is pedantry. ... To apply a rule with natural ease, with judgment, noticing the cases where it fits, and without ever letting the words of the rule obscure the purpose of the action or the opportunities of the situation, is mastery.

  • George Pólya, How to Solve It
6MixedNuts12y
...and that's why the rule doesn't apply to the reference class of cases I just constructed to contain only my own case, Officer.
2Strange712y
At which point the officer will demonstrate in no uncertain terms who is the master in the current situation.

Our minds contain processes that enable us to solve problems we consider difficult. "Intelligence" is our name for whichever of those processes we don't yet understand.

Some people dislike this "definition" because its meaning is doomed to keep changing as we learn more about psychology. But in my view that's exactly how it ought to be, because the very concept of intelligence is like a stage magician's trick. Like the concept of "the unexplored regions of Africa," it disappears as soon as we discover it.

-- Marvin Minsky, The Society of Mind

But, the hard part comes after you conquer the world. What kind of world are you thinking of creating?

Johan Liebert, Monster

By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race.

Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)

On specificity and sneaking on connotations; useful for the liberal-minded among us:

I think, with racism and sexism and 'isms' generally, there's a sort of confusion of terminology.

A "Racist1" is someone, who, like a majority of people in this society, has subconsciously internalized some negative attitudes about minority racial groups. If a Racist1 takes the Implicit Association Test, her score shows she's biased against black people, like the majority of people (of all races) who took the test. Chances are, whether you know it or not, you're a Racist1.

A "Racist2" is someone who's kind of an insensitive jerk about race. The kind of guy who calls Obama the "Food Stamp President." Someone you wouldn't want your sister dating.

A "Racist3" is a neo-Nazi. You can never be quite sure that one day he won't snap and kill someone. He's clearly a social deviant.

People use the word "Racist" for all three things, and I think that's the source of a lot of arguments. When people get accused of being racists, they evade responsibility by saying, "Hey, I'm not a Racist3!" when in fact you were only saying they were Racist1 or Racist2. B

... (read more)

How about:

  1. Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.

  2. Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)

Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.

Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?

8A1987dM12y
That (1) only makes sense if there is a “standard” definition of racist (and it's based on what people believe rather than/as well as what they do). The point of the celandine13 quote was indeed that there's no such thing.
6A1987dM12y
The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here. And given that we're running on corrupted hardware, I suspect that someone who does try to “perform Bayesian inference that somehow involves probabilities conditioned on the race of a person” ends up subconsciously double-counting evidence and therefore ends up with less accurate results than somebody who doesn't. (As for cases when the evidence from race is not so easy to screen off... well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.)
8Vaniver12y
I have seen accusations for racism as responses to people pointing that out.
6Eugine_Nier12y
Also, according to the U.S. Supreme Court, even if race is screened off, your actions can still be racist or something.
6Eugine_Nier12y
In real life, you don't have the luxury of gathering forensic evidence on everyone you meet.
5A1987dM12y
I'm not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.
6Eugine_Nier12y
Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see. There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.
6Multiheaded12y
... I facepalmed. Really, Eric? Sorry, I don't think that a moral realist is perceptive enough to the nuances and ethical knots involved to be a judge on this issue. I don't know, he might be an excellent scientist, but it's extremely stupid to be so rash when you're attempting serious contrarianism. Yep, let's all try to overcome bias really really hard; there's only one solution, one desirable state, there's a straight road ahead of us; Kingdom of Rationality, here we come! (Yvain, thank you a million times for that sobering post!)
4A1987dM12y
You know, there are countries where the intentional homicide rate is smaller than in John Derbyshire's country by nearly an order of magnitude. That thing doesn't exist in all countries. Plus, I think the reason why you don't see that many two-digit-IQ people among (say) physics professors is not that they don't make it, it's that they don't even consider doing that, so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.
-1Eugine_Nier12y
That's not the point. The point is that the black physics professor is less smart than the Jewish physics professor.
-2A1987dM12y
But the difference is smaller than for the median black person and the median Jewish person. (I said "even just knowing what their job is would screen off much of it", not "all of it".)
6private_messaging12y
The bell curve has both a mean and a standard deviation; you can have a 'race' with a lower mean and a larger standard deviation, and then if you e.g. filter by reliable accomplishment of some kind, such as solving some problem that the smartest people in the world attempted and failed to solve, you may end up with a situation where the population with the lower mean and larger standard deviation has fewer people who attain this, but those who do are on average smarter. Set the bar even higher, and the population with the lower mean and larger standard deviation has more people attaining it. Also, the Gaussian distribution can stop being a good approximation very far away from the mean. edit: and to reply to the grand-grandparent: I bet I can divide the world into a category that includes you and a category that does not include you, in such a way that the category including you has a substantially higher crime rate, or is otherwise bad. Actually, if you are from the US, I have a pretty natural 'cultural' category where your murder rate is about 5..10x the normal for such average income. The other category is the 'racists', i.e. the people who use skin colour as evidence. Those people also show substantially bad behaviour. You of course want to use skin colour as evidence, and don't want me to use your qualities as evidence. See if I care. If you want to use skin colour as evidence, lumping together everyone who's black, I want to use 'use of skin colour as evidence', lumping you together with all the nasty racists.
1A1987dM12y
IIRC, no substantial difference was found in the standard deviations among races. (Whereas for genders, they have the same mean but males have larger sigma, so there are both more male idiots than female idiots and more male geniuses than female geniuses.) Isn't IQ defined to be a Gaussian (e.g. IQ 160 just means ‘99.99683rd percentile among people your age’), rather than ‘whatever IQ tests measure’? If so, a better statement of that phenomenon would be “IQ tests are inaccurate for extreme values.” I want to use ‘use of “use of skin colour as evidence” as evidence’ as evidence, but I'm not sure what that's evidence for. :-)
5private_messaging12y
Even a small difference translates into an enormous ratio between the numbers of people several standard deviations from the mean... Yes, and it is defined to have a specific standard deviation as well. That definition makes it an unsuitable measure. The Gaussian distribution also arises from a sum of multiple independent variables. The statement was about intelligence, though, which is a different thing from both "what IQ tests measure" and "how IQ is defined". Another huge failing of IQ is that it doesn't measure the ability to build and use a huge searchable database of methods and facts. Building such a database is a long-term memory task and cannot be tested in a short time span; the existing knowledge can't be tested without massive influence from one's background. Likewise, the IQ test lacks any problems that are actually difficult enough to have solution methods that some people would know before the test and some wouldn't. Effectively, IQ tests do not test for heavily parallel processing capability. For example, I do believe that it would be possible to build a 'superhuman AI' that runs on a cellphone and aces IQ tests, and could perhaps deceive a human in a brief conversation. The same AI would never be able to invent a stone axe from scratch, let alone anything more complicated; it'd be nothing but a glorified calculator. Well, the people who use skin colour as evidence, I would guess, are on average less well behaved than the rest of society... so you can use it to guess someone's criminality or other untrustworthiness.
1A1987dM12y
Indeed, when I last took a few IQ tests I felt like I was being tested more for familiarity with concepts such as exclusiveOR, cyclical permutations, and similar basic discrete maths stuff than for processing power. (Of course, it does take insight to realize that such concepts are relevant to the questions and processing power to figure out the answer within the time frame of the test, but I think that if I had never heard about XOR or used Sarrus' rule I would have scored much worse.) ETA: This is also why I suspect that the correlations between race and IQ aren't entirely genetic. If Einstein's twin brother had grown up in a very poor region with no education...
0A1987dM12y
A distribution with mean 100 and st. dev. 14 will exceed one with mean 90 and st. dev. 16 for all x between about 93 and about 170, and there aren't that many people with IQs over 170 anyway.
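For what it's worth, the crossover points in that claim are easy to check numerically; here is a minimal sketch (using SciPy, with the two hypothetical distributions above):

```python
# Where does a N(100, 14) density exceed a N(90, 16) density?
# Purely illustrative; the means and sigmas are the hypothetical ones above.
import numpy as np
from scipy.stats import norm

xs = np.linspace(40, 220, 100_000)
higher = norm.pdf(xs, 100, 14) > norm.pdf(xs, 90, 16)
crossings = xs[np.where(np.diff(higher.astype(int)) != 0)]
print(crossings)  # roughly 92.4 and 172.9: N(100, 14) dominates in between
```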
2private_messaging12y
But can we detect such a tiny difference as that between a std dev of 14 and a std dev of 16, after we have to control for the really many factors that differ between the groups in question? Also, that was my point: it holds at the level of very high (one in a million) intelligence, i.e. actual geniuses, the people you'd call geniuses without having to detect them using some test. I have a pet hypothesis about the last biological change which caused our technological progress: a little mixing with Neanderthals, raising the standard deviation somewhat. I think IQ tests get useless past some point, when the IQ-test savants who solve them at such a level (but can't learn very well, for example, or can't do problems well that require more parallel processing) start to outnumber the geniuses.
0Vaniver12y
What sort of effect size do you expect here? Why?
2private_messaging12y
You have the neonazis among those who use skin colour as evidence of criminality, but not among those who don't. I don't know of other differences that have been demonstrated; my expectation for other effects is zero. I should expect the overall effect to be on the order of at least the proportion of race-motivated violence to overall violence; my expectation is somewhat higher than this, though, because I would guess that the near-neonazis are likewise more violent, including in within-race crime.
3private_messaging12y
Doh, missed the extra nesting. I doubt it'll be evidence for much... both neonazis and liberal types use that as evidence, the former as evidence of ingroup-ness and the latter as evidence of badness, so I don't see what it would discriminate between.
3A1987dM12y
I can't remember whether I read this from someone else or came up with it on my own, but when people ask “do you oppose homosexual marriage” in questionnaires to find out political orientations, people answering “yes” will include both those who oppose homosexual marriage but are OK with heterosexual marriage and those who oppose all marriage, and those groups are very different clusters in political space (paleo-conservatives the former, radical anarchists the latter). (Of course, the latter group is so much smaller than the former that if you're doing statistics with large numbers of people this shouldn't be such an issue.)
4Vaniver12y
What if verbal ability and quantitative ability are often decoupled?
3A1987dM12y
I wasn't talking about "verbal ability" (which, to the extent that can be found out in ten minutes, correlates more with where someone grew up than with IQ), but about what they say, e.g. their reaction to finding out that I'm a physics student (though for this particular example there are lots of confounding factors), or what kinds of activities they enjoy.
4Vaniver12y
If you're able to drive the conversation like that, you can get information about IQ, and that information may have a larger impact than race. But to "screen off" evidence means making that evidence conditionally independent- once you knew their level of interest in physics, race would give you no information about their IQ. That isn't the case. Imagine that all races have Gaussian IQ distributions with the same standard deviation, but different means, and consider just the population of people whose IQs are above 132 ('geniuses' for this comment). In such a model, the mean IQ of black geniuses will be smaller than the mean IQ of white geniuses which will be smaller than the mean IQ of Jewish geniuses- so even knowing a lower bound for IQ won't screen off the evidence provided by race!
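For what it's worth, those conditional means are easy to compute from the truncated-normal formula; a minimal sketch, where the group means 95/100/110 and the sigma of 15 are made-up illustration numbers rather than estimates of anything:

```python
# E[X | X > cutoff] for X ~ N(mu, sigma), via the inverse Mills ratio.
# Illustrates the point above: a shared cutoff doesn't equalize the groups.
from scipy.stats import norm

def mean_above(cutoff, mu, sigma):
    a = (cutoff - mu) / sigma
    return mu + sigma * norm.pdf(a) / norm.sf(a)

for mu in (95, 100, 110):
    print(mu, round(mean_above(132, mu, 15), 1))
# Prints roughly 136.9, 137.3, 138.7: the conditional means still differ,
# though by much less than the underlying population means do.
```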
4A1987dM12y
Huh, sure, if the likelihood is a reversed Heaviside step. If the likelihood is itself a Gaussian, then the posterior is a Gaussian whose mean is the weighted average of the prior's mean and the likelihood's, weighted by the inverse squared standard deviations. So even if the st.dev. of the likelihood were half that of the prior for each race, the difference in posterior means would shrink by a factor of five.
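The factor-of-five figure drops straight out of that precision-weighted average; a minimal sketch with made-up numbers (a 10-point gap between group priors, one shared observation whose sd is half the prior sd):

```python
# Gaussian prior + Gaussian likelihood: the posterior mean is the
# precision-weighted average of the prior mean and the observation.
def posterior_mean(prior_mu, prior_sd, obs, obs_sd):
    w_prior, w_obs = 1 / prior_sd**2, 1 / obs_sd**2
    return (w_prior * prior_mu + w_obs * obs) / (w_prior + w_obs)

obs, obs_sd = 120, 7.5  # observation sd is half the prior sd of 15
gap = posterior_mean(100, 15, obs, obs_sd) - posterior_mean(90, 15, obs, obs_sd)
print(gap)  # 2.0: the 10-point gap between the priors shrinks by a factor of five
```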
4Vaniver12y
Right- there's lots of information out there that will narrow your IQ estimate of someone else more than their race will, like that they're a professional physicist or member of MENSA, but evidence only becomes worthless when it's independent of the quantity you're interested in given the other things you know.
2maia12y
Can you give an example of evidence becoming worthless? (I can't think of any.)
8alex_zag_al12y
You have a theory that a certain kind of building is highly prone to fire. You see a news report that mentions that a building of that kind has burnt down on Main Street. The news report supports your theory - unless you were a witness to the fire the previous night.
2Dolores198412y
If you were promoting the theory before that point, the police may still have some pointed questions to ask you.
0alex_zag_al12y
I'm talking about how valuable the evidence is to you, the theory-promoter. If you were there, then the news report tells you nothing you didn't already know.
0Dolores198412y
I understood your point. I was simply making a joke.
0TheOtherDave12y
In this case, if the news report is consistent with my recollections, it seems that is evidence of the reliability of the news, and of the reliability of my memory, and additional evidence that the event actually occurred that way. No?
0alex_zag_al12y
Yeah, true. But having been there the previous night, and making good observations the previous night, certainly makes the news report go from pretty strong evidence to almost nothing. EDIT: Really the important thing, I think, is that if your observations are good enough then the evidence from the news report is "worthless", in the sense that you shouldn't pay to find out whether there was a news report that backs up your observations. It's not worth the time it takes to hear it.
0TheOtherDave12y
Hm. Maybe I'm missing your point altogether, but it seems this is only true if the only thing I care about is the truth of that one theory of mine. If I also care about, for example, whether news reports are typically reliable, then suddenly the news report is worth a lot more. But, sure, given that premise, I agree.
0Vaniver12y
Suppose A gives me information about B, and B gives me information about C; they're dependent. (Remember, probabilistic dependence is always mutual.) A gives me information about C (through B) only if I don't know B. If I know B, then A is conditionally independent of C, and so learning A tells me nothing about C.
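A toy numerical version of that chain, in case it helps; all the probabilities below are made up purely for illustration:

```python
# A -> B -> C as a toy Markov chain: A alone is informative about C,
# but once B is known, A adds nothing.
import itertools

p_a = {0: 0.5, 1: 0.5}
p_b_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

joint = {(a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
         for a, b, c in itertools.product((0, 1), repeat=3)}

def p_c1_given(a=None, b=None):
    num = sum(p for (ai, bi, ci), p in joint.items()
              if ci == 1 and (a is None or ai == a) and (b is None or bi == b))
    den = sum(p for (ai, bi, ci), p in joint.items()
              if (a is None or ai == a) and (b is None or bi == b))
    return num / den

print(p_c1_given(a=0), p_c1_given(a=1))                             # 0.2 vs 0.45
print(p_c1_given(b=1), p_c1_given(a=0, b=1), p_c1_given(a=1, b=1))  # all 0.6
```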
0maia12y
So essentially... a new fact is useless only if it's a subset of knowledge you already have?
0Vaniver12y
That seems like a fine way to put it.
0JoshuaZ12y
Minor note, this appears to actually not be the case. Most studies find no correlation between race and penis size. See for example here. The only case where there may be some substantial difference is that Chinese babies may have smaller genitalia after birth, but this doesn't appear to carry over to a significant difference by the time the children have reached puberty. Relevant study.
2A1987dM12y
Huh, according to this map the average Congolese penis is nearly twice as long as the average South Korean penis. (ISTR that stretched flaccid length doesn't perfectly correlate with erect length.)
0Nornagest12y
Oddly salient for such a trivial result. Should a study qualify for an Ig Nobel if you can use it to settle bar bets?
6cousin_it12y
Where would someone like Steve Sailer fit in this classification?
1GLaDOS12y
Indeed, as strange as it might sound (but not to those who know what he usually blogs about), Steve Sailer seems to genuinely like black people more than average, and I wouldn't be surprised at all if a test showed he wasn't biased against them or was less biased than the average white American. He also doesn't seem like a Racist2 from the vast majority of his writing, and painting him as a Racist3 is plainly absurd.
0JoshuaZ12y
What evidence leads to this conclusion?
2Vaniver12y
He published his IAT results and he's proposed policies that play to the strengths of blacks.
1JoshuaZ12y
Historically, proposing policies that play to the specific strengths of a minority group is not generally indicative of actually positive feelings about those groups.
7Vaniver12y
The IAT is the best measure of 'genuinely like X people' we have now, though that's not saying much. (I believe the only place he published it is VDare, which is currently down.) What are the competing hypotheses and competing observations, here?
2A1987dM12y
...for a particular value of genuine. (See this, BTW.)
0Vaniver12y
It seems to me the natural interpretation for "genuine" is "unconscious," and if that post is relevant, it seems that it argues for more relative importance for the IAT over stated positions and opinions.
3CaveJohnson12y
This is missing Racist4: Someone whose preferences result in disparate impact.
2TheOtherDave12y
...and also useful for those among us who don't identify as "liberal-minded."
-13MixedNuts12y
1BillyOblivion12y
So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not? I would also really question the validity of the Implicit Association Test. It says "Your data suggest a slight implicit preference for White People compared to Black People.", which, given that blacks have been severely under-represented in my social sub-culture for the last 27 years (Punk/Goth), the school I graduated from (Art School), and my professional environments (IT) for the last 20 years, is probably not inaccurate. However, it also says "Your data suggest a slight implicit preference for Herman Cain compared to Barack Obama." Which is nonsense. I have a STRONG preference for Herman Cain over Barack Obama.
2Manfred12y
Looks like we need more "racism"s :D A common definition of racism that reflects the intuitions you bring up is "racism is prejudice plus power," (e.g., here) which isn't very useful from a decision-making point of view but which is very useful when looking at this racism as a functional thing experienced by the some group.
1Oscar_Cunningham12y
Surely one of the definitions of "racist" should contain something about thinking that some races are better than others. Or is that covered under "neo-Nazi"?
5thomblake12y
I'm pretty sure that's covered under Racist1. Note the word "negative". Though it's odd that Racist1 specifically refers to "minorities". The entire suite seems to miss folks that favor a "minority" race.
4CaveJohnson12y
Not really; it is perfectly possible to be explicitly aware of one's racial preferences and not really be bothered by having such preferences, at least no more than one is bothered by liking salty food or green parks, yet not be a Nazi or prone to violence. Indeed, I think a good argument can be made not only that a large number of such people lived in the 19th and 20th centuries, but that we probably have millions of them living today in, say, a place like Japan. And that they are mostly pretty decent and OK people. Edit: Sorry! I didn't see the later comments already covering this. :)
1gjm12y
Negative subconscious attitudes aren't the same thing as (though they might cause or be caused by) conscious opinions that such-and-such people are inferior in some way.
3thomblake12y
Ah yes - it's extra-weird that someone isn't allowed in that framework to have conscious racist opinions but not be a jerk about it.
2Normal_Anomaly12y
If one has conscious racist opinions, or is conscious that one has unconscious racist opinions (has taken the IAT but doesn't explicitly believe negative things about blacks) but doesn't act on them, it's probably because one doesn't endorse them. I'd class such a person as a Racist1.
7thomblake12y
I don't think not being an "insensitive jerk" is the same as not acting on one's opinions. For example, if I think that people who can't do math shouldn't be programmers, and I make sure to screen applicants for math skills, that's acting on my opinions. If I make fun of people with poor math skills for not being able to get high-paying programmer jobs, that's being an insensitive jerk.
-2Normal_Anomaly12y
That's true. I was taking "racist opinions" to mean "incorrect race-related beliefs that favor one group over another". If people who couldn't do math were just as good at programming as people who could, and you still screened applicants for math skills, that would be a jerk move. If your race- or gender- or whatever-group-related beliefs are true, and you act on them rationally (e.g. not discriminating with a hard filter when there's only a small difference), then you aren't being any kind of racist by my definition. ETA: did anyone downvote for a reason other than LocustBeamGun's?
6wedrifid12y
Not to mention a bad business decision.
0Normal_Anomaly12y
That too, thanks for pointing it out.
5[anonymous]12y
(ETA: I didn't downvote, but) I wouldn't call gender differences in math "small" - the genders have similar average skills but their variances are VERY different. As in, Emmy Noether versus ~everyone else. And if there is a great difference between groups it would be more rational to apply strong filters (except that, for example, people who are bad at math conveniently aren't likely to become programmers anyway). Perhaps the downvoter(s) thought you only presented the anti-discrimination side of the issue.
0Normal_Anomaly12y
I think in most cases the average is more important in deciding how much to discriminate. But I deleted the relevant phrase because I'm not sure about that specific case and my argument holds about the same amount of water without it as with it. EDIT: Huh, I was intending to say that it's acceptable to discriminate on real existing differences, to the extent that those differences exist. Not sure how to fix my comment to make that less ambiguous, so just saying it straight out here.
0A1987dM12y
Indeed. For some reason I'm not sure of, I instinctively dislike Chinese people, but I don't endorse this dislike and try to act upon it as little as possible (except when seeking romantic partners -- I think I do get to decide what criteria to use for that).
0TheOtherDave12y
Can you expand on the difference you see between acting on your (non-endorsed) preferences in romantic partners, and acting on those preferences in, for example, friends?
0A1987dM12y
As for this specific case, I don't happen to have any Chinese friend at the moment, so I can't. More generally, see some of the comments on this Robin Hanson post: not many of them seem to agree with him.
0TheOtherDave12y
I don't understand how not having any Chinese friends at the moment precludes you from expanding on the differences between acting on your dislike of Chinese people when seeking romantic partners and acting on it in other areas of your life, such as maintaining friendships. Yes, the commenters on that post mostly don't agree with him. That said, I would summarize most of the exchange as: "Why are we OK with A, but we have a problem with B?" "Because A is OK and B is wrong!" Which isn't quite as illuminating as I might have liked.
2A1987dM12y
Since I'm not maintaining any friendships with Chinese people, I can't see what it would even mean for me to act on my dislike of Chinese people in maintaining friendships. As for ‘other areas of my life’, this means that I attempt to interact with a Chinese-looking beggar the same way I'd interact with a European-looking beggar, to read a paper by an author with a Chinese-sounding name the same way I'd read one by an author with (say) a Polish-sounding name, and so on. (I suspect I might have misunderstood your question, though.)
3Eugine_Nier12y
Depends on what you mean by "better". There's a difference between taking the data on race and IQ seriously, and wanting to commit genocide.
2TheOtherDave12y
(blink) Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?
2Eugine_Nier12y
That's the question I was implicitly asking Oscar.
1wedrifid12y
Most obvious plausible available meaning for 'better' that fits: "Most satisfies my average utilitarian values". (Yes, most brands of simple utilitarianism reduce to psychopathy - but since people still advocate them we can consider the meaning at least 'available'.)
0TheOtherDave12y
Fair enough.
0Oscar_Cunningham12y
Sure, I just thought it was weird that the definitions given barely even mentioned race.
1Eugine_Nier12y
You left out one common definition. Also, I don't see why calling Obama the "Food Stamp President" or otherwise criticizing his economic policy makes one a jerk, much less a "Racist2", unless one already believes that all criticism of Obama is racist by definition.
4TimS12y
I'm honestly confused. You don't see why calling Obama a "Food Stamp President" is different from criticizing his economic policy? I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton - even from people who disagreed with their economic policies for the same reasons they disagree with Obama's economic policies.
-1Eugine_Nier12y
Well, Bill Clinton had saner economic policies, but otherwise I would predict that phrase, or something similar, being used against a white politician.
4TimS12y
You haven't answered my question: Given the way that public welfare codes for both "lazy" and "black" in the United States, do you think that "Food Stamp President" has the same implications as some other critique of Obama's economic policies (in terms of whether the speaker intended to invoke Obama's race and whether the speaker judges Obama differently than some other politician with substantially identical positions)?
5Random83212y
"public welfare codes for both "lazy" and "black" in the United States" Taking your word on that, what "other critique of Obama's economic policies" are you imagining that would not have the same implications, unless you mean one that ignores public welfare entirely in favor of focusing on some other economic issue instead?
3TimS12y
A political opponent of Obama might offer any of several such criticisms [the example quotes are omitted here] without me thinking that the political opponent was intending to invoke Obama's race in some way. None of these are actual quotes, but I think they are coherent assertions that disagree with Obama's economic or legal philosophy. Edit: I feel confident I could find actual quotes of equivalent content.
0Random83212y
Of course, none of the ones you suggested are actually about public welfare, in the sense of the government providing supplemental income for people who are unable to get jobs to provide themselves adequate income. So what we have is not a code word, but rather a code issue. Except for the first one -- but given how you framed it as "public welfare codes for...", I don't see how that one wouldn't have the same connotations.
1TimS12y
Tl;dr: You have a good point, but we seem to be stuck with the historical context. ---------------------------------------- Unemployment benefits might qualify as public welfare. More tenuously, the various health insurance subsidies and expansions of Medicaid (government health insurance for the very poor) contained in "Obamacare." But your point is well taken. The well has been poisoned by political talking points from the 1980s (e.g. welfare queen and the response from the left). I'll agree that there's no good reason for us to be trapped in the context from the past, but politicians have not tried very hard to escape that trap.
-2Eugine_Nier12y
The term "welfare president" has the advantage of not having a huge inferential distance (how many people know what a Laffer curve is?) and working as a soundbite.
-4Eugine_Nier12y
Here is another example of my point that one can claim any criticism of Obama is racist if one is sufficiently motivated.
0Eugine_Nier12y
Well, yes by finding enough "code words" you can make any criticism of Obama racist.
5TheOtherDave12y
Yes, that's certainly true. I'm really curious now, though. What's your opinion about the intended connotations of the phrase "food stamp President"? Do you think it's intended primarily as a way of describing Obama's economic policies? His commitment to preventing hunger? His fondness for individual welfare programs? Something else? Or, if you think the intention varies depending on the user, what connotations do you think Gingrich intended to evoke with it? Or, if you're unwilling to speculate as to Gingrich's motives, what connotations do you think it evokes in a typical resident of, say, Utah or North Dakota?
-9Eugine_Nier12y
0RobinZ12y
That seems improbable. To pick the first example I Googled off of the Atlantic website: Chart of the Day: Obama's Epic Failure on Judicial Nominees contains some substantive criticism of Obama - can you show me where it contains "code words" of this kind?

It's not an improbable claim so much as a nigh-unfalsifiable claim.

I mean, imagine the following conversation between two hypothetical people, arbitrarily labelled RZ and EN here:
EN: By finding enough "code words" you can make any criticism of Obama racist.
RZ: What about this criticism?
EN: By declaring "epic", "confirmation mess", and "death blow" to be racist "code words", you can make that criticism racist.
RZ: But "epic", "confirmation mess", and "death blow" aren't racist code words!
EN: Right. Neither is "food stamps".

Of course, one way forward from this point is to taboo "code word" -- for example, to predict that an IAT would find stronger associations between "food stamps" and black people than between "epic" and black people, but would not find stronger associations between "food stamps" and white people than between "epic" and white people.

0RobinZ12y
I think "nigh-unfalsifiable" is unfair in general when it comes to the use of code words, but I'm not familiar with the facts of the particular case under discussion.
2TheOtherDave12y
I agree in the general case. In fact, I fully expect that (for example) an IAT would find stronger associations between "food stamps" and black people than between "epic" and black people, but would not find stronger associations between "food stamps" and white people than between "epic" and white people, and if I did not find that result I would have to seriously rethink my belief that "food stamps" is a dog-whistle in the particular case under discussion; it's not unfalsifiable at all. But I can't figure out any way to falsify the claim that "by finding enough 'code words' you can make any criticism of Obama racist," nor even the implied related claim that it's equally easy to do so for all texts. Especially in the context of this discussion, where the experimental test isn't actually available. All Eugine_Nier has to do is claim that arbitrarily selected words in the article you cite are equally racially charged, and claim -- perhaps even sincerely -- to detect no difference between the connotations of different words.
2RobinZ12y
I wouldn't actually use the IAT to find these kinds of connections - I would look at the use of phrases in other contexts by other people, and I would look at the reactions to the phrases in those contexts. To take a historical example from Battle Cry of Freedom: The Civil War Era by James M. McPherson: in the 1862 riots against the draft, one of the banners that rioters carried read, "The Constitution As It Is, The Union As It Was". That this allusion to the Constitution is an allusion to the legality of slavery under said Constitution is supported by one of the other banners carried by the same groups of rioters: "We won't fight to free the nigger". If, in 1862, a candidate for state office out in the Midwest were to repeat (or even, depending on the exact words, paraphrase) that phrase about the Constitution, I think the charge of "code word" would be well-placed.
2TimS12y
I agree that looking at deployment of phrases is a useful way of finding code words, but it is always vulnerable to "cherry-picking." The second banner you mentioned might or might not have been representative of the movement. Consider the hypothetical protest filled with "Defend the Constitution, Strike Down Obamacare" posters, which should not be tainted by other posters saying "Keep government out of Medicare"(1) but it is hard to describe an ex ante principle explaining how distinctions should be made. (1) For non-Americans: Medicare is widely popular government health insurance program for the elderly.
0RobinZ12y
Agreed - it's not a mechanical judgment.
2TheOtherDave12y
Yup, looking at venues in which a phrase gets used is another way to establish likely connections between phrases and ideologies.
3CronoDAS12y
Unfortunately, it seems to me that most of the information that "race" provides is screened off by various things that are only weakly correlated with race, and it also seems to me that our badly-designed hardware doesn't update very well upon learning these things. For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"; it's probably easier to deal with this by having inaccurate priors than by updating properly.
5steven046112y
I'm not sure that what you have in mind here is screening, at least in the causal diagrams sense. If I'm not mistaken, learning that someone is a college graduate screens off race for the purpose of predicting the causal effects of college graduation, but it doesn't screen off race for the purpose of predicting causes of college graduation (such as intelligence) and their effects. You're right, though, that even in the latter case learning that someone is a college graduate decreases the size of the update from learning their race. (At least given realistic assumptions. If 99% of cyan people have IQ 80 and 1% have IQ 140, and 99% of magenta people have IQ 79 and 1% have IQ 240, learning that someone is a college graduate suddenly makes it much more informative to learn their race. But that's not the world we live in; it's just to illustrate the statistics.)
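A quick sketch of that hypothetical with numbers attached; the graduation model here (a 95% chance of graduating if IQ >= 120, 5% otherwise) is an extra made-up assumption, not part of the original example:

```python
# Unconditionally the two hypothetical groups look identical; among
# graduates, group membership suddenly carries a large difference.
def expected_iq(pop, condition_on_grad=False):
    p_grad = lambda iq: 0.95 if iq >= 120 else 0.05  # made-up graduation model
    weights = [(p * (p_grad(iq) if condition_on_grad else 1), iq) for p, iq in pop]
    return sum(w * iq for w, iq in weights) / sum(w for w, _ in weights)

cyan = [(0.99, 80), (0.01, 140)]
magenta = [(0.99, 79), (0.01, 240)]

print(expected_iq(cyan), expected_iq(magenta))              # ~80.6 vs ~80.6
print(expected_iq(cyan, True), expected_iq(magenta, True))  # ~89.7 vs ~104.9
```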
1Eugine_Nier12y
Which are generally much harder to observe. Um, Affirmative Action. Also tail ends of distributions.
3grendelkhan12y
I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one's performance. (Though I've heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.) Additionally, it seems that there's a lot of 'different justification, same conclusion' with regards to claims about black people. For instance, "black people are inherently stupid and lazy" becomes "black people don't have to meet the same standards for education". The actual example I saw was that people subconsciously don't like to hire black people (the Chicago resume study) because they present a risk of an EEOC lawsuit. (The annual risk of being involved in an EEOC lawsuit is on the order of one in a million.)
3Desrtopa12y
A quick google search isn't giving me an actual percentage, but I believe that students who're admitted to and attend college, but do not graduate, are still significantly in the minority. Even those who barely made it in mostly graduate, if not necessarily with good GPAs.
2BillyOblivion12y
One of the criticisms of colleges engaging in "AA"-type policies is that they often will put someone in a slightly higher-level school (say Berkeley rather than Davis) than they really should be in, one which, because of their background, they are unprepared for. Not necessarily intellectually--they could be very bright--but in terms of things like study skills and the like. There is sufficient data to suggest this should be looked at more thoroughly. In general it is better for someone to graduate from a "lesser" school than to drop out of a better one.
-7wedrifid12y
0grendelkhan12y
Okay, but if not everyone graduates from college, and the point of admissions is to separate out people who'll succeed in school from those who'd waste everyone's time, then how does a college degree mean anything different for a standard graduate, a legacy graduate, and an affirmative-action graduate? (Note that the bar is lowered for legacy graduates to the same degree as affirmative-action graduates, so if you don't hear "my father also went here" the same way as "I got in partly because of my race", then there's a different factor at work here.)
5steven046112y
In the extreme case where being above a given level of competence deterministically causes graduation, you're correct and AA makes no difference; the likelihood (but not necessarily the prior or posterior probability) of different competence levels for a college graduate is independent of race. In the extreme case where graduation is completely random, you're wrong and AA affects the evidence provided by graduation in the same way as it affects the evidence provided by admission. Reality is likely to be somewhere in between (I'm not saying it's in the middle). It depends on the actual distribution of legacy and AA graduates.
2Desrtopa12y
I'd say that the point of admissions is less to weed out people who'll succeed from people who'll waste the school's time than to weed out people who'll reflect poorly on the status of the school. Colleges raise their status by taking better students, so their interests are served not by taking students down to the lower limit of those who can meet academic requirements, but by being as selective as they can afford to be. Schools will even lie about the test scores of students they actually accept, among other things, to be seen as more selective.
0Eugine_Nier12y
I think it's more a case same observations, different proposed mechanisms.
2grendelkhan12y
Has anyone ever claimed that any criticism of Obama is racist by definition? I only ever see this claim from people who want to raise the bar for racism above what they've been accused of. It's not like targeting welfare to play on racism is a completely outlandish claim--I hope you're familiar with Lee Atwater's very famous description of the Southern Strategy:
-2Eugine_Nier12y
No, they just declare each individual instance 'racist' no matter how tenuous the argument. The rather ludicrous attempts to dismiss the Tea Party as 'racist' being the most prominent example.
2Oligopsony12y
That's the R2 way of phrasing R{1,2}, like "race traitor" is the R3 way of phrasing R1 or celandine's phrasings are from an R1 perspective. (Not saying you are a jerk; just trying to separate out precisely such connotative differences from these useful clusters/concentric rings in peoplespace.) (N.B. that if this definition wasn't question-begging and/or indexical it would imply that iff accurate priors are equal over races then the genuinely colorblind are racists.)
1Eugine_Nier12y
Possibly, I couldn't quite figure out Mixed Nuts' definitions because he seemed to be implicitly assuming that accurate priors were equal over races. Well they aren't. Nevertheless, I should probably have said something more like:
0Crouching_Badger12y
Apart from race, isn't this a problem with English or language in general? We use the same words for varying degrees of a certain notion, and people cherry pick the definitions that they want to cogitate for response. If I call someone a conservative, is it a compliment or an insult? That depends on both of our perceptions of the word conservative as well as our outlook on ourselves as political beings; however, beyond that, I could mean to say that the person is fiscally conservative, but as the current conservative candidates are showing conservatism to be far-right extremism, the person may think, "Hey! I'm not one of those guys." I think if someone wants to argue with you, you'd be hard-pressed to speak eloquently enough to provide an impenetrable phrase that does not open itself to a spectrum of interpretation.
2fubarobfusco12y
Sure. "Conservative" isn't a fixed political position. Quite often, it's a claim about one's political position: that it stands for some historical good or tradition. A "conservative" in Russia might look back to the good old days of Stalin whereas a "conservative" in the U.S. would not appreciate the comparison. It's also a flag color; your "fiscal conservative" may merely not want to wave a flag of the same color as Rick Santorum's.
-1A1987dM12y
What about a "Racist4", someone who assign different moral values to people of different races all other things being equal?
4Desrtopa12y
Based on a couple interviews I've seen with unabashed Racist3s, I think that they would tend to fulfill that criterion. Edit: Requesting clarification for downvote?
1Strange712y
That would be a paleo-nazi. Not many of them around, anymore, and those that are don't get away with much.
2CaveJohnson12y
Why make up a new word? Paleoconservatives and smarter white nationalists (think Jared Taylor ) seem to often fit the bill.
0CaveJohnson12y
It depends; if the differences in assigned moral values are large enough, they can easily approach Nazi territory pretty quickly. As a thought experiment, consider how many dolphins you would kill to save a single person.

You don't understand anything until you learn it more than one way.

Marvin Minsky

[-][anonymous]12y200

The most fundamental form of human stupidity is forgetting what we were trying to do in the first place

--Nietzsche

“The mind commands the body and it obeys. The mind orders itself and meets resistance. ”

-St Augustine of Hippo

The mind commands the body and it obeys.

Augustine has obviously never tried to learn something which requires complicated movement, or at least he didn't try it as an adult.

2JulianMorrison12y
The general principle is: cached is fast, cache-populating is slow. This goes for mind and "body" both, because the body does as it's told, but it needs telling in a lot of detail and the control signals need to be discovered. Most people, for both mind and body, learn enough control signals for day-to-day use, and stop. I do somewhat wonder what it would be like to know the control signals for all my muscles, Bene Gesserit style.
0khafra12y
Vladimir Vasiliev is a Bene Gesserit, at least for skeletal muscle. Unfortunately, I can't locate any of the videos that really demonstrate this on youtube; but it makes him able to do some strange-looking things very effectively.
0NancyLebovitz12y
I'm reasonably sure that the important thing is awareness of muscles in systems appropriate for movement [1] rather than as individuals. Herbert had a good intuition there, but Feldenkrais is a real world method of improving movement. Also take a look at Eric Franklin's books on practical anatomy. [1] That's approximate phrasing for an approximate idea.
-2DSimon12y
It may be a matter of the mind having to first order itself to give the body the correct commands.
0NancyLebovitz12y
That seems fair, but on the other hand, it seems that a primary way of the mind acquiring the order it needs is to start by giving the body commands that the body doesn't follow.
0[anonymous]12y
-

For those who feel deeply about contemporary politics, certain topics have become so infected by considerations of prestige that a genuinely rational approach to them is almost impossible.

-George Orwell

4Multiheaded12y
Sadly, there's no need of any adjective before "Politics" here. It's a fully general statement.
1RobinZ12y
You may be able to delete the words on either side of the adjective as well.

Truth must necessarily be stranger than fiction, for fiction is the creation of the human mind and therefore congenial to it.

G. K. Chesterton

Zach Wiener's elegant disproof:

Think of the strangest thing that's true. Okay. Now add a monkey dressed as Hitler.

(Although to be fair, it's possible that the disproof fails because "think of the strangest thing that's true" is impossible for a human brain.)

It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.

More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.

8Eugine_Nier12y
Indeed, I posted this quote partially out of annoyance at a certain type of analysis I kept seeing in the MoR threads. Namely, person X benefited from the way event Y turned out; therefore, person X was behind event Y. After all, thinking like this about real life will quickly turn one into a tin-foil-hat-wearing conspiracy theorist.
8FiftyTwo12y
Yes, but in real life the major players don't have the ability to time travel, read minds, become invisible, manipulate probability, etcetera; these abilities make complex plans far more plausible than they would be in the real world. (That and conservation of detail.)

In real life the major players are immune to mindreading, can communicate securely and instantaneously worldwide, and have tens of thousands of people working under them. You are, ironically, overlooking the strangeness of reality.

Conservation of detail may be a valid argument though.

5Eugine_Nier12y
Conservation of detail is one of the memetic hazards of reading too much fiction.
2gwern12y
Which is exactly what MoR tells us to do to analyze it, is it not?
2Eugine_Nier12y
That's still not a reason for assuming everyone is running perfect gambit roulettes.
0gwern12y
You can say that with a straight face after the last few chapters of plotting?
1Eugine_Nier12y
Yes, I was referring to the theories that Dumbledore sabotaged Snape's relationship with Lily so that the boy-who-lived (who hadn't even been born then) would have the experience of being bullied by his potions master.
3Ezekiel12y
Depends on the infinity. Ordinal infinities change when you add one to them. If we're restricting ourselves to actual published fiction, I present Cory Doctorow's Someone Comes to Town, Someone Leaves Town. The protagonist's parents are a mountain and a washing machine, it gets weirder from there, and the whole thing is played completely straight.
4gjm12y
Depends on which end you add one at. :-) (I mention this not because I think there's any danger Ezekiel doesn't know it, but just because it might pique someone's curiosity.)
0TraderJoe12y
[comment deleted]

This quote seems relevant:

They must be true because, if they were not true, no one would have the imagination to invent them.

G. H. Hardy, upon receiving a letter containing mathematical formulae from Ramanujan

6A1987dM12y
Doesn't work if (n + 1) monkeys dressed as Hitler are no stranger than n monkeys dressed as Hitler, and n monkeys dressed as Hitler are true.
0CronoDAS12y
0Eugine_Nier12y
Eliezer's unconventional definition of "strange" is occasionally annoying.
0wedrifid12y
Strange I would almost accept. But in this case the quote is 'unusual'... that's even worse! Unusual fits squarely into the realm of 'actually happens'.
0Kaj_Sotala12y
Also:
1Eugine_Nier12y
I was originally going to post that one, but decided to go with Chesterton's version since it better explains what is meant. (At the expense of losing some of the snappiness.)
-1BlazeOrangeDeer12y
"Reality is the thing that surpises me." - Paraphrase of EY

Don't just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special case? What about the degenerate case? Where does the proof use the hypothesis?

  • Paul Halmos

From this moment forward, remember this: What you do is infinitely more important than how you do it. Efficiency is still important, but it is useless unless applied to the right things.

-Tim Ferriss, The 4-Hour Workweek

4CronoDAS12y
"There is nothing so useless as doing efficiently that which should not be done at all." -- Peter Drucker (I've quoted this line several times before.)
5wedrifid12y
Sure there is. Doing inefficiently what should not be done at all is even more useless. At least if you do it efficiently you can go ahead and do something else sooner. It seems to me that efficiency is just as useful doing things that should not be done as it is other times, for a fixed amount of doing stuff that shouldn't be done.
8thomblake12y
Depends on the kind of efficiency, I guess. If someone is systematically murdering people for an hour, I'd prefer they not get as much murdering done as they could.
-2wedrifid12y
I did specify "for a fixed amount of doing stuff that shouldn't be done". If they are getting more murdering done, that is probably bad.

In short, and I can't emphasize this strongly enough, a fundamental issue that any theory of psychology ultimately has to face is that brains are useful. They guide behavior. Any brain that didn't cause its owner to do useful--in the evolutionary sense--things, didn't cause reproduction.

-Robert Kurzban, Why Everyone (Else) is a Hypocrite: Evolution and the Modular Mind

But when we have these irrational beliefs, these culturally coded assumptions, running so deep within our community and movement, how do we actually change that? How do we get people to further question themselves when they’ve already become convinced that they’re a rational person, a skeptic, and have moved on from irrationality, cognitive distortion and bias?

Well I think what we need to do is to change the fundamental structure and values of skepticism. We need to build our community and movement around slightly different premises.

As it has stood in the past, skepticism has been predicated on a belief in the power of the empirical and rational. It has been based on the premise that there is an empirical truth, and that it is knowable, and that certain tools and strategies like science and logic will allow us to reach that truth. In short, the “old guard” skepticism was based on a veneration of the rational. But the veneration of certain techniques or certain philosophies creates the problematic possibility of choosing to consider certain conclusions or beliefs to BE empirical and rational and above criticism, particularly beliefs derived from the “right” tools, and even more dan

... (read more)
8MixedNuts12y
Upvoted because I like Natalie Reed, but this is way too long. The key sentence seems to be
1jsbennett8612y
Thanks. I didn't wanna post this much, but I was rather too attached to the passage to cut anything else out. Helps to have other eyes.

Any collocation of persons, no matter how numerous, how scant, how even their homogeneity, how firmly they profess common doctrine, will presently reveal themselves to consist of smaller groups espousing variant versions of the common creed; and these sub-groups will manifest sub-sub-groups, and so to the final limit of the single individual, and even in this single person conflicting tendencies will express themselves.

— Jack Vance, The Languages of Pao

9A1987dM12y
Shorter version: -- Terence, Phormio
4MixedNuts12y
My favorite:

Suppose you know a golfer's score on day 1 and are asked to predict his score on day 2. You expect the golfer to retain the same level of talent on the second day, so your best guesses will be "above average" for the [better-scoring] player and "below average" for the [worse-scoring] player. Luck, of course, is a different matter. Since you have no way of predicting the golfers' luck on the second (or any) day, your best guess must be that it will be average, neither good nor bad. This means that in the absence of any other information, your best guess about the players' score on day 2 should not be a repeat of their performance on day 1. ...

The best predicted performance on day 2 is more moderate, closer to the average than the evidence on which it is based (the score on day 1). This is why the pattern is called regression to the mean. The more extreme the original score, the more regression we expect, because an extremely good score suggests a very lucky day. The regressive prediction is reasonable, but its accuracy is not guaranteed. A few of the golfers who scored 66 on day 1 will do even better on the second day, if their luck improves. Most will do worse,

... (read more)
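A minimal simulation sketch (in Python, with made-up parameters; none of these numbers come from the book) makes the skill-plus-luck picture concrete: give every golfer a fixed skill, add independent daily luck, and the best day-1 scorers come back toward the field average on day 2.

    # Sketch only: golf scores modeled as fixed skill plus independent daily luck.
    # The distributions and sample sizes below are illustrative assumptions.
    import random

    random.seed(0)
    N = 10_000
    skill = [random.gauss(72, 3) for _ in range(N)]       # each golfer's "true" average score
    day1 = [s + random.gauss(0, 3) for s in skill]        # day-1 score = skill + luck
    day2 = [s + random.gauss(0, 3) for s in skill]        # day-2 luck is drawn independently

    best = sorted(range(N), key=lambda i: day1[i])[:100]  # the 100 best (lowest) day-1 scores

    print("field average, day 1:    %.1f" % (sum(day1) / N))
    print("top 100, day-1 average:  %.1f" % (sum(day1[i] for i in best) / 100))
    print("same 100, day-2 average: %.1f  (regressed toward the field average)"
          % (sum(day2[i] for i in best) / 100))

The selected golfers really are above average in skill, but their extreme day-1 scores also reflected good luck, and luck does not repeat.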
7CronoDAS12y
If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can't predict the direction in which regression to the mean will occur if your data set is a single point. The following all have different answers: (The answer is 39700; I'm probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.) (The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.) (The answer is some number higher than 39700, because I'm no longer an absolute beginner.)
0maia12y
True, a single data point can't give you knowledge of regression effects. In the context of the original problem, Kahneman assumed that you had access to the average score of all the golfers on the first day. I'm not sure it's true that the answer is higher than 39700, in this case. It depends on if you have knowledge of how people generally improve, and if your score is higher than average for an absolute beginner. Since unknown factors could adjust the score either up or down, I would probably just guess that it will be the same the next day.
3RobinZ12y
The existence of factors which could adjust the score either up or down does not indicate which factors dominate. In this case, you have no information which suggests that 39700 is either above or below the median, and therefore these two cases must be assigned equal probability - canceling out any "regression to the mean" effects you could have predicted. Similar arguments apply to other effects which change the score.
4maia12y
So you estimate "regression to the mean" effects as zero, and base your estimate on any other effects you know about and how strong you think they are. That makes sense. Thanks for the correction!
2Eugine_Nier12y
Not quite, you have some background information about the range of scores video games usually employ.
0RobinZ12y
And, I suppose, information about the probability of people mentioning average scores. I concede that either factor could justify arguing that the score should decrease.
0A1987dM12y
It reminds me of E.T. Jaynes' explanation of why time-reversible dynamic laws for (say) sugar molecules in water lead to a time-irreversible diffusion equation.

-- So... if they've got armor on, it's a battle!
-- And who told you that?
-- A knight...
-- How'd you know he was a knight?
-- Well... that's 'cause... he'd got armor on?
-- You don't have to be a knight to buy armor. Any idiot can buy armor.
-- How do you know?
-- 'Cause I sold armor.

-Game of Thrones (TV show)

He who knows how to do something is the servant of he who knows why that thing must be done.

-- Isuna Hasekura, Spice and Wolf vol. 5 ("servant" is justified by the medieval setting).

4John_Maxwell12y
I don't get it.
2Vaniver12y
Short explanation: the person that knows why a thing must be done is generally the person who decides what must be done. Application to rationality: instrumental rationality is a method that serves goals. The part that values and the part that implements are distinct. (Also, you can see the separation of terminal and instrumental values.)
6gwern12y
And explains why businessmen keep more of the money than the random techies they hire.
2Blueberry12y
Would "servant" not otherwise be justified?
1Nornagest12y
It's fairly benign, but looks a little archaic -- not so archaic that it'd have to be medieval, though. The rest of the phrasing is fairly modern, or I'd probably have assumed it was a quote from anywhere from the Enlightenment up to the Edwardian period. It has the ring of something a Victorian aphorist might say.
2Bugmaster12y
I think the quote should start with, "he WHO knows...".
[-][anonymous]12y150

The fundamental rule of political analysis from the point of psychology is, follow the sacredness, and around it is a ring of motivated ignorance.

--Jonathan Haidt, source

9Multiheaded12y
He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don't pay enough attention to it (and refuse to acknowledge their own sacred/profane areas). I have more to say about his values theory, I'll post some thoughts later. UPD: I wrote a little something, now I'm just gonna ask Konkvistador whether he thinks it's neutral enough or too political for LW.
2[anonymous]12y
Please make sure you do. I suspect it will be interesting. :)

I first encountered this in a physics newsgroup, after some crank was taking some toy model way too seriously:

Analogies are like ropes; they tie things together pretty well, but you won't get very far if you try to push them.

Thaddeus Stout Tom Davidson

(I remembered something like "if you pull them too much, they break down", actually...)

Don't kid yourself: just because you got the correct numerical answer to a problem does not mean that you understand the physics of the problem. You must understand all the logical steps in arriving at that solution or you have gained nothing, right answer or not.

My old physics professor David Newton (yes, apparently that's the name he was born with) on how to study physics.

A novice was trying to fix a broken Lisp machine by turning the power off and on.

Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”

Knight turned the machine off and on.

The machine worked.

--Some AI Koans, collected by ESR

3BlazeOrangeDeer12y
My physics teacher is always sure to clarify which parts of a problem are physics and which are math. Physics is usually the part that allows you to set up the math.
[-][anonymous]12y140

A weak man is not as happy as that same man would be if he were strong. This reality is offensive to some people who would like the intellectual or spiritual to take precedence. It is instructive to see what happens to these very people as their squat strength goes up.

-- Mark Rippetoe, Starting Strength

7Manfred12y
Sample: men who come to this guy to get stronger, I assume?
6Nornagest12y
Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I've heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he's filtering those out properly.
-1A1987dM12y
Why was this downvoted?
1Incorrect12y
He's ignoring that people might not like how larger muscles look. And personally (though I don't care much) I would only care about practical athletic ability, not weight lifting.
2realitygrill12y
I understand this line of thought, but.. strength doesn't have to be developed through weights, strength increase doesn't necessarily mean much hypertrophy, and most importantly strength is a prerequisite/accelerator for increasing pretty much all athletic abilities (power, flexibility, endurance..)
1A1987dM12y
I guess the relation between muscle mass and physical attractiveness is non-monotonic, so a marginal increase in muscle mass would make some people look marginally better and other people look marginally worse. (I suspect the median Internet user is in the former group, though.) ETA: Judging from the picture on Wikipedia, Rippetoe himself looks like someone who would look better if he lost some weight (but I'm a heterosexual male, so my judgement might be inaccurate).
4[anonymous]12y
I'm somewhat annoyed that the comments on this thread are vapid, but this might be worth responding to. It doesn't particularly matter whether or not Rippetoe is himself currently ripped -- see this Wikipedia article of yours for his domain expert credentials: Secondly, notice that he was a competitive powerlifter thirty years ago. Senescence is a bitch.
0A1987dM12y
Why “of yours”? I've never edited it. I didn't dispute them. The grandparent and great-grandparent are about “how larger muscles look”. I can't see how the passage you quote is relevant to the fact that I think he's ugly.

In the real world things are very different. You just need to look around you. Nobody wants to die that way. People die of disease and accident. Death comes suddenly and there is no notion of good or bad. It leaves, not a dramatic feeling but great emptiness. When you lose someone you loved very much you feel this big empty space and think, 'If I had known this was coming I would have done things differently.'

Yoshinori Kitase

0gwern12y
Context: Aeris dies. (Spoilers!)
8gRR12y
It would be interesting to calculate the total utility of an author wantonly murdering a universally beloved character. May turn out to be quite a crime...
3Nornagest12y
Well, it's certainly not limited to killing off characters, but people have been writing about emotional release as a response to tragedy in drama for quite a long time. Generally it's thought of as a good thing, if not necessarily a pleasant one, and I'm inclined to agree with this analysis; people go into fiction looking for an emotional response, and the enduring popularity of tragic storytelling suggests that they aren't exclusively looking for emotions generally regarded as positive. Content warnings pointing to what a work's going for might not be a bad idea from a utilitarian standpoint, though. I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.
4CronoDAS12y
I've had to leave the room because I get embarrassed just watching characters in that kind of show...
3Desrtopa12y
Well, one of my favorite authors is infamous for doing this, and I for one think his works are the better for it. It certainly hasn't prevented them from becoming very popular.
0taelor12y
Upvoted, for having the exact same thought as I did when reading the parent post.
1Document12y
Maybe you were both primed by gRR's username.

Who has seen the wind?
Neither I nor you:
But when the leaves hang trembling,
The wind is passing through.

Who has seen the wind?
Neither you nor I:
But when the trees bow down their heads,
The wind is passing by.

-- Christina Rossetti, Who has seen the Wind?

3BlazeOrangeDeer12y
Interestingly enough, this is my friend's parents response when asked why they believe in an invisible god. I suppose they haven't considered that the leaves and trees may be messed up enough to shake of their own accord.
9tgb12y
Interesting. It is rather unlikely that Christina Rossetti intended this to be a rationalist quote in a sense we would identify with. I do read it as an argument for scientific realism and belief in the implied invisible, but it seems likely that she was merely being poetic or that she was making a pro-religion argument, given her background. Of course the beauty of this system is that if someone quotes this to you as an argument for God (or anything), you can ask them what the leaves and trees are for their wind and thus get at their true argument. Furthermore, the context in which I first read it is the video game Braid, juvpu cerfragrq vg va gur pbagrkg bs gur chefhvg bs fpvrapr. I would highly recommend this game, by the way.
0wedrifid12y
Hey! It's Super Mario with built in cheat modes!
0wirov12y
Could you rot13 the word fpvrapr in the last paragraph? For me, finally getting the meaning of the princess at the end was such a beautiful realization that I wouldn't like to spoil it for others… (I highly recommend the game too. In fact, I've already bought it several times – once for me, and as a gift for others.)
0tgb12y
Done and agreed. I am ashamed to admit it that I first played it from a pirated copy - I later bought it, and I intend to buy Jonathan Blow's next game The Witness when it comes out. But I still feel bad about pirating it...
0BlazeOrangeDeer12y
I love that game, it's been a while since I played it though.
1MixedNuts12y
I third the recommendation.

A shortcut for making less-biased predictions, taking base averages into account.

Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"

Recall that the correlation between two measures - in the present case, reading age and GPA - is equal to the proportion of shared factors among their determinants. What is your best guess about that proportion? My most optimistic guess is about 30%. Assuming this estimate, we have all we need to produce an unbiased prediction. Here are the directions for how to get there in four simple steps:

  1. Start with an estimate of average GPA.
  2. Determine the GPA that matches your impression of the evidence.
  3. Estimate the correlation between your evidence and GPA.
  4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
  • Daniel Kahneman, Thinking, Fast and Slow
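A minimal sketch of those four steps in Python (the average GPA, the "matching" GPA, and the 30% correlation below are illustrative assumptions, not figures taken from the book):

    # Kahneman's four-step regressive prediction as a tiny function.
    # The specific numbers are illustrative assumptions, not from the book.

    def regressive_prediction(baseline, matching_estimate, correlation):
        """Move from the baseline toward the intuitive estimate by a fraction
        equal to the estimated correlation between evidence and outcome."""
        return baseline + correlation * (matching_estimate - baseline)

    average_gpa = 3.0    # step 1: baseline (assumed average GPA)
    matching_gpa = 3.8   # step 2: the GPA that "matches" how impressive the evidence feels
    correlation = 0.30   # step 3: estimated correlation between reading age and GPA

    # step 4: move 30% of the way from the average to the matching GPA
    print("%.2f" % regressive_prediction(average_gpa, matching_gpa, correlation))  # 3.24

The weaker the correlation between the evidence and the outcome, the more the prediction is pulled back toward the average.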

The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."

[-][anonymous]12y110

Generally when I see write-ups of statistical results, I immediately go to the original source. The fact is that the media is liable to simply shade and color the results to suit their own pat narrative. That’s just human nature.

--Razib Khan, source

"When I was young I shoved my ignorance in people's faces. They beat me with sticks. By the time I was forty my blunt instrument had been honed to a fine cutting point for me. If you hide your ignorance, no one will hit you and you'll never learn."

-- Fahrenheit 451

I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.

Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.

Tips for dealing with people with big egos:

  • Don't insult anyone, ever. If Wagner posts, either say "Hmm, why do you believe Mendelssohn's music to be derivative?" or silently downvote, but don't call him an antisemitic piece of shit.
  • Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
  • Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
  • Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
  • Stick closely to the question and do not involve the personalities of debaters.
  • Exception to the above: it's okay to pass judgement on a personality trait if it's a compliment. If you can't always avoid insulting people, occasionally complimenting them can help.
  • A lot of things are insults. You will slip up. This won't make people dislike you.
  • If you know what a polite and friendly tone is, have one.
  • If someone isn't polit
... (read more)

I'll add to this that actually paying attention to wedrifid is instructive here.

My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks
3) honoring the obligations of implicit social alliances when an ally is attacked

I endorse this and have been trying to get better about #3 myself.

Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.

8TheOtherDave12y
Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.
-1wedrifid12y
Well... I've seen people use nearly that exact phrase to great effect at times... But that's not the sort of thing you'd want to include in a 'basics' list either. Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!

The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this... (read more)

I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.

Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.

4TheOtherDave12y
Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.) I blame the stroke, though.

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.) I blame the stroke, though.

Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.

It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.

5Wei Dai12y
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kind of alliance you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
4TheOtherDave12y
Instrumental. Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don't know how I could begin to itemize it. To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals. Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former. Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally. If you mean to say further that it doesn't affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo "ally." People's estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don't engage in discussion, because I no longer trust that they will engage reliably. Not that I can think of, but honestly this question bewilders me, so it's possible that you're asking about something I'm not even considering. What kind of alliances do you have in mind?
2Wei Dai12y
It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid, with "being a good discussion partner" a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.) I didn't have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be, for example, that you're looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.
3wedrifid12y
This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn't seem to be a very accurate description of reality. A lot of information - and information I consider important at that - can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive 'timidity' can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you're not timid seems to be a mistake. In my own experience - from back when I was timid in the extreme - the sort of "sticking up for", jumping to the defense against (unfair or undesirable) aggression is one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self. Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.
4Wei Dai12y
I find that my brain doesn't automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven't found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand. I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I'd be curious to find out.
0TheOtherDave12y
Fair enough; it may be that I overestimate the value of what I'm calling trust here. Just for my own clarity, when you say that what I'm doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we've been discussing on this thread (or are they equivalent)? I'm not especially looking to make real-life friends, though there are folks here who I wouldn't mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.
0Wei Dai12y
I was talking about the abstract behavior that we were discussing.
0wedrifid12y
I really like your illustration here. To the extent that this is what you were trying to convey by "3)" in your analysis of wedrifid's style then I endorse it. I wouldn't have used the "alliances" description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I'm happy with it as a simple model. Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of "Sam", "Pat" and "A". In particular there are many behaviors "A" that the execution of will immediately place the victim of said behavior into the role of "ally that I am obliged to support".
4TheOtherDave12y
Yeah, agreed about the distracting phrasing. I find it's a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance. Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated. Also, if you have an articulable model of how you make those judgments I'd be interested, especially if it uses more socially acceptable language than mine does. Edit: Also, I'm really curious as to the reasoning of whoever downvoted that. I commit to preserving that person's anonymity if they PM me about their reasoning.
-2wedrifid12y
For what it is worth, sampling over time suggests multiple people - at one point there were multiple upvotes. I'm somewhat less curious. I just assumed it was people from the 'green' social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.
7komponisto12y
Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough). In saying this, I don't know whether I'm expanding on your point or disagreeing with it.
6Wei Dai12y
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.
4wedrifid12y
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
6wedrifid12y
I appreciate your kind words komponisto! You inspire me to live up to them.

Plus, I like the idea of losing so much karma in one day and then eventually earning it all back

This discussion is off-topic for the "Rationality Quotes" thread, but...

If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:

Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.

My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.

1chaosmosis12y
No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process. Maybe I'll eventually write something like that. Not yet.

It needs to be a drawn out and painful and embarrassing process.

Oh, you want a Quest, not a goal. :-)

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.

8orthonormal12y
I nominate this as the Less Wrong Summer Challenge, for everybody. (One modification I'd make: it shouldn't necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)
2gRR12y
And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.
2Bugmaster12y
You just need a reasonably friendly tone. I have a bunch of karma, and I haven't posted any articles yet (though I'm working on it).
3DSimon12y
Indeed, that would work if karma was merely the goal. But chaosmosis expressed a desire for a "painful and embarrassing process", meaning that the ante and risk must be higher.
0wedrifid12y
That actually sounds fun now that you put it like that!

One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.

I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.

Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.

Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.

Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)

Have some third party (or several) that LW would trust hold on to it in secret.

Nitpick: cryptography solves this much more neatly.

Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.

5Paul Crowley12y
Factorization doesn't enter into it - to precommit to a message that you will later reveal publicly, publish a hash of the (salted) message.
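A minimal sketch of that precommitment using Python's standard library (the strategy text, salt length, and choice of SHA-256 are just illustrative assumptions):

    # Sketch of hash-based precommitment: publish the hash now, reveal the
    # salted message later, and anyone can re-hash it to check you didn't change it.
    # The strategy text below is a made-up placeholder.
    import hashlib
    import secrets

    strategy = b"Reply early and agreeably in every HP:MoR discussion thread."
    salt = secrets.token_bytes(16)                  # random salt stops people guessing the message
    commitment = hashlib.sha256(salt + strategy).hexdigest()
    print("publish now:", commitment)

    # Later, reveal (salt, strategy); verification is a single comparison.
    def verify(published, revealed_salt, revealed_message):
        return hashlib.sha256(revealed_salt + revealed_message).hexdigest() == published

    print("checks out:", verify(commitment, salt, strategy))   # True

The point raised in the next comment still applies: the hash itself has to be published somewhere, and publishing it is a visible act.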
0wedrifid12y
But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).
0Bugmaster12y
What kind of side effects ? I have no formal training in cryptography, so please forgive me if this is a naive question.
1wedrifid12y
I mean you still have to give the encrypted data to someone. They can't tell what it is but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don't want the act of giving encrypted fore-notice to influence behavior.
2Bugmaster12y
Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
6wedrifid12y
Better yet... embed five different predictions in that header. When the time comes, reveal just the one that turned out most correct!
2Bugmaster12y
Hmm yes, there might be a hidden weakness in my master plan as far as accountability is concerned :-)
0khafra12y
None that were not extant in the original scheme, assuming there are at least five people on LW who'd be considered trusted parties.
4RobinZ12y
But of four people on LW who would be considered trusted parties, what's the probability that all four would be quiet after the fifth is called upon to post the prediction or prediction hash?
0khafra12y
You're right, of course. I didn't think that through. There haven't been any good "gain the habit of really thinking things through" exercises for a Skill-of-the-Week post, have there?
0RobinZ12y
Bear in mind that it's often not worth the effort. I think the skill to train would be recognizing when it might be. Besides, in the prediction-hash case, they may well not post right away.
0khafra12y
"Recognizing when you've actually thought thoroughly" is the specific failure mode I'm thinking of; but that's probably highly correlated with recognizing when to start thinking thoroughly. I feel like such a skill may be difficult to consciously train without a tutor: -- @afoolswisdom Yes, the first thing I thought of was Quirrel's hashed prediction; but it doesn't seem that everyone's forgotten yet, as of last month.
7David_Gerard12y
My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)
7TheOtherDave12y
IME, per-comment EV is way higher in the HP:MoR discussion threads.
5David_Gerard12y
It so is. Karmawhoring in those is easy. This suggests measuring posts for comment EV.
5Hul-Gil12y
Now that is an interesting concept. I like where this subthread is going. Interesting comparisons to other systems involving currency come to mind. EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear that they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and-- ...okay, perhaps some sleep is in order first.
3David_Gerard12y
It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.
0A1987dM12y
Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.
8Richard_Kennaway12y
That's like getting a black belt in karate by buying one from the martial arts shop. It isn't karmawhoring unless you're getting karma from real people who really thought your comments worth upvoting.
2A1987dM12y
“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?
6Richard_Kennaway12y
It is good to have one's comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute's reward -- money -- is of some actual use.
8Strange712y
Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.
2Bugmaster12y
This would indeed count as "minimal contribution", but still sounds like a lot of work...
[-][anonymous]12y120

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos.

This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.

4A1987dM12y
You mean to the extent that any problem at all is a rationality problem, or something else?
4[anonymous]12y
It's a bias, as far as I'm concerned, and something that needs to be overcome. People with egos can be right, but if one can't deal with the fact that they're either right or wrong regardless of their egotism, then one is that much slower to update.
2David_Gerard12y
Dealing with others' irrationality is very much a rationality problem.
0[anonymous]12y
Ignore this.
4wedrifid12y
It is what we would call an "instrumental rationality" problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos... which you seem to be taking steps towards now!
0[anonymous]12y
And I thought I was the only one getting pummeled here...
-12chaosmosis12y

That's right, Emotion. Go ahead, put Reason out of the way! That's great! Fine! ...for Hitler.

--1943 Disney cartoon

Every intelligent ghost must contain a machine.

Aaron Sloman

So the interesting and substantive question is not whether one thinks the fit will survive and thrive better than the unfit. They will. The interesting question is what the rules are that determine what is "fit."

-- David Henderson on Social Darwinism

Clearly, Bem’s psychic could bankrupt all casinos on the planet before anybody realized what was going on. This analysis leaves us with two possibilities. The first possibility is that, for whatever reason, the psi effects are not operative in casinos, but they are operative in psychological experiments on erotic pictures. The second possibility is that the psi effects are either nonexistent, or else so small that they cannot overcome the house advantage. Note that in the latter case, all of Bem’s experiments overestimate the effect.

Returning to Laplace’s Principle, we feel that the above reasons motivate us to assign our prior belief in precognition a number very close to zero.

Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi

Eric–Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, & Han van der Maas

5FiftyTwo12y
I don't see why the first hypothesis should necessarily be rejected out of hand. If the supposed mechanism is unconscious then having it react to erotic pictures and not particular casino objects seems perfectly plausible. Obviously the real explanation might be that the data wasn't strong enough to prove the claim, but we shouldn't allow the low status of "psi theories" to distort our judgement.
4scav12y
One good thing about Bayesian reasoning is that assigning a prior belief very close to zero isn't rejecting the hypothesis out of hand. The posterior belief will be updated by evidence (if any can be found). And even if you start with a high prior probability and update it with Bem's evidence for precognition, you would soon have a posterior probability much closer to zero than your prior :) BTW there is no supposed mechanism for precognition. Just calling it "unconscious" doesn't render it any more plausible that we have a sense that would be super useful if only it even worked well enough to be measured, and yet unlike all our other senses, it hasn't been acted on by natural selection to improve. Sounds like special pleading to me.
2Eugine_Nier12y
FiftyTwo wasn't arguing that the sense was plausible. He was conditioning on the assumption that the sense exists.
2scav12y
OK, point taken. However, there being no proposed mechanism for precognition, it can hardly be called "plausible" that it operates inconsistently and that the experiment just happened to pick one of the things it can do out of all possibilities. After all, if nobody knows how it's supposed to work, how does the experimenter justify claiming his data as evidence for precognition rather than quantum pornotanglement? You could say I just made that up on the spot. It doesn't matter: precognition isn't necessarily a thing either.
3Eugine_Nier12y
How exactly does "quantum pornotanglement" and why doesn't it count as a type/mechanism for precognition.
4Vaniver12y
Now I'm thinking of pin-up Feynman diagrams.
4A1987dM12y
(Does Rule 34 apply?)
0FiftyTwo12y
Analogously, if someone told me they had a magic rock that could pick up certain pieces of metal and not others, and couldn't explain why, it might be that they are wrong and it can pick up any metal, or there may be an underlying effect causing these observations that we don't understand. In the analogy, magnetism can be observed long before it is understood, and why some metals are and aren't magnetic isn't a trivial problem. Similarly, it may be that some psychic phenomenon exists which works for some things and not for others, for reasons we're not aware of. The fact that we can't fully explain why it works in some cases but not others doesn't mean we should outlaw evidence of the cases where it does.
0scav12y
I would at least expect them to be able to demonstrate their magic rock and let me try it out on various materials. If they had a rock that they claimed could pick up copper but not brass, based on only one experiment, but the rock now doesn't work if any scientists are watching, I'd be disinclined to privilege their hypothesis of the rock's magic properties. Nobody is outlawing the evidence. I'm saying the evidence is unconvincing, and far short of what is needed to support an extraordinary claim such as precognition. It is for example much less rigorous than the evidence there was for another causality-violating hypothesis: FTL neutrinos. That turned out to be due to an equipment defect. Many were disappointed but nobody was surprised. Same reference class if you ask me.

当局者迷,旁观者清

Chinese proverb, meaning "the onlooker sees things more clearly", or more literally, "the one in the game is lost (confused); the onlooker is clear"

[-][anonymous]12y180

三人成虎

Chinese proverb, "three men make a tiger", referring to a semi-mythological event during the Warring States period:

According to the Warring States Records, or Zhan Guo Ce, before he left on a trip to the state of Zhao, Pang Cong asked the King of Wei whether he would hypothetically believe in one civilian's report that a tiger was roaming the markets in the capital city, to which the King replied no. Pang Cong asked what the King thought if two people reported the same thing, and the King said he would begin to wonder. Pang Cong then asked, "what if three people all claimed to have seen a tiger?" The King replied that he would believe in it. Pang Cong reminded the King that the notion of a live tiger in a crowded market was absurd, yet when repeated by numerous people, it seemed real. As a high-ranking official, Pang Cong had more than three opponents and critics; naturally, he urged the King to pay no attention to those who would spread rumors about him while he was away. "I understand," the King replied, and Pang Cong left for Zhao. Yet, slanderous talk took place. When Pang Cong returned to Wei, the King indeed stopped seeing him.

-- Wikipedia

5Richard_Kennaway12y
In personal development workshops, the saying is, "the one with the mike in their hand is the last to see it." Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.

Any “technology” which claims miraculous benefits on a timescale longer than it takes to achieve tenure and retire is vaporware, and should not be taken seriously.

-- Scott Locklin

3DSimon12y
Cryonics?
2Ezekiel12y
I'm curious. Were you agreeing with the quote (and thus dissing cryonics), disagreeing with the quote (and bringing cryonics as a counterexample), or genuinely without agenda?
1DSimon12y
Partly that second one, partly just curious if it was an intended subject.
3CronoDAS12y
The original context is that Scott Locklin is a nanotechnology skeptic.
0BillyOblivion12y
Follow the link, he explains it there.
0Multiheaded12y
Manifestly stupid.

Uxbal: I don't want to die, Bea. I'm afraid to leave the children on their own... I can't.
Bea: You think you take care of the children Uxbal. Don't be naive. The universe takes care of them.
Uxbal: Yes... but the universe doesn't pay the rent.

-Biutiful

[-][anonymous]12y100

All fiction needs to be taken both seriously and not seriously.

Seriously because even the silliest of art can change minds.

Not seriously because no matter the delusions of the author, or the tone of the work, it's still fiction; entertainment, simulated on a human brain.

Rasmus Eide aka. Armok_GoB.

PS. This is not taken from an LW/OB post.

0Will_Newsome12y
Everything needs to be taken both seriously and not-seriously. Tepid unreflective semi-seriousness is always a mistake.

To prize every thing according to its real use ought to be the aim of a rational being. There are few things which can much conduce to happiness, and, therefore, few things to be ardently desired. He that looks upon the business and bustle of the world, with the philosophy with which Socrates surveyed the fair at Athens, will turn away at last with his exclamation, 'How many things are here which I do not want'.

--Samuel Johnson, The Adventurer, #119, December 25, 1753.

Seek knowledge, even as far as China.

-A Weak Hadith of the Prophet Muhammad

"If you had a choice between the ability to detect falsehood and the ability to discover truth, which would you take? There was a time when I thought they were different ways of saying the same thing, but I no longer believe that. Most of my relatives, for example, are almost as good at seeing through subterfuge as they are at perpetrating it. I'm not at all sure, though, that they care much about truth. On the other hand, I'd always felt there was something noble, special, and honorable about seeking truth..."

  • Merlin, Sign of Chaos

The majority of mankind is lazy-minded, incurious, absorbed in vanities, and tepid in emotion, and is therefore incapable of either much doubt or much faith; and when the ordinary man calls himself a sceptic or an unbeliever, that is ordinarily a simple pose, cloaking a disinclination to think anything out to a conclusion.

T. S. Eliot

2DSimon12y
I've read this a few times, but I'm still not seeing anything except "Non-believers are dummies, ha!", and I wonder if that's all there is to it or if I'm just getting blocked by my "oh-crap-what-did-he-say-about-my-tribe?" alarms going off.
2[anonymous]12y
I may very well be reading what I want to read out of this quote, but I feel like if the quote is to be taken as a jab at non-believers, it's also a jab at believers. The "ordinary man claiming to be a skeptic" part is explicit, but note that before that he claims most are incapable of both much doubt and much faith, which I think implies that the same issue goes for believers and non-skeptics. The basic idea I'm pulling from the quote seems to be that most people won't critically think about their ideas, so you can't always trust another's self-labeling to decide if their beliefs have been well thought out.
0Document12y
Consider "The majority of this liquid is not water".

"The human understanding when it has once adopted an opinion draws all things else to support and agree with it.

And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusion may remain inviolate."

--Francis Bacon, Novum Organum (1620)

Dear, my soul is grey
With poring over the long sum of ill;
So much for vice, so much for discontent...
Coherent in statistical despairs
With such a total of distracted life,
To see it down in figures on a page,
Plain, silent, clear, as God sees through the earth
The sense of all the graves, - that's terrible
For one who is not God, and cannot right
The wrong he looks on. May I choose indeed
But vow away my years, my means, my aims,
Among the helpers, if there's any help
In such a social strait? The common blood
That swings along my veins, is strong enough
To draw me t

... (read more)

The last level of metaphor in the Alice books is this: that life, viewed rationally and without illusion, appears to be a nonsense tale told by an idiot mathematician. At the heart of things science finds only a mad, never-ending quadrille of Mock Turtle Waves and Gryphon Particles. For a moment the waves and particles dance in grotesque, inconceivably complex patterns capable of reflecting on their own absurdity.

  • Martin Gardner, The Annotated Alice
7gjm12y
Leaving aside the dubiousness of calling the way the universe actually works "nonsense" and "mad": It seems very, very, very unlikely that anything in Lewis Carroll's writings was a metaphor for quantum mechanics. He died in 1898. (I suppose something can be used as a metaphor for quantum mechanics without having been intended as one, though.)
7Eliezer Yudkowsky12y
The heck? Quantum fields are completely lawful and sane. Only the higher levels of organization, i.e. human beings, are bugfuck crazy. Behold, the Copenhagen Interpretation causes BRAIN DAMAGE.
5VKS12y
As natural as QFT seems today, my understanding is that in 1960, before many of the classic texts in the domain were published, the ideas still seemed quite strange. We would do well to remember that when we set out to search for other truths which we do not yet grasp. :p
-3shminux12y
Maybe, but the Big World idea causes much more severe damage, judging by the recent discussions here and elsewhere.
2MixedNuts12y
What's Martin complaining about, exactly? That goodness is nowhere in physical law, so things can be unfair and horrible for no reason? That goodness is reducible in the first place? That physics is hard and therefore deserves nasty words like "absurd"?
-1dbaupp12y
Lewis Carroll was religious, and to add to that, he was human.

These threads would be very sparsely populated if we avoided quoting humans.

3dbaupp12y
You have misrepresented me. I was refuting the bit where a human was said to be doing something "rationally and without illusion": chances are that doesn't happen (especially regarding a topic as broad as "life").
2TheOtherDave12y
Upvoted for dry wit.
0wedrifid12y
Is fiction permitted? Most of my favorite quotes are not from 'humans'.
4Tyrrell_McAllister12y
For that matter, so was Martin Gardner.

Memory locations are just wires turned sideways in time.

  • Danny Hillis
7Mass_Driver12y
Can you please explain this, slowly and carefully? It sounds plausible, and I'm trying to improve my understanding of space-time / 4-D thinking.
7Oscar_Cunningham12y
When analysing a circuit we normally consider a wire to have the same voltage along its entire length. (There are two problems with this: voltage changes only propagate at c, and the wire has a resistance. Normally these are both negligible.) Thus we can view wires as taking a voltage and spreading it out along a line in space. On the other hand, memory locations take a voltage and spread it out through time. So they are in some sense a wire pointing in the time direction. Sadly, the analogy doesn't quite hold up. Wires have one spatial dimension but also have a temporal dimension (i.e. wires exist for more than an instant). So if you rotated a wire so that its spatial dimension pointed along the temporal dimension, its temporal dimension would rotate down into one of the spatial dimensions. It would still look like a wire! A memory location has no spatial extent: they're a very small bit of metal (you could make one in the shape of a wire but people don't). Thus they have a temporal extent but no spatial extent. So if you rotated one you could get something that had a spatial extent but no temporal extent. This would look like a piece of wire that appeared for an instant and then disappeared again.
2Mass_Driver12y
Amazing! So a stricter analogy might be a memory location and a lightning bolt -- the memory location occupies only a tiny amount of space, and the static discharge of lightning takes only a tiny amount of time.
0Thomas12y
Ponder only one-dimensional time for now. At every point of time, you have only this moment and nothing more. But with memories, you have those same previous moments cached, stored somewhere "orthogonal" to the timeline. I've heard it here: http://edge.org/conversation/a-universe-of-self-replicating-code On a site even better than this one, and also quite unpopular on this site. Read or watch Dyson there, as well as many others.
0NancyLebovitz12y
Is Edge the more unpopular site, or are you thinking of someplace else? For what it's worth, I don't have anything against Edge, I just get bored reading it, even when the question is something I'm interested in.

I was once a skeptic but was converted by the two missionaries on either side of my nose.

Robert Brault

Am I the only one who didn't realize before reading other comments that he was not claiming to have been converted by his nostrils?

9Ezekiel12y
Particularly interesting since I (and, I suspect, others on LW) usually attach positive affect to the word "skeptic", since it seems to us that naivete is the more common error. But of course a Creationist is sceptical of evolution. (Apparently both spellings are correct. I've learned something today.)
2BlazeOrangeDeer12y
I'd call creationists "evolution deniers" before I'd call them "evolution skeptics", but I suppose they'd do the same to me with God...
0Blueberry12y
I must be misinterpreting this, because it appears to say "religion is obvious if you just open your eyes." How is that a rationality quote?
7TheOtherDave12y
LW's standards for rationality quotes vary, but in any case this does allow for the reading of endorsing allowing perceived evidence to override pre-existing beliefs, if one ignores the standard connotations of "skeptic" and "missionary".
6Blueberry12y
I guess, but that seems like a strange interpretation seeing as the speaker says he's no longer "a skeptic" in general.

The point of rationality isn't to better argue against beliefs you consider wrong but to change your existing beliefs to be more correct.

3Blueberry12y
That's a good reminder but I'm not sure how it applies here.
0Eugine_Nier12y
A quote that calls the holder of a potentially wrong belief a "skeptic" rather than a "believer" is more useful since it makes you more likely to identify with him.
5Blueberry12y
Also judging from his other quotes I'm pretty sure that's not what he meant...

Using an elementary accounting text and with the help of an accountant friend, I began. For me, a composer, accounting had always been the symbol of ultimate boredom. But a surprise awaited me: Accounting is just a simple, practical tool for measuring resources, so as to better allocate and use them. In fact, I quickly realized that basic accounting concepts had a utility far beyond finance. Resources are almost always limited; one must constantly weigh costs and benefits to make enlightened decisions.

--Alan Belkin From the Stock Market to Music, via t... (read more)

If it can fool ten thousand users all at once (which ought to be dead simple, just add more servers), does that make it ten thousand times more human than Alan Turing?

Bruce Sterling

There are two worlds: the world that is, and the world that should be. We live in one, and must create the other, if it is ever to be. -paraphrased from Jim Butcher's Turn Coat

Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn't function if it weren't so.

-Mark Rosenfelder (http://zompist.com/chance.htm)

I stopped being afraid because I read the truth. And that's the scientifical truth which is much better. You shouldn't let poets lie to you.

-- Bjork

[-][anonymous]12y40

One day the last portrait of Rembrandt and the last bar of Mozart will have ceased to be — though possibly a colored canvas and a sheet of notes will remain — because the last eye and the last ear accessible to their message will have gone.

--Oswald Spengler, The Decline of the West

4[anonymous]12y
That sounds deep, but it has nothing to do with rationality.
1[anonymous]12y
Not really; for example, it is actually pretty clearly connected to fun theory.

"An organized mind is a disciplined mind. And a disciplined mind is a powerful mind."

-- Batman (Batman the Brave and the Bold)

5wedrifid12y
That doesn't seem to follow. An organized mind may not be disciplined. It may even be obsessively organized at the expense of being disciplined.
0pleeppleep12y
Assuming the mind is human, I suppose you might have to modify it to ever make it truly organized, but identifying and organizing one's thoughts is an important part of rationality. You cannot make any effort to organize your thoughts without a certain degree of discipline. Think of the martial arts metaphor people here keep using in regard to rationality.
3wedrifid12y
I expect there is a correlation between degree of organisation, degree of discipline and measures of a mind's 'power'. But this relationship is definitely not one of a series of "is a". To be honest I try not to. That kind of thinking seems to lead to "koans", which seem to be a name for saying things that are blatantly false but feeling deep while doing so, because there is some loosely related not-false lesson that someone could conceivably deconstruct from the koan.
-1Arran_Stirton12y
So says a man-dressed-like-a-bat. (That's not a jibe aimed at the quote but rather a reference to this.)
4Pavitra12y
Downvoted because this comment serves only to propagate a mildly-entertaining meme, rather than contributing to the discussion in some way.

In recent years, I've come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.

--Kip W

Civil wars are bitter because

People make their recollections fit with their suffering.

---Thucydides

Found here.

[-][anonymous]12y30

AG: You know very well the channels of possi8ility at that exact juncture resulted from her decision paths as well as yours.

AG: 8ut even so, when it comes to your key decisions, the possi8ilities are pro8a8ly fewer and more discrete than you have presumed.

AG: Otherwise you would not see results consolidated into those vortices, would you? Possi8ility would resem8le an enormous hazy field of infinitely su8tle variations and micro-choices.

AG: Imagine if at that moment you truly were capa8le of anything, no matter how outlandish, a8surd, or patently fruitles

... (read more)

Is there a reason all the b's have been replaced by 8's?

7David_Gerard12y
Character typing quirk in the original.
0aribrill12y
The typing quirks actually serve a purpose in the comic. Almost all communication among the characters takes place through chat logs, so the system provides a handy way to visually distinguish who's speaking. They also reinforce each character's personality and thematic associations - for example, the character quoted above (Aranea) is associated with spiders, arachnids in general, and the zodiac sign of Scorpio. Unfortunately, all that is irrelevant in the context of a Rationality Quote.
0Normal_Anomaly12y
The character in question is named Vriska. You're thinking of Aradia.
1Nornagest12y
Actually, he's not -- the quote comes from Vriska's recently introduced pre-Scratch ancestor, who's got a similar but not identical typing style.
0Normal_Anomaly12y
You're right, never mind. Still internalizing the new set of ancestors.
7Bugmaster12y
I hate to downvote Homestuck, but there I go, downvoting it. The typing quirks and chatlog-style layout are too specific to the comic.
5arundelo12y
Every time someone mentions Homestuck I resist (until now) posting this image macro. I spent a few minutes reading Homestuck from the beginning, but it did not grab me at all. Is there a better place to start, or is it probably just not my cup of tea? (Speaking of webcomics, I have a similar question about Dresden Codak.)
7Nornagest12y
It starts pretty slow. Most of the really impressive bits, to my taste, don't start happening until well into act 4, but that's a few thousand (mostly single-panel, but still) pages of story to go through; unless you have a great deal of free time, I wouldn't hold it against you if you decided it's not for you by the end of act 2. Alternately, you might consider reading act 5.1 and going back if you like it; that's a largely independent and much more compressed storyline, although you'll lose some of the impact if you don't have the referents in the earlier parts of the story to compare against. You'll need to front-load a lot of tolerance for idiosyncratic typing that way, though.

Which brings me to quotes like MHD's: for quotation out of context, I would definitely have edited out the typing quirks (or ed8ed, if we're being cute). The quirks are more about characterization than content, and some of the characters are almost unreadable without a lot of practice.

Dresden Codak, incidentally, doesn't have this excuse. If you've read a couple dozen pages of that and didn't like it, you're probably not going to like the rest.
8khafra12y
I've never been sure exactly where and how to get into the Dresden Codak storyline; but the one-offs like Caveman Science and the epistemological RPG are some of my favorite things on the internet.
7katydee12y
The first real "storyline" Dresden Codak comic can be found here. That said, a lot of people I've spoken with simply don't like the Dresden Codak storyline in any form, and prefer the funny one-offs to any of the continuity-oriented comics.
0VKS12y
A couple dozen pages of Dresden Codak is almost a third of the entire thing... Perhaps it's just me, but I think it's sufficiently short that the naïve strategy (start at the beginning, click next until you get to the end) would work in this case. (Incidentally, when you get to Hob #9, remember to read the description at the bottom of the page.)
4Bugmaster12y
I disagree with Nornagest: I think the best place to start is at the beginning. They pretty much had me at "fetch modus"; I was hooked from then on. A lot of really inspirational things start to happen later on, f.ex. the Flash animation "[S] WV: Ascend", but it might be difficult to comprehend without reading the earlier parts. I would also advise starting at the beginning because I'm starting to grow dissatisfied with the double-meta-reacharound tack that the comic is taking now... The earlier chapters had a much more coherent story, IMO.

Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.

-C. Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852.

Billings: [...] What do you think, Peters? What are the chances that this "jewpacabra" is real?

Peters: "I'm estimating somewhere around point zero zero zero zero zero zero zero zero one percent.

Billings: (Sighs) We can't afford to take that chance. [...]

-- Trey Parker, Jewpacabra

(This is at about five minutes fifty seconds into the episode.)

Edit: Related Sequence post.
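
(A back-of-the-envelope rendering of the joke's arithmetic; only the probability comes from the dialogue, and the dollar stakes below are invented purely for illustration.)

```python
# "Point zero zero zero zero zero zero zero zero one percent" is 0.000000001%,
# i.e. a probability of 1e-11.
p_jewpacabra = 1e-11

# Invented stakes, for illustration only.
cost_of_precautions = 1_000        # dollars spent on "not taking that chance"
harm_if_real = 10_000_000          # dollars of harm if the creature turns out to be real

expected_harm_avoided = p_jewpacabra * harm_if_real   # ~0.0001 dollars, a hundredth of a cent
print(expected_harm_avoided > cost_of_precautions)    # False, by a factor of about ten million
```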

To a large degree, our values "just happen"—like our brains. When our values conflict—the value of preventing suffering versus the value of preserving the human species—we are tempted to choose the latter because it feels axiomatic to us. But that is a reason to treat it with extra suspicion, not to treat it as axiomatic.

-Sister Y

0Grognor12y
This quote argues for a position, which is why I think it currently sits ugly at 0 karma after having sat ugly at 1 for a while, but I think, inseparable from the position being argued for, it espouses an important general principle which one should not simply ignore because it can apply to one's preconception; indeed (applying its lesson) that is precisely when we need the principle most. So while I would have just taken the general principle out from Sister Y's post if it were possible for me to do so (and taken the mediocre three to four karma I would have gotten for it), I'm glad that it was intertwined now, as it shows that yes, you're supposed to apply the principle to even this (substitute anything for "this", of course).

I do sincerely wonder what the world would look like if people could even-handedly apply lessons from quotes. There are many lessons here.

Edit: Actually, looking closely at what the words actually say, I realize it doesn't, by itself, argue for the position that the former value is better than the latter value, but its context is an argument for said thing.

Edit2: If you look at the sort of quote in the original Rationality Quotes posts that were entirely Eliezer's collection, they were mostly of the sort that were likely to make you think about something rather than something that is easy to agree with. A desire to return to that model could be what's motivating the comment you're reading.
5TimS12y
In brief, you presented a quote (1) with a controversial position, (2) little LessWrong consensus, (3) no obvious relationship to generalized improvement at achieving goals, and (4) no relationship to the ideal scientific method. You are surprised (or disappointed) that it got negligible karma attention. I notice I am confused.
0Grognor12y
Definitely not surprised. (Edit: okay, now I'm a little surprised. The quote has now been voted up to +4. My little discussion was convincing? I don't know!) Maybe moderately disappointed. I think there's a lot to be said for the meta level of "continue to search, and not just put on a show of searching, for where you're wrong, even if you've already done this many times." I'm a little more disappointed that the highest-voted quotes tend to be applause lights. (Though not always) (also, applause lights are not inherently bad things, but I wish they didn't get the most karma).
0TimS12y
(1) Visibility - people who missed the quote the first time saw our exchange on the side bar.

(2) I am also confused by the purpose of the rationality quotes page. It's not surprising to me that lack of consensus limits upvote potential (i.e. local applause lights get voted up). That said, applause lights are grounded in particular communities. "I like human rights" is an applause light in the United States, but is a provocative position in North Korea. Some of the upvoting is based on the wish that the quote was more widely accepted in general society (i.e. we wish society was more like us).

(3) Notwithstanding what I just said, Rationality Quotes seems to function as an ideological purity tester. If it gets upvoted here, that shows it is part of the local consensus. In other words, I could post quotes that I thought were both post-modern and rationalist, and I expect they would be downvoted as outside the mainstream. To the extent that you think LessWrong has dysfunctional groupthink, I'm not sure the fight can be won in Rationality Quotes as opposed to Open Thread or Discussion. (I aspire to aspire to post into Main, so I seldom think about the social norms of that type of posting.)

(4) In a substantive response to your quote, LessWrong is surprisingly child-free-living in its attitude. Even controlling for age, socioeconomic status, and gender, we are not even vaguely representative of how frequently people desire to have children.
0Bluehawk12y
I'm curious. Did you say "aspire to aspire to post into Main" deliberately?

Oh my soul, be prepared for the coming of the Stranger.
Be prepared for him who knows how to ask questions.

T. S. Eliot, The Rock

Leonid: Without a purpose, a man is nothing.

Newton: Yes. But we wonder...do you share our gift? Do you have the necessary vision? Do you know the final fate of man?

Leonid: How could anyone know things like that?

Council: The Greater Science. The Quiet Math. The Silent Truth. The Hidden Arts. The Secret Alchemy.

Newton: Every question has an answer. Every equation has a solution.

  • S.H.I.E.L.D. #1 (Jonathan Hickman)
2David_Gerard12y
The point of this one isn't clear.
0Vulture12y
I guess it probably should have been broken up into a couple of shorter ones, but it was a single, short exchange and I just couldn't resist. That the question of the final fate of man can, like any question, be answered with a greater science, with the hidden arts... this is essentially the message of transhumanist rationality, and it was beautifully phrased here. "Without a purpose, a man is nothing"... this really should have been off on its own, in retrospect, but its meaning is a little bit less obscure, I think.
0SusanBrennan12y
Isn't one of the implications of Gödel's incompleteness theorem that there will always be unanswerable questions?
1TheOtherDave12y
Only if the questioner is consistent.
0Vulture12y
And there's no way to tell whether the questioner is inconsistent, or there exist unanswerable questions, right? [In any case, I would be greatly astonished if "What is the final fate of man?" was found to be isomorphic to a human Godel sentence ;-) ]
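
(For reference, the standard result being invoked in this exchange, in its usual textbook form; nothing below is specific to the comic or to human questioners.)

```latex
% First incompleteness theorem (Goedel 1931, strengthened by Rosser 1936).
\textbf{Theorem.} If $T$ is a consistent, effectively axiomatizable theory that
interprets enough arithmetic (e.g.\ Robinson arithmetic $Q$), then there is a
sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T .
\]
% The consistency hypothesis is what "Only if the questioner is consistent"
% leans on: an inconsistent theory proves (and so "answers") everything.
```
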
[-][anonymous]12y00

> "The penalty of not doing philosophy isn't to transcend it, but simply to give bad philosophical arguments a free pass."

David Pearce (www.reddit.com/r/Transhuman/comments/r7dui/david_pearce_ama/c43jfmk)

[This comment is no longer endorsed by its author]
[-][anonymous]12y00

"Dear, my soul is grey With poring over the long sum of ill; So much for vice, so much for discontent... Coherent in statistical despairs With such a total of distracted life, To see it down in figures on a page, Plain, silent, clear, as God sees through the earth The sense of all the graves, - that's terrible For one who is not God, and cannot right The wrong he looks on. May I choose indeed But vow away my years, my means, my aims, Among the helpers, if there's any help In such a social strait? The common blood That swings along my veins, is strong enough To draw me to this duty."

Elizabeth Barrett Browning, Aurora Leigh, 1856

[This comment is no longer endorsed by its author]

The chess board is the world, the pieces are the phenomena of the universe, the rules of the game are what we call the laws of Nature. The player on the other side is hidden from us. We know that his play is always fair, just and patient. We also know, to our cost, that he never overlooks a mistake, or makes the smallest allowance for ignorance.

-Thomas Huxley

4Will_Newsome12y
I've traditionally gone with: the board is the space of/for potentially-live hypotheses/arguments/considerations, pieces are facts/observations/common-knowledge-arguments, moves are new arguments, the rules are the rules of epistemology. This lets you bring in other metaphors: ideally your pieces (facts/common-knowledge-arguments) should be overprotected (supported by other facts/common-knowledge-arguments); you should watch out for zwischenzugs (arguments that redeem other arguments that it would otherwise be justified to ignore); tactics/combinations (good arguments or combinations of arguments) flow from strategy/positioning (taking care in advance to marshal your arguments); controlling the center (the key factual issues/hypotheses at stake) is important; tactics (good arguments) often require the coordination of functionally diverse pieces (facts/common-knowledge-arguments), and so on.

The subskills that I use to play chess overlap a lot with the subskills I use to discover truth. E.g., the subskill of thinking "if I move here, then he moves there, then I move there, then he moves there, ..." and thinking through the best possible arguments at each point rather than just giving up or assuming he'll do something I'd find useful, i.e. avoiding motivated stopping and motivated continuation, is a subskill I use constantly and find very important. I constantly see people only thinking one or two moves (arguments) ahead, and in the absence of objective feedback this leads to them repeatedly being overconfident in bad moves (bad arguments) that only seem good if you're not very experienced at chess (argumentation in the epistemic sense).

Oh, a rationality quote: Bill Hartson: "Chess doesn't make sane people crazy; it keeps crazy people sane." And Bobby Fischer: "My opponents make good moves too. Sometimes I don't take these things into consideration."
[-][anonymous]12y00

But, the hard part comes after you conquer the world. What kind of world are you thinking of creating?

Johan Liebert, Monster

[This comment is no longer endorsed by its author]

I adore Western medicine. I trust my doctor with my life. I’m just not sure I trust her with my death. Keep in mind that when it comes to your body and those of your family and who’s dead and who’s alive, who’s conscious and who’s not, your own judgment may be better than anyone else’s.

Dick Teresi, The Undead

They were conquerors, and for that you want only brute force -- nothing to boast of, when you have it, since your strength is just an accident arising from the weakness of others.

--Joseph Conrad, Heart of Darkness

In the small circle of pain within the skull
You still shall tramp and tread one endless round
Of thought, to justify your action to yourselves,
Weaving a fiction which unravels as you weave,
Pacing forever in the hell of make-believe
Which never is belief: this is your fate on earth
And we must think no further of you.

T. S. Eliot, Murder in the Cathedral

"The material world," continued Dupin, "abounds with very strict analogies to the immaterial; and thus some color of truth has been given to the rhetorical dogma, that metaphor, or simile, may be made to strengthen an argument, as well as to embellish a description. The principle of the vis inertiae, for example, seems to be identical in physics and metaphysics. It is not more true in the former, that a large body is with more difficulty set in motion than a smaller one, and that its subsequent momentum is commensurate with this difficulty,

... (read more)

He who refuses to do arithmetic is doomed to talk nonsense.

-- John McCarthy

5Nominull12y
Repeat
0MarkusRamikin12y
I'm starting to feel it was a mistake to have so many of those threads instead of a single one.
5NancyLebovitz12y
A single thread would have been of unmanageable size.
2MarkusRamikin12y
In what sense unmanageable? What would it make harder to do that is easy to do now? It seems to me the current setup makes it harder to know if you're posting a repeat, or to display a list of all top quotes. Also, I think it leads to more barrel-scraping this way; it seems to me that for the most part we ran out of the really great quotes and now often things get posted that have no special rationality lesson, but instead appeal to the tastes and specific beliefs common in our particular community.
5NancyLebovitz12y
Unmanageable because the site software doesn't show more than 500 (top-level?) comments, and because large numbers of comments load more slowly.

There's a way to find top-voted quotes-- Best of Rationality Quotes 2009/2010 (Warning: 750kB page, 774 quotes). This could be considered a hint about the quantity problem. There is another one for 2011.

As for dupes, the search on the site is adequate for finding them-- what's needed is a recommendation on the quotes page for people to check before posting.

I think the quotes continue to be somewhat interesting, but it's not so much that there are no great ones left (though I was surprised to discover recently that "Nature to be commanded must be obeyed" hadn't been listed) as that they tend to keep hitting the same points.
3MarkusRamikin12y
I see. Thank you. It seems to me that there's room for improvement to the software, then. However, I'll shut up at this point.
1NancyLebovitz12y
You're welcome. There's always room for improvement in the software. Once in a while, there's a request for suggestions, so you might want to think about the changes you'd like to see.
2NancyLebovitz12y
To my mind, the redundancy problem with the quotes pages isn't so much repeated quotes as different quotes which mean pretty much the same thing.
1Richard_Kennaway12y
How many different things are there to say about rationality?
4NancyLebovitz12y
Well, the right question is "How many different brief things are there to say about rationality?" If you're allowed to go on at length, the sequences imply that there's quite a bit to say. I don't think the question about brief statements has an a priori answer.
1NancyLebovitz12y
Thanks for asking about unmanageability. That fits neatly with the importance of being specific. I had enough experience with the site to know that very long threads don't work well and to have a feeling for the quote threads adding up to a huge lump, but I had it in my mind as one chunk and didn't realize that if you suggested a single quote thread, it was worth considering that you didn't have my background knowledge.

Tom: "Diana, have you ever confronted a moral dilemma?"

Diana: "I have spent my life confronting real dilemmas. I have always found moral dilemmas to be the indulgence of the well-fed middle class."

— Waiting for God (TV Series)

8tut12y
Is there a point to this quote, besides that this Diana character doesn't understand the term 'moral dilemma'?
2Eugine_Nier12y
That the kind of "moral dilemmas" philosophers tend to contemplate tends to be very different from the kind of dilemmas people encounter in practice.
0Normal_Anomaly12y
Perhaps that it requires significant time and cognitive energy to make difficult decisions in general or reflectively modify one's moral system in particular? ETA: can someone explain the downvote?