Rationality Quotes April 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
Paul Dirac
Dick Teresi, The Undead
On counter-signaling, and how not to do it:
-- The Irish Independent, "News In Brief"
Maybe the guy had been reading too much Edgar Allan Poe? As a child, I loved "The Purloined Letter" and tried to play that trick on my sister - taking something from her and hiding it "in plain sight". Of course, she found it immediately.
ETA: it was a girl, not a guy.
I find it highly unlikely that this is the whole story. Surely the police are not licensed to investigate a car based solely on its vanity plate and where it was parked...
You are probably right that more information drew police attention to the car, but "near the border" gets one most of the way to legally justified. In the 1970s, the US Supreme Court explicitly approved a permanent checkpoint approximately 50 miles north of the Mexican border.
Well that's a rather depressing piece of law...
It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.
If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:
I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?
When speaking about sensory inputs, it makes sense to say that different species (or even different individuals) have different ranges, so one can perceive something that another can't.
With computation it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in the worst case, it would be implemented as a simulation of another language running its native implementation.
There are some technical details, though. Simulating another program is slower and requires more memory than running the original program. So it could be argued that on given hardware you could write a program in language X which uses all the memory and all available time, so it does not necessarily follow that you can write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in the available time a computer can do any finite number of computation steps, but it cannot do an infinite number of steps. The memory is also unlimited, but in a finite time you can only manage to use a finite amount of memory.)
So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others can not. (Then, there are other interesting levels of abstraction which care about time and space complexity of algorithms.)
Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from a programming language (e.g. in Pascal you forbid the "while" and "repeat" commands, and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have this potential, and you lose the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple -- a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is to write a Turing machine simulator in it, which is usually very simple.
Now back to the original discussion... Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen and paper) we can simulate a computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that ability to compute isn't the same as ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
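The universality test described above (write a Turing machine simulator in your language) can be made concrete. Here is a minimal sketch in Python; the simulator interface, the rule-table format, and the unary-increment example are all illustrative inventions, not a standard library:

```python
# Minimal Turing machine simulator: proving a language universal amounts to
# being able to write something like this in it. The example program is a
# unary incrementer: it appends one '1' to a block of '1's, then halts.
def run_turing_machine(rules, tape, state="start", halt="halt", blank="0"):
    tape = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells)

# Rule table: (state, read symbol) -> (write symbol, move, next state)
increment = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "0"): ("1", "R", "halt"),   # hit the blank: write one more 1
}

print(run_turing_machine(increment, "111"))  # -> 1111
```

The whole machine is a dozen lines, which is why it makes such a convenient target for universality proofs.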
Wow. That's really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)
Could you also explain why the HPMoR universe isn't Turing computable? The time-travel involved seems simple enough to me.
Not a complete answer, but here's commentary from a ffdn review of Chapter 14:
I got the impression that what "not Turing-computable" meant is that there's no way to only compute what 'actually happens'; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the 'false' timelines.
There's also the problem of an infinite number of possible solutions.
A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you've got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there's a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.
Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it's related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman's Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the "Best Textbooks on Every Subject" thread to see if there's a consensus on another.
What does that statement mean in the context of thoughts?
That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my "verbal manipulation" module to do formal logic, that doesn't mean I have a formal logic module.
Any defects in my ability to repurpose might be specific to me: I might be able to think the thought "A -> B, ~A, therefore ~B" with the flavor of trueness, while another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.
Aren't there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?
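Incidentally, the sample inference above, "A -> B, ~A, therefore ~B", is formally invalid (it is denying the antecedent), which a brute-force truth table confirms. A quick sketch of that check in Python; the helper function and names are illustrative:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff no truth assignment makes all premises
    true while making the conclusion false."""
    for A, B in product([True, False], repeat=2):
        if all(p(A, B) for p in premises) and not conclusion(A, B):
            return False
    return True

# A -> B, ~A, therefore ~B  (denying the antecedent)
premises = [lambda A, B: (not A) or B,  # A -> B
            lambda A, B: not A]         # ~A
print(valid(premises, lambda A, B: not B))  # -> False; A=False, B=True is a counterexample

# Modus tollens for contrast: A -> B, ~B, therefore ~A
premises = [lambda A, B: (not A) or B,
            lambda A, B: not B]
print(valid(premises, lambda A, B: not A))  # -> True
```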
It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn't stupid.
It doesn't mean nothing; it means that people (like machines) can be taught to do things without understanding them.
(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. "Understanding that 1+1 = 2" is not the same thing as being able to output "2" to the query "1+1=".)
FWIW I've read a study that says about 50% of people can't tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn't the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can't hear music.
http://languagelog.ldc.upenn.edu/nll/?p=2074
It shocked the hell out of me, too.
This is weird. It is hard for me to hear the difference in the cadence, but it's crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, as when you press the pedal on a piano a bit, which makes it hard to discern the difference.
Maybe they lost something in retelling here? Made up new stimuli for which it doesn't work because of harmonics or something?
Or maybe it's just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, I am not hearing what you're saying, I am washing the dishes. Though I've no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don't hear.
This needs proper study.
The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).
Each of the following two recordings is a sequence of eight C major or C minor chords:
Each of the following two recordings is a sequence of eight "cadences" -- groups of four chords that are either
F B♭ C F
or
F B♭ Cminor F
Edit: Here's a listing of the chords in all four sound files.
Edit 2 (2012-Apr-22): I added another recording that contains these chords:
repeated over and over, while the balance between the voices is varied, from "all voices roughly equal" to "only the second voice from the top audible". The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it's not foregrounded.
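For readers without the audio: the two chords in these recordings differ in exactly one note, E versus E♭. Under equal temperament (assuming the standard A4 = 440 Hz reference), that gap is about 18.5 Hz in the fourth octave, which this short sketch computes:

```python
A4 = 440.0  # standard tuning reference

def note_freq(semitones_from_a4):
    """Equal temperament: each semitone multiplies frequency by 2**(1/12)."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Semitone offsets from A4 for the notes in the two chords
C4, Eb4, E4, G4 = -9, -6, -5, -2

c_major = [note_freq(n) for n in (C4, E4, G4)]   # C, E, G
c_minor = [note_freq(n) for n in (C4, Eb4, G4)]  # C, Eb, G -- only the middle note differs
print([round(f, 1) for f in c_major])
print([round(f, 1) for f in c_minor])
```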
Ditto for me -- The difference between the two chords is crystal clear, but in the cadence I can barely hear it.
I'm not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I've studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn't notice the difference at all. Freaky. I know how that post-doc felt when she couldn't hear the difference in the chords.
In Pinker's book "How the Mind Works" he asks the same question. His observation (as I recall) was that much of our apparently abstract logical abilities are done by mapping abstractions like math onto evolved subsystems with different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly to any of those subsystems.
Because thoughts don't behave much like perceptions at all, so that wouldn't occur to us or convince us much once we hear it. Are there any thoughtlike things we don't get but can indirectly manipulate?
Extremely large numbers.
(among other things)
Parity transforms as rotations in four-dimensional space.
-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)
Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick
Chris Bucholz
Mostly agreed. If I were to stand on a soapbox and say "light with a wavelength of 523.4371 nm is visible to the human eye", it would fall into the category of an unsubstantiated claim by a single person. But it is implied by the general knowledge that the human visual range is from roughly 400 nm to roughly 700 nm, and that has been confirmed by anyone who has looked at a spectrum with even crude wavelength calibration.
Mad Men, "My Old Kentucky Home"
Another good one from Don Draper:
Barbara Alice Mann
I agree with the necessity of making life more fair, and disagree with the connotational noble Pocahontas lecturing a sadistic western patriarch. (Note: the last three words are taken from the quote.)
Agree that that looks an awful lot like an abuse of the noble savage meme. Barbara Alice Mann appears to be an anthropologist and a Seneca, so that's at least two points where she should really know better -- then again, there's a long and more than somewhat suspect history of anthropologists using their research to make didactic points about Western society. (Margaret Mead, for example.)
Not sure I entirely agree re: fairness. "Life's not fair" seems to me to succinctly express the very important point that natural law and the fundamentals of game theory are invariant relative to egalitarian intuitions. This can't be changed, only worked around, and a response of "so make it fair" seems to dilute that point by implying that any failure of egalitarianism might ideally be traced to some corresponding failure of morality or foresight.
I didn't think I could remove the quote from that attitude about it very effectively without butchering it. I did lop off a subsequent sentence that made it worse.
Do people typically say "life isn't fair" about situations that people could choose to change?
Don't they usually say it about situations that they could choose to change, to people who don't have the choice?
Exactly. In my experience the people who say "life isn't fair" are the main reason that it still isn't.
How did you develop a sufficiently powerful causal model of "life" to establish this claim with such confidence?
I mean that in almost all of the situations where I've heard that phrase used, it was used by someone who was being unfair and who couldn't be bothered to make a real excuse.
I agree, it's usually used as an excuse not to try to change things.
Introspection tells me this statement usually gets trotted out when the cost of achieving fairness is too high to warrant serious consideration.
EDIT: Whoops, I just realised that my imagination only outputted situations involving adults. When imagining situations involving children I get the opposite of my original claim.
The automatic pursuit of fairness might lead to perverse incentives. I have in mind some (non-genetically related) family in Mexico who don't bother saving money for the future because their extended family and neighbours would expect them to pay for food and gifts if they happen to acquire "extra" cash. Perhaps this "Western" patriarchal peculiarity has some merit after all.
Is this really about fairness? Seems like different people agree that fairness is a good thing, but use different definitions of fairness. Or perhaps the word fairness is often used to mean "applause lights of my group".
For someone fairness means "everyone has food to eat", for another fairness means "everyone pays for their own food". Then proponents of one definition accuse the others of not being fair -- the debate is framed as if the problem is not different definitions of fairness, but rather our group caring about fairness and the other group ignoring fairness; which of course means that we are morally right and they are morally wrong.
I'm not convinced fairness is inherently valuable.
I don't think that fairness is terminally valuable, but I think it has instrumental value.
Arthur C. Clarke
The trouble is, the most problematic kinds of faith can survive it just fine.
Which leads us to today's Umeshism: "Why are existing religions so troublesome? Because they're all false; the only ones that still exist are so dangerous that they can survive the truth."
That's very nice to say, but people are apt to find giving up some faiths very emotionally wrenching and socially costly (even if the faith isn't high status, a believer is likely to have a lot of relationships with people who are also believers). Now what?
Douglas Adams, Dirk Gently's Holistic Detective Agency
-- C. S. Lewis
"Muad’Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It is shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad‘Dib knew that every experience carries its lesson"
Frank Herbert, Dune
It took me years to learn not to feel afraid due to a perceived status threat when I was having a hard time figuring something out.
A good way to make it hard for me to learn something is to tell me that how quickly I understand it is an indicator of my intellectual aptitude.
Interesting article about a study on this effect:
This seems like a more complicated explanation than the data supports. It seems simpler, and equally justified, to say that praising effort leads to more effort, which is a good thing on tasks where more effort yields greater success.
I would be interested to see a variation on this study where the second-round problems were engineered to require breaking of established first-round mental sets in order to solve them. What effect does praising effort after the first round have in this case?
Perhaps it leads to more effort, which may be counterproductive for those sorts of problems, and thereby lead to less success than emphasizing intelligence. Or, perhaps not. I'm not making a confident prediction here, but I'd consider a praising-effort-yields-greater-success result more surprising (and thus more informative) in that scenario than the original one.
I agree that the data doesn't really distinguish this explanation from the effect John Maxwell described; mainly I just linked it because the circumstances seemed reminiscent and I thought he might find it interesting. It's worth noting, though, that these aren't competing explanations: your interpretation focuses on explaining the success of the "effort" group, and the other focuses on the failure of the "intelligence" group.
To help decide which hypothesis accounts for most of the difference, there should really have been a control group that was just told "well done" or something. Whichever group diverged the most from the control, that group would be the one where the choice of praise had the greatest effect.
I've seen this study cited a lot; it's extremely relevant to smart self- and other-improvement. But there are various possible interpretations of the results, besides what the authors came up with... Also, how much has this study been replicated?
I'd like to see a top-level post about it.
Dupe
On politics as the mind-killer:
-- Julian Sanchez (the whole post is worth reading)
Does anyone know the exact quote to which he is referring here?
We've reached the point where the weather is political, and so are third person pronouns.
I think it's this but I'm not sure:
Tell that to Socrates.
Given that they supposedly drowned people for discussing irrational numbers, that seems false.
The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...
-- Terry Pratchett, Feet of Clay
Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:
(George Orwell's review of Mein Kampf)
(well, we have videogames now, yet... we gotta make them better! more visceral!)
I don't see that that's true. Germany loved Hitler when he was giving them job security and easy victories and became much less popular once the struggle and danger and death arrived on the scene.
...and that's why the rule doesn't apply to the reference class of cases I just constructed to only contain my own, Officer.
G. K. Chesterton
Zach Wiener's elegant disproof:
(Although to be fair, it's possible that the disproof fails because "think of the strangest thing that's true" is impossible for a human brain.)
It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.
More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.
Indeed, I posted this quote partially out of annoyance at a certain type of analysis I kept seeing in the MoR threads. Namely, person X benefited from the way event Y turned out; therefore, person X was behind event Y. After all, thinking like this about real life will quickly turn one into a tin-foil-hat-wearing conspiracy theorist.
Yes but in real life the major players don't have the ability to time travel, read minds, become invisible, manipulate probability etcetera, these make complex plans far more plausible than they would be in the real world. (That and conservation of detail.)
In real life the major players are immune to mindreading, can communicate securely and instantaneously worldwide, and have tens of thousands of people working under them. You are, ironically, overlooking the strangeness of reality.
Conservation of detail may be a valid argument though.
Conservation of detail is one of the memetic hazards of reading too much fiction.
This quote seems relevant:
G. H. Hardy, upon receiving a letter containing mathematical formulae from Ramanujan
Doesn't work if (n + 1) monkeys dressed as Hitler are no stranger than n monkeys dressed as Hitler, and n monkeys dressed as Hitler are true.
Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)
-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)
-G. K. Chesterton, The Curse of the Golden Cross
-- Douglas Adams. The Long Dark Tea-Time of the Soul (1988) p.169
I can't find the quote easily (it's somewhere in God, No!), but Penn Jillette has said that one aspect of magic tricks is the magician putting in more work to set them up than anyone sane would expect.
I'm moderately sure that he's overestimating how clearly the vast majority of people think about what's needed to make a magic trick work.
His partner Teller says the same thing here:
Edit: That trick is 19 minutes and 50 seconds into this video.
The ghost of Parnell is Far, the presentation to the Queen is Near?
Johan Liebert, Monster
-- Marvin Minsky, The Society of Mind
-George Orwell
Sadly, there's no need of any adjective before "Politics" here. It's a fully general statement.
David Pearce
This is analogous to my main worry as someone who considers himself a part of the anti-metaphysical tradition (like Hume, the Logical Positivists, and to an extent Less Wrongers): what if, by avoiding metaphysics, I am simply doing bad metaphysics?
“The mind commands the body and it obeys. The mind orders itself and meets resistance.”
-St Augustine of Hippo
Augustine has obviously never tried to learn something which requires complicated movement, or at least he didn't try it as an adult.
Marvin Minsky
--Nietzsche
-Game of Thrones (TV show)
-Robert Kurzban, Why Everyone (Else) is a Hypocrite: Evolution and the Modular Mind
Upvoted because I like Natalie Reed, but this is way too long. The key sentence seems to be
— Jack Vance, The Languages of Pao
Shorter version:
-- Terence, Phormio
My favorite:
On specificity and sneaking in connotations; useful for the liberal-minded among us:
-celandine13
How about:
Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.
Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)
Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.
Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?
The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here. And given that we're running on corrupted hardware, I suspect that someone who does try to "perform Bayesian inference that somehow involves probabilities conditioned on the race of a person" ends up subconsciously double-counting evidence and therefore ends up with less accurate results than somebody who doesn't. (As for cases when the evidence from race is not so easy to screen off... well, I've never heard of anybody being accused of racism for pointing out that Africans have longer penises than Asians.)
I have seen accusations of racism as responses to people pointing that out.
Also, according to the U.S. Supreme Court, even if race is screened off, your actions can still be racist or something.
In real life, you don't have the luxury of gathering forensic evidence on everyone you meet.
I'm not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.
Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.
There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.
...
I facepalmed. Really, Eric? Sorry, I don't think that a moral realist is perceptive enough to the nuances and ethical knots involved to be a judge on this issue. I don't know, he might be an excellent scientist, but it's extremely stupid to be so rash when you're attempting serious contrarianism.
Yep, let's all try to overcome bias really really hard; there's only one solution, one desirable state, there's a straight road ahead of us; Kingdom of Rationality, here we come!
(Yvain, thank you a million times for that sobering post!)
What if verbal ability and quantitative ability are often decoupled?
That (1) only makes sense if there is a "standard" definition of racist (and one based on what people believe rather than/as well as what they do). The point of the celandine13 quote was indeed that there's no such thing.
Where would someone like Steve Sailer fit in this classification?
Indeed, as strange as it might sound (but not to those who know what he usually blogs about), Steve Sailer seems to genuinely like black people more than average, and I wouldn't be surprised at all if a test showed he wasn't biased against them, or was less biased than the average white American.
He also doesn't seem like a racist2 from the vast majority of his writing; painting him as racist3 is plain absurd.
This is missing Racist4:
Someone whose preferences result in disparate impact.
You left out one common definition.
Also, I don't see why calling Obama the "Food Stamp President" or otherwise criticizing his economic policy makes one a jerk, much less a "Racist2", unless one already believes that all criticism of Obama is racist by definition.
...and also useful for those among us who don't identify as "liberal-minded."
-Tim Ferriss, The 4-Hour Workweek
-- Peter Drucker
(I've quoted this line several times before.)
Sure there is. Doing inefficiently what should not be done at all is even more useless. At least if you do it efficiently you can go ahead and do something else sooner.
It seems to me that efficiency is just as useful doing things that should not be done as it is other times, for a fixed amount of doing stuff that shouldn't be done.
Depends on the kind of efficiency, I guess.
If someone is systematically murdering people for an hour, I'd prefer they not get as much murdering done as they could.
Yoshinori Kitase
I first encountered this in a physics newsgroup, after some crank was taking some toy model way too seriously:
Thaddeus Stout Tom Davidson
(I remembered something like "if you pull them too much, they break down", actually...)
-- Christina Rossetti, Who Has Seen the Wind?
-- Isuna Hasekura, Spice and Wolf vol. 5 ("servant" is justified by the medieval setting).
I don't get it.
Short explanation: the person that knows why a thing must be done is generally the person who decides what must be done. Application to rationality: instrumental rationality is a method that serves goals. The part that values and the part that implements are distinct. (Also, you can see the separation of terminal and instrumental values.)
--Jonathan Haidt, source
He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don't pay enough attention to it (and refuse to acknowledge their own sacred/profane areas).
I have more to say about his values theory, I'll post some thoughts later.
UPD: I wrote a little something, now I'm just gonna ask Konkvistador whether he thinks it's neutral enough or too political for LW.
If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can't predict the direction in which regression to the mean will occur if your data set is a single point.
The following all have different answers:
(The answer is 39700; I'm probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.)
(The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.)
(The answer is some number higher than 39700, because I'm no longer an absolute beginner.)
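A toy simulation makes the asymmetry among these three answers vivid. The skill/luck decomposition and all the numbers below are made up for illustration, not taken from any real data:

```python
import random

random.seed(0)

# Toy model: an observed score is stable skill plus one-off luck.
def simulate(n=100_000, skill_sd=100, luck_sd=100):
    day1, day2 = [], []
    for _ in range(n):
        skill = random.gauss(0, skill_sd)
        day1.append(skill + random.gauss(0, luck_sd))
        day2.append(skill + random.gauss(0, luck_sd))
    return day1, day2

day1, day2 = simulate()

# Among players who did unusually well on day 1, the average day-2 score
# is pulled back toward the overall mean: their luck doesn't repeat.
top = [d2 for d1, d2 in zip(day1, day2) if d1 > 150]
print(sum(top) / len(top))  # well below the 150+ day-1 scores that selected them
```

With a single data point and no comparison group, nothing tells you whether the score was skill-heavy or luck-heavy, so you can't say which way it will regress.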
My old physics professor David Newton (yes, apparently that's the name he was born with) on how to study physics.
--Some AI Koans, collected by ESR
A shortcut for making less-biased predictions, taking base averages into account.
Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"
Rasmus Eide, a.k.a. Armok_GoB.
PS. This is not taken from an LW/OB post.
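Kahneman's corrective procedure for the Julie problem above can be written out directly: start at the base-rate mean and move toward the intuitive, evidence-matched prediction only in proportion to the evidence's correlation with the outcome. The numbers below (mean GPA, standard deviation, the correlation between early reading and GPA) are illustrative guesses, not data:

```python
def regressive_prediction(evidence_sd_score, mean, sd, correlation):
    """Regress an intuitive prediction toward the base-rate mean,
    by an amount set by how well the evidence predicts the outcome."""
    intuitive = mean + evidence_sd_score * sd  # prediction that "matches" the evidence
    return mean + correlation * (intuitive - mean)

# Hypothetical numbers: mean GPA 3.0, sd 0.4. "Read fluently at four"
# feels like a +2 sd fact, but suppose early reading correlates with
# college GPA only weakly, say r = 0.3.
print(round(regressive_prediction(2.0, mean=3.0, sd=0.4, correlation=0.3), 2))
```

With r = 0, the best guess is just the base-rate mean; with r = 1, the unregressed intuitive prediction is correct; everything in between interpolates.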
--Razib Khan, source
-- Fahrenheit 451
I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.
Tips for dealing with people with big egos:
On politeness:
People who are exempted:
I'll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks, and
3) honoring the obligations of implicit social alliances when an ally is attacked
I endorse this and have been trying to get better about #3 myself.
The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it's failing to reflect on whether I endorse A. If I do neither, then the community doesn't degenerate into tribal warfare, it degenerates into chaos.
Admittedly, chaos can be more fun, but I don't really endorse it.
All of that said, I do recognize that explicitly talking about "social alliances" (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn't help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).
(I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.)
Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.)
I blame the stroke, though.
Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.
It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kind of alliance you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.
Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.
Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).
In saying this, I don't know whether I'm expanding on your point or disagreeing with it.
I appreciate your kind words komponisto! You inspire me to live up to them.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
This discussion is off-topic for the "Rationality Quotes" thread, but...
If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:
Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.
-- Mark Rippetoe, Starting Strength
Sample: men who come to this guy to get stronger, I assume?
Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I've heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he's filtering those out properly.
--1943 Disney cartoon
Aaron Sloman
-- David Henderson on Social Darwinism
-- Scott Locklin
-Biutiful
T. S. Eliot
Chinese proverb, meaning "the onlooker sees things more clearly", or literally, "the player lost, the spectator clear"
Chinese proverb, "three men make a tiger", referring to a semi-mythological event during the Warring States period:
-- Wikipedia
In personal development workshops, the saying is, "the one with the mike in their hand is the last to see it." Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.
--Francis Bacon, Novum Organum (1620) <!-- 1905 (Ellis, R. & Spedding, J., Trans.). London: Routledge. -->
Robert Brault
Am I the only one who didn't realize before reading other comments that he was not claiming to have been converted by his nostrils?
Particularly interesting since I (and, I suspect, others on LW) usually attach positive affect to the word "skeptic", since it seems to us that naivete is the more common error. But of course a Creationist is sceptical of evolution.
(Apparently both spellings are correct. I've learned something today.)
--Samuel Johnson, The Adventurer, #119, December 25, 1753.
"An organized mind is a disciplined mind. And a disciplined mind is a powerful mind."
-- Batman (Batman the Brave and the Bold)
That doesn't seem to follow. An organized mind may not be disciplined. It may even be obsessively organized at the expense of being disciplined.
Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi
Eric-Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, & Han van der Maas
I don't see why the first hypothesis should necessarily be rejected out of hand. If the supposed mechanism is unconscious then having it react to erotic pictures and not particular casino objects seems perfectly plausible. Obviously the real explanation might be that the data wasn't strong enough to prove the claim, but we shouldn't allow the low status of "psi theories" to distort our judgement.
-A Weak Hadith of the Prophet Muhammad
--Alan Belkin, From the Stock Market to Music, via the Theory of Evolution
This was just the first bit that stood out as LW-relevant; he also briefly mentions cognitive bias and touches on the possible benefits of cognitive science to the arts.
The heck? Quantum fields are completely lawful and sane. Only the higher levels of organization, i.e. human beings, are bugfuck crazy.
Behold, the Copenhagen Interpretation causes BRAIN DAMAGE.
As natural as QFT seems today, my understanding is that in 1960, before many of the classic texts in the domain were published, the ideas still seemed quite strange. We would do well to remember that when we set out to search for other truths which we do not yet grasp.
:p
Leaving aside the dubiousness of calling the way the universe actually works "nonsense" and "mad": It seems very, very, very unlikely that anything in Lewis Carroll's writings was a metaphor for quantum mechanics. He died in 1898.
(I suppose something can be used as a metaphor for quantum mechanics without having been intended as one, though.)
In recent years, I've come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.
--Kip W
Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn't function if it weren't so.
-Mark Rosenfelder (http://zompist.com/chance.htm)
Bruce Sterling
--Oswald Spengler, The Decline of the West
Can you please explain this, slowly and carefully? It sounds plausible, and I'm trying to improve my understanding of space-time / 4-D thinking.
When analysing a circuit we normally consider a wire to have the same voltage along its entire length. (There are two problems with this: voltage changes propagate at a finite speed, at most c, and the wire has a resistance. Normally both are negligible.) Thus we can view a wire as taking a voltage and spreading it out along a line in space.
On the other hand, memory locations take a voltage and spread it out through time. So they are in some sense a wire pointing in the time direction.
Sadly, the analogy doesn't quite hold up. A wire has one spatial dimension but also a temporal dimension (it exists for more than an instant). So if you rotated a wire so that its spatial dimension pointed along the temporal dimension, its temporal dimension would rotate down into one of the spatial dimensions, and it would still look like a wire. A memory location, by contrast, has effectively no spatial extent: it's a very small bit of metal (you could make one in the shape of a wire, but people don't). It has a temporal extent but no spatial extent, so if you rotated it you would get something with a spatial extent but no temporal extent. That would look like a piece of wire that appeared for an instant and then disappeared again.
-C. Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852.
-Elizabeth Barrett Browning, Aurora Leigh, 1856
-Sister Y