You know how people are always telling you that history is actually really interesting if you don’t worry about trivia like dates? Well, that’s not history, that’s just propaganda. History is dates. If you don’t know the date when something happened, you can’t provide the single most obvious reality check on your theory of causation: if you claim that X caused Y, the minimum you need to know is that X came before Y, not afterwards.
"Dateless history" can be interesting without being accurate or informative. As long as I don't use it to inform my opinions on the modern world either way, it can be just as amusing and useful as a piece of fiction.
Refuting frequently appearing bullshit could be made more efficient by having a web page with standard explanations which could be linked from the debate. Posting a link (perhaps with a short summary, which could also be provided on the top of that web page) does not require too much energy.
Which would create another problem, of protecting that web page from bullshit created by reversing stupidity, undiscriminating skepticism, or simply affective death spirals about that web page. (Yes, I'm thinking about RationalWiki.) Maybe we could have multiple anti-bullshit websites, which would sometimes explain using their own words, and sometimes merely by linking to another website's explanation they agree with.
Refuting frequently appearing bullshit is more than a matter of making the facts available. After all, anti-vaccination folks appear with enough frequency to be a curious news item (which I admit is a horrendous metric, but let's pretend it means something), and I'm sure that a quick Google search would yield enough facts to disabuse them of their notions. The trick is building up enough credibility and charisma - if such a property could be applied to an argument - to make such a site not just correct, but convincing. That's where the order of magnitude comes in.
The first is: does this prompt me to think in a way I did not before? If so, it is not evidence, but it allows you to better weigh the evidence by providing you with more possibilities.
I think that this would only be true if it prompts you to think in a new and random way. Fiction which prompts you to think in a new but non-random way (that is, all fiction) could very well make it worse. It could very well be that the author selectively prompts you to think only in cases where you got it right without doing the thinking. If so, then this will reduce your chance of getting it right.
For a concrete example, consider a piece of homeopathic fiction which "prompts you to think" about how homeopathy could work. It provides a plausible-sounding explanation, which some people haven't heard of before. That plausible-sounding explanation either is rejected, in which case it has no effect on updating, or accepted, making the reader update in the direction of homeopathy. Since the fiction is written by a homeopath, it wouldn't contain an equally plausible sounding (and perhaps closer to reality) explanation of what's wrong with homeopathy, so it only leads people to update in the wrong direction.
Furthermore, homeopathy is probably more important to homeopaths than it is to non-homeopaths. So not only does reading homeopathic fiction lead you to update in the wrong direction, reading a random selection of fiction does too--the homeopath fiction writers put in stuff that selectively makes you think in the wrong direction, and the non-homeopaths, who don't think homeopathy is important, don't write about it at all and don't make you update in the right direction.
Interesting point. The sort of new ways of thinking I had imagined were more along the lines of "consider more possible scenarios" - for example, if you had never before considered the idea of a false flag operation (whether in war or in "civil" social interaction), reading a story involving a false flag operation might prompt you to reinterpret certain evidence in light of the fact that it is possible (a fact not derived directly from the story, but from your own thought process inspired by the story). While it is certainly possible to update in the wrong direction, the thought process I had in mind was thus:
1. I have possible explanations A, B, and C for this observed phenomenon Alpha.
2. I read a story in which event D* occurs, possibly entangled with Alpha*, a similar phenomenon to Alpha.
3. I consider the plausibility of an event of the type D* occurring, taking in not only fictional evidence but also real-world experience and knowledge, and come to the conclusion that while D* takes certain liberties with the laws of (psychology/physics/logic), the event D is entirely plausible, and may be entangled with a phenomenon such as Alpha*.
4. I now have possible explanations A, B, C, and D for the observed phenomenon Alpha.
It is important to note that fiction has no such use for a hypothetical perfect reasoner, who begins with priors assigned to each and every physically possible event. Further, it would be of no use to anyone incapable of making that second-to-last step correctly; if they simply import D* as a possible explanation for Alpha, or arrive at some hypothetical event D which is not, in fact, reasonable to assume possible or plausible, then they have in fact been hindered by fictional "evidence".
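The process above amounts to expanding a hypothesis space and redistributing probability over it. A minimal sketch of that step, with entirely made-up priors (the hypothesis names and numbers are illustrative, not from the original discussion):

```python
# Toy sketch of expanding a hypothesis space after fiction suggests a
# new candidate explanation. All priors here are invented for illustration.

def renormalize(priors):
    """Scale probabilities so they sum to 1."""
    total = sum(priors.values())
    return {h: p / total for h, p in priors.items()}

# Initial explanations A, B, and C for phenomenon Alpha.
priors = {"A": 0.5, "B": 0.3, "C": 0.2}

# Reading about D* prompts us to consider the real-world event D.
# The fiction itself is not evidence: D's prior comes from judging its
# real-world plausibility, after which the whole set is renormalized.
priors["D"] = 0.1
priors = renormalize(priors)

print(priors)
```

Note that the fiction only contributes the *existence* of hypothesis D; its probability mass comes from independent judgment, which is exactly the "second-to-last step" that a reader can get wrong.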
Most people reject violent revolution not for the practical reason that it's an unworkable strategy, but because they find the idea of going out and lynching the capitalists morally wrong.
Marx's idea of putting philosophy into action brought along the politics of revolution. Bush's relationship with the "reality-based community" led to misleading voters and ignoring scientific findings. In both cases the ideas get judged by their practical political consequences.
What evidence moves you to say that the primary reason for rejecting violent revolution is morality rather than practicality? (And why do you/the majority of people think that violent revolution has to end in lynchings? Is it a widely held opinion that simply stripping the capitalists of their defining trait - wealth - would be insufficient?)
What about fanfictional evidence?
More seriously, shouldn't it be "don't update on fictional evidence as if it were true"?
Certainly it's reasonable for a story to make us reconsider our beliefs.
It's reasonable to update as a result of the analysis of fiction (including fanfiction) for two reasons, neither of which is directly related to the events of the story in the same way that events in real life are related to updating. The first is: does this prompt me to think in a way I did not before? If so, it is not evidence, but it allows you to better weigh the evidence by providing you with more possibilities. The second is: why was this written? Even a truthless piece of propaganda can be interesting evidence in that it is entangled with human actions and motivations.
These two merely disagree on the meaning of the word decision, not the nature of the situation; one should pick a different scenario to make the possible point about how choosing not to choose doesn't quite work.
I think that they both agree that "decision" here means "choice to embark on a course of action other than the null action", where the null action may be simply waiting for more data. Where they disagree is the relative costs of the null action versus a member of a set of poorly known actions; it seems that the second speaker is trying to remind the first that the null action carries a cost, whether in opportunity or otherwise.
I like this quote, but it occurs to me that "I don't know" is often a reasonable answer to a question.
How about this:
"I refuse to answer that question on the grounds that I can't think of an answer which I am confident will not put me in a negative light."
That just seems like overly honest politicking to me.
There's probably cultural context you're missing (I'm guessing you're not a native English speaker, or at least not American), because it's pretty straightforward from here without any textual context.
A "good loser" is idiomatically someone who can accept defeat graciously (i.e. not get bitter or angry at the opponent). The quote says that anyone who doesn't get offended by their own losses won't improve and will remain a loser.
I actually am a native speaker of American English, and while I am aware that the common usage refers to somebody who is able to handle loss without taking offense, I did not rest on the assumption that the common usage was the relevant usage here. I would consider the meaning of the quote given the common usage inaccurate, as I find the implication that a gracious loser is necessarily an unmotivated loser incorrect. Therefore, I left open the possibility that the quote might use a less common meaning of the term "good loser".
The quote doesn't give that impression in context, including the comments - it's actually a statement about the importance of the rule of law. From the comments, Nick notes:
Indeed, the moral principle of non-initiation of force, far from being a possible basis of society as Murray Rothbard and David Friedman would have it, is a sophisticated outcome of long legal evolution and a highly involved legal procedure that itself cannot stick to that principle: it coerces people to a certain extent so that they will not coerce each other to a much greater extent.
Acknowledged, and criticism withdrawn.
You speak of putting your trust in "a guru and a bunch of other people" as if it's somehow utterly opposed to the alternative of independently verifying particle physics. That would be the case if we were limited to science alone, forced to explicitly test each and every hypothesis in a controlled way. No, I have not conducted independent, replicated studies with p < 0.05 that verify that the scientific consensus is a reasonably accurate picture of reality.
But, as a rationalist - looking at all evidence, not just the clean, isolated stuff that comes through science - I can make some inferences. If the scientific consensus were, in a significant way, more incorrect than correct, there would be signs, something that would be different in a world-with-correct-consensus. For example, in a world-with-correct-consensus, people would be able to use the fruits of that consensus to design techniques which used the laws they discovered to do more than they could do with their bodies alone. They might build devices which use these principles, which would simply not function if they were untrue.
Further, a world-with-incorrect-consensus would almost certainly have to contain a great conspiracy, to conceal either a hidden truth or a near-universal incompetence. Such a thing is improbable enough that it is reasonable to shift belief towards physics - yes, I personally might not have strong direct evidence for it, but I have reasonably strong evidence (my limited knowledge of human nature judges the probability of a super-conspiracy to be very small) that the evidence which I have received from others is good.
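The inference being made here is essentially a Bayes-factor comparison: the observation "devices built on the consensus actually work" is far more likely under a correct consensus than under a conspiracy. A toy sketch with invented probabilities (none of these numbers are from the original argument):

```python
# Toy Bayes-factor sketch of the consensus-vs-conspiracy argument.
# All probabilities are invented for illustration.

prior_consensus_correct = 0.5  # start agnostic for the sake of the sketch

# Observation: devices designed from the consensus physics actually work.
# Likelihood of that observation under each hypothesis:
p_devices_if_correct = 0.99     # expected if the physics is right
p_devices_if_conspiracy = 0.01  # would require a vast, flawless conspiracy

posterior = (p_devices_if_correct * prior_consensus_correct) / (
    p_devices_if_correct * prior_consensus_correct
    + p_devices_if_conspiracy * (1 - prior_consensus_correct)
)

print(posterior)
```

Even from an agnostic prior, the single observation shifts belief strongly toward the consensus; the leprechaun case below fails precisely because no analogous observation is available.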
The leprechaun fellow, however, is in a different boat. The world-with-leprechauns would look different from a world-without-leprechauns; there might be photographic documentation that is verifiably unaltered, or consistent reports of lucky Irishmen finding pots of gold at the ends of rainbows. We do not live in such a world; to believe that we do requires one to ignore, rather than use, the evidence available.
Yes, ultimately we do rest on something other than evidence; everybody must have some first principles to work from. But if your first principle is also your conclusion - "leprechauns exist; that is my belief" - it is a very different, far less useful thing than a first principle which actually gives you tools to deal with the world, such as "things tend to happen, all other things being equal, as they have happened before". To equate the two would require much the same moves as maintaining a scientific conspiracy: denial of a known truth, or denial of tragic incompetence.