Great point! One of the problems here is that people think that just knowing about something will give them the power, but this is not the case. Rationality is a skill set, like bicycle riding or playing chess, and the only way to get good at it is by practicing a lot. You can read lots of books on chess and gain great insights, but when it comes down to actually playing at the board, what matters is what you have internalized through practice.
Even worse, unlike your examples, rationality isn't a single, focused "skill set," but a broad collection of barely related skills. Learning to avoid base rate neglect helps little, if at all, with avoiding honoring sunk costs, which in turn helps little with avoiding the narrative fallacy. You need to tackle them almost independently. That is one reason why I tend to emphasize the need to stop and think when you can. Even if you have not mastered the particular fallacy that may be about to trip you up, you are more likely to notice a potential problem if you get in the habit of thinking through what you are doing.
Yes, I am very familiar with this kind of experience. I think the point about singular epiphanies of this sort is that they are always too brittle and inflexible to carry you on in any meaningful, long-term sort of way. Two further comments:
The realization of "epiphany addiction" is itself a sort of epiphany, in the same sense this discussion is talking about. I'm not sure what the punchline of that should be, except maybe to say that there never seem to be any such "magic bullets" in terms of personal understanding ... this point included. Yes, this may seem strange.
This whole idea and discussion bring to mind some closely related ideas from Eastern (Buddhist) philosophy, which considers in detail the process of self-growth (ideally, samadhi) by means of self-examination (generally, meditation). Within those lines of thought, there is a general emphasis on this point in terms of attachment and detachment: human beings naturally tend to attach to certain dogmas, beliefs, fears, etc., always forgetting that such things are not real in the same sense that objective reality is real. Thus they are largely illusory and fallacious in nature. I think a Buddhist might well look at this article and say, "oh, yes, I agree," and then promptly forget all about it.
they are always too brittle and inflexible to carry you on in any meaningful, long-term sort of way.
What you need to do is to capture it, then use it to help you take the next step; then keep taking those next steps.
The very first thing you need to do is STOP reading, write down whatever caused your epiphany, and think about the next step. Too much of the self-help and popular psychology literature is written like a story, which, while making it more readable and more likely to be read, tends to encourage readers to keep reading straight through. If you are reading for change, you need to read it like a textbook: for the information, rather than for entertainment.
What’s the reality in the United States today? A study in the journal Circulation found that for cardiovascular diseases and diabetes, “if all the recommended prevention activities were applied with 100 percent success,” the prevention would cost almost ten times as much as the savings, increasing the country’s total medical bill by 162 percent. Elmendorf additionally cites a definitive assessment in the New England Journal of Medicine that reviewed hundreds of studies on preventive care and found that more than 80 percent of preventive measures added to medical costs.
A number of people, myself included, find it suspicious that after years of advocating preventative medicine, a bunch of studies against it are coming out just after Obamacare was passed.
Prediction: If Obamacare gets repealed these studies will be refuted by subsequent studies, whereas if it stays on the books, these studies will become the baseline of a new consensus.
Studies against the effectiveness of preventative medicine aren't new; they have been published repeatedly for decades. I read several myself as early as 1993. And of course there's the RAND study that Robin discussed repeatedly.
I don't know whether it helps toward a handy phrase, but another thing to keep in mind is that the link between the information you have and the problem you want to solve is often not obvious. You often need to play around with the information before you can figure out how to use it to solve the problem.
And the complexity of real-world problems can confuse the issue even more, so it helps to first simplify or generalize the problem, so you can see what its core actually is.
Next we come to what I’ll call the epistemic-skeptical anti-intellectual. His complaint is that intellectuals are too prone to overestimate their own cleverness and attempt to commit society to vast utopian schemes that invariably end badly. Where the traditionalist decries intellectuals’ corrosion of the organic social fabric, the epistemic skeptic is more likely to be exercised by disruption of the signals that mediate voluntary economic exchanges. This position is often associated with Friedrich Hayek; one of its more notable exponents in the U.S. is Thomas Sowell, who has written critically about the role of intellectuals in society.
From Eric Raymond
I think what Harry says here is, or at least ought to be, a kind of shorthand for a closely related and much stronger argument.
It isn't just that brain damage can take away your mental abilities. It's that particular kinds of brain damage can take away particular mental abilities, and there's a consistent correlation between the damage to the brain and the damage to the mind.
Suppose I show you a box, and you talk to it and it talks back. You might indeed hypothesize that what's in the box is a radio, and there's a person somewhere else with whom you're communicating. But now suppose that you open up the box and remove one electronic component, and the person "at the other end" still talks to you but can no longer remember the names of any vegetables. Then you remove another component, and now they t-t-talk w-with a t-t-t-terrible st-st-st-stutter and keep pausing oddly in the middle of sentences. Another, and they punctuate all their sentences with pointless outbursts of profanity.
And I have some more of these boxes, and it turns out that they all respond in similar ways to similar kinds of damage.
How much of this does it take before you regard this as very, very powerful evidence that the mind you're talking with is implemented by the electronics in the box?
It's that particular kinds of brain damage can take away particular mental abilities, and there's a consistent correlation between the damage to the brain and the damage to the mind.
And particular damage to a radio receiver distorts the received signal in particular ways. So that argument isn't much help.
I don't know if the idea works in general, but if it works as described, I think it would still be useful even if it doesn't meet this objection. I don't foresee any authentication system that can distinguish between "user wants money" and "user has been blackmailed to say they want money as convincingly as possible and not to trigger any hidden panic buttons." But even so, a password you can't tell someone would still be more secure because:
- you're not vulnerable to people ringing you up and asking for your password for a "security audit," unless they can persuade you to log on to the system for them
- you're not vulnerable to being kidnapped and coerced remotely; you have to be coerced wherever the log-on system is
I think the "stress detector" idea is unlikely to work unless someone specifically works on telling the difference between "hurried" and "coerced," but I don't think the system is useless just because it doesn't solve every problem at once.
OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.
OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.
Indeed, for a recent, real world example, the improvement in systems to make cars harder to steal led directly to the rise of carjacking in the 1990s.
Except for possible disutility to family and friends, oblivion has a lot to recommend it; not least that you won't be around to regret it afterward.
I judge that a disadvantage.
If you read the second sentence, I do too; it's just a very weak disadvantage when compared to almost any suffering. If I didn't consider it at least somewhat disadvantageous, I wouldn't be around now to write about it.
The great thing is that it's easily testable.
I don't know - how do you propose to test it? This is explicitly for measuring things where people have incentive to lie about their own behavior, so the only way I can see that this could be tested would be if we could then somehow independently derive the "true" answer and check it against our prediction based on the answer given. (To be useful, we'd also need to perform this test on the "control", where you ask people whatever question you have outright.)
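The original comments don't name the questioning mechanism, but one well-known indirect-questioning scheme, the classic randomized-response design (my substitution, not something from the thread), shows how such a test could work in simulation: generate a population with a known true rate of the sensitive behavior, collect answers through the mechanism, and check the recovered estimate against the ground truth you built in. A minimal sketch, assuming a fair-coin design where tails forces a "yes":

```python
import random

def true_population(n, prevalence, seed=0):
    # Ground truth: each person either does or doesn't have the sensitive trait.
    rng = random.Random(seed)
    return [rng.random() < prevalence for _ in range(n)]

def randomized_response(truths, seed=1):
    # Each respondent privately flips a coin:
    # heads -> answer truthfully; tails -> forced "yes" (gives deniability,
    # so a "yes" answer no longer reveals the respondent's true status).
    rng = random.Random(seed)
    return [t if rng.random() < 0.5 else True for t in truths]

def estimate_prevalence(answers):
    # P(yes) = 0.5 * p + 0.5, so p = 2 * (yes_rate - 0.5).
    yes_rate = sum(answers) / len(answers)
    return 2 * (yes_rate - 0.5)

truths = true_population(100_000, prevalence=0.3)
answers = randomized_response(truths)
print(estimate_prevalence(answers), sum(truths) / len(truths))
```

This is exactly the "independently derive the true answer and check it against the prediction" test the comment asks for, just with the ground truth simulated rather than observed; the control condition would be asking the question outright and seeing how far deliberate lying pulls the naive estimate from the built-in rate.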
"Anything is easy if you're not the one that has to do it." Claiming something is easy, without giving an actual means of doing it, is a cheap rhetorical trick, one of the "dark arts".
That is a weakness in your argument. Either you can survive without utilons, contradicting utility theory, or you wait until your "pre-existing" utilons are used up and you need more to survive.