Unfortunately, in practice, being as knowledgeable about the details of a particular scenario as an expert does not imply that you will process the facts as correctly as the expert. For instance, an expert and I may both know all of the facts of a murder case, but (if expertise means anything) they are still more likely to make correct judgments about what actually happened, owing to their prior experience. If I actually had their prior experience, it's true that their authority would mean a lot less, but in that case I would be closer to being an expert myself.

To give another example, a mathematically inclined high school student may see a mathematical proof, with each step laid out before them in detail. The high school student may have the opportunity to analyze every step to look for potential problems in the proof and see none. Then, a mathematician may come along, glance over the proof, and say that it is invalid. Who are you going to believe?

In some cases, we are the high school student. We can stare at all the raw facts (the details of the proof), they all make sense to us, and we feel very strongly that we can draw a certain inference from them. And yet we are unaware of what we don't know that the expert does know. Or the expert is simply better at reasoning about these kinds of problems, or at avoiding logical traps that sound valid but are not.

Of course, the more you know about the expert's arguments, the less their authority counts. But sometimes the expertise lies precisely in the ability to correctly process the type of facts at hand. If a mathematician's argument about the invalidity of step 3 does not seem convincing to you, and your argument about why step 3 is valid seems totally convincing, you should still at least hesitate before concluding that you are correct.

A point about using diagrams to make arguments: If you are attempting to convince a person that something is true, rather than just launching into your evidence and favorite arguments, it is often most efficient to begin by asking a series of questions to determine precisely how the person disagrees with you. The questioning allows you to home in on the most important sticking points that prevent the other party from coming to your conclusion. These points can then be attacked individually, preventing you from wasting time making arguments that the other party already agrees with or refuting positions that he or she has never even considered. The reason this relates to diagrams is that this method of argumentation can be viewed as a tree, with questions at the higher-level branches and arguments at the leaf nodes (sketched below).
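
To make the tree picture concrete, here is a minimal sketch in Python. The names (Node, find_sticking_point) and the example questions are my own, purely illustrative assumptions; the point is only that internal nodes hold the questions that locate the disagreement, and leaves hold the arguments you actually end up making.

```python
class Node:
    """Internal nodes carry a question; leaf nodes carry an argument."""
    def __init__(self, question=None, branches=None, argument=None):
        self.question = question        # asked at internal nodes
        self.branches = branches or {}  # maps an answer to a child Node
        self.argument = argument        # presented once a leaf is reached

def find_sticking_point(node, ask):
    """Walk down the tree by asking questions until an argument (leaf) is reached."""
    while node.argument is None:
        node = node.branches[ask(node.question)]
    return node.argument

# Illustrative tree: only argue the point the other person actually disputes.
tree = Node(
    question="Do we agree on the basic facts?",
    branches={
        "no": Node(argument="Present the evidence for the disputed facts."),
        "yes": Node(
            question="Do we agree on how to weigh those facts?",
            branches={
                "no": Node(argument="Argue for the weighting criteria."),
                "yes": Node(argument="Walk through the inference step by step."),
            },
        ),
    },
)

# Simulated conversation in which the other party answers "yes" to everything.
print(find_sticking_point(tree, ask=lambda q: "yes"))
```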

"In my experience, the most staunchly held views are based on ignorance or accepted dogma, not carefully considered accumulations of facts. The more you expose the intricacies and realities of the situation, the less clear-cut things become."

Mary Roach - from her book Spook

A side note: The only reason that prime numbers are defined so as to exclude 1 and negative numbers is that mathematicians found this way of defining them a bit more useful than the alternative possibilities. Mathematicians generally want important theorems to be stated as simply as possible, and the theorems about primes are generally simpler if we exclude 1. There is a more detailed analysis of this question here:

http://www.askamathematician.com/?p=1269
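
As one concrete illustration of the "simpler theorems" point (this is the standard unique-factorization argument, stated here from memory rather than quoted from the linked page): if 1 counted as a prime, the Fundamental Theorem of Arithmetic, which says every integer greater than 1 has exactly one prime factorization up to ordering, would need an extra clause, because factors of 1 could be inserted at will. A small Python sketch:

```python
def prime_factorization(n):
    """Return the unique list of prime factors of an integer n > 1 (1 excluded by definition)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# 6 -> [2, 3], and that list is unique.  If 1 were "prime", then [1, 2, 3],
# [1, 1, 2, 3], ... would all be prime factorizations of 6 as well, and the
# uniqueness statement would have to be patched to ignore factors of 1.
print(prime_factorization(6))
print(prime_factorization(360))  # [2, 2, 2, 3, 3, 5]
```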

Anyone who claims to be rational in all areas of their life is speaking with irrational self-confidence. The human brain was not designed to make optimal predictions from data, or to carry out flawless deductions, or to properly update priors when new information becomes available. The human brain evolved because it helped our ancestors spread their genes in the world that existed millions of years ago, and when we encounter situations too different from those we were built to survive in, our brains sometimes fail us. There are simple optical illusions, simple probability problems, and simple logic puzzles that trip up nearly everyone.

Matters are even worse than this, though, because the logical systems in our brain and the emotional ones can (and often do) come to differing conclusions. For example, people suffering from a phobia of spiders know perfectly well that a realistic plastic spider cannot hurt them, and yet it will likely terrify them, and may even send them running for their lives. Similarly, some theists have come to the conclusion that they logically have no reason to believe in a god, and yet the emotional part of the brain still fills them with the feeling of belief. I personally know one unusually rational person who admits to being just like this. I have even discussed with her ways in which she might try to bring her emotions in line with her reasoning.

So does one irrational belief discredit someone from being a rationalist? Not at all. We all have irrational beliefs. Perhaps a more reasonable definition of a rationalist would be someone who actively seeks out their irrationalities and attempts to eradicate them. But identifying our own irrationalities is difficult, admitting to ourselves that we have them is difficult (for rationalists, anyway), removing them is difficult, and overcoming the emotional attachment we have to them is sometimes the most difficult part of all.

Some further suggestions for handling hard questions, gleaned from work done in mathematics:

  1. Hard questions can often be decomposed into a number of smaller, not-quite-as-hard (or perhaps even easy) questions whose answers can be strung together to answer the original question. So a good first step is often to try to decompose the original question in various ways.

  2. Try to find a connection between the hard question and ones that people already know how to answer. Then see if you can figure out what it would take to bridge the gap between the hard question and what has been answered. For example, if the hard question you are trying to answer relates to human consciousness, perhaps a (not entirely ridiculous) approach would be to first examine questions that researchers have already made headway on, like the neural correlates of consciousness, and then focus on how one could go from a theory of correlates to a theory of consciousness (maybe this is impossible, but then again maybe it is not). This sort of approach can be a lot faster than solving a problem from scratch, both because it can keep you from reinventing the wheel, and because linking a problem to ones that are already solved is sometimes a lot easier than solving those problems in the first place.

  3. Don't become attached to your first ideas. If you've had some great ideas that have gotten you close to solving a hard problem, but after a lot of work you still aren't where you want to be, don't get stuck forever in what could be a dead end. From time to time, try to refresh your perspective by starting over from scratch. Often people find it painful starting over again, or are so excited by their first promising ideas that they don't want to let them go, but when a problem is truly hard you may well need to restart the problem again and again before hitting on an approach that really will work. This is a bit like reseeding a random number generator.

  4. Discuss the problem with other very smart people (even if they are not experts in precisely what you are doing) and listen closely to what they have to say. You never know when someone will say something that will trigger a great idea, and the process of explaining what you are working on can cause you to gain a new understanding of the subject or, at least, force you to clarify your thinking.

I believe that the analysis of this problem can be made more mathematically rigorous than is done in this post. Not only will a formal analysis help us avoid problems in our reasoning, but it will clearly illustrate what assumptions have been made (so that we can question their legitimacy).

Let's assume (as is done implicitly in the post) that you know with 100% certainty that the only two possible payouts are $1 million and $0. Then:

expected earnings
= p($1 million payout) * $1 million + p($0 payout) * $0 - (ticket price)
= p($1 million payout) * $1 million - (ticket price)
= p($1 million payout|correctly computed odds) * p(correctly computed odds) * $1 million
    + p($1 million payout|incorrectly computed odds) * p(incorrectly computed odds) * $1 million
    - (ticket price)
= (1/40,000,000) * p(correctly computed odds) * $1 million
    + p($1 million payout|incorrectly computed odds) * (1 - p(correctly computed odds)) * $1 million
    - (ticket price)

We note now that we can write:

p($1 million payout|incorrectly computed odds) * (1 - p(correctly computed odds)) * $1 million
= p($1 million payout|incorrectly computed odds) * $1 million * (1 - p(correctly computed odds))
= (p($1 million payout|incorrectly computed odds) * $1 million + p($0 payout|incorrectly computed odds) * $0) * (1 - p(correctly computed odds))
= (expected payout given incorrectly computed odds) * (1 - p(correctly computed odds))

Hence, our resulting equation is:

expected earnings
= (1/40,000,000) * p(correctly computed odds) * $1 million
    + (expected payout given incorrectly computed odds) * (1 - p(correctly computed odds))
    - (ticket price)

Now, under the fairly reasonable (but not quite true) assumption (which seems to be implicitly made by the author) that

(expected payout given incorrectly computed odds) = (expected payout given that we know nothing except that we are dealing with a lotto that costs (ticket price) to play)

we can convert to the notation of the article, which gives us:

E(L) = p(C) * p(L) * j + (1 - p(C)) * (e + t) - t

Here I have interpreted e as the expected value of playing a lotto that we know nothing else about (i.e., net of the ticket price), rather than the expected payout under those circumstances. The author describes e as an "expected payoff", but I don't think that is quite what was meant (unless "payoff" refers to the total net payoff, with the ticket price already accounted for).

We can now rearrange this formula:

E(L) = p(C) * p(L) * j + (1 - p(C)) * e + (1 - p(C)) * t - t
     = p(C) * p(L) * j + (1 - p(C)) * e - p(C) * t
     = p(C) * (p(L) * j - t) + (1 - p(C)) * e

which finally gets us to the author's terminal formula.
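
To make the formula concrete, here is a minimal numerical sketch. Only the 1-in-40,000,000 odds and the $1 million jackpot come from the discussion above; the ticket price, the value of e, and the values of p(C) are made-up numbers for illustration.

```python
def expected_earnings(p_C, p_L, j, t, e):
    """E(L) = p(C) * (p(L) * j - t) + (1 - p(C)) * e."""
    return p_C * (p_L * j - t) + (1 - p_C) * e

p_L = 1 / 40_000_000   # computed odds of the $1 million payout
j = 1_000_000          # jackpot
t = 1.0                # ticket price (assumed)
e = -0.50              # assumed expected value of a lotto we know nothing about

# Even small doubts about our own calculation pull E(L) toward e.
for p_C in (1.0, 0.999, 0.99, 0.9):
    print(f"p(C) = {p_C}: E(L) = {expected_earnings(p_C, p_L, j, t, e):.4f}")
```

The point is simply that as p(C) falls, E(L) moves away from the naive p(L) * j - t calculation and toward e.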

What is the point of doing this careful, formal analysis? Well, we now see explicitly where the author's formula comes from, it has been derived rigorously, and we are fully aware of what assumptions were made. The assumptions are:

  1. You know with 100% certainty that the only two possible payouts are $1 million and $0

and

  2. expected payout given incorrectly computed odds = expected payout given that we know nothing except that we are dealing with a lotto that costs the given ticket price to play

The first assumption is reasonable, assuming that the lotto is not fraudulent, that you don't have problems reading the rules, that it is not possible for multiple people to claim the payout, and so on.

The second assumption, however, is harder to justify. There are many ways that a calculation of odds could go wrong (putting a decimal point in the wrong place, making a multiplication error, unknowingly misunderstanding the laws of probability, actually being insane, etc.). If we could really enumerate all of them, understand how they affect our computed payout probability, and estimate the probability of each occurring, then we could compute this missing factor exactly. As things stand, though, that is probably untenable. Nor should we expect errors that make the computed payout probability artificially larger to balance those that make it artificially smaller. Misplacing a decimal point, for example, will almost certainly be noticed if it leads to a probability greater than 100%, but not if it leads to one that is less than that, creating an asymmetry.

I would like to add another reason why we might perceive high-status individuals as being less intelligent (or talented) than they originally seemed: reversion to the mean. Often, a person gains high status (or at least meaningfully begins the climb to high status) as a result of one exceptional act, creation, or work. If our average skill level is X, we may often produce works that reflect skill close to X, but occasionally produce works that reflect much greater or much less skill than X (due to natural variability in our performance). We are much more likely to be recognized (gain high status) when we happen to produce something far above our own skill level than when we create something right near it, simply because works above our skill level are of higher quality than most of what we create, and quality productions are more likely to be recognized. Hence, many famous people's works that got them noticed (whether a novel, movie, essay, business deal, or what have you) may actually be better than would be expected from their average skill level, and their future work will seem less good by comparison.

This same effect might partly explain why highly anticipated movie sequels are in many cases not as good as the originals. The creators of the original may well have produced a work significantly above their average skill level (which made the movie more likely to become famous in the first place because so much skill was required), whereas the sequel will likely be closer to their true level!

For more about the statistical effect, google "Reversion to the Mean".
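
Here is a minimal simulation of the story above (all the numbers are made up for illustration): creators have a fixed average skill, individual works vary around it, and we only "notice" creators whose first work clears a high bar. The noticed first works then look far better than the follow-ups, even though nobody's underlying skill changed.

```python
import random

random.seed(0)
noticed_first, noticed_second = [], []
for _ in range(100_000):
    skill = random.gauss(0, 1)               # creator's average skill level
    first = skill + random.gauss(0, 1)       # quality of the work that might get noticed
    if first > 2.0:                          # only exceptional first works get noticed
        second = skill + random.gauss(0, 1)  # quality of the follow-up ("the sequel")
        noticed_first.append(first)
        noticed_second.append(second)

avg = lambda xs: sum(xs) / len(xs)
print("average quality of noticed first works:", avg(noticed_first))  # roughly 2.6
print("average quality of their follow-ups:  ", avg(noticed_second))  # roughly 1.3
```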

Perhaps it is true that our modest technology for altering brain states (simple wireheading, recreational drugs, magnetic stimulation, etc.) leads only to stimulation of the "wanting" centers of the brain and to simple (though at times intense) pleasurable sensations. On the other hand though, it seems almost inevitable that as the secrets of the brain are progressively unlocked, and as our ability to manipulate the brain grows, it will be possible to generate all sorts of brain states, including those "higher" ones associated with love, accomplishment, fulfillment, joy, religious experiences, insight, bliss, tranquility and so on. Hence, while your analysis appears to be quite relevant with regard to wireheading today, I am skeptical that it is likely to apply much to the brain technology that could exist 50 years from now.

Sure, many people treat technology like magic, but as it becomes an ever-increasing part of our lives, it is hard to deny that the supply of jobs in science and engineering will increase, and that the number of scientists and engineers will grow to fill them. What is more, even if most people are not curious about the technology they grow up with, that does not preclude the possibility that increased technology correlates with increased interest in science. All it would take is one in ten, or even one in twenty, people being influenced by the technology they use.
