Making Reasoning Obviously Locally Correct
x = y
x^2 = x*y
x^2 - y^2 = x*y - y^2
(x+y)(x-y) = y(x-y)
x+y = y
y+y = y
2*y = y
2 = 1
The above is an incorrect "proof" that 2 = 1. Even for those who know where the flaw is, it might seem reasonable to react to the existence of this "proof" by distrusting mathematical reasoning, which might contain such flaws that lead to erroneous results. But done properly, mathematical reasoning does not look like this "proof". It is more explicit, making each step so obviously correct that an incorrect step cannot meet the standard. Let's take a look at what would happen when attempting to present this "proof" while holding to that virtue:
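Attempting that, with each step carrying its explicit justification, might look like this (a sketch):

```latex
\begin{align*}
x &= y                 && \text{premise} \\
x^2 &= xy              && \text{multiply both sides by } x \\
x^2 - y^2 &= xy - y^2  && \text{subtract } y^2 \text{ from both sides} \\
(x+y)(x-y) &= y(x-y)   && \text{factor each side} \\
x+y &= y               && \text{divide both sides by } x - y \text{, valid only if } x - y \neq 0
\end{align*}
```

The division step must state its side condition, and the premise x = y gives x − y = 0, so the justification fails on its face: the "proof" cannot proceed past this step while remaining obviously locally correct.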
Bayesian Collaborative Filtering
I present an algorithm I designed to predict which position a person would report for an issue on TakeOnIt, through Bayesian updates on the evidence of other people's positions on that issue. Additionally, I will point out some potential areas of improvement, in the hopes of inspiring others here to expand on this method.
For those not familiar with TakeOnIt, the basic idea is that there are issues, represented by yes/no questions, on which people can take the positions Agree (A), Mostly Agree (MA), Neutral (N), Mostly Disagree (MD), or Disagree (D). (There are two types of people tracked by TakeOnIt: users who register their own opinions, and Experts/Influencers whose opinions are derived from public quotations.)
The goal is to predict what position a person S would take on an issue, based on the positions registered by other people on that issue. To do this, we will use Bayes' Theorem to update the probability that person S takes the position X on issue I, given that person T has taken position Y on issue I:
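In symbols (a reconstruction from the surrounding text, since the original equation did not survive; writing $S_I$ for person S's position on issue I):

```latex
P(S_I = X \mid T_I = Y) \;=\; \frac{P(T_I = Y \mid S_I = X)\,P(S_I = X)}{P(T_I = Y)}
```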
Really, we will be updating on several people T_j taking positions Y_j on I:
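Under the usual naive independence assumption (each T_j's position is conditionally independent given S's), the posterior is proportional to P(X) multiplied by the product over j of P(Y_j | X). A minimal sketch in Python; the likelihood table here is purely illustrative, where a real implementation would estimate P(Y | X) for each pair of people from the issues they have both taken positions on:

```python
# Minimal sketch of the Bayesian position predictor described above.
# Positions: Agree (A), Mostly Agree (MA), Neutral (N),
# Mostly Disagree (MD), Disagree (D).

POSITIONS = ["A", "MA", "N", "MD", "D"]

def predict(prior, evidence):
    """Posterior over S's position on an issue, given other people's positions.

    prior    -- dict mapping position X to P(S takes X)
    evidence -- list of (likelihood, y) pairs, one per other person T_j, where
                likelihood[x][y] = P(T_j takes y | S takes x), and y is the
                position T_j actually took.
    Assumes the T_j's positions are conditionally independent given S's.
    """
    posterior = dict(prior)
    for likelihood, y in evidence:
        for x in POSITIONS:
            posterior[x] *= likelihood[x][y]
    total = sum(posterior.values())
    return {x: p / total for x, p in posterior.items()}

# Illustrative likelihood table: T tends to take the same position as S.
agreeing = {x: {y: (0.6 if x == y else 0.1) for y in POSITIONS}
            for x in POSITIONS}

uniform_prior = {x: 0.2 for x in POSITIONS}
posterior = predict(uniform_prior, [(agreeing, "A"), (agreeing, "A")])
```

Two people who tend to agree with S both taking Agree pushes the posterior sharply toward Agree, as expected.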
Maximise Expected Utility, not Expected Perception of Utility
Suppose we are building an agent, and we have a particular utility function U over states of the universe that we want the agent to optimize for. So we program into this agent a function CalculateUtility that computes the value of U given its current knowledge. Then we can program it to make decisions by searching through its available actions for the one that maximizes its expected result of running CalculateUtility. But wait: how will an agent with this programming behave?
Suppose the agent has the opportunity (option A) to arrange to falsely believe the universe is in a state worth utility u_FA, while this action really leads to a different state worth utility u_TA, and a competing opportunity (option B) to actually achieve a state of the universe that has utility u_B, with u_TA < u_B < u_FA. Then the agent will expect that if it takes option A its CalculateUtility function will return u_FA, and if it takes option B its CalculateUtility function will return u_B. Since u_FA > u_B, the agent takes option A, and achieves a state of the universe with utility u_TA, which is worse than the utility u_B it could have achieved if it had taken option B. This agent is not a very effective optimization process[1]. It would rather falsely believe that it has achieved its goals than actually achieve its goals. This sort of problem[2] is known as wireheading.
Let us back up a step, and instead program our agent to make decisions by searching through its available actions for the one whose expected results maximize its current calculation of CalculateUtility. Then the agent would calculate that option A gives it expected utility u_TA and option B gives it expected utility u_B. Since u_B > u_TA, it chooses option B and actually optimizes the universe. That is much better.
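The contrast between the two decision rules can be sketched in code. This is a toy model, not an actual implementation: the option names, utility numbers, and function names are all illustrative.

```python
# Toy model of the two decision rules above. Numbers are illustrative:
# option A really yields a state worth u_TA = 1 but makes the agent
# believe it is worth u_FA = 10; option B really yields u_B = 5.

TRUE_UTILITY = {"A": 1, "B": 5}       # agent's current model of the state
                                      # each option actually leads to
BELIEVED_UTILITY = {"A": 10, "B": 5}  # what CalculateUtility would report
                                      # *after* taking each option

def wirehead_choice(options):
    """Maximize the expected future output of CalculateUtility."""
    return max(options, key=lambda o: BELIEVED_UTILITY[o])

def sound_choice(options):
    """Maximize the current calculation of utility over expected outcomes."""
    return max(options, key=lambda o: TRUE_UTILITY[o])

options = ["A", "B"]
```

The wireheading agent ranks options by what CalculateUtility will report after acting, so it picks A; the corrected agent ranks them by its current model of the states the options actually produce, so it picks B.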
So, if you care about states of the universe, and not just your personal experience of maximizing your utility function, you should make choices that maximize your expected utility, not choices that maximize your expectation of perceived utility.
1. We might have expected this to work, because we built our agent to have beliefs that correspond to the actual state of the world.
2. A similar problem occurs if the agent has the opportunity to modify its CalculateUtility function, so it returns large values for states of the universe that would have occurred anyways (or any state of the universe).
The Monty Maul Problem
In his Coding Horror blog, Jeff Atwood writes about the Monty Hall Problem and some variants. The classic problem presents a situation in which the game show host allows a contestant to choose one of three doors, one of which opens to reveal a prize while the other two reveal goats. The host then opens one of the other doors, reliably choosing one that hides a goat, and invites the contestant to switch to the remaining unopened door. The problem is to determine the probability of winning the prize by switching versus by staying. The variants deal with cases in which the host does not reliably choose a door with a goat, but happens to do so anyway.
Jeff cites Monty Hall, Monty Fall, Monty Crawl (PDF) by Jeff Rosenthal, which explains why the variants have different probabilities in terms of the "Proportionality Principle", which the appendix acknowledges to be a special case of Bayes' Theorem.
One of Jeff's anonymous commenters presented the Monty Maul Problem:
Hypothetical Situation:
The Monty Maul problem. There are 1 million doors. You pick one, and the show's host goes on a bloodrage-fueled binge of insane violence, knocking open doors at random with no knowledge of which door has the car. He knocks open 999,998 doors, leaving your door and one unopened door. None of the opened doors contains the car.
Are your odds of winning if you switch still 50/50, as outlined by the linked Rosenthal paper? It seems counter-intuitive even for people who've wrapped their head around the original problem.
If you take as absolute the problem's statement that the host is knocking doors open at random, then yes: the fact that only goats were revealed is strong evidence that only goats were available to reveal because you picked the door with the prize. Combined with the low prior probability that you picked the door with the prize, this gives equal probability to either of the unopened doors having the prize.
However, the fact that only goats were revealed is also strong evidence that the host deliberately avoided opening the door with the prize, and therefore switching is a winning strategy. After all, the probability of this happening if the host really is choosing doors randomly is 2 in a million, but it is guaranteed if the host deliberately opened only doors with goats.
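Both claims can be checked by simulation. A sketch, using 5 doors rather than a million so that the random host reveals all goats often enough to condition on (the function names are mine):

```python
import random

def trial(n_doors, host_random, rng):
    """One game: returns (all_revealed_goats, switch_wins).

    The contestant picks door 0 by symmetry; the host opens n_doors - 2 of
    the other doors, either uniformly at random (host_random=True) or
    deliberately avoiding the car.
    """
    car = rng.randrange(n_doors)
    others = list(range(1, n_doors))
    if host_random:
        opened = rng.sample(others, n_doors - 2)
    else:
        goats = [d for d in others if d != car]
        opened = rng.sample(goats, n_doors - 2)
    all_goats = car not in opened
    remaining = next(d for d in others if d not in opened)
    return all_goats, remaining == car

def switch_win_rate(n_doors, host_random, trials, seed=0):
    """Switch-win rate, conditioned on no car having been revealed."""
    rng = random.Random(seed)
    wins = valid = 0
    for _ in range(trials):
        ok, win = trial(n_doors, host_random, rng)
        if ok:
            valid += 1
            wins += win
    return wins / valid
```

With a random host, conditioning on an all-goat reveal gives a switch-win rate near 1/2; with a deliberate host it is (n−1)/n, here 4/5.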
Note that this principle still applies in variants with fewer doors. Unless there is an actual penalty for switching doors (which could happen if the host only sometimes offers the opportunity to switch, and is more likely to do so when the contestant chooses the winning door), any uncertainty about the host choosing doors randomly implies that it is a good strategy to switch.
Catchy Fallacy Name Fallacy (and Supporting Disagreement)
Related: The Pascal's Wager Fallacy Fallacy, The Fallacy Fallacy
We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.
When you read an argument you don't like, but don't know how to attack on its merits, there is a trick you can turn to. Just say it commits[1] some fallacy, preferably one with a clever name. Others will side with you, not wanting to associate themselves with a fallacy. Don't bother to explain how the fallacy applies; just provide a link to an article about it, and let stand the implication that people should be able to figure it out from the link. It's not like anyone would want to expose their ignorance by asking for an actual explanation.
What a horrible state of affairs I have described in the last paragraph. It seems, if we follow that advice, that every fallacy we even know the name of makes us stupider. So, I present a fallacy name that I hope will exactly counterbalance the effects I described. If you are worried that you might defend an argument that has been accused of committing some fallacy, you should be equally worried that you might support an accusation that commits the Catchy Fallacy Name Fallacy. Well, now that you have that problem either way, you might as well try to figure out whether the argument did indeed commit the fallacy, by examining the actual details of the fallacy and whether they actually describe the argument.
But, what is the essence of this Catchy Fallacy Name Fallacy? The problem is not the accusation of committing a fallacy itself, but that the accusation is vague. The essence is "Don't bother to explain". The way to avoid this problem is to entangle your counterargument, whether it makes a fallacy accusation or not, with the argument you intend to refute. Your counterargument should distinguish good arguments from bad arguments, in that it specifies criteria that systematically apply to a class of bad arguments but not to good arguments. And those criteria should be matched up with details of the allegedly bad argument.
The wrong way:
It seems that you've committed the Confirmation Bias.
The right way:
The Confirmation Bias is when you find only confirming evidence because you only look for confirming evidence. You looked only for confirming evidence by asking people for stories of their success with Technique X.
Notice how the right way would seem very out of place when applied against an argument it does not fit. This is what I mean when I say the counterargument should distinguish the allegedly bad argument from good arguments.
And, if someone commits the Catchy Fallacy Name Fallacy in trying to refute your arguments, or even someone else's, call them on it. But don't just link here; you wouldn't want to commit the Catchy Fallacy Name Fallacy Fallacy. Ask them how their counterargument distinguishes the allegedly bad argument from arguments that don't have the problem.
1. Of course, when I say that an argument commits a fallacy, I really mean that the person who made that argument, in doing so, committed the fallacy.