Philosophers and seeking answers

25 Yvain 04 February 2011 10:51PM

This thread has produced some interesting commentary around whether philosophers actually want to answer their own questions, or whether they enjoy sounding profound by debating big questions but don't want to lose that opportunity for profundity by finding single correct answers to them.

I don't quite disagree with the latter theory: the main reason I quit academic philosophy was exasperation that people were still debating questions where the right answer seemed obvious to me (like theism vs. atheism, or whether there was a universally compelling morality/aesthetics of pure reason), and worry that my philosophical career would involve continuing to debate these issues ad nauseam rather than helping to solve them and move on to the next problem.

But when I explained this to a particularly sarcastic friend, he summarized it as "So you think philosophy is useless because not everyone agrees with you?"

The problem isn't that philosophers never come up with solutions. The problem is that they come up with too many different solutions.

Science has solved many scientific problems, and anyone wondering what the solution is can look it up in a book or on Wikipedia. Philosophers have also solved many philosophical problems, but the philosophical literature is so full of distractions and false solutions that anyone wondering which proposed solution is correct will have to become nearly as good a philosopher as the person who solved it in the first place. It's much easier for science to settle its disputes via experiment than for philosophy to settle its disputes via debate.

I am wary of criticizing the discipline of philosophy simply on the grounds that not everyone in it agrees with me. But I also don't want to let it off the hook and say it's okay that philosophers have managed to go so long without coming to any answers, when it seems to me that settling at least some of the easier problems is not that difficult.

How do we tell the difference between a discipline that doesn't really seek answers and a discipline which honestly seeks answers but just can't agree within itself? And how can philosophy do something about its level of internal disagreement without having to apply the "kick out everyone who disagrees with Less Wrong" solution?

Looking for information on scoring calibration

8 Yvain 29 January 2011 10:24PM

There are lots of scoring rules for probability assessments. Log scoring is popular here, and squared error also works.

But if I understand these correctly, they are combined measurements of both domain-ability and calibration. For example, if several people took a test on which they had to estimate their confidence in their answers to certain true or false questions about history, then well-calibrated people would have a low squared error, but so would people who know a lot about history.

So (I think) someone who always said 70% confidence and got 70% of the questions right would get a better score than someone who always said 60% confidence and got 60% of the questions right, even though they are both equally well calibrated.
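That claim can be checked numerically. A minimal sketch with made-up test data, assuming "squared error" means the Brier score, where lower is better:

```python
# Brier (squared-error) score: mean squared gap between stated
# confidence and the 0/1 outcome. Lower is better.
def brier(confidences, outcomes):
    return sum((p - o) ** 2 for p, o in zip(confidences, outcomes)) / len(outcomes)

# Two perfectly calibrated forecasters on a ten-question true/false test:
# one always says 70% and answers 7 correctly (1 = right, 0 = wrong),
# the other always says 60% and answers 6 correctly.
score_70 = brier([0.7] * 10, [1] * 7 + [0] * 3)  # 0.7*0.3^2 + 0.3*0.7^2 = 0.21
score_60 = brier([0.6] * 10, [1] * 6 + [0] * 4)  # 0.6*0.4^2 + 0.4*0.6^2 = 0.24
```

Both forecasters are perfectly calibrated, yet the 70% forecaster's 0.21 beats the 60% forecaster's 0.24 — the score rewards domain knowledge as well as calibration.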

The only pure calibration estimates I've ever seen are calibration curves in the form of a set of ordered pairs, or those limited to a specific point on the curve (e.g. "if ey says ey's 90% sure, ey's only right 60% of the time"). There should be a way to take the area under (or over) the curve to get a single value representing total calibration, but I'm not familiar with the method or whether it's been done before. Is there an accepted way to get single-number calibration scores separate from domain knowledge?
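One standard single number of this kind is the reliability term of the Murphy decomposition of the Brier score: a frequency-weighted squared distance between the calibration curve and the diagonal, which is zero for perfect calibration regardless of domain knowledge (knowledge shows up in the separate resolution term). A minimal sketch — the ten-bin grouping is an illustrative convention, not the only choice:

```python
# Reliability (calibration) term of the Murphy decomposition:
# group forecasts into bins by stated confidence, then take the
# frequency-weighted squared gap between each bin's mean confidence
# and its observed accuracy. 0 = perfectly calibrated.
from collections import defaultdict

def reliability(confidences, outcomes, n_bins=10):
    bins = defaultdict(list)
    for p, o in zip(confidences, outcomes):
        # bin index by stated confidence; clamp p = 1.0 into the top bin
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))
    n = len(confidences)
    total = 0.0
    for items in bins.values():
        mean_p = sum(p for p, _ in items) / len(items)
        acc = sum(o for _, o in items) / len(items)
        total += len(items) / n * (mean_p - acc) ** 2
    return total
```

Someone who always says 70% and is right 70% of the time scores 0 here, exactly like someone who always says 60% and is right 60% of the time; someone who says 90% but is right only half the time scores (0.9 − 0.5)² = 0.16.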

Techniques for probability estimates

58 Yvain 04 January 2011 11:38PM

Utility maximization often requires determining a probability of a particular statement being true. But humans are not utility maximizers and often refuse to give precise numerical probabilities. Nevertheless, their actions reflect a "hidden" probability. For example, even someone who refused to give a precise probability for Barack Obama's re-election would probably jump at the chance to take a bet in which ey lost $5 if Obama wasn't re-elected but won $5 million if he was; such decisions demand that the decider covertly be working off of at least a vague probability.

When untrained people try to translate vague feelings like "It seems Obama will probably be re-elected" into a precise numerical probability, they commonly fall into certain traps and pitfalls that make their probability estimates inaccurate. Calling a probability estimate "inaccurate" causes philosophical problems, but these problems can be resolved by remembering that probability is "subjectively objective" - that although a mind "hosts" a probability estimate, that mind does not arbitrarily determine the estimate, but rather calculates it according to mathematical laws from available evidence. These calculations require too much computational power to use outside the simplest hypothetical examples, but they provide a standard by which to judge real probability estimates. They also suggest tests by which one can judge probabilities as well-calibrated or poorly-calibrated: for example, a person who constantly assigns 90% confidence to eir guesses but only guesses the right answer half the time is poorly calibrated. So calling a probability estimate "accurate" or "inaccurate" has a real philosophical grounding.

There exist several techniques that help people translate vague feelings of probability into more accurate numerical estimates. Most of them translate probabilities from forms without immediate consequences (which the brain supposedly processes for signaling purposes) to forms with immediate consequences (which the brain supposedly processes while focusing on those consequences).


Efficient Charity: Do Unto Others...

130 Yvain 24 December 2010 09:26PM

This was originally posted as part of the efficient charity contest back in November. Thanks to Roko, multifoliaterose, Louie, jmmcd, jsalvatier, and others I forget for help, corrections, encouragement, and bothering me until I finally remembered to post this here.

Imagine you are setting out on a dangerous expedition through the Arctic on a limited budget. The grizzled old prospector at the general store shakes his head sadly: you can't afford everything you need; you'll just have to purchase the bare essentials and hope you get lucky. But what is essential? Should you buy the warmest parka, if it means you can't afford a sleeping bag? Should you bring an extra week's food, just in case, even if it means going without a rifle? Or can you buy the rifle, leave the food, and hunt for your dinner?

And how about the field guide to Arctic flowers? You like flowers, and you'd hate to feel like you're failing to appreciate the harsh yet delicate environment around you. And a digital camera, of course - if you make it back alive, you'll have to put the Arctic expedition pics up on Facebook. And a hand-crafted scarf with authentic Inuit tribal patterns woven from organic fibres! Wicked!

...but of course buying any of those items would be insane. The problem is what economists call opportunity costs: buying one thing costs money that could be used to buy others. A hand-crafted designer scarf might have some value in the Arctic, but it would cost so much it would prevent you from buying much more important things. And when your life is on the line, things like impressing your friends and buying organic pale in comparison. You have one goal - staying alive - and your only problem is how to distribute your resources to keep your chances as high as possible. These sorts of economics concepts are natural enough when faced with a journey through the freezing tundra.


Confidence levels inside and outside an argument

129 Yvain 16 December 2010 03:06AM

Related to: Infinite Certainty

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in "But that still leaves a one in a billion chance, right?". The majority of the probability is in "That argument is flawed". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.


More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
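The adjustment described here can be written as a one-line mixture. A minimal sketch, assuming (as an illustration, not something the post specifies) that a flawed model is completely uninformative, so the event falls back to a 50/50 prior in that case:

```python
# External confidence as a mixture: with probability p_model_sound the
# model's internal figure stands; otherwise we fall back to a prior,
# here a hypothetical 50/50.
def external_confidence(p_internal, p_model_sound, prior_if_flawed=0.5):
    return p_model_sound * p_internal + (1 - p_model_sound) * prior_if_flawed

# Even granting the model 99.9% trust, an internal figure of
# 999,999,999 in a billion collapses to roughly 0.9995 externally.
p_ext = external_confidence(999_999_999 / 1e9, 0.999)
```

The external number is dominated by the chance the model is flawed, not by the model's own one-in-a-billion remainder.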


Spring 1912: A New Heaven And A New Earth

18 Yvain 13 November 2010 05:11PM

And so it came to pass that on Christmas Day 1911, the three Great Powers of Europe signed a treaty to divide the continent between them peacefully, ending what future historians would call the Great War.

The sun truly never sets on King Jack's British Empire, which stretches from Spain to Stockholm, from Casablanca to Copenhagen, from the fringes of the Sahara to the coast of the Arctic Ocean. They rule fourteen major world capitals, and innumerable smaller towns and cities, the greatest power of the age and the unquestioned master of Western Europe.

From the steppes of Siberia to the minarets of Istanbul, the Ottoman Empire is no longer the Sick Man of Europe but stands healthy and renewed, a colossus every bit the equal of the Christian powers to its west. Its Sultan calls himself the Caliph, for the entire Islamic world basks in his glory, and his Grand Vizier has been rewarded with a reputation as one of the most brilliant and devious politicians of the age. At his feet grovel representatives of twelve great cities, and even far-flung Tunis has not escaped his sway.

And in between, the Austro-Hungarian Empire straddles the Alps and ancient Italy. Its lack of natural borders presented no difficulty for its wily Emperor, who successfully staved off the surrounding powers and played his enemies off against one another while building alliances that stood the test of time. Eight great cities pay homage to his double-crown, and he is what his predecessors could only dream of being - a true Holy Roman Emperor.

And hidden beneath the tricolor map every student learns in grammar school are echoes of subtler hues. In Germany, people still talk of the mighty Kajser Sotala I, who conquered the ancient French enemy and extended German rule all the way to the Mediterranean, and they still seethe and curse at his dastardly betrayal by his English friends. In Russia, Princess Anastasia claims to be the daughter of Czar Perplexed, and recounts to everyone who will listen the story of her stoic father, who remained brave until the very end; at her side travels a strange bearded man who many say looks like Rasputin, the Czar's long-missing adviser. The French remember President Andreassen, who held off the combined armies of England and Germany for half a decade, and many still go on pilgrimage to Liverpool, the site of their last great victory. And in Italy, Duke Carinthium has gone down in history beside Tiberius and Cesare Borgia as one of their land's most colorful and fascinating leaders.

And the priests say that the same moment the peace treaty was signed, the blood changed back to water, and the famines ended, and rain fell in the lands parched by drought. Charles Taze Russell, who had been locked in his room awaiting the Apocalypse, suddenly ran forth into the midwinter sun, shouting "Our doom has been lifted! God has granted us a second chance!" And the mysterious rectangular wall of force separating Europe from the rest of the world blinked out of existence.

Pope Franz I, the new Austrian-supported Pontiff in Rome, declares a month of thanksgiving and celebration. For, he says, God has tested the Europeans for their warlike ways, isolating them from the rest of the earth lest their sprawling empires plunge the entire planet into a world war that might kill millions. Now, the nobility of Europe finally realizing the value of peace, the curse has been lifted, and the empires of Europe can once more interact upon the world stage.

Chastened by their brush with doom, and humbled by the lesson they had been given, the powers of Europe send missionaries through the dimensional portal, to convince other worlds to abandon their warlike ways and seek universal brotherhood. And so history ends, with three great powers living together side by side and striving together for a better future and a positive singularity.

...

On to the more practical parts. If you think you've learned lessons from this game worth telling the rest of Less Wrong, send them to either myself or Jack. I say either of us because Jack had the most supply centers and therefore deserves some karma, which he could most easily get by posting the thread that the other two winners then comment on; or, if you insist that a three-way tie means a three-way tie, I'll post the thread and the three winners can all comment and get upvoted. We'll talk about it in the comments.

Thanks to everyone who played in this game. I was very impressed - it's one of the rare games I have moderated that hasn't been ruined by people constantly forgetting to send orders, or people ragequitting when things don't go their way, or people being totally incompetent and throwing the game to the first person to declare war on them, or any of the other ways a Diplomacy game can go wrong. Everyone fought hard and well and honorably (for definitions of honor compatible with playing Diplomacy). It was a pleasure to serve as your General Secretary.


All previous posts and maps from this game are archived. See this comment for an explanation of how to access the archives.

Diplomacy as a Game Theory Laboratory

44 Yvain 12 November 2010 10:19PM

Game theory. You've studied the posts, you've laughed at the comics, you've heard the music¹. But the best way to make it Truly Part Of You is to play a genuine game, and I have yet to find any more effective than Diplomacy.


Diplomacy is a board game for seven people played on a map of WWI Europe. The goal is to capture as many strategic provinces ("supply centers") as possible; eighteen are needed to win. But each player's country starts off with the same sized army, and there is no luck or opportunity for especially clever tactics. The most common way to defeat an enemy is to form coalitions with other players. But your enemies will also be trying to form coalitions, and the most profitable move is often to be a "double agent", stringing both countries along as long as you can. All game moves are written in secret and revealed at the same time and there are no enforcement mechanisms, so alliances, despite their central importance, aren't always worth the paper they're printed on.


The conditions of Diplomacy - competition for scarce resources, rational self-interested actors, importance of coalitions, lack of external enforcement mechanisms - mirror the conditions of game theoretic situations like the Prisoner's Dilemma (and the conditions of most of human evolution!) and so make a surprisingly powerful laboratory for analyzing concepts like trust, friendship, government, and even religion.


Over the past few months, I've played two online games of Diplomacy. One I won through a particularly interesting method; the other I lost quite badly, but with an unusual consolation. This post is based on notes I took during the games about relevant game theoretic situations. You don't need to know the rules of Diplomacy to understand the post, but if you want a look you can find them here.



V is for Value Maximizing Agent: London, November 5

5 Yvain 28 October 2010 06:53PM

During the last London meetup, which I conveniently scheduled during Easter, I promised that next time I'd see if I could make it to London on a day that wasn't a national holiday.

The time has come to break that promise, so I will be in London for a day on Friday November 5th. If anyone wants to meet up, I'll be around that evening at 8 or so to discuss rationality-related issues, chat, or orchestrate a terrorist campaign to overthrow the government while wearing nifty masks. We can try the top floor of that same Waterstone's in Piccadilly Circus, and relocate to Starbucks if it doesn't work out. Does that work for anybody?

Re: sub-reddits

7 Yvain 17 October 2010 01:34PM

A while back, I polled the community on the possibility of subreddits. Most people said they wanted them, and I said I'd investigate.

I talked to a couple of people and eventually ended up talking to Tricycle, the developers of this site. They told me about their own proposed solution to the community organization problem, which is this new Discussion section. They said that searching the Discussion section by tag was equivalent to a sub-reddit. For example, if you want a sub-reddit on consciousness, the discussion consciousness tag search is an amazing imitation.

I told them I wasn't entirely convinced by this and sent some reasons why, but I haven't heard back from them lately, and I'm not going to keep pursuing this and make a big deal of it unless a large percentage of the people who wanted sub-reddits are unsatisfied.

Are mass hallucinations a real thing?

14 Yvain 03 October 2010 08:15PM

One of the explanations offered in the irrationality game thread for UFOs and other paranormal events seen by multiple people at once was mass hysteria. This is also a common explanation given for any seemingly paranormal event that multiple people have independently witnessed.

But mass hysteria is mostly known from incidents where people hysterically believe they have some disease, or hold some hysterical delusion (false belief). In cases where people report seeing something or having a hallucination, it tends to be a few people across a large society. For example, when reports of Spring-Heeled Jack were going around England, multiple people claimed to have seen Spring-Heeled Jack, but there were no cases of hundreds of people seeing him simultaneously; so the hysteria could have selected for people who were already a little bit crazy, or it could just have been that out of millions of English people a few were willing to say anything to get attention.

Conformity pressures can cause people to misinterpret borderline perceptions - for example, if someone says a random pattern of dots form Jesus' face, I have no trouble believing that, thus primed, people will be able to find Jesus' face in the dots. But it's a much bigger leap to assert that if I say "Jesus is standing right there in front of you" with enough conviction, you'll suddenly see him too.

Does anyone have any evidence that mass hysteria can produce a vivid hallucination shared among multiple otherwise-sane people?
