Most Likely Cause of an Apocalypse on December 21
Note: This post is almost completely tongue-in-cheek. Obviously the chances of December 21, 2012 ushering in an apocalypse, defined loosely as an event causing billions of deaths and/or globally catastrophic infrastructure damage, are slim to none.
But they aren't actually none.
Let's say there's a 5% chance of a superhuman General AI being developed in the next 10 years and ushering in the singularity. Let's say 4/5 of those scenarios would lead to a Bad End which could reasonably be called an apocalypse (or an "AI-pocalypse", perhaps). And let's say the distribution of probability isn't uniform, but is exponentially skewed somehow, so that given the emergence of AI in the next 120 months, the chance of it happening in this very month is 1 in 100,000. Then let's divide that by 30 so we can get our apocalypse rolling on the right day.
5/100 * 4/5 * 1/100000 * 1/30 = 1 in 75,000,000
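The Fermi estimate above can be checked in a few lines. The factor names below are just labels for the post's own illustrative assumptions, not established figures:

```python
# Hypothetical Fermi estimate from the post; every factor is an
# illustrative assumption, not a measured probability.
p_agi_in_10_years = 5 / 100   # superhuman General AI within 10 years
p_bad_end = 4 / 5             # fraction of those scenarios that end badly
p_this_month = 1 / 100_000    # skewed chance it happens this very month
p_this_day = 1 / 30           # ...and on the 21st specifically

p_apocalypse = p_agi_in_10_years * p_bad_end * p_this_month * p_this_day
print(f"P(AI-pocalypse on Dec 21) = 1 in {round(1 / p_apocalypse):,}")
# → 1 in 75,000,000
```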
Admittedly, low odds. That comports with the fact that an enormous amount of unheralded development progress on the problem would already have to be underway for an intelligence explosion to be anywhere on the horizon.
But compared to some of the scenarios debunked by NASA (http://www.space.com/18678-2012-mayan-apocalypse-fears-nasa.html), such as a collision with the rogue planet Nibiru, or Earth being sucked into the supermassive black hole at the center of the Milky Way 30,000 light years away, the AI-doomsday scenario starts to seem relatively plausible.
I think the only other (relatively) plausible contenders would be the release of a pandemic-causing biological weapon, or the start of an international nuclear war. I haven't done any Fermi calculations on those, but I'm sure their probability exceeds that of solar flares scorching the surface of the Earth.
Meetup : Tucson: Fundamental Questions
Discussion article for the meetup : Tucson: Fundamental Questions
After the moderate success of our last meetup (during which much fun and conversation was had!), another one is now scheduled. The nominal topic we'll be discussing is "the fundamental question of rationality": "What do you believe, and why do you believe it?" And maybe we'll also talk about "What are you doing, and why are you doing it?" Of course, discussion will veer. Hope to see some new faces!
Meetup : Tucson, Arizona
Discussion article for the meetup : Tucson, Arizona
Hey Tucson! After the fun and inspiration of the Rationality Mini-Camp in Berkeley last month, I'm pretty jazzed about rationality, and I'd like to meet some more of my fellow LessWrongers :)

So I'll be hanging out at Coffee X Change on Wednesday the 20th from 7 to 10, and anyone who's free should come out. I'll be the one with a print-out of the HPMOR cover on my table (http://mike-obee-lay.deviantart.com/art/Harry-Potter-and-the-Methods-of-Rationality-Cover-280590525). We'll be talking about some of the information that was covered at the Mini-Camp, especially how to apply rationality to your own life and decisions; but I'm sure conversation will wander immensely.

(Oh, and if you like in-group-out-group conflicts, then just look at Phoenix, with their LW meetup on the 15th... you don't want to let them win, do you? Tucson Uber Alles!)
Bayesianism and use of Evidence in Social Deduction Games
You look around the table at four friends -- people who share your hatred for the evil empire, or so you thought. At this table, where the resistance meets to plan its missions, fully two of the five operatives are spies, infiltrating the rebels to sabotage their missions. You've seen your loyalty card, so you know you're resistance... but how do you figure out which of your so-called allies are the spies?
The Resistance, like Werewolf, Mafia, Battlestar Galactica, and other social deduction games, tasks the majority of players with rooting out the spies in their midst -- while the spies win by staying hidden. Among my friends, accusations of spyhood tend to be absolute: "Did you see how long he hesitated? He must be a spy!" Whether the suspicion is based on social cues or in-game actions, players rapidly become very sure of those beliefs they discuss at the table. They seem to divide their observations into two neat boxes, based on whether the data can decisively show someone's identity. If evidence seems convincing, it becomes concrete proof, immune to discussion; and if it doesn't, then it's disregarded.
This treatment of evidence can lead to overconfidence: once, when I was expertly framed by the spies, my fellow resistance member refused to even imagine how I could be innocent. And why should he listen to me? He had evidence that I was a spy. On the other hand, it can just as easily lead to under-confidence: when new players see that there is no conclusive proof one way or the other, they often disregard the hints and suggestive evidence (in someone's tone of voice, or their eagerness to go on a mission), and throw their hands up at the supposed randomness of the game.
Using Bayesianism as an alternative to this dichotomy allows me to treat evidence with the appropriate scrutiny, rather than using narrative ideas to guide my play. A two-person mission succeeds; the next mission adds a player to that team, and it fails. According to story logic, the first two players are trustworthy, so the third must have sabotaged the new mission. For more experienced players, the first mission is treated as having no informational value: spies may lie low, so any of the three players could be the saboteur, and it's a 1/3 shot. According to Bayesianism, P(player 3 is a spy) is influenced by all available evidence, given proper weighting. How likely is it for a spy to lie low on the first mission? Who chose for player 3 to join the mission? What is player 3's strategy as a spy? I find that this approach, of investigating all available evidence and updating my suspicions accordingly, gives me greater precision in my accusations, and hopefully leads my teammates to start valuing evidence in the gradient way that these games, and investigation in life in general, require for success.
I post this not only because I love playing Resistance (obviously!), but also because I think this game could be a fun and useful exercise in Bayesian reasoning, for the same reasons that Paranoid Debating may be: the group's appraisal of the evidence needs to be accurate for the resistance to win, while it must be inaccurate for the spies to win. This encourages proper Bayesian technique among the resistance, and clever, bias-abusing rhetoric from the spies to twist the game in their favor.
If anyone would like to use this game at a LessWrong meetup, or as an activity run by the Center for Modern Rationality, all you need are the rules (here and here), a deck of playing cards, and the power of Bayes!
(Special thanks to Julia Galef, for thinking the game sounded like a fun idea for teaching Bayesianism)
Meetup : Tucson Meetup
Discussion article for the meetup : Tucson Meetup
Hi everyone! I was pretty inspired by the Winter Solstice series of posts discussing the benefits and fun of being in an in-person rationalist community (and also lukeprog's Algorithm for Beating Procrastination :)). So, I'm tossing out a line to any rationalists in Tucson or southern Arizona who'd like to meet up and talk about whatever. Time and place are just first suggestions on my part, and if there's interest in changing the details so someone can make it, that could totally happen.