Meetup : Chicago Meetup
Discussion article for the meetup : Chicago Meetup
We're meeting at the Big Bowl. Look for an LW sign. At the previous meetup, some of us agreed it would be good to have an official topic. No great topic comes to mind right now, so if we still want one, let's figure something out in the comments.
If you're coming, please leave a comment here or a message on the Google Group (http://groups.google.com/group/less-wrong-chicago), so we can give the Big Bowl an idea of how many people to expect.
Meetup : Chicago Meetup
Airedale and I will be hosting the next meetup at the Corner Bakery at 1121 North State Street. Look for a sign saying "LW". We hope to see you there!
PhilPapers survey results now include correlations
Now you can see how philosophical positions are correlated with each other and with some demographic variables:
Chicago Meetup 11/14
Airedale and I will host a meetup this Sunday, starting at 5 pm, at the Elephant & Castle Pub and Restaurant on 111 West Adams Street. We'll put up a sign saying "LessWrong".
We're open to changing the time or venue, so check back here to be sure, or join our Google Group for future updates. Having the meetup in the Loop seemed the best compromise, but we haven't tried this particular venue before and maybe someone has a better idea.
A Fundamental Question of Group Rationality
What do you believe because others believe it, even though your own evidence and reasoning ("impressions") point the other way?
(Note that answers like "quantum chromodynamics" don't count, except in the unlikely case that you've seriously tried to do your own physics, and it suggested the mainstream was wrong, and that's what you would have believed if not for it being the mainstream.)
Chicago/Madison Meetup
After a successful first Chicago meetup, Airedale and I are now looking forward to the first nonfirst meetup. There are a few different options:
- Hold the meetup in Chicago on a weekend afternoon, like last time. That could mean the 31st or 1st, or maybe the 24th or 25th. Unless there's massive support for another Hyde Park meetup, we propose trying out a different part of town, such as the North side or the Loop.
- Hold the meetup in Chicago on the evening of Tuesday the 27th. The upside is LW/SIAI regulars Will Newsome and Kevin plan to be in town.
- Will and Kevin plan to visit Madison on the 28th and some other cities in the days after, where they'll want to hold meetups if there's interest. They will post details. (Airedale and I won't be able to attend, though we may be able to attend a future weekend Madison meetup.)
Any thoughts on what's convenient for most people, and on what's a good venue?
If you're interested, there's a Google Group where you can sign up for further updates. Hope to see you soon.
Swimming in Reasons
To a rationalist, certain phrases smell bad. Rotten. A bit fishy. It's not that they're actively dangerous, or that they don't occur when all is well; but they're relatively prone to emerging from certain kinds of thought processes that have gone bad.
One such phrase is "for many reasons". For example, many reasons all saying you should eat some food, or vote for some candidate.
To see why, let's first recapitulate how rational updating works. Beliefs (in the sense of probabilities for propositions) ought to bob around in the stream of evidence as a random walk without trend. When, in contrast, you can see a belief try to swim somewhere, right under your nose, that's fishy. (Rotten fish don't really swim, so here the analogy breaks down. Sorry.) As a Less Wrong reader, you're smarter than a fish. If the fish is going where it's going in order to flee some past error, you can jump ahead of it. If the fish is itself in error, you can refuse to follow. The mathematical formulation of these claims is clearer than the ichthyological formulation, and can be found under conservation of expected evidence.
More generally, according to the law of iterated expectations, it's not just your probabilities that should be free of trends, but your expectation of any variable. Conservation of expected evidence is just the special case where a variable can be 1 (if some proposition is true) or 0 (if it's false); the expectation of such a variable is just the probability that the proposition is true.
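These conservation claims are easy to check numerically. Here is a minimal sketch in Python, with an arbitrary prior and arbitrary likelihoods chosen purely for illustration: before you observe the evidence E, the probability-weighted average of your two possible posteriors equals your prior, so there is no trend for the belief to swim along.

```python
# Conservation of expected evidence, checked on made-up numbers.
# The prior and likelihoods below are arbitrary illustrative choices.

prior = 0.3             # P(H)
p_e_given_h = 0.8       # P(E | H)
p_e_given_not_h = 0.2   # P(E | not-H)

# Total probability of seeing the evidence E.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' rule for each of the two possible observations.
posterior_if_e = p_e_given_h * prior / p_e
posterior_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

# The expectation of the posterior, taken before observing anything.
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e

print(round(expected_posterior, 10))  # equals the prior: 0.3
```

Whatever numbers you substitute, the last line recovers the prior: a Bayesian's probability is a martingale with respect to the evidence.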
So let's look at the case where the variable you're estimating is an action's utility. We'll define a reason to take the action as any info that raises your expectation, and the strength of the reason as the amount by which it does so. The strength of the next reason, conditional on all previous reasons, should be distributed with expectation zero.
Maybe the distribution of reasons is symmetrical: for example, if somehow you know all reasons are equally strong in absolute value, reasons for and against must be equally common, or they'd cause a predictable trend. Under this assumption, the number of reasons in favor follows a binomial distribution with p=.5. Most of the probability mass sits near an even split, especially for large numbers of reasons: when there are ten reasons in favor, there are usually at least a few against.
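To put a number on this, here is a short sketch of the n = 10 case under the symmetric assumption above: ten independent, equally strong reasons, each equally likely to point either way.

```python
# Counts of reasons "in favor" under the symmetric assumption:
# Binomial(n, 0.5). With n = 10, unanimity is rare.
from math import comb

n = 10

# Probability that all ten reasons favor the action.
p_all_in_favor = comb(n, n) * 0.5 ** n
print(p_all_in_favor)  # 0.0009765625, i.e. 1 in 1024

# Probability of a lopsided split: at least 9 of 10 agreeing either way.
p_lopsided = sum(comb(n, k) for k in (0, 1, 9, 10)) * 0.5 ** n
print(p_lopsided)  # 0.021484375, about 1 in 47
```

So under this model, ten reasons unanimously in favor is a one-in-a-thousand event, which is why observing it should make you suspicious of the process that generated the reasons.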
But what if that doesn't happen? What if ten pieces of info in a row all favor the action you're considering?
Disambiguating Doom
Analysts of humanity's future sometimes use the word "doom" rather loosely. ("Doomsday" has the further problem that it privileges a particular time scale.) But doom sounds like something important; and when something is important, it's important to be clear about what it is.
Some properties that could all qualify an event as doom:
- Gigadeath: Billions of people, or some number roughly comparable to the number of people alive, die.
- Human extinction: No humans survive afterward. (Or, modified: no human-like life survives, or no sentient life survives, or no intelligent life survives.)
- Existential disaster: Some significant fraction, perhaps all, of the future's potential moral value is lost. (Coined by Nick Bostrom, who defines an existential risk as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential", which I interpret to mean the same thing.)
- "Doomsday argument doomsday": The total number of observers (or observer-moments) in existence ends up being small – not much larger than the total that have existed in the past. This is what we should believe if we accept the Doomsday argument.
- Great filter: Earth ends up not colonizing the stars, or doing anything else widely visible. If all species are filtered out, this explains the Fermi paradox.
Examples to illustrate that these properties are fundamentally different:
- If billions die (1), humanity may still recover and not go extinct (2), retain most of its potential future value (3), spawn many future observers (4), and colonize the stars (5). (E.g., nuclear war, but also aging.)
- If cockroaches or Klingon colonists build something even cooler afterward, human extinction (2) isn't an existential disaster (3), and conversely, the creation of an eternal dystopia could be an existential disaster (3) without involving human extinction (2).
- Human extinction (2) doesn't imply few future observers (4) if it happens too late, or if we're not alone; and few future observers (4) doesn't imply human extinction (2) if we all live forever childlessly. (It's harder to find an example of few observer-moments without human extinction, short of p-zombie infestations.)
- If we create an AI that converts the galaxy to paperclips, humans go extinct (2) and it's an existential disaster (3), but it isn't part of the great filter (5). (For an example where all intelligence goes extinct, implying few future observers (4) for any definition of "observer", consider physics disasters that expand at light speed.) If our true desire is to transcend inward, that's part of the great filter (5) without human extinction (2) or an existential disaster (3).
- If we leave our reference class of observers for a more exciting reference class, that's a doomsday argument doomsday (4) but not an existential disaster (3). The aforementioned eternal dystopia is an existential disaster (3) but implies many future observers (4).
- Finally, if space travel is impossible, that's a great filter (5) but compatible with many future observers (4).
Taking Occam Seriously
Paul Almond's site has many philosophically deep articles on theoretical rationality along LessWrongish assumptions, including but not limited to some great atheology, an attempt to solve the problem of arbitrary UTM choice, a possible anthropic explanation why space is 3D, a thorough defense of Occam's Razor, a lot of AI theory that I haven't tried to understand, and an attempt to explain what it means for minds to be implemented (related in approach to this and this).
Open Thread: May 2009
Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.