An idea: Sticking Point Learning

10 cousin_it 08 September 2009 09:52AM

When trying to learn technical topics from online expositions, I imagine that most people hit snags at some point - passages they can't seem to grasp right away and that impede further progress. Moreover, I imagine that different people often get stuck in the same places, and that a few fortunate words of explanation can often get them over the hump. (For example, "integral is the area under the curve" or "entropy is the expected number of bits".) And finally, perhaps unintuitively, I also imagine that someone who just overcame a sticking point is more likely to say the right magic words about it than someone who has understood the topic for years.

Hence my suggestion: let's try to identify and resolve such sticking points together, maybe as part of our Simple Math of Everything. This idea might be more appropriate for Hacker News, but I'm submitting it here because it sounds like a not-for-profit rather than a business, and seems nicely aligned with the goals of our community.

The required software certainly exists: our wiki would do fine. One of us posts a copy of a technical text. Others try to parse it, hit the difficult points, resolve them by intellectual force and insert (as a mid-article comment) the magic words or hyperlinks that helped them in that particular case. I really wonder what the result would look like; hopefully, something comfortably readable by people with modest math-reading skillz.

Any number of technical topics suggest themselves immediately - now what would you like to see?

Indexical Uncertainty and the Axiom of Independence

9 Wei_Dai 07 June 2009 09:18AM

I've noticed that the Axiom of Independence does not seem to make sense when dealing with indexical uncertainty, which suggests that Expected Utility Theory may not apply in situations involving indexical uncertainty. But Googling for "indexical uncertainty" in combination with either "independence axiom" or "axiom of independence" gives zero results, so either I'm the first person to notice this, I'm missing something, or I'm not using the right search terms. Maybe the LessWrong community can help me figure out which is the case.

The Axiom of Independence says that for any A, B, C, and p, you prefer A to B if and only if you prefer p A + (1-p) C to p B + (1-p) C.  This makes sense if p is a probability about the state of the world. (In the following, I'll use “state” and “possible world” interchangeably.) In that case, what it’s saying is that what you prefer (e.g., A to B) in one possible world shouldn’t be affected by what occurs (C) in other possible worlds. Why should it, if only one possible world is actual?
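In symbols (my transcription of the axiom exactly as stated above, with ≻ for strict preference; not an addition to its content):

```latex
% Axiom of Independence: for all lotteries A, B, C and any p \in (0, 1],
A \succ B \iff p\,A + (1-p)\,C \;\succ\; p\,B + (1-p)\,C
```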

In Expected Utility Theory, for each choice (i.e. option) you have, you iterate over the possible states of the world, compute the utility of the consequences of that choice given that state, then combine the separately computed utilities into an expected utility for that choice. The Axiom of Independence is what makes it possible to compute the utility of a choice in one state independently of its consequences in other states.
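A minimal sketch of that computation, in Python - the states, probabilities, and utility numbers here are invented placeholders, not anything from the post:

```python
# Sketch of the Expected Utility computation described above.
# The Axiom of Independence is what licenses scoring each
# (choice, state) pair separately before averaging.

def expected_utility(choice, states, prob, utility):
    """Probability-weighted average of per-state utilities of a choice."""
    return sum(prob[s] * utility(choice, s) for s in states)

# Illustrative example: two possible worlds, two choices.
states = ["rain", "sun"]
prob = {"rain": 0.3, "sun": 0.7}

def utility(choice, state):
    table = {("picnic", "rain"): 0, ("picnic", "sun"): 10,
             ("cinema", "rain"): 6, ("cinema", "sun"): 5}
    return table[(choice, state)]

for c in ("picnic", "cinema"):
    print(c, expected_utility(c, states, prob, utility))
# picnic 7.0, cinema 5.3 -> Expected Utility Theory picks the picnic.
```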

But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world?  In that case, what occurs at one location in the world can easily interact with what occurs at another location, either physically, or in one’s preferences. If there is physical interaction, then “consequences of a choice at a location” is ill-defined. If there is preferential interaction, then “utility of the consequences of a choice at a location” is ill-defined. In either case, it doesn’t seem possible to compute the utility of the consequences of a choice at each location separately and then combine them into a probability-weighted average.

Here's another way to think about this. In the expression "p A + (1-p) C" that's part of the Axiom of Independence, p was originally supposed to be the probability of a possible world being actual, and A the consequences of a choice in that possible world. We could say that A is local with respect to p. What happens if p is an indexical probability instead? Since there are no sharp boundaries between locations in a world, we can't redefine A to be local with respect to p. And if A still denotes the global consequences of a choice in a possible world, then "p A + (1-p) C" would mean two different sets of global consequences in the same world, which is nonsensical.

If I’m right, the notion of a “probability of being at a location” will have to acquire an instrumental meaning in an extended decision theory. Until then, it’s not completely clear what people are really arguing about when they argue about such probabilities, for example in papers about the Simulation Argument and the Sleeping Beauty Problem.

Edit: Here's a game that exhibits what I call "preferential interaction" between locations. You are copied in your sleep, and both of you wake up in identical rooms with 3 buttons. Button A immunizes you with vaccine A, button B immunizes you with vaccine B. Button C has the effect of A if you're the original, and the effect of B if you're the clone. Your goal is to make sure at least one of you is immunized with an effective vaccine, so you press C.

To analyze this decision in Expected Utility Theory, we have to specify the consequences of each choice at each location. If we let these be local consequences, so that pressing A has the consequence "immunizes me with vaccine A", then what I prefer at each location depends on what happens at the other location. If my counterpart is vaccinated with A, then I'd prefer to be vaccinated with B, and vice versa. "immunizes me with vaccine A" by itself can't be assigned a utility.

What if we use the global consequences instead, so that pressing A has the consequence "immunizes both of us with vaccine A"? Then a choice's consequences do not differ by location, and “probability of being at a location” no longer has a role to play in the decision.
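To make the contrast concrete, here is a sketch of the game in Python. The assumption that exactly one of vaccine A or B is effective, and the players don't know which, is mine - one way to cash out "make sure at least one of you is immunized with an effective vaccine":

```python
# Sketch of the copying game above. Both copies run the same decision
# procedure, so they necessarily press the same button.
# Assumption (mine): exactly one of A, B is effective, unknown to the players.

def outcome(button):
    """Global consequence of both copies pressing `button`:
    (original's vaccine, clone's vaccine)."""
    return {"A": ("A", "A"),
            "B": ("B", "B"),
            "C": ("A", "B")}[button]   # C: acts as A for the original, B for the clone

def guaranteed_coverage(button):
    """True if at least one copy gets the effective vaccine,
    whichever of A or B the effective one turns out to be."""
    vaccines = outcome(button)
    return all(eff in vaccines for eff in ("A", "B"))

for b in ("A", "B", "C"):
    print(b, guaranteed_coverage(b))
# A False, B False, C True -- only C guarantees coverage, and the
# "probability of being the original" never appears in the computation.
```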

This Failing Earth

19 Eliezer_Yudkowsky 24 May 2009 04:09PM

Suppose I told you about a certain country, somewhere in the world, in which some of the cities have degenerated into gang rule.  Some such cities are ruled by a single gang leader, others have degenerated into almost complete lawlessness.  You would probably conclude that the cities I was talking about were located inside what we call a "failed state".

So what does the existence of North Korea say about this Earth?

No, it's not a perfect analogy.  But the thought does sometimes occur to me, to wonder if the camel has two humps - whether there are failed Earths and successful Earths in the great macroscopic superposition popularly known as "many worlds", and we're not one of the successful.  I think of this as the "failed Earth" hypothesis.

Of course the camel could also have three or more humps, and it's quite easy to imagine Earths that are failing much worse than this, epic failed Earths ruled by the high-tech heirs of Genghis Khan or the Catholic Church.  Oh yes, it could definitely be worse...

...and the "failed state" analogy is hardly perfect; "failed state" usually refers to failure to integrate into the global economy, but a failed Earth is not failing to integrate into anything larger...

...but the question does sometimes haunt me, as to whether in the alternative Everett branches of Earth, we could identify a distinct cluster of "successful" Earths, and we're not in it.  It may not matter much in the end; the ultimate test of a planet's existence probably comes down to Friendly AI, and Friendly AI may come down to nine people in a basement doing math.  I keep my hopes up, and think of this as a "failing Earth" rather than a "failed Earth".

But it's a thought that comes to mind, now and then.  Reading about the ongoing Market Complexity Collapse and wondering if this Earth failed to solve one of the basic problems of global economics, in the same way that Rome, in its later days, failed to solve the problem of orderly transition of power between Caesars.
