PhilGoetz comments on Positioning oneself to make a difference - Less Wrong

5 Post author: Mitchell_Porter 18 August 2010 11:54PM

Comment author: PhilGoetz 19 August 2010 02:55:50AM *  7 points [-]

The problem of consciousness is really hard, and IMHO we don't dare try to make a friendly AI until we've got it solidly nailed down. (Well, IMHO making the "One Friendly AI to Rule them All" is just about the worst thing we could possibly do; but I'm speaking for the moment from the viewpoint of an FAI advocate.)

The idea of positioning yourself brings to mind chess as a metaphor for life. When I started playing chess, I thought chess players planned their strategies many moves in advance. Then I found out that I played best when I just tried to make my position better on each move, and opportunities for checkmate would present themselves. Then I tried to apply this to my life. Then, after many years of improving my position in many different ways without having a well-researched long-term plan, I realized that the game of life has too large a board for this to work as well as it does in chess.

I'd like to see posts on life lessons learned from chess and go, if someone would care to write them.

Comment author: [deleted] 19 August 2010 02:52:50PM 4 points [-]

I don't play chess, but it occurs to me that what you're talking about sounds like applying greedy algorithms to life. And I realized recently that that's what I do. At any given moment, take the action that is the biggest possible step towards your goal.

For example: You're trying to cut expenses. The first step you make is to cut your biggest optional expense. (Analogously: first deal with your biggest time sink, your biggest worry.) A lot of people start with the little details or worry about constructing a perfect long-term plan; my instinct is always to do the biggest step in the right direction that's possible right now.
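The greedy procedure described above can be sketched in a few lines. The expense names and amounts here are made up purely for illustration:

```python
# Hypothetical monthly optional expenses (illustrative numbers only).
expenses = {"dining out": 250, "streaming": 40, "gym": 60, "hobby gear": 120}
target_savings = 300

# Greedy step: repeatedly cut the single biggest remaining expense
# until the savings target is met or nothing is left to cut.
saved = 0
cuts = []
while saved < target_savings and expenses:
    biggest = max(expenses, key=expenses.get)  # locally best move
    saved += expenses.pop(biggest)
    cuts.append(biggest)

print(cuts)   # largest expenses go first: dining out, then hobby gear
print(saved)  # 370 — overshoots the target, as greedy methods often do
```

Note the characteristic trade-off: each step is locally optimal, but the final cut list isn't guaranteed to be the cheapest way to hit the target.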

Comment author: xamdam 19 August 2010 09:08:33AM 2 points [-]

I'd like to see posts on life lessons learned from chess and go, if someone would care to write them.

Here is a whole book of them (pretty good, IMO): How Life Imitates Chess by Garry Kasparov

Comment author: Morendil 19 August 2010 08:15:02AM 2 points [-]

I'd like to see posts on life lessons learned from chess and go, if someone would care to write them.

Go has many lessons, but you have to be somewhat tentative about taking them to heart, at least until you reach those ethereal high dan levels of learning. (That's one lesson right there.)

Comment author: hegemonicon 19 August 2010 01:40:03PM *  3 points [-]

The prime tenet of successful strategy in any domain - chess, life, whatever - is "always act to increase your freedom of action". In essence, the way to deal with an uncertain future is to give yourself as many ways of compensating for it as possible. (Edit: removed confused relationship to utility maximization).

It's much more difficult to apply this to a life-sized board, but it's still a very strong heuristic.

Comment author: ciphergoth 19 August 2010 02:01:10PM 3 points [-]

(It can also be thought of as a 'dumb' version of utility maximization, where the utility of every possibility is set to 1).

No, this gives a utility of 1 to every action. You have to find some way to explicitly encode for the diversity of options available to your future self.

Comment author: Emile 19 August 2010 02:20:04PM 1 point [-]

If you're programming a chess AI, that would translate into a heuristic for the "expected utility" of a position as a function of the number of moves you can make in that position (in addition to the number of pieces each player has).
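A minimal sketch of such an evaluation heuristic. The weights and inputs are illustrative guesses, not taken from any real engine; real evaluations are far more elaborate:

```python
def evaluate(my_moves, opp_moves, my_material, opp_material,
             w_mobility=0.1, w_material=1.0):
    """Toy chess position evaluation: score mobility (legal move count)
    alongside material balance. Weights are arbitrary for illustration."""
    mobility = w_mobility * (my_moves - opp_moves)
    material = w_material * (my_material - opp_material)
    return mobility + material

# With equal material, the side with more freedom of action scores higher.
open_position = evaluate(my_moves=30, opp_moves=20,
                         my_material=39, opp_material=39)
cramped_position = evaluate(my_moves=15, opp_moves=25,
                            my_material=39, opp_material=39)
```

This is exactly "increase your freedom of action" encoded as a term in the evaluation: between materially equal positions, the engine prefers the one where it has more legal moves.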

Comment author: hegemonicon 19 August 2010 05:05:50PM 0 points [-]

Hrm, I'm not sure if I just miscommunicated or I'm misunderstanding something about utility calculations. Can you clarify your correction?

Comment author: WrongBot 19 August 2010 05:31:34PM *  1 point [-]

Utility calculations are generally used to find the best course of action, i.e. the action with the highest expected utility. If every possible outcome has a utility set to 1, a utility maximizer will choose at random because all actions have equal expected utility. I think you're proposing maximizing the total utility of all possible future actions, but I'm pretty sure that's incompatible with reasoning probabilistically about utility (at least in the Bayesian sense). 0 and 1 are forbidden probabilities and your distribution has to sum to 1, so you don't ever actually eliminate outcomes from consideration. It's just a question of concentrating probabilities in the areas with highest utility.

Does that make any sense at all?

(Ciphergoth's answer to your question is approximately a more concise version of this comment.)

Comment author: hegemonicon 19 August 2010 05:42:32PM 1 point [-]

You're right both in my intended meaning and why it doesn't make sense - thanks.

Comment author: ciphergoth 19 August 2010 05:27:45PM *  1 point [-]

The expected utility is the sum of utilities weighted by probability. The probabilities sum to 1, and since the utilities are all 1, the weighted sum is also 1. Therefore every action scores 1. See Expected utility hypothesis.
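Ciphergoth's point can be checked directly: with every utility fixed at 1, the expected utility of any action collapses to 1, no matter how that action shuffles probability across outcomes. A tiny illustration:

```python
def expected_utility(probs, utils):
    """Expected utility: sum of utilities weighted by probability."""
    assert abs(sum(probs) - 1.0) < 1e-9  # a distribution must sum to 1
    return sum(p * u for p, u in zip(probs, utils))

# Two actions with very different outcome distributions...
action_a = [0.7, 0.2, 0.1]
action_b = [0.1, 0.1, 0.8]
# ...but every outcome's utility set to 1.
utils = [1, 1, 1]

expected_utility(action_a, utils)  # ≈ 1.0
expected_utility(action_b, utils)  # ≈ 1.0 — the maximizer can't distinguish them
```

So a maximizer over these scores is indifferent between all actions, which is why "set every utility to 1" fails to encode a preference for keeping options open.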

Comment author: hegemonicon 19 August 2010 05:37:54PM *  0 points [-]

Thanks. (Edit: My intended meaning doesn't make sense, since # of possible outcomes doesn't change, only their probabilities do. Still a useful heuristic, but tying it to utility is incorrect).

Comment author: orthonormal 24 August 2010 06:02:31PM 0 points [-]

The way chess is different from life is that it's inherently adversarial; reducing your opponent's freedom of action is as much of a win as increasing yours (especially when you can reduce your opponent's options to "resign" or "face checkmate").

And I don't think that heuristic applies without serious exceptions in life either.