Comment author: TheOtherDave 13 March 2012 09:28:59PM 0 points

If your goal is to figure out what to have for breakfast, not much relevance at all.
If your goal is to program an automated decision-making system to figure out what breakfast supplies to make available to the population of the West Coast of the U.S., perhaps quite a lot.
If your goal is to program an automated decision-making system to figure out how to optimize all available resources for the maximum benefit of humanity, perhaps even more.

There are lots of groups represented on LW, with different perceived needs. Some are primarily interested in self-help threads, others in academic decision-theory threads, and so on. Easiest is to ignore threads that don't interest you.

Comment author: ksvanhorn 14 March 2012 01:29:52AM 0 points

If your goal is to program an automated decision-making system to figure out what breakfast supplies to make available to the population of the West Coast of the U.S., perhaps quite a lot.

This example has nothing like the character of the one-box/two-box problem or the PD-with-mental-clone problem described in the article. Why should it require an "advanced" decision theory? Because people's consumption will respond to the supplies made available? But standard game theory can handle that.

There are lots of groups represented on LW, with different perceived needs. [...] Easiest is to ignore threads that don't interest you.

It's not that I'm not interested; it's that I'm puzzled as to what possible use these "advanced" decision theories can ever have to anyone.

Comment author: ksvanhorn 13 March 2012 07:13:02PM 0 points

Are you sure that you need an advanced decision theory to handle the one-box/two-box problem, or the PD-with-mental-clone problem? You write that

a CDT agent assumes that X's decision is independent from the simultaneous decisions of the Ys- that is, X could decide one way or another and everyone else's decisions would stay the same.

Well, that's a common situation analyzed in game theory, but it's not essential to CDT. Consider playing a game of chess: your choice clearly affects the choice of your opponent. Or consider the decision of whether to punch a 6'5", 250 lb. muscle-man who has just insulted you -- your choice again has a strong influence on his choice of action. CDT is adequate for analyzing both of these situations.
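The chess and muscle-man cases can be captured with ordinary expected-utility reasoning, since the opponent's response is simply conditioned on your action. A minimal sketch, where the payoffs and the response rule are made up purely for illustration:

```python
# A toy version of the muscle-man example: the opponent's response is causally
# downstream of your action, and plain expected-utility (CDT-style) reasoning
# handles it by conditioning the response on the action. All payoffs and the
# response rule here are made up for illustration.

payoffs = {
    ("punch", "fight back"): -10,   # you get beaten up
    ("walk away", "gloat"):   -1,   # mild loss of face
}

def opponent_reply(my_move):
    # assumed response rule: he fights back if punched, gloats otherwise
    return "fight back" if my_move == "punch" else "gloat"

def value(my_move):
    return payoffs[(my_move, opponent_reply(my_move))]

best = max(["punch", "walk away"], key=value)   # -> "walk away"
```

Nothing "advanced" is needed here: the dependence of his action on yours is already built into the evaluation.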

It is true that in my two examples the other agent's choice is made after X's choice, rather than simultaneously with it. But of what relevance is the stipulation of simultaneity? Its only relevance is that it leads one to assume that the other decisions are independent of X's decision! That is, the root of the difficulty is simply that you're analyzing the problem using an assumption that you know to be false!

It seems to me that you can analyze the one-box/two-box problem or the PD-with-a-mental-clone problem perfectly well using CDT; you just have to use the right causal graph. The causal graph needs an arc from your decision to Omega's prediction for the first problem, and an arc from your decision to the clone's decision in the second problem. Then you do the usual maximization of expected utility.
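The "right causal graph" approach described above can be sketched numerically. The payoffs and the 0.9 accuracy figure below are assumed for illustration; they are not from the original problem statement in this thread:

```python
# The causal graph has an arc Decision -> Prediction, so Omega's prediction
# is conditioned on your decision rather than held fixed.

BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, filled iff Omega predicted one-boxing
ACCURACY = 0.9       # assumed P(Omega's prediction matches your decision)

def p_prediction(pred, decision):
    """P(Omega predicts `pred` | your decision) -- the Decision -> Prediction arc."""
    return ACCURACY if pred == decision else 1 - ACCURACY

def payoff(decision, pred):
    box_b = BOX_B if pred == "one-box" else 0
    box_a = BOX_A if decision == "two-box" else 0
    return box_a + box_b

def expected_utility(decision):
    return sum(p_prediction(pred, decision) * payoff(decision, pred)
               for pred in ("one-box", "two-box"))

# With this graph, one-boxing maximizes expected utility
# (roughly $900,000 vs. $101,000 for two-boxing).
```

The only change from the textbook CDT analysis is the arc from your decision to Omega's prediction; the maximization itself is entirely standard.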

Comment author: ksvanhorn 13 March 2012 06:54:34PM 0 points

I don't understand the need for this "advanced" decision theory. The situations you mention -- Omega and the boxes, PD with a mental clone -- are highly artificial; no human being has ever encountered such a situation. So what relevance do these "advanced" decision theories have to decisions of real people in the real world?

Comment author: Gabriel 12 March 2012 03:03:19PM 7 points

I think the idea is that even if Omega always predicted two-boxing, it still could be said to predict with 90% accuracy if 10% of the human population happened to be one-boxers. And yet you should two-box in that case. So basically, the non-deterministic version of Newcomb's problem isn't specified clearly enough.

Comment author: ksvanhorn 13 March 2012 06:47:23PM 2 points

I disagree. To be at all meaningful to the problem, the "90% accuracy" has to mean that, given all the information available to you, you assign a 90% probability to Omega correctly predicting your choice. This is quite different from correctly predicting the choices of 90% of the human population.
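The difference between the two readings matters, because they recommend opposite choices. A quick sketch, using the standard (assumed) Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box:

```python
# Personal reading: 90% is your probability that Omega predicts *your* choice.
eu_one_box = 0.9 * 1_000_000             # ~900,000
eu_two_box = 1_000 + 0.1 * 1_000_000     # ~101,000 -> one-box wins

# Population reading: Omega always predicts two-boxing, and is "90% accurate"
# only because 10% of people happen to one-box. Then the opaque box is empty
# for you no matter what you do.
eu_one_box_pop = 0.0                     # 0
eu_two_box_pop = 1_000.0                 # 1,000 -> two-box wins
```

So a statement of the problem that doesn't pin down which reading is intended leaves the decision underdetermined.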

Comment author: metaphysicist 10 March 2012 11:38:47PM 0 points

I assume that we're talking about opinions on factual matters, not personal values. Yes, one's fundamental (terminal) values I would expect to be pretty stable.

To my thinking, this stance forfeits rational reflection where it really counts most. You're saying, if I understand you, that you respect people who change their opinions on factual matters, but not on questions of fundamental ethics. This seems to assume, among other things, that people's values are much more coherent than they are (leaving little leverage for change).

You lose much more status, it is true, when you re-evaluate your terminal values than your factual contentions. That just means the problems of self-confirmation are compounded in ethics, not that they should be ignored there. You can't be rational yet rigidly maintain your terminal values' immunity to rational argument.

Comment author: ksvanhorn 11 March 2012 03:21:06AM 0 points

You can't be rational yet rigidly maintain your terminal values' immunity to rational argument.

Any argument that my terminal values should be one thing or another will itself be founded on certain assumed values. You can't start from a value-neutral position and get to a value system from there.

If rational argument alone is enough to cause a change in one's values, I can see only a few possibilities:

  • The changed values were instrumental values rather than terminal values. It makes perfect sense to modify instrumental values if one no longer believes that they serve the attainment of one's terminal values.

  • The values were incoherent. The rational argument has shown that they are in conflict with each other, making it clear that a choice among them is necessary.

I was going to add the possibility of a value whose subject matter is found not to exist, such as religious values founded on a belief in a god. Some of those values may evaporate after one becomes convinced that there is no god. But even in that case I think one can argue that the religious values really served a more fundamental value -- the desire for self-respect.

Comment author: ksvanhorn 10 March 2012 10:10:37PM 15 points

In my experience, Ph.D. dissertations can be a wonderful resource for getting an overview of a particular academic topic. This is because the typical -- and expected -- pattern for a dissertation is to first survey the existing literature before diving into one's own research. This both shows that the doctoral candidate has done his/her homework, and, just as importantly, brings his/her committee members up to speed on the necessary background. For example, a lot of my early education in Bayesian methods came from reading the doctoral dissertations of Wray Buntine, David J. C. MacKay, and Radford Neal on applications of Bayesian methods to machine learning. Michael Kearns' dissertation helped me learn about computational learning theory. A philosophy dissertation helped me learn about temporal logic.

Of course, this requires that you already have some background in some related discipline. My background was in computer science when I read the above-mentioned dissertations, along with a pretty good foundation in mathematics.

Free Applied Instrumental Rationality Webinar

5 ksvanhorn 10 March 2012 08:35PM

Dan Nuffer and I are putting together a free webinar that will go through the ideas in Smart Choices: A Practical Guide to Making Better Life Decisions, combined with whatever else seems useful. The authors of this book include one of the pioneers of decision analysis.

Although they don't describe it as such, Smart Choices is really a manual for basic applied instrumental rationality. It's a systematic way of going about your decisions, applicable to either decision problems (you have a situation dumped in your lap that requires a response) or decision opportunities (proactively seeking out ways to further your goals).

The webinar will consist of one-hour sessions once a week, for however long it takes to go through the material. We're going to do the webinar on Google+ Hangouts, and we'll have a discussion forum for the webinar on our web site.

If you're interested, send me an email [kevin at ksvanhorn com] with 1) your preferred day/time(s), and 2) the day/times that are out of the question for you.

Google+ Hangouts has a limit of 10 people. Five of those slots are already filled, leaving 5 seats open, so don't wait too long to email me if this is something you're interested in.


Comment author: dbaupp 02 March 2012 12:42:07AM 8 points

You signal that you are a reasonable person who does not let emotional attachment to a position cloud his judgment. If you're dealing with someone of higher status, you show that your mistake doesn't matter that much because you corrected yourself quickly. If you are dealing with someone who is lower status than you, you come off appearing magnanimous.

In many cases this is true, but someone could also interpret it as your being loose with your morals, as one who betrays his own ideals in a flash (and so is untrustworthy). Or they might interpret it as your being a follower, who only thinks what people tell you to think.

Comment author: ksvanhorn 05 March 2012 02:43:29AM 0 points

I assume that we're talking about opinions on factual matters, not personal values. Yes, one's fundamental (terminal) values I would expect to be pretty stable. Instrumental values are more fluid because they are a function of both one's terminal values and one's state of information about factual matters. It seems to me that one's morals and ideals are tied more closely to terminal values than to instrumental values.

Comment author: ksvanhorn 01 March 2012 08:44:39PM 11 points

Abandoning your previous position can also be a way of saving face, in at least two ways:

  • Being wrong is embarrassing, yes; but being wrong for a short period of time is less embarrassing than being wrong for an extended period of time. Best to stop the bleeding as quickly as possible.

  • You signal that you are a reasonable person who does not let emotional attachment to a position cloud his judgment. If you're dealing with someone of higher status, you show that your mistake doesn't matter that much because you corrected yourself quickly. If you are dealing with someone who is lower status than you, you come off appearing magnanimous.

In response to Grad School?
Comment author: ksvanhorn 27 February 2012 09:21:16PM 7 points

What are your goals? What does the life you want to build for yourself look like? You need to answer these questions, at least approximately, before you can make any reasonable decision about your education. Some things to think about:

  • How important is money (beyond basic living expenses) to you?

  • How important is independence and autonomy?

  • Is discovering the secrets of the universe your deepest desire?

  • Are you one of those people who gets a rush from finding a clever solution to a difficult problem? (In computer science we call this an "algorasm"...)

  • Do you prefer solving problems that can be attacked with science and mathematics or those that require understanding what makes other people tick?

  • Are long work hours OK if the work is interesting, or do you want a 40-hour workweek?

  • Do a spouse and children figure into your vision for your life?

In other words, you need to think about your own utility function... and how that might evolve as you grow older. My suggestion is to try to put together a set of objectives that you could call your life goals, and then use this both to evaluate your education options and to suggest new options.

I would also recommend reading something like Smart Choices. If you decide you want to follow the process described in that book, I'd be glad to help you work through the steps.
