In response to Scarcity
Comment author: Paul_Gowder 27 March 2008 05:59:38PM 6 points [-]

I agree with Bobvis: a LOT of this is rational:

# When University of North Carolina students learned that a speech opposing coed dorms had been banned, they became more opposed to coed dorms (without even hearing the speech). (Probably in Ashmore et al. 1971.)

This seems straight Bayes to me. The banning of the speech counts as information about the chance that you'd agree with it: if the administration rarely bans speech that isn't dangerous to it (i.e., speech that won't convince), Everyone's Favorite Probability Rule kicks in and makes it totally rational to become more opposed to coed dorms -- assuming, that is, that you believe your chance of being convinced comes largely from rational sources (a belief that practical agents are at least somewhat committed to having).
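(A minimal sketch of that update, with made-up numbers -- none of these probabilities come from the actual study; they just show the Bayesian structure of the argument.)

```python
# Hypothetical numbers, purely to illustrate the Bayesian argument above.

p_convincing = 0.3               # prior: the speech would have convinced you
p_ban_given_convincing = 0.8     # administration mostly bans dangerous speech
p_ban_given_not = 0.1            # and rarely bans harmless speech

# Bayes' rule: P(convincing | banned)
p_ban = (p_ban_given_convincing * p_convincing
         + p_ban_given_not * (1 - p_convincing))
posterior = p_ban_given_convincing * p_convincing / p_ban

print(f"P(speech would have convinced me | banned) = {posterior:.2f}")
# ~0.77: the ban alone is substantial evidence, so shifting toward the
# banned position without hearing the speech is straight Bayes.
```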

# When a driver said he had liability insurance, experimental jurors awarded his victim an average of four thousand dollars more than if the driver said he had no insurance. If the judge afterward informed the jurors that information about insurance was inadmissible and must be ignored, jurors awarded an average of thirteen thousand dollars more than if the driver had no insurance. (Broeder 1959.)

This too seems rational, though in this case only mostly, not totally. We can understand jurors as trying to balance the costs and benefits of the award (not their legal job, but a perfectly sane thing to do). And the diminishing marginal utility of wealth suggests that imposing a large judgment on an insurance company causes less disutility to the payer (or payers, once the cost is spread over the company's clients) than imposing it on a single person. As for the judge's informing the jurors that insurance information is inadmissible: again, they can interpret that instruction as information about the presence of insurance and update accordingly. (That interpretation might not be accurate, given how judges actually give instructions, but jurors need not know that.) Of course, they seem to have updated too much: they increased their awards far more when p(insurance) merely rose (while staying below 1) than when they learned that p(insurance) = 1. So the behavior is probably still partially irrational. But it isn't an artifact of some kind of magical scarcity effect.
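(To see why that pattern looks like over-updating, here's a toy calculation with invented award figures: a coherent expected-cost juror's award under uncertain insurance should land between the no-insurance and certain-insurance awards, yet the experimental jurors went past the certainty case.)

```python
# Toy numbers, not from Broeder (1959): a juror who awards based on the
# expected burden on the defendant should interpolate between the two cases.

award_no_insurance = 30_000     # hypothetical baseline award
award_with_insurance = 34_000   # +$4k when insurance is certain (p = 1)

def coherent_award(p_insurance):
    """Expected-value award for an intermediate belief about insurance."""
    return ((1 - p_insurance) * award_no_insurance
            + p_insurance * award_with_insurance)

# Even a juror who reads "ignore the insurance evidence" as near-certainty
# of insurance should award at most the p = 1 figure:
print(coherent_award(0.9))   # 33600.0 -- still below 34,000
# The observed +$13k after the judge's instruction overshoots any p <= 1,
# which is why the behavior is partly irrational, not pure updating.
```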

Comment author: Paul_Gowder 27 February 2008 10:59:48PM 0 points [-]

I'm skeptical about the possibility of really carrying out this kind of visualization (or, more broadly, imaginary leap). Here's why.

I might be able to say that I can imagine the existence of a god, and what the world would be like if, say, it were the Christian one. But I can't imagine myself in that world -- in that world, I'm a different person. For in that world, either I hold the counterfactually true belief that there is such a god, or I don't. If I don't hold that belief, then my response to that world is the same as my response to this world. If I do hold it, well, how can I model that?

This point is related to a point that Eliezer made in the comments, one that I think just absolutely nails the problem, for a narrower class of cases than the full set for which the problem exists:

You can invent all kinds of Gods and demand that I visualize the case of their existence, or of their telling me various things, but you can't necessarily force me to visualize the case where I accept their statement that killing babies is a good idea - not unless you can argue it well enough to create a real moral doubt in my mind.

Exactly.

But I maintain that you can't model the existence of a God with the right properties (including omnipotence, omniscience, and omnibenevolence) without being able to model that acceptance.

And likewise, the woman who believed in the soul couldn't model her reaction to a world without a soul without being able to experience herself as a person who genuinely doesn't believe in a soul. But she can only have that experience by becoming such a person.

I think this is just a limitation of human psychology. Cf. Thomas Nagel's great article, "What Is It Like to Be a Bat?" The argument doesn't directly apply, but the intuition does.

Comment author: Paul_Gowder 05 February 2008 04:28:15PM 0 points [-]

(And by "expected utility" in the above comment, I meant "expected value" not taking into account risk attitude. One must be precise about such things.)
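(A toy illustration of that distinction, with an invented bet and an assumed concave utility function: the expected value can be positive while the expected utility, for a risk-averse agent, is negative.)

```python
import math

# Invented bet: 50% chance of winning $600, 50% chance of losing $500,
# starting from $1,000 of wealth. Numbers are illustrative only.

wealth = 1_000.0
outcomes = [(0.5, 600.0), (0.5, -500.0)]

expected_value = sum(p * x for p, x in outcomes)
print(expected_value)   # 50.0 -- positive in raw dollars

# Expected *utility* under a concave (risk-averse) utility function,
# here log wealth, can still be negative relative to not betting:
eu_bet = sum(p * math.log(wealth + x) for p, x in outcomes)
eu_no_bet = math.log(wealth)
print(eu_bet - eu_no_bet)   # ~-0.11: the risk-averse agent declines
```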

Comment author: Paul_Gowder 05 February 2008 04:23:31PM 0 points [-]

What if one thinks (as I do) that not only do prediction markets do badly, but so do I? If neither the market nor I am doing better than random, is my expected utility from betting positive?

Also, I'm not sure how Intrade's payoff calculation works -- how much does one stand to gain per dollar on a bet at those odds? I count myself pretty risk-averse if it means gambling $250.00 for a $10.00 gain.
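(For anyone wanting to check that intuition, here's a rough sketch of the arithmetic for a generic binary prediction-market contract. The 96-cent price is an assumption chosen to match the $250-for-$10 figure, not an actual Intrade quote.)

```python
# Rough sketch of binary prediction-market payoff arithmetic. Assumed
# contract: pays $1 if the event happens, $0 otherwise, trading at the
# market's implied probability. The price below is illustrative.

price = 0.96      # assumed cost per contract (implied p = 96%)
stake = 250.0     # dollars risked

contracts = stake / price                  # ~260.4 contracts
gain_if_right = contracts * (1 - price)    # ~$10.42 profit
loss_if_wrong = stake                      # lose the full stake

def expected_profit(p):
    """Expected profit given your own probability p for the event."""
    return p * gain_if_right - (1 - p) * loss_if_wrong

print(round(gain_if_right, 2))          # ~10.42: $250 risked for ~$10
print(round(expected_profit(0.96), 2))  # ~0.0: fair iff you agree with
                                        # the market's implied probability
```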

Anyway. My cash-free prediction is Obama by 2 points in the general election.

Comment author: Paul_Gowder 04 February 2008 11:57:38PM 0 points [-]

Silas, that's actually a pretty good way to capture some of the major theories about color -- ostensive definition for a given color solves a lot of problems.

But I wish Eliezer had pointed out that intensional definitions allow us to use kinds of reasoning that extensional definitions don't ... how do you do deduction on an extensional definition?

Also, extensional definitions are harder to communicate with interpersonally. I can wear two shirts, both of which I would call "purple," while someone else would call one "mauve" and the other "taupe" (or something like that -- I'm not even sure what those last two colors are). Whereas if we'd defined the colors by wavelengths of light, we'd know what we were talking about. It's harder to get overlap between people with extensional definitions than with intensional ones.
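(A minimal sketch of the contrast. The wavelength cutoffs are invented for illustration, not authoritative color science: the point is that an intensional definition is a rule you can reason from, while an extensional one is just a list of examples.)

```python
# Intensional definition: a rule over wavelengths (in nanometers).
# The bounds here are made up purely for illustration.
def is_purple(wavelength_nm):
    return 380 <= wavelength_nm <= 450

# Extensional definition: an enumerated list of things already called purple.
purple_examples = ["shirt_A", "shirt_B", "that one flower"]

# Deduction works on the intensional form: anything satisfying the rule
# counts as purple, including objects nobody has classified yet.
print(is_purple(420))                # True, even for never-before-seen light

# The extensional form only supports membership lookup; a new unlabeled
# object simply falls outside the definition.
print("shirt_C" in purple_examples)  # False -- not wrong, just silent
```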

Comment author: Paul_Gowder 01 February 2008 08:16:42AM -1 points [-]

I do understand. My point is that we ought not to *care* whether we're going to consider all the possibilities and benefits.

Oh, but you say, our caring about our consideration process is a determined part of the causal chain leading to our consideration process, and thus to the outcome.

Oh, but I say, we ought not to *care* about that caring. Again, recurse as needed. Nothing you can say about the fact that a cognition is in the causal chain leading to a state of affairs counts as a point against the claim that we ought not to care about whether or not we have that cognition, if having it is unavoidable.

Comment author: Paul_Gowder 01 February 2008 07:28:13AM 1 point [-]

Unknown: your last question highlights the problem with your reasoning. It's idle to ask whether I'd go and jump off a cliff if I found out my future were determined. What does that question even mean?

Put a different way, why should we ask an "ought" question about events that are determined? If A will do X whether or not it is the case that a rational person will do X, why do we care whether or not it is the case that a rational person will do X? I submit that we care about rationality because we believe it'll give us traction on our problem of deciding what to do. So assuming fatalism (which is what we must do if the AI knows what we're going to do, perfectly, in advance) demotivates rationality.

Here's the ultimate problem: our intuitions about these sorts of questions don't work, because they're fundamentally rooted in our self-understanding as agents. It's really, really hard for us to say sensible things about what it might mean to make a "decision" in a deterministic universe, or to understand what that implies. That's why Newcomb's problem is a problem -- because we have normative principles of rationality that make sense only when we assume that it *matters* whether or not we follow them, and we don't really know what it would mean to *matter* without causal leverage.

(There's a reason free will is one of Kant's antinomies of reason. I've been meaning to write a post about transcendental arguments and the limits of rationality for a while now... it'll happen one of these days. But in a nutshell... I just don't think our brains work when it comes down to comprehending what a deterministic universe looks like on some level other than just solving equations. And note that this might make evolutionary sense -- a creature who gets the best results through a [determined] causal chain that includes rationality is going to be selected for the beliefs that make it easiest to use rationality, including the belief that it makes a difference.)

Comment author: Paul_Gowder 01 February 2008 06:27:37AM 0 points [-]

Eliezer: whether or not a fixed future poses a problem for *morality* is a hotly disputed question which even I don't want to touch. Fortunately, *this* problem is one that is pretty much wholly orthogonal to morality. :-)

But I feel like the fixed-future issue is the key to dissolving the present problem. So, assume the box decision is fixed. It need not be the case that the stress is fixed too. If the stress isn't fixed, then it can't be relevant to the box decision (the box is fixed regardless of your decision between stress and no-stress). If the stress IS fixed, then there's no decision left to take. (Except possibly whether or not to stress about the stress; call that stress*, and recurse the argument accordingly.)

In general, for any pair of actions X and Y where X is determined: either X is conditional on Y, in which case Y must also be determined, or X is not conditional on Y, in which case Y can be determined or not. So appealing to Y as part of the process that leads to X doesn't show that anything we could do to Y makes a difference, if X is determined.

Comment author: Paul_Gowder 01 February 2008 03:52:44AM 7 points [-]

I don't know the literature around Newcomb's problem very well, so excuse me if this is stupid. BUT: why not just reason as follows:

1. If the superintelligence can predict your action, one of the following two things must be the case:

a) whether you pick the box or not is already absolutely determined (i.e., we live in a fatalistic universe, at least with respect to your box-picking)

b) your box picking is not determined, but it has backwards causal force, i.e. something is moving backwards through time.

If a), then practical reason is meaningless anyway: you'll do what you'll do, so stop stressing about it.

If b), then you should be a one-boxer for perfectly ordinary rational reasons, namely that it brings it about that you get a million bucks with probability 1.

So there's no problem!
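(A minimal sketch of the payoff comparison under case (b), using the standard payoffs from the Newcomb literature and treating predictor accuracy as a free parameter.)

```python
# Expected payoffs in Newcomb's problem. The dollar amounts are the
# standard ones; the predictor's accuracy is left as a parameter.

BOX_A = 1_000_000   # opaque box, filled iff the predictor expects one-boxing
BOX_B = 1_000       # transparent box, always contains $1,000

def expected_one_box(accuracy):
    # Predictor right with probability `accuracy`: box A was filled.
    return accuracy * BOX_A + (1 - accuracy) * 0

def expected_two_box(accuracy):
    # Predictor right: box A is empty, so you get only box B.
    return accuracy * BOX_B + (1 - accuracy) * (BOX_A + BOX_B)

# Under (b), a perfect predictor makes one-boxing trivially better:
# $1,000,000 with probability 1 versus $1,000 with probability 1.
print(expected_one_box(1.0), expected_two_box(1.0))   # 1000000.0 1000.0

# Even well short of perfection, one-boxing dominates in expectation:
print(expected_one_box(0.9), expected_two_box(0.9))   # 900000.0 101000.0
```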

Comment author: Paul_Gowder 01 February 2008 03:45:50AM 1 point [-]

There should be a "yes, but I'll be late" option. (I selected "maybe" as a proxy for that.)

(Speaking of late things, I think I owe you a surreply on the utilitarianism/specks debate... it might take a while, though. Really busy.)
