Perplexed comments on The Aliens have Landed! - Less Wrong

33 Post author: TimFreeman 19 May 2011 05:09PM


Comment author: Perplexed 19 May 2011 06:03:27PM 19 points [-]

And you can set up a scenario without dragging in torture and extinction. Aliens from Ganymede are about to ink a contract to trade us tons of Niobium in exchange for tons of Cobalt. But then the aliens reveal that they have billions of cloned humans working as an indentured proletariat in the mines of the Trojan asteroids. These humans are generally well treated, but the aliens offer to treat them even better - feed them ice cream - if we send the Cobalt without requiring payment in Niobium.

The central problem in all of these thought experiments is the crazy notion that we should give a shit about the welfare of other minds simply because they exist and experience things analogously to the way we experience things.

Comment author: Wei_Dai 19 May 2011 06:26:39PM 40 points [-]

Is there a standard name for the logical fallacy where you attempt a reductio ad absurdum but fail to notice that you're deriving the absurdity from more than one assumption? Why conclude that it's the caring about far-away strangers that is crazy, as opposed to the decision algorithm that says you should give in to extortions like this?

Comment author: Nornagest 19 May 2011 06:35:01PM 5 points [-]

I'm not sure words like "crazy" and "absurd" are even meaningful in this context. It's pretty easy to come up with internally consistent arguments generating both results, and the scenario's outlandish enough that it's not clear which one has more practical vulnerabilities; essentially we're dealing with dueling intuitions.

Comment author: TimFreeman 19 May 2011 08:54:20PM *  4 points [-]

Is there a standard name for the logical fallacy where you attempt a reductio ad absurdum but fail to notice that you're deriving the absurdity from more than one assumption?

Good catch. Yes, I was deriving the absurdity from more than one assumption.

Why conclude that it's the caring about far-away strangers that is crazy, as opposed to the decision algorithm that says you should give in to extortions like this?

Maybe with the right decision algorithm you wouldn't give in to extortions like this. However, this extortion attempt cost the aliens approximately nothing, so unless correctly inferring our decision algorithm cost them less than approximately nothing, the rational step for the aliens is to try the extortion regardless. Thus having a different decision algorithm probably wouldn't prevent the extortion attempt.
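The cost-benefit argument can be made concrete with a toy expected-value sketch. The function and all numbers below are hypothetical illustrations, not figures from the story:

```python
# Toy model of the aliens' decision to attempt extortion at all.
# All names and numbers are hypothetical illustrations.

def extortion_ev(attempt_cost, p_success, prize):
    """Expected value, to the extortionist, of making the attempt."""
    return p_success * prize - attempt_cost

# A near-free attempt is worthwhile even at tiny odds of success:
assert extortion_ev(attempt_cost=0.001, p_success=0.01, prize=1000.0) > 0
# A costly attempt at the same odds is not:
assert extortion_ev(attempt_cost=50.0, p_success=0.01, prize=1000.0) < 0
```

On this model, lowering the attacker's odds of success barely matters once the attempt is nearly free; only raising the cost of the attempt changes the sign of the calculation.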

Comment author: Wei_Dai 19 May 2011 09:17:42PM *  11 points [-]

But then changing your values to not care about simulated torture won't prevent the extortion attempt either (since the aliens will think there's a small chance you haven't actually changed your values and it costs them nothing to try). Unless you already really just don't care about simulated torture, it seems like you'd want to have a decision algorithm that makes you go to war against such extortionists (and not just ignore them).

Comment author: wedrifid 20 May 2011 09:22:12AM *  2 points [-]

But then changing your values to not care about simulated torture won't prevent the extortion attempt either (since the aliens will think there's a small chance you haven't actually changed your values and it costs them nothing to try).

That 'costs them nothing' part makes a potentially big difference. That the aliens must pay to make their attempt is what gives your decision leverage. The war that you suggest is another way of ensuring that there is a cost. Even though you may actually lose the war and be exterminated.

(Obviously there are whole other scenarios where becoming a 'protectorate' and tithing rather than going to war constitutes a mutually beneficial cooperation. When their BATNA is just to wipe you out but it is slightly better for them to just let you pay them.)

Comment author: AdeleneDawner 19 May 2011 09:53:31PM 2 points [-]

... changing your values to not care about simulated torture ... prevent the extortion attempt ...

Wait, is this a variant on Newcomb's problem?

(Am I just slow today? Nobody else seems to have mentioned it outright, at least.)

Comment author: ciphergoth 20 May 2011 07:26:57AM 3 points [-]

This sort of thing is really the motivating example behind Newcomb's problem.

Comment author: TimFreeman 20 May 2011 03:49:13PM 3 points [-]

This sort of thing is really the motivating example behind Newcomb's problem.

I'm not seeing the analogy. Can you explain?

The extortion attempt cost the aliens almost nothing, and would have given them a vacant solar system to move into if someone like Fred was in power, so it's rational for them to make the attempt almost regardless of the odds of succeeding. Nobody is reading anybody else's mind here, except the idiots who read their own minds and uploaded them to the Internet, and they don't seem to be making any of the choices.

Comment author: AdeleneDawner 20 May 2011 08:25:37PM 3 points [-]

This case looks most like the 'transparent boxes' version of the problem, which I haven't read much about.

In Newcomb's problem, Omega offers a larger amount of utility if you will predictably do something that intuitively would give a smaller amount of utility.

In this situation, being less open to blackmail probably gives you less disutility in the long run (fewer instances of people trying to blackmail you) than acceding to the blackmail, even though acceding intuitively gives you less disutility.
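That long-run comparison can be sketched numerically. Everything here (function name, attempt counts, costs, probabilities) is a hypothetical illustration of the trade-off, not a claim about the story:

```python
# Hypothetical comparison of two fixed policies toward repeated blackmail.
# Giving in costs less per incident but attracts more attempts.

def total_disutility(n_attempts, p_carried_out, cost_if_carried_out,
                     cost_of_giving_in, give_in):
    """Total long-run disutility under a fixed policy toward blackmail."""
    if give_in:
        # Every attempt succeeds and costs you the ransom.
        return n_attempts * cost_of_giving_in
    # Refusals only cost you when the threat is actually carried out.
    return n_attempts * p_carried_out * cost_if_carried_out

refuse = total_disutility(n_attempts=2, p_carried_out=0.1,
                          cost_if_carried_out=100.0,
                          cost_of_giving_in=10.0, give_in=False)
pay = total_disutility(n_attempts=50, p_carried_out=0.1,
                       cost_if_carried_out=100.0,
                       cost_of_giving_in=10.0, give_in=True)
assert refuse < pay  # under these assumptions, refusing wins long-run
```

The whole argument rides on the assumption that a reputation for refusing actually reduces the number of attempts; with equal attempt counts, the comparison flips.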

The other interesting part of this particular scenario is how to define 'blackmail' and differentiate it from, say, someone accidentally doing something that's harmful to you and asking you to help fix it. We've approached that issue, too, but I'm not sure if it's been given a thorough treatment yet.

Comment author: DanielLC 20 May 2011 06:11:08PM *  1 point [-]

They had other choices though. It would have been similarly inexpensive to offer to simulate happy people.

Even limiting the spheres to a single proof-of-concept would have been a start.

Comment author: TimFreeman 19 May 2011 09:28:46PM *  -1 points [-]

I really don't care about simulated torture, certainly not enough to prefer war over self-modification if simulated torture becomes an issue. War is very expensive and caring about simulated torture appears to be cost without benefit.

The story is consistent with this. Fred has problems because he cares about simulated torture, and Thud doesn't care and doesn't have problems.

Hmm, perhaps we agree that the story has only one source of absurdity now? No big deal either way.

(UDT is still worth my time to understand. I owe you that, and I didn't get to it yet.)

Comment author: Wei_Dai 19 May 2011 09:45:52PM 10 points [-]

Err, the point of having a decision theory that makes you go to war against extortionists is not to have war, but to have no extortionists. Of course you only want to do that against potential extortionists who can be "dissuaded". Suffice it to say that the problem is not entirely solved, but the point is that it's too early to say "let's not care about simulated torture because otherwise we'll have to give in to extortion" given that we seem to have decision theory approaches that still show promise of solving such problems without having to change our values.

Comment author: ArisKatsaris 20 May 2011 07:13:48PM 8 points [-]

If Fred cared about the aliens exterminating China, and Thud didn't care, then if the aliens instead threatened to exterminate China, Fred would again have problems and Thud again wouldn't.

A rock doesn't care about anything, and therefore it has no problems at all.

This topic isn't really about simulation, it's about the fact that caring about anything permits you to possibly sacrifice something else for it. Anything that isn't our highest value may end up traded away, sure.

Comment author: TimFreeman 20 May 2011 07:45:25PM *  -2 points [-]

If Fred cared about the aliens exterminating China, and Thud didn't care, then if the aliens instead threatened to exterminate China, Fred would again have problems and Thud again wouldn't.

You can travel from here to China and back. Therefore, caring about China has at least a potential instrumental consequence on the rest of my life. You can't travel from here to the aliens' simulation and back, so caring about what happens there imposes costs on the rest of my life but no benefits. The analogy is not valid.

Now, if the black spheres had decent I/O capabilities and you could outsource human intellectual labor tasks to the simulations, I suppose it would make sense to care about what happens there. People can't do useful work while they're being tortured, so that wasn't the scenario in the story.

Comment author: ArisKatsaris 20 May 2011 08:06:00PM *  12 points [-]

You can travel from here to China and back. Therefore, caring about China has at least a potential instrumental consequence on the rest of my life.

That's the only sane reason you believe can exist for caring about distant people at all? That you can potentially travel to them?

So suppose you're a paraplegic who doesn't want to travel anywhere, can't travel anywhere, and knows you'll die in two weeks anyway. You get a choice to push a button or not. If you push it, you get 1 dollar right now, but 1 billion Chinese people will die horrible deaths in two weeks, after your own death.

Are you saying that the ONLY "sane" choice is to push the button, because you can use the dollar to buy bubblegum or something, while there'll never be a consequence for you for having a billion Chinese die horrible deaths after your own death?

If so, your definition of sanity isn't the definition most people have. You're talking about the concept commonly called "selfishness", not "sanity".

Comment author: TimFreeman 20 May 2011 08:13:51PM *  5 points [-]

If so, your definition of sanity isn't the definition most people have. You're talking about the concept commonly called "selfishness", not "sanity".

Fine. Explain to me why Fred shouldn't exterminate his species, or tell me that he should.

The extortion aspect isn't essential. Fred could have been manipulated by true claims about making simulated people super happy.

ETA: At one point this comment had downvotes but no reply, but when I complained that that wasn't a rational discussion, someone actually replied. LessWrong is doing what it's supposed to do. Thanks people for making it and participating in it.

Comment author: benelliott 21 May 2011 12:30:13PM *  5 points [-]

I would give in to the alien demands in that situation, assuming we 'least convenient possible world' away all externalities (the aliens might not keep their promise, there might be quadrillions of sentient beings in other species who we could save by stopping these aliens).

The way the story is told makes it easy for us to put ourselves in the shoes of Fred, Thud or anyone else on earth, and hard to put ourselves in the shoes of the simulations, faceless masses with no salient personality traits beyond foolishness. This combination brings out the scope insensitivity in people.

A better way to tell the story would be to spend 1000 times as many words describing the point of view of the simulations as that of the people on earth. I wonder how ridiculous giving in would seem then.

Comment author: Normal_Anomaly 21 May 2011 03:41:15AM 2 points [-]

I'm not the person who downvoted you, but I suspect the reason was that when you said this:

You can travel from here to China and back. Therefore, caring about China has at least a potential instrumental consequence on the rest of my life. You can't travel from here to the aliens' simulation and back, so caring about what happens there imposes costs on the rest of my life but no benefits. [...] Now, if the black spheres had decent I/O capabilities and you could outsource human intellectual labor tasks to the simulations, I suppose it would make sense to care about what happens there. People can't do useful work while they're being tortured, so that wasn't the scenario in the story.

You implied that it's wrong or nonsensical to care about other people's happiness/absence of suffering as a terminal value. We are "allowed" to have whatever terminal values we want, except perhaps contradictory ones.

Comment author: ArisKatsaris 21 May 2011 05:16:58AM *  1 point [-]

Explain to me why Fred shouldn't exterminate his species, or tell me that he should.

The extortion aspect isn't essential. Fred could have been manipulated by true claims about making simulated people super happy.

I don't know what it means for a person to be simulated. I don't know if the simulated people have consciousness. Are we talking about people whose existence feels as real to themselves as it would to us? This is NOT an assumption I ever make about simulations, but should I consider it so for the sake of the argument?

  • If their experience doesn't feel real to themselves, then obviously there isn't any reason to care about what makes them happy or unhappy; that would be Fred being confused, conflating the experience of real people with that of fundamentally different simulated people.
  • If their internal experience is as real as ours, then it wouldn't really be the extermination of Fred's species: some of his species would survive in the simulations, albeit in eternal captivity.

He should or shouldn't exterminate his flesh-and-blood species based on whether his utility function assigns a higher value to a free (and alive) humanity than to a trillion individual sentients being happy.

For my part, I'd still choose a free and alive humanity. But that's an issue that depends on what terminal values we each have.

Comment author: TimFreeman 21 May 2011 12:48:03AM 1 point [-]

If so, your definition of sanity isn't the definition most people have.

Um, I never tried to define sanity. What are you responding to?

Comment author: ArisKatsaris 21 May 2011 05:00:52AM 0 points [-]

Apologies, I did indeed misremember who it was that was talking about "crazy notions"; that was Perplexed.

Comment author: Perplexed 21 May 2011 04:10:54AM *  0 points [-]

You seem to be collecting some downvotes that should have gone to me. To even things out, I have upvoted three of your comments. Feel free to downvote three of mine.

I fully agree, by the way, on the distinction between the moral relevance of simulated humans, who have no ability to physically influence our world, and the moral relevance of distant people here on earth, who physically influence us daily (though indirectly through a chain of intermediary agents).

Simulated persons do have the ability to influence us informationally, though, even if they are unaware of our existence and don't recognize their own status as simulations. I'm not sure what moral status I would assign to a simulated novelist - particularly if I liked his work.

ETA: To Normal_Anomaly: I do not deny people the right to care about simulations in terms of their own terminal values. I only deny them the right to insist that I care about simulations. But I do claim the right to insist that other people care about Chinese, for reasons similar to those Tim has offered.

Comment author: Bongo 20 May 2011 11:49:04AM *  1 point [-]

caring about simulated torture appears to be cost without benefit.

Generally the benefit of caring about any bad thing is that if you care about it there will be less of it, because you will work to stop it.

Comment author: TimFreeman 20 May 2011 03:40:08PM *  2 points [-]

caring about simulated torture appears to be cost without benefit.

Generally the benefit of caring about any bad thing is that if you care about it there will be less of it, because you will work to stop it.

Well, Fred cared, and his reaction was to propose exterminating humanity. I assume you think that was a wrong decision. Can you say why?

If you care about simulated torture (or simulated pleasure), and you're willing to shut up and multiply, then anybody with a big enough computer can get you to do anything even when that computer has no inputs or outputs and makes absolutely no difference to the real world. I think it's better to adjust oneself so one does not care. It's not like it's a well-tested human value that my ancestors on the savannah acted upon repeatedly.
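The "big enough computer" leverage is just linear aggregation: with any nonzero per-person weight, the extortionist only needs to cross a threshold in the number of simulations. A sketch, with hypothetical names and units:

```python
# "Shut up and multiply" leverage: total simulated welfare scales
# linearly in N, so any fixed concession cost is eventually swamped.
# Function name, weights, and costs are hypothetical illustrations.

def concession_justified(n_simulated, weight_per_person, concession_cost):
    """Does aggregate simulated welfare outweigh the real-world concession?"""
    return n_simulated * weight_per_person > concession_cost

# At a million simulations, a near-zero weight doesn't justify a huge concession:
assert not concession_justified(n_simulated=10**6,
                                weight_per_person=1e-6, concession_cost=1e9)
# But a big enough computer crosses any fixed threshold:
assert concession_justified(n_simulated=10**18,
                            weight_per_person=1e-6, concession_cost=1e9)
```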

Comment author: ArisKatsaris 20 May 2011 07:03:57PM 7 points [-]

If you care about simulated torture (or simulated pleasure), and you're willing to shut up and multiply, then anybody with a big enough computer can get you to do anything even when that computer has no inputs or outputs and makes absolutely no difference to the real world.

Do your calculations and preferred choices change if instead of "simulations", we're talking about trillions of flesh-and-blood copies of human beings who are endlessly tortured to death and then revived to be tortured again? Even if they're locked in rooms without entrances or exits, and it makes absolutely no difference to the outside world?

If you care about them, then anybody with a big enough copier-of-humans, and enough torture chambers, "can get you to do anything", as you say. So it's not really an issue that depends on caring for simulations. I wish the concept of "simulations" hadn't been needlessly added to the mix.

General Thud would possibly not care if it was the whole real-life population of China that got collected by the aliens, in exchange for a single village of Thud's own nation.

The issue of how-to-deal-with-extortion is a hard one, but it's just made fuzzier by adding the concept of simulations into the mix.

Comment author: TimFreeman 20 May 2011 07:50:36PM 2 points [-]

The issue of how-to-deal-with-extortion is a hard one, but it's just made fuzzier by adding the concept of simulations into the mix.

I agree that it's a fuzzy mix, but not the one you have in mind. I intended to talk about the practical issues around simulations, not about extortion.

Given that the aliens' extortion attempt cost them almost nothing, there's not much hope of gaming things to prevent it. Properly constructed, the black spheres would not have an audit trail leading back to the aliens' home, so a competent extortionist could prevent any counterattack. Extortion is not an interesting part of this situation.

If you care about them, then anybody with a big enough copier-of-humans, and enough torture chambers, "can get you to do anything", as you say. So it's not really an issue that depends on caring for simulations. I wish the concept of "simulations" hadn't been needlessly added to the mix.

Right. It's an issue about caring about things that are provably irrelevant to your day-to-day activities.

Comment author: Raemon 21 May 2011 05:45:45AM 1 point [-]

I intended to talk about the practical issues around simulations, not about extortion.

If you don't want to be talking about extortion, we shouldn't be talking about simulations in the context of extortion. So far as I can tell, the points you've made about useless preferences only matter in the context of extortion, where it doesn't matter whether we're talking about simulations or real people who have been created.

If it's about caring about things that are irrelevant to your everyday life, then the average random person on the other side of the world honestly doesn't matter much to you. They certainly wouldn't have mattered a few hundred years ago. If you were transported to the 1300s, would you care about Native Americans? If so, why? If not, why are you focusing on the "simulation" part?

If it turns out that OUR universe is a simulation, I assume you do not consider our creators to have an obligation to consider our preferences?

Comment author: Kyre 22 May 2011 07:06:27AM 0 points [-]

Right. It's an issue about caring about things that are provably irrelevant to your day-to-day activities.

Caring about those torturees feels a bit like being counterfactually mugged. Being the sort of person (or species) that doesn't care about things that are provably irrelevant to your day-to-day activities would avoid this case of extortion, but depending on the universe that you are in, you might be giving up bigger positive opportunities.

Comment author: nshepperd 21 May 2011 06:10:02AM 1 point [-]

That sounds like a flaw in the decision theory. What kind of broken decision theory achieves its values better by optimizing for different values?

Comment author: DanielLC 20 May 2011 06:17:01PM 0 points [-]

What do you mean by "the real world"? Why does it matter if it's "real"?

Comment author: TimFreeman 20 May 2011 06:47:47PM 2 points [-]

Why does it matter if it's "real"?

The real world generally doesn't get turned off. Simulations generally do. That's why it matters.

If there were a simulation that one might reasonably expect to run forever, it might make sense to debate the issue.

Comment author: DanielLC 20 May 2011 08:33:50PM 1 point [-]

Imagine that, instead of simulations, the spheres contained actual people. They are much smaller, don't have bodies the same shape, and can only seem to move and sense individual electrons, but they nonetheless exist in this universe.

It's still exactly the same sphere.

In any case, you only answered the first question. Why must something exist forever for it to matter morally? It's pretty integral to any debate about what exactly counts as "real" for this purpose.

Comment author: [deleted] 21 May 2011 05:50:01PM 2 points [-]

To a degree, arguing about extortion is arguing about definitions. In the context of the heuristic "don't give in to extortion", we would like to know exactly what the heuristic shouldn't give in to, though, and why.

In my opinion, the main problem is that the extortionist is making a no-downside trade: the thing it is trading is "not torturing simulated humans" or "not killing hostages" or whatever, which probably wasn't worth anything to the extortionist anyway.

A lot of no-downside trades are obviously unfair, so a useful heuristic is not to agree to no-downside trades in general. In fact, extremely unfair trades in general are metaphorically labeled "extortion" (for instance, I'm sure I've heard the term applied to the price of a diamond ring).

We can see cases besides straightforward extortion where people apply the no-downside heuristic. For instance, buying music from iTunes is a no-downside trade for iTunes at first glance: iTunes doesn't lose anything and gains 99 cents. In fact, iTunes has already spent money buying the rights to the music in expectation you'll download it, so this is something of an acausal trade: much like Omega, iTunes is very good at predicting what people will want, and if enough people aren't going to download a track, iTunes won't offer it. Acausal trades are counterintuitive, though, so it makes sense that some people are repelled by this offer and torrent the music instead.

Comment author: Perplexed 19 May 2011 06:49:02PM 1 point [-]

How is offering to supply ice cream characterized as "extortion"?

In any case, I was not using the scenario as a reductio against universal unreciprocated altruism. That notion fails under its own weight, due to complete absence of support.

Comment author: Wei_Dai 19 May 2011 07:06:16PM *  5 points [-]

Sorry, I misread your comment and thought it was an extortion scenario similar to the OP. Now that I've read it more carefully, it's not clear to me that we shouldn't give up the Niobium in order to provide those human workers with ice cream. (ETA: why did you characterize those humans as indentured workers? It would have worked as well if they were just ordinary salaried workers.)

That notion fails under its own weight, due to complete absence of support.

Altruists certainly claim to have support for their stated preferences. Or one could argue that preferences don't need to have support. What kind of support do you have for liking ice cream, for example?

Comment author: wedrifid 19 May 2011 07:12:09PM 3 points [-]

Sorry, I misread your comment and thought it was an extortion scenario similar to the OP.

Your reading wasn't far off: "in all of these thought experiments" makes your reply remain relevant.

Comment author: Perplexed 19 May 2011 07:25:37PM 5 points [-]

True enough. My main objection to calling my ice cream negotiating tactic 'extortion' is that I really don't like the "just say 'No' to extortion" heuristic. I see no way of definitionally distinguishing extortion from other, less objectionable negotiating stances. Nash's 1953 cooperative game theory model suggests that it is rational to yield to credible threats. I.e. saying 'no' to extortion doesn't win! An AI that begins with the "just say no" heuristic will self-modify to one that dispenses with that heuristic.

Comment author: Wei_Dai 19 May 2011 11:11:26PM *  6 points [-]

I don't think anybody is suggesting building an explicit "just say 'No' to extortion" heuristic into an AI. (I agree we do not have a good definition of "extortion" so when I use the word I use it in an intuitive sense.) We're trying to find a general decision theory that naturally ends up saying no to extortion (when it makes sense to).

Here's an argument that "saying 'no' to extortion doesn't win" can't be the full picture. Some people are more credibly resistant to extortion than others and as a result are less likely to be extorted. We want an AI that is credibly resistant to extortion, if such credibility is possible. Now if other players in the picture are intelligent enough, to the extent of being able to deduce our AI's decision algorithm, then isn't being "credibly resistant to extortion" the same as having a decision algorithm that actually says no to extortion?
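One way to sketch this: if the other player can deduce your decision algorithm, then being "credibly resistant" just means your algorithm actually returns "refuse" when simulated. A toy model, with all names and payoffs hypothetical:

```python
# Toy model: an extortionist who can read the victim's decision procedure
# attempts extortion only when the attempt has positive expected value.
# All functions, names, and numbers are hypothetical illustrations.

def refuser(ransom):
    """A victim whose decision procedure always refuses."""
    return "refuse"

def complier(ransom):
    """A victim who pays any demand."""
    return "pay"

def extortionist_attempts(victim_policy, attempt_cost, ransom):
    """The extortionist simulates the victim's algorithm before deciding."""
    expected_gain = ransom if victim_policy(ransom) == "pay" else 0.0
    return expected_gain - attempt_cost > 0

# Against a transparent refuser, even a cheap attempt isn't worth making:
assert not extortionist_attempts(refuser, attempt_cost=1.0, ransom=100.0)
assert extortionist_attempts(complier, attempt_cost=1.0, ransom=100.0)
```

Note that the model still needs the attempt to cost something: with `attempt_cost` exactly zero, the extortionist is indifferent and may try anyway, which is TimFreeman's original worry.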

ETA: Of course the concept of "credibility" breaks down a bit when all agents are reasoning this way. Which is why the problem is still unsolved!

Comment author: timtyler 20 May 2011 05:15:27PM 1 point [-]

Of course the concept of "credibility" breaks down a bit when all agents are reasoning this way.

It does what? How so?

Comment author: Perplexed 20 May 2011 05:06:16AM *  0 points [-]

I don't think anybody is suggesting building an explicit "just say 'No' to extortion" heuristic into an AI. (I agree we do not have a good definition of "extortion" so when I use the word I use it in an intuitive sense.) We're trying to find a general decision theory that naturally ends up saying no to extortion (when it makes sense to).

That is pretty incoherent. If you are trying to come up with a general decision theory that wins and also says no to extortion, then you have overdetermined the problem (or will overdetermine it once you supply your definition). If you are predicting that a decision theory that wins will say no to extortion, then it is a rather pointless claim until you supply a definition. Perhaps what you really intend to do is to define 'extortion' as 'that which a winning decision theory says no to'. In which case, Nash has defined 'extortion' for you - as a threat which is not credible, in his technical sense.

ETA: Of course the [informal] concept of "credibility" breaks down a bit when all agents are reasoning this way. Which is why the problem is still unsolved!

Why do you say the problem is still unsolved? What issues do you feel were not addressed by Nash in 1953? Where is the flaw in his argument?

Part of the difficulty of discussing this here is that you have now started to use the word "credible" informally, when it also has a technical meaning in this context.

Comment author: lessdazed 19 May 2011 11:34:49PM 0 points [-]

"Commit to just saying 'no' and proving that when just committing to just saying 'no' and proving that wins."

Perhaps something like that.

Comment author: timtyler 19 May 2011 07:48:02PM *  5 points [-]

I really don't like the "just say 'No' to extortion" heuristic.

Well you don't want to signal that you give in to extortion. That would just increase the chances of people attempting extortion against you. Better to signal that you are on a vendetta to stamp out extortion - at your personal expense!!!

Comment author: Perplexed 19 May 2011 10:03:45PM 0 points [-]

There is an idea, surprisingly prevalent on a rationality website, that costless signaling is an effective way to influence the behavior of rational agents. Or in other words, that it is rational to take signalling at face value. I personally doubt that this idea is correct. In any case, I reiterate that I suggest yielding only to credible threats. My own announcements do not change the credibility of any threats available to agents seeking to exploit me.

Comment author: lessdazed 19 May 2011 11:38:00PM 0 points [-]

Perhaps what is really being expressed is the belief that social costs are real, and that mere pseudonymous posting has costs.

Comment author: Perplexed 20 May 2011 05:10:08AM 2 points [-]

huh?????

Comment author: timtyler 19 May 2011 10:14:08PM *  0 points [-]

My own announcements do not change the credibility of any threats available to agents seeking to exploit me.

They influence the likelihood of them being made in the first place - by influencing the attacker's expected payoffs. Especially if it appears as though you were being sincere. Your comment didn't look much like signalling. I mean, it doesn't seem terribly likely that someone would deliberately publicly signal that they are more likely than unnamed others to capitulate if threatened with an attempt at extortion.

Credibly signalling resistance to extortion is non-trivial. Most compelling would be some kind of authenticated public track record of active resistance.

Comment author: timtyler 20 May 2011 05:21:32PM 2 points [-]

I see no way of definitionally distinguishing extortion from other, less objectionable negotiating stances.

Well, a simple way would be to use the legal definition of extortion. That should at least help prevent house fires, kidnapping, broken windows and violence.

...but a better definition should not be too difficult - for instance: the set of "offers" which you would rather not be presented with.

Comment author: wedrifid 19 May 2011 07:43:59PM 3 points [-]

My main objection to calling my ice cream negotiating tactic 'extortion'

My objection to calling the ice cream negotiation tactic 'extortion' is it just totally isn't. It's an offer of a trade.

Nash's 1953 cooperative game theory model suggests that it is rational to yield to credible threats. I.e. saying 'no' to extortion doesn't win!

Then it's a good thing we've made developments in our models in the last six decades!

Comment author: Perplexed 19 May 2011 10:08:49PM 2 points [-]

Then it's a good thing we've made developments in our models in the last six decades!

Cute. But perhaps you should provide a link to what you think is the relevant development.

Comment author: timtyler 20 May 2011 05:25:59PM *  3 points [-]

Well, the key concept underlying strong resistance to extortion is reputation management. Once you understand the long-term costs of becoming identified as a vulnerable "mark" by those in the criminal underground, giving in to extortion can start to look a lot less attractive.

Comment author: Perplexed 20 May 2011 06:47:26PM 0 points [-]

Tim, we are completely talking past each other here. To restate my position:

Nash in 1953 characterized rational 2 party bargaining with threats. Part of the solution was to make the quantitative distinction between 'non-credible' threats (which should be ignored because they cost the threatener so much to carry out that he would be irrational to do so), and 'credible' threats - threats which a threatener might rationally commit to carry out.

Since Nash is modeling the rationality of both parties here, it is irrational to resist a credible threat - in fact, to promise to do so is to make a non-credible threat yourself.

Hence, in Nash's model, costless signaling is pointless if both players are assumed to be rational. Such signaling does not change the dividing line between threats that are credible, and rationally should succeed, and those which are non-credible and should fail.

As for the 'costly signalling' that takes place when non-credible threats are resisted - that is already built into the model. And a consequence of the model is that it is a net loss to attempt to resist threats that are credible.
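The credible/non-credible dividing line can be sketched numerically (my own toy illustration with hypothetical payoffs, not Nash's actual 1953 construction):

```python
# Toy illustration of the credible/non-credible threat distinction:
# compare the threatener's payoff from carrying out the threat against
# his payoff from backing down, in the branch where the victim refuses.

def is_credible(carry_out_payoff, back_down_payoff):
    """Payoffs to the *threatener* if the victim refuses to pay.

    A threat is credible when carrying it out is no worse for the
    threatener than backing down, so committing to it is rational.
    """
    return carry_out_payoff >= back_down_payoff

# A threatener who loses nothing by following through: credible.
assert is_credible(carry_out_payoff=0, back_down_payoff=0)

# A threat that would cost the threatener dearly to execute is
# non-credible; absent a commitment device, it can be ignored.
assert not is_credible(carry_out_payoff=-10, back_down_payoff=0)
```

On this sketch, a promise to resist a credible threat fails the same test from the other side: resisting costs the resister more than paying, so the promise is itself non-credible.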

All of this is made very clear in any good textbook on game theory. It would save us all a great deal of time if you kept your amateur political theorizing to yourself until you have read those textbooks.

Comment author: atucker 19 May 2011 09:53:31PM -1 points [-]

My objection to calling the ice cream negotiation tactic 'extortion' is it just totally isn't. It's an offer of a trade.

To elaborate a bit:

I'll give you utility if you give me utility is a trade.

I won't cause you disutility if you give me utility is extortion.

Comment author: ArisKatsaris 20 May 2011 06:45:08PM *  7 points [-]

I'll give you utility if you give me utility is a trade.

I won't cause you disutility if you give me utility is extortion.

I don't think that's exactly the right distinction. Let's say you go to your neighbour because he's being noisy.

Scenario A: He says "I didn't mean to disturb you, I just love my music loud. But give me 10 dollars, and sure, I'll turn the volume down." I'd call that a trade, though it's still about him not giving you disutility.

Scenario B: He says "Yeah, I do that on purpose, so that I can make people pay me to turn the volume down. It'll be 10 bucks." I'd call that extortion.

The difference isn't in the results of the offer if you accept or reject -- the outcomes and their utility for you are the same in each case (loud music, or silence minus 10 dollars).

The difference is that in Scenario B, you wish the other person had never decided to make this offer. It's not the utility of your options that is to be compared, but the utility of the timeline where the trade can be made vs the utility of the timeline where the trade can't be made...

In the Trade scenario, if you can't make a trade with the person, he's still being noisy, and your utility is minimized. In the Extortion scenario, if you can't make a trade with the person, he has no reason to be noisy, and your utility is maximized.

I'll probably let someone else transform the above description into equations containing utility functions.
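Taking up that invitation, here is a minimal sketch of the timeline comparison, with hypothetical utility numbers for the noisy-neighbour scenarios (the `classify` function and its values are my own illustration, not anything from the thread):

```python
# Trade vs extortion via the timeline test: compare your best outcome
# in the world where the counterparty can make the offer against the
# world where the offer could never have been made.

def classify(u_no_counterparty, u_reject, u_accept):
    """u_no_counterparty: your utility if the offer could never be made.
    u_reject / u_accept: your utility after refusing / taking the deal."""
    best = max(u_reject, u_accept)
    return "trade" if best >= u_no_counterparty else "extortion"

# Scenario A: neighbour just loves loud music (noise = -5, fee = -1).
# Without him the music is still loud, so the offer can only help you.
assert classify(u_no_counterparty=-5, u_reject=-5, u_accept=-1) == "trade"

# Scenario B: he is noisy *in order to* charge you. Without him there
# is silence (0), and his offer only ever makes you worse off.
assert classify(u_no_counterparty=0, u_reject=-5, u_accept=-1) == "extortion"
```

Note that the reject/accept payoffs are identical in both scenarios; only the counterfactual baseline differs, which is exactly the point of the comment above.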

Comment author: atucker 20 May 2011 07:02:53PM -1 points [-]

Yeah, I was being sloppy.

The more important part for extortion is that they threaten to go out of their way to cause you harm. Schelling points and default states are probably relevant for the distinction.

You can't read a payoff table and declare it extortion or trade.

Comment author: Perplexed 19 May 2011 10:06:37PM 1 point [-]

And what is the distinction between giving utility and not giving disutility? As consequentialists, I thought we were committed to the understanding that they are the same thing.

Comment author: atucker 19 May 2011 11:06:45PM -1 points [-]

The distinction is that I can commit to not giving in to extortion without also turning down possibly beneficial trades.

Comment author: Perplexed 19 May 2011 07:15:12PM 0 points [-]

What kind of support do you have for liking ice cream, for example?

None at all. But then I don't claim that it is a universal moral imperative that will be revealed to be 'my own imperative' once my brain is scanned, the results of the scan are extrapolated, and the results are weighted in accordance with how "muddled" my preferences are judged to be.

Comment author: Wei_Dai 19 May 2011 07:30:36PM 3 points [-]

I see, so you're saying that universal unreciprocated altruism fails as a universal moral imperative, not necessarily as a morality that some people might have. Given that you used the word "crazy" earlier I thought you were claiming that nobody should have that morality.

Comment author: timtyler 19 May 2011 07:42:51PM 3 points [-]

I think it is easily possible to imagine naturalists describing some kinds of maladaptive behaviour as being "crazy". The implication would be that the behaviour was being caused by some kind of psychological problem interfering with their brain's normal operation.

Comment author: Perplexed 20 May 2011 04:39:33PM 0 points [-]

I thought you were claiming that nobody should have that morality.

I do claim that. In two flavors.

  1. Someone operating under that moral maxim will tend to dispense with that maxim as they approach reflective equilibrium.

  2. Someone operating under that 'moral' maxim is acting immorally - this operationally means that good people should (i.e. are under a moral obligation to) shun such a moral idiot and make no agreements with him (since he proclaims that he cannot be trusted to keep his commitments).

Part of the confusion between us is that you seem to want the word 'morality' to encompass all preferences - whether a preference for chocolate over vanilla, or a preference for telling the truth over lying, or a preference for altruism over selfishness. It is the primary business of metaethics to make the distinction between moral opinions (i.e. opinions about moral issues) and mere personal preferences.

Comment author: Wei_Dai 20 May 2011 07:18:05PM *  2 points [-]

Part of the confusion between us is that you seem to want the word 'morality' to encompass all preferences - whether a preference for chocolate over vanilla, or a preference for telling the truth over lying, or a preference for altruism over selfishness.

No, I don't want that. In fact I do not currently have a metaethical position beyond finding all existing metaethical theories (that I'm aware of) to be inadequate. In my earlier comment I offered two possible lines of defense for altruism, because I didn't know which metaethics you prefer:

Altruists certainly claim to have support for their stated preferences. Or one could argue that preferences don't need to have support.

In your reply to that comment you chose to respond to only the second sentence, hence the "confusion".

Anyway, why don't you make a post detailing your metaethics, as well as your arguments against "universal unreciprocated altruism"? It's not clear to me what you're trying to accomplish by calling people who believe such things (many of whom are very smart and have already seriously reflected on these issues) "crazy" without backing up your claims.

Comment author: Perplexed 20 May 2011 09:17:24PM *  0 points [-]

It's not clear to me what you're trying to accomplish by calling people who believe such things (many of whom are very smart and have already seriously reflected on these issues) "crazy" without backing up your claims.

I'm not sure why you think I have called anyone crazy. What I said above is that a particular moral notion is crazy.

Perhaps you instead meant to complain that (in the grandparent) I had referred to the persons in question as "moral idiots". I'm afraid I must plead guilty to that bit of hyperbole.

Anyway, why don't you make a post detailing your metaethics, as well as your arguments against "universal unreciprocated altruism"?

I am gradually coming to think that there is little agreement here as to what the word metaethics even means. My current understanding is that metaethics is what you do to prepare the linguistic ground so that people operating under different ethical theories and doctrines can talk to each other. Meta-ethics strives to be neutral and non-normative. There are no meta-ethical facts about the world - only definitions that permit discourse and disputation about the facts.

Given this interpretation of "meta-ethics", it would seem that what you mean to suggest is that I make a post detailing my normative ethics, which would include an argument against "universal unreciprocated altruism" (which I take to be a competing theory of normative ethics).

Luke and/or Eliezer and/or any trained philosopher here: I would appreciate feedback as to whether I finally have the correct understanding of the scope and purpose of meta-ethics.

Comment author: Wei_Dai 20 May 2011 11:02:02PM 0 points [-]

Given this interpretation of "meta-ethics", it would seem that what you mean to suggest is that I make a post detailing my normative ethics, which would include an argument against "universal unreciprocated altruism" (which I take to be a competing theory of normative ethics).

I thought you might have certain metaethical views, which might be important for understanding your normative ethics. But yes, I'm mainly interested in hearing about your normative ethics.

Comment author: lessdazed 19 May 2011 11:33:03PM 0 points [-]

Is there a standard name for the logical fallacy

Hidden assumptions play a role similar to the auxiliary hypotheses which undermine naive Popperianism. The fallacy of ignoring auxiliary assumptions seems like a special case of the fallacy of presenting an argument from ignorance.

Comment author: Bongo 21 May 2011 06:14:55PM *  9 points [-]

No, I think the central "problem" is that having preferences that others can thwart with little effort is risky because it makes you more vulnerable to extortion.

For example, if you have a preference against non-prime heaps of pebbles existing, the aliens can try to extort you by building huge numbers of non-prime heaps on their home planet and sending you pictures of them, and therefore, the argument goes, it's crazy and stupid to care about non-prime heaps.

The argument also yields a heuristic that the farther away a thing is from you, the more stupid and crazy it is to care about it.

Comment author: Perplexed 22 May 2011 04:19:23AM 1 point [-]

Right. What you are saying is related to the notion of "credible threats". If other agents can give you disutility with little disutility for themselves, then they have a credible threat against you. And unless you either change your utility function, or find a way of making it much more difficult and costly for them to harm you, the rational course is to give in to the extortion.

One way to make it costly for others to harm you is to join a large coalition which threatens massive retaliation against anyone practicing extortion against coalition members. But notice that if you join such a coalition, you must be willing to bear your share of the burden should such retaliation be necessary.

The alternative I suggested in the grandparent was to change your utility function so as to make you less vulnerable - only care about things you have control over. Unfortunately, this is advice that may be impossible to carry out. Preferences, as several commentators here have pointed out, tend to be incorrigible.

Comment author: wedrifid 22 May 2011 06:51:28AM *  4 points [-]

The alternative I suggested in the grandparent was to change your utility function so as to make you less vulnerable - only care about things you have control over. Unfortunately, this is advice that may be impossible to carry out. Preferences, as several commentators here have pointed out, tend to be incorrigible.

I took the obvious solution to that difficulty. I self-modified to an agent that behaves exactly as if he had self-modified to be an agent with preferences that make him less vulnerable. This is a coherent configuration for my atoms to be in terms of physics and is also one that benefits me.

Comment author: cousin_it 19 May 2011 06:16:37PM *  2 points [-]

Your variation is better than mine! Not sure about your solution though, it looks a little hurried.

Comment author: wedrifid 19 May 2011 06:39:08PM 2 points [-]

Your variation is better than mine!

However it is a different problem. An interesting problem in its own right but one for which many people's coherent preferences will produce a different answer for slightly different reasons.

Comment author: mendel 22 May 2011 11:13:44PM 0 points [-]

The central problem in all of these thought experiments is the crazy notion that we should give a shit about the welfare of other minds simply because they exist and experience things analogously to the way we experience things.

Well, I see the central problem in the notion that we should care about something that happens to other people if we're not the ones doing it to them. Clearly, the aliens are sentient; they are morally responsible for what happens to these humans. While we certainly should pursue possible avenues to end the suffering, we shouldn't act as if we were the ones doing it.

Comment author: Perplexed 23 May 2011 01:29:46AM 0 points [-]

Interesting. Though in the scenario I suggested there is no suffering. Only an opportunity to deploy pleasure (ice cream).

I'm curious as to your reasons why you hold the aliens morally responsible for the human clones - I can imagine several reasons, but wonder what yours are. Also, I am curious as to whether you think that the existence of someone with greater moral responsibility than our own acts to decrease or eliminate the small amount of moral responsibility that we Earthlings have in this case.

Comment author: mendel 23 May 2011 09:11:01AM *  0 points [-]

Why would I not hold them responsible? They are the ones who are trying to make us responsible by giving us an opportunity to act, but their opportunities are much more direct - after all, they created the situation that exerts the pressure on us. This line of thought is mainly meant to be argued in Fred's terms, who has a problem with feeling responsible for this suffering (or non-pleasure) - it offers him an out of the conundrum without relinquishing his compassion for humanity (i.e. I feel the ending as written is illogical, and I certainly think "Michael" is acting very unprofessionally for a psychoanalyst). ["Relinquish the compassion" is also the conclusion you seem to have drawn, thus my response here.]

Of course, the alien strategy might not be directed at our sense of responsibility, but at some sort of game theoretic utility function that proposes the greater good for the greater number - these utility functions are always sort of arbitrary (most of them on lesswrong center around money, with no indication why money should be valuable), and the arbitrariness in this case consists in including the alien simulations, but not the aliens themselves. If the aliens are "rational agents", then not rewarding their behaviour will make them stop it if it has a cost, while rewarding it will make them continue. (Haven't you ever wondered how many non-rational entities are trying to pose conundrums to rational agents on here? ;)

I don't have a theory of quantifiable responsibility, and I don't have a definite answer for you. Let's just say there is only a limited amount of stuff we can do in the time that we have, so we have to make choices about what to do with our lives. I hope that Fred comes to feel that he can accomplish more with his life than to indirectly die for a tortured simulation that serves alien interests.