Comment author: simon2 03 February 2009 06:40:00PM 0 points [-]

... but relative to simply cooperating, it seems a clear win. Unless the superhappies have thought of it and planned a response.

Of course, the corollary for the real world would seem to be: those people who think that most people would not converge if "extrapolated" by Eliezer's CEV ought to exterminate the people they disagree with on moral questions before the AI is strong enough to stop them, assuming Eliezer has not programmed the AI to punish that sort of thing.

Hmm. That doesn't seem so intuitively nice. I wonder if it's just a quantitative difference between the scenarios (e.g. the quantity of moral divergence) or a qualitative one (e.g. the babyeaters being bad enough to justifiably be killed in the first place).

Comment author: simon2 03 February 2009 06:14:00PM 0 points [-]

If the humans know how to find the babyeaters' star, and if the babyeater civilization can be destroyed by blowing up one star, then I would like to suggest that they kill off the babyeaters.

Not for the sake of the babyeaters (I consider the proposed modifications to them better than annihilation, from humanity's perspective), but to prevent the superhappies from making even watered-down modifications adding babyeater values - not so much to humans, since that can also be (at least temporarily) prevented by destroying Huygens - but to themselves, as they are going to be the dominant life form in the universe over time, being the fastest-growing and fastest-advancing species.

Of course, relative to destroying Huygens the price to pay in terms of modifications to human values is high, so I would not make this decision lightly.

Comment author: simon2 31 January 2009 10:57:05PM 0 points [-]

James Andrix: I don't claim that the aliens would prefer modification over death, only that it is more consonant with my conception of human values to modify them than exterminate them, notwithstanding that the aliens may prefer the latter.

Comment author: simon2 31 January 2009 02:21:57PM 1 point [-]

Akon claims this is a "true" prisoner's dilemma situation, and then tries to add more values to one side of the scale. If he adds enough values to make cooperation higher value than defecting, then he was wrong to say it was a true prisoner's dilemma. But the story has made it clear that the aliens do not appear smart enough to accurately anticipate human behaviour (nor vice versa, for that matter), so this is not a situation where it is rational to cooperate in a true prisoner's dilemma. If it really is a true prisoner's dilemma, they should just defect.
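The defection argument can be made concrete with a hypothetical payoff matrix (the numbers here are invented for illustration; any payoffs with the standard temptation > reward > punishment > sucker ordering behave the same way). Absent the ability to predict the other side's choice, defecting pays more no matter what the opponent does:

```python
# Row player's payoffs under the standard prisoner's dilemma ordering:
# temptation (5) > reward (3) > punishment (1) > sucker (0).
payoff = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

# Without mutual prediction, defection strictly dominates:
# whatever the opponent plays, D yields more than C.
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]
print("defect dominates cooperate")
```

Mutual prediction is exactly what breaks this dominance reasoning, which is why the comment stresses that neither side can anticipate the other.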

Of course, there may be a more humane approach than extermination or requiring them to live under human law: forcible modification to remove the desire to eat babies, and reduce the amount of reproduction. It might be a little tricky to do this without completely messing up the aliens' psychology.

Also, it seems a little unlikely that a third ship would arrive given that the arrival of even one alien ship was considered so surprising in the first installment.

In response to Changing Emotions
Comment author: simon2 05 January 2009 01:52:10AM 0 points [-]

Strong enough to disrupt personal identity, if taken in one shot? That's a difficult question to answer, especially since I don't know what experiment to perform to test any hypotheses. On one hand, billions of neurons in my visual cortex undergo massive changes of activation every time my eyes squeeze shut when I sneeze - the raw number of flipped bits is not the key thing in personal identity. But we are already talking about serious changes of information, on the order of going to sleep, dreaming, forgetting your dreams, and waking up the next morning as though it were the next moment.

It sounds as if you believe in a soul (or equivalent) that is "different" for some set of possible changes and "the same" for other possible changes. I would suggest that whether an entity at time n+1 is the same person as you at time n is not an objective fact of the universe. Humans have evolved so that we consider the mind that wakes up in the body of the mind that went to sleep to be the same person, but this intuitive sense is not an intuitive understanding of an objective reality; one could modify oneself to consider sleep to disrupt identity, and this would not be a "wrong" belief but just a different one.

I think most people are most comfortable retaining their evolution-given intuitions where they are strong, but where they are weak I think it is a mistake to try to overgeneralize them; instead one should try to shape them consciously. If you want to try being female for a while, why spoil your fun with hang-ups about identity? Just decide that it's still you.

Comment author: simon2 16 December 2008 02:11:07AM 0 points [-]

Eliezer, whenever you start thinking about people who are completely causally unconnected with us as morally relevant, alarm bells should go off.

What's worse, though, is that if your opinion on this is driven by a desire to justify not agreeing with the "repugnant conclusion", it may signify problems with your morality that could annihilate humanity if you give your morality to an AI. The repugnant conclusion requires valuing the bringing into existence of hypothetical people with total utility x as much as one disvalues reducing the utility of existing people by x, or annihilating people with utility x. Give that morality to a fast-takeoff AI and it will quickly replace all humans with entities with greater capacity for utility. If the AI is programmed to believe the problem with the "repugnant conclusion" is what you claim, the AI will instead create randomized (for high uniqueness) minds with high capacity for utility, still annihilating humans.
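The replacement worry follows directly from pure total-utility aggregation. A minimal sketch, with all population sizes and utility figures invented for illustration, shows why an optimizer maximizing the sum would make the swap:

```python
# Invented numbers: a human population vs. a vast population of
# barely-positive-utility replacement minds.
humans = {"count": 8_000_000_000, "utility_each": 50}
replacements = {"count": 10**15, "utility_each": 1}

def total_utility(pop):
    # Pure total utilitarianism: sum utility over every individual.
    return pop["count"] * pop["utility_each"]

# Because the aggregation rule treats creating utility x exactly like
# preserving utility x, the larger total wins:
assert total_utility(replacements) > total_utility(humans)
print(total_utility(humans), "<", total_utility(replacements))
```

Any aggregation rule with this property prefers the swap whenever the replacements' total exceeds the humans' total, regardless of how low each replacement's individual utility is.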

Comment author: simon2 25 October 2008 12:46:43AM 1 point [-]

I think what he means by "calibrated" is something like this: it should not be possible for someone else to systematically improve the probabilities you give for the possible answers to a question just from knowing what values you've assigned (and your biases), without looking at what the question is.

I suppose the improvement would indeed be measured in terms of relative entropy of the "correct" guess with respect to the guess given.
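As a concrete sketch of that scoring idea (all distributions here are invented for illustration): if a blind, question-independent correction of a forecaster's stated probabilities moves them closer to the true frequencies, the forecaster was not calibrated, and "closer" can be measured as relative entropy:

```python
import math

def relative_entropy(p, q):
    """D(p || q) in bits: expected extra log-loss from using
    forecast q when p is the reference distribution."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Invented numbers: true long-run frequencies vs. an overconfident forecaster.
truth = [0.5, 0.3, 0.2]
raw_forecast = [0.8, 0.15, 0.05]
# A systematic correction (shrinking extreme probabilities) applied without
# looking at the question, only at the forecaster's known bias:
recalibrated = [0.6, 0.25, 0.15]

# The blind correction reduces relative entropy, so the raw forecast was
# improvable without question-specific knowledge, i.e. miscalibrated.
assert relative_entropy(truth, recalibrated) < relative_entropy(truth, raw_forecast)
```

A perfectly calibrated forecast coincides with the reference distribution, at which point the relative entropy is zero and no blind correction can help.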

Comment author: simon2 24 October 2008 08:33:00PM 0 points [-]

Responding to Gaffa (I had meant to respond right after the comment, but got sidetracked):

When approaching a scientific or mathematical problem, I often find myself trying hard to avoid having to calculate and reason, and instead try to reach for an "intuitive" understanding in the back of my mind, but that understanding, if I can even find it, is rarely sufficient when dealing with actual problems.

I would advise you to embrace calculation and reason, but make sure you think about what you are doing and why. Use the tools, but try to get an intuitive understanding of why and how they work, both in general and each time you apply them. It is true that formulaic rules can serve as a crutch to avoid the need for understanding, but if you throw away calculation and reason, you are not likely to make much progress.

Finally, be realistic in your expectations: for a complicated problem, you should not expect to be able to get an intuitive understanding of the solution as a single step, but you can aim for a chain of individually intuitive steps and, if the chain is sufficiently short, an overall intuitive understanding of how the steps relate to one another.

In response to Prices or Bindings?
Comment author: simon2 21 October 2008 05:31:15PM 16 points [-]

It might make an awesome movie, but if it were expected behaviour, it would defeat the point of the injunction. In fact if rationalists were expected to look for workarounds of any kind it would defeat the point of the injunction. So the injunction would have to be, not merely to be silent, but not to attempt to use the knowledge divulged to thwart the one making the confession in any way except by non-coercive persuasion.

Or alternatively, not to ever act in a way such that if the person making the confession had expected it they would have avoided making the confession.

In response to Ethical Inhibitions
Comment author: simon2 19 October 2008 11:34:24PM 0 points [-]

To the extent that a commitment to ethics is externally verifiable, it would encourage other people to cooperate, just as a tendency to anger (a visible commitment to retribution) is a disincentive to doing harm.

Also, even if it is not verifiable, a person who at least announces their intention to hold to an ethical standard has raised the impact their failure to do so will have on their reputation, and thus the announcement itself should have some impact on the expectation that they will behave ethically.
