though not quite as good as me cooperating against everyone else's defection.
Shouldn't it be the other way around? (you defecting while everyone else cooperates)
ETA: liking this sequence so far, feels like I'm getting the concepts better now.
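A toy sketch of the payoff ordering in question, with made-up numbers (nothing here is from the post itself): in a one-shot Prisoner's Dilemma, you defecting while everyone else cooperates is the best outcome for you, and you cooperating against everyone else's defection is the worst.

```python
# Standard one-shot Prisoner's Dilemma payoffs from "my" point of view.
# Numbers are illustrative; only the ordering matters:
# T (I defect, they cooperate) > R (mutual cooperation)
# > P (mutual defection) > S (I cooperate, they defect).
PAYOFFS = {
    ("defect", "cooperate"): 5,    # T: temptation, best for me
    ("cooperate", "cooperate"): 3, # R: reward
    ("defect", "defect"): 1,       # P: punishment
    ("cooperate", "defect"): 0,    # S: sucker's payoff, worst for me
}

assert (PAYOFFS[("defect", "cooperate")]
        > PAYOFFS[("cooperate", "cooperate")]
        > PAYOFFS[("defect", "defect")]
        > PAYOFFS[("cooperate", "defect")])
```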
The best summary I can give here is that AIs are expected to be expected utility maximisers that completely ignore anything which they are not specifically tasked to maximise.
Counter example: incoming asteroid.
I thought utility maximizers were allowed to make the inference "Asteroid Impact -> reduced resources -> low utility -> action to prevent that from happening", which is kinda part of the reason why AI is so dangerous: "Humans may interfere -> Humans in power is low utility -> action to prevent that from happening"
They ignore anything but what they're maximizing in the sense that they don't follow the spirit of the code but rather its letter, all the way to the potentially brutal (for humans) conclusions.
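A minimal sketch of that inference chain, assuming a toy world model (the actions, probabilities, and "paperclip" counts below are all invented for illustration): the maximizer has no explicit term for caring about asteroids, yet deflection falls out of plain expected-utility maximization, because the impact destroys the resources it does care about.

```python
# Toy expected-utility maximizer. It is only tasked with maximizing
# paperclip output, but its world model says an asteroid impact destroys
# its resources, so deflecting the asteroid wins on expected utility
# without any explicit "care about asteroids" term.

ACTIONS = {
    # action: {outcome: (probability, paperclips produced)}
    "ignore_asteroid":  {"impact": (0.90, 0), "no_impact": (0.10, 100)},
    "deflect_asteroid": {"impact": (0.01, 0), "no_impact": (0.99, 90)},
}

def expected_utility(action: str) -> float:
    return sum(p * clips for p, clips in ACTIONS[action].values())

print(max(ACTIONS, key=expected_utility))  # -> deflect_asteroid (EU 89.1 vs 10.0)
```

The same structure makes the "Humans may interfere" chain go through: any event the world model says lowers the maximized quantity gets acted against, spirit of the code notwithstanding.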
You would care if certain objects are destructively teleported but not care if the same happens to you (and presumably other humans)
Yeah, I would use a teleporter any time if it was safe. But I would only pay a fraction for certain artifacts that were teleported.
Is this a preference you would want to want? I mean, given the ability to self-modify, would you rather keep putting (negative) value on concepts like "copy of" even when there's no practical physical difference?
I would keep that preference. And there is a difference. All the effort it took to relocate an object adds to its overall value. If only for the fact that other people who share my values, or play the same game and therefore play by the same rules, will desire the object even more.
Also, can you trace where this preference is coming from?
Part of the value of touching an asteroid from Mars is the knowledge of its spacetime trajectory. An atomically identical copy of a Mars rock, digitally transmitted by a robot probe and printed out for me by my molecular assembler, is very different. It is still a rock from Mars, but its spacetime trajectory is different: it is artificial.
This is similar to drinking Champagne versus sparkling wine that tastes exactly the same. The first is valued because, while drinking it, I am aware of its spacetime trajectory: the resources it took to create it, where it originally came from, and how it got here.
If only for the fact that other people who share my values, or play the same game and therefore play by the same rules, will desire the object even more.
How about if there were two worlds: one where people care about whether a spacetime trajectory does or does not go through a destroy-rebuild cycle, and one where they spend that effort on other things they value. In that case, which world would you rather live in?
The Champagne example helps, I can understand putting value on effort for attainment, but I'd like another clarification:
If you have two rocks, where rock 1 is brought from Mars via spaceship, and rock 2 is the same as rock 1 except that after receiving it you teleport it 1 meter to the right, would you value rock 2 less than rock 1? If yes, why would you care about that but not about yourself undergoing the same?
I think the Ship of Theseus problem is good reductionism practice. Anyone else think similarly?
If I were to use an advanced molecular assembler to create a perfect copy of the Mona Lisa and destroy the old one in the process, it would still lose a lot of value. That is because many people not only value the molecular setup of things but also their causal history, what transformations things underwent.
Personally I wouldn't care if I was disassembled and reassembled somewhere else. If that was a safe and efficient way of travel then I would do it. But I would care if that happened to some sort of artifact I value. Not only because it might lose some of its value in the eyes of other people but also because I personally value its causal history to be unaffected by certain transformations.
So in what sense would a perfect copy of the Mona Lisa be the same? In every sense except that it was copied. And if you care about that quality then a perfect copy is not the same, it is merely a perfect copy.
You would care if certain objects are destructively teleported but not care if the same happens to you (and presumably other humans)
Is this a preference you would want to want? I mean, given the ability to self-modify, would you rather keep putting (negative) value on concepts like "copy of" even when there's no practical physical difference? Note that this doesn't mean no longer caring about causal history. (you care about your own causal history in the form of memories and such)
Also, can you trace where this preference is coming from?
The "Thermostats can have beliefs" seems like a really good example of how beliefs should affect actions.
(For those looking, map 3 lowest area)
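A minimal sketch of what "a thermostat has beliefs" might cash out to (the function and numbers are my own illustration, not from the post): the device holds a single bit of "belief" about the world, and that belief exists only to steer action.

```python
# A thermostat as a minimal believer: its "belief" is a bit derived from
# the sensed temperature, and the belief's entire job is to pick an action.

def thermostat_step(sensed_temp: float, setpoint: float = 20.0) -> str:
    believes_too_cold = sensed_temp < setpoint  # the "belief"
    return "heater_on" if believes_too_cold else "heater_off"  # the action

print(thermostat_step(18.0))  # heater_on
print(thermostat_step(22.0))  # heater_off
```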
Isn't Harry a little young to have played Fate/Stay Night, both in the sense of it being a Japanese porno game not suitable for 11-year-olds and it not having been made yet when the story is set?
EDIT: Clearly this is intended as a hint that he has the time-traveling adult Voldemort's memories implanted in him.
He didn't actually have to have read it, merely to have come across that particular quote.
Is there a non-car way to get there from Haifa on time? (5 minutes of searching says the earliest bus arrives at 19:38 at the new central bus station)
It should be mentioned that when considering things like cryonics in the Big World, you can't just treat all the other "you" instances as making independent decisions; they'll be thinking similarly enough to you that whatever conclusion you reach is what most "you" instances will end up doing (unless you randomize, and assuming 'most' even means anything).
Seriously, I'd expect people to at least mention the superrational view when dealing with clones of themselves in decide-or-die coordination games.
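A toy sketch of the difference this makes, with invented numbers (N copies, everyone survives only if all choose "cooperate"): modeling the other copies as independent coin flips makes coordination look risky, while modeling them as running your own decision procedure makes it nearly free.

```python
# Decide-or-die coordination among N identical copies: all must cooperate.
# N and the per-copy cooperation probability are illustrative assumptions.

N = 10
P_COOPERATE = 0.9  # chance any one copy cooperates, if modeled as independent

# Naive model: the other copies decide independently of me.
p_survival_independent = P_COOPERATE ** (N - 1)
print(f"{p_survival_independent:.2f}")  # ~0.39 -- cooperation looks risky

# Superrational model: the copies run the same decision procedure I do,
# so whatever I conclude, they conclude too.
def my_outcome(my_choice: str) -> str:
    everyones_choice = my_choice  # clones mirror my reasoning
    return "live" if everyones_choice == "cooperate" else "die"

print(my_outcome("cooperate"))  # live -- coordination is (almost) automatic
```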