In response to Timeless Identity
Comment author: Sebastian_Hagen2 04 June 2008 07:28:00AM 0 points

What if cryonics were phrased as the ability to create an identical twin from your brain at some point in the future, rather than 'you' waking up. If all versions of people are the same, this distinction should be immaterial. But do you think it would have the same appeal to people?

I don't know, and unless you're trying to market it, I don't think it matters. People make silly judgements on many subjects; blindly copying the majority in this society isn't particularly good advice.

Each twin might feel strong regard for the other, but there's no way they would actually be completely indifferent between pain for themselves and pain for their twin.

Any reaction of this kind is either irrational, based on divergence which has already taken place, or based on value systems very different from my own. In real life, you'd probably get a mix of the first two, and possibly also the last, from most people.

If another 'me' were created on mars and then got a bullet in the head, this would be sad, but no more so than any other death. It wouldn't feel like a life-extending boon when he was created, nor a horrible blow to my immortality when he was destroyed.

For me, this would be a quantitative judgement: it depends on how much both instances have changed since the split. If the time lived before the split is significantly longer than that after, I would consider the other instance a near-backup, and judge the relevance of its destruction accordingly. Aside from the aspect of valuing the other person as a human like any other that also happens to share most of your values, it's effectively like losing the only (and somewhat out-of-date) backup of a very important file: No terrible loss if you can keep the original intact until you can make a new backup, but an increased danger in the meantime.

If you truly believe that 'the same atoms means its 'you' in every sense', suppose I'm going to scan you and create an identical copy of you on mars. Would you immediately transfer half your life savings to a bank account only accessible from mars? What if I did this a hundred times?

Maybe, maybe not; it depends on the exact strategy I'd mapped out beforehand for what each of the copies would do after the split. If I didn't have enough foresight to do that beforehand, all of my instances would have to agree on the strategy (including allocation of initial resources) over IRC or wiki or something, which could get messy with a hundred of them - so please, if you ever do this, give me a week of advance warning. Splitting it up evenly might be OK in the case of two copies (assuming they both have comparable expected financial load and income in the near term), but would fail horribly for a hundred; there just wouldn't be enough money left for any of them to matter at all (I'm currently a poor university student; I don't really have "life savings" in transferable format).

In response to Timeless Identity
Comment author: Sebastian_Hagen2 03 June 2008 05:57:40PM 0 points

Is the 'you' on mars the same as 'you' on Earth?

There's one of you on earth, and one on mars. They start out (by assumption) the same, but will presumably increasingly diverge due to different input from the environment. What else is there to know? What does the word 'same' mean for you?

And what exactly does that mean if the 'you' on earth doesn't get to experience the other one's sensations first hand? Why should I care what happens to him/me?

That's between your world model and your values. If this happened to me, I'd care because the other instance of myself happens to have similar values to the instance making the judgement, and will therefore try to steer the future into states which we will both prefer.

Comment author: Sebastian_Hagen2 23 May 2008 11:20:10AM 1 point

But I don't buy the idea of intelligence as a scalar value.

Do you have a better suggestion for specifying how effective a system is at manipulating its environment into specific future states? Unintelligent systems may work much better in some specific environments than in others, but any really intelligent system should be able to adapt to a wide range of environments. Which important aspect of intelligence do you think can't be expressed in a scalar rating?

Comment author: Sebastian_Hagen2 13 May 2008 08:05:00PM 5 points

They only depend to within a constant factor. That's not the problem; the REAL problem is that K-complexity is uncomputable, meaning that you cannot in any way prove that the program you're proposing is, or is NOT, the shortest possible program to express the law.

I disagree; I think the underspecification is a more serious issue than the uncomputability. There are constant factors that outweigh, by a massive margin, all evidence ever collected by our species. Unless there's a way for us to get our hands on an infinite amount of CPU time, there are constant factors that outweigh, by a massive margin, all evidence we will ever have a chance to collect. For any two strings, you can assign a lower complexity to either one by choosing the description language appropriately. Some way to make a good enough (not necessarily optimal) judgement about which language to use is needed for the complexity metric to make any sense.

The uncomputability is unfortunate, but hardly fatal. You can just spend some finite effort trying to find the shortest program that produces each string, using the best heuristics available for this job, and use that as an approximation and upper bound. If you wanted to turn this into a social process, you could reward people for discovering shorter programs than the shortest currently known for existing theories (proving that those theories are simpler than was known up to that point), as well as for collecting new evidence to discriminate between them.
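
As a very rough sketch of the "approximation and upper bound" idea: the length of any encoding that provably reproduces a string (for example its output under a general-purpose compressor, plus the fixed decompressor) is an upper bound on that string's Kolmogorov complexity relative to that description language. The Python fragment below is only an illustration under that assumption; the function name and the choice of lzma are arbitrary.

```python
import lzma
import os

def complexity_upper_bound(s: bytes) -> int:
    # The compressed length is the size of one particular description of s;
    # any such description upper-bounds the Kolmogorov complexity of s
    # relative to the fixed language "lzma stream + decompressor".
    # Better heuristics can only push this bound down, never invalidate it.
    return len(lzma.compress(s))

# A highly regular string should get a much smaller bound than random bytes.
print(complexity_upper_bound(b"ab" * 10000))
print(complexity_upper_bound(os.urandom(20000)))
```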

Comment author: Sebastian_Hagen2 13 May 2008 11:41:39AM 2 points

But when I say "macroscopic decoherence is simpler than collapse" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.

Computer programs in which language? The Kolmogorov complexity of a given string depends on the choice of description language (or programming language, or UTM) used. I'm not familiar with MML, but considering that it's apparently strongly related to Kolmogorov complexity, I'd expect its simplicity ratings to be similarly dependent on parameters for which there is no obvious optimal choice.
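
To make the dependence concrete, one can treat a few off-the-shelf compressors as crude stand-ins for different description languages: the same pair of strings comes out with different lengths, and with suitably chosen inputs even in a different order, under each one. A hedged Python sketch; the particular strings and compressors are arbitrary choices for illustration, not a claim that any of them is the "right" language.

```python
import bz2
import lzma
import zlib

def compressed_lengths(s: bytes) -> dict:
    # Each compressor plays the role of a different description language;
    # the resulting lengths stand in for language-relative complexity.
    return {"zlib": len(zlib.compress(s)),
            "bz2":  len(bz2.compress(s)),
            "lzma": len(lzma.compress(s))}

a = b"abcd" * 500            # short-period repetition
b = bytes(range(256)) * 8    # longer-period structure

print(compressed_lengths(a))
print(compressed_lengths(b))
# The absolute numbers differ from compressor to compressor, and for some
# inputs even the answer to "which string is simpler?" flips - which is
# exactly the language-dependence at issue.
```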

If one uses these metrics to judge the simplicity of hypotheses, any probability judgements based on them will ultimately depend strongly on this parameter choice. Given that, what's the best way to choose these parameters? The only two obvious ways I see are to either 1) make an intuitive judgement, which means the resulting complexity ratings might not turn out any more reliable than if you intuitively judged the simplicity of each individual hypothesis, or 2) figure out which of the resulting choices can be implemented more cheaply in this universe; i.e. try to build the smallest/least-energy-using computer for each reasonable-seeming language, and see which one turns out cheapest. Since resource use at runtime doesn't matter for Kolmogorov complexity, it would probably be appropriate to consider how well the designs would work if scaled up to include immense amounts of working memory, even if they're never actually built at that scale.

Neither of those is particularly elegant. I think 2) might work out, but it is unfortunately itself quite sensitive to parameter choice.

Comment author: Sebastian_Hagen2 12 May 2008 11:51:06AM 7 points

"A short time?" Jeffreyssai said incredulously. "How many minutes in thirty days? Hiriwa?"

"28800, sensei," she answered. "If you assume sixteen-hour waking periods and daily sleep, then 19200 minutes."

I would have expected the answers to be 43200 (30d * 24h/d * 60min/h) and 28800 (30d * 16h/d * 60min/h), respectively. Do these people use another system for specifying time? It works out correctly if their hours have 40 minutes each.
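
Spelled out, the arithmetic behind both the expected figures and the 40-minute reading:

```latex
30 \cdot 24 \cdot 60 = 43200, \qquad 30 \cdot 16 \cdot 60 = 28800,
\qquad \frac{28800}{30 \cdot 24} = 40, \qquad \frac{19200}{30 \cdot 16} = 40.
```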

Aside from that, this is an extremely insightful and quote-worthy post. I have^W^W My idiotic past-selves had a bad tendency to cognitively slow down in the absence of interesting and time-critical problems to solve. Accordingly, I find the hints about how to debug those tendencies very interesting. I find it rather quaint that those people still spend a significant part of their time sleeping, however.

Comment author: Sebastian_Hagen2 10 May 2008 02:40:34PM 2 points

I hope the following isn't completely off-topic:

... if I'd been born into that time, instead of this one...

What exactly does a hypothetical scenario where "person X was born Y years earlier" even look like? I could see a somewhat plausible interpretation of that description in periods of extremely slow scientific and technological progress, but the twentieth century doesn't qualify. In the 1920s: 1) The concept of a Turing machine hadn't been formulated yet. 2) There were no electronic computers. 3) ARPANET wasn't even an idea yet, and wouldn't be for decades. 4) Television was a novelty, years away from being used by a significant number of people. 5) WW1 was recent history.

Two persons with the same DNA and, except for results of global changes, very similar local environments during their childhood, would most likely turn into completely different adult humans if one of them was born in the 1920s and the other at some point in the last 30 years (roughly chosen to guarantee exposure to the idea of the internet as a teenager), and they both grew up in industrialized countries. The scientific and technological level one is born into is critical for mind development. What does it mean to consider a hypothetical world where a specific person was born into an environment very different in those respects? Why is this worth thinking about?

In response to On Being Decoherent
Comment author: Sebastian_Hagen2 27 April 2008 10:26:45AM 0 points

Maybe later I'll do a post about why you shouldn't panic about the Big World. You shouldn't be drawing many epistemic implications from it, let alone moral implications. As Greg Egan put it, "It all adds up to normality." Indeed, I sometimes think of this as Egan's Law.

While I'm not currently panicking about it, I'd be very interested in reading that explanation. It currently seems to me that there should be certain implications, e.g. in quantum suicide experiments. If mangled worlds says that the entity performing such an experiment should not expect to survive many iterations, that doesn't solve the space-like version of the issue: some of the person's alternate-selves on far-away alternate-earths would be prevented from carrying out their plan by weird stuff (TM) coming in from space at just the right time.

Hopefully Anonymous asked:

10^(10^29) (is this different than 10^30?)

It's different by a factor of roughly 10^(10^29). Strictly speaking the factor is 10^(10^29-30), but making that distinction isn't much more meaningful than distinguishing between metres and lightyears at those distances.
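
Spelling out the exponent arithmetic behind that factor:

```latex
\frac{10^{10^{29}}}{10^{30}} = 10^{\,10^{29} - 30} \approx 10^{10^{29}},
\qquad \text{since } 30 \text{ is utterly negligible next to } 10^{29}.
```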

Comment author: Sebastian_Hagen2 25 April 2008 01:13:01PM 1 point

Good writing, indeed! I also love what you've done with the Eborrian anzrf (spoiler rot13-encoded for the benefit of other readers since it hasn't been mentioned in the previous comments).

The split/remerge attack on entities that base their anticipations of future input directly on how many of their future selves they expect to get specific input is extremely interesting to me. I originally thought that this should be a fairly straightforward problem to solve, but it has turned out a lot harder (or my understanding a lot more lacking) than I expected. I think the problem might be in the group of 500,003 brains double-counting anticipated input after the merge. They don't stay exactly the same through the merge phase; in fact, for each of the 500,000 brains in green rooms, the re-integrated previously-in-green-rooms brain depends only to a very small extent on any one of them individually. In this particular case, the re-integrated brain will still be very similar to each of the pre-integration brains; but that is just a result of the pre-integration brains all being very similar to each other. Treating the re-integrated brain as a regular future-self for the purposes of anticipating future experience under these conditions seems highly iffy to me.

Comment author: Sebastian_Hagen2 21 April 2008 05:53:57PM 5 points

Like "Zombies: The Movie", this was very entertaining, but I don't think I've learned anything new from it.

Z. M. Davis wrote:

Also, even if there are no moral facts, don't you think the fact that no existing person would prefer a universe filled with paperclips ...

Have you performed a comprehensive survey to establish this? Asserting "No existing person" in a civilization of 6.5e9 people amounts to assigning a probability of less than 1.54e-10 that a randomly chosen person would prefer a universe filled with paperclips. This is an extremely strong claim to make!
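
For reference, that figure is just one person's share of the population:

```latex
\frac{1}{6.5 \times 10^{9}} \approx 1.54 \times 10^{-10}.
```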

For example, note that the set of people alive includes a significant number of people who are certifiably insane, and in all probability others who, while reasonably sane, have gotten very fed up with various forms of torture inflicted on them over the last few days and might be willing to neglect collateral damage if they could make it stop.

If such a survey were performed, and the results were actually what you claim, I would assign a higher probability to the possibility of a nefarious anti-paperclip conspiracy having infiltrated the survey effort than to the possibility of the results being correct.

Unanimous agreement of our entire species is also a much stronger claim than you need to make for your argument.
