Comment author: gwern 03 December 2011 05:21:33PM -1 points [-]

> We're predisposed to say that a jarring physical discontinuity (even if afterwards, we have an agent functionally equivalent to the original) is more likely to cause mind-annihilation than no such discontinuity, but this intuition seems to be resting on nothing whatsoever.

Yes. How bizarre of us to be so predisposed.

Comment author: AlephNeil 03 December 2011 05:27:07PM 1 point [-]

Nice sarcasm. So it must be really easy for you to answer my question then: "How would you show that my suggestions are less likely?"

Right?

Comment author: gwern 02 December 2011 10:40:26PM 5 points [-]

You really think there is logical certainty that uploading works in principle and your suggestions are exactly as likely as the suggestion 'uploading doesn't actually work'?

Comment author: AlephNeil 03 December 2011 05:17:57PM 0 points [-]

> You really think there is logical certainty that uploading works in principle and your suggestions are exactly as likely as the suggestion 'uploading doesn't actually work'?

How would you show that my suggestions are less likely? The thing is, it's not as though "nobody's mind has annihilated" is data that we can work from. It's impossible to have such data except in the first-person case, and even there it's impossible to know that your mind didn't annihilate last year and then recreate itself five seconds ago.

We're predisposed to say that a jarring physical discontinuity (even if afterwards, we have an agent functionally equivalent to the original) is more likely to cause mind-annihilation than no such discontinuity, but this intuition seems to be resting on nothing whatsoever.

Comment author: buybuydandavis 03 December 2011 06:45:53AM 6 points [-]

The identity of an object is a choice, a way of looking at it. The "right" way of making this choice is the way that best achieves your values. When you ask yourself which object is really you, and therefore to be valued, you're engaged in a tail-biting exercise without a "rational" answer.

If you value the continuance of your thought patterns, you'll likely be happy to upload. If you value your biological substrate, you won't. In a world where some do and some don't, I don't see either as irrational - they just value different things, and take different actions thereby. You're not "irrational" for picking Coke over Pepsi.

Comment author: AlephNeil 03 December 2011 04:49:29PM 0 points [-]

> The identity of an object is a choice, a way of looking at it. The "right" way of making this choice is the way that best achieves your values.

I think that's really the central point. The metaphysical principles which either allow or deny the "intrinsic philosophical risk" mentioned in the OP are not like theorems or natural laws, which we might hope some day to corroborate or refute - they're more like definitions that a person either adopts or does not.

> I don't see either as irrational

I have to part company here - I think it is irrational to attach 'terminal value' to your biological substrate (likewise paperclips), though it's difficult to explain exactly why. Terminal values are inherently irrational, but valuing the continuance of your thought patterns is likely to be instrumentally rational for almost any set of terminal values, whereas placing extra value on your biological substrate seems like it could only make sense as a terminal value (except in a highly artificial setting e.g. Dr Evil has vowed to do something evil unless you preserve your substrate).

Of course this raises the question of why the deferred irrationality of preserving one's thoughts in order to do X is better than the immediate irrationality of preserving one's substrate for its own sake. At this point I don't have an answer.

Comment author: gwern 02 December 2011 10:40:26PM 5 points [-]

You really think there is logical certainty that uploading works in principle and your suggestions are exactly as likely as the suggestion 'uploading doesn't actually work'?

Comment author: AlephNeil 02 December 2011 10:59:03PM 2 points [-]

For any particular proposal for mind-uploading, there's probably a significant risk that it doesn't work, but I understand that to mean: there's a risk that what it produces isn't functionally equivalent to the person uploaded. Not "there's a risk that when God/Ripley is watching everyone's viewscreens from the control room, she sees that uploaded person's thoughts are on a different screen from the original."

Comment author: AlephNeil 02 December 2011 10:38:14PM 4 points [-]

If the rules of this game allow one side to introduce a "small intrinsic philosophical risk" attached to mind-uploading, even though it's impossible in principle to detect whether someone has suffered 'arbitrary Searlean mind-annihilation', then surely the other side can postulate a risk of arbitrary mind-annihilation unless we upload ourselves. (Even ignoring the familiar non-Searlean mind-annihilation that awaits us in old age.)

Perhaps a newborn mind has a half-life of only three hours before spontaneously and undetectably annihilating itself.
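Taken literally, a three-hour half-life makes the stakes easy to quantify. A back-of-the-envelope sketch (assuming exponential decay, which is what "half-life" implies; the 70-year lifespan is my own arbitrary choice):

```python
import math

# With a 3-hour half-life, the probability of surviving a 70-year lifespan
# without annihilating is 0.5 ** n, where n is the number of elapsed half-lives.
hours = 70 * 365.25 * 24        # hours in 70 years
half_lives = hours / 3          # number of 3-hour half-lives elapsed
log10_survival = half_lives * math.log10(0.5)
print(f"log10 P(survival) = {log10_survival:.0f}")  # on the order of -60,000
```

So on this hypothesis, almost every mind that seems to remember a past is a fresh one with false memories - which is exactly why the hypothesis is undetectable from the inside.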

Comment author: shokwave 07 September 2011 12:36:43AM *  13 points [-]

> The tournament models natural selection, but no changes occur, and therefore no evolution occurs.

Idea: everyone has access to a constant 'm', which they can use in their bot's code however they like. m is set by the botwriter's initial conditions, then when a bot has offspring in the natural selection tournament, one-third of the offspring have their m incremented by 1, one-third have theirs decremented by 1, and one third has their m remain the same. In this manner, you may plug m into any formulas you want to mutate over time.
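A minimal sketch of the proposed mutation rule (the function name `offspring_ms` is mine, not from any actual tournament code): each bot carries an integer m, and when it earns offspring, one third get m+1, one third get m-1, and the remainder inherit m unchanged.

```python
def offspring_ms(parent_m, n_offspring):
    """Partition offspring into thirds: one third gets m+1, one third gets m-1,
    and the remainder (including any leftover from rounding) keeps m."""
    third = n_offspring // 3
    return ([parent_m + 1] * third
            + [parent_m - 1] * third
            + [parent_m] * (n_offspring - 2 * third))

# A bot with m = 5 that earns 9 offspring in one generation:
print(offspring_ms(5, 9))  # [6, 6, 6, 4, 4, 4, 5, 5, 5]
```

Over generations, each lineage performs a random walk in m, so any formula a botwriter plugs m into is effectively mutated and selected over time.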

Comment author: AlephNeil 07 September 2011 05:36:50PM 1 point [-]

Excellent.

Perhaps m could serve as a 'location', so that you'd be more likely to meet opponents with similar m values to your own.

Comment author: AlephNeil 07 September 2011 05:25:09PM 0 points [-]

Thanks, this is all fascinating stuff.

One small suggestion: if you wanted to, there are ways you could eliminate the phenomenon of 'last-round defection'. One idea would be to randomly generate the number of rounds according to a geometric distribution (the discrete analogue of the exponential). This is equivalent to having, on each round, a small constant probability that this is the last round. To be honest, though, the 'last round' phenomenon makes things more rather than less interesting.

Other ways to spice things up would be to cause players to make mistakes with small probability (say, a 1% chance of defecting when you try to co-operate, and vice versa), or to have some probability of misremembering the past.
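Both suggestions are easy to sketch. A hypothetical illustration (the parameter values `p_end` and `p_noise` are made up for the example):

```python
import random

def sample_rounds(p_end, rng):
    """Number of rounds when each round has a constant probability p_end of
    being the last -- a geometric distribution, so bots can never be sure
    the current round is the final one."""
    rounds = 1
    while rng.random() >= p_end:
        rounds += 1
    return rounds

def noisy(move, p_noise, rng):
    """With probability p_noise the intended move is flipped: an intended
    co-operation ('C') comes out as a defection ('D'), and vice versa."""
    if rng.random() < p_noise:
        return "D" if move == "C" else "C"
    return move

rng = random.Random(0)
lengths = [sample_rounds(0.05, rng) for _ in range(10_000)]
print(sum(lengths) / len(lengths))  # mean match length is about 1 / 0.05 = 20
```

Because the expected number of remaining rounds is the same at every point in the match, backward induction never gets a foothold and there is no identifiable last round to defect on.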

Comment author: AlephNeil 21 August 2011 09:37:13PM *  5 points [-]

Conversely, when we got trolled an unspecified length of time ago, an incompetent crackpot troll who shall remain nameless kept having all his posts and comments upvoted by other trolls.

It would help if there was a restriction on how much karma one could add or subtract from a single person in a given time, as others are suggesting.

Comment author: AlephNeil 01 August 2011 12:39:48AM 3 points [-]

What interests me about the Boltzmann brain (this is a bit of a tangent) is that it sharply poses the question of where the boundary of a subjective state lies. It doesn't seem that there's any part X of your mental state that couldn't be replaced by a mere "impression of X". E.g. an impression of having been to a party yesterday rather than a memory of the party. Or an impression that one is aware of two differently-coloured patches rather than the patches themselves together with their colours. Or an impression of 'difference' rather than an impression of differently coloured patches.

If we imagine "you" to be a circle drawn with magic marker around a bunch of miscellaneous odds and ends (ideas, memories etc. but perhaps also bits of the 'outside world', like the tattoos on the guy in Memento) then there seems to be no limit to how small we can draw the circle - how much of your mental state can be regarded as 'external'. But if only the 'interior' of the circle needs to be instantiated in order to have a copy of 'you', it seems like anything, no matter how random, can be regarded as a "Boltzmann brain".

Comment author: AlephNeil 31 July 2011 11:53:24PM *  0 points [-]

> Every now and then I see a claim that if there were a uniform weighting of mathematical structures in a Tegmark-like 'verse---whatever that would mean even if we ignore the decision theoretic aspects which really can't be ignored but whatever---that would imply we should expect to find ourselves as Boltzmann mind-computations

The idea is this: Just as most N-bit binary strings have Kolmogorov complexity close to N, so most N-bit binary strings containing s as a substring have Kolmogorov complexity at least N - length(s) + K(s) - O(1).
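The counting behind that claim can be checked directly for small N. The sketch below (my own illustration; the particular N and s are arbitrary) counts the N-bit strings containing a fixed substring s: roughly 2^(N - len(s)) for each position s can occupy, hence at most (N - len(s) + 1) * 2^(N - len(s)) in total. Since fewer than 2^k strings can have a description shorter than k bits, only a vanishing fraction of those strings can be much more compressible than the stated bound.

```python
from itertools import product

N, s = 16, "1011"   # small enough to enumerate all 2**16 strings exhaustively

# Count the N-bit strings that contain s as a substring.
count = sum(1 for bits in product("01", repeat=N) if s in "".join(bits))

lower = 2 ** (N - len(s))                     # fix s at one position, vary the rest
upper = (N - len(s) + 1) * 2 ** (N - len(s))  # union bound over all positions
print(count, lower <= count <= upper)
```

So the set of "universes containing you" is exponentially large in N - len(s), and almost all of its members are incompressible noise outside the substring s.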

And now applying the analogy:

N-bit binary string <---> Possible universe

N-bit binary string containing substring s <---> Possible universe containing a being with 'your' subjective state. (Whatever the hell a 'subjective state' is.)

we get:

N-bit binary string containing substring s with Kolmogorov complexity >= N - length(s) + K(s) - O(1) <---> A Boltzmann brain universe.

> We don't seem to be experiencing nonsensical chaos, therefore the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary

I've never seen 'the argument' finish with that conclusion. The whole point of the Boltzmann brain idea is that even though we're not experiencing nonsensical chaos, it still seems worryingly plausible that everything outside of one's instantaneous mental state is just nonsensical chaos.

What an 'Occamian' weighting buys us is not consistency with our experience of a structured universe (because a Boltzmann brain hypothesis already gives us that) but the ability to use science to decide what to believe - and thus what to do - rather than descend into a pit of nihilism and despair.
