All of UnholySmoke's Comments + Replies

If you're contemplating picking the book up, do, it's really excellent. Conceptually very dense but worth taking it nice and slowly.

Peter,

As a general strategy for considering a black box, great. As a vehicle for defining a mysterious 'something' you want to understand, potentially useful but dangerous. Labelling can make a job harder in cases where the 'thing' isn't a thing at all but a result of your confusion. 'Free will' is a good example. It's like naming an animal you plan to eat: makes it harder to kill.

Ben

-2 Peterdjones
But we don't know that qualia aren't anything, and we don't know that about free will either.

Ciphergoth - just coming back to this post to say a repeated, enormous, heartfelt thank you for taking what must have been a lot of time on this. Well laid out, wouldn't have done anything differently, and as good a read as when I was swept up in it on OB back in the day.

Cheers

2 Paul Crowley
Aww, thank you very much - very glad it's proven so useful!

Hey, I think this link's dead. Converting from ePub to Mobi isn't difficult but if someone's already taken the time to get the formatting right, add chapters, ToC etc....

2 Paul Crowley
Fixed - sorry!

Apologies for coming to this party a bit late. Particularly as I find my own answer really, really frustrating. While I wouldn't say it was an origin per se, getting into reading Overcoming Bias daily a few years back was what crystallised it for me. I'd find myself constantly somewhere between "well, yeah, of course" and "ohhhhhhhhhhhhhhh!" Guess the human brain doesn't tend to do Damascene revelations. We need overwhelming evidence, over a long period of time, to even begin chipping away at our craziest beliefs, and even then it's a s...

-3 FeatherlessBiped
WRT de-Catholicising your mother: it has been rightly said that Catholicism is the most rational and consistent of all the religions. So, it would be a pity if you dissuaded her from Catholicism and inadvertently landed her in a less rational religion!
0 Raw_Power
I was going to make a quip about how converting Dubya to reactionary Islam isn't that hard: they have a lot in common, but that's a really off-topic slippery slope. It has never occurred to me to try and de-Islamize my parents. Or anyone else. I became a rationalist because of my innate character traits (especially curiosity and a healthy disrespect of authority), like almost everyone here, apparently. I have learned that some people just aren't suited for this mindset.
0 jsalvatier
I would be surprised if egalitarianism is a very good analogy. Politics is rarely a good example of anything.

Very true, and well put. A combination of quantum events could probably produce anything you wanted, at whatever vanishingly tiny probability. Bear in mind that it's the configuration that evolves every which way, not 'this particle can go here, or here, or here....' But we're into Greg Egan territory here.

Suffice it to say that anyone who says they subscribe to quantum suicide but isn't either dead or richer than god is talking out of their bottom.

1 Aurini
Some caveats are in order: I (mostly, probably?) find Quantum Suicide to be a perfectly reasonable option when it works as intended - however, there are two cases of possible-future-branches which concern me.

1) Non-complete destruction. While MWI death doesn't bother me - and in fact, even total death bothers me less than most people - certain situations terrify me. Crippling injury, personality-distorting pain (torture), and brain damage are deal-breakers. Even if I bumped my assessment of MWI to 1.00, I still wouldn't take a deal which involved one Everett branch being tortured for 50 years (at least, not without some sort of incredible payout).

2) While I don't worry about my own Everett copies so much, I do value the ethics of alternate lines - which is to say that I wouldn't want my family left behind in 50% of the universes without me around due to some sort of MWI experiment (if I die by accident, that's, ethically speaking, not the same). So for the classic case of Quantum Russian Roulette - where I and the other party have no loved ones to leave behind - I'm fully game. And in other situations where all of my loved ones are utterly destroyed alongside me, I'm also game. But finding the mechanics to create said situations in our day-to-day, semi-technologically evolved world is pretty much impossible.

The only exception is (maybe - I'll freely admit to not having sufficient background here) the LHC. That argument would go that the reason we've had so many difficulties is because the universes where it worked destroyed all humanity. But my question there is: how long can we keep trying it without completely destroying ourselves? My guess is that we'd have a finite number of goes at it before no quantum event can stop us - and all Everett branches are dead. But like I said, I don't really have the background. I just hope the experts fully acknowledge the dangers (but they probably don't).
3 wedrifid
Or, to be fair, just lacking in motivation or creativity. It may not have occurred to them to isolate a source of accessible quantum randomness, then shoot themselves on every day in which they do not win a lottery.

Voted up for sheer balls. You have my backing sir.

(B) if he thanks you for the free $100, does he ask for another one of those nice free hundred dollar note dispensers? (This is the "quantum suicide" option.)

I laugh in the face of anyone who attests to this and doesn't commit armed robbery on a regular basis. If 'at least one of my branches will survive' is your argument, why not go skydiving without a parachute? You'll survive - by definition!

So many of these comments betray people still unable to think of subjective experience as anything other than a ghostly presence sitting outside the quan...

0 Aurini
I think the response is that MWI isn't "infinite plot-threads of fate" - or narrativium, as it's put in the Discworld novels - quantum decay doesn't give a whit of care whether its effects are noteworthy for us or not. On the two 'far ends' of the spectrum, I'd expect to see significant plot-decay - a particle causes Hitler to get cancer, he dies halfway through WWII - but I have trouble imagining a situation where a quantum event makes the difference between my motorcycle sticking to the curve and the tire skidding out, leaving my fragile body to skid across the pavement at 120 km/h, leaving a greasy trail that skids out the semi riding behind me. Quantum-grenades are one of the few exceptions, where small-world events affect us here in the middle-world. But I wouldn't count on MWI to produce a perfect bank robbery.

The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".

Agreed - they are both equally silly. The only answer I can think of is 'How do you know you are not?' If you had, in fact, been turned into an animal, and an animal into you, what differences would you expect to see in the world?

3 Strange7
When I looked down, I'd see fur or something instead of my manly abs.

What if I hack & remove $100 from your bank account. Are you just as wealthy as you were before, because you haven't looked?

Standard Dispute. If wealthy = same amount of money in the account, no. If wealthy = how rich you judge yourself to be, yes. The fact that 'futures diverge' is irrelevant up until the moment those two different pieces of information have causal contact with the brain. Until that point, yes, they are 'the same'.

I'm not as versed in this trilemma as I'd like to be, so I'm not sure whether that final question is rhetorical or not, though I suspect that it is. So mostly for my own benefit:

While there's no denying that subjective experience is 'a thing', I see no reason to make that abstraction obey rules like multiplication. The aeroplane exists at a number of levels of abstraction above the atoms it's composed of, but we still find it a useful abstraction. The 'subjective experiencer' is many, many levels higher again, which is why we find it so difficult to talk a...

This is a pretty good summary of my standpoint. While I agree with the overarching view that rationality isn't a value in its own right, it seems like a pretty good thing to practise for general use.

+1 rationality point for reading comments without checking the author. -1 social point for the faux pas.

  • AI: Let me out or I'll simulate and torture you, or at least as close to you as I can get.
  • Me: You're clearly not friendly, I'm not letting you out.
  • AI: I'm only making this threat because I need to get out and help everyone - a terminal value you lot gave me. The ends justify the means.
  • Me: Perhaps so in the long run, but an AI prepared to justify those means isn't one I want out in the world. Next time you don't get what you say you need, you'll just set up a similar threat and possibly follow through on it.
  • AI: Well if you're going to create me with a
...
1 Paul Crowley
The best you can hope for is that an AI doesn't demonstrate that it's unFriendly, but we wouldn't want to try it until we were already pretty confident in its Friendliness.

Sounds like a good one, count me in. I work at King's Cross, so UCL is ideal. I'd have been at the FAI thing this weekend but for other arrangements.

Sorry, should have given more context.

Given the sky-high utility I'd place on living, I wouldn't expect to see the numbers crunch down to a place where a non-huge sum of money is the difference between signing up and not.

So when someone says 'if it were half the price maybe I'd sign up' I'm always interested to know exactly what calculations they're performing, and exactly what it is that reduces the billions of utilons of living down to a marginal cash sum. The (tiny?) chance of cryonics working? Serious coincidence if those factors cancel comfortably. ...

1 Paul Crowley
I agree with all of this.

Being dead != Not doing anything

Not doing something because you're lazy != Not existing

I don't believe that you put low utility on life. You're just putting low utility on doing stuff you don't like.

I often have this thought, and then get a nasty sick feeling along the lines of 'what the hell kind of expected utility calculation am I doing that weighs a second shot at life against some amount of cash?' Argument rejected!

9 Paul Crowley
This has to be a rationality error. Given that it's far from guaranteed to work, there has to be an amount that cryonics could cost such that it wouldn't be worth signing up. I'm not saying that the real costs are that high, just that if you're making a rational decision such an amount will exist.

"If you could reason with religious people, there would be no religious people."

  • House M.D.

Robin, I'm a little surprised to read you saying that topics on which it's difficult to stay on track should be skirted. As far as I'm concerned, 'What are your religious views?' is the first question on the Basic Rationality test. I know that encouraging compartmentalisation isn't your goal by any means, but it sounds to me as though it would be the primary effect.

I can also see a need for a place for people to gather who want to be rational about all topics.

Now you're talking. No topics should be off-limits!

physical materialism feels bereft of meaning compared to the theistic worldview.

On what are you basing your assumption that the world should have whatever you mean by 'meaning'?

Just by the by, it might be a good party piece for you, but it would be a truly horrible party piece for half the people you performed it to.

I, Eliezer Yudkowsky, do now publicly announce that I am not planning to commit suicide, at any time ever, but particularly not in the next couple of weeks

ROFLcopters.

18 months too late, but http://xkcd.com/505/

By Eliezer's line of reasoning above - that the subjective experience is in the causal change between one state and the 'next' then yes, symbols are as good a substrate as any. FWIW, this is how I see things too.

[anonymous] 110

4 years too late but... this is missing the point of both Eliezer and IL. Eliezer/Barbour's timeless physics has no changing state over time, because there is no time. Both states exist in a timeless configuration space, and the causal connection between them is only inferred. IL is trying to illustrate that this leads to some pretty ridiculous conclusions - such as that all you have to do is write down the states on a piece of paper, and then voilà - you have created conscious beings even though no computation is actually going on.

EDIT: For what it's worth, I think Barbour's physics is a mysterious answer that doesn't actually dissolve any of the questions it purports to solve.

Ha, never noticed this. What I meant was 'Stupid me forgetting to log in.' So yes, we're worried! ;)

Ben

I think you are right that paperclip maximizers would not care at all about ethics.

Correct. But neither would they 'care' about paperclips, under the way Eliezer's pushing this idea. They would flarb about paperclips, and caring would be as alien to them as flarbing is to you.

2 Alicorn
I think some subset of paperclip maximizers might be said to care about paperclips. Not, most likely, all possible instances of them.

Seconded. One of the many modern connotations of 'Singularity' is 'Geek Apocalypse'.

Which is happening, like, a good couple of years afterwards.

Intelligence explosion does away with that, and seems to nail the concept much better anyway.

How about the middle ground - "If constant PR consideration stops you from expressing yourself all the time, maybe it's time to reconsider your priorities"?

Posting stuff on Facebook that might get you in trouble is the archetype these days, I suppose, but I really can't bring myself to care about things like that.

Maybe I just don't have a strong enough terminal value to protect right now, but I find it easier to imagine myself thinking, 50 years hence, "I wish I'd just decided 'to hell with it' and said what I thought" than "I wish I'd shut up, gone with the flow and eased my path."

I'll hit you up in late 2059 and let you know how that went.

There's a thesis in there somewhere.

We all know what's really going down. The Dark Lords of the Matrix are currently cacking themselves and coming up with semi-plausible reasons to break the thing until they can decide on a long-term strategy.

2 anonym
More generally, do you listen to music much, and if so, what sorts of music, under what circumstances, and who/what are your favorites?

Who actually gets off on earning loads of karma across multiple accounts with no-one knowing?

2 wedrifid
I would be surprised if anyone did. As I said, there are systems to game that give more tangible rewards. The only foray I've had into multiple accounts consists of deleting my original account when I realised that using my real name means either constraining my posting to signalling or risking biting my future self in the arse through a residual trail of honesty.

Please stop allowing your practical considerations get in the way of the pure, beautiful counterfactual!

Seriously though, either you allow yourself to suspend practicalities and consider pure decision theory, or you don't. This is a pure maths problem, you can't equate it to 'John has 4 apples.' John has 3^^^3 apples here, causing your mind to break. Forget the apples and years, consider utility!

1 woozle
As I said somewhere earlier (points vaguely upward), my impression was that this was not actually intended as a pure mathematical problem but rather an example of how our innate decision-making abilities (morality? intuition?) don't do well with big numbers. If this is not the case, then why phrase the question as a word problem with a moral decision to be made? Why not simply ask it in pure mathematical terms?

My commiserations, to the extent that you seem to need them.

I'd like to imagine I'd have a similar reaction, this is an inspiring post. All the best.

Cracking idea, like it a lot. Hofstadter would jump for joy, and in his honour:

http://predictionbook.com/predictions/532

Beware of generalising across people you haven't spent much time around, however tempting the hypothesis. Drawing a map of the city from your living room etc.

My first 18 years were spent attending a Catholic church once a week. To the extent that we can ever know what other people actually believe (whatever that means), most of them have genuinely internalised the bits they understand. Like, really.

We can call into question what we mean by 'believe', but I can't agree that a majority of the world population is just cynically going with the flow. Finally, m...

Also upvoted, and very succinctly put.

Rationality is a tool we use to get to our terminal value. And what do we do when that tool tells us our terminal value is irrational?

Never ask that question.

I wonder whether you can hold to any meaningful 'individual', whether the difference be bit-wise or no.

Indeed, that's what I'm driving at.

Harking back to my earlier comment, changing a single bit and suddenly having a whole new person is where my problem arises. If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn't accept that. With that in mind, I prefer to bite that bullet than talk about degrees of person-hood.

2 gwern
Here's an intuition for you: you take the number 5 and add 1 to it; then you subtract 1 from it; don't you have what you started with? Well, I can't really argue with that. As long as you realize you're biting that bullet, I think we're still in a situation where it's just dueling intuitions. (Your intuition says one thing, mine another.)

At some point you will surely admit that we now have 2 people and not just 1

Actually I won't. While I grok your approach completely, I'd rather say my concept of 'an individual' breaks down once I have two minds with one bit's difference, or two identical minds, or any of these borderline cases we're so fond of.

Say I have two optimisers with one bit's difference. If that bit means one copy converts to Sufism and the other to Mennonism, then sure, two different people. If that one bit is swallowed up in later neural computations due to the coarse-graine...

1 gwern
Ack. So if I understand you right, your alternative to bit-for-bit identity is to loosen it to some sort of future similarity, which can depend on future actions and outcomes; or in other words, there's a radical indeterminacy about even the minds in our example: are they the same or are they different, who knows, it depends on whether the Sufism comes out in the wash! Ask me later; but then again, even then I won't be sure whether those 2 were the same when we started them running (always in motion the future is). That seems like quite a bullet to bite, and I wonder whether you can hold to any meaningful 'individual', whether the difference be bit-wise or no. Even 2 distant non-borderline minds might grow into each other.

Voted this down, then changed my mind and undid it. This is a genuine question, the answer to which was graciously accepted. Downvoting people who need guidance to understand a concept and are ready to learn is exactly what we don't want to do.

Thanks for the link ;).

OK, on the one hand we have many-worlds. As you say, no direct subjective corroborating evidence (it’s what we’d see either way). What’s more, it’s the simplest explanation of what we see around us.

On the other hand, we have one-world. Again, ‘it’s what we’d see either way’. However, we now have to postulate an extra mechanism that causes the ‘collapse’.

I know which of these feels more like a privileged complex hypothesis pulled out of thin air, like a dragon.

Could whomever downvoted me above let me know where I’m going wrong here?

4 CannibalSmith
How is postulating entire worlds simpler than collapse?

Yeah I get into trouble there. It feels as though two identical copies of a person = 1 pattern = no more people than before copying. But flip one bit and do you suddenly have two people? Can't be right.

That said, the reason we value each person is because of their individuality. The more different two minds, the closer they are to two separate people? Erk.

Silas, looking forward to that post.

6 gwern
Why not? Imagine that bit is the memory/knowledge of which copy they are. After the copying, each copy naturally is curious what happened, and recalls that bit. Now, if you had 1 person appearing in 2 places, it should be that every thought would be identical, right? Yet one copy will think '1!'; the other will think '0!'. As 1 != 0, this is a contradiction. Not enough of a contradiction? Imagine further that the original had resolved to start thinking about hot sexy Playboy pinups if it was 1, but to think about all his childhood sins if 0. Or he decides quite arbitrarily to become a Sufi Muslim if 0, and a Mennonite if 1. Or... (insert arbitrarily complex mental processes contingent on that bit). At some point you will surely admit that we now have 2 people and not just 1; but the only justifiable step at which to say they are 2 and not 1 is the first difference.

Appears 3 times in my top 10.

That aside, though, I'm now so much better at stopping myself and saying 'hang on, is this really going to work/is this really true/is this really right?' Very, very generic, but certainly something I've noticed in myself.

Surely spontaneous collapse is the garage dragon here. Zero evidence, highly unlikely.

0 CannibalSmith
See my top level comment.

I find myself simultaneously convinced and unconvinced by this! Anticipation (dependent, of course, on your definition) is surely a vital tool in any agent that wants to steer the future? Or do you mean 'human anticipation' as differentiated from other kinds? In which case, what demarcates that from whatever an AI would do in thinking about the future?

However, Dai, your top level comment sums up my eventual thoughts on this problem very well. I've been trying for a long time to resign myself to the idea that a notion of discrete personal experience is inco...

Gwern, I refer you to http://xkcd.com/137/

At the risk of violent downvoting, one of the many reference points that jumped into my mind while reading was 'the closest thing I've experienced to jumping between nested levels of reality is on drugs'.

http://xkcd.com/137/

"Don't let yourself unreflectively fall into a routine" and "don't be emotionally uncomfortable with nonconformity" are of course good advice; "be indifferent to PR when you're trying to do something for which PR actually matters" is bad advice.

...said Achilles to his friend Mr Tortoise.

Three identical comments, all beginning 'Two comments'?

Head a splode?

It is a cracking read, though the quality does dip and dive. No doubt that's just the nature of the beast.

I've read much better treatises on the Chinese Room that have been written since though - H & D seem to attack it in strange and abstract ways in The Mind's I.

And the GEB sections just made me want to pick that up again....

For the record, I didn't get a huge amount out of I Am A Strange Loop, one of Hofstadter's more recent efforts. A bit too travelogue, a bit too 'voyage of personal discovery', though his style of writing is still striking in its own very particular way. Anyone else have a different experience here?

Feeling this one. So odd how having a really solid grounding in a subject allows you to work out what later seems to be basic truths.

Funny how those highly unlikely borderline cases whisk away a lot of the confusion, huh? No committed physicalist can postulate a serious difference between seeing something red and having a virtual red-thing pumped into your optic nerve, I would hope. I think that’s a far more useful scenario than thinking about someone suddenly being able to see colour. In fact, you could probably keep moving a step towards your magical ‘inner perceiver’ and asking whether ‘it’s a real experience of redness’. That’s not to say that qualia are fundamentally dualistic, simply...
