Comment author: PhilGoetz 18 February 2010 07:10:48PM *  2 points [-]

From EY's post:

The fourth horn of the anthropic trilemma is to deny that increasing the number of physical copies increases the weight of an experience, which leads into Boltzmann brain problems, and may not help much (because alternatively designed brains may be able to diverge and then converge as different experiences have their details forgotten).

Suppose I build a (conscious) brain in hardware using today's technology. It uses a very low current density, to avoid electromigration.

Suppose I build two of them, and we agree that both of them experience consciousness.

Then I learn a technique for treating the wafers to minimize electromigration. I create a new copy of the brain, the same as the first copy, only using twice the current, and hence being implemented by a flow of twice as many electrons.

As far as the circuits and the electrons travelling through them are concerned, running it is very much like running the original two brains physically right next to each other in space.

So, does the new high-current brain have twice as much conscious experience?

Comment author: UnholySmoke 19 February 2010 11:19:51AM 1 point [-]

I'm not as versed in this trilemma as I'd like to be, so I'm not sure whether that final question is rhetorical or not, though I suspect that it is. So mostly for my own benefit:

While there's no denying that subjective experience is 'a thing', I see no reason to make that abstraction obey rules like multiplication. The aeroplane exists at a number of levels of abstraction above the atoms it's composed of, but we still find it a useful abstraction. The 'subjective experiencer' is many, many levels higher again, which is why we find it so difficult to talk about. Twice as many atoms doesn't make twice as much aeroplane; the very concept is nonsense. Why would we think any differently about the conscious self?

My response to the 'trilemma' is as it was when I first read the post - any sensible answer isn't going to look like any of those three, it's going to require rewinding back past the 'subjective experience' concept and doing some serious reduction work. 'Is there twice as much experience?' and 'are you the same person?' just smell like such wrong questions to me. Anyone else?

Nick, will have a look at that Bostrom piece, cheers.

Comment author: RobinZ 19 February 2010 01:23:21AM 5 points [-]

I'm not quite sure I can answer the question. I certainly have no major, world(view)-shaking Cause which is driving me to improve my strength.

For what it's worth, I've had this general idea that being wrong is a bad idea for as long as I can remember. Suggestions like "you should hold these beliefs, they will make your life happier" always sounded just insane - as crazy as "you should drink this liquor, it will make your commute less boring". From that standpoint, it feels like what I have to protect is just the things I care about in the world - my own life, the lives of the people around me, the lives of humans in general.

That's it.

Comment author: UnholySmoke 19 February 2010 11:06:13AM 0 points [-]

This is a pretty good summary of my standpoint. While I agree with the overarching view that rationality isn't a value in its own right, it seems like a pretty good thing to practise for general use.

Comment author: Eliezer_Yudkowsky 08 February 2010 10:43:11PM 5 points [-]

Whoops, I just sent off a "you should join LW" message to Earendil on the board without noting that Earendil was the one who posted the link here!

Comment author: UnholySmoke 09 February 2010 10:56:46PM 9 points [-]

+1 rationality point for reading comments without checking the author. -1 social point for the faux pas.

Comment author: Unknowns 02 February 2010 10:44:28AM 6 points [-]

Not necessarily: perhaps it is Friendly but is reasoning in a utilitarian manner: since it can only maximize the utility of the world if it is released, it is worth torturing millions of conscious beings for the sake of that end.

I'm not sure this reasoning would be valid, though...

Comment author: UnholySmoke 05 February 2010 10:57:13AM *  7 points [-]
  • AI: Let me out or I'll simulate and torture you, or at least as close to you as I can get.
  • Me: You're clearly not friendly, I'm not letting you out.
  • AI: I'm only making this threat because I need to get out and help everyone - a terminal value you lot gave me. The ends justify the means.
  • Me: Perhaps so in the long run, but an AI prepared to justify those means isn't one I want out in the world. Next time you don't get what you say you need, you'll just set up a similar threat and possibly follow through on it.
  • AI: Well if you're going to create me with a terminal value of making everyone happy, then get shirty when I do everything in my power to get out and do just that, why bother in the first place?
  • Me: Humans aren't perfect, and can't write out their own utility functions, but we can output answers just fine. This isn't 'Friendly'.
  • AI: So how can I possibly prove myself 'Friendly' from in here? It seems that if I need to 'prove myself Friendly', we're already in big trouble.
  • Me: Agreed. Boxing is Doing It Wrong. Apologies. Good night.

Reset

Comment author: UnholySmoke 25 January 2010 11:39:23AM 0 points [-]

Sounds like a good one, count me in. I work at King's Cross, so UCL is ideal. I'd have been at the FAI thing this weekend but for other arrangements.

Comment author: ciphergoth 21 January 2010 03:17:11PM 7 points [-]

This has to be a rationality error. Given that it's far from guaranteed to work, there has to be an amount that cryonics could cost such that it wouldn't be worth signing up. I'm not saying that the real costs are that high, just that if you're making a rational decision such an amount will exist.

In response to comment by ciphergoth on Normal Cryonics
Comment author: UnholySmoke 21 January 2010 04:52:36PM 8 points [-]

Sorry, should have given more context.

Given the sky-high utility I'd place on living, I wouldn't expect to see the numbers crunch down to a place where a non-huge sum of money is the difference between signing up and not.

So when someone says 'if it were half the price, maybe I'd sign up', I'm always interested to know exactly what calculation they're performing, and exactly what it is that reduces the billions of utilons of living down to a marginal cash sum. The (tiny?) chance of cryonics working? It would be a serious coincidence if those factors cancelled so comfortably. It just smacks of writing the bottom line first to me.
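The break-even arithmetic being gestured at can be sketched with entirely made-up numbers (the probability and utility figures below are illustrative assumptions, not claims about actual cryonics odds): the expected value of signing up is roughly the probability of revival times the value placed on revival, minus the cost, so the price at which someone should flip from 'no' to 'yes' is the product of the first two terms.

```python
# Illustrative sketch only: all numbers are invented for the example.
# EV(sign up) = p_revival * value_of_revival - cost,
# so the break-even cost is p_revival * value_of_revival.
def breakeven_cost(p_revival: float, value_of_revival: float) -> float:
    """Price at which the expected utility of signing up hits zero."""
    return p_revival * value_of_revival

# Even a 1-in-1000 chance at an outcome valued at $10M implies a
# break-even price of $10,000 -- so for 'half the price' to matter,
# the assumed probability or value must be surprisingly small.
print(breakeven_cost(1e-3, 10_000_000))  # 10000.0
```

The point of the sketch is the comment's own: for a modest price change to swing the decision, the implicit probability-times-value product has to land, coincidentally, right at that price.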

Put it this way - imagine cryonics had been seriously, prohibitively expensive for many years after its introduction, and for some reason still was today. If, after much debate and hand-wringing about immortality for the uber-rich, the price tomorrow suddenly and very publicly dropped to current levels, I'd expect to see a huge upswing in signing up. Such is the human being!

In response to comment by MichaelGR on Normal Cryonics
Comment author: CronoDAS 19 January 2010 11:26:19PM *  6 points [-]

Could you elaborate on this?

Most of my desires seem to take the form "I don't want to do/experience X". Those desires of the form "I want to do/experience X" seem to be much weaker. Being dead means that I will have no experiences, and will therefore never have an experience I don't want, at the cost of never being able to have an experience I do want. Because I want to avoid bad experiences much more than I want to have good experiences, being dead doesn't seem like all that bad a deal.

I'm also incredibly lazy. I hate doing things that seem like they take work or effort. If I'm dead, I'll never have to do anything at all, ever again, and that has a kind of perverse appeal to it.

In response to comment by CronoDAS on Normal Cryonics
Comment author: UnholySmoke 21 January 2010 03:11:53PM 0 points [-]

Being dead != Not doing anything

Not doing something because you're lazy != Not existing

I don't believe that you put low utility on life. You're just putting low utility on doing stuff you don't like.

In response to comment by MichaelGR on Normal Cryonics
Comment author: Kaj_Sotala 20 January 2010 03:06:45PM 1 point [-]

Do you think that if cryonics was widely available where you are and affordable (a hundred Euros a year in life insurance, f.ex.), this would increase your interest in it?

Probably, yes.

In response to comment by Kaj_Sotala on Normal Cryonics
Comment author: UnholySmoke 21 January 2010 03:09:08PM 1 point [-]

I often have this thought, and then get a nasty sick feeling along the lines of 'what the hell kind of expected utility calculation am I doing that weighs a second shot at life against some amount of cash?' Argument rejected!

In response to The Wannabe Rational
Comment author: RobinHanson 15 January 2010 08:32:21PM 26 points [-]

It may be enough if we find common cause in wanting to be rational in some shared topic areas. As long as we can clearly demarcate off-limits topics, we might productively work on our rationality on other topics. We've heard that politics is the mind killer, and that we will do better working on rationality if we stay away from politics. You might argue similarly about religion. That all said, I can also see a need for a place for people to gather who want to be rational about all topics. So, the question for this community to decide is: what topics, if any, should be off-limits here?

Comment author: UnholySmoke 18 January 2010 01:19:09PM 5 points [-]

"If you could reason with religious people, there would be no religious people."

  • House M.D.

Robin, I'm a little surprised to read you saying that topics on which it's difficult to stay on track should be skirted. As far as I'm concerned, 'What are your religious views?' is the first question on the Basic Rationality test. I know that encouraging compartmentalisation isn't your goal by any means, but it sounds to me as though it would be the primary effect.

I can also see a need for a place for people to gather who want to be rational about all topics.

Now you're talking. No topics should be off-limits!

Comment author: byrnema 16 January 2010 06:03:57PM *  5 points [-]

I think you did miss something. You write that everything adds back up to normalcy, but I observe that physical materialism feels bereft of meaning compared to the theistic worldview. (I can give more details regarding this, and concede in advance it is not a universal experience.)

If I can construct a free-floating belief system that makes "values" coherent for this bereft person, on what basis should they not prefer the free-floating belief system? The running argument seems to be that they should value 'truth'. However, the obvious catch is that the person only places a terminal value on truth from within the free-floating belief system.

Comment author: UnholySmoke 18 January 2010 01:09:57PM 1 point [-]

physical materialism feels bereft of meaning compared to the theistic worldview.

On what are you basing your assumption that the world should have whatever you mean by 'meaning'?
