Comment author: christopherj 10 October 2013 02:13:29AM 4 points

I wonder how long before an insurance company decides to use cryonics as an excuse not to pay out: "We respect his belief that he is not dead, but rather in suspended animation."

Comment author: DanielH 13 June 2014 05:41:33AM 1 point

That would probably be a good thing. I think the policy says they pay out in the event of legal death, so the company would have to try to get the person declared "not dead". By extension, all cryonics patients (or at least all future patients with similar-quality preservations) would be not dead. If I were in charge of the cryonics organization this argument was used against, I would float the costs of the preservation and set my lawyers working on the same side as the insurance company's. If they succeed, cryonics patients aren't legally dead and have more rights, which is well worth the cost of one guy's preservation plus legal fees. If they fail, I get the insurance money anyway, so I'm only out the legal fees.

At least most cryonics patients have negligible income, so the IRS isn't likely to get very interested.

Comment author: Cthulhoo 27 January 2012 10:36:58AM * 8 points

Not totally IT, but I tried it on Eliezer's "The 5-Second Level". Highlights include:

I won't socially kill you

Hope to reflect on consequentialist grounds

Say, what a vanilla ice cream, and not-indignation, and from green?

Associate to persuade anyone of how you were making the dreadful personal habit displays itself in a concrete example.

Rather you can't bear the 5-second level?

To develop methods of teaching rationality skills, you need more practice to get lost in verbal mazes; we will tend to have our feet on the other person.

Be sufficiently averse to the fire department and see if that suggests anything.

Comment author: DanielH 11 November 2013 11:14:08AM 0 points

Be sufficiently averse to the fire department and see if that suggests anything.

I do believe it suggests libertarianism. But I can't be sure, as I can't simply "be sufficiently averse" any more than I can force myself to believe something.

Still, that one seems to be a fairly reasonable sentence. If I were to learn only that one of these had been used in an LW article (by coincidence, not by a direct causal link), I would guess it was either that one or "I won't socially kill you".

Comment author: Risto_Saarelma 10 October 2010 07:15:24AM * 1 point

I assume you just copied and pasted characters into the comment box from another source?

I got bored of doing that and just put my text through

def sc(x):
    # Map a-z onto Unicode small capitals; '-' stands in for the missing small capital Q.
    return unicode(x).translate(
        dict((ord('a') + i, c)
             for i, c in enumerate(u"ᴀʙᴄᴅᴇꜰɢʜɪᴊᴋʟᴍɴᴏᴘ-ʀꜱᴛᴜᴠᴡxʏᴢ")))
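A quick sanity check, assuming the Python 2 definition above (under Python 3 you would use str instead of unicode and drop the u prefixes):

    >>> print sc(u"hello world")
    ʜᴇʟʟᴏ ᴡᴏʀʟᴅ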

Comment author: DanielH 17 October 2013 08:05:37AM 1 point

I find it odd that Unicode doesn't have a Latin Letter Small Capital Q but does have all the others.
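If you want to check that against the Unicode database yourself, here's a minimal sketch using Python's standard unicodedata module. One caveat: LATIN LETTER SMALL CAPITAL Q (U+A7AF) was eventually added in Unicode 9.0 (2016), after this comment was written, so the output depends on the Unicode version your Python ships with; small capital X, on the other hand, has never been encoded, since lowercase x already has the small-capital shape.

    import unicodedata

    # List which "LATIN LETTER SMALL CAPITAL <letter>" codepoints exist.
    for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        try:
            char = unicodedata.lookup("LATIN LETTER SMALL CAPITAL " + letter)
            print(letter, char, "U+%04X" % ord(char))
        except KeyError:
            print(letter, "-- not in this Unicode version")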

Comment author: Gabriel 06 March 2013 05:43:49PM 7 points

Different people get upset about all sorts of stimuli, from squirting blood to scraping nails to clicking computers to a microcosm inhabited by adorable inhuman sentients to whom no one gives proper moral consideration.

Actually, if I recall correctly, in the original Friendship is Optimal, once they were constructed, the non-uploaded people received the same moral consideration as those originally human. They were designed to fit into a preconceived world but they weren't slaves. I'm not quite sure whether that feels bad because it's actually bad or because it's so very different from our current methods of manufacturing minds (roll genetic dice, expose to local memes, hope for the best).

Comment author: DanielH 17 October 2013 06:19:29AM 1 point

I don't see it as bad at all and suspect most who do see it as bad do so because it's different from the current method. These minds are designed to have lives that humans would consider valuable, and that they enjoy for all their complexity. It is like making new humans by the usual method, but without the problems of abusive upbringing (the one pony with an abusive upbringing wasn't a person at the time) or other bad things that can happen to a human.

Comment author: Strilanc 07 March 2013 03:19:51AM 6 points

Well, I don't really remember the exact boundary between Friendship is Optimal vs Caelum est Conterrens, but...

  • People who don't want to upload are eternally harassed as society crumbles around them until they give in or die.
  • People are lied to and manipulated constantly.
  • Everything non-human (puppies, trees, stars, aliens capable of radio communication) is destroyed.
  • The uploading process seemed to be destructive only for convenience's sake.
Comment author: DanielH 17 October 2013 05:47:28AM 1 point

The aliens with star communication weren't destroyed; they were close enough to "human" that they were uploaded or ignored. What's more, CelestAI would probably satisfy (most of) the values of these aliens, who likely find "friendship" about as approximately-neutral as they and we find "ponies".

Comment author: Pavitra 08 March 2013 04:54:11AM * 4 points

Consider the epistemic state of someone who knows that they have the attention of a vastly greater intelligence than themselves, but doesn't know whether that intelligence is Friendly. An even-slightly-wrong CAI will modify your utility function, and there's nothing you can do but watch it happen.

Comment author: DanielH 17 October 2013 05:43:11AM 0 points

An even-slightly-wrong CAI won't modify your utility function because she isn't wrong in that way. An even-slightly-wrong CAI does do several other bad things, but that isn't one of them.

Comment author: chaosmage 20 March 2013 09:08:48AM 3 points

So you think you can guess that character's desire more accurately than a godlike AI with full access to her mind could?

Comment author: DanielH 17 October 2013 05:40:00AM 0 points

Yes. The author wrote that part because it was a horrifying situation. It isn't a horrifying situation unless the character's desire is to actually know. Therefore, the character wanted to actually know. I can excuse the other instances of lying as tricks to get people to upload, thus satisfying more values than are possible in 80-odd years; this lie, though, seems a bit out of character for Celestia.

Comment author: OnTheOtherHandle 25 March 2013 08:11:50PM * 2 points

I personally didn't find the actual experience at Equestria itself terrifying at all. It was a little disturbing at first, but almost all of that was sheer physical disgust or a knee-jerk sour grapes reaction. But it seems to avoid almost all of the pitfalls of failed Utopias everywhere:

  • You interact with real, sentient creatures who are independent and have their own values and desires. Thunder is capable of getting hurt, angry, and frustrated with his wife. Limeade is capable of feeling envious of her friend. They are in no way less than any complete human mind, and are given the same moral weight. They satisfy Lavender's values, but only as an effect of satisfying their own values, not as their primary directive. The love and friendship are real.
  • You're not isolated from other uploaded humans. Very few shards of Equestria grow around only one upload; most people interact with others from their Earthly lives.
  • It's not stagnant - jryy, guvf vf n ovg qrongnoyr, V xabj, orpnhfr bs gur Ybbc Vzzbegnyf, but there are always new things to learn and discover and opportunities for growth and enlightenment if you so choose.
  • It's not devoid of pain or sadness; it's only devoid of arbitrary pain or sadness. It recognizes that to live a fully human life, you need sadness and frustration sometimes; it just makes sure that the pain is, as Paul Graham said, the pain of running a marathon, not the pain of stepping on a nail. Not everything is perfect.

That said, there were moments of genuine horror, mainly stuff people have pointed out before:

  • Perhaps trillions and trillions of sentient alien species were wiped out to expand Celestia's empire.
  • The people left behind, who didn't upload, are living in a post-apocalyptic wasteland. Celestia was no doubt capable of arranging for functional societies and amenities for those who chose not to upload, but her primary directive was to satisfy values through friendship and ponies, and making life hell for those who held out made them more likely to upload quickly.
  • Fridge Logic: One of Síofra's coworkers said his version of the PonyPad game was like God of War; he brutally slaughtered and tortured ponies as part of Celestia's palace guard. Well, what would happen to his shard of Equestria when he uploaded? Would he be massacring living minds? Presumably the ponies in Horndog Dan's version of Equestria truly desired him and satisfied their own values by having sex with him, but what about the ones killed to satisfy the other colleague's desire for heroism? Presumably his values don't involve killing ponies who are essentially automata existing only to be killed; he wants to kill genuinely evil enemy minds, not drones. Also, how does Celestia manage to satisfy the values of sociopaths with "friendship and ponies"?
Comment author: DanielH 17 October 2013 05:33:43AM * 0 points

I suspect your fridge logic would be solved by fvzcyl abg trggvat qb jung ur jnagrq, hagvy ur jvfurq ng fbzr cbvag gung ur jbhyq abg or n fbpvbcngu. I'm more worried about the part you rot13ed, and I suspect it's part of what makes Eliezer consider it horror. I feel that's the main horror part of the story.
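(For anyone reading this outside the site: the rot13 passages in this thread are spoiler shields. A minimal way to decode one is Python's built-in rot13 codec; the sample string here is my own, not one of the spoilers.)

    import codecs

    # rot13 is its own inverse, so encoding and decoding are the same operation.
    print(codecs.decode("Uryyb, jbeyq!", "rot13"))  # -> Hello, world!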

There are also the issues of Celestia lying to Lavender when she clearly wants the truth on some level, the worry about those who would have uploaded (or uploaded earlier) if they had had a human option, and the lack of obviously-possible medical and other care for the unuploaded humans (whose values could be satisfied almost as much as the ponies'). These are instances where an AI is almost-but-not-quite Friendly, and they are probably the parts Eliezer is referring to, given his work on avoiding uFAI and almost-FAI. On the other hand, they are still far better than his default scenario, the no-AI scenario, or the Failed Utopia #4-2 scenario in the OP. EDIT: Additionally, in the story at least, everything except the lying was easily avoidable by having Celestia just maximize values while telling her that most people she meets early on will value friendship and ponies (and the lying at the end seems somewhat out of character because it doesn't actually maximize values).

One other thing some might find horrifying, but probably not Eliezer, is the "Does Síofra die" question. To me, and I presume to him, the answer is "surely not", and the question of ethics boils down to a simple check: "does there ever exist an observer moment without a successor; i.e., has somebody died?" Obviously some people do die preventable deaths, but Síofra isn't one of them.
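To make that check concrete, here is a toy sketch; the function and argument names are entirely hypothetical (nothing like this appears in the story), and it merely restates the criterion in code:

    def anybody_died(observer_moments, successor_of):
        # Hypothetical restatement of the criterion above: a death is an
        # observer moment with no successor. successor_of is assumed to
        # return None for a moment that has no successor.
        return any(successor_of(m) is None for m in observer_moments)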

Comment author: [deleted] 01 October 2012 02:35:36PM 6 points

I wonder if a Catholic priest is theologically allowed to kill sinners so long as they never say why.

I don't think they are, any more than they are allowed to kill anyone else.

In response to comment by [deleted] on Prices or Bindings?
Comment author: DanielH 05 October 2013 03:31:44AM 0 points

I don't know the Catholic Church's current take on this, but the Bible does require the death penalty for a large number of crimes, and Jesus agreed with that penalty. If there were no state-sponsored death penalty, and nobody else was willing, my religious knowledge fails me on whether this would forbid, allow, or require an individual or a Catholic priest to perform the execution, and I'm unsure if or how that's affected by the context of a confessional.

Comment author: christopherj 16 September 2013 03:32:54AM 1 point

Incidentally, it is currently possible to achieve total happiness, or perhaps a close approximation. Stimulation from a carefully implanted electrode in the right part of the brain will be more desirable than food to a starving rat, for example. While this part of the brain is called the "pleasure center", it might rather be about desire and reward instead. Nevertheless, pleasure and happiness are by necessity mental states, and it should be possible to create them artificially.

Why should a man who is perfectly content bother to get up to eat, or to achieve something? He may starve to death, but he would be happy to do so. Such a man will be content with his current state, which of course is contentment, and will not resent it at all. Even in a less invasive case, where a man is given almost everything he wants (though not so much that he never becomes dissatisfied with the amount of food in his belly and decides to put more in), there will still be higher-level motivations this man loses.

While I consider myself a utilitarian, and believe the best choices are those that maximize the values of everyone, I cannot agree with the above situation. For now, this is no problem because people in their current state would not choose to artificially fulfill their desires via electrode implants, nor is it yet possible to actually fulfill everyone's desires in the real world. I shall now go and rethink why I choose a certain path, if I cannot abide reaching the destination.

Comment author: DanielH 05 October 2013 01:34:18AM * 1 point

Welcome to Less Wrong!

First, let me congratulate you on stopping to rethink when you realize that you've found a seeming contradiction in your own thinking. Most people aren't able to see the contradictions in their beliefs, and when/if they do, they fail to actually do anything about them.

While it is theoretically possible to artificially create pleasure and happiness (which, around here, we call wireheading), converting the entire observable universe to orgasmium (maximum-pleasure-experiencing substance) goes a bit beyond that. In general, I think you'll find most people around here are against both, even though they'd call themselves "utilitarians" or similar. This is because there's more than one form of utilitarianism; many Less Wrongers believe other forms, like preference utilitarianism, are correct, instead of the original Millian hedonistic utilitarianism.

Edit: fixed link formatting
