Comments

DanielH · 10y · 10

That would probably be a good thing. I think the insurance company says it pays out in the event of legal death, so this would mean they'd have to try to get the person declared "not dead". By extension, all cryonics patients (or at least all future cryonics patients with similar-quality preservations) would not be legally dead either. If I were in charge of the cryonics organization this argument was used against, I would float the costs of the preservation and try to get my lawyers working on the same side as those of the insurance company. If they succeed, cryonics patients aren't legally dead and have more rights, which is well worth the cost of one guy's preservation + legal fees. If they fail, I get the insurance money anyway, so I'm only out the legal fees.

At least most cryonics patients have negligible income, so the IRS isn't likely to get very interested.

DanielH · 10y · 10

"Be sufficiently averse to the fire department and see if that suggests anything."

I do believe it suggests libertarianism. But I can't be sure, as I can't simply "be sufficiently averse" any more than I can force myself to believe something.

Still, that one seems to be a fairly reasonable sentence. If I were to learn only that one of these had been used in an LW article (by coincidence, not by a direct causal link), I would guess it was either that one or "I won't socially kill you".

DanielH · 11y · 10

I find it odd that Unicode doesn't have a Latin Letter Small Capital Q but does have all the others.
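(For anyone who wants to check this for themselves, here is a minimal Python sketch that simply asks the unicodedata module which of these character names exist; the exact result depends on the Unicode version bundled with your Python build.)

```python
import string
import unicodedata

# Try to look up "LATIN LETTER SMALL CAPITAL <letter>" for each ASCII letter.
# unicodedata.lookup() raises KeyError when no character has that name in
# the Unicode version shipped with this Python build.
missing = []
for letter in string.ascii_uppercase:
    try:
        unicodedata.lookup(f"LATIN LETTER SMALL CAPITAL {letter}")
    except KeyError:
        missing.append(letter)

print("Letters with no small-capital codepoint:", ", ".join(missing) or "none")
```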

DanielH · 11y · 10

I don't see it as bad at all, and I suspect most who do see it as bad do so because it's different from the current method. These minds are designed to have lives that humans would consider valuable, and that they themselves enjoy in all their complexity. It is like making new humans by the usual method, but without the problems of an abusive upbringing (the one pony with an abusive upbringing wasn't a person at the time) or the other bad things that can happen to a human.

DanielH · 11y · 10

The aliens with star communication weren't destroyed. They were close enough to "human" that they were uploaded or ignored. What's more, CelestAI would probably satisfy (most of) the values of these aliens, who probably find "friendship" just as approximately-neutral as they and we find "ponies".

DanielH · 11y · 00

An even-slightly-wrong CAI won't modify your utility function because she isn't wrong in that way. An even-slightly-wrong CAI does do several other bad things, but that isn't one of them.

DanielH · 11y · 00

Yes. The author wrote that part because it was a horrifying situation. It isn't a horrifying situation unless the character's desire is to actually know. Therefore, the character wanted to actually know. I can excuse the other instances of lying as tricks to get people to upload, thus satisfying more values than are possible in 80-odd years; that seems a bit out of character for Celestia though.

DanielH · 11y · 00

I suspect your fridge logic would be solved by fvzcyl abg trggvat qb jung ur jnagrq, hagvy ur jvfurq ng fbzr cbvag gung ur jbhyq abg or n fbpvbcngu. I'm more worried about the part you rot13ed, and I suspect it's part of what makes Eliezer consider it horror. I feel that's the main horror part of the story.

There are also the issues of Celestia lying to Lavender when she clearly wants the truth on some level, the worry about those who would have uploaded (or uploaded earlier) if they had had a human option, and the lack of obviously-possible medical and other care for the unuploaded humans (whose values could be satisfied almost as much as those of the ponies). These are instances of an AI that is almost-but-not-quite Friendly (and, in a simple fictional story rather than in everyday life, they could have been easily avoided by telling Celestia to "satisfy values" and that most people she meets initially want friendship and ponies). These are probably the parts Eliezer is referring to, given his work on avoiding uFAI and almost-FAI. On the other hand, they are far better than his default scenario, the no-AI scenario, and the Failed Utopia #4-2 scenario in the OP.

EDIT: Additionally, in the story at least, everything except the lying was easily avoidable by having Celestia just maximize values while telling her that most people she meets early on will value friendship and ponies (and the lying at the end seems to be somewhat out of character because it doesn't actually maximize values).

One other thing some might find horrifying, but probably not Eliezer, is the "Does Síofra die" question. To me, and I presume to him, the answer is "surely not", and the question of ethics boils down to a simple check "does there ever exist an observer moment without a successor; i.e., has somebody died?". Obviously some people do die preventable deaths, but Síofra isn't one of them.

DanielH · 11y · 10

I don't know the Catholic Church's current take on this, but the Bible does require the death penalty for a large number of crimes, and Jesus agreed with that penalty. If there were no state-sponsored death penalty, and nobody else was willing, my religious knowledge fails me on whether this would forbid, allow, or require an individual or a Catholic priest to perform the execution, and I'm unsure if or how that's affected by the context of a confessional.

DanielH · 11y · 30

Welcome to Less Wrong!

First, let me congratulate you on stopping to rethink when you realize that you've found a seeming contradiction in your own thinking. Most people aren't able to see the contradictions in their beliefs, and when/if they do, they fail to actually do anything about them.

While it is theoretically possible to artificially create pleasure and happiness (which, around here, we call wireheading), converting the entire observable universe to orgasmium (maximally pleasure-experiencing substance) seems to go a bit beyond that. In general, I think you'll find most people around here are against both, even though they'd call themselves "utilitarians" or similar. This is because there's more than one form of utilitarianism; many Less Wrongers believe other forms, like preference utilitarianism, are correct, instead of the original Millian hedonistic utilitarianism.

Edit: fixed link formatting
