Nominull3

Comments

I don't actually know that separate agree/disagree and low/high-quality buttons will be all that helpful; I'm not sure I personally can tell the difference very well.

Hardly any potential catastrophes actually occur. If you only plan for the ones that do occur (say, by waiting until they happen, or by flawlessly predicting the future), then you save a lot of mental effort.

Also, consider how the difference between a potential and an actual catastrophe affects how willing you will be to make a desperate effort to find the best solution.

I don't know about that, denis. The first part at least is a cute take on the "shut up and multiply" principle.

By my math, it should be impossible to faithfully serve your overt purpose while making any moves to further your ulterior goal. It has been said that you can only maximize one variable; if you consider factor A when making your choices, you will not fully optimize for factor B.

So I guess Lord Administrator Akon remains anesthetized until the sun roasts him to death? I can't decide if that's tragic or merciful, that he never found out how the story ended.

Anonymous: The blog is shutting down anyway, or at least receding to a diminished state. The threat of death holds no power over a suicidal man...

Personally, I side with the Hamburgereaters. It's just that the Babyeaters are at the very least sympathetic; I can see viewing them as people. As they've said, the Babyeaters even make art!


I agree with the President of Huygens; the Babyeaters seem much nicer than the Lotuseaters. Maybe that's just because they don't physically have the ability to impose their values on us, though.


"Normal" End? I don't know what sort of visual novels you've been reading, but it's rare to see a Bad End worse than the death of humanity.

Why do you consider a possible AI person's feelings morally relevant? It seems like you're making an unjustified leap of faith from "is sentient" to "matters". I would be a bit surprised to learn, for example, that pigs do not have subjective experience, but I go ahead and eat pork anyway, because I don't care about slaughtering pigs and I don't think it's right to care about slaughtering pigs. I would be a little put off by the prospect of slaughtering humans for their meat, though. What makes you instinctively put your AI in the "human" category rather than the "pig" category?
