
Comment author: Jack 06 November 2013 05:45:26PM *  3 points [-]

I don't think you have to be a moral anti-realist to believe the orthogonality thesis, but you certainly have to be a moral realist to not believe it.

Now if you're a moral realist and you try to start writing an AI you're going to quickly see that you have a problem.

    # Initiates AI morality
    action_array.sort(key=morality)
    do(action_array[0])

Doesn't work. So you have to start defining "morality", and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences. You end up with the only plausible option looking like: "Examine what humans would want if they were rational and had all the information you have." Seems to me that that is the moment you should just become a moral subjectivist -- maybe of the ideal observer theory variety.
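The two-line sketch above can be expanded into a runnable Python version of the same argument. Everything here is illustrative: `morality` and `choose_action` are hypothetical names, and the body of `morality` is deliberately left unimplemented, since defining it is exactly the open problem being described.

```python
# A naive "moral AI" decision loop, following the sketch above.
# The loop itself is trivial; the unsolved part is morality().

def morality(action):
    # Placeholder scoring function. No one knows how to fill this in
    # without disastrous consequences; the fallback discussed above is
    # the ideal-observer move: "what would rational, fully informed
    # humans want?"
    raise NotImplementedError("defining 'morality' is the open problem")

def choose_action(actions):
    # Pick the candidate action that morality() scores highest.
    return max(actions, key=morality)
```

Any call to `choose_action` immediately hits the `NotImplementedError`, which is the point: the sorting machinery is easy, and the entire difficulty lives in the scoring function.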

Now you might just believe the orthogonality thesis because you are a moral realist who doesn't believe in motivational internalism -- there are lots of ways to get there. But you can't be an anti-realist and ever even come close to making such a mistake.

Comment author: novalis 11 November 2013 06:20:38AM 1 point [-]

So you have to start defining "morality", and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences.

No, because it's possible that there genuinely is a total ordering, but that nobody knows how to figure out what it is. "No human always knows what's right" is not an argument against moral realism, any more than "No human knows everything about God" is an argument against theism.

(I'm not a moral realist or theist)

Comment author: Yvain 02 November 2013 07:01:08PM *  19 points [-]

Since high school I've been involved in conworlding - collaborative development of fictional worlds and societies, then setting stories or games in them.

Around 2005, I and some friends set a story in a culture with a goddess named Per married to a god named Elith. The religion gets called "Perelithve".

Skip to 2008. Neal Stephenson publishes Anathem. One throwaway reference mentions two of the avout, a woman named Per who marries a man named Elith. The marriage rite they invent gets called "Perelithian".

If names can have between 3 and 8 letters, and always alternate vowels and consonants, and 'th' counts as one sound, I calculate that the chance of someone who needs two names coming up with "Per" and "Elith" is on the order of one in a billion. The similarity in stories maybe adds another two or three bits of unlikelihood. If I've read 1000 novels, each of which has 100 minor characters, and my conworlds contain 1000 characters, then the odds aren't really that bad, maybe as high as 1%.
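The back-of-envelope estimate above can be checked with a short script. The sound counts (5 vowels, 21 consonants plus 'th' as a 22nd) are my own assumptions, not Yvain's. Note that under these assumptions the raw combinatorial space of alternating names comes out in the hundreds of millions; the one-in-a-billion pair estimate implicitly assumes a much smaller effective pool, which is reasonable since real name choices cluster on short, pronounceable forms.

```python
def count_names(min_sounds=3, max_sounds=8, vowels=5, consonants=22):
    """Count strings of min_sounds..max_sounds sounds that strictly
    alternate vowel and consonant ('th' is folded into the consonant
    count as a single sound)."""
    total = 0
    for k in range(min_sounds, max_sounds + 1):
        # A name of k sounds can start with either sound type.
        c_start = consonants ** ((k + 1) // 2) * vowels ** (k // 2)
        v_start = vowels ** ((k + 1) // 2) * consonants ** (k // 2)
        total += c_start + v_start
    return total

n = count_names()        # raw space of alternating names (~3.3e8)
p_pair = (1 / n) ** 2    # chance a specific (Per, Elith) pair recurs
```

The final 1% figure then comes from multiplying the pair probability by the number of opportunities (roughly 10^5 characters read times 10^3 characters invented); how large it is depends almost entirely on what effective name pool you assume.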

Still freaks me out, though.

Comment author: novalis 03 November 2013 07:35:21AM 10 points [-]

Unless you were both influenced by Perelandra, in which case the odds are much higher.

Comment author: brainoil 11 October 2013 07:43:10AM 1 point [-]

I'm not American, so I could be wrong about this. But at first glance it seems to me that Republicans have to run two vastly different campaigns, one in the primaries and the other in the general election, while Democrats could run pretty much the same campaign in the primaries as well as the general election. It seems to me that people like Rand Paul and Ted Cruz would get called flip-floppers if they were to run for the presidency in 2016, while Hillary Clinton would be able to run just one campaign.

Am I wrong, or is the Democratic party just larger than the Republican party and therefore more mainstream?

Comment author: novalis 11 October 2013 11:45:26PM 0 points [-]

while Democrats could run pretty much the same campaign in the primaries as well as the general election.

Democrats in fact differ between the primary and the general election. Off the top of my head, consider Obama's shift on FISA from 2007 (voted against) to post-primary 2008 (voted for telecom immunity).

Comment author: Lumifer 08 August 2013 01:04:33AM *  2 points [-]

The first one is rude, the second one is blunt, the third one is subtle/tactful/whatever.

We keep hitting the Typical Mind Fallacy over and over again :-)

Let me offer you my interpretation: the first one is blunt and might or might not be rude, depending on what the social norms and context are (and on whether thinking about frobnicating the beezlebib does provide incontrovertible evidence of severe brain trauma). The second one is not blunt at all, it's entirely neutral. The third one is a slightly more polite version of neutral. Your fourth example is still neutral, by the way -- there's nothing particularly blunt about explaining why something should not be done (or about using four-letter words, for that matter).

To contrast I'll offer my examples:

  • (rude) You are a moron and can't code your way out of a wet paper bag! Stuff your code where the sun don't shine and never show it to me again!
  • (blunt) This is not working and will never work. You need to scrap this entirely and start from scratch.
  • (subtle) While this is a valuable contribution, we would really appreciate it if you went and twiddled the bogon emitter for us while we try to deal with the beezlebib frobnication on our own.

What's so special about Linux?

It's only the most successful piece of open-source software ever. Otherwise, not much :-P

Comment author: novalis 11 October 2013 11:20:51PM -1 points [-]

I recently came across this, which seems to have some evidence in my favor (and some irrelevant stuff): http://www.bakadesuyo.com/2013/10/extraordinary-leader/

Comment author: shminux 19 September 2013 03:23:30PM *  1 point [-]

I would rather the QM sequence was shortened to the low-controversy subset Eliezer described in An Intuitive Explanation of Quantum Mechanics and checked for technical accuracy. The pure MWI advocacy part belongs in some appendix, and the outright nonsense like "a Bayesian can become as smart as Einstein" should be chucked and never mentioned again.

Comment author: novalis 20 September 2013 05:54:04PM 0 points [-]


Comment author: Kawoomba 24 August 2013 04:26:16PM 7 points [-]

how sure are you that whole brain emulations would be conscious

Slightly less than I am that you are.

is there anything we can do now to get clearer on consciousness?

Experiments that won't get approved by ethics committees (suicidal volunteers PM me).

Comment author: novalis 24 August 2013 05:13:24PM 8 points [-]

Before I tell my suicidal friends to volunteer, I want to make sure that your experimental design is good. What experiment are you proposing?

Comment author: PhilGoetz 19 August 2013 11:07:23PM 7 points [-]

In this analogy, they correspond to non-human animals, who have not yet expressed an opinion on the matter.

Comment author: novalis 20 August 2013 04:46:43AM 2 points [-]

You mean, have not yet expressed an opinion in a way that you understand.

Anyway, the fact that slaves and ex-slaves did advocate for the rights of slaves indicates that closeness to a problem does not necessarily lead one to ignore it.

Comment author: shminux 19 August 2013 04:07:56PM 1 point [-]

Killing it reduces the overall suffering, since its quality of life is well below the "barely worth living" level, with no hope of improvement.

Comment author: novalis 20 August 2013 04:42:40AM 0 points [-]

That doesn't work for preference utilitarians (it would strongly prefer to remain alive).

Comment author: MugaSofer 18 August 2013 10:46:45PM 5 points [-]

I kinda think the opposite is true. It's people who live in cities who join PETA. Country folk get acclimatized to commoditizing animals.

This sounds right to me. After all, you don't find plantation owners agitating for the rights of slaves. No, it's people who live far away from actual slaves, meeting the occasional lucky black guy who managed to make it in the city and noting that he seems morally worthy.

Comment author: novalis 19 August 2013 06:31:15AM 1 point [-]

Um, what about the actual slaves and ex-slaves?

Comment author: shminux 18 August 2013 04:29:46PM *  3 points [-]

I am mildly consequentialist, but not a utilitarian (and not in the closet about it, unlike many pretend-utilitarians here), precisely because any utilitarianism runs into a repugnant conclusion of one form or another. That said, it seems that the utility-monster type RC is addressed by negative utilitarians, who emphasize reduction in suffering over maximizing pleasure.

Comment author: novalis 19 August 2013 06:29:35AM 2 points [-]

Isn't there an equivalent negative utility monster, who is really in a ferociously large amount of pain right now?
