All of Manon_de_Gaillande's Comments + Replies

I talked to other people about such calls. They called me evil. Apparently, people don't see the proposition "Aid is good" as following from "Aid helps people" (a purely factual claim) and "Helping people is good" (which only evil people deny); it's all in the same mental bucket. So we're pretty much screwed explaining it. Moreover, even when people finally get the distinction, the claims tend to be rejected at the speed of thought - because we all know "Aid is good".

I'm somewhat puzzled by how all the influences you quote are fiction. I read and watched fiction as a child, and the only obvious consequence for my personality has been 1) extremely distorted - I can recognize the influence because I remember it, but you couldn't look at that part of my personality and say "Aha, that came from Disney movies!", 2) tossed out of the window in a recent crisis of faith, and 3) more influenced by real life than by fiction. I've been recalculating a lot of things since as young as 4 (most of which ended up wrong because of lack... (read more)

I find this harder to read. The arguments are obscured. The structure sucks; claims are not isolated into neat little paragraphs so that I can stop and think "Is this claim actually true?". It's about you (why you aren't Wise) rather than about the world (how Wisdom works).

I've rarely heard "You'll understand when you're older" on questions of simple fact. Usually, it's uttered when someone who claims to be altruistic points out that someone else's actions are harmful. The Old Cynic then tells the Young Idealist: "I used to be like you, but then I realized you've got to be realistic; you'll understand when you're older that you should be more selfish." But they never actually offer an object-level argument, or even seem to have changed their minds for rational reasons - it looks like the Selfishness Fairy just... (read more)

Oh, from the Normal Ending and from Eliezer's comments, I'm starting to see why the Superhappies are not so right after all, what they lack, why they are alien. I think this should have been explained in more detail in the story, because I initially failed to see their offer as anything but good, let alone as bad enough to kill yourself over. I want untranslatable 2!

Still, if I had been able to decide on behalf of humanity, I would have tried to make a deal - not outright accepted their offer, but negotiated to keep more of what matters to us, maybe by adopting more o... (read more)

Wait. Aren't they right? I don't like that they don't terminally value sympathy (though they're pretty close), but that's beside the point. Why keep the children suffering? If there is a good reason - that humans need a painful childhood to explore, learn and develop properly, for example - shouldn't the Superhappies be convinced by it? They value things other than a big orgasm - they grow and learn - they even tried to forsake some happiness for more accurate beliefs - if, despite this, they end up preferring stupid happy superbabies to painful growth, it's likely we would agree. I don't want to just tile the galaxy with happiness counters - but if collapsing into orgasmium means the Superhappies, sign me up.

8Tamfang
"Every human culture had expended vast amounts of intellectual effort on the problem of coming to terms with death. Most religions had constructed elaborate lies about it, making it out to be something other than it was – though a few were dishonest about life, instead. But even most secular philosophies were warped by the need to pretend that death was for the best. ¶ It was the naturalistic fallacy at its most extreme — and its most transparent, but that didn't stop anyone. Since any child could tell you that death was meaningless, contingent, unjust, and abhorrent beyond words, it was a hallmark of sophistication to believe otherwise." —Margit in 'Border Guards' by Greg Egan

@Jotaf: No, you misunderstood - guess I got double-transparent-deluded. I'm saying this:

  • Probability is subjectively objective
  • Probability is about something external and real (called truth)
  • Therefore you can take a belief and call it "true" or "false" without comparing it to another belief
  • If you don't match truth well enough (if your beliefs are too wrong), you die
  • So if you're still alive, you're not too stupid - you were born with a smart prior, so you're justified in having it
  • So I'm happy with probability being subjectively objective, a

... (read more)

@Eliezer: Can you expand on the "less ashamed of provincial values" part?

@Carl Shulman: I don't know about him, but for myself, HELL YES I DO. Family - they're just randomly selected by the birth lottery. Lovers - falling in love is some weird stuff that happens to you regardless of whether you want it, reaching into your brain to change your values: like, dude, ew - I want affection and tenderness and intimacy and most of the old interpersonal fun and much more new interaction, but as far as I'm concerned, romantic love can go right out the window. Friends - I... (read more)

Oh please. Two random men are more alike than a random man and a random woman, okay, but seriously, a huge difference that makes it necessary to either rewrite minds to be more alike or separate them? First, anyone who prefers to socialize with the opposite gender (ever met a tomboy?) is going to go "Ew!". Second, I'm pretty sure there are more than two genders (if you want to say genderqueers are lying or mistaken, the burden of proof is on you). Third, neurotypicals can get along with autists just fine (when they, you know, actually try), and t... (read more)

5MugaSofer
Leaving aside the fact that this was a failed utopia, I am troubled by your comment "neurotypicals can get along with autists just fine (when they, you know, actually try), and this makes the difference between genders look hoo-boy-tiiiiny." While it appears to be true, it is also true that even a minor change could easily render cooperation with another mind extremely difficult. Diversity has its cost. Freedom of speech means you can't arrest racists until they actually start killing Jews, for example.
1A1987dM
For any two groups A and B, two random members of A are more alike than a random member of A and a random member of B, aren't they?
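Not necessarily; it depends on the groups' spreads. A quick numerical sketch (the distributions, the single-trait notion of "alike", and all numbers below are assumptions invented purely for illustration) shows a case where a random A-B pair is more alike than a random A-A pair:

```python
import random
import statistics

random.seed(0)
N = 100_000

# Hypothetical toy model: "alikeness" is the (negative) absolute
# difference on one numeric trait. Group A is spread out; group B
# is tightly clustered near A's mean.
a1 = [random.gauss(0, 1) for _ in range(N)]
a2 = [random.gauss(0, 1) for _ in range(N)]
b  = [random.gauss(0, 0.1) for _ in range(N)]

within_a = statistics.mean(abs(x - y) for x, y in zip(a1, a2))
between  = statistics.mean(abs(x - y) for x, y in zip(a1, b))

print(f"mean difference within A:      {within_a:.3f}")  # about 1.13
print(f"mean difference between A, B:  {between:.3f}")   # about 0.80
```

Roughly, two random members of A are reliably more alike only when the gap between the group means is large relative to A's internal spread.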
7TuviaDulin
The clever fool doesn't seem to have taken these facts into account. He was a fool, after all.

I don't see how removing getting-used-to is close to removing boredom. IANAneurologist, but on a surface level, they do seem to work differently - boredom is reading the same book every day and getting tired of it; habituation is getting a new book every day and no longer thinking "Yay, new fun".

I'm reluctant to keep habituation because, at least in some cases, it is evil. When the emotion is appropriate to the event, it's wrong for it to diminish - you have a duty to rage against the dying of the light. (Of course we need it for survival, we ca... (read more)

I'm going to stick my neck out. Eliezer wants everyone to live. Most people don't.

People care about their own and their loved ones' immediate survival. They discount heavily for long-term survival. And they don't give a flying fuck about the lives of strangers. They say "Death is bad", but the social norm is not "Death is bad"; it's "Saying 'Death is bad' is good".

If this is not true, then I don't know how to explain why they dismiss cryonics out of hand with arguments about how death is not that bad that are clearly ... (read more)

-1MugaSofer
If this is true, then I still don't know how to explain it, do I? If bias isn't enough to explain people not being horrified at the lack of universal cryonics, and you must therefore resort to "they secretly don't care", then you still have to explain not being horrified by the deaths of their loved ones. Or rather, being visibly horrified, but not taking this option to prevent it. Why would bias be enough to explain the one but not the other? And you have to explain how they got so good at lying, too.

Actually, the Mystic Eyes of Depth Perception are pretty underwhelming. You can tell how far away things are with one eye most of the time. The difference is big enough to give a significant advantage, but nothing near superpower level. My own depth perception is crap (better than one eye though), and I don't often bump into walls.

Nazir Ahmad Bhat, you are missing the point. It's not a question of identity, like which ice cream flavor you prefer. It's about truth. I do not believe there is a teapot orbiting around Jupiter, for the various reasons explained on this site (see Absence of evidence is evidence of absence and the posts on Occam's Razor). You may call this a part of my identity. But I don't need people to believe in a teapot. Actually, I want everyone to know as much as possible. Promoting false beliefs is harming people, like slashing their tires. You don't believe in a flying teapot: do you need other people to?

Eliezer, sure, but that can't be the whole story. I don't care about some of the stuff most people care about. Other people whose utility functions differ from the social norm in comparably large but different ways are called "psychopaths", and most people think they should either adopt common morals or be removed from society. I agree with this.

So why should I make a special exception for myself, just because that's who I happen to be? I try to behave as if I shared common morals, but it's just a gross patch. It feels tacked on, and it is.

I expected (thou... (read more)

Constant: "Give a person power, and he no longer needs to compromise with others, and so for him the raison d'etre of morality vanishes and he acts as he pleases."

If you could do so easily and with complete impunity, would you organize fights to the death for your pleasure? Would you even want to? Moreover, humans are often tempted to do things they know they shouldn't, because they also have selfish desires. AIs don't, unless you build such desires into them. If they really do ultimately care about humanity's well-being, and don't take any pleasure in making people obey them, they will keep acting on that care.

I'm confused. I'll try to rephrase what you said, so that you can tell me whether I understood.

"You can change your morality. In fact, you do it all the time, when you are persuaded by arguments that appeal to other parts of your morality. So you may try to find the morality you really should have. But - "should"? That's judged by your current morality, which you can't expect to improve by changing it (you expect a particular change would improve it, but you can't tell in what direction). Just like you can't expect to win more by changing yo... (read more)

This argument sounds too good to be true - when you apply it to your own idea of "right". It also works for, say, a psychopath unable to feel empathy who gets a tremendous kick out of killing. How is there not a problem with that?

0MarsColony_in10years
Well, it isn't the same as a morality that is written into the fabric of the universe or handed down on a stone tablet or something, but it is the "best" we have or could hope to have (whatever "best" even means, in this case). It evaluates the same (or at least the Coherent Extrapolated Volition converges) for all psychologically healthy humans. But if someone has a damaged mind, or their species simply evolved a different set of values, then they would have their own morality, and you could no more argue human morals into them than you could into a rock.

Well, it certainly is a little dissatisfying. It's much better than the nihilistic alternative, though. However, those are coping problems, not problems with the logic itself. If the sky is green then I desire to believe that the sky is green.

No! The problem is not reductionism, or that morality is or isn't about my brain! The problem is that what morality actually computes is "What should you feel-moral about in order to maximize your genetic fitness in the ancestral environment?". Unlike math, which is more like "What axioms should you use in order to develop a system that helps you in making a bridge?" or "What axioms should you use in order to get funny results?". I care about bridges and fun, not genetic fitness.

Actually, "Whatever turns y'all on" is... (read more)

Folks, we covered that already! "You should open the door before you walk through it" means "Your utility function ranks 'open the door, then walk through it' above 'walk through the door without opening it'". YOUR utility function. "You should not murder" is not just reminding you of your own preferences. It's more like "(The 'morality' term of) my utility function ranks 'you murder' below 'you don't murder'", and most "sane" moralities tend to regard "this morality is universal" as a good thing.
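Read literally, that cashes out "should" as a comparison between a utility function's values on outcomes. A minimal sketch of that reading (the outcomes and utilities below are invented for illustration):

```python
# A toy rendering of "should" as a utility-function comparison.
# The outcomes and numbers are hypothetical, chosen for illustration.
my_utility = {
    "open the door, then walk through it": 1.0,
    "walk through the door without opening it": -5.0,
}

def should(utility, option_a, option_b):
    """'You should do A rather than B' means the speaker's utility
    function ranks A above B."""
    return utility[option_a] > utility[option_b]

print(should(my_utility,
             "open the door, then walk through it",
             "walk through the door without opening it"))  # True
```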

Caledonian: 1) Why is it laughable? 2) If hemlines mattered to you as badly as a moral dilemma, would you still hold this view?

I'm pretty sure you're doing it wrong here.

"What if the structure of the universe says to do something horrible? What would you have wished for the external objective morality to be instead?" Horrible? Wish? That's certainly not according to objective morality, since we've just read the tablet. It's just according to our intuitions. I have an intuition that says "Pain is bad". If the stone tablet says "Pain in good", I'm not going to rebel against it, I'm going to call my intuition wrong, like "Killing is good", &qu... (read more)

-1[anonymous]
Why? I thought psychopaths were bad because they hurt people, not because they construct their own moral philosophies.

I'm surprised no one seems to doubt HA's basic premise. It sure seems to me that toddlers display enough intelligence (especially in choosing what they observe) to make one suspect self-awareness.

I'm really glad you will write about morality, because I was going to ask. Just a data dump from my brain, in case anyone finds this useful:

Obviously, by "We should do X" we mean "I/We will derive utility from doing X", but we don't mean only that. Mostly we apply it to things that have to do with altruism - the utility we derive from helping o... (read more)

0Peterdjones
Obviously the moral "should" is not the instrumental "should".

kevin: Eliezer has written about that already. The AI could convince any human to let it out. See the AI box experiment ( http://yudkowsky.net/essays/aibox.html ). If it was connected to the Internet, it could crack the protein folding problem, find out how to build protein nanobots (to, say, build other nanobots), order the raw materials (such as DNA strings) online, and convince some guy to mix them ( http://www.singinst.org/AIRisk.pdf ). It could think of something we can't even think of, like we could use fire if we were kept in a wooden prison (same paper).

Your main argument is "Learning QM shouldn't change your behavior". This is false in general. If your parents own slaves and you've been taught that people in Africa live horrible lives and slavery saves them, and you later discover the truth, you will feel and act differently. Yet you shouldn't expect your life far away from Africa to be affected: it still adds up to normality.

Some arguments are convincing ("you can't do anything about it so just call it the past" and "probability"), but they may not be enough to support your conclusion on their own.

Why does the area under a curve equal the antiderivative? I've done enough calculus to suspect I somehow know the reason, but I just can't quite pinpoint it.
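For what it's worth, here is the standard sketch of the reason (the Fundamental Theorem of Calculus stated informally; nothing specific to this thread): define the "area-so-far" function and watch what a small widening does to it.

```latex
A(x) = \int_a^x f(t)\,dt,
\qquad
A(x+h) - A(x) \approx f(x)\,h \ \text{for small } h
\quad\Longrightarrow\quad
A'(x) = \lim_{h\to 0}\frac{A(x+h)-A(x)}{h} = f(x).
```

The sliver of area added between x and x+h is roughly a rectangle of height f(x) and width h, so the area function is itself an antiderivative of f; and since any two antiderivatives differ only by a constant, the area from a to b equals F(b) - F(a) for any antiderivative F.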

For some reason, this view of time fell nicely in place in my mind (not "Aha! So that's how it is?" but "Yes, that's how it is."), so if it's wrong, we're a lot of people to be mistaken in the same way.

But that doesn't dissolve the "What happened before the Big Bang?" question. I point at our world and ask "Where does this configuration come from?", you point at the Big Bang, I ask the same question, and you say "Wrong question." Huh?

6imaxwell
Super-late answer! If you ask about a configuration X, "Where does this configuration come from?" I will point at a configuration W for which the flow from W to X is very high. If you ask, "Well, where does W come from?" I will point to a configuration V for which the flow from V to W is very high. We can play this game for a long time, but at each iteration I will almost certainly be pointing to a lower-entropy configuration than the last. Finally I may point to A, the one-point configuration. If you ask, "Where does A come from?" I have to say, "There is nowhere it comes from with any significant probability." At best I can give you a uniform distribution over all configurations with epsilon entropy. But all this means is that no configuration has A in its likely future. The thing is, it doesn't make sense to ask, external to the universe itself, for the probability of a configuration like A: you can only ask the probability that a sufficiently long path passing through some specific configuration or set of configurations will have A in its future, or in its past. The probability of the former is probably 0, so we don't expect a singularity in the future. That of the latter is probably 1, so we do expect a singularity in the past.

Eliezer: "A little arrow"? Actual little arrows are pieces of wood shot with a bow. Ok, amplitudes are a property of a configuration you can map in a two-dimensional space (with no preferred basis), but what property? I'll accept "Your poor little brain can't grok it, you puny human." and "Dunno - maybe I can tell you later, like we didn't know what temperature was before Carnot.", but a real answer would be better.

I am not smarter than that. But you might (just might) be. "Eliezer says so" is strong evidence for anything. I'm too stupid to use the full power of Bayes, and I should defer to Science, but Eliezer is one of the few best Bayesian wannabes - he may be mistaken, but he isn't crazily refusing to let go of his pet theory. Still not enough to make me accept MWI, but a major change in my estimate nonetheless.

As a side note, what actually happens in a true libertarian system is Europe during the Industrial Revolution.

I don't believe you.

I don't believe most scientists would make such huge mistakes. I don't believe you have shown all the evidence. This is the only explanation of QM I've been able to understand - I would have a hard time checking. Either you are lying for some higher purpose or you're honestly mistaken, since you're not a physicist.

Now, if you have really presented all the relevant evidence, and you have not explained QM in a way which makes some interpretation sound more reasonable than it is (what is an amplitude exactly?), then the idea of a single world is preposterous, and I really need to work out the implications.

"My curiosity doesn't suddenly go away just because there's no reality, you know!" Eliezer, I want to high-five you.

Does this "Many worlds" thing imply that there exists (in some meaningful sense) other worlds alongside us where whatever quantum events didn't happen here happened? (If not, or if this is a wrong question, disregard the following.)

What are the moral implications? If some dictator says "If this photon passes through this filter (which it can do with probability 0.5), I will torture you all; if it is absorbed, I will d... (read more)

5DanArmak
Disclaimer: I don't understand QM on a formal level. But here's what I got out of reading the Sequences and other LW discussions on the subject.

They exist, in a special sense of the word. Instead of arguing about definitions of existence, measure of reality, etc., let's talk about the experimental consequences. Which are: you're not going to interact with them ever again. They exist at most as much as people in our own branch who are outside our Hubble radius.

Should you still grieve for them? That's for you to decide, but I do make a suggestion: grief is in part a useful adaptation. It may help motivate you to prevent more future grief. If you cannot prevent future grief-causing events (because quantum torture branches will always keep splitting off, and to the extent you cannot influence their measure), then that grief is useless. Eliminating it (not grieving) makes you better off and no-one else worse off, so in such cases I suggest you do not grieve.

Again, there may well be good quantum theoretical arguments against quantum suicide. But here's a more practical one. Suppose it works. It has been suggested that in the vast majority of the branches in which you survive, you do not survive unscathed: you survive hurt, reduced, as an invalid, etc. If you rig up a gun to shoot you, there are some branches where it fails to shoot entirely, but there are many more branches where it misses just enough that you live on as a cripple. Quantum suicide is dangerous like an outcome pump.

In principle, any world whose past evolution does not contradict the laws of physics exists as a branch. Most people try to avoid the unpleasant implications by assigning significance to the weight of those branches. I find this a bit problematic when applied to branches that are not in our future: the Born probabilities govern the branch we expect to witness, but we don't understand why or how, so why should we say they govern some "reality measure" of branches we cannot interact wi
1MarkusRamikin
We did?
3DanielLC
People were, in fact, tortured. You can grieve for them if you wish. That is also a question of how branching world-lines work. I'd say no. Identity is an illusion. Everyone only exists for an instant, and a "person" is actually a world-line composed of tons of different people who all think they're the same person. If you perform the experiment, there will be fewer people who think they're you. Every world exists, but some exist more than others. Don't take that at face value. All it means is that not all of the worlds are equally likely. I have no idea why. Just rest assured that the other worlds exist somehow.

If people do have a religion-shaped hole (I can tell at least some do), what are they supposed to do about it? Ignoring it to focus on real things will not plug the hole. Modifying your brain or creating a real godlike thing is not possible yet. So what are we to do?

"Sure, someone else knows the answer - but back in the hunter-gatherer days, someone else in an alternate Earth, or for that matter, someone else in the future, knew what the answer was."

I think the difference is that someone else knows the answer and can tell you.

What's the bad thing that happens if I do 35? It's a mistake, but how will it prevent me from using words correctly? I'd still be able to imagine a triangular lightbulb.

You lost me there.

1) If Alice and Bob observe the system in your first example, and Alice decides to keep track precisely of X's possible states while Bob just says "2-8", the entropy of X+Y is 2 bits for Alice and 2.8 for Bob. Isn't entropy a property of the system, not the observer? (This is the problem with "subjectivity": of course knowledge is physical, it's just that it depends on the observer and the observed system instead of just the system.)

2) If Alice knows all the molecules' positions and velocities, a thermometer will still ... (read more)
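The observer-dependence in (1) is easy to exhibit: Shannon entropy is a property of a probability distribution, i.e. of a state of knowledge about the system, not of the system alone. A minimal sketch (the two distributions below are illustrative stand-ins, not the post's exact setup: 4 equally likely states for Alice, the 7 states "2-8" for Bob, which happens to reproduce the 2-vs-2.8 numbers above):

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Same physical system, two different states of knowledge about it.
# (Hypothetical distributions chosen for illustration.)
alice = [1/4] * 4   # Alice has narrowed it down to 4 equally likely states
bob   = [1/7] * 7   # Bob only knows "somewhere in 2-8": 7 equally likely states

print(f"entropy for Alice: {entropy(alice):.2f} bits")  # 2.00
print(f"entropy for Bob:   {entropy(bob):.2f} bits")    # 2.81
```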

I think I've found one of the factors (besides scope insensitivity) involved in the intuitive choice: in real life, a small amount of harm inflicted n times on one person has negative side-effects which don't happen when you inflict it once on each of n persons. Even though there aren't any in this thought experiment, we are so used to them that we probably take them into account (at least I did).

Maybe the reason we tend to choose bet 2 over bet 1 (before computing the actual expected winnings) is not the higher probability of winning, but the smaller sum we can lose (either the sum we expect to lose or the worst-case loss, I'm not sure which). So the bias here could be more something along the lines of status quo bias or endowment effect than a need for certainty.

I can only speak for myself, but I do not intuitively value certainty/high probability of winning, while I am biased towards avoiding losses.
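For concreteness, here is how the two decision rules can come apart. The bets below are invented stand-ins, not the post's originals:

```python
# Two hypothetical bets (numbers made up for illustration) where
# "maximize expected value" and "minimize the worst-case loss"
# point in opposite directions.
bets = {
    "bet 1": [(0.97, +100), (0.03, -500)],  # higher EV, bigger possible loss
    "bet 2": [(0.50, +30),  (0.50, -10)],   # lower EV, smaller possible loss
}

for name, outcomes in bets.items():
    ev = sum(p * v for p, v in outcomes)
    worst = min(v for _, v in outcomes)
    print(f"{name}: EV = {ev:+.2f}, worst case = {worst:+d}")

# bet 1: EV = +82.00, worst case = -500
# bet 2: EV = +10.00, worst case = -10
# A loss-averse chooser picks bet 2 despite its lower expected value.
```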

Actually, the last statement (about spankings instead of jails) doesn't sound foolish at all. We abolished torture and slavery, we have replaced a lot of punishments with softer ones, we are trying to make executions painless and more and more people are against the death penalty, we are more and more concerned about the well-being of ever larger groups (white men, then women, then other "races", then children), we pay attention to personal freedom, we think inmates are entitled to human rights, and if we care more about preventing further misdeeds than about punishing the culprit, jails may not be efficient. I doubt spanking will replace jail, but I'd bet on something along these lines.

2AlexanderRM
I think that the idea of spanking replacing jail is very unlikely, but it doesn't sound as absurd as most of the others on the list. The thing that makes it sound absurd is the idea that our grandchildren will find it EVIL to put criminals in jail instead of spanking them. Imagine people 100 years from now pointing to revered historical figures and saying they supported the use of jails, the way people point out that some of America's Founding Fathers owned slaves.

Although... if you accept the premise of other punishments replacing jail, I wouldn't be entirely surprised by the idea that people would come to regard jail as evil, and that many people would be reluctant to accept it even in a discussion of situations where the replacement punishments weren't practical.

I think that also makes a rather useful way for me, personally at least, to consider in a different light the idea of whether modern ideas might not actually be better than past ones. In this thought experiment, our descendants genuinely do have a society better than ours, but the moral standards that have resulted from it aren't necessarily better than ours, even though they think they are.

This pressure exists once religion is already in place, but doesn't explain why it appears and spreads.

However, selecting for cheats doesn't matter, since they must teach their religion to their children in order to properly simulate faith. Moreover, I suspect that most people who didn't actively choose their religion but passively accepted it as children don't fully believe it.