All of Nate_Gabriel's Comments + Replies

P(Supernatural): 6.68 + 20.271 (0, 0, 1) [1386]

P(God): 8.26 + 21.088 (0, 0.01, 3) [1376]

The question for P(Supernatural) explicitly said "including God." So either LW assigns a median probability of at least one in 10,000 that God created the universe and then did nothing, or there's a bad case of conjunction fallacy.

0[anonymous]
Something else I noticed:

Agnostic: 156, 10.4%
Lukewarm theist: 44, 2.9%
Deist/pantheist/etc.: 22, 1.5%
Committed theist: 60, 4.0%

A true agnostic should be at 50% on the probability of God, but we'll say 25-75% is reasonable. A lukewarm theist should be at 50-100%. I don't like the deist wording, but we'll say 50-100% for them, and 75-100% for the committed theists. Taking the bottom of each range, we get:

10.4% × 0.25 + 2.9% × 0.5 + 1.5% × 0.5 + 4.0% × 0.75 = 7.8% P(God) as our lower bound

Compared to the 8.26% actual.

That's assuming all the atheists assigned a 0% probability to God. So it seems everybody is very close to their minimum on this; likely even below the minimum for some of them. My guess is a lot of people have some major inconsistencies in their views on God's existence.
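A minimal sketch of that lower-bound arithmetic (the group shares and minimum probabilities are the ones quoted in the comment above; everyone else is assumed to answer 0%):

```python
# Lower bound on the survey-wide mean P(God), assuming each group answers at
# the very bottom of its plausible range and all other respondents answer 0%.
groups = {
    "Agnostic":         (10.4, 0.25),  # (% of respondents, assumed minimum P(God))
    "Lukewarm theist":  (2.9,  0.50),
    "Deist/pantheist":  (1.5,  0.50),
    "Committed theist": (4.0,  0.75),
}

lower_bound = sum(share * min_p for share, min_p in groups.values())
print(f"Lower bound on mean P(God): {lower_bound:.1f}%")  # 7.8%, vs. 8.26% reported
```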
4Scott Garrabrant
Conjunctions do not work with medians that way. From what you quoted, it is entirely possible that the median probability for that claim is 0. You can figure it out from the raw data.
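A toy illustration of that point (the three hypothetical respondents and their numbers are invented): the median reported P(God) can be 0.01% while the median probability for the narrower claim is 0, with every individual respondent staying internally consistent.

```python
import statistics

# Three hypothetical respondents, probabilities in percent.  Each one gives the
# narrower claim ("God created the universe and then did nothing") no more
# probability than P(God), so nobody commits a conjunction fallacy.
p_god           = [0.0, 0.01, 80.0]
p_god_then_idle = [0.0, 0.00, 40.0]

print(statistics.median(p_god))            # 0.01 -> median P(God) matches the survey
print(statistics.median(p_god_then_idle))  # 0.0  -> median for the narrower claim is 0
```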
9epursimuove

An infinite number of mathematicians walk into a bar. It collapses into a gravitational singularity.

6Lumifer
...and starts to emit Stephen Hawkings

I tried something vaguely similar with completely different assumptions. I basically ignored the number of animal deaths in favor of minimizing the amount of animal torture. The whole thing was based on how many animals it takes before empathy kicks in, rather than an actual utility comparison.

I instinctively distrust animal-to-human utility conversions, but the ideal version of your method is better than the ideal version of mine. I do recommend that meat eaters do what I did to establish an upper bound, though. It might even convince someone to change th... (read more)

Do you think we currently need more inequality, or less?

1Ixiel
In the US I would say more-ish. I support a guaranteed basic income, and any benefit to one person or group (benefitting the bottom without costing the top would decrease inequality but would still be good), but think there should be a smaller middle class. I don't know enough about global issues to comment on them.

Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband's right to punish his wife however he wanted.

2RowanE
I think that progress is specifically what he's on about in his third point. It's standard neoreactionary stuff; there's a reason they're commonly regarded as horribly misogynist.

I once walked around a university campus convincing people that it's impossible to see the Moon during daylight hours. I think it was about 2/3 who believed me, at least until I pointed up.

Just that moment. I definitely didn't follow any of its implications. (Other than "if I say this then people will react as if I said an obvious true thing.")

2Sarunas
In my case such "short term mistakes" are often caused by fatigue. It's as if my brain enters some kind of energy-saving mode and sanity checks are deemed not quite as necessary as some other things. In one case I somehow managed not to notice a contradiction in the idea that a cube has four sides, and because of that I failed to solve a problem in a school mathematics competition (it must have been one of the problems, as I must have been really tired by then). It seems to me that sanity checks are analogous to redundancy and duplication of components in engineering. Therefore it is not surprising that when mental energy is very low my brain may decide that these safety measures are not necessary (of course, they aren't until they are).

In another case, another student asked me how to solve a particular exercise, saying that he had tried to use a certain lemma he thought might be useful but was unable to apply it. It was only after some time of trying to solve it myself that I got the idea to check whether the statement of the lemma was correct (it wasn't). It seems that in this energy-saving mode I did not think about what exactly was the best thing to check given the fact that he had tried and failed to solve it; instead I tried to solve it myself without a single thought that the lemma's statement might be incorrect. In other words, my brain did not try to estimate the conditional expectations of the possible actions given all the facts I had; it "calculated" only expectations for the general case, in which lemmas printed in a textbook are usually stated correctly (that is, I did not take all the information into account when deciding what I should do next). Even if it still wasn't the more likely explanation, the idea that the lemma was wrong should at least have occurred to me (and it would have been easier to check on a toy example). Of course, this seems to be a "hybrid" mistake, as it seems to be caused by both the failure of a heuristic (trusting mathematics textbooks) in this particular

I once believed that six times one is one.

I don't remember how it came up in conversation, but for whatever reason numbers became relevant and I clearly and directly stated my false belief. It was late, we were driving back from a long hard chess tournament, and I evidently wasn't thinking clearly. I said the words "because of course six times one is one." Everyone thought for a second and someone said "no it's not." Predictable reactions occurred from there.

The reason I like the anecdote is because I reacted exactly the same way I woul... (read more)

0polymathwannabe
I suspect you were saying six times one, but your brain was thinking of one to the sixth power, which indeed is one.
8A1987dM
Except when it doesn't.
7Punoxysm
One of the smartest people in my high school spent a class arguing that there were 4^20 possibilities for a sequence of 4 amino acids, when in fact it was 20^4. Not quite as elementary as yours, but our brains all play tricks on us.
5ChristianKl
How long do you think you had the wrong belief? Was it just something that happened in that moment, or did you carry that belief around with you for longer?

The main prediction that comes to mind is that if Christianity is true, one would expect substantially more miracle claims by Christians (legitimate claims plus false ones) than by any other religion (false claims only).

This also assumes there isn't some saturation point of people only wanting to talk about so many miracles. (Ignoring buybuydandavis' point, which probably interacts with this one in unfortunate ways.) If people only forward X annoying chain emails per month, you'd expect X from each religion. The best we can hope for is the true religion having on average slightly more plausible claims since some of their miracles are true.

3Desrtopa
I certainly can't say this is the best we can hope for; the best case scenario would be one where practically nobody talks about the value of miracles as evidence for an interventionist deity the way practically nobody talks about the value of working automobiles as evidence for our models of thermodynamics; the evidence is simply too obvious to be worth belaboring.

It wasn't actually a muscular condition. My friend is surprisingly unwilling to spread this around and only told me under the extreme circumstances of me telling her I might be about to become an atheist. I wanted to change enough details that if she read this on the Internet she wouldn't know it was about her.

7buybuydandavis
So there was a clear potential payoff to her desires in giving you a miracle story - keeping you in the fold. I don't question her good will toward you, but I've found that the correspondence theory of truth is not as widely held as those who rely on it believe. One alternative is that truths are useful statements, whether or not they accurately model the state of the world.

I have done this. The most impressive-sounding one happened to a friend of mine who had formerly been an athlete. She had to withdraw from sports for a year because of an unexpected muscular condition. (If this is obviously medically wrong, it's probably because I changed details for privacy.) As you probably expect, that year involved plenty of spiritual growth that she attributes to having had to quit sports.

At the end of that time, a group of church people laid hands on her and prayed, she felt some extreme acceleration in her heart rate, and her endura... (read more)

0NancyLebovitz
For what it's worth, I think people can have very strong aliefs that affect their health, and powerful experiences can change the aliefs.
6Azathoth123
The atheist/neo-pagan Eric Raymond claims to be able to do this semi-reliably.
9CAE_Jones
I recall reading--I forget where--that laying on of hands does have positive effects that outperform chance. (But cuddling probably does, too. Emotionally-charged human contact does tend to interact with body systems in interesting ways.) My father (who calls himself a Buddhist) has done feats of hand-laying. Most notable was the instance when his mother was in the hospital, and the staff was convinced she was within hours of death (they put out a call to her (Christian) preacher, but he was busy). My father did his trick, and she got better enough to be discharged. (Things went back to awful not long thereafter, but this might be said to have bought her several months at least.) I don't get the feeling that my dad really alieves in the abilities he claims to have. (For starters, he only ever tried it on me once, and was clearly non-serious about that one.) He has been serious in how he talked about using it on others, though.
2John_Maxwell
Can you go into more detail on the muscular condition? This might be relevant. Regarding an increase in heart rate, that's pretty normal to experience as a result of a social situation (think public speaking, going on a date, laughing with friends, etc.). I imagine if atheism is true, the reason theists "lay hands" on one another is because it's a social situation that seems to consistently provoke an interesting and intense feeling in the person who is having hands laid on them.

It's appointed. Doesn't mean the guy who did the appointing can't make exceptions if he feels like it.

Well no, because I doubt he'd share the downvoter's objective. (I assume. I wasn't following the kerfuffle.) To conclude that he would, you have to transplant his methods onto a forum setting but not his goals. Which is a weird level to model at.

Anthropics fails to explain King George because it's double-counting the evidence. The same does not apply to any extinction event, where you have not already conditioned on "I wouldn't exist otherwise."

If it's a non-extinction nuclear exchange, where population would be significantly smaller but nonzero, I'm not confident enough in my understanding of anthropics to have an opinion.

I still don't think George VI having more siblings is an observer-killing event.

Since we now know that George VI didn’t have more siblings, we obtain

Probability(You exist [and know that George VI had exactly five siblings] | George VI had more than five siblings) = 0

I assume you mean "know" the usual way. Not hundred percent certainty, just that I saw it on Wikipedia and now it's a fact I'm aware of. Then P(I exist with this mind state | George VI had more than five siblings) isn't zero, it's some number based on my prior for Wikipedia being ... (read more)

2KnaveOfAllTrades
Yep; in which case the anthropic evidence isn't doing any useful explanatory work, and the thesis 'Anthropics doesn't explain X' holds.

I don't think it's lumping everything together. It's criticizing the rule "Act on what you feel in your heart." That applies to a lot of people's beliefs, but it certainly isn't the epistemology of everyone who doesn't agree with Penn Jillette.

The problem with "Act on what you feel in your heart" is that it's too generalizable. It proves too much, because of course someone else might feel something different and some of those things might be horrible. But if my epistemology is an appeal to an external source (which I guess in this conte... (read more)

3DanArmak
'Act on an external standard' is just as generalizable - because you can choose just about anything as your standard. You might choose to consistently act like Gandhi, or like Hitler, or like Zeus, or like a certain book suggests, or like my cat Peter who enjoys killing things and scratching cardboard boxes. If the only thing I know about you is that you consistently behave like someone else, but I don't know like whom, then I can't actually predict your behavior at all. The more important question is: if you act on what you feel in your heart, what determines or changes what is in your heart? And if you act on an external standard, what makes you choose or change your standard?
3Mestroyer
From the outside it looks like there's all this undefined behavior, and demons coming out the nose, because you aren't looking at the exact details of what's going on with their feelings that are choosing the beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out if you looked at the implementation of the compiler. It's still deterministic.

As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It's entirely internal. From the outside, someone who has a system where they follow something external and clearly specified could shout "Nasal demons!", but demons will never come out my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure.

The mistake is looking at a particular kind of specification that defines all the behavior, then looking at a system not covered by that specification, but which is controlled by another specification you haven't bothered to understand, and saying "Who can possibly say what that system will do?"

Some processors (even x86) have instructions (such as bit rotate) which are useful for significant performance boosts in stuff like cryptography, and yet aren't accessible from C or C++; to use them you have to perform hacks like writing the machine code out as bytes, casting its address to a function pointer, and calling it. That's undefined behavior with respect to the C/C++ standard. But it's perfectly predictable if you know what platform you're on.

Other people, who aren't meta-ethical anti-realists, don't really have negotiable utility functions either. You can't give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desires and things they care about, and they care a lot mo

What we need to do is convince Harvard to perform a double-blind test. Accept half their students as normal, and the other half at random from their applicants. We'll have an answer within a couple decades.

I always do. Mentally but not muscularly, and I can kind of suppress it if I consciously try. It is indeed the limiting factor on my reading speed.

2trist
You wouldn't notice the muscular subvocalizations. The easiest way to detect them is with EMGs on the neck. I do get to a point where the external world fades away and (with fiction) I have much stronger auditory and visual sensations. I imagine I stop subvocalizing during that; I certainly appear to read faster.

Is it possible for a tulpa to have skills or information that the person doing the emulating doesn't? What happens if you play chess against your tulpa?

8klkblake
I tried that last week. I lost. We were actively trying to not share our strategies with each other, although in our case abstract knowledge and skills are shared.

I just realized it's possible to explain people picking dust in the torture vs. dust specks question using only scope insensitivity and no other mistakes. I'm sure that's not original, but I bet this is what's going on in the head of a normal person when they pick the specks.

0ygert
Yes. This. Whenever I talk with anyone about the Torture vs. Dust Specks problem, I constantly see them falling into this trap. See, for instance, this discussion post from a few months back, and my reply to it. This happens again and again, and by this point I am pretty sure that the whole problem boils down to just this.

The dust speck "dillema" - like a lot of the other exercises that get the mathematically wrong answer from most people is triggering a very valuable heuristic. - The "you are trying to con me into doing evil, so fuck off" Heuristic.
Consider the problem as you would of it was a problem you were presented with in real life.

The negative utility of the "Torture" choice is nigh-100% certain. It is in your physical presence, you can verify it, and "one person gets tortured" is the kind of event that happens in real life w... (read more)

8Viliam_Bur
It probably goes like this: "Well, 3^^^3 is a big number; something like 100. Would I torture a person to prevent 100 people having a dust speck in their eyes? How about 200 or 1000? No, this is obviously madness."

Not very tempted, actually. In this hypothetical, since I'm not feeling empathy the murder wouldn't make me feel bad and I get money. But who says I have to decide based on how stuff makes me feel?

I might feel absolutely nothing for this stranger and still think "Having the money would be nice, but I guess that would lower net utility. I'll forego the money because utilitarianism says so." That's pretty much exactly what I think when donating to the AMF, and I don't see why a psychopath couldn't have that same thought.

I guess the question I'm ge... (read more)

0passive_fist
Caring about someone else's utility function is practically the definition of empathy.
1[anonymous]
I think that ethics, as it actually happens in human brains, is determined by emotions. What causes you to be a utilitarian?
0Viliam_Bur
They could. But if you select a random psychopath from the whole population, what is the probability of choosing a utilitarian? To be afraid of non-empathic people, you don't have to believe that all of them, without exception, would harm you for their trivial gain. Just that many of them would.

I had actually been wondering about this recently. People define a psychopath as someone with no empathy, and then jump to "therefore, they have no morals." But it doesn't seem impossible to value something or someone as a terminal value without empathizing with them. I don't see why you couldn't even be a psychopath and an extreme rational altruist, though you might not enjoy it. Is the word "psychopath" being used two different ways (meaning a non-empathic person and meaning a complete monster), or am I missing a connection that makes these the same thing?

0hyporational
The nearest term used in contemporary psychiatry is antisocial personality disorder. AFAIK some forensic psychiatrists use the term psychopath, but the criteria are not clear and it's not a recognized diagnosis. Forget about the press the term gets. Lack of empathy certainly isn't sufficient for either label, and can be caused by other psychiatric conditions.
8kalium
You don't notice someone has no empathy until you see them behaving horribly. The word is being used technically to refer to a non-empathic person, but people assume that all non-empaths behave horribly because (with rare exceptions like this neuroscientist) all the visible ones do.

Well, it doesn't establish that induction is always valid, so I guess we might not really be disagreeing. But, pragmatically, everyone basically has to assume that it usually works, or is likely to work in whatever the particular case is. I think it's a good enough heuristic to be called a rational principle that people already have down.

0Fivehundred
OK, forget it.

I'm sure there are philosophers who say they don't, but I guarantee you they act as if they do. Even if they don't know anything about electronics, they'd still expect the light to come on when they flip the switch.

3Fivehundred
That's... not really an argument. Of course everyone has to act pragmatically; we wouldn't even be able to think if we didn't. But that's quite different from establishing the validity of the principle itself.

Standard young-Earther responses, taken from when I was a young-Earth creationist.

Round Earth: Yes. You sort of have to stretch to interpret the Bible as saying the Earth is round or flat, so it's not exactly a contradiction. Things like "the four corners of the Earth" are obvious metaphor.

Animals on the boat: The "kinds" of animals (Hebrew "baramin") don't correspond exactly to what we call species. There are fewer animals in the ark than 2*(number of modern species); this is considered to be a sufficient answer even though i... (read more)

Ideally, how people feel about things would be based in real-world consequences, and a chance of someone being not dead is usually strictly better than the alternative. But I can see how for a small enough chance of resurrection, it could possibly be outweighed by other people holding on to it. I still hope that isn't what's going on in this case, though. That would require people to be feeling "I'd rather have this person permanently dead, because at least then I know where I stand."

0hyporational
That's a pretty insulting way to put it. Consider an alternative: I'd rather spend my only short life living it to the fullest than worry about people, including me, who will very likely die permanently no matter what I do to help.

That's...that's terrible. That it would feel worse to have a chance of resurrection than to have closure. It sounds depressingly plausible that that's people's true rejection, but I hope it's not.

Religion doesn't have the same problem, and in my experience it's because of the certainty. People believe themselves to be absolutely certain in their belief in the afterlife. So there's no closure problem, because they simply know that they'll see the person again. If you could convince people that cryonics would definitely result in them being resurrected together with their loved ones, then I'd expect this particular problem to go away.

0[anonymous]
That would depend on how high the chance of resurrection is. Closure is relatively certain.
8Brillyant
In my experience, people holding on to very, very small probabilities can be unhealthy. Misplaced hope can be harmful. I don't think it is quite this cut and dried. Religious people will assert they are certain, but I think there is a significant level of doubt there. People do use heaven as a way to cope with the loss of a loved one -- it is perfectly understandable, but I think it ultimately often prevents them from grieving and achieving healthy and proper closure.
0Viliam_Bur
Religious people also believe that after they are resurrected (assuming it will be in heaven), all their problems will be magically fixed. So there is nothing to worry about (besides getting to heaven).

And I'm not sure it's a mistake. If you're getting your information in a context where you know it's meant completely literally and nothing else (e.g., Omega, lawyers, Spock), then yes, it would be wrong. In normal conversation, people may (sometimes but not always; it's infuriating) use "if" to mean "if and only if." As for this particular case, somervta is probably completely right. But I don't think it's conducive to communication to accuse people of bias for following Grice's maxims.

0buybuydandavis
Other nations abolished it within a few generations of the US abolishing it? Plus or minus a few generations? Ok, guess that's how I'd read that.

Of the set of all possible actions that you haven't denied doing, you've only done a minuscule percentage of them.

Of the times that you deny having done something, you lie some non-trivial percent of the time.

Therefore, your denial is evidence of guilt.

wedrifid100

Of the set of all possible actions that you haven't denied doing, you've only done a minuscule percentage of them.

Of the times that you deny having done something, you lie some non-trivial percent of the time.

Therefore, your denial is evidence of guilt.

Even if the conclusion is true it does not follow from the premises given. It relies on the additional implied premise:

  • We know nothing about the thing you are denying except that it is in the set of all possible things that could be denied.

There are some cases where denial is evidence of guilt, there are other cases where it is evidence of innocence and still others where it is no evidence either way.
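A minimal Bayesian sketch of those three cases (all numbers invented for illustration): a denial shifts the probability of guilt up, down, or not at all depending on whether the guilty or the innocent are the more likely to deny.

```python
def p_guilty_given_denial(prior_guilty, p_deny_if_guilty, p_deny_if_innocent):
    """Posterior probability of guilt after observing a denial (Bayes' rule)."""
    numerator = p_deny_if_guilty * prior_guilty
    return numerator / (numerator + p_deny_if_innocent * (1 - prior_guilty))

prior = 0.30  # invented base rate of guilt for the kind of thing being denied

print(p_guilty_given_denial(prior, 0.9, 0.6))  # ~0.39 > 0.30: denial is evidence of guilt
print(p_guilty_given_denial(prior, 0.6, 0.9))  # ~0.22 < 0.30: denial is evidence of innocence
print(p_guilty_given_denial(prior, 0.8, 0.8))  # 0.30: denial is no evidence either way
```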

Sure. This is not surprising; if I spontaneously deny having done something, many people will in fact treat this as evidence of my having done it. (Obligatory TV Tropes link.)

That said, of the set of all possible actions that I haven't denied doing that I've been accused of doing, I've done a non-trivial percentage P1 of them. Of the times that I deny having done something that I've been accused of doing, I lie some non-trivial percentage P2 of the time.

Therefore, my denial of something I'm accused of is evidence of guilt if P2 > P1 and evidence of innocence if P1 > P2.

3Protagoras
Denials are usually prompted by some circumstances, perhaps circumstances that provide some evidence that the denied action actually took place. That may be a confounding factor; among cases where such evidence is present, is there more likely to be a denial when the person is guilty than when the person is innocent? If not, perhaps you shouldn't take the denial as contributing anything further beyond what you learned from the evidence that prompted the denial.

This post almost convinced me. I was thinking about it in terms of a similar algorithm, "one-box unless the number is obviously composite." Your argument convinced me that you should probably one-box even if Omega's number is, say, six. (Even leaving aside the fact that I'd probably mess up more than one in a thousand questions that easy.) For the reasons you said, I tentatively think that this algorithm is not actually one-boxing and is suboptimal.

But the algorithm "one-box unless the numbers are the same" is different. If you were pl... (read more)

As cool as that term sounds, I'm not sure I like it. I think it's too strongly reinforcing of ideas like superiority of rationalists over non-rationalists. Even in cases where rationalists are just better at things, it seems like it's encouraging thinking of Us and Them to an unnecessary degree.

Also, assuming there is a good enough reason to convince me that the term should be used, why is transhumanism-and-polyamory the set of powers defining the non-muggles? LessWrong isn't that overwhelmingly poly, is it?

0PrometheanFaun
I thought for a while, and I really can't imagine any cases of works which would be unsuitable for all LWers that aren't worth hanging around and arguing about. I agree. We should be calling these people ignorant and criticising their work, not assigning them a permanent class division, shaking our heads, and going back to our camp.
5Risto_Saarelma
I don't really see the inherent superiority idea. Seems like there should be plenty of interesting ways to mess up everything with polyamory and transhumanism as well as with monogamy and bioconservatism, just like muggles and wizards both have failure modes, just different.

Plots which are just about people not being rational are a subspecies of "Idiot Plots". Plots which are about people not behaving like SF con-goers are "Muggle Plots".