
Comment author: Pudlovich 22 December 2014 12:30:14PM 0 points [-]

It was done by Doyle himself. In 1898 he published two short stories, "The Lost Special" and "The Man with the Watches", in which "an amateur reasoner of some celebrity" participates in solving a crime mystery and fails. They were written after Doyle killed off Sherlock Holmes, so he was probably parodying the character; he was quite tired of him at the time.

Comment author: Epictetus 22 December 2014 12:19:09PM 0 points [-]

Hello. My name is Tom. I'm 27 and currently working on a PhD in mathematics. I came to this site by following a chain of links that started with TVTropes, of all things.

I have been a fan of rational thinking for as long as I can remember. I'd always had the habit of asking questions and trying to see things from every point of view. I devoured all sorts of books growing up and shifted my viewpoints often enough that I became willing to accept the notion that anything I currently believe could be wrong. That's what pushed me to constantly question my own beliefs. I have read enough of this site to satisfy myself that it would be worthwhile to make an account and perhaps participate in the community that built it.

In response to comment by [deleted] on Akrasia and Shangri-La
Comment author: hesperidia 22 December 2014 07:40:36AM 0 points [-]

I personally feel that doing ab exercises helps me feel less hungry, because they kind of compress my stomach (but so does wearing higher-rise trousers and pulling the belt tighter).

This is also observed with back braces and corsets worn over the long term. In the corset-wearing/waist-training community in particular, some people have observed that corsets may decrease appetite without any significant change in behavior; the actual effect is of course highly variable, but it's frequent enough to be conventional wisdom in that community.

Comment author: Jayson_Virissimo 22 December 2014 07:35:53AM 0 points [-]

Our human tendency is to disguise all evidence of the reality that most frustrates us: death. We need only look at the cemeteries, the gravestones, the monuments to understand the ways in which we seek to embellish our mortality and banish from our minds this ultimate failure of our humanity. Sometimes we even resort to “canonizing” our dead. After Saint Peter’s Square, the place where most people are canonized is at wakes: usually the dead person is described as a “saint.” Of course, he was a saint because now he can’t bother us! These are just ways of camouflaging the failure that is death.

-- Pope Francis (Jorge Mario Bergoglio), Open Mind, Faithful Heart: Reflections on Following Jesus

Comment author: clever_idiot 22 December 2014 06:01:51AM *  0 points [-]

If we’re pretending that free will is both silly and surprising, then why aren’t we more surprised by stronger biases towards more accurate notions like causality?

If there were no implicit provision like this, there would be no sense in asking any question like “why would brains tend to believe X rather than not-X?” To entertain the question, we first entertain the belief that our brains were “just naïve enough” that finding any sort of cognitive bias could surprise us. Belief in free will indicates a bias--that is the only sense I can extract from the question you asked.

Obviously, it is irrational to believe strongly either way if no evidence is admitted by either side. Various thought experiments suggest that free will is not among those beliefs we hold by evaluating log-likelihoods of hypotheses given evidence. And so, if “free will” is strongly favored while also baseless, then a cognitive bias remains one of the better candidate explanations for the provisional surprise we claim on observing belief in free will.
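(In symbols, "evaluation of log-likelihoods over hypotheses given evidence" presumably means the standard log-odds form of Bayes' theorem, where each piece of evidence shifts belief by the log likelihood ratio:

$$ \log\frac{P(H\mid E)}{P(\neg H\mid E)} \;=\; \log\frac{P(H)}{P(\neg H)} \;+\; \log\frac{P(E\mid H)}{P(E\mid \neg H)} $$

A belief held this way would track evidence; the claim above is that belief in free will does not.)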

At least, so goes my general, grossly naïve understanding. And in lieu of a stack trace, I'll say this: cognitive biases seem like heuristic simplifications that cause systematic errors in inference. They favor improperly scored bets in certain contexts. Assuming any reason exists, the motivation is most likely the same as with over-fitting in any other model: a sampling bias. And since engineering mistakes into our brains sounds generally harmful, each type of over-fitting must pay off tremendously in some very narrow scope of high-risk, high-reward opportunities.

The need to reason causally isn't any more apparent than free will; it just sounds less mysterious because it fits the language of mathematics. Causality and free will are related, but learning causality seems so necessary an objective for a brain that I doubt we'd get so many other biases without causality being ensured first. I doubt we're built without an opinion on either issue.

Comment author: ike 22 December 2014 02:58:55AM 0 points [-]

“The birthrate in the United States is at an all-time low. Whereas our death rate is still holding strong at 100 percent.”

-- Jimmy Kimmel

In response to comment by AndyC on Circular Altruism
Comment author: themusicgod1 22 December 2014 01:33:55AM *  0 points [-]

Couldn't you argue this the opposite way? That life is such misery that extra torture isn't really adding to it.

The world with the torture gives 3^^^3+1 suffering souls a life of misery, suffering, and torture (for the scale of 3^^^3, see the note after this comment).

The world with the specks gives 3^^^3+1 suffering souls a life of misery, suffering, and torture, except that basically everyone also gets an extra speck of dust in their eye.

In which case, isn't the first better?

It's not as much of a stretch as you might think.
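For scale, 3^^^3 is Knuth's up-arrow notation, which neither comment spells out. It unpacks as

$$ 3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3), \qquad 3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987, $$

so 3^^^3 is a power tower of 3s roughly 7.6 trillion levels tall.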

In response to comment by EAS on You Only Live Twice
Comment author: ilzolende 22 December 2014 01:13:15AM 1 point [-]

I think the general idea here is that you stay in cryosuspension until the better version comes along, because most of the people on this forum don't expect the better version to arrive during their lifetimes. (I do, but I'm a teenager, which means I both rationally expect to live to a later year than the average adult does, and irrationally underestimate mortality risks as they apply to me.)

Comment author: Jiro 22 December 2014 01:09:02AM *  1 point [-]

You can only "invest $100" in cryonics by buying an insurance policy with a $100 premium that covers a very short period, where the chance of immortality is the probability that cryonics works multipled by the probability that you will die during the exact period covered by the premium before you have to pay a second premium. Because the chance that you will die during the period is non-zero, the return on the investment is also non-zero. However, the overhead for this investment is huge (and bear in mind that overhead includes such things as "everyone thinks you're crazy for making a single payment that only returns anything if you die within the week.")
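As a rough sketch of that expected-value arithmetic (every number below is a made-up placeholder, not a figure from this thread):

    # Rough expected-value sketch for the "$100 single-premium" argument.
    # Every number here is an assumed placeholder for illustration only.
    p_cryonics_works = 0.05      # assumed chance cryonics ever succeeds
    p_die_this_period = 1e-4     # assumed chance of dying within the short coverage window
    value_of_immortality = 1e9   # assumed dollar-equivalent value of immortality
    premium = 100.0

    # The $100 pays off only if cryonics works AND you die during the
    # exact window this single premium covers.
    p_payoff = p_cryonics_works * p_die_this_period
    expected_value = p_payoff * value_of_immortality - premium
    print(f"P(payoff) = {p_payoff:.0e}, expected value = ${expected_value:,.2f}")

With these made-up numbers the bet nominally has positive expected value, which is why the point above is about the huge overhead rather than the sign of the return.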

Furthermore, what does it even mean to say "this instance of Pascal's Mugging maximizes my return over several instances of Pascal's Mugging"? If it's an instance of Pascal's Mugging, the computed return is useless information, and maximizing it is meaningless.

In response to You Only Live Twice
Comment author: EAS 22 December 2014 12:21:58AM 0 points [-]

Thanks for bringing this up! I never knew this stuff existed. I expect a better version of this to come along at some point, where they can store computerized copies of our brains in an AI-like form and then load us into a new body, bionic or organic, some long time in the future. More or less immortality. I'm not an expert, but I can see it developing at some point.

Comment author: Algernoq 21 December 2014 09:57:38PM 0 points [-]

I am not currently suicidal.

[Really Extreme Altruism is purchasing a life insurance policy, then] two years after the policy is purchased, it will pay out in the event of suicide. The man waits the required two years, and then kills himself, much to the dismay of his surviving relatives.

So...you're not currently suicidal...and you plan to kill yourself... I notice that I am confused.

Theory: suicidal feelings evolved to encourage people expelled by a tribe to do whatever it took to rejoin the tribe. That is, living without a tribe was certain death, but doing something heroically self-sacrificing (like killing a dangerous predator, or stealing from a strong enemy) was only probable death. "Altruistic" suicide with life insurance fits this pattern, but is not adaptive in the modern world.

Look, I'm against death in most circumstances, including yours. Your emotions are lying to you: you're not really tribeless -- you're a citizen of a reasonably powerful nation. A low-risk way to feel better (in addition to the conventional ones you apparently reject) is to join some tribes -- join some activity where you see the same people at least once a week. High-risk approaches, like a large psilocybin dose, or boot camp for the Marines, would also be less damaging than suicide.

Look, screw altruistic self-sacrifice. Crocodiles exist, and for literally 50 million years have survived by violently killing and eating other animals. Sharks exist, and for literally 400 million years have survived by violently killing and eating other animals. Apparently, God does not care. You get to decide what your needs are, and to pursue happiness as you see it.

Comment author: Manfred 21 December 2014 09:52:11PM 0 points [-]

Minor nitpicks:

It would have been less suspenseful to mention CDT and UDT in the introduction.

At the top of the second half of page 8, you say a strategy-selecting agent doesn't update its action. This confused me, since a strategy can map different observations onto different actions; it's probably better to say something like "it doesn't update its strategy based on observations that are already accounted for in the strategy."

You switch over to talking about "algorithms" a lot without saying what that means. Maybe something like: "An agent's decision algorithm is the abstract specification of how it maps observations onto actions." You can use this to explain the notation A()=s; both sides of the equation are maps from observations to actions, and the right side is just concrete rather than abstract.

When you say "the environment is an algorithm," you could cash this out as "the environment has an abstract specification that maps outputs of a strategy onto outcomes."
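As a toy sketch of the framing these nitpicks suggest (all names here are hypothetical, not taken from the paper under discussion): a strategy is a concrete map from observations to actions, and the agent's algorithm A() picks one strategy up front instead of updating action by action.

    # Toy illustration of "A() = s": the agent's algorithm outputs a strategy,
    # and a strategy is a concrete map from observations to actions.
    # All names are hypothetical, for illustration only.
    from typing import Callable, Dict, List

    Observation = str
    Action = str
    Strategy = Dict[Observation, Action]

    def agent_algorithm(candidates: List[Strategy],
                        score: Callable[[Strategy], float]) -> Strategy:
        """Select one strategy up front; no per-observation updating afterwards."""
        return max(candidates, key=score)

    # Two toy strategies for a problem with observations "rain" / "sun".
    candidates = [
        {"rain": "take umbrella", "sun": "take umbrella"},
        {"rain": "take umbrella", "sun": "wear sunglasses"},
    ]

    # A made-up stand-in for expected utility.
    def score(s: Strategy) -> float:
        good = {("rain", "take umbrella"), ("sun", "wear sunglasses")}
        return sum(1.0 for pair in s.items() if pair in good)

    s = agent_algorithm(candidates, score)   # A() = s
    print(s["rain"])  # the chosen strategy already covers this observation

On this reading, "doesn't update" just means the chosen map never changes; different observations still yield different actions, because the map itself encodes them.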

Comment author: eli_sennesh 21 December 2014 09:23:47PM 1 point [-]

I'm almost certain that if most of these people found out they had cancer and would die unless they got a treatment and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) if the treatment worked they would be unhealthy for the rest of their lives; then almost all of these cryonics rejectors would take the treatment.

It's painful, expensive, leaves you in ill health the rest of your (shortened) life, and you've only got a 20% chance?

Why would someone take that deal?

Comment author: Gurkenglas 21 December 2014 09:23:34PM 1 point [-]

If you're looking for rationalizations for not giving in to Pascal's Wager here, a better one might be "If I wanted to maximize my chance at immortality, paying $100 for prayers is less effective than investing $100 in cryonics."

Comment author: eli_sennesh 21 December 2014 09:22:30PM 0 points [-]

Ah, desert-dryness of speech: capable of making even immortality sound boring and unappealing!

Comment author: Jiro 21 December 2014 07:21:11PM 1 point [-]

I find the idea of cryonics having a 20% chance of working to be orders of magnitude too optimistic.

Comment author: Princess_Stargirl 21 December 2014 06:59:36PM *  0 points [-]

The value of immortality does not seem infinite to me, merely very large. The odds that magic or religion will save you seem vanishingly tiny--sufficiently tiny that pursuing them is a bad use of time and energy even if the benefits are potentially very large.

Comment author: Princess_Stargirl 21 December 2014 06:56:31PM 0 points [-]

This is more than slightly odd. I am considering cryonics, but I would never take that cancer treatment. It seems like a horrible deal.

Comment author: JoshuaZ 21 December 2014 06:16:10PM 0 points [-]

Sure, but I fail to see how that's relevant to the point in question.

Comment author: Manfred 21 December 2014 05:53:24PM *  2 points [-]

This seems like explaining vs. explaining away. The way better players pick up wins is by winning the "contest of athletic prowess." The game itself is interesting to watch because we like to see competent people play, and when upsets happen, they often happen for reasons that are easily displayed and engaged with in terms of the mechanics of the game.
