Comment author: CCC 26 April 2016 07:23:40AM 0 points

Hmmm. Fair enough. But even if they're not squeamish about it, it would make sense for them to select the material from which they make their walkways according to flavour (among other factors, such as strength and durability).

Comment author: CynicalOptimist 07 October 2016 02:29:57AM 1 point

Yup! I agree completely.

If you were modeling an octopus-based sentient species, for the purposes of writing some interesting fiction, then this would be a nice detail to add.

Comment author: gjm 10 May 2016 11:16:56PM 1 point

Again, I agree with all of that.

Comment author: CynicalOptimist 07 October 2016 02:06:44AM 0 points

Thank you. :)

Comment author: Decius 11 October 2012 04:58:37AM 0 points

At what point does the decision "This is true" diverge from the observation "There is very strong evidence for this", other than in cases where the model is accepted as true despite a lack of strong evidence?

I'm not discussing the case where a model goes from unknown to known: how does deciding to believe a model give you more information than knowing what the model is and the reasons for it? To better model an actual agent, one could replace all of the knowledge about why the model is true with a single value for the strength of that supporting knowledge.

How does deciding that things always fall down give you more information than observing things fall down?

Comment author: CynicalOptimist 19 August 2016 03:18:21PM * 0 points

I believe the idea was to ask "hypothetically, if I found out that this hypothesis was true, how much new information would that give me?"

You'll have two or more hypotheses, and one of them is the one that would (hypothetically) give you the least new information. That one should be considered the "simplest" hypothesis (assuming a certain definition of "simplest", and a certain definition of "information").
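To make that concrete, here's a minimal sketch in Python, assuming we equate "new information" with Shannon surprisal, -log2(prior). The hypotheses and prior values are invented purely for illustration:

```python
import math

def surprisal_bits(prior: float) -> float:
    """Shannon surprisal: bits of new information we'd gain
    if a hypothesis with this prior turned out to be true."""
    return -math.log2(prior)

# Illustrative priors (invented numbers, not from the original discussion).
hypotheses = {
    "things always fall down": 0.99,
    "things fall down, but only until next Tuesday": 0.0001,
}

for name, prior in hypotheses.items():
    print(f"{name}: {surprisal_bits(prior):.1f} bits of new information")

# The hypothesis whose truth would surprise us least is the
# "simplest" in the sense described above.
simplest = min(hypotheses, key=lambda h: surprisal_bits(hypotheses[h]))
print("simplest:", simplest)
```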

Comment author: Günther_Greindl 23 July 2008 01:31:14PM 2 points

If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

I would like to add something to this. Your gut feeling is, of course, the sum of the experience you have had in this life plus your evolutionary heritage. It may not be verbalizable because a gut feeling (for example) also includes single neurons firing which don't necessarily contribute to the stability of a concept in your mind.

But I warn against then simply following one's gut feeling; of course, if you have to decide immediately (in an emergency), there is no alternative: do it! You can't do better than the sum of your experience in that moment.

But usually, having only a gut feeling and not being able to verbalize it should mean one thing for you: go out and gather more information! (Read books to stabilize or create concepts in your mind; do experiments; and so on.)

You will find that gut feelings can change quite dramatically after reading a good book on a subject. So why should you trust them if you have the time to do something about them, viz. transfer them into the symbol space of your mind so the concepts are available for higher-order reasoning?

Comment author: CynicalOptimist 12 May 2016 09:57:03PM 0 points

This is excellent advice.

I'd like to add though, that the original phrase was "algorithms that make use of gut feelings... ". This isn't the same as saying "a policy of always submitting to your gut feelings".

I'm picturing a decision tree here: something that tells you how to behave when your gut feeling is "I'm utterly convinced" {act on the feeling immediately}, versus how you might act if you had feelings of "vague unease" {continue cautiously, and delay taking any steps that constitute a major commitment, while you try to identify the source of the unease}. Your algorithm might also involve assessing the reliability of your gut feeling: experience and reason might tell you that your gut is very reliable in certain matters, and much less reliable in others.

The details of the algorithm are up for debate, of course. For the purposes of this discussion, I place no importance on the details of the algorithm I described. The point is just that these procedures are helpful for rational thinking, that they aren't numerical procedures, and that a numerical procedure wouldn't automatically be better just because it's numerical.
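For illustration only, here is a toy sketch of the kind of non-numerical procedure I mean. The feeling categories, reliability labels, and suggested actions are placeholders I've invented, not a recommended policy:

```python
def act_on_gut_feeling(feeling: str, gut_reliability: str) -> str:
    """Toy decision tree for acting on a gut feeling.
    All categories and actions are illustrative placeholders."""
    if feeling == "utterly convinced" and gut_reliability == "high":
        return "act on the feeling immediately"
    if feeling == "vague unease":
        return ("continue cautiously, delay major commitments, "
                "and try to identify the source of the unease")
    # Default: the feeling is weak, or it concerns a domain where
    # experience says your gut has a poor track record.
    return "gather more information before acting"

print(act_on_gut_feeling("vague unease", "high"))
```

Note that nothing here is numerical, yet it's still an algorithm in the relevant sense.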

Comment author: CCC 10 May 2016 12:11:41PM 1 point

Never underestimate the utility of properly describing a problem. I've found that it's really amazing how often, by the time you've figured out what question you really want to ask to solve the problem, you're already most of the way to the answer...

Comment author: CynicalOptimist 10 May 2016 09:14:13PM 0 points

I think this is the basis of good Business Analysis, a field I'm intending to move into.

It's the very essence of "Hold off on proposing solutions".

Comment author: Jiro 06 May 2016 07:57:18PM 2 points

Believing in germs has a pretty big effect, yet most people have no problem believing in germs (or atoms, or electricity, or the Earth moving around the sun). All they need is a couple of scientists to say "there are these invisible things that cause disease" and they're perfectly happy to believe the scientists.

It may be that scientists themselves had trouble believing in continental drift or germs when they were first introduced, but we're not talking about scientists here; we're talking about everyday people who get their knowledge from authorities. Everyday people have no trouble believing in germs or atom bombs when told by an authority, and evolution isn't any more absurd-sounding than those. They only think evolution "sounds absurd" because it contradicts their religion.

Comment author: CynicalOptimist 10 May 2016 09:07:14PM 0 points

This is perfectly true. But it doesn't much matter, because the point here is that when these people reject the idea of evolution, for these kinds of reasons, they use feelings of "absurdity" as a metric - without critically assessing the reasons why they feel that way.

The point here isn't that the lady was using sound and rational reasoning skills. The contention is that her style of reasoning was something a rationalist shouldn't want to use - and that it was something the author no longer wants to use in their own thinking.

Comment author: gjm 05 May 2016 11:04:09PM -1 points

I agree with all of that. But there's a limit to how much effort you can reasonably be expected to put into considering whether something that seems absurd to you is really not-absurd. I suggest that that depends on what other evidence there is for its non-absurdity. E.g., in the case of evolution, it's highly relevant that it's endorsed by the great majority of biologists, including biologists belonging to religions whose traditions contain stories that prima facie conflict with evolution.

There are a lot of super-smart Christians too, which I think it's reasonable to take as evidence that Christianity can't rightly be dismissed simply because its tradition contains a story about a talking snake. On the other hand, there aren't so many super-smart talking-snake-believers -- even among Christians, most[1] of the cleverest and most educated don't take the story as indicating that there was ever a talking snake -- which suggests that treating a literal reading of the talking-snake story as absurd probably isn't unreasonable.

[1] Though certainly not all.

Comment author: CynicalOptimist 10 May 2016 08:48:04PM 0 points

Oh absolutely. We don't have time to thoroughly investigate the case for every idea we come across. There comes a time when you say that you're not interested in exploring an idea any further.

But there is an intellectual honesty to admitting that you haven't heard all of the evidence, and acknowledging that you might conceivably have changed your mind (or at least significantly changed your probability estimates) if you had done more research.

And there's a value to it as well. Some ideas have been thoroughly researched and should be labelled in our minds as "debunked". Others should be labelled as "not yet disproven". Later, if we happen to encounter more evidence on the topic, we might take this into account when we decide how seriously to take it.

The lady in the story might have sounded much more sensible to us if she had said "Evolution still sounds absurd to me, but I'll admit that I haven't yet given the pro-evolution argument a proper opportunity to change my mind".

And I think we should try to be that sensible ourselves.

Comment author: jimmy 14 March 2009 09:49:21AM 0 points

"If someone had a reason to seriously present it, then I'd not dismiss it out of hand"

It's important to note that no one has. You can't update on fictitious evidence.

In the (unlikely) case that something improbable ends up with strong evidence backing it, then it becomes probable whether or not it was called "absurd". Until then, we dismiss it because it's absurd.

Comment author: CynicalOptimist 10 May 2016 08:26:37PM * 1 point

I think that absurdity, in this sense, is just an example of Occam's Razor / Bayesian rationality in practice. If something has a low prior, and we've no evidence that would make us raise our probability estimates, then we should believe that the idea probably isn't true.

I've always assumed that the absurdity bias was a tendency to do something slightly different. In this context, absurdity is a measure of how closely an idea conforms to our usual experiences. It's a measure of how plausible an idea feels to our gut. By this definition, absurdity is being used as a proxy for "low probability estimate, rationally assigned".

It's often a good proxy, but not always.

Or perhaps another way to put it: when evidence seems to point to an extremely unlikely conclusion, we tend to doubt the accuracy of the evidence. And the absurdity bias is a tendency to doubt the evidence more thoroughly than ideal rationality would demand.
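To put rough numbers on that (the numbers are invented, purely for illustration), Bayes' rule in odds form says posterior odds = prior odds × likelihood ratio. Doubting the evidence amounts to shrinking the likelihood ratio:

```python
def posterior_prob(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 1e-6  # an "absurd" claim: very low prior (invented number)

# Evidence ten million times likelier if the claim is true:
print(posterior_prob(prior, 1e7))  # ~0.91 -- now probable

# Absurdity bias, roughly: discounting the evidence harder than
# ideal rationality warrants, e.g. crediting a ratio of only 10.
print(posterior_prob(prior, 10))   # ~1e-5 -- still dismissed
```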

(Admission: I've noticed that I've had some trouble defining the bias, and now I'm considering the possibility that "absurdity bias" is a less useful concept than I thought it was).

Comment author: gjm 04 January 2016 04:48:16PM 4 points

I know of an old prime number that happens to end with a 2.

Comment author: CynicalOptimist 05 May 2016 10:00:37PM 0 points

Incidentally, does this prime number have to be expressed in Base 10?
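(In base 10, any number ending in 2 other than 2 itself is even, so 2 is the only such prime - but in another base the last digit behaves differently. A quick sketch, with the bases chosen arbitrarily:)

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# The last digit of n in base b is simply n % b.
print([p for p in range(2, 100) if is_prime(p) and p % 10 == 2])
# [2] -- in base 10, only 2 itself

print([p for p in range(2, 100) if is_prime(p) and p % 3 == 2])
# [2, 5, 11, 17, 23, ...] -- in base 3, e.g. 5 is written "12"
```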

Comment author: CynicalOptimist 05 May 2016 09:59:31PM 1 point

I think the original poster would have agreed to this even before they had the realisation. The point here is that, even when you do listen to an explanation, the absurdity bias can still mislead you.

The lady in the story had an entire conversation about evolution and still rejected it as absurd. Some ideas simply take more than 20 minutes to digest, understand, and learn about. Therefore, after 20 minutes of conversation, you cannot reasonably conclude that you've heard everything there is. You cannot reasonably conclude that you wouldn't be convinced by more evidence.

It's just like any bias really. Even when you know about it and you think you've adjusted sufficiently, you probably haven't.
