Comment author: shminux 29 July 2011 09:57:17PM 5 points [-]

Hi, as requested, here is my introduction: I ended up here thanks to HPMoR; I have a physics degree and frequent the relevant freenode channels. I have observed that scientists are not significantly more likely to behave rationally than anyone else, not even in their areas of expertise, and this site appears to explain some of that. Ironically, it appears that this community is less wrong not much more often than the average person, either, though this might be just my initial impression. In any case, I hope to improve my personal rationality quotient, despite the overwhelming odds against it.

Comment author: shminux 29 July 2011 09:32:44PM 2 points [-]

It seems somewhat ironic that EY's favorite "many worlds" interpretation http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/ is a classic case of fake explanations: it has no predictive power without the Born rule (the probability of an outcome is proportional to the squared modulus of its amplitude) of orthodox quantum mechanics, yet it is often used to explain away the uncomfortable feeling the infamous "collapse" model leaves most people with. Take away the Born rule, and the MWI can explain anything you want.
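
For concreteness, the Born rule can be sketched in a few lines of Python (a generic illustration, not tied to any QM library; the state and outcome labels are made up): given complex amplitudes for a set of outcomes, each outcome's probability is its squared modulus divided by the total.

```python
import math

def born_probabilities(amplitudes):
    """Born rule: P(outcome i) is proportional to |amplitude_i|**2."""
    weights = [abs(a) ** 2 for a in amplitudes]  # squared moduli
    total = sum(weights)                         # normalization
    return [w / total for w in weights]

# An equal-weight superposition such as (|0> + i|1>)/sqrt(2)
# yields a 50/50 split, regardless of the complex phases.
print(born_probabilities([1 / math.sqrt(2), 1j / math.sqrt(2)]))
```

Without this extra postulate, a branching wavefunction by itself says nothing about how often each branch is observed, which is the point being made above.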

Comment author: shminux 29 July 2011 07:50:19PM *  7 points [-]

It may or may not be helpful to realize that infinities (including infinitesimals) are merely a mathematical abstraction. Everything you encounter in the physical world is finite. Thus, it's not overly surprising that something actually happens, even though a given mathematical model of that something assigns it a zero probability.
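
A toy illustration of the point (plain Python with arbitrary bin counts, nothing physical assumed): under a continuous uniform model on [0, 1), the probability the model assigns to the exact bin containing any sampled outcome shrinks toward zero as the discretization is refined, yet some outcome always occurs.

```python
import random

random.seed(0)
x = random.random()  # an outcome that actually happened

# Probability a uniform model on [0, 1) assigns to the ever-narrower
# bin containing x; it tends to zero in the continuum limit, even
# though x itself was just observed.
for n_bins in (10, 10_000, 10_000_000):
    print(n_bins, 1 / n_bins)
```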

That said, mathematical descriptions that include continuity are extremely convenient (life would be rather cumbersome if we had to use finite difference calculus instead of derivatives in all applications).
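
As a small sketch of the convenience being claimed (generic Python, no particular application assumed): a forward finite difference only approximates the derivative and drags a step size h around with it, whereas the analytic derivative is a single clean expression.

```python
def forward_difference(f, x, h):
    """Finite-difference slope (f(x+h) - f(x)) / h; exact only as h -> 0."""
    return (f(x + h) - f(x)) / h

f = lambda t: t ** 3  # analytic derivative: f'(t) = 3 * t**2, so f'(2) = 12
for h in (1.0, 0.1, 0.001):
    print(h, forward_difference(f, 2.0, h))  # approaches 12 as h shrinks
```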

It is a very common tendency to identify a physical phenomenon with a particular mathematical model of it (one of the most abused models is that of virtual particles in particle physics), but one would be rather less wrong by keeping in mind that an abstraction of an object is not the object itself.

A nice (if fantastical) description of objects vs models can be found in the HPMoR chapter on partial transfiguration.

Comment author: TheOtherDave 28 July 2011 10:20:57PM 3 points [-]

Depends on what you mean by "correct".

For example, if $religion's teachings correctly constrain expectations in verifiable ways, I expect such debates to look something like this: Skeptic: "Why do you follow the teachings of $religion?" Believer: "Because its teachings correctly constrain expectations. Here, I'll show you: here's a real-world situation. What do you expect to happen next?" Skeptic: "I expect $A." Believer: "Well, applying $religion's teachings I conclude that $B is more likely." Skeptic: "Excellent! Let's see what happens." (lather, rinse, repeat) Eventually one of them says to the other: "Huh. Yeah, it seems you were right!"

Comment author: shminux 29 July 2011 03:35:26AM *  1 point [-]

"$religion's teachings correctly constrain expectations in verifiable ways" -- that's where it fails every time. The claim that the universe was created 6000 years ago "should not be taken literally" now, though it was back when it was not testable. There is some nice stuff about this in HPMoR, Ch. 22, Belief in Belief. There is no rational argument you can make that would change someone's belief if they are determined to keep it. Our Mormon friend here is a typical example. An honestly religious person would say, "this is what I choose to believe; leave logic out of it."

Comment author: JGWeissman 28 July 2011 09:01:15PM 5 points [-]

If a religion were correct, what would you expect debates with followers of that religion to look like?

Comment author: shminux 28 July 2011 09:18:02PM 0 points [-]

First, it would be interesting to know how one could explain to a neutral and mildly rational observer what it means for a given religion to be correct, and how this correctness could be tested experimentally. I don't have Yudkowsky's imagination, so it's not something I can easily conceive.

Comment author: [deleted] 28 July 2011 07:22:15PM 5 points [-]

Beh, half of LW downvotes everything remotely theist on sight. It wasn't a judgment of the evidence.

I do worry that I have been insufficiently diligent in evaluating the many religions. Hopefully any extant gods will turn out to be understanding.

In response to comment by [deleted] on How to Convince Me That 2 + 2 = 3
Comment author: shminux 28 July 2011 08:35:49PM 1 point [-]

"Rationality can't be used to argue for a fixed side; its only possible use is deciding which side to argue." People arguing for their own religion automatically fail this rather basic premise of rationality, so what's the point of getting into a discussion with them on the finer points of religious doctrine, given that they have no clue about rationality to begin with, regardless of what they say?

My question would instead be: "Is it important to you for your religion to be right? If so, how does this mesh with rationality? If not, what are the odds that all the available evidence you evaluated pointed you in this convenient direction without any bias involved?"

Comment author: [deleted] 27 July 2011 05:53:44PM *  19 points [-]

"These are methods for solving problem X that worked for me, in case you hadn't considered attempting something similar in solving X for yourself."

No, it's not the delivery that I take issue with. It's the unintended consequences. See this.

If LW must tackle self-help, I want to see meta-analyses of published science. I want sample sizes bigger than one. If a writer feels compelled to write down "what worked for them", then in the name of Bayes, at least set up an ad-hoc internet survey first. Gather some data about how people actually are, not how you are. Because you don't even know how you are.

That's the center of the other-optimizing problem.

In response to comment by [deleted] on How to enjoy being wrong
Comment author: shminux 27 July 2011 09:08:08PM 3 points [-]

I agree that the third part of the article was in the self-help style: something that only works for those who happen to self-optimize the same way the author does. This is likely a small percentage of the readers, but apparently large enough to provide glowing testimonials for published books.

However, the first two parts simply share a relevant personal experience, potentially interesting and maybe even useful for some readers, and so worthy of a post, especially if it were (less moralizingly) named along the lines of "How I learned to stop worrying and enjoy being wrong, YMMV".

I'd also like to see some tangible benefits reported at the end of the article (otherwise, what's the point of trying to be rational?), but that's just me, not going to try to other-optimize.

Comment author: khafra 22 July 2011 08:03:54PM *  3 points [-]

Shminux, welcome to LessWrong!

When using Traditional Rationality, we expect people to make claims and justify them with arguments. However, if we're just seeking the truth by the most effective means available, what may be considered evidence has a broader scope. Even in the absence of a repeatable test, we can say that there is evidence favoring one model over the alternatives.
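
The "evidence without a repeatable test" idea can be made quantitative with a one-step Bayes update in odds form (a generic sketch with made-up numbers): a single observation that is several times more likely under one model than under the alternative shifts the posterior accordingly, even if the observation can never be repeated.

```python
def update(prior, likelihood_ratio):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)  # back to a probability

# A single piece of evidence 4x as likely if Sylvania diverted the
# asteroid than if it didn't, starting from a 50/50 prior:
print(update(0.5, 4.0))  # 0.8
```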

Of course, if I were on the defense committee, I'd certainly demand some tests like the ones you described before voting for war with Sylvania; as the "ambient evidence" doesn't seem sufficient.

Comment author: shminux 22 July 2011 09:28:24PM 2 points [-]

Thanks for the welcome. I understand that Hard Rationality is often not readily applicable, and one is tempted to make do with what's available, such as in forensics.

The issue I have with this approach is that "the most effective means available" is not really effective, unless the issue is either clear-cut (shoelaces are either tied or untied) or not very important (what's the worst that can happen if wrongly reflected light makes you mistakenly believe that your shoelaces are tied?).

My concern (summarized in the last paragraph of my original comment) is that people naturally and subconsciously gravitate toward collecting evidence (comparatively easy) instead of building testable models (hard). This issue probably deserves a separate thread, unless it has already been discussed, in which case I'd appreciate a link.

Comment author: shminux 21 July 2011 09:56:39PM *  1 point [-]

Presumably no definitive (i.e. testable/falsifiable) evidence either way is originally available, so the whole argument appears to be rather anti-rationalist.

A better approach would be to set personal prejudices and idle musings aside and to attempt to construct a model that can be tested (for example: what Sylvanian-made tech would be capable of diverting asteroids? would it leave residue at the impact site? were there sudden and unexplained expenditures at the time the tech would have been designed/activated? what other artificial and/or natural events could have affected the strike timing/location?).

If one remains content with upshifting or downshifting probabilities, one is distracted from the only task that makes sense: building a testable model and testing it.