Comment author: Romashka 12 August 2015 06:03:16PM 1 point [-]

What would be an adequate weapon, then? Pavlovian training to follow the rationality to the best of one's abilities?

Comment author: Stephen_Cole 16 August 2015 07:26:42PM 1 point [-]

Great question. I believe Jack Good's answer was his "type 2 rationality", which implies a Bayes/non-Bayes synthesis, semiparametric statistics, and nondogmatism.

Comment author: [deleted] 15 August 2015 08:11:41AM 11 points [-]

Hi everyone.

I'm about to start my second year of college in Utah. My intent is to major in math and/or computer science, although more generally I'm interested in many of the subjects that LessWrongers seem to gravitate towards (philosophy, physics, psychology, economics, etc.)

I first noticed something that Eliezer Yudkowsky posted on Facebook several months ago, and have since been quietly exploring the rationality-sphere and surrounding digital territories (although I'm no longer on FB). Joining LessWrong seemed like the obvious next step given the time I had spent on adjacent sites. I'm here solely out of curiosity and philosophical interest.

Thanks to Sarunas and predecessors for the welcome page, and the LW community more generally. I look forward to being a part of it.

Comment author: Stephen_Cole 15 August 2015 03:43:21PM 3 points [-]

Exciting! If I were in your place I would look at the growing field of causal inference, which lives at the interface of statistics, computer science, epidemiology, and economics. See the books by Hernán and Robins (Causal Inference) and Pearl (Causality), as well as the Journal of Causal Inference, edited by Judea Pearl and Maya Petersen.

Comment author: elharo 15 August 2015 12:08:34PM 4 points [-]

Only in mathematics is it possible to demonstrate something beyond all doubt. When held to that standard, we find ourselves quickly overwhelmed.

-- Max Shron, Thinking with Data, O'Reilly 2014

Comment author: Stephen_Cole 15 August 2015 03:37:27PM -1 points [-]

Beyond all doubt sounds fairly dogmatic, no? Gödel proved in 1931 that Hilbert's program (circa 1900) for a complete and consistent foundation of mathematics was impossible.

Comment author: [deleted] 11 August 2015 12:37:26PM -1 points [-]

The principle, stated simply in my bastardized version, is to believe no thing with probability 1.

Meeehhhh. Believe nothing empirical with probability 1.0. Believe formal and analytical proofs with probability 1.0.

In response to comment by [deleted] on Open thread, Aug. 10 - Aug. 16, 2015
Comment author: Stephen_Cole 14 August 2015 08:39:12PM 3 points [-]

I get your point that we can have greater belief in logical and mathematical knowledge. But (as pointed out by JoshuaZ) I have seen too many errors in proofs given at scientific meetings (and in submitted publications) to believe just about anything blindly.

Comment author: IlyaShpitser 10 August 2015 04:39:50PM 1 point [-]

Physicists are not very precise about it. May I suggest looking into "potential outcomes", the language some statisticians use to talk about counterfactuals:

https://en.wikipedia.org/wiki/Rubin_causal_model

https://en.wikipedia.org/wiki/Counterfactual_definiteness

Potential outcomes let you think about a model that contains a random variable for what happens to Fred if we give Fred aspirin, and a random variable for what happens to Fred if we give Fred placebo. Even though in reality we only gave Fred aspirin. This is "counterfactual definiteness" in statistics.
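The idea above can be sketched in a few lines of Python (variable names and probabilities are illustrative, not drawn from the linked papers):

```python
import random

random.seed(0)

def potential_outcomes():
    """Each unit carries BOTH potential outcomes -- Y^aspirin and
    Y^placebo -- even though only one is ever observed."""
    y_aspirin = random.random() < 0.8   # headache resolves if treated
    y_placebo = random.random() < 0.5   # headache resolves if untreated
    return y_aspirin, y_placebo

# In the model, Fred has two well-defined potential outcomes...
y_a, y_p = potential_outcomes()

# ...but in reality we assign only one treatment, and we observe only
# the potential outcome matching that assignment (Y = Y^A).
treatment = "aspirin"
observed = y_a if treatment == "aspirin" else y_p
```

The point of "counterfactual definiteness" here is that `y_p` is treated as a perfectly well-defined quantity even though, having given Fred aspirin, we never get to see it.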

This paper uses potential outcomes to talk about outcomes of physics experiments (so there is an exact isomorphism between counterfactuals in physics and potential outcomes):

http://arxiv.org/pdf/1207.4913.pdf

Comment author: Stephen_Cole 10 August 2015 04:52:28PM 0 points [-]

Sounds like this is perhaps related to the counterfactual-consistency statement? In its simple form, it says that the counterfactual or potential outcome under policy "a" equals the factual observed outcome when you in fact undertake policy "a"; formally, Y^a = Y when A = a.

Pearl has a nice (easy) discussion in the journal Epidemiology (http://www.ncbi.nlm.nih.gov/pubmed/20864888).

Is this what you are getting at, or am I missing the point?

Comment author: IlyaShpitser 01 July 2015 04:21:00AM 1 point [-]

Man on the street needs to learn what counterfactual definiteness is.

Comment author: Stephen_Cole 10 August 2015 04:34:49PM 2 points [-]

Ilya, can you give me a definition of "counterfactual definiteness" please?

Comment author: Stephen_Cole 10 August 2015 02:20:29PM 2 points [-]

Has there been discussion of Jack Good's principle of nondogmatism? (see Good Thinking, page 30).

The principle, stated simply in my bastardized version, is to believe no thing with probability 1. It seems to underlie Good's type 2 rationality (to maximize expected utility, within reason).

This is (almost) in accord with Lindley's concept of Cromwell's rule (see Lindley's Understanding Uncertainty or https://en.wikipedia.org/wiki/Cromwell%27s_rule). And seems to be closely related to Jaynes' mind projection fallacy.
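Cromwell's rule has a mechanical consequence in Bayes' theorem: a prior of exactly 0 or 1 can never be moved by any evidence, which is one way to see why nondogmatism matters. A minimal sketch (illustrative numbers only):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# A dogmatic prior of 1 stays at 1 no matter how damning the evidence...
print(bayes_update(1.0, 0.001, 0.999))   # -> 1.0

# ...while any prior strictly between 0 and 1 responds to it.
print(bayes_update(0.99, 0.001, 0.999))  # -> ~0.09
```

(The symmetric case holds for a prior of 0, so long as the evidence has nonzero probability under the alternative.)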

Comment author: Stephen_Cole 10 August 2015 02:05:07PM 1 point [-]

"Irrationality is intellectual violence against which the pacifism of rationality may or may not be an adequate weapon."

-- Jack Good, Good Thinking, page 25.
