Clarity comments on Rationality Quotes Thread December 2015 - Less Wrong

5 Post author: elharo 02 December 2015 11:28AM


Comment author: Anders_H 01 January 2016 12:16:10AM 2 points [-]

Rate how much each intervention (or decision not to intervene) helped or hurt the situation, in retrospect, on a scale from -10 to +10.

How do you plan to do this without counterfactual knowledge?

Comment author: Clarity 01 January 2016 06:28:40AM -1 points [-]

https://en.wikipedia.org/wiki/Quasi-experiment

https://en.wikipedia.org/wiki/Observational_study

https://en.wikipedia.org/wiki/Multiple_baseline_design

take your pick

It requires a good handle on experiment design, but biostatisticians do this day in, day out. Hopefully risk analysts in defense institutions do this too.

Comment author: Anders_H 01 January 2016 06:19:55PM 3 points [-]

The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.

This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?

Comment author: Clarity 02 January 2016 05:00:50AM -1 points [-]

The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.

Yes, I concede that cross-level inferences from the aggregate level (the average across multiple similar situations) to individual-level causes have less predictive power than inferences made at the same level. However, I reckon they're the best available means of making such an inference.

This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?

Analysts have tools to model and simulate scenarios. Analysis of competing hypotheses is a staple of intelligence methodology. It's also used by earth scientists, but I haven't seen it used elsewhere. Based on this approach, analysts can:

  • make predictions about outcomes in Libya both with and without intervention
  • once they choose to intervene or not to intervene, record the actual outcomes
  • over the long run, by comparing predicted and actual outcomes, decide to re-adjust their post-hoc predictions for the counterfactual branch
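The loop above might be sketched roughly as follows. This is a minimal illustration only: the scenario names, the -10 to +10 scores, and the naive additive correction are all hypothetical assumptions on my part, not anything real analysts would use as-is.

```python
def recalibrate(predicted_counterfactual, predicted_factual, actual_factual):
    """Shift the counterfactual estimate by the error observed on the
    factual branch (a naive additive correction, purely illustrative)."""
    error = actual_factual - predicted_factual
    return predicted_counterfactual + error

# Before deciding, score both branches on the -10 to +10 scale.
predicted = {"intervene": 2.0, "abstain": -4.0}

# Suppose intervention is chosen and the observed outcome is worse
# than forecast.
actual_intervene = -1.0

# Post-hoc, adjust the never-observed branch by the factual-branch error.
adjusted_abstain = recalibrate(predicted["abstain"],
                               predicted["intervene"],
                               actual_intervene)
print(adjusted_abstain)  # -7.0
```

A real analyst would of course use a richer model than a flat additive shift, but the skeleton is the same: forecast both branches, observe one, and use the forecast error to discipline the counterfactual estimate.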

under profound uncertainty about what would have happened if the alternative decision had been made?

I'm not trying to downplay the level of uncertainty, just noting that the methodological considerations remain the same.

Comment author: ChristianKl 01 January 2016 09:14:25PM 2 points [-]

biostatisticians

Just for completion, Anders_H is one of those guys.

Comment author: Clarity 02 January 2016 04:50:49AM -1 points [-]

How self-referentially absurd. More precisely, epidemiologists do this day in, day out using biostatistical models and then applying causal inference (including the counterfactual-knowledge part). I said biostatisticians because epidemiology isn't in the common vernacular. Ironically, counterfactual knowledge is, to those familiar with the distinction, distinctly removed from the biostatistical domain.

Just for the sake of intellectual curiosity, I wonder what kind of paradox was just invoked prior to this clarification.

It wouldn't be the Epimenides paradox, since that refers to an individual making a self-referentially absurd claim:

The Epimenides paradox is the same principle as psychologists and sceptics using arguments from psychology to claim that humans are unreliable. The paradox comes from the fact that the psychologists and sceptics are human themselves, meaning that they declare themselves to be unreliable.

Anyone?

Comment author: ChristianKl 02 January 2016 09:29:22AM 1 point [-]

More precisely, epidemiologists do this day in day out using biostatistical models, then applying causal inference (the counterfactual knowledge part incl.)

Yes, Anders_H is a Doctor of Science in Epidemiology. He's someone worth listening to when he tells you what can and can't be done with experiment design.

Comment author: Clarity 03 January 2016 09:17:10AM *  -1 points [-]

Oooh, an appeal to authority. If that is the case, he is no doubt highly accomplished. However, that need not translate into blind deference.

This is a text conversation, so rhetorical questions aren't immediately apparent. Moreover, we're in a community that explicitly celebrates reason over other modes of rhetoric. So I interpreted his question about counterfactual conditions as sincere rather than rhetorical.

Comment author: ChristianKl 03 January 2016 09:49:02AM *  1 point [-]

Oooh, an appeal to authority. If that is the case he is no doubt highly accomplished. However, that need not translate to blind deference.

Yes, but if you disagree you can't simply point to "biostatisticians do this day in, day out" and a bunch of Wikipedia articles; you have to actually argue the merits of why you think those techniques can be used in this case.