Comment author: RomeoStevens 26 January 2016 02:03:25AM 4 points [-]

I have been studying meta-research (a la METRICS, Cambridge Handbook of Expertise, Kuhnian revolution etc.) and while I'm not looking for a study partner per se (my schedule is very sporadic) I would be interested in diffing models about this topic with anyone who has done some of their own investigation in the area.

Comment author: RomeoStevens 20 January 2016 06:17:42AM 2 points [-]

One aspect of both rationality training and EA that I feel interacts poorly with anxiety is that they make it easy to implicitly slip into a frame of fixing yourself, or of doing things because you are bad. Even the title of this site is a fairly negative frame. The opportunity/obligation distinction seems relevant. One thing I have done to combat this in myself is inverting biases so that a positive/opportunity version sticks.

Comment author: Jack_LaSota 16 January 2016 06:33:38PM 4 points [-]

Anyone have a better procedure for fixing this than the following?

  1. Notice the feeling.
  2. Treat it as a signal that your S1 wants you to search for cheaper ways to figure out which option is right than continuing to drive. Search for cheaper ways and execute them. Make it a thorough search and show your S1 the thoroughness of your search. Acknowledge the awfulness of "drive back and forth in an expensive search pattern" and only choose that as a last resort.
  3. If you don't immediately become much more certain of which way the hotel is in, and the "go 30mph" feeling does not go away, treat it as a signal that your S1 thinks the thought process by which you chose (under evidence-starvation) is wrong, which does not necessarily mean that the conclusion is wrong.
  4. List the ways your S1 thinks you're biased which are screwing up your evidence-starved reasoning.
  5. Perform sanity-inducing rituals to counter those biases. (Think about your actual goal of getting to the hotel as soon as possible, forgive yourself for maybe driving past it, imagine all 4 outcomes (60mph forward, 60mph backward) x (get to hotel on next try after this, don't get to hotel on next try after this) and how you would feel about them)
  6. If the feeling is still there, this procedure has failed.
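The procedure above can be sketched as simple control flow. This is a hypothetical illustration: the function and argument names are mine, and the predicates stand in for the S1 checks the steps describe.

```python
def handle_slow_down_feeling(cheap_checks, bias_rituals, feeling_persists):
    """Sketch of the six-step procedure (hypothetical names).

    cheap_checks:    callables returning True if a cheap search resolved
                     which option is right (steps 1-2).
    bias_rituals:    callables performing sanity-inducing rituals against
                     suspected biases (steps 4-5).
    feeling_persists: callable returning True while the feeling remains.
    """
    # Steps 1-2: notice the feeling, then exhaust cheaper ways of
    # figuring out the right option before the expensive search pattern.
    for check in cheap_checks:
        if check():
            return "resolved by cheap evidence"
    # Steps 3-5: treat the lingering feeling as doubt about the
    # reasoning process, and counter each suspected bias in turn.
    for ritual in bias_rituals:
        ritual()
        if not feeling_persists():
            return "resolved by debiasing"
    # Step 6: the procedure has failed.
    return "procedure failed"
```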
Comment author: RomeoStevens 16 January 2016 11:45:20PM *  2 points [-]

My current model is that closed/exploitation mode simply cannot handle ambiguity. So the generalized cue becomes: notice ambiguity → pop into open/exploration mode and figure out what heuristic I actually endorse → back to closed mode.

Open mode: generating checklists

Closed mode: executing checklists

Open and closed research summary: https://www.youtube.com/watch?v=Qby0ed4aVpo

Comment author: RomeoStevens 02 January 2016 07:22:26AM 7 points [-]

“Rationality is not just something you do so that you can make more money, it is a binding principle. Rationality is a really good idea. You must avoid the nonsense that is conventional in one’s own time. It requires developing systems of thought that improve your batting average over time.”

-Charlie Munger on average decision quality and systems vs goals.

In response to Why CFAR's Mission?
Comment author: coyotespike 01 January 2016 03:53:16PM 7 points [-]

This is an excellent post, which I'll return to in future. I particularly like the note about the convergence between Superforecasting, Feynman, Munger, LW-style rationality, and CFAR - here's a long list of Munger quotations (collected by someone else) which exemplifies some of this convergence. http://25iq.com/quotations/charlie-munger/

Comment author: RomeoStevens 02 January 2016 02:41:34AM 2 points [-]

There's also a pretty big overlap with the intelligence community, which is briefly discussed in Superforecasting (the Good Judgment Project was funded by IARPA).

In response to Why CFAR's Mission?
Comment author: alyssavance 31 December 2015 01:11:21PM *  11 points [-]

I mostly agree with the post, but I think it'd be very helpful to add specific examples of epistemic problems that CFAR students have solved, both "practice" problems and "real" problems. Eg., we know that math skills are trainable. If Bob learns to do math, along the way he'll solve lots of specific math problems, like "x^2 + 3x - 2 = 0, solve for x". When he's built up some skill, he'll start helping professors solve real math problems, ones where the answers aren't known yet. Eventually, if he's dedicated enough, Bob might solve really important problems and become a math professor himself.

Training epistemic skills (or "world-modeling skills", "reaching true beliefs skills", "sanity skills", etc.) should go the same way. At the beginning, a student solves practice epistemic problems, like the ones Tetlock uses in the Good Judgement Project. When they get skilled enough, they can start trying to solve real epistemic problems. Eventually, after enough practice, they might have big new insights about the global economy, and make billions at a global macro fund (or some such, lots of possibilities of course).

To use another analogy, suppose Carol teaches people how to build bridges. Carol knows a lot about why bridges are important, what the parts of a bridge are, why iron bridges are stronger than wood bridges, and so on. But we'd also expect that Carol's students have built models of bridges with sticks and stuff, and (ideally) that some students became civil engineers and built real bridges. Similarly, if one teaches how to model the world and find truth, it's very good to have examples of specific models built and truths found - both "practice" ones (that are already known, or not that important) and ideally "real" ones (important and haven't been discovered before).

Comment author: RomeoStevens 01 January 2016 03:02:23AM 0 points [-]

Before-and-after performance on prediction markets jumps to mind and is easy to measure, though it doesn't cover the breadth of short-feedback topics that would be ideal.

Comment author: RomeoStevens 31 December 2015 03:23:15AM *  7 points [-]

I never would have gotten a business off the ground without understanding concepts like the outside view, the introspection illusion, anchoring, fox vs hedgehog thinking, belief constraining expected evidence, basic statistics, and many more.

But it's more than that. LW-type material upgraded my understanding of what it means to understand material. Previously I was doing a lot of guessing the teacher's password, CYA, moral licensing, etc., in justifying my actions to myself. Building models of when and where to apply concepts, how to seek out new ones when the existing concepts are insufficient, and how to validate them against problem domains is as important as the concepts themselves.

This is partly the difference between LW and just handing someone a copy of Thinking Fast and Slow. In the latter case, I would have read the book, gone "yes, that sounds very nice" and then continued on my way (I think).

Comment author: Lumifer 23 December 2015 09:18:09PM 13 points [-]

Just say you are a dictator and ban at a whim

There is a slight problem in that LW is not Nancy's personal blog to be shaped by her whims.

Comment author: RomeoStevens 23 December 2015 10:40:04PM 20 points [-]

Voting for a new CEO is dramatically more effective than the board trying to micromanage the current CEO with rules. Find a reasonable person and let them be flexibly reasonable.

Comment author: Gunnar_Zarncke 23 December 2015 09:11:15AM 0 points [-]

Could you link to the citations you find smelly?

Comment author: RomeoStevens 23 December 2015 10:02:05PM *  2 points [-]

Top comment here on the Alexander Parkhomov replication: http://www.e-catworld.com/2014/12/30/alexander-parkhomov-on-calibration-in-his-test/

The claim that this: http://animpossibleinvention.com/2015/10/15/swedish-scientists-claim-lenr-explanation-break-through/ bolsters the case smells like a typical aggrandizing claim, since it is not a replication but simply a speculative paper on the causal mechanism, if such an effect exists. As has been repeated many times, no one is questioning that the energy is there; it's the mechanism by which it actually provides excess power at low temperatures that is in question. See the comments thread in the Next Big Future piece here: http://nextbigfuture.com/2015/06/chinas-lenr-is-getting-excess-600-watts.html#soa_062bbe85

A review of the more credible replication does cause an update in the positive direction, but only a small one: http://www.infinite-energy.com/iemagazine/issue118/analysis.html

Comment author: Lumifer 22 December 2015 09:34:46PM 4 points [-]

That was a bit... strange.

Huw Price, a professional philosopher who happens to be one of the founders and the Academic Director of the Centre for the Study of Existential Risk (the one in Cambridge, UK), wrote a piece which is quite optimistic about cold fusion in general and Andrea Rossi in particular.

Comment author: RomeoStevens 23 December 2015 04:02:52AM 0 points [-]

Indeed strange. Following up on the linked citations finds things that smell pretty dubious.
