Comment author: PhilGoetz 31 August 2015 06:12:10PM 2 points [-]

I really like this post. Questions:

  • Can the (physical or mental) posture that's appropriate for avoiding mistakes be opposed to the posture appropriate for focusing power on one point?

  • Are there multiple styles of posture or thought that are equally effective local maxima, while hybrids of them are less effective?

Comment author: Valentine 31 August 2015 10:55:49PM 1 point [-]

I really like this post.

Thanks!

Can the (physical or mental) posture that's appropriate for avoiding mistakes be opposed to the posture appropriate for focusing power on one point?

Sorry, I'm not sure what you mean.

I don't do a lot of brick-breaking with my fists, so I might not know much about doing that well. But my impression is that the principles that transfer force well through the body in aikido will also transfer force well when trying to deliver a sharp blow to exactly one spot on a brick. In aikido at least, there's no opposition between posture that helps make you do the right thing and posture that helps you avoid doing the wrong thing.

…but of course, it's possible to compromise posture and still deliver a lot of power to one point, just like it's possible to avoid falling over when throwing someone while you have a weak spine posture.

I think the analog in the mind is something like focus or concentration. I think it's certainly possible to concentrate really hard in a way that violates good mental posture in other situations, but I intuitively wouldn't anticipate very good results from that compared to the counterfactual where the focus is done while maintaining good mental posture.

But I really don't know.

And I might have totally misunderstood what you were gesturing at. Please feel free to clarify if needed!

Are there multiple styles of posture or thought that are equally effective local maxima, while hybrids of them are less effective?

I don't know. I can't think of any for the body. Some people claim that you should never round your lower back outward, but as far as I can tell the real rule is to brace your spine so that it can transfer force well, which is much harder to do when it's rounded but not impossible. There are some situations where using the rule of thumb of "curve in your lower back" just isn't possible, so you have to go back to the reason why the rule is there. At that point you start getting things that look like violations of "good posture" but are actually quite good uses of body mechanics. (In this case you brace your spine with your abs while "lengthening" it.)

I'm less sure about mental posture. But that's because I don't have a very good reductionistic model of what "mental posture" is yet.

Comment author: CCC 31 August 2015 09:10:05AM 1 point [-]

I felt like I was considering the opposite roughly the same way a young child replies to their parent saying "Now say that you're sorry" with an almost sarcastic "I'm sorry."

One thought on this point - it might be easier to evaluate "what-if" scenarios by explicitly considering them as fictional. What if I am wrong about this assertion? Well, in such a fictional universe, I would then observe consequences A, B, and C... and only then do I ask the question about whether the assertion or its opposite appears more likely to be fictional.

...it may also be partially because I enjoy speculating about fictional universes.

Comment author: Valentine 31 August 2015 05:41:11PM 0 points [-]

Yep! I find stuff like this helpful.

…and yet when I pump it through the analogical mapping I'm using here between mind and body, mental movements like this feel a bit like practicing grabbing things as I fall as a way of dealing with being pushed or knocked into. That seems like a useful skill, but like a second-order tweak after figuring out how to not get knocked totally off-balance when someone bumps into me. Sort of like trying to learn how to do parkour before learning how to brace one's spine.

And appropriately enough, doing that with parkour actually endangers your spine. Mapping that back through the analogy to the mind again, I think I see a close correlate: I don't know when I can trust my fictional "what-if" thinking to kick in when I need it, or if my attempt to do "what-if" thinking will still be dutiful rather than based on a sincere desire to know the truth.

…although I do think the technique you're suggesting is useful, and I'm totally going to play with it.

Comment author: Lumifer 31 August 2015 04:22:57PM 4 points [-]

A bit of a side question -- would you recommend the Supple Leopard book for figuring out the underlying biomechanics of many martial arts techniques? The spine positioning, in particular, looks a lot like what Tai Chi tries to achieve...

Comment author: Valentine 31 August 2015 05:26:23PM 3 points [-]

would you recommend the Supple Leopard book for figuring out the underlying biomechanics of many martial arts techniques?

Yes.

Comment author: ScottL 31 August 2015 11:45:52AM *  6 points [-]

What, exactly, are the principles of good mental posture for the Art of Rationality?

I’m not sure if I can answer this because I don’t understand what good mental posture is or even what good physical posture is, for that matter. Can you please confirm if my understanding, below, of what these are is correct?

Basically, posture refers to the body's alignment and positioning with respect to the force of gravity.

Good posture:

  • is efficient
  • allows movement within the posture
  • prepares for the next movement
  • allows you to react to unexpected forces
  • is structurally strong

Good posture refers to the removal of impediments in movement. It is about activating only the right muscles at the right time in order to achieve specific movements.

Good mental posture, on the other hand, seems to involve taking certain perspectives or entering certain frames of mind that are conducive to the achievement of your current goals.

From the article you linked:

I've been using a term for changing the overall quality of my thoughts and feelings to something more conducive to accomplishing my immediate goal. I call it "adopting a mental posture".

If we view thought activation in a similar way to how we view muscle activation in regards to physical posture, then we can think of good mental posture as the undertaking of certain perspectives or mindsets that inhibit unhelpful thoughts and induce helpful thoughts, where what is helpful depends on the current task at hand.

A good mental posture will be:

  • Relaxed - there is no misattribution. That is, you are not carrying thoughts from previous interactions or arguments. You start the thought process with a relaxed mindset in which you are free from recurrent and intruding thoughts.
  • Fluid - there is no stickiness in your perspectives. This means that you can easily change your perspective. You can think of what the opposites are or what the other person you’re arguing with thinks or what the situation would be like if certain variables were changed etc. The key point here is that you can move between perspectives with ease. There is no flinching.
  • Efficient and synchronous - you are activating only the thoughts that are pertinent to the task at hand. You are also thinking of the pertinent thoughts at the right time. That is, you don’t linger and dwell on certain thoughts.
  • Adaptable - if you receive new information that requires you to change perspective, then keeping good posture means you do change it. This means that you update your beliefs.
  • Normally in a broad perspective - we can think of broadness as similar to stability in physical posture. Just as stability in physical posture is transient (you are not stable during the transition to a new movement, but you default to being stable), your psychical (mental) posture should by default be broad, but you should be able to transition to a narrow perspective if this is going to be beneficial. You do need to be able to transition back to the broad perspective, though.

PS. Physical posture and mental posture may be entwined. People who are in pain or tired often have bad posture.

Comment author: Valentine 31 August 2015 05:25:07PM 6 points [-]

I’m not sure if I can answer this because I don’t understand what good mental posture is or even what good physical posture is, for that matter. Can you please confirm if my understanding, below, of what these are is correct?

Well, I can do that for physical posture. I don't know if I can do that for mental posture, but I'll try.

I think in broad strokes your description of what good physical posture is sounds right to me. I wouldn't tie it to gravity specifically; I think it makes sense to talk about good posture in a space station. But maybe replace "gravity" with "surrounding forces" and I think it's basically right.

I'd sum it up by saying that posture is a description of how efficient the arrangement of your body is at transmitting forces. A curled-forward upper back is terrible at transmitting forces between your arms and your hips when compared to a straighter upper back, so I'm inclined to call a straighter upper back "better posture".

There seem to be a few default physical positions that are about as good at general force transmission as a human body can get. Those positions are what I call "good posture".

I personally like Todd Hargrove's breakdown of what good physical posture does for you, though I think the one you linked to is reasonably good too.

I honestly don't know what good mental posture is. I'm gesturing at an intuition based on a bunch of my own experiences and how they resonate with my experience with physical posture.

For instance, if someone trips and knocks into me, I'm much more likely than untrained people to just keep my ground. If I get knocked to the side, though, I'm likely to keep my torso moving as basically one piece, which makes it really easy for me to recover my balance. It's really notable to me when my postural habits slip up and someone knocks into me because I feel like I'm flopping around through the air as I fall over, and relative to my baseline it feels physically dangerous to me.

I notice something that feels analogous in my mind. If someone turned to me and said "Val, go get me coffee", I'm likely to get agitated in a way that reminds me of getting bumped into while having a floppy core. I can pause and use some CBT-like techniques to "catch" myself by, say, noticing that the person probably didn't mean to offend me - but this seems more analogous to grabbing a hold of something nearby to keep myself from falling than it does having a solid core. Instead, I notice that there's some kind of way I can choose to orient myself to the situation and to myself that lets me notice my annoyance at being ordered around and basically not get "knocked over". In that mental "position", I feel like the CBT-like thoughts are much more solid mental "movements", more like taking a stable step to keep my balance than grabbing at whatever is in reach as I fall.

If I had to guess at a definition of mental posture, I would try by analogy to the "efficient at transmitting forces" description of physical posture above. Maybe something like saying it's a description of how efficiently one's patterns of directing attention let one mentally navigate one's environment. The thing is, I haven't really worked out how to capture the intuition I have that being unbothered by being offended is a function of good mental posture whereas being really fast at mathematical computations isn't.

Good mental posture, on the other hand, seems to involve taking certain perspectives or entering certain frames of mind that are conducive to the achievement of your current goals.

From the article you linked:

I've been using a term for changing the overall quality of my thoughts and feelings to something more conducive to accomplishing my immediate goal. I call it "adopting a mental posture".

Right, though I think this might be too abstract to be useful. I could also say that physical posture involves taking certain physical positions that are conducive to the achievement of your current goals. I think that's accurate, but I don't think it quite captures the details that are useful in the analogical mapping.

If we view thought activation in a similar way to how we view muscle activation in regards to physical posture, then we can think of good mental posture as the undertaking of certain perspectives or mindsets that inhibit unhelpful thoughts and induce helpful thoughts, where what is helpful depends on the current task at hand.

That's a neat take on it. I feel like it's missing something; e.g., in the anxious/avoidant trap in attachment theory, the problem isn't just the thoughts, but also something about the way that emotional anticipations seem "off balance". Just changing thought patterns a la CBT doesn't seem to reach deeply enough to fix attachment wounds in my experience. But the basic idea is neat. It reminds me of the idea of avoiding wasted mental movements (e.g., thoughts like "I don't know if I can handle this!" are utterly wasted in nearly all possible futures where you succeed, so it seems worthwhile to just not bother with that thought).

(By the way, I'd warn not to take attachment theory too seriously. It has a lot of psychobabble in it. I do think it does a really nice job of describing some experiences people have, and the "anxious/avoidant trap" is a great example. But the page I just linked to includes a bunch of Freudian guesswork about why avoidants attract anxious folk and vice versa, and that's basically without any empirical support as far as I know.)

A good mental posture will be:

  • Relaxed…
  • Fluid…
  • Efficient and synchronous…
  • Adaptable…
  • Normally in a broad perspective…

I like this breakdown. It resonates with me. There are two details I'd want to tweak based on my limited personal experience playing with this stuff:

  • While I really like the framing of good mental posture in terms of avoiding what I (due to some conversations with Eliezer) call "wasted mental movements", I'm really hesitant to name keeping one's mind unwaveringly on a task a virtue. I'm reminded of how mathematicians classically need to distract themselves after being stuck on a problem for a long while. There seems to be something very good that comes out of (1) priming the subconscious mind with a lot of potential updates and then (2) getting the conscious mind out of the way so that the subconscious mind can do some kind of magical processing in the background. (The same thing seems to happen with physical skills, by the way: I keep finding that taking weeks-long breaks from aikido sometimes boosts my skill quite a lot more than training over similar time periods does.)
  • I intuit that the "adaptable" point isn't quite right. I'm inclined to think that being adaptable is a little bit like being able to sidestep or block an attack: you really need good posture to do it well, but there's still a skill that needs to be trained. But this is based just on how the analogy between mental and physical postures maps in my head.

Overall I like your description though. It gives me the impression that you're looking at basically the same thing I am.

… psychical posture…

I thought this was a delightful use of language! I had been using "mental arts" to act as a verbal and visual mirror for "martial arts", but hadn't noticed this mapping between "physical" and "psychical". Thank you for this!

PS. Physical posture and mental posture may be entwined. People who are in pain or tired often have bad posture.

Yep. I'm a little surprised by how strong the analogy is in my inner experience, which makes me wonder if the mapping is somehow a natural one.

I'm reminded of Todd Hargrove's suggestion that the brain is for movement and his follow-up analysis of the idea.

Comment author: John_Maxwell_IV 24 May 2015 05:29:16AM 5 points [-]

People select hypotheses for testing because they have previously weakly updated in the direction of them being true. Seeing empirical data produces a later, stronger update.

Comment author: Valentine 24 May 2015 07:36:08PM 1 point [-]

I like your way of saying it. It's much more efficient than mine!

Comment author: [deleted] 24 May 2015 06:17:16PM *  -2 points [-]

Those are not different models. They are different interpretations of the utility of probability in different classes of applications.

though I'm not sure how you would find out the frequency at which hypotheses turn out to be true the way you figure out the frequency at which a coin comes up heads. But that could just be my not being as familiar thinking in terms of the Frequentist model

You do it exactly the same as in your Bayesian example.

I'm sorry, but this Bayesian vs Frequentist conflict is for the most part non-existent. If you use probability to model the outcome of an inherently random event, people have called that “frequentist.” If instead you model the event as deterministic, but your knowledge over the outcome as uncertain, then people have applied the label “bayesian.” It's the same probability, just used differently.

It's like how if you apply your knowledge of mechanics to bridge and road building, it's called civil engineering, but if you apply it to buildings it is architecture. It's still mechanical engineering either way, just applied differently.

One of the failings of the sequences is the amount of emphasis that is placed on “Frequentist” vs “Bayesian” interpretations. The conflict between the two exists mostly in Yudkowsky's mind. Actual statisticians use probability to model events and knowledge of events simultaneously.

Regarding the other points, every single example you gave involves using empirical data that had not sufficiently propagated, which is exactly the sort of use I am in favor of. So I don't know what it is that you disagree with.

Comment author: Valentine 24 May 2015 07:32:00PM *  7 points [-]

Those are not different models. They are different interpretations of the utility of probability in different classes of applications.

That's what a model is in this case.

I'm sorry, but this Bayesian vs Frequentist conflict is for the most part non-existent.

[…]

One of the failings of the sequences is the amount of emphasis that is placed on “Frequentist” vs “Bayesian” interpretations. The conflict between the two exists mostly in Yudkowsky's mind. Actual statisticians use probability to model events and knowledge of events simultaneously.

How sure are you of that?

I know a fellow who has a Ph.D. in statistics and works for the Department of Defense on cryptography. I think he largely agrees with your point: professional statisticians need to use both methods fluidly in order to do useful work. But he also doesn't claim that they're both secretly the same thing. He says that strong Bayesianism is useless in some cases that Frequentism gets right, and vice versa, though his sympathies lie more with the Frequentist position on pragmatic grounds (i.e. that methods that are easier to understand in a Frequentist framing tend to be more useful in a wider range of circumstances in his experience).

I think the debate is silly. It's like debating which model of hyperbolic geometry is "right". Different models highlight different intuitions about the formal system, and they make different aspects of the formal theorems more or less relevant to specific cases.

I think Eliezer's claim is that as a matter of psychology, using a Bayesian model of probability lets you think about the results of probability theory as laws of thought, and from that you can derive some useful results about how one ought to think and what results from experimental psychology ought to capture one's attention. He might also be claiming somewhere that Frequentism is in fact inconsistent and therefore is simply a wrong model to adopt, but honestly if he's arguing that then I'm inclined to ignore him because people who know a lot more about Frequentism than he does don't seem to agree.

But there is a debate, even if I think it's silly and quite pointless.

And also, the axiomatic models are different, even if statisticians use both.

Regarding the other points, every single example you gave involves using empirical data that had not sufficiently propagated, which is exactly the sort of use I am in favor of. So I don't know what it is that you disagree with.

The concern about AI risk is also the result of an attempt to propagate implications of empirical data. It just goes farther than what I think you consider sensible, and I think you're encouraging an unnecessary limitation on human reasoning power by calling such reasoning unjustified.

I agree, it should itch that there haven't been empirical tests of several of the key ideas involved in AI risk, and I think there should be a visceral sense of making bullshit up attached to this speculation unless and until we can find ways to do those empirical tests.

But I think it's the same kind of stupid to ignore these projections as it is to ignore that you already know how your New Year's Resolution isn't going to work. It's not obviously as strong a stupidity, but the flavor is exactly the same.

If we could banish that taste from our minds, then even without better empiricism we would be vastly stronger.

I'm concerned that you're underestimating the value of this strength, and viewing its pursuit as a memetic hazard.

I don't think we have to choose between massively improving our ability to make correct clever arguments and massively improving the drive and cleverness with which we ask nature its opinion. I think we can have both, and I think that getting AI risk and things like it right requires both.

But just as measuring everything about yourself isn't really a fully mature expression of empiricism, I'm concerned about the memes you're spreading in the name of mature empiricism retarding the art of finishing thinking.

I don't think that they have to oppose.

And I'm under the impression that you think otherwise.

Comment author: [deleted] 22 May 2015 06:58:10PM *  3 points [-]

Thank you for correcting me on this.

So the source of the confusion is the Author's notes to HPMoR. Eliezer promotes both CFAR and MIRI workshops and donation drives, and is ambiguous about his full employment status--it's clear that he's a researcher at MIRI, but if it was ever explicitly mentioned who was paying for his rationality work, I missed it. Googling "CFAR site:hpmor.com" does show that on http://hpmor.com/applied-rationality/, a page I never read, he discloses not having a financial relationship with CFAR. But he notes many times elsewhere that "his employer" has been paying for him to write a rationality textbook, and at times given him paid sabbaticals to finish writing HPMOR because he was able to convince his employer that it was in their interest to fund his fiction writing.

As I said I can understand the argument that it would be beneficial to an organization like CFAR to have as fun and interesting an introduction to rationality as HPMOR is, ignoring for a moment the flaws in this particular work I pointed out elsewhere. It makes very little sense for MIRI to do so--I would frankly be concerned about them losing their non-profit status as a result, as writing rationality textbooks let alone harry potter fanfics is so, so far outside of MIRI's mission.

But anyway, it appears that I assumed it was CFAR employing him, not MIRI. I wonder if I was alone in this assumption.

EDIT: To be clear, MIRI and CFAR have shared history--CFAR is an offshoot of MIRI, and both organizations have shared offices and staff in the past. Your staff page lists Eliezer Yudkowsky as a "Curriculum Consultant" and specifically mentions his work on HPMOR. I'll take your word that none of it was done with CFAR funding, but that's not the expectation a reasonable person might have from your very own website. If you want to distance yourself from HPMOR you might want to correct that.

Comment author: Valentine 24 May 2015 05:52:18PM 3 points [-]

To be clear, I can understand where your impression came from. I don't blame you. I spoke up purely to crush a rumor and clarify the situation.

I'll take your word that none of it was done with CFAR funding, but that's not the expectation a reasonable person might have from your very own website. If you want to distance yourself from HPMOR you might want to correct that.

That's a good point. I'll definitely consider it.

We're not trying to distance ourselves from HPMOR, by the way. We think it's useful, and it does cause a lot of people to show interest in CFAR.

But I agree, as a nonprofit it might be a good idea for us to be clearer about whom we are and are not paying. I'll definitely think about how to approach that.

Comment author: [deleted] 24 May 2015 04:05:35AM *  0 points [-]

Perhaps you're using a Frequentist definition of "likelihood" whereas I'm using a Bayesian one?

There's a difference? Probability is probability.

So, if you mean to suggest that figuring out which hypothesis is worthy of testing does not involve altering our subjective likelihood that said hypothesis will turn out to be true, then I quite strongly disagree.

But if you mean that clever arguments can't change what's true even by a little bit, then of course I agree with you.

If you go about selecting a hypothesis by evaluating a space of hypotheses to see how they rate against your model of the world (whether you think they are true) and against each other (how much you stand to learn by testing them), you are essentially coming to reflective equilibrium regarding these hypotheses and your current beliefs. What I'm saying is that this shouldn't change your actual beliefs -- it will flush out some stale caching, or at best identify an inconsistent belief, including empirical data that you haven't fully updated on. But it does not, by itself, constitute evidence.

So a clever argument might reveal an inconsistency in your priors, which in turn might make you want seek out new evidence. But the argument itself is insufficient for drawing conclusions. Even if the hypothesis is itself hard to test.

Comment author: Valentine 24 May 2015 05:28:14PM 8 points [-]

Perhaps you're using a Frequentist definition of "likelihood" whereas I'm using a Bayesian one?

There's a difference? Probability is probability.

There very much is a difference.

Probability is a mathematical construct. Specifically, it's a special kind of measure p on a measure space M such that p(M) = 1 and p obeys a set of axioms that we refer to as the axioms of probability (where an "event" from the Wikipedia page is to be taken as any measurable subset of M).
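To make that formal definition concrete, here's a minimal finite model of the axioms in code. (The fair-die space is my own illustration, not something from the thread.)

```python
from fractions import Fraction

# A toy finite measure space: M = {1,...,6}, a fair die.  An "event" is
# any subset of M, and p assigns it the sum of its outcomes' masses.
M = frozenset(range(1, 7))
mass = {outcome: Fraction(1, 6) for outcome in M}

def p(event):
    assert event <= M  # events must be (measurable) subsets of M
    return sum(mass[x] for x in event)

# Checking the axioms on this model:
assert p(M) == 1                        # normalization: p(M) = 1
A, B = frozenset({1, 2}), frozenset({5, 6})
assert p(A | B) == p(A) + p(B)          # additivity for disjoint events
assert p(frozenset()) == 0              # the empty event has measure 0
```

Nothing here depends on the Frequentist/Bayesian split: both camps accept these axioms and differ only on what p "means", which is the point made below.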

This is a bit like highlighting that Euclidean geometry is a mathematical construct based on following thus-and-such axioms for relating thus-and-such undefined terms. Of course, in normal ways of thinking we point at lines and dots and so on, pretend those are the things that the undefined terms refer to, and proceed to show pictures of what the axioms imply. Formally, mathematicians refer to this as building a model of an axiomatic system. (Another example of this is elliptic geometry, which is a type of non-Euclidean geometry, which you can model as doing geometry on a sphere.)

The Frequentist and Bayesian models of probability theory are relevantly different. They both think of M as the space of possible results (usually called the "sample space" but not always) and a measurable subset E ⊆ M as an "event". But they use different models of p:

  • Frequentists suggest that were you to look at how often all of the events in M occur, the one we're looking at (i.e., E) would occur at a certain frequency, and that's how we should interpret p(E). E.g., if M is the set of results from flipping a fair coin and E is "heads", then it is a property of the setup that p(E) = 0.5. A different way of saying this is that Frequentists model p as describing a property of that which they are observing - i.e., that probability is a property of the world.
  • Bayesians, on the other hand, model p as describing their current state of confidence about the true state of the observed phenomenon. In other words, Bayesians model p as being a property of mental models, not of the world. So if M is again the results from flipping a fair coin and E is "heads", then to a Bayesian the statement p(E) = 0.5 is equivalent to saying "I equally expect getting a heads to not getting a heads from this coin flip." To a Bayesian, it doesn't make sense to ask what the "true" probability is that their subjective probability is estimating; the very question violates the model of p by trying to sneak in a Frequentist presumption.
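The two readings can be sketched against the same coin. (The simulation and the choice of a uniform Beta prior are my own illustration, not from the thread.)

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Frequentist reading: p(heads) is a property of the coin itself,
# revealed as the long-run frequency of heads over many flips.
flips = [random.random() < 0.5 for _ in range(100_000)]
long_run_frequency = sum(flips) / len(flips)  # converges toward 0.5

# Bayesian reading: p(heads) is my current confidence, which moves as
# evidence arrives.  With a uniform Beta(1, 1) prior over the coin's
# bias, seeing h heads and t tails gives a Beta(1 + h, 1 + t)
# posterior, whose mean is my updated p(heads).
h = sum(flips[:10])  # heads among the first ten flips
t = 10 - h
posterior_mean = (1 + h) / (2 + h + t)
```

To the Frequentist, `long_run_frequency` estimates a fact about the coin; to the Bayesian, `posterior_mean` is not an estimate of any "true probability" at all, just a summary of their state of knowledge after ten flips.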

Now let's suppose that M is a hypothesis space, including some sector for hypotheses that haven't yet been considered. When we say that a given hypothesis H is "likely", we're working within a partial model, but we haven't yet said what "likely" means. The formalism is easy: we require that H ⊆ M is measurable, and the statement that "it's likely" means that p(H) is larger than most other measurable subsets of M (and often we mean something stronger, like p(H) > 0.5). But we haven't yet specified in our model what p(H) means. This is where the difference between Frequentism and Bayesianism matters. A Frequentist would say that the probability is a property of the hypothesis space, and noticing H doesn't change that. (I'm honestly not sure how a Frequentist thinks about iterating over a hypothesis space to suggest that H in fact would occur at a frequency of p(H) in the limit - maybe by considering the frequency in counterfactual worlds?) A Bayesian, by contrast, will say that p(H) is their current confidence that H is the right hypothesis.

What I'm suggesting, in essence, is that figuring out which hypothesis H ⊆ M is worth testing is equivalent to moving from p to p' in the space of probability measures on M in a way that causes p'(H) > p(H). This is coming from using a Bayesian model of what p is.
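A toy version of that move from p to p' in code. (The hypothesis names, the catch-all bucket, and the reweighting rule are my own illustration; in practice the shift would come from an actual argument or observation, not an arbitrary factor.)

```python
# A small hypothesis space: three named hypotheses plus a catch-all
# bucket for hypotheses not yet considered.  p is a probability
# measure: the values sum to 1.
p = {"H1": 0.2, "H2": 0.2, "H3": 0.1, "unconsidered": 0.5}

def promote(p, name, factor):
    """Move from p to p': multiply one hypothesis's mass by `factor`
    (a clever argument raising its plausibility), taking the extra
    mass from the catch-all bucket, then renormalize."""
    p2 = dict(p)
    gain = p2[name] * (factor - 1)
    gain = min(gain, p2["unconsidered"])  # can't take more than is there
    p2[name] += gain
    p2["unconsidered"] -= gain
    total = sum(p2.values())
    return {k: v / total for k, v in p2.items()}

p_prime = promote(p, "H2", 2.0)
# p' is still a probability measure, and p'(H2) > p(H2):
# noticing that H2 is worth testing just is this reweighting.
```

The point of the sketch is that nothing about the world changed between `p` and `p_prime`; only the measure over the hypothesis space did, which is exactly the Bayesian reading of "this hypothesis now looks worth testing".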

Of course, if you're using a Frequentist model of p, then "most likely hypothesis" actually refers to a property of the hypothesis space - though I'm not sure how you would find out the frequency at which hypotheses turn out to be true the way you figure out the frequency at which a coin comes up heads. But that could just be my not being as familiar thinking in terms of the Frequentist model.

I'll briefly note that although I find the Bayesian model more coherent with my sense of how the world works on a day-by-day basis, I think the Frequentist model makes more sense when thinking about quantum physics. The type of randomness we find there isn't just about confidence, but is in fact a property of the quantum phenomena in question. In this case a well-calibrated Bayesian has to give a lot of probability mass to the hypothesis that there is a "true probability" in some quantum phenomena, which makes sense if we switch the model of p to be Frequentist.

But in short:

Yes, there's a difference.

And things like "probability" and "belief" and "evidence" mean different things depending on what model you use.

What I'm saying is that this shouldn't change your actual beliefs -- it will flush out some stale caching, or at best identify an inconsistent belief, including empirical data that you haven't fully updated on. But it does not, by itself, constitute evidence.

Yep, we disagree.

I think the disagreement is on two fronts. One is based on using different models of probability, which is basically not an interesting disagreement. (Arguing over which definition to use isn't going to make either of us smarter.) But I think the other is substantive. I'll focus on that.

In short, I think you underestimate the power of noticing implications of known facts. I think that if you look at a few common or well-known examples of incomplete deduction, it becomes pretty clear that figuring out how to finish thinking would be intensely powerful:

  • Many people make resolutions to exercise, be nicer, eat more vegetables, etc. And while making those resolutions, they often really think they mean it this time. And yet, there's often a voice of doubt in the back of the mind, as though saying "Come on. You know this won't work." But people still quite often spend a bunch of time and money trying to follow through on their new resolution - often failing for reasons that they kind of already knew would happen (and yet often feeling guilty for not sticking to their plan!).
  • Religious or ideological deconversion often comes from letting in facts that are already known. E.g., I used to believe that the results of parapsychological research suggested some really important things about how to survive after physical death. I knew all the pieces of info that finally changed my mind months before my mind actually changed. I had even done experiments to test my hypotheses and it still took months. I'm under the impression that this is normal.
  • Most people reading this already know that if they put a ton of work into emptying their email inbox, they'll feel good for a little while, and then it'll fill up again, complete with the sense of guilt for not keeping up with it. And yet, somehow, it always feels like the right thing to do to go on an inbox-emptying flurry, and then get around to addressing the root cause "later" or maybe try things that will fail after a month or two. This is an agonizingly predictable cycle. (Of course, this isn't how it goes for everyone, but it's common enough that well over half the people who attend CFAR workshops seem to relate to it.)
  • Most of Einstein's work in raising special relativity to consideration consisted of saying "Let's take the Michelson-Morley result at face value and see where it goes." Note that he is now considered the archetypal example of a brilliant person primarily for his ability to highlight worthy hypotheses via running with the implications of what is already known or supposed.
  • Ignaz Semmelweis found that hand-washing dramatically reduced mortality in important cases in hospitals. He was ignored, criticized, and committed to an insane asylum where guards beat him to death. At a cultural level, the fact that whether Semmelweis was right was (a) testable and (b) independent of opinion failed to propagate until after Louis Pasteur gave the medical community justification to believe that hand-washing could matter. This is a horrendous embarrassment, and thousands of people died unnecessarily because of a cultural inability to finish thinking. (Note that this also honors the need for empiricism - but the point here is that the ability to finish thinking was a prerequisite for empiricism mattering in this case.)

I could keep going. Hopefully you could too.

But my point is this:

Please note that there's a baby in that bathwater you're condemning as dirty.

Comment author: [deleted] 23 May 2015 06:50:03PM *  -1 points [-]

Selecting a likely hypothesis for consideration does not alter that hypothesis' likelihood. Do we agree on that?

Comment author: Valentine 23 May 2015 08:17:04PM 4 points [-]

Hmm. Maybe. It depends on what you mean by "likelihood", and by "selecting".

Trivially, noticing a hypothesis and that it's likely enough to justify being tested absolutely is making it subjectively more likely than it was before. I consider that tautological.

If someone is looking at n hypotheses and then decides to pick the kth one to test (maybe at random, or maybe because they all need to be tested at some point so why not start with the kth one), then I quite agree, that doesn't change the likelihood of hypothesis #k.

But in my mind, it's vividly clear that the process of plucking a likely hypothesis out of hypothesis space depends critically on moving probability mass around in said space. Any process that doesn't do that is literally picking a hypothesis at random. (Frankly, I'm not sure a human mind even can do that.)

The core problem here is that most default human ways of moving probability mass around in hypothesis space (e.g. clever arguments) violate the laws of probability, whereas empirical tests aren't nearly as prone to that.

So, if you mean to suggest that figuring out which hypothesis is worthy of testing does not involve altering our subjective likelihood that said hypothesis will turn out to be true, then I quite strongly disagree.

But if you mean that clever arguments can't change what's true even by a little bit, then of course I agree with you.

Perhaps you're using a Frequentist definition of "likelihood" whereas I'm using a Bayesian one?

Comment author: Valentine 23 May 2015 04:48:45PM *  7 points [-]

Thank you for this.

I see you as highlighting a virtue that the current Art gestures toward but doesn't yet embody. And I agree with you, a mature version of the Art definitely would.

In his Lectures on Physics, Feynman provides a clever argument to show that when the only energy being considered in a system is gravitational potential energy, then the energy is conserved. At the end of that, he adds the following:

It is a very beautiful line of reasoning. The only problem is that perhaps it is not true. (After all, nature does not have to go along with our reasoning.) For example, perhaps perpetual motion is, in fact, possible. Some of the assumptions may be wrong, or we may have made a mistake in reasoning, so it is always necessary to check. It turns out experimentally, in fact, to be true.

This is such a lovely mental movement. Feynman deeply cared about knowing how the world really actually works, and it looks like this led him to a mental reflex where even in cases of enormous cultural confidence he still responds to clever arguments by asking "What does nature have to say?"

In my opinion, people in this community update too much on clever arguments. I include myself in that. I disagree with your claim that people shouldn't update at all on clever arguments, but I very much agree that there would be much more strength in the Art if it were to emphasize an active hunger for asking nature its opinion.

I think there's a flavor of mistake that comes from overemphasizing the direction I see you pointing at the expense of other virtues. I've known quite a number of scientists who think the way I see you suggesting, and who feel like they can't have any opinions or thoughts about things they haven't seen empirical tests of. I think they're in part trying to protect themselves against what Eliezer calls "privileging the hypothesis", but they also make themselves unnecessarily stupid in some ways. The most common and blatant example I recall is their getting routinely blindsided by predictable social expectations and drama.

But I think Feynman gets it right.

And I think we ought to, too.

So again, thank you for bringing this up. It clarified something that had been nagging me, and now I think I see how to fix it.
