Comment author: fractalcat 09 April 2015 03:24:05PM *  1 point [-]

First off, I should note that I'm still not really sure what 'Bayesianism' means; I'm interpreting it here as "understanding of conditional probabilities as applied to decision-making".

No human can apply Bayesian reasoning exactly, quantitatively and unaided in everyday life. Learning how to approximate it well enough to tell a computer how to use it for you is a (moderately large) research area. From what you've described, I think you have a decent working qualitative understanding of what it implies for everyday decision-making, and if everyday decision-making is your goal I suspect you might be better served reading up on common cognitive biases (I heartily recommend /Judgment Under Uncertainty: Heuristics and Biases/, eds Kahneman, Slovic and Tversky, as a starting point). Learning probability theory in depth is certainly worthwhile, but in terms of practical benefit outside the field I suspect most people would be better off reading some cognitive science, some introductory stats and, most particularly, some experimental design.

Wrt your goals, learning probability theory might make you a better programmer (it depends on what your interests are and where you are on the skill ladder), but it's almost certainly not the most important thing (if you'd like more specific advice on this topic, let me know and I'd be happy to elaborate). I have examples similar to dhoe's, but the important parts of the troubleshooting process for me are "avoid the base rate fallacy" and "construct falsifiable hypotheses and test them before jumping to conclusions", not any explicit probability calculation.
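To make the base-rate point concrete, here's a toy Bayes'-theorem calculation applied to debugging. The scenario and all the numbers are my own illustration, not anything from the discussion above: a failure that "looks like" a compiler bug usually still isn't one, because compiler bugs are so much rarer than bugs in your own code.

```python
# Base rate fallacy in troubleshooting: even when the symptom strongly
# suggests an exotic cause (a compiler bug), a low prior usually keeps
# the posterior low. All probabilities below are made up for illustration.

def posterior(prior, likelihood, false_positive_rate):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

prior = 0.001               # P(compiler bug): very low base rate
likelihood = 0.9            # P(symptom | compiler bug)
false_positive_rate = 0.05  # P(symptom | bug in my own code)

p = posterior(prior, likelihood, false_positive_rate)
print(round(p, 3))  # still under 2%, despite the suggestive symptom
```

Even a symptom eighteen times more likely under the exotic hypothesis leaves it at under a 2% posterior; the qualitative habit of asking "how common is this cause at all?" does most of the work, no explicit calculation required.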

Comment author: fractalcat 11 January 2015 07:55:41AM 5 points [-]

Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen.

I had a curious skim through this guy's blog and soon happened upon this interview with Joe Mercola. I get that people sometimes do questionable things for publicity that they wouldn't otherwise do, but this is pretty far out there. For those unfamiliar with the good Dr Mercola: he's second only to Mehmet Oz in damage done to public understanding of the scientific basis of medicine. That's no flaw in McGuff's own work, but I'm a little dubious of a physician who's willing to associate with antivaxxers in a public professional context, from an ethical standpoint if nothing else. He may have good reasons for this, but it does trigger my quack heuristic.

Comment author: fractalcat 15 December 2014 09:48:05PM 3 points [-]

Imagine mapping my brain into two interpenetrating networks. For each brain cell, half of it goes to one map and half to the other. For each connection between cells, half of each connection goes to one map and half to the other.

What would happen in this case is that there would be no Manfreds, because (even assuming the physical integrity of the neuron-halves was preserved) you can't activate a voltage-gated ion channel with half the potential you had before. You can't reason about the implications of the physical reality of brains while ignoring the physical reality of brains.

Or are you asserting no physical changes to the system, and just defining each neuron to be multiple entities? For the same reason I think the p-zombies argument is incoherent, I'm quite comfortable not assigning any moral weight to epiphenomenal 'people'.

In response to ...
Comment author: fractalcat 15 November 2014 08:12:24PM 1 point [-]

Can someone post a ROT13ed link? I'm curious.

Comment author: fractalcat 14 April 2014 10:50:01AM 0 points [-]

I'm not totally sure of your argument here; would you be able to clarify why satisficing is superior to straight maximization given your hypothetical[0]?

Specifically, you argue correctly that human judgement is informed by numerous hidden variables over which we have no awareness, and thus a maximization process executed by us has the potential for error. You also argue that 'eutopian'/'good enough' worlds are likely to be more common than sirens. Given that, how is a judgement with error induced by hidden variables any worse than a judgement made using deliberate randomization (or selecting the first 'good enough' world, assuming no unstated special properties of our worldspace-traversal)? Satisficing might be more computationally efficient, but that doesn't seem to be the argument you're making.
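To sketch my question in code: here's a hypothetical toy model (my own construction, not anything from your post) where our judgement of each world is its true value plus unbiased noise from hidden variables. Under that assumption alone, picking the world with the best noisy estimate doesn't obviously do worse than taking the first world whose estimate clears a threshold.

```python
# Toy comparison of noisy maximization vs satisficing.
# Assumed model: judged value = true value + Gaussian noise.
# Worlds, thresholds and noise levels are all illustrative.
import random

random.seed(0)

def sample_worlds(n, noise_sd):
    """Each world is (true value, judgement error)."""
    return [(random.gauss(0, 1), random.gauss(0, noise_sd)) for _ in range(n)]

def maximizer_true_value(worlds):
    # Choose the world whose *estimated* value is highest; return its true value.
    true, _ = max(worlds, key=lambda w: w[0] + w[1])
    return true

def satisficer_true_value(worlds, threshold=1.0):
    # Choose the first world whose estimate clears the threshold.
    for true, err in worlds:
        if true + err >= threshold:
            return true
    return worlds[-1][0]  # fall back to the last candidate

trials, noise_sd = 2000, 2.0
max_avg = sum(maximizer_true_value(sample_worlds(50, noise_sd))
              for _ in range(trials)) / trials
sat_avg = sum(satisficer_true_value(sample_worlds(50, noise_sd))
              for _ in range(trials)) / trials
print(round(max_avg, 2), round(sat_avg, 2))
```

In this model the maximizer's chosen world tends to have the higher true value on average, even though heavy noise inflates its own estimate of that world. So it seems the case for satisficing has to rest on some asymmetric structure (e.g. sirens concentrated in the extreme tail), not on hidden-variable error alone, which is what I'm asking you to spell out.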

[0] The ex-nihilo siren worlds rather than the designed ones; an evil AI presumably has knowledge of our decision process and can create perfectly-misaligned worlds.

Comment author: fractalcat 31 January 2014 10:13:06AM 1 point [-]

Yes, indeed, we don't always have conscious control over the same set of things over which we intuitively believe we have conscious control. That's the foundation of (among other things) the difference between System 1 and System 2 in the biases literature. It's also (as Kaj_Sotala noted) one reason habit is such a powerful influence on human behaviour, and the reason things like drug addiction exist. But how could it be any other way? Brains aren't made of magical-consciousness-stuff, they're physical, modular, evolved entities in a species descended from lizards.

I'd be interested in hearing more about the methods you've found effective for noticing the semiconscious decisions that you're making and how you've evaluated their effectiveness.

In response to comment by RobbBB on Why Eat Less Meat?
Comment author: Juno_Watt 24 July 2013 06:40:18PM 0 points [-]

I wouldn't hasten to describe them as confused. How about the modest proposal of growing acephalus humans for consumption? Is that too far down the slope?

Comment author: fractalcat 29 July 2013 12:22:27PM 1 point [-]

Nitpick: 'anencephalic'. 'cephalon' is head, 'encephalon' is brain.