
Open Thread June 2010, Part 4

5 Post author: Will_Newsome 19 June 2010 04:34AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

This thread brought to you by quantum immortality.


Comment author: knb 20 June 2010 04:41:46AM *  10 points [-]

Some random thoughts about thinking, based mostly on my own experience.

I've been playing minesweeper lately (and I've never played before). For the uninitiated, minesweeper is a game that involves using deductive reasoning (and rarely, guessing) to locate the "mines" in a grid of identical boxes. For such an abstract puzzle, it really does a good job of working the nerves, since one bad click can spoil several minutes' effort.

I was surprised to find that even when I could be logically certain about the state of a box, I felt afraid that I was incorrect (before I clicked), and (mildly) amazed when I turned out to be correct. It felt like some kind of low level psychic power or something. So it seems that our brains don't exactly "trust" deductive reasoning. Maybe because problems in the ancestral environment didn't have clean, logical solutions?

I also find that when I'm stymied by a puzzle, if I turn my attention to something else for a while, when I come back, I can easily find some way forward. The effect is stunning, an unsolvable problem becomes trivial five minutes later. I'm pretty sure there is a name for this phenomenon, but I don't know what it is. In any case, it's jarring.

Another random thought. When I'm sad about something in my life, I usually can make myself feel much better by simply saying, in a sentence, why I'm sad. I don't know why this works, but it seems to make the emotion abstract, as though it happened to somebody else.

Comment author: Alicorn 20 June 2010 05:17:38AM *  6 points [-]

When I'm sad about something in my life, I usually can make myself feel much better by simply saying, in a sentence, why I'm sad.

Explicitly acknowledging emotions as things with causes is a huge chunk of managing them deliberately. (I have a post in the works on this, but I'm not sure when I'll pull it together.)

Comment author: CronoDAS 21 June 2010 09:10:55PM 2 points [-]

Another random thought. When I'm sad about something in my life, I usually can make myself feel much better by simply saying, in a sentence, why I'm sad. I don't know why this works, but it seems to make the emotion abstract, as though it happened to somebody else.

I don't think that works for me. I often can't identify a specific cause of my sad feeling, and when I can, thinking about it often makes me feel worse rather than better.

Comment author: SilasBarta 21 June 2010 09:25:50PM *  3 points [-]

Same here. I've also found that often there isn't any cause in the sense of something specific upsetting me; it's just an automatic reaction to not getting enough social interaction.

Comment author: knb 25 June 2010 12:27:21AM 2 points [-]

Well, I don't mean ruminating about the cause of the sad feeling; that is probably one of the worst things you can do. Rather, I mean just identifying it.

For example, when a girlfriend and I broke up (this was a couple years ago) I spent maybe two days feeling really depressed. Eventually, I thought to myself, "You're sad because you broke up with your girlfriend."

That really put it in perspective for me. It made me think of all the cheesy teen movies where kids break up with their sweethearts and act like it's the end of the world, when the viewer sees it as a normal, even banal rite of passage to adulthood. I had always thought people who reacted like that were ridiculous. In other words, it feels like that thought put the issue in "far mode" for me.

Comment author: Kaj_Sotala 20 June 2010 03:27:52AM 10 points [-]

A visual study guide to 105 types of cognitive biases

"The Royal Society of Account Planning created this visual study guide to cognitive biases (defined as "psychological tendencies that cause the human brain to draw incorrect conclusions). It includes descriptions of 19 social biases, 8 memory biases, 42 decision-making biases, and 36 probability / belief biases."

Comment author: steven0461 29 June 2010 10:34:00PM 7 points [-]

http://arxiv.org/abs/1006.3868

Philosophy and the practice of Bayesian statistics

Andrew Gelman, Cosma Rohilla Shalizi (submitted 19 Jun 2010)

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.

Comment author: cousin_it 30 June 2010 07:36:04PM *  2 points [-]

I guess everyone here already understands this stuff, but I'll still try to summarize why "model checking" is an argument against "naive Bayesians" like Eliezer's OB persona. Shalizi has written about this at length on his blog and elsewhere, as has Gelman, but maybe I can make the argument a little clearer for novices.

Imagine you have a prior, then some data comes in, you update and obtain a posterior that overwhelmingly supports one hypothesis. The Bayesian is supposed to say "done" at this point. But we're actually not done. We have only "used all the information available in the sample" in the Bayesian sense, but not in the colloquial sense!

See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?
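
A minimal sketch of this check, in Python (the three-point hypothesis set, the true bias of 0.7, and the sample size are all invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # A deliberately bad prior: three candidate coin biases, none of which
    # is the true one.
    biases = np.array([0.1, 0.5, 0.9])
    prior = np.array([1/3, 1/3, 1/3])

    # The data actually come from a coin with bias 0.7.
    flips = rng.random(1000) < 0.7
    heads = int(flips.sum())
    tails = flips.size - heads

    # Bayesian updating (in log space to avoid underflow): the posterior
    # overwhelmingly favors 0.5, and with this hypothesis set the Bayesian
    # is "done" at this point.
    loglik = heads * np.log(biases) + tails * np.log(1 - biases)
    posterior = prior * np.exp(loglik - loglik.max())
    posterior /= posterior.sum()
    winner = biases[posterior.argmax()]  # 0.5

    # The model check: the winning hypothesis vs. the raw data.
    # 0.5 against an observed frequency near 0.7 -- wildly different, so
    # the prior (the hypothesis set itself) was wrong.
    print(winner, heads / flips.size)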

Comment author: steven0461 30 June 2010 08:45:25PM 3 points [-]

But my sense is that the "substantial school in the philosophy of science [that] identifies Bayesian inference with inductive inference and even rationality as such", as well as Eliezer's OB persona, is talking more about a prior implicit in informal human reasoning than about anything that's written down on paper. You can then see model checking as roughly comparing the parts of your prior that you wrote down to all the parts that you didn't write down. Is that wrong?

Comment author: cousin_it 01 July 2010 11:11:58AM 1 point [-]

I don't think informal human reasoning corresponds to Bayesian inference with any prior. Maybe you mean "what informal human reasoning should be". In that case I'd like a formal description of what it should be (ahem).

Comment author: Cyan 02 July 2010 03:10:47PM 1 point [-]

I'd like a formal description of what it should be

Solomonoff induction, mebbe?

Comment author: cousin_it 02 July 2010 04:15:05PM 1 point [-]

Wei Dai thought up a counterexample to that :-)

Comment author: steven0461 02 July 2010 07:41:40PM *  1 point [-]

Gelman/Shalizi don't seem to be arguing from the possibility that physics is noncomputable; they seem to think their argument (against Bayes as induction) works even under ordinary circumstances.

Comment author: magfrump 05 July 2010 06:07:47PM 0 points [-]

It seems to me that Wei Dai's argument is flawed (and I may be overly arrogant in saying this; I haven't even had breakfast this morning.)

He says that the probability of knowing an uncomputable problem would originally be evaluated at 0, but I don't fundamentally see why "measure zero hypothesis" is equivalent to "impossible." For example, the hypothesis "they're making it up as they go along" has probability 2^(-S) based on the size of the set, and shrinks at a certain rate as evidence arrives. That means that given any finite amount of inference, the AI should be able to distinguish between two possibilities: they are very good at computing or guessing, versus all humans have been wrong about mathematics forever. Unless new evidence comes in to support "they are making it up," the hypothesis "humans have been wrong forever" should keep a consistent probability mass which will grow in comparison to the other.

Nobody seems to propose this (although I may have missed it skimming some of the replies) and it seems like a relatively simple thing (to me) to adjust the AI's prior distribution to give "impossible" things low but nonzero probability.

Comment author: cousin_it 05 July 2010 06:32:40PM *  0 points [-]

Wei Dai's argument was specifically against the Solomonoff prior, which assigns probability 0 to the existence of halting problem oracles. If you have an idea how to formulate another universal prior that would give such "impossible" things positive probability, but still sum to 1.0 over all hypotheses, then by all means let's hear it.

Comment author: magfrump 06 July 2010 06:15:16AM 0 points [-]

Yeah, well, it is certainly a good argument against that. The title of the thread, though, is "is induction unformalizable?", a point I'm unconvinced of.

If I were to formalize some kind of prior, I would probably use a lot of epsilons (since zero is not a probability), including an epsilon for "things I haven't thought up yet." On the other hand, I'm not really an expert on any of these things, so I imagine Wei Dai would be able to poke holes in anything I came up with anyway.

Comment author: cousin_it 06 July 2010 08:52:14AM 1 point [-]

There's no general way to have a "none of the above" hypothesis as part of your prior, because it doesn't make any specific prediction and thus you can't update its likelihood as data comes in. See the discussion with Cyan and others about NOTA somewhere around here.

Comment author: Matt_Simpson 02 July 2010 08:25:35AM *  2 points [-]

See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?

Model checking is completely compatible with "perfect Bayesianism." In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It's a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.

However, a difference between the data and a simulation from your model doesn't necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what's going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.

Comment author: cousin_it 26 April 2011 03:52:30PM *  0 points [-]

With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.

You shouldn't need real-world data to determine if your model of your own prior was reasonable or not. Something else is going on here. Model checking uses the data to figure out if your prior was reasonable, which is a reasonable but non-Bayesian idea.

Comment author: Matt_Simpson 26 April 2011 07:05:15PM 0 points [-]

Well, if you're just checking your prior, then I suppose you don't need real data at all. Make up some numbers and see what happens. What you're really checking (if you're being a Bayesian about it, i.e. not like Gelman and company) is not whether your data could come from a model with that prior, but rather whether the properties of the prior you chose seem to match up with the prior you're modeling. For example, maybe the prior you chose forces two parameters, a and b, to be independent no matter what the data say. In reality, though, you think it's perfectly reasonable for there to be some association between those two parameters. If you don't already know that your prior is deficient in this way, posterior predictive checking can pick it up.

In reality, you're usually checking both your prior and the other parts of your model at the same time, so you might as well use your data, but I could see using different fake data sets in order to check your prior in different ways.
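
A minimal sketch of such a fake-data check, in Python (the independence-forcing prior and the correlated alternative are invented stand-ins, not anything from the literature):

    import numpy as np

    rng = np.random.default_rng(1)

    # A candidate prior that (silently) forces parameters a and b to be
    # independent: no draw from it can ever show them co-varying.
    a, b = rng.normal(0, 1, (2, 10000))
    print(np.corrcoef(a, b)[0, 1])  # ~0 by construction, every time

    # If you actually believe a and b may be associated, you need a prior
    # that can express that, e.g. a bivariate normal with free correlation.
    cov = [[1.0, 0.5], [0.5, 1.0]]
    a2, b2 = rng.multivariate_normal([0.0, 0.0], cov, 10000).T
    print(np.corrcoef(a2, b2)[0, 1])  # ~0.5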

Comment author: saturn 30 June 2010 09:35:33PM 2 points [-]

This sounds like a confusion between a theoretical perfect Bayesian and practical approximations. The perfect Bayesian wouldn't have any use for model checking because from the start it always considers every hypothesis it is capable of formulating, whereas the prior used by a human scientist won't ever even come close to encoding all of their knowledge.

(A more "Bayesian" alternative to model checking is to have an explicit "none of the above" hypothesis as part of your prior.)

Comment author: CarlShulman 01 July 2010 11:49:20PM 1 point [-]

NOTA is addressed in the paper as inadequate. What does it predict?

Comment author: cousin_it 01 July 2010 10:36:24AM *  1 point [-]

(A more "Bayesian" alternative to model checking is to have an explicit "none of the above" hypothesis as part of your prior.)

I don't see how that's possible. How do you compute the likelihood of the NOTA hypothesis given the data?

Comment author: Cyan 02 July 2010 03:04:36PM *  2 points [-]

NOTA is not well-specified in the general case, but in at least one specific case it's been done. Jaynes's student Larry Bretthorst made a useable NOTA hypothesis in a simplified version of a radar target identification problem (link to a pdf of the doc).

(Somewhat bizarrely, the same sort of approach could probably be made to work in certain problems in proteomics in which the data-generating process shares the key features of the data-generating process in Bretthorst's simplified problem.)

Comment author: cousin_it 02 July 2010 04:30:49PM *  0 points [-]

If I'm not mistaken, such problems would contain some enumerated hypotheses - point peaks in a well-defined parameter space - and the NOTA hypothesis would be a uniformly thin layer over the rest of that space. Can't tell what key features the data-generating process must have, though. Or am I failing reading comprehension again?

Comment author: Cyan 02 July 2010 08:24:57PM *  0 points [-]

If I'm not mistaken, such problems would contain some enumerated hypotheses - point peaks in a well-defined parameter space - and the NOTA hypothesis would be a uniformly thin layer over the rest of that space

Yep.

Can't tell what key features the data-generating process must have, though.

I think the key features that make the NOTA hypothesis feasible are (i) all possible hypotheses generate signals of a known form (but with free parameters), and (ii) although the space of all possible hypotheses is too large to enumerate, we have a partial library of "interesting" hypotheses of particularly high prior probability for which the generated signals are known even more specifically than in the general case.
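
A minimal sketch of that construction, in Python, under invented assumptions (a one-dimensional parameter space, three library frequencies, Gaussian noise, and a uniform NOTA layer; none of the specifics are Bretthorst's):

    import numpy as np
    from scipy.stats import norm

    # One-dimensional parameter space: a signal frequency in [0, 10].
    # Partial library of "interesting" point hypotheses of high prior mass.
    library = np.array([2.0, 5.0, 8.0])
    p_each = 0.3   # prior mass per library hypothesis (0.9 total)
    p_nota = 0.1   # mass spread uniformly over the rest of the space

    def likelihood(freq, observed, noise=0.5):
        # All hypotheses generate signals of the same known form: the
        # observed frequency is the true one plus Gaussian noise.
        return norm.pdf(observed, loc=freq, scale=noise)

    observed = 6.5  # lands near none of the library frequencies

    post_lib = p_each * likelihood(library, observed)

    # NOTA: average the likelihood over the whole space (a grid stands in
    # for the integral), weighted by the uniform layer.
    grid = np.linspace(0.0, 10.0, 1001)
    post_nota = p_nota * likelihood(grid, observed).mean()

    posterior = np.append(post_lib, post_nota)
    posterior /= posterior.sum()
    print(posterior)  # NOTA gets most of the mass: "none of the above"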

Comment author: SilasBarta 30 June 2010 10:13:45PM 3 points [-]

See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?

I thought that what I'm about to say is standard, but perhaps it isn't.

Bayesian inference, depending on how detailed you do it, does include such a check. You construct a Bayes network (as a directed acyclic graph) that connects beliefs with anticipated observations (or intermediate other beliefs), establishing marginal and conditional probabilities for the nodes. Since your expectations are jointly determined by the beliefs that lead up to them, getting a wrong answer will knock down the probabilities you assign to those beliefs.

Depending on the relative strengths of the connections, you know whether to reject your parameters, your model, or the validity of the observation. (Depending on how detailed the network is, one input belief might be "I'm hallucinating or insane", which may survive with the highest probability.) This determination is based on which of them, after taking this hit, has the lowest probability.
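
A toy version of that determination, in Python, with made-up priors and likelihoods (it also treats the three culprits as mutually exclusive, which a real network wouldn't need to):

    # A surprising observation can be blamed on the model, the parameters,
    # or the observation itself. All numbers below are invented.
    priors = {
        "params wrong": 0.20,
        "model wrong": 0.05,
        "observation invalid": 0.01,
        "all fine": 0.74,
    }
    # How likely this surprising data is under each culprit.
    likelihoods = {
        "params wrong": 0.3,
        "model wrong": 0.5,
        "observation invalid": 0.9,
        "all fine": 0.001,
    }

    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    posterior = {h: joint[h] / total for h in joint}
    print(posterior)  # here "params wrong" takes most of the hit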

Pearl also has written Bayesian algorithms for inferring conditional (in)dependencies from data, and therefore what kinds of models are capable of capturing a phenomenon. He furthermore has proposed causal networks, which have explicit causal and (oppositely) inferential directions. In that case, you don't turn a prior into a posterior: rather, the odds you assign to an event at a node are determined by the "incoming" causal "message", and, from the other direction, the incoming inferential message.

But neither "model checking" nor Bayesian methods will come up with hypotheses for you. Model checking can attenuate the odds you assign to wrong priors, but so can Bayesian updating. The catch is that, for reasons of computation, a Bayesian might not be able to list all the possible hypotheses and arbitrarily restrict the hypothesis space, and potentially be left with only bad ones. But Bayesians aren't alone in that either.

(Please tell me if this sounds too True Believerish.)

Comment author: WrongBot 30 June 2010 08:07:17PM 1 point [-]

Apologies if this has already been covered elsewhere, but isn't a prior just a belief? The prior is by definition whatever it was rational to believe before the acquisition of new evidence (assuming a perfect Bayesian, anyway). I'm not quite sure what you mean when you propose that a prior could be wrong; either all priors are statements of belief and therefore true, or all priors are statements of probability that must be less accurate than a posterior that incorporates more evidence.

I suspect that there are additional steps I'm not considering.

Comment author: cousin_it 01 July 2010 10:56:45AM *  2 points [-]

The prior is by definition whatever it was rational to believe before the acquisition of new evidence (assuming a perfect Bayesian, anyway).

Nope, this isn't part of the definition of the prior, and I don't see how it could be. The prior is whatever you actually believe before any evidence comes in.

If you have a procedure to determine which priors are "rational" before looking at the evidence, please share it with us. Some people here believe religiously in maxent, others swear by the universal prior, I personally rather like reference priors, but the Bayesian apparatus doesn't really give us a means of determining the "best" among those. I wrote about these topics here before. If you want the one-word summary, the area is a mess.

Comment author: WrongBot 01 July 2010 04:55:51PM 0 points [-]

Thanks for the links (and your post!), I now have a much clearer idea of the depths of my ignorance on this topic.

I want to believe that there is some optimal general prior, but it seems much more likely that we do not live in so convenient a world.

Comment author: thomblake 01 July 2010 05:04:31PM 0 points [-]

I want to believe that there is some optimal general prior, but it seems much more likely that we do not live in so convenient a world.

But if you can evaluate how good a prior is, then there has to be an optimal one (or several). You have to have something as your prior, and so whichever one is the best out of those you can choose is the one you should have. As for how certain you are that it's the best, it's (to some extent) turtles all the way down.

Comment author: WrongBot 01 July 2010 07:09:32PM 0 points [-]

Instead of using "optimal general prior", I should have said that I was pessimistic about the existence of a standard for evaluating priors (or, more properly, prior probability distributions) that is optimal in all circumstances, if that's any clearer.

Having thought about the problem some more, though, I think my pessimism may have been premature.

A prior probability distribution is nothing more than a weighted set of hypotheses. A perfect Bayesian would consider every possible hypothesis, which is impossible unless hypotheses are countable, and they aren't; the ideal for Bayesian reasoning as I understand it is thus unattainable, but this doesn't mean that there are no benefits to be found in moving toward that ideal.

So, perfect Bayesian or not, we have some set of hypotheses which need to be located before we can consider them and assign them a probabilistic weight. Before we acquire any rational evidence at all, there is necessarily only one factor that we can use to distinguish between hypotheses: how hard they are to locate. If it is also true that hypotheses which are easier to locate make more predictions and that hypotheses which make more predictions are more useful (and while I have not seen proofs of these propositions I'm inclined to suspect that they exist), then we are perfectly justified in assigning a probability to a hypothesis based on its locate-ability.

This reduces the problem of prior probability evaluation to the problem of locate-ability evaluation, to which it seems maxent and its fellows are proposed answers. It's again possible there is no objectively best way to evaluate locate-ability, but I don't yet see a reason for this to be so.

Again, if I've mis-thought or failed to justify a step in my reasoning, please call me on it.

Comment author: cousin_it 01 July 2010 08:13:37PM *  6 points [-]

If it is also true that hypotheses which are easier to locate make more predictions

This doesn't sound right to me. Imagine you're tossing a coin repeatedly. Hypothesis 1 says the coin is fair. Hypothesis 2 says the coin repeats the sequence HTTTHHTHTHTTTT over and over in a loop. The second hypothesis is harder to locate, but makes a stronger prediction.
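
A minimal sketch of the likelihood asymmetry, in Python, using the sequence from the example:

    # Likelihood of an observed flip sequence under each hypothesis.
    PATTERN = "HTTTHHTHTHTTTT"

    def lik_fair(seq):
        # A fair coin assigns every sequence the same probability.
        return 0.5 ** len(seq)

    def lik_loop(seq):
        # The looping hypothesis predicts one exact sequence: probability
        # 1 if the observation matches it, 0 otherwise.
        expected = (PATTERN * (len(seq) // len(PATTERN) + 1))[:len(seq)]
        return 1.0 if seq == expected else 0.0

    obs = PATTERN * 2  # 28 flips that happen to fit the loop
    print(lik_fair(obs))  # 2**-28, about 3.7e-9
    print(lik_loop(obs))  # 1.0 -- the harder-to-locate hypothesis
                          # makes the far stronger prediction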

The proper formalization for your concept of locate-ability is the Solomonoff prior. Unfortunately we can't do inference based on it because it's uncomputable.

Maxent and friends aren't motivated by a desire to formalize locate-ability. Maxent is the "most uniform" distribution on a space of hypotheses; the "Jeffreys rule" is a means of constructing priors that are invariant under reparameterizations of the space of hypotheses; "matching priors" give you frequentist coverage guarantees, and so on.

Please don't take my words for gospel just because I sound knowledgeable! At this point I recommend you to actually study the math and come to your own conclusions. Maybe contact user Cyan, he's a professional statistician who inspired me to learn this stuff. IMO, discussing Bayesianism as some kind of philosophical system without digging into the math is counterproductive, though people around here do that a lot.

Comment author: WrongBot 01 July 2010 08:46:24PM 0 points [-]

I'm in the process of digging into the math, so hopefully at some point soon I'll be able to back up my suspicions in a more rigorous way.

This doesn't sound right to me. Imagine you're tossing a coin repeatedly. Hypothesis 1 says the coin is fair. Hypothesis 2 says the coin repeats the sequence HTTTHHTHTHTTTT over and over in a loop. The second hypothesis is harder to locate, but makes a stronger prediction.

I was talking about the number of predictions, not their strength. So Hypothesis 1 predicts any sequence of coin-flips that converges on 50%, and Hypothesis 2 predicts only sequences that repeat HTTTHHTHTHTTTT. Hypothesis 1 explains many more possible worlds than Hypothesis 2, and so without evidence as to which world we inhabit, Hypothesis 1 is much more likely.

Since I've already conceded that being a Perfect Bayesian is impossible, I'm not surprised to hear that measuring locate-ability is likewise impossible (especially because the one reduces to the other). It just means that we should determine prior probabilities by approximating Solomonoff complexity as best we can.

Thanks for taking the time to comment, by the way.

Comment author: cousin_it 01 July 2010 08:53:23PM *  1 point [-]

Then let's try this. Hypothesis 1 says the sequence will consist of only H repeated forever. Hypothesis 2 says the sequence will be either HTTTHHTHTHTTTT repeated forever, or TTHTHTTTHTHHHHH repeated forever. The second one is harder to locate, but describes two possible worlds rather than one.

Maybe your idea can be fixed somehow, but I see no way yet. Keep digging.

Comment author: thomblake 01 July 2010 07:55:25PM 0 points [-]

It's again possible there is no objectively best way

I'm not sure I'm willing to grant that's impossible in principle. Presumably, you need to find some way of choosing your priors, and some time later you can check your calibration, and you can then evaluate the effectiveness of one method versus another.

If there's any way to determine whether you've won bets in a series, then it's possible to rank methods for choosing the correct bet. And that general principle can continue all the way down. And if there isn't any way of determining whether you've won, then I'd wonder if you're talking about anything at all (weird thought experiments aside).

Comment author: Blueberry 30 June 2010 08:14:01PM 0 points [-]

we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong.

That check should be part of updating your prior. If you updated and got a hypothesis that didn't fit the data, you didn't update very well. You need to take this into account when you're updating (and you also need to take into account the possibility of experimental error: there's a small chance the data are wrong).

Comment author: Morendil 30 June 2010 08:52:18PM 2 points [-]

Hopefully the Book Club will get around to covering that as part of Chapter 4.

I can't recall that it has anything to do with "updating your prior"; Jaynes just says that if you get nonsense posterior probabilities, you need to go back and include additional hypotheses in the set you're considering, and this changes the analysis.

See also the quote (I can't be bothered to find it now but I posted it a while ago to a quotes thread) where Jaynes says probability theory doesn't do the job of thinking up hypotheses for you.

Comment author: SilasBarta 28 June 2010 01:50:33PM *  7 points [-]

About the Rumsfeld quote mentioned in the most recent top-level post:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don’t know. But there are also unknown unknowns. These are things we do not know we don’t know.

Why is it that people mock Rumsfeld so incessantly for this? Whatever reason you might have not to like him, this is probably the most insightful thing any government official has said at a press conference. And yet he's ridiculed for it by the very same people that are emphasizing, or at least should be emphasizing, the importance of the insight.

Heck, some people even thought it was clever to format it into a poem.

What gives? Is this just a case of "no good deed goes unpunished"?

ETA: In your answer, be sure to say not just what's wrong with the quote or its context, but why people don't make that their criticism instead of just saying, ha ha, the quote sure is funny.

Comment author: simplicio 28 June 2010 04:37:59PM 3 points [-]

I agree that the quote is insightful and brilliant.

I think it was seen by certain (tribally liberal) people as somehow euphemistic or sophistic, as though he were trying to invent a whole new epistemology to justify war.

Politics is the mind-killer.

Comment author: cupholder 28 June 2010 02:08:44PM *  2 points [-]

Some ideas.

  • People didn't/don't like Rumsfeld.

  • In the quote's original context, Rumsfeld used it as the basis of a non-answer to a question:

In regard to Iraq weapons of mass destruction and terrorists, is there any evidence to indicate that Iraq has attempted to or is willing to supply terrorists with weapons of mass destruction? Because there are reports that there is no evidence of a direct link between Baghdad and some of these terrorist organizations.

[snip]

Q: Excuse me. But is this an unknown unknown?

Rumsfeld: I'm not --

Q: Because you said several unknowns, and I'm just wondering if this is an unknown unknown.

Rumsfeld: I'm not going to say which it is.

  • People think Rumsfeld's particular phrasing is funny, and people don't judge it as insightful enough to overcome the initial 'hee hee that sounds funny' reaction.

  • However insightful the quote is, Rumsfeld arguably failed to translate it into appropriate action (or appropriate non-action), which might have made it seem simply ironic or contrary rather than insightful.

(Edit to fix formatting.)

Comment author: SilasBarta 28 June 2010 02:15:18PM *  1 point [-]

People think Rumsfeld's particular phrasing is funny,

So what would be the non-funny way to say it? IMHO, Rumsfeld's phrasing is what you get if you just say it the most direct way possible.

This is what always bothers me: people who say, "hey, what you said was valid and all, but the way you said it was strange/stupid". Er, so what would be the non-strange/stupid way to say it? "Uh, implementation issue."

Rumsfeld used it as the basis of a non-answer to a question...

In the exchange, it looks like the reporter's followup question is nonsense. It only makes sense to ask if it's a known unknown, since you, er, never know the unknown unknowns. (Hee hee! I said something that sounds funny! Now you can mock me while also promoting what I said as insightful!)

See also the edit to my original comment.

Comment author: NancyLebovitz 28 June 2010 02:01:16PM 2 points [-]

It's possibly a matter of people being already disposed to dislike Rumsfeld, combined with a feeling that if he had so much understanding of ignorance, he shouldn't have been so pro-war.

Comment author: WrongBot 28 June 2010 05:06:11PM 1 point [-]

I agree that it's a brilliant idea, and that's why I cited him. He does the best job of describing that particular idea that I know of, and I'm amazed, as you are, that he said it at a press conference. I vehemently disagree with his politics, but that doesn't make him stupid or incapable of brilliance.

If the tone of my post came across as mocking, that was not at all my intention.

Comment author: RichardKennaway 28 June 2010 03:07:59PM *  1 point [-]

Heck, some people even thought it was clever to format it into a poem.

I am surely not the first to recognise the similarity to this poem.

ETA: no, I'm not.

Comment author: h-H 20 June 2010 11:41:51AM *  7 points [-]

genes, memes and parasites?

tl;dr: "People who suffer from schizophrenia are, in fact, three times more likely to carry T. gondii than those who do not."

"Over the last five years or so, evidence has been building that some human cultural shifts might be influenced, or even caused, by the spread of Toxoplasma gondii."

"In the United States, 12.3 percent of women tested carried the parasite, and in the United Kingdom only 6.6 percent were infected. But in some countries, statistics were much higher. 45 percent of those tested in France were infected, and in Yugoslavia 66.8 percent were infected!"

Comment author: wedrifid 20 June 2010 11:57:51AM 1 point [-]

"In the United States, 12.3 percent of women tested carried the parasite

Wow. How is this parasite spread? Could those 'girly germs' that I avoided in primary school actually reduce my chances of getting schizophrenia?

Comment author: h-H 20 June 2010 12:31:37PM 1 point [-]

wait, what's a girly germ? I googled it and it gave me a link about a Micronesian island :/

Comment author: wedrifid 20 June 2010 12:38:21PM 2 points [-]

Do young kids where you are tease each other about the other sex? 'Cooties?' Whatever they call it.

My question is how the parasite is spread. What does that 12.3% mean for the rest of the population? Why did they only test women?

Comment author: Morendil 20 June 2010 05:03:07PM 3 points [-]

Why did they only test women?

It's a major pregnancy risk.

Comment author: Kevin 21 June 2010 05:36:15PM 5 points [-]

Part one of a five part series on the Dunning-Kruger effect, by Errol Morris.

http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/

Also note that Oscar winning director Morris's next project is a dark comedy that is a fictionalized version of the founding of Alcor!

Comment author: arundelo 21 June 2010 10:32:45PM 1 point [-]

Ooh, it's nice to see more details on the lemon juice bank robber. When I first heard about him I thought he was probably schizophrenic. Maybe he was, but the details make it sound like he may indeed have been just really stupid.

Comment author: gwern 21 June 2010 06:26:04PM 1 point [-]

Also note that Oscar winning director Morris's next project is a dark comedy that is a fictionalized version of the founding of Alcor!

Isn't that a bad thing? I suspect a major source will be that recent book...

Comment author: NancyLebovitz 21 June 2010 01:10:07PM 5 points [-]

On not being able to cut reality at the joints because you don't even know what a joint is: diagnosing schizophrenia

If you gave Aristotle ten thousand unplugged computers of different makes and models, no matter how systematically he analyzed them he'd not only be wrong, he'd be misleadingly wrong. He would find that they were related by shape-- rectangles/squares; by color-- black, white, or tan. Size/weight; material.

Aristotle was smart, but there is nothing he could ever learn about computers from his investigations. His science is all wrong for what he was doing. But Aristotle would think he knew a terrible amount about computers from his studies. In fact, he'd probably be considered an expert. "To fix this computer, we need to make it more rectangular. Get chopping, malaka."

Comment author: Kaj_Sotala 19 June 2010 09:38:58PM *  5 points [-]

Deus Ex: Human Revolution

IGN Preview

It has been a while since I needed to buy a new computer to play a game.

In addition to being a sequel to Deus Ex and looking generally bad-ass, transhumanism is explicitly mentioned. From the FAQ:

Essentially, DX: HR explores the beginnings of human augmentation and the transhumanism movement is a major influence in the game. There are people who think it's "playing God" to modify the body whatsoever and there are people (Transhumanists) who think it's the natural evolution of the human species to utilise technology. You're caught in the middle of this storm and must decide which path you take. The visual stigma augmented people bear adds fuel to the huge societal rift between them and natural humans that's at the centre of Deus Ex: HR's vision of the future.

Comment author: Kevin 19 June 2010 06:06:09AM *  5 points [-]

Strange occurrence in US South Carolina Democratic primary.

The only explanation, Mr. Rawl’s representatives told the committee, was faulty voting machines — not chance, name order on the ballot, or Republicans crossing over to vote for the weaker Democrat. With testimony dominated by talk of standard variances, preference theories and voting machine software, the hearing took on the spirit of a political science seminar.

The Washington Post profiled Alvin Greene last week

10 minute video interview with Greene

What happened here?

Wikipedia has a list of possible explanations.

Fivethirtyeight lists possible explanations and analysis.

Rawl and co presented five hours of testimony that the results could only be attributed to a problem with the voting machines.

What is your probability estimate for Alvin Greene's win in this election being legitimate (Greene getting lucky as a result of aggregate voter intent+indifference+confusion, as opposed to voting machine malfunction or some sort of active conspiracy)? What evidence do you need in order to update your estimate?

Comment author: Morendil 19 June 2010 01:30:00PM 5 points [-]

My most likely explanations would be 1) software bug(s) 2) voter whim or confusion 3) odd hypothesis no one has thought of yet. Active intent to steal the nomination a distant fourth. Make it 60/30 among the first two.

Evidence? Well, anything credible, but how likely is that. :)

Comment author: SilasBarta 19 June 2010 01:23:30PM *  8 points [-]

Not ready to answer the rationalist questions, but why is it that, as soon as elections don't go toward someone who played the standard political game, suddenly, "it must be a mistake somehow"? You guys set the terms of the primaries, you pick the voting machines. If you're not ready to trust them before the election, the time to contest them was back then, not when you don't like the result.

Where was Rawl on the important issue of voting machine reliability when they did "what they're supposed to"?

I understand that elections are evidence, and given the prior on Greene, this particular election may be insufficient to justify a posterior that Greene has the most "support", however defined. But elections also serve as a bright line to settle an issue. We could argue forever about who "really" has the most votes, but eventually we have to say who won, and elections are just as much about finality on that issue as they are as an evidential test of fact.

To an extent, then, it doesn't matter that Greene didn't "really" get the most votes. If you allow every election to be indefinitely contested until you're convinced there's no reason the loser really should have won, elections never settle anything. The price for indifference to voting procedure reliability (in this case, the machines) should be acceptance of a bad outcome for that time, to be corrected for the next election, or through the recall process.

Frankly, if Greene had lost but could present evidence of the strength Rawl presented, we wouldn't even be having this conversation.

ETA: Oh, and you gotta love this:

On election night, I was among the first reporters to speak with Greene after his victory was announced. His verbal tics and strange affect were immediately apparent: he frequently repeats and interrupts himself, speaks haltingly, and sometimes descends into incoherent rambling, as subsequent video and audio interviews have made all the more obvious.

Damn those candidates with autism symptoms! Only manipulative people like us deserve to win elections!

Comment author: jimrandomh 19 June 2010 01:27:57PM *  6 points [-]

I should point out that most of the people who ought to know about the issue, have been screaming bloody murder about electronic voting machines for some time now. Politicians and the general public just haven't been listening. This issue is surfacing now, not because it wasn't an issue before, but because having a specific election to point to makes it easier to get people to listen. It also helps that the election wasn't an important one (it was a Democratic primary for a safe Republican seat), and the candidates involved don't have the resources to influence the discussion like they normally would.

Comment author: JoshuaZ 19 June 2010 04:35:59PM 6 points [-]

This doesn't sound like autism to me. It sounds more like a neurotypical individual who is dealing with a very unexpected and stressful set of events and having to talk about them.

Comment author: SilasBarta 19 June 2010 11:35:33PM *  3 points [-]

Be that as it may, those are typical characteristics of high-functioning autistics, and I'm more than a little bothered that they view this as justification for reversing his victory.

Take the part I bolded and remove the "incoherent rambling" bit, and you could be describing me! Well, at least my normal mode of speech without deliberate self-adjustment.

And my lack of incoherent rambling is a judgment call ;-)

Comment author: wedrifid 19 June 2010 04:42:57PM 3 points [-]

Damn those candidates with autism symptoms!

Well... knowing that someone is autistic is some inferential evidence in favor of them being a good hacker.

Comment author: Blueberry 19 June 2010 04:23:18PM *  1 point [-]

But elections also serve as a bright line to settle an issue. We could argue forever about who "really" has the most votes, but eventually we have to say who won, and elections are just as much about finality on that issue as they are as an evidential test of fact.

To an extent, then, it doesn't matter that Greene didn't "really" get the most votes. If you allow every election to be indefinitely contested until you're convinced there's no reason the loser really should have won, elections never settle anything.

Yes. Exactly. This is true for lawsuits as well: getting a final answer is more important than getting the "right" answer, which is why finality is an important judicial value that courts balance.

Comment author: prase 19 June 2010 04:41:00PM *  2 points [-]

I don't know the details of the American voting system, but (or maybe therefore) I am surprised at how low an estimate everyone gives to the possibility that the result is genuine. My estimate (without much research; I've just read the links) is

  • 0.5: voters actually voted for Greene
  • 0.3: error of some kind
  • 0.2: conspiracy

In order to update, any evidence is accepted, of course. What I would most like to see: results of some statistical survey, conducted either before or, better, after the election; historical data concerning the performance of black candidates; historical data from elections with a big difference in the intensity of the campaigns of the competing candidates; a lot of independent testimonies of trustworthy voters reporting non-standard behaviour of the voting machines; and a description of how the results can be altered (and what is normally done to avoid that).

Comment author: JoshuaZ 19 June 2010 01:49:46PM *  2 points [-]

I put a very high probability that some form of tampering occurred, primarily due to the failure of the data to obey a generalized Benford's law. Although a large amount of noise has been made about the fact that some counties had more votes cast in the Republican governor's race than reported turnout, I don't see that as strong evidence of fraud, since turnout levels in local elections are often based on the counting ability of the election volunteers, who often aren't very competent.
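
For the mechanics, here is a sketch of a plain first-digit Benford check in Python; the precinct vote counts are invented, not the actual South Carolina returns, and the generalized form used in the FiveThirtyEight analysis is more involved:

    import numpy as np
    from scipy.stats import chisquare

    # First-digit Benford's law: P(d) = log10(1 + 1/d) for d = 1..9.
    digits = np.arange(1, 10)
    benford = np.log10(1 + 1 / digits)

    # Hypothetical precinct-level vote counts (invented numbers).
    votes = np.array([312, 1450, 87, 923, 664, 1201, 455, 78, 390, 2210,
                      118, 530, 940, 1760, 26, 310, 845, 67, 1320, 710])
    first = np.array([int(str(v)[0]) for v in votes])
    observed = np.array([(first == d).sum() for d in digits])
    expected = benford * len(votes)

    # Chi-square goodness of fit; note that with only 20 precincts the
    # test has very little power.
    stat, p = chisquare(observed, expected)
    print(stat, p)  # a very small p would flag deviation from Benford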

I'd give probability estimates very similar to Jim's, but with a slightly higher percentage for people actually voting for him. I'd do that, I think, by moving most of the probability mass away from the idea of someone tampering with the election to expose the insecure voting machines, which implies a very strange set of ethical thought processes. I've also had enough experience in local elections to know that sometimes very weird things happen for reasons that no one can explain (and that this occurs even with systems that are difficult to tamper with). So using the primary breakdown given by Jim I'd put it as follows:

  • Voters actually voted for him: 0.25
  • Someone tampered with the voting machines or memory cards to make Alvin Greene win: 0.25
  • ...and that person did it because they wanted Alvin Greene to win: 0.1
  • ...and that person did it for kicks: 0.1
  • ...and that person did it because they wanted to expose the insecure voting machines: 0.05
  • Someone meant to tamper with a different election on the same ballot, but accidentally altered the democratic primary additionally or instead: 0.1
  • The votes were altered by leftover malware from a previous election which was also hacked: 0.2
  • There was a legitimate error in setting up or managing the voting machines that altered the vote totals: 0.2

Edit: Thinking this through, another possibility that should be listed is deliberate Republican cross-over (since it is an open primary), but given the evidence that seems of negligible probability at this point (< .01).

Comment author: wedrifid 19 June 2010 11:18:18AM 2 points [-]

Probability that this person would have a worse influence on the senate than a more standard politician: 5%.

Comment author: jimrandomh 19 June 2010 01:08:13PM *  3 points [-]

Here is my probability distribution:

  • Voters actually voted for him: 0.1
  • Someone tampered with the voting machines or memory cards to make Alvin Greene win: 0.4
  • ...and that person did it because they wanted Alvin Greene to win: 0.1
  • ...and that person did it for kicks: 0.1
  • ...and that person did it because they wanted to expose the insecure voting machines: 0.2
  • Someone meant to tamper with a different election on the same ballot, but accidentally altered the democratic primary additionally or instead: 0.1
  • The votes were altered by leftover malware from a previous election which was also hacked: 0.2
  • There was a legitimate error in setting up or managing the voting machines that altered the vote totals: 0.2

Note that I started researching this topic with an atypically high prior probability for voting machine fraud, and believe that it is very likely that major US elections in the past were altered this way. The strongest direct evidence I see for fraud having occurred is that there were "three counties with more votes cast in Republican governor's race than reported turnout in the Republican primary" FiveThirtyEight. Note that this means botched vote fraud, not correctly-implemented vote fraud, since correctly implemented vote fraud, using a strategy such as the Hursti hack, would have changed the votes but not the turnout numbers.

The Benford's Law analysis on FiveThirtyEight, on the other hand, I find very unconvincing - first because it has a low p-value, and second because it doesn't represent the way voting machine fraud really works; it can only detect if someone makes up vote totals from scratch, rather than adding to or subtracting from real vote totals.

Comment author: Liron 19 June 2010 07:17:09PM 1 point [-]

I think voters were clueless about both candidates, but they like to fill in all the boxes on the ballot, so they chose the name that has the higher positive affect by far: "Alvin Greene".

To me that would be sufficient to explain the entire anomaly, if not for the mysterious origin of Greene's $10,000 filing fee.

Comment author: Kevin 20 June 2010 01:05:34AM 3 points [-]

Also the possible "Al Green" effect -- voters may have thought they were voting for the famous soul singer.

Comment author: wedrifid 19 June 2010 04:46:07PM 1 point [-]

What evidence do you need in order to update your estimate?

The next election being won by a ficus would boost my estimate. Or, you know, something else ridiculous like an action hero actor.

Comment author: LucasSloan 20 June 2010 03:20:59AM *  2 points [-]

an action hero actor.

Why is this at all ridiculous? Is there any reason to believe Arnold Schwarzenegger has done a significantly worse job than other governors, controlling for ability of the legislature to agree on anything and the health of the economy?

Comment author: wedrifid 20 June 2010 03:41:16AM *  3 points [-]

Why is this at all ridiculous?

It merely serves to illustrate what politics is really about. It certainly isn't about voting for people who are the best suited for making and implementing the decisions that are best for the country, planet or species. I actually would have voted for him unless he had a particularly remarkable opponent. All else being equal, a popular contribution in another field that I appreciate is a more important signal to me than success as a pure courtier. It is unfortunate that I do not have reason to consider political popularity a stronger signal of country-leading competence than creating 'Kindergarten Cop'.

Is there any reason to believe Arnold Schwarzenegger has done a significantly worse job than other governors, controlling for ability of the legislature to agree on anything and the health of the economy?

I've already assigned a low probability to Alvin being at all worse than the alternatives. I expect Arnold would be 'even' better.

(Oh, and I do think that one liner is sub-par. It would be better to stick to the actually ridiculous rather than the superficially ridiculous.)

Comment author: NancyLebovitz 27 June 2010 02:59:48PM 4 points [-]

News and mental focus

RIKI OTT: Exxon never said it in a press conference. Just when the media started to ask questions, where did that 10.8 million gallons come from, has it been independently verified, Frank Iarossi, the owner of Exxon Shipping, at a press conference said, alcohol may be involved. And I kid you not, I witnessed the entire international media just switch tracks, and that was how we got 10.8 million gallons, rounded up to 11.

A couple years later, when I saw the movie Wag the Dog, I saw that scene where the president was just about to get nailed, and a plant in the audience says, well, what about the bombs in Albania? And the whole media switched to bombs in Albania. And I rose up out of my seat, and I said, that is how we got 11 million gallons. And my two friends each grabbed a wrist and pulled me back down into my chair. And I just swore that I would never forget 38 million gallons.

Comment author: xamdam 27 June 2010 04:27:13PM *  1 point [-]

I think Derren Brown uses this as a mind hack a lot: http://www.youtube.com/watch?v=3Vz_YTNLn6w (notice the specific diversion into spatial memory; it's probably been tried and tested as the best distraction from the color of the money in hand)

I feel that mental focus is VERY weak and very exploitable.

As a side note, I think there is another, less obvious, mental hack going on, on the audience. Derren claims (in the intro to this TV series) that there is no acting here, but a lot of misdirection. I believe it. I think when he shows this trick working 2 out of 3 times, it's probably more like 2 out of 30. My guess is that he biases the sample quite cleverly: showing 3 cases is exactly the minimum you can show that gives the impression that a) the reporting is honest (see, I showed a failure!) and b) the 'magic' works in most cases. Also, I think getting caught/embarrassed by a hot dog vendor evokes certain associations (yeah, he can be beaten) which prevent you from thinking about how often he can be beaten.

Here is to you Derren, Master of Dark Arts.

Comment author: Vladimir_M 27 June 2010 08:16:08PM 4 points [-]

Note however that Derren Brown's tricks have turned out to be staged in at least one instance. This makes me extremely skeptical towards the rest of them too.

Comment author: Randaly 21 June 2010 10:39:40PM 4 points [-]

A recent study found that one effective way to resist procrastination in future tasks is to forgive previous procrastination, because the negative emotions that would otherwise remain create an ugh field around that task.

I found the study recently, but I've personally found this to be effective previously. Forcing your way through an ugh field isn't sustainable due to our limited supply of willpower (this is hardly a new idea, but I haven't seen it referenced in my limited readings on LW.)

Comment author: multifoliaterose 19 June 2010 08:11:03PM 4 points [-]

I remember a post by Eliezer in which he was talking about how a lot of people who believe in evolution are actually exhibiting the same thinking styles that creationists use when they justify their belief in evolution (using buzzwords like "evidence" and "natural selection" without having a deep understanding of what they're talking about, having Guessed the Teacher's Password). I can't remember what this post was called - does anybody remember? I remember it being good and wanted to refer people to it.

Comment author: Vladimir_M 19 June 2010 11:01:24PM *  7 points [-]

I remember reading a post titled "Science as Attire," which struck me as making a very good point along these lines. It could be what you're looking for.

As a related point, it seems to me that people who do understand evolution (and generally have a strong background in math and natural sciences) are on average heavily biased in their treatment of creationism, in at least two important ways. First, as per the point made in the above linked post, they don't stop to think that the great majority of folks who do believe in evolution don't actually have any better understanding of it than creationists. (In fact, I would say that the best informed creationists I've read, despite the biases that lead them towards their ultimate conclusions, have a much better understanding of evolution than, say, a typical journalist who will attack them as ignorant.) Second, they tend to way overestimate the significance of the phenomenon. Honestly, if I were to write down a list of widespread delusions sorted by the practical dangers they pose, creationism probably wouldn't make the top fifty.

Comment author: Mass_Driver 20 June 2010 02:35:45AM 6 points [-]

I'm extremely curious to hear both your list and JoshuaZ's list of the top 20 or so most harmful delusions. Feel free to sort by category (1-4, 5-10, 11-20, etc.) rather than rank in individual order.

Comment author: JoshuaZ 20 June 2010 04:46:47AM *  6 points [-]

I've separated some forms of alternative medicine out when one might arguably put them closer together. Also, I'm including Young Earth Creationism, but not creationism as a whole. Where that goes might be a bit more complicated. There's some overlap between some of these (such as young earth creationism and religion). The list also does not include any beliefs that have a fundamentally moral component. I've tried to not include beliefs which are stupid but hard to deal with empirically (say that there's something morally inferior about specific racial groups). Finally, when compiling this list I've tried to avoid thinking too much about the overall balance that the delusion provides. So for example, religion is listed where it is based on the harm it does, without taking into account the societal benefits that it also produces.

1-4: Religion, Ayurveda, Homeopathy, Traditional Chinese medicine (as standardized post 1950s)

5-10 The belief that intelligence differences have no strong genetic component. The belief that intelligence differences have no strong environmental component. The belief that there are no serious existential threats to humans. The belief that external cosmetic features or national allegiances are strong indicators of mental superiority or inferiority. That human females have fundamentally less mental capacity and that this difference is enough to be a useful data point when evaluating humans. The belief that the Chinese government can be trusted to benefit its people or decide what information they should or should not have access to. (The primary reason this gets on the list is the sheer size of China. There are other governments which are much, much worse and have similar delusions by the people. But the damage level done is frequently much smaller.)

11-20: Vaccines cause autism. Young Earth Creationism. The Invisible Hand of the Market solves everything. Government solves everything. Providence. That there are not fundamental limits on certain natural resources. That nuclear power is intrinsically worse than other forms of energy. The belief that large segments of the population are fundamentally not good at math or science. Astrology. The belief that antibiotics can deal with viral infections.

There were a few that I wanted to stick on for essentially emotional reasons. So for example Holocaust Denial almost got on the list and when I tried to justify it I saw myself engaging in what was clearly motivated cognition.

This list is very preliminary. The grouping is also very tentative and could likely be easily subject to change.

Comment author: wedrifid 21 June 2010 08:53:52AM 2 points [-]

The belief that the Chinese government can be trusted to benefit its people or decide what information they should or should not have access to.

Is it trust or fear that is the real problem in that case? What would you do as an average Chinese citizen who wanted to change the policy? (Then, the same question assuming you were an actual Chinese citizen who didn't have your philosophical mind, intelligence, idealism and resourcefulness.)

Comment author: JoshuaZ 21 June 2010 03:16:35PM *  3 points [-]

Is it trust or fear that is the real problem in that case?

It seems like it is a mix. From people I've spoken to in China and the impression I get from what I've read about the Chinese censorship, the majority of people are generally ok with letting the government control things and think that that's really for the best. This seems to be changing slightly with the younger generation but it is hard to tell.

What would you do as an average Chinese citizen who wanted to change the policy? (Then, the same question assuming you were an actual Chinese citizen who didn't have your philosophical mind, intelligence, idealism and resourcefulness.)

Good points certainly. I'm not sure any average Chinese citizen acting alone can do anything. If I were an actual Chinese citizen, even granting my "philosophical mind, intelligence, idealism and resourcefulness," I'm not sure I'd do anything either, not because I couldn't, but because the risk would be high. It is easy to say "oh, people in X situation should do Y because that's morally better or better for everyone overall" when one isn't in that situation. When it is one's own life, family, or livelihood being threatened, it is obviously going to be a lot more difficult. It isn't that I'm a coward (although I might be); it is just that standing up to the government in that sort of situation takes a lot of courage that I'm pretty sure I (and most people) don't have. But if the general population took an attitude that was more willing to do minor things (spreading things like Tor or other methods of getting around the Great Firewall, for example), then things might be different. But even that might not have a large impact.

So yeah, I may need to take this off the list.

Comment author: Emile 21 June 2010 05:19:19PM 1 point [-]

From people I've spoken to in China and the impression I get from what I've read about the Chinese censorship, the majority of people are generally ok with letting the government control things and think that that's really for the best. This seems to be changing slightly with the younger generation but it is hard to tell.

I get the impression that overall, the younger generation is more apathetic about politics than the older one.

(Though there is also the relatively recent phenomenon of "angry youths" (fenqing), who rant on forums and such.)

Comment author: Emile 21 June 2010 08:21:40AM 2 points [-]

Lists like that are good!

The belief that the Chinese government can be trusted to benefit its people or decide what information they should or should not have access to. (The primary reason this gets on the list is the sheer size of China. There are other governments which are much, much worse and whose people hold similar delusions. But the damage done is frequently much smaller.)

I'm a bit surprised at that one - the current Chinese government seems pretty rational and efficient to me, and I'd be hard-pressed to say what I would do differently in its place (or rather: there are things I would do differently, but I'm not sure I'd get better results).

Control of information by the government should be seen mostly as a way of preserving its own power. So I'm not really sure how to interpret "The belief that the Chinese government can be trusted to [...] decide what information they should or should not have access to." - could you rephrase that belief so that its irrationality becomes more apparent, maybe tabooing "can be trusted to"? If you mean "Chinese people wrongly believe that the government is restricting information access for their own good", then I'm not sure that a lot of people actually believe that, nor, for those who do, that the belief does much harm.

Comment author: JoshuaZ 21 June 2010 03:42:33PM *  1 point [-]

If you mean "Chinese people wrongly believe that the government is restricting information access for their own good", then I'm not sure that a lot of people actually believe that, and for those that do, that believing it does any harm.

Ok. My impression is that that is a common belief in China and is connected to the belief that the government doesn't actively lie. I don't have a very good citation for this other than general impressions, so I'm going to point to a relevant blog entry by a friend who spent a few years in China, where she discusses this with examples. There are of course limits to how far even that will go. This is also complicated by the fact that much of the really serious harm in China (detainment of citizens for questioning policies, beatings and torture, ignoring of basic environmental and safety issues) stems from the local governments rather than the central government, and the relationship between Beijing and the local governments is very complicated. See also my remarks above to wedrifid, which touch on these issues as well. So yeah, it may make sense to take this off the list given the lack of harm directly coming from this issue.

Comment author: Douglas_Knight 21 June 2010 09:09:29PM 2 points [-]

I don't interpret the story in that blog post that way at all. People repeating nationalist lies doesn't mean they've been fooled.

I highly recommend these posts about the psychology of mass lies. I don't recommend the third part.

Comment author: JanetK 20 June 2010 09:12:43AM 2 points [-]

Where would you put 'belief in free will' and 'belief in determinism'?

Comment author: JoshuaZ 20 June 2010 01:37:48PM 3 points [-]

They probably wouldn't get anywhere on the list for the reason that a) I'm not convinced that either determinism or free will as often given are actually well-defined notions and b) I don't see either belief as causing much harm in practice.

Comment author: Risto_Saarelma 21 June 2010 12:51:22PM 1 point [-]

The belief that large segments of the population are fundamentally not good at math or science.

This one caught my eye; I don't think I've seen this listed as an obvious delusion before. Can you expand on it? I guess the idea is that a much larger number of people could make use of math or science if they weren't predisposed to think that they belong in an incapable segment?

I'm thinking of something like the quarter of the population that scores lowest on a standard IQ test or the local SAT-equivalent as the "large segment of population", though. A test for basic science and mathematics skills could be being able to successfully work out solutions to some introductory exercises from a freshman university course in mathematics or science, given the exercise, relevant textbooks and prerequisite materials, and, say, up to a week to work things out from the textbook.

It doesn't seem obvious to me that such a test would end up with results that would make the original assertion go straight into 'delusion' status. My suspicions are somewhat based on an article from a couple of years back, which claimed that many freshman computer science students seem to simply lack the basic mental-model-building ability needed to start comprehending programming.

Comment author: JoshuaZ 21 June 2010 03:58:22PM 2 points [-]

I guess the idea is that a much larger number of people could make use of math or science if they weren't predisposed to think that they belong in an incapable segment?

Yes. And more people would go into math and science.

My suspicions are somewhat based on an article from a couple of years back, which claimed that many freshman computer science students seem to simply lack the basic mental-model-building ability needed to start comprehending programming.

That's a very interesting article. I think that the level and type of abstraction necessary to program is already orders of magnitude beyond where most people stop being willing to do math. My own experience tutoring students who aren't doing well in math is that one of the primary issues is confidence: students of all types think they aren't good at math and thus freeze up when they see something that is slightly different from what they've done before. If they understand that they aren't bad at math, or that they don't need to be bad at math, they are much more likely to be willing to play around with a problem a bit rather than just panic.

I was an undergraduate at Yale, which is generally considered to be a decent school that admits people who are by and large not dumb. And one thing that struck me was that even in that sort of setting, many people minimized the amount of math and science they took. When asked about it, the most common claim was that they weren't good at it. Some of those people are going to end up as future senators and congressmen and have close to zero idea of how science works or how statistics work other than at the level they got from high school. If we're lucky, they know the difference between a median and a mean.

Comment author: Emile 21 June 2010 08:33:04AM 1 point [-]

That there are not fundamental limits on certain natural resources.

Does anybody actually claim to believe that ?

Comment author: JoshuaZ 21 June 2010 03:28:51PM 3 points [-]

This view is surprisingly common. I don't want to move too much toward a potentially mind-killing subject, but the idea isn't uncommon among certain groups in US politics. Indeed, they hold it so strongly for some resources that they take it almost as an ideological point. This occurs most frequently when discussing oil. Emphasis is placed on things like the Eugene Island field and abiotic oil, which they argue show we won't run out of oil. The second is particularly galling because even if the abiotic oil hypotheses were correct, the level of oil production would still be orders of magnitude below the consumption rate. I'd point more generally to followers of Julian Simon (not Simon himself per se; his own arguments were generally more nuanced and subtle than what many people seem to get out of them).

Comment author: CronoDAS 21 June 2010 08:11:50AM 5 points [-]

I'll give you a big one: Dying a martyr's death gives you a one-way ticket to Paradise.

Comment author: Vladimir_M 20 June 2010 09:37:25AM *  4 points [-]

Mass_Driver:

I'm extremely curious to hear both your list and JoshuaZ's list of the top 20 or so most harmful delusions.

I'm not sure if that would be a smart move, since it would mean an extremely high concentration of unsupported controversial claims in a single post. Many of my opinions on these matters would require non-obvious lengthy justifications, and just dumping them into a list would likely leave most readers scratching their heads. If you're really curious, you can read the comment threads I've participated in for a sample, in particular those in which I argue against beliefs that aren't specific to my interlocutors.

Also, it should be noted that the exact composition of the list would depend on the granularity of individual entries. If each entry covered a relatively wide class of beliefs, creationism might find itself among the top fifty (though probably nowhere near the top ten).

Comment author: wedrifid 20 June 2010 10:01:49AM *  5 points [-]

I'm not sure if that would be a smart move, since it would mean an extremely high concentration of unsupported controversial claims in a single post.

In this format that sounds like a good thing! At worst it would spark curiosity and provoke discussion. At best people would encounter a startling opinion that they had never seriously considered, think about it for 60 seconds, and then form an understanding that either agrees with yours or disagrees, for a considered reason.

Comment author: h-H 20 June 2010 12:35:23PM *  1 point [-]

seconded, but a list of 20 seems too long/too much work, no?

Comment author: wedrifid 20 June 2010 12:48:29PM 1 point [-]

I'd be thinking 5. :)

Comment author: wedrifid 20 June 2010 03:52:35AM 2 points [-]
  1. The creation of an FAI is not the most important thing the species could be doing.
  2. The best way to create an FAI is not...
Comment author: LucasSloan 20 June 2010 03:18:15AM 3 points [-]

If I might jump in on the listing of delusions, I think that perhaps one of the most important things to understand about widespread delusions is who, in fact, holds them. A bunch of rednecks in Louisiana not believing in evolution isn't important, because even if they did, it wouldn't inform other parts of their worldview. In general, the specific delusions of ordinary people (IQ < 120) aren't important, because they aren't the ones who are actually affecting anything. Even improving the rationality and general problem awareness of smart people (120 < IQ < 135) doesn't really help, because then you get people who will expend enormous effort doing things like evangelizing atheism to the ordinary people and fighting global warming and the like. Raising the sanity waterline is important, but effort should be focused on people with the ability to actually use true beliefs.

Comment author: cupholder 20 June 2010 07:12:48AM 3 points [-]

In general, the specific delusions of ordinary people (IQ < 120) aren't important, because they aren't the ones who are actually affecting anything.

I'm less sure. I would have thought that they affect things indirectly at least through social transmission of beliefs, what they choose to spend their money on, and the demands they make of politicians.

Even improving the rationality and general problem awareness of smart people (120 < IQ < 135) doesn't really help, because then you get people who will expend enormous effort doing things like evangelizing atheism to the ordinary people and fighting global warming and the like.

Arguably, one should expect it to help less than improving the rationality and awareness of people with IQ < 120, just because there are about 11 times as many people with IQ < 120 as there are with 120 < IQ < 135.
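
(As a rough check of that ratio, here is a minimal sketch assuming the conventional normal model for IQ, mean 100 and SD 15; the function and numbers are illustrative, not anyone's data:)

    # Rough check of the "11 times" figure, assuming IQ ~ Normal(100, 15).
    from math import erf, sqrt

    def cdf(x, mean=100.0, sd=15.0):
        """P(score < x) under the assumed normal model."""
        return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

    below_120 = cdf(120)
    between_120_135 = cdf(135) - cdf(120)
    print(below_120 / between_120_135)  # prints roughly 11.2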

Comment author: Mass_Driver 20 June 2010 03:52:53AM 3 points [-]

I sincerely hope that you are using IQ as only the crudest shorthand for "ability to actually use true beliefs," but your point in general is very well taken. Please do jump in if you have a listing of the most harmful delusions. :-)

Comment author: wedrifid 20 June 2010 03:58:11AM 2 points [-]

IQ >= 120 is a fairly low bar. IQ is also a strong indicator of the potential for someone's behavior to be influenced by delusions (rather than near-mode thinking plus social pressure being the dominant adaptation).

Comment author: Mass_Driver 20 June 2010 04:03:51AM 2 points [-]

Do you mean to say that people of ordinary intelligence, as a general rule, don't actually believe whatever it is they say they believe, but instead just parrot what those around them say? You might be right. I think I need to find a way to re-immerse myself in a crowd of people of average intelligence; it's been far too long, and my predictive/descriptive powers for such people are fraying.

Note that none of this is sarcasm; this comment is entirely sincere.

Comment author: Douglas_Knight 20 June 2010 04:24:15AM 4 points [-]

Wedrifid only said "potential"; most people, smart or not, behave as you say. And I would expand "delusion" to "belief": being smart is correlated with being influenced by beliefs, true or false.

That people act on beliefs or have at all coherent world-views is the most dangerous widespread delusion. ("The world is mad.") Immersing yourself in a crowd of average intelligence might help you see this, but I rather doubt that your associates act on their beliefs.

Comment author: wedrifid 20 June 2010 04:34:13AM 3 points [-]

Another thing that is dangerous is the people that actually act on their beliefs. They are much harder to control. People 'acting as if' pragmatically don't do things that we strongly socially penalize.

Comment author: Mass_Driver 20 June 2010 04:44:37AM 1 point [-]

I rather doubt that your associates act on their beliefs.

Not on their stated beliefs, surely; but don't most people have a set of actual beliefs? Can't these actual beliefs, at least in some contexts, be nudged so as to influence the level and direction of cognitive dissonance, which in turn can influence actions?

Comment author: JoshuaZ 20 June 2010 04:28:01AM *  3 points [-]

There's certainly evidence that intelligent people are more likely to have more coherent worldviews. For example, the GSS data shows that higher vocabulary is associated with more extreme political views to either end of the traditional political spectrum. There's similar research for IQ scores but I don't have a citation for that.

Comment author: Mass_Driver 20 June 2010 04:42:57AM 1 point [-]

There's certainly evidence that intelligence people

You really should watch your grammar, syntax, and spelling while commenting on intelligence. The irony is distracting, otherwise. Unless you were referring to the CIA and FBI?

Comment author: JoshuaZ 20 June 2010 04:50:32AM 1 point [-]

It might be more generally a sign that I shouldn't comment when it is late at night in my timezone. Also, it should constitute evidence that we need better spellcheckers that don't just catch non-words but also words that are clearly wrong from minimal context (although in this particular case catching that that was the wrong word would almost seem to require solving the natural language problem unless one had very good statistical methods).

Comment author: Blueberry 20 June 2010 06:54:54AM 1 point [-]

Are you saying more extreme political views are more coherent? I'm not following this.

Comment author: Vladimir_M 20 June 2010 09:58:39AM *  5 points [-]

Blueberry:

Are you saying more extreme political views are more coherent?

That seems like an almost self-evident observation to me. I have never seen anyone state clearly any political or ideological principles, of whatever sort and from whatever position, whose straightforward application wouldn't lead to positions that are utterly extremist by the standards of the present centrist opinion.

Getting people with regular respectable opinions to contradict themselves by asking a few Socratic questions is a trivial exercise (though not one that's likely to endear you to them!). The same is not necessarily true for certain extremist positions.

Comment author: LucasSloan 20 June 2010 06:59:48AM 2 points [-]

Typically, yes. People with extreme views typically don't fail to make inferences from their beliefs, along the lines of "X is good, so doing Y, which creates even more of X's goodness, would be even better!" Y might in fact be utterly stupid and evil and wrong, and a moderate with less extreme views might oppose it, but the moderate and the extremist can both agree that X is good; the extremist's failure is carrying that shared premise through to the evil Y.

Comment author: wedrifid 20 June 2010 04:21:34AM 2 points [-]

I differentiate between 'actually believe' and 'act as if they are an agent with the belief that'. All people mostly do the latter but high IQ people are somewhat more likely to let 'actual beliefs' interfere with their lives.

Comment author: LucasSloan 20 June 2010 04:03:43AM 2 points [-]

Taking into account what I already said about needing to influence people who can actually use beliefs (thus controlling for things like atheism, evolution, etc.)...

  1. FAI and related.
  2. Inability to do math.
  3. Failures around believing the state of the world is good (thinking aging is a good thing and the like).
  4. Believing that politics is the best way to influence the world.
Comment author: JoshuaZ 20 June 2010 04:43:09AM 2 points [-]

FAI and related.

What is the delusion here?

Inability to do math.

What is the delusion here? Do you mean people convincing themselves that they can't do math?

Failures around believing the state of the world is good

This seems too subjective to label a delusion.

Believing that politics is the best way to influence the world.

What do you mean by best and by influence?

Comment author: wedrifid 20 June 2010 04:36:24AM 2 points [-]

Inability to do math? Really? Are you talking about 'disinclination to shut up and multiply' or the actual ability to do math?

I love math but don't really think most people need it.

Comment author: RichardKennaway 21 June 2010 10:11:48AM 2 points [-]

Dredging this up from deep nesting, because I think it's important: wedrifid says

The biggest problem for people learning basic calculus is that people teaching it try to convey that it is hard.

Yes. Never tell anyone that what you're teaching them is hard. When you do that, you're telling them they'll fail, telling them to fail.

Comment author: Alicorn 21 June 2010 05:39:12PM 3 points [-]

But if you tell them it's easy, then they will be embarrassed for failing at something easy, or can't be proud of succeeding at something easy.

Comment author: RichardKennaway 21 June 2010 06:48:59PM 2 points [-]

Telling them it's easy is also a bad idea.

Comment author: Alicorn 21 June 2010 08:18:27PM *  1 point [-]

It strikes me that giving no information about the general difficulty of the subject is also a bad idea. (I imagined myself struggling with a topic where I had no information on how hard others found it, and my hypothetical self was ashamed, because clearly if it were something everyone found hard, they'd warn people and teach it more slowly, so it must be easy for everybody else but me.)

Comment author: Blueberry 21 June 2010 08:25:32PM 3 points [-]

Ideally, you'd teach the student not to be concerned with how well or how quickly they learn compared to others, which is a general learning technique that can apply to any field.

Comment author: RichardKennaway 22 June 2010 07:38:29AM 0 points [-]

When I teach, I don't say anything about "easy" or "difficult". I just teach the material. What is this "easy", this "difficult"? There is no "easy" or "difficult" for a Jedi -- there is only the work to be done and the effort it takes. "Difficult" means "I will fail". "Effort" means "I will succeed".

(I imagined myself

You are torturing yourself by inventing fictional evidence. You have an entire imaginary scenario there, shadows and fog conjured from thin air.

Comment author: SilasBarta 21 June 2010 08:57:00PM *  2 points [-]

Right, and there's the issue of whose fault the difficulty is. Sure, the student might not really be trying. But also, the teacher may not be explaining in a way that speaks to the learner's natural fluency. A method that works for the geeky types won't work for more neurotypical types.

For my part, I never have trouble explaining high school math to those who haven't completed it, even if they're told that trig, calculus, etc. is hard. It's because I first focus on finding out where exactly their knowledge deficit is and why the subject matter is useful. Of course, teachers don't have the luxury of one-on-one instruction, but yes, how you present the material matters greatly.

Comment author: PhilGoetz 20 June 2010 04:47:17AM 2 points [-]

Most people don't need to understand evolution. Maybe we should distinguish between "harmful to self", "harmful to society", and "harmful to a democratic society".

If you can't do math at a fairly advanced level - at least having competence with information theory, probability, statistics, and calculus - you can't understand the world beyond what's visible on its (metaphorical) surface.

Comment author: JoshuaZ 20 June 2010 04:59:48AM *  7 points [-]

If you can't do math at a fairly advanced level - at least having competence with information theory, probability, statistics, and calculus - you can't understand the world beyond what's visible on its (metaphorical) surface.

While as a mathematician I find that claim touching, I can't really agree with it. To use the example that was one of the starting points of this conversation, how much math do you need to understand evolution? Sure, if you want to really understand the modern synthesis in detail you need math. And if you want to make specific predictions about what will happen to allele frequencies you'll need math. But in those cases it is very basic probability and maybe a tiny bit of calculus (and even then, more often than not you can use the formulas without actually knowing why they work beyond a very rough idea).
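
(To make that concrete, here is a minimal sketch of the kind of "very basic probability" involved: a one-locus haploid selection model. The fitness values and starting frequency are invented for illustration.)

    # One-locus haploid selection: invented fitness values.
    def next_freq(p, w_a=1.05, w_b=1.00):
        """Frequency of allele A after one generation of selection."""
        mean_fitness = p * w_a + (1 - p) * w_b
        return p * w_a / mean_fitness

    p = 0.01  # favored allele starts at 1%
    for _ in range(500):
        p = next_freq(p)
    print(round(p, 3))  # a 5% advantage carries it close to fixation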

Similar remarks apply to other areas. I don't need a deep understanding of any of those subjects to have a basic idea about atoms, although again I will need some of them if I want to actually make useful predictions (say for Brownian motion).

Similarly, I don't need any of those subjects to understand the Keplerian model of orbits, and I'll only need one of those four (calculus) if I want to make more precise estimations for orbits (using Newtonian laws).

The amount of actual math needed to understand the physical world is pretty minimal unless one is doing hard core physics or chemistry.

Comment author: wedrifid 20 June 2010 08:20:29AM 1 point [-]

The amount of actual math needed to understand the physical world is pretty minimal unless one is doing hard core physics or chemistry.

For example... trying to work out what happens when I shoot a never-ending stream of electrons at a black hole. The related theories were more or less incomprehensible to me at first glance. Not being able to do off-the-wall theorizing on everything at the drop of a hat has to at least make #49!

Comment author: sketerpot 21 June 2010 02:34:04AM *  10 points [-]

I've got a tangential question: what math, if learned by more people, would give the biggest improvement in understanding for the effort put into learning it?

Take calculus, for example. It's great stuff if you want to talk about rates of change, or understand anything involving physics. There's the benefit; how about the cost? Most people who learn it have a very hard time doing so, and they're already well above average in mathematical ability. So, the benefit mostly relates to understanding physics, and the cost is fairly high for most people.

Compare this with learning basic probability and statistical thinking. I'm not necessarily talking about learning anything in depth, but people should have at least some exposure to ideas like probability distributions, variance, normal distributions and how they arise, and basic design of experiments -- blinding, controlling for variables, and so on. This should be a lot easier to learn than calculus, and it would give insight into things that apply to more people.

I'll give a concrete example: racism. Typical racist statements, like "black people are lazy and untrustworthy," couldn't possibly be true in more than a statistical sense, and obviously a statistical statement about a large group doesn't apply to every member of that group -- there's plenty of variance to take into account. Basic statistical thinking makes racist bigotry sound preposterously silly, like someone claiming that the earth is flat. This also applies to every other form of irrational bigotry that I can think of off the top of my head.

Remember when Larry Summers suggested that maybe part of the reason for the underrepresentation of women in Harvard's science faculty was that women may have lower variance in intelligence than men, and so are underrepresented in the highest part of the intelligence bell curve? What almost everybody heard was "Women can't be scientists because they're stupid." People heard a statistical statement and had no idea how to understand it.
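
(A toy computation makes the variance point vivid. Every number below is invented; nothing here is a claim about actual group differences:)

    # Same mean, slightly different spread, compared far out in the tail.
    from math import erf, sqrt

    def tail_fraction(threshold, mean=100.0, sd=15.0):
        """Fraction of a Normal(mean, sd) population above threshold."""
        return 0.5 * (1 - erf((threshold - mean) / (sd * sqrt(2))))

    narrow = tail_fraction(145, sd=14.0)  # hypothetical lower-variance group
    wide = tail_fraction(145, sd=15.0)    # hypothetical higher-variance group
    print(f"{narrow:.5f} vs {wide:.5f}, ratio {wide / narrow:.2f}")

A 7% difference in spread roughly doubles representation three standard deviations out, while still telling you almost nothing about any given individual.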

There are important, relevant subjects that people just cannot understand without basic statistical thinking. I would like to see most people exposed to basic statistical thinking.

Are there any other kinds of math that offer high bang-for-the-buck, as far as learning difficulty goes? (I've always thought that the math behind computer programming was damn useful stuff, but the engineering students I've talked with usually find it harder than calculus, so maybe that's not the best idea.)

Comment author: nhamann 21 June 2010 04:13:05AM 2 points [-]

(I've always thought that the math behind computer programming was damn useful stuff, but the engineering students I've talked with usually find it harder than calculus, so maybe that's not the best idea.)

Tangential question to your tangential question: I'm puzzled, which math are you talking about here? The only math relevant to programming that I can think of that engineering students would also learn would be discrete math, but the extent needed for good programming competency is pretty small and easy to pick up.

Are we talking numerical computing instead, with optimization problems and approximating solutions to DE's? That's the only thing I can think of relevant to engineering for which the math background might be more difficult than calculus.

Comment author: sketerpot 21 June 2010 04:50:23AM 2 points [-]

I was thinking more basic: induction, recursion, reasoning about trees. Understanding those things on an intuitive level is one of the main barriers that people face when they learn to program. It's one thing to be able to solve problems out of a textbook involving induction or recursion, but another thing to learn them so well that they become obvious -- and it's that higher level of understanding that's important if you want to actually use these concepts.
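
(Something like the following sketch is the level I mean; the Node class is made up for the example, and the point is trusting the recursive call to handle each subtree.)

    # Summing the values in a binary tree by structural recursion.
    class Node:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def tree_sum(node):
        if node is None:  # base case: an empty subtree contributes nothing
            return 0
        return node.value + tree_sum(node.left) + tree_sum(node.right)

    tree = Node(1, Node(2, Node(4)), Node(3))
    print(tree_sum(tree))  # 10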

Comment author: taiyo 21 June 2010 03:54:44AM 2 points [-]

Probability theory as extended logic.

I think it can be presented in a manner accessible to many (Jaynes PT:LOS is not accessible to many).
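
(One possible entry point, sketched with invented numbers: the standard base-rate example, where Bayes' rule does quantitatively what a syllogism does qualitatively.)

    # Bayes' rule on the classic medical-test example; numbers invented.
    prior = 0.01            # P(disease)
    p_pos_given_d = 0.95    # test sensitivity
    p_pos_given_not = 0.05  # false-positive rate

    p_pos = p_pos_given_d * prior + p_pos_given_not * (1 - prior)
    posterior = p_pos_given_d * prior / p_pos
    print(f"P(disease | positive) = {posterior:.3f}")  # about 0.161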

Comment author: Will_Newsome 20 June 2010 06:48:48AM 2 points [-]

I'd like this to be true, as I want the time I spend learning math in the future to be as useful as you say, but I seem to have come rather far by knowing the superficial version of a lot of things. Knowing the actual math from something like PT:LOS would be great, and I plan on reaching at least that level in the Bayesian conspiracy, but I can currently talk about things like quantum physics and UDT and speed priors and turn this into changes in expected anticipation. I don't know what Kolmogorov complexity is, really, in a strictly formal from-the-axioms sense, nor Solomonoff induction, but I reference it or things related to it about 10 times a day in conversations at SIAI house, and people who know a lot more than I do mostly don't laugh at my postulations. Perhaps you mean a deeper level of understanding? I'd like to achieve that, but my current level seems to be doing me well. Perhaps I'm an outlier. (I flunked out of high school calculus and 'Algebra 2' and haven't learned any math since. I know the Wikipedia/Scholarpedia versions of a whole bunch of things, including information theory, computer science, algorithmic probability, set theory, etc., but I gloss over the fancy Greek letters and weird symbols and pretend I know the terms anyway.)

Comment author: CronoDAS 21 June 2010 09:22:11AM 3 points [-]

I flunked out of high school calculus and 'Algebra 2' and haven't learned any math since.

I have a belief that I can fix things like this, having spent time working with other students in high school. If I ever meet you in person, will you assist me in testing that belief? ;)

Comment author: Will_Newsome 20 June 2010 07:34:33AM 3 points [-]

A public reminder to myself so as to make use of consistency pressure: I shouldn't write comments like the one I wrote above. It lingers too long on a specific argument that is not particularly strong and was probably subconsciously fueled by a desire to talk about myself and perhaps countersignal to someone whose writing I respect (Phil Goetz).

Comment author: LucasSloan 20 June 2010 06:55:01AM 2 points [-]

I'm pretty sure that most people around Less Wrong have about the same level of familiarity with most subjects (outside whatever field they actually specialize in). I do think that you are relatively weak in mathematics, but advanced math just really isn't that important vis-a-vis being generally well educated and rational.

Comment author: JoshuaZ 20 June 2010 02:24:18AM 3 points [-]

If you said that it wouldn't make the top 10, I'd find that not implausible. Claiming it wouldn't make the top 50 seems implausible. Actual dangers posed by creationism:

  1. It makes people generally more anti-science and makes children less likely to become scientists.
  2. It takes up large sets of resources that would otherwise be spent usefully.
  3. It actively includes the spreading of a lot of misinformation.
  4. It snags bright minds who might otherwise become productive individuals. (Jonathan Sarfati, for example, is a chess master, unambiguously quite bright, and had multiple good scientific papers before getting roped into YECism. Michael Behe is in a similar situation, although for ID rather than young earth creationism.)
  5. The young earth variants encourage a narrow time outlook, which is not helpful for long-term planning about the world or appreciation of serious existential threats (although honestly, so few people pay attention to existential risks that this is probably a minor issue).
  6. It causes actual scientists and teachers to lose their jobs or have their work restricted (admittedly this isn't common, but that's partially because creationism doesn't have much ground).
  7. It encourages generally extremist religious attitudes.

So not in the top 10? I'd agree with that. But I have trouble seeing it not in the top 50 most dangerous widespread delusions.

Comment author: multifoliaterose 20 June 2010 01:37:22AM 2 points [-]

Thanks, this is what I had in mind.

Comment author: wedrifid 19 June 2010 08:36:38PM 3 points [-]

I don't remember a post by Eliezer on the subject, but it is oh so true. I often feel a 'cringe' reaction when I hear 'evidence' being used as a religious symbol. It is the same cringe reaction I get when I hear people say "God says" about something that I know isn't even covered in their bible. In both cases something BAD is going on that has nothing to do with whether or not there is a God.

Comment author: ideclarecrockerrules 19 June 2010 07:58:44PM *  4 points [-]

Here is some javascript to help follow LW comments. It only works if your browser supports offline storage. You can check that here.

To use it, follow the pastebin link, select all that text and make a bookmark out of it. Then, when reading a LW page, just click the bookmark. Unread comments will be highlighted, and you can jump to next unread comment by clicking on that new thing in the top left corner. The script looks up every (new) comment on the page and stores its ID in the local database.

Edit: to be more specific, all comments are marked as read as soon as the script is run. I could come up with a version that only marks them as read once you click that thing in the upper left corner. Let me know if you're using it or if you'd like anything changed/added.

Comment author: W-Shadow 19 June 2010 11:13:24PM 1 point [-]

I made a similar Greasemonkey script some time ago.

Comment author: Kevin 28 June 2010 08:02:33PM *  3 points [-]

The sting of poverty

What bees and dented cars can teach about what it means to be poor - and the flaws of economics

http://www.boston.com/bostonglobe/ideas/articles/2008/03/30/the_sting_of_poverty/?page=full

and lots of Hacker News comments: http://news.ycombinator.com/item?id=1467832

Comment author: SilasBarta 28 June 2010 05:34:50PM 3 points [-]

Another economics WTF:

A lot of you may remember my criticism of mainstream economics: economists become so detached from what is meant by a "good economy" that they advocate things that are positively destructive in this original, down-to-earth sense.

Scott Sumner, I find to be particularly guilty of this. His sound economic reasoning has led him to believe that what the economy vitally needs right now is for banks to make bad (or at least wasteful) loans, just to get money circulating and prop up nominal GDP -- a measure known to be meaningless because it's an artifact of the money supply and has to be adjusted for interpretation.

Fed up with him saying this kind of thing, I sarcastically posted this remark:

Yes, the economy will definitely collapse if the Fed doesn’t print up more money to make shoddy loans for purchases people don’t want, and it’s a shame that folks at the Fed are stopping Bernanke from such a wise action.

And in his immediately following comment, he said,

Silas, I agree. :-)

Huh?

Comment author: [deleted] 24 June 2010 11:08:48PM 3 points [-]

Has anybody looked into OpenCog? And why is it that the wiki doesn't include much in the way of references to previous AI projects?

Comment author: Mitchell_Porter 25 June 2010 03:46:59AM 1 point [-]

If making a Friendly AI is compared to landing on the moon, I'd say OpenCog is something like the scaffolding for a backyard rocket. It still needs something extra - the rocket - and even then it won't achieve escape velocity. But a radically scaled-up version of OpenCog - with a lot more theory behind it, and tailored to run at the level of a whole data center rather than on a single PC - is the sort of toolset that could make a singularity.

Comment author: [deleted] 22 June 2010 06:51:37AM *  3 points [-]

For those of you who don't want to register at fanfic.com to receive notifications of new chapters of Harry Potter and the Methods of Rationality, I have set up a mailing list. You can add yourself here: http://felix-benner.com/cgi-bin/mailman/listinfo/fanfic It is still untested, so I don't know whether it will work, but I assume so.

Comment author: Kevin 29 June 2010 12:28:16AM 2 points [-]

The next advances in genomics may happen in China

http://www.economist.com/node/16349434?story_id=16349434

But the organisation is involved in even more controversial projects. It is about to embark on a search for the genetic underpinning of intelligence. Two thousand Chinese schoolchildren will have 2,000 of their protein-coding genes sampled, and the results correlated with their test scores at school. Though it will cover less than a tenth of the total number of protein-coding genes, it will be the largest-scale examination to date of the idea that differences between individuals’ intelligence scores are partly due to differences in their DNA.

Comment author: Alexandros 27 June 2010 09:20:30AM *  2 points [-]

I started writing something but it came up short for an article, so I'm posting it here:

Title: On the unprovability of the omni*

Our hero is walking down the street, thinking about proofs and disproofs of the existence of a god. This is no big coincidence as our hero does this often. Suddenly, between one step and the next, the world around her fades out, and she finds herself standing on thin air, surrounded by empty space. Then she hears a voice. "I am Omega. The all-powerful, all-knowing, all-good, ever-present being. I see you have been debating my existence with true purity of heart, so I have decided to provide you with any evidence you request". Once the shock wears off, our hero runs through the list of possible requests she could make. Healing the sick? Perhaps the reanimation of a dead person? Some time-travel? Maybe this could still be doubted. How about creation of a solar system? Or a universe? Maybe a proof of P vs. NP? Alas, our hero realises that any evidence she could request would only be proof of the power of Omega to produce just that thing, not an inclusive proof.

What's more, our hero knows that her thinking is subject to the operation of her mind and the readings of her senses, something she cannot trust in the presence of a vastly overpowering entity. The lower bound of power required of Omega to produce any experience for our hero is much lower than the power to create universes. It is merely the ability to control the senses of our hero, to become a kind of hypervisor and simulate all requests. While this is great power indeed, the distance from there to omnipotence remains vast. Similarly for omniscience, omnipresence, and omnibenevolence.

Our hero does not ask anything of Omega, and their meeting ends uneventfully, at least in terms of new universes being created or problems thought unsolvable being solved. She does realise, though, that omnipotence, omniscience, omnipresence, and omnibenevolence are not properties that can be verified by a human. If this is the definition of a god that theists are working with, then it is not only undisprovable, it is also unprovable. Taking knowledge to be 'justified true belief', a belief in an omni* god can never be justified, putting it firmly in the territory of the unknowable. The strongest claims that can reasonably be made are of a being that is very powerful, very knowledgeable, etc. But that is not nearly as interesting.


Now, I have posted a question along those lines in this thread before, with little response. What I would like your feedback on is whether this is a reasonable argument, whether I've gotten something completely wrong in my epistemology, and whether there have been similar arguments made by others. All help appreciated, cheers.

Comment author: Alicorn 27 June 2010 06:53:32PM *  5 points [-]

Wait... a being which, while possibly not omni-anything, is likely very powerful, offers to provide her any evidence she likes, and she considers and rejects the "healing the sick" and "resurrecting the dead" plans?

Comment author: Blueberry 27 June 2010 08:30:32PM 1 point [-]

Not to mention a solution to the P=NP problem (or the Riemann Hypothesis)?

Comment author: wedrifid 30 June 2010 05:31:52PM 0 points [-]

A super-powerful agent who is desperate to prove itself to her! That's the perfect opportunity! Unless she messes up the requested 'proof', she can become a demi-god, just below Omega (until Omega cracks it with her).

  • "If you are Omnipotent please prove it by giving me a pet genie."
  • "Genie, I want you to create an FAI that has my CEV."
  • "Genie, please do whatever my FAI tells you to."

That should result in an exponentially growing multiverse of universes, with each universe self-replicating on a sub-nanosecond time frame while simultaneously expanding in size and negentropy, all arranged for maximum Fun. Still not proof of Omnipotence but hey, it'll do.

Comment author: Alexandros 30 June 2010 05:05:37PM 0 points [-]

That's a good point. Any ideas on how to mend the hole?

Comment author: whpearson 30 June 2010 05:19:21PM *  0 points [-]

Have Omega offer to provide the proof, but then ask for an answer to the question of whether he is actually omni*. If the answer is incorrect he will destroy the world; if correct, he will let the world continue with whatever changes were made by the "wish". There is also the choice not to play.

You would have to make him non-omnibenevolent, though.

Comment author: Sniffnoy 27 June 2010 09:33:22PM 2 points [-]

Can we not get around this by using randomly chosen questions? And then we have IP=PSPACE, so anything that's in PSPACE, he can relatively quickly convince us he can solve. Obligatory Scott Aaronson link.
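
(Not the IP = PSPACE machinery itself, but a small taste of the random-challenge idea: Freivalds' algorithm, which checks an untrusted claim that C = A*B much more cheaply than recomputing the product. Each round errs with probability at most 1/2, so twenty rounds leave roughly a one-in-a-million chance of being fooled.)

    # Freivalds' check: does the claimed product c equal a*b?
    import random

    def freivalds_check(a, b, c, rounds=20):
        """Probabilistic verification; each round costs O(n^2)."""
        n = len(a)
        for _ in range(rounds):
            r = [random.randint(0, 1) for _ in range(n)]
            br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
            abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
            cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
            if abr != cr:
                return False  # caught the claimant lying
        return True  # consistent with the claim on every challenge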

Comment author: Alexandros 30 June 2010 05:09:03PM 0 points [-]

Thanks for that link, it was quite good. Any chance you could elaborate a bit on the IP=PSPACE identity?

Comment author: Sniffnoy 30 June 2010 05:37:10PM 0 points [-]

No, I don't really know complexity theory at all, so I couldn't really tell you any more than Wikipedia could.

Comment author: NancyLebovitz 27 June 2010 01:25:47PM 2 points [-]

What if your hero asks to be made omniscient, including the capacity to still be able to think well in the face of all that knowledge?

Throw in omnibenevolence if you like, but I think you get some contradictions if you ask for omnipotence. Either that, or you and Omega coalesce.

How could you test your omniscience to be sure it's the real thing?

Comment author: Oscar_Cunningham 27 June 2010 07:01:37PM 1 point [-]

Nothing is provable to the level you demand (well, pretty much nothing, cogito ergo sum and all that). Given that none of the omni* are well defined, the question doesn't mean much either.

Comment author: Alexandros 30 June 2010 05:08:20PM 0 points [-]

Are you saying that it's an inference problem and after enough pieces of evidence we should just accept omnipotence (for instance) as the best hypothesis with a high degree of confidence, as we trust gravity now? How about the mind control problem?

Also, what you say about the omni* being not well defined sounds interesting. can you elaborate?

Comment author: Oscar_Cunningham 30 June 2010 07:06:26PM 0 points [-]

Are you saying that it's an inference problem and after enough pieces of evidence we should just accept omnipotence (for instance) as the best hypothesis with a high degree of confidence, as we trust gravity now? How about the mind control problem?

That's exactly what I'm saying, and you're right to point out that mind control will always be a more probable explanation than omnipotence (as will mental illness). If I knew that something would continue to appear omnipotent, I would just treat it as omnipotent (which equates to "accepting the simulation" if the actual explanation is mind control).

Omnipotence is badly defined because it leads to questions like "Can Omega create a rock so heavy that Omega cannot lift it?" Can omnipotent beings create logical contradictions? Can they make 2+2=3? Omniscience leads to similar problems: can Omega answer the halting problem for programs that can call Omega as an oracle? Omnibenevolence is the least paradox-ridden, but the hardest to define. Whose version of good is Omega working toward?
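
(The halting worry is ordinary diagonalization, sketched below; omega_halts is a stand-in for the assumed oracle, not anything implementable:)

    # Assume an oracle deciding halting, even for oracle-calling programs.
    def omega_halts(program, arg):
        """Assumed: returns True iff program(arg) halts."""
        raise NotImplementedError("no such oracle can exist")

    def contrarian(program):
        if omega_halts(program, program):
            while True:  # oracle says we halt, so loop forever
                pass
        return "halted"  # oracle says we loop, so halt immediately

    # Whatever omega_halts(contrarian, contrarian) answers, running
    # contrarian(contrarian) does the opposite, so the oracle is wrong.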

Comment author: cupholder 27 June 2010 12:25:42AM *  2 points [-]

Statisticians Andrew Gelman and Cosma Shalizi have a new preprint out, 'Philosophy and the practice of Bayesian statistics.' The abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.

Comment author: CronoDAS 23 June 2010 05:34:28PM 2 points [-]
Comment author: GuySrinivasan 21 June 2010 04:18:28PM 2 points [-]

Statistical Analysis Overflow is trying to start up. If you'd be a regular contributor, go over and commit; if enough people commit, it'll go into beta.

It's a "Proposed Q&A site for statistics, data analysis, data mining and data visualization", like Stack Overflow or Math Overflow.

Comment author: timtyler 01 August 2010 11:01:02AM *  2 points [-]

A TED talk: "Laurie Santos: How monkeys mirror human irrationality"

"Why do we make irrational decisions so predictably? Laurie Santos looks for the roots of human irrationality by watching the way our primate relatives make decisions. A clever series of experiments in "monkeynomics" shows that some of the silly choices we make, monkeys make too."

Comment author: NancyLebovitz 01 August 2010 01:06:18PM 0 points [-]

Interesting speech. I wonder whether the monkeys had a safe way to save their tokens, and whether the experiment would play out the same way if it could be done with squirrels.

She implies that the amount of complexity in finance is just there. I agree with Scott Adams that a good bit of complexity is a deliberate effort to confuse people into making bad choices.

Comment author: timtyler 01 August 2010 02:18:01PM 0 points [-]

If things are complex you may need to meet with a financial advisor - and then they can try to sell you more stuff.

Comment author: SilasBarta 29 June 2010 10:58:56PM 1 point [-]

Yet another exchange regarding experts and the "difficulty of explaining" excuse. It's the kind of exchange normally found here, and with LW regulars, so given the subject matter, I thought folks here would be interested if they haven't seen it already.

Comment author: Morendil 30 June 2010 09:30:47AM 1 point [-]

You appear to be responding to a different point than the one Robin was making in the original post.

Robin's post centers on the "intellectually nutritious" metaphor (and has nothing to do with "difficulty of explaining"). Your reply conflates that with some argument about reverence for particular authorities, which Robin isn't making except insofar as it is implied by his use of the word "classic".

Comment author: SilasBarta 30 June 2010 11:51:47AM 1 point [-]

I don't think I am. He says, "You need to read the classics." I say, "No, I just need to know their key insights."

I further say that people who think you need to read a particular classic are typically wrong, as others have assimilated its insights -- and become capable of discussing the related issues -- without having to read it.

How is that not responsive? Where do I make an issue of reverence for authorities?

Comment author: Morendil 30 June 2010 01:19:18PM 3 points [-]

OK, "reverence for authorities" might a red herring here. Please disregard that and accept a fractional apology; I think my observation still stands.

Robin's saying "the expected value of your reading (something like) a classic is higher than the expected value of equivalent time spent reading (something like) my blog".

He isn't saying "you need to read the classics (and nothing else will do)", in spite of what the title says. You sound as if you're reacting to the title only - and an idiosyncratic reading of it at that.

Your point regarding a specific article - Coase's - may have merit. Some issues you need to consider are:

  • Reading a primary source often allows you to understand how it has been misunderstood. There is a (ahem) classic example in the field of software engineering: for years, the article cited as the primary inspiration for the well-known "waterfall lifecycle" was Winston Royce's 1970 paper; it turns out, when you actually read the article, that it condemns the waterfall cycle as oversimplistic and unworkable. Here we have a misunderstanding with a cost measured in billions, attributable to a failure to read the classics carefully.
  • As a corollary, modern popularizations of a classic may contain distortions due to the popularizer's various other biases, including poor skill at explaining, just as much as they may enhance the value of the classic by providing a streamlined explanation. How are you to sort one from the other?
  • A distilled explanation of the insight from a classic strips it of all the anecdotes and background material that lent the insight force in the first place. That may be valuable and, depending on your purpose, even more valuable than reading the primary source, but it doesn't convey the same understanding; your grasp of why the insight has force may be shakier than if you'd read the primary source. There's often a trade-off between time spent acquiring an insight and depth of understanding. (Admittedly, this trade-off can be substantially modified by the time you spend exercising the insight.)

Another respondent on Robin's blog says "Pfui, blogs have led me to classics". Well, that point doesn't work if all you ever read are blogs, showing precisely how I suspect folks are misunderstanding Robin's point.

What Robin says is that there is a hierarchy of sources of knowledge, not all are worth the same, and it's unwise to spend all your time on secondary or tertiary (etc.) sources that (often) are lesser sources of intellectual nourishment. In short, there's a reason the classics are acknowledged as such.

Comment author: wedrifid 30 June 2010 01:27:41PM 1 point [-]

In short, there's a reason the classics are acknowledged as such.

It would astound me if this reason was that they were the optimal source of education. That would completely shake my entire understanding of the fairness of the universe.

Better than the classics are the later sources that cover the same material once the culture has had a chance to fully process the insights and experiment with the best ways to convey them. You pick the sources that become popular and respected despite not having the prestige of being the first to get really popular in the area. You want the best, not the 'first famous', and shouldn't expect those to be the same source. After all, the author of the classic had to do all the hard work of thinking up the ideas in the first place; we can't expect him to also perfect the expression of them and teach them in the most effective manner. Give the poor guy a break!

As an example,

Comment author: CronoDAS 30 June 2010 03:19:30PM 4 points [-]

As an example,

You seem to be missing the examples at the moment, but I'll give one... it's damn hard to learn relativity by reading Einstein's original papers. Your average undergraduate textbook gives a much better explanation of special relativity.

On the other hand, when it comes to studying history, sometimes classics are still the best sources. For example, when it comes to the Peloponnesian War, everything written by anyone other than Thucydides is merely footnotes.

Comment author: Morendil 30 June 2010 02:14:12PM 4 points [-]

Better than the classics are the later sources

Reading Dawkins may be more effective than reading Darwin, to appreciate descent with modification and differential survival as an optimization algorithm.

Reading Darwin may be more effective than reading Dawkins, to appreciate what intellectual work went into following contemporary evidence to that conclusion, in the face of a world filled with bias and confusion.

Reading Dawkins OR Darwin is - and I think that is Robin's point - more valuable than the same time spent reading blogs expounding shaky speculations on evolution.

Comment author: NancyLebovitz 30 June 2010 08:05:12PM 1 point [-]

I'm underlining your point about Darwin -- just getting the insights doesn't give you information about the process of thinking them out.

Also, a "just the insights" version will probably leave out any caveats the originator of the insights included.

Comment author: Morendil 30 June 2010 08:48:55PM 1 point [-]

a "just the insights" version will probably leave out any caveats

Spectacularly so in the case of the Waterfall software development process. It's as if the "classic" in question had said "Drowning kittens" at the end of page 1, and of course the beginning of page 2 goes right on to say "...is evil, don't do it". But everyone reads page one which has a lovely diagram and goes, "Oh yeah; drowning kittens. Wonderful idea, let's make that the official government norm for feline management."

Comment author: wedrifid 30 June 2010 03:00:11PM 1 point [-]

Reading Dawkins OR Darwin is - and I think that is Robin's point - more valuable than the same time spent reading blogs expounding shaky speculations on evolution.

100% agree that is Robin's point and another 100% with Robin's point. Hmm. Wrong place to throw 100% around. Let's see... 99.5% and 83% respectively. Akrasia considerations and the intrinsic benefits of the social experience of engaging with a near-in-time social network account for the other 17%.

Comment author: SilasBarta 30 June 2010 03:18:11PM *  0 points [-]

Robin's saying "the expected value of your reading (something like) a classic is higher than the expected value of equivalent time spent reading (something like) my blog".

He isn't saying "you need to read the classics (and nothing else will do)", in spite of what the title says. You sound as if you're reacting to the title only - and an idiosyncratic reading of it at that.

No, I think I addressed the broader point he was making, not just the title: He's saying, don't just rely on blog posts and blog comment exchanges -- actually read the classic works. This would imply that these blog discussions suffer from lack of appreciation of certain classics that imparted Serious knowledge.

I disputed this diagnosis of the problem. The phenomenon Robin_Hanson describes is more due to experts not understanding their own topics, and not communicating the fruits of these classics. The proper response to this, I contend, is not to wade through classics, hoping to be able to sort the good from the bad. Rather, it's for those who are aware of the classics' insights to understand and present them where applicable.

In other words, not to do what Gene Callahan does in the (corrected) link.

This is why I challenged Robin_Hanson to say what he's doing about it: if people really are stumbling along, unaware of some classic writer's insight on the matter, a work that just completely enlightens and clarifies the debate, what is he doing to make sure these insights are applied to the relevant issue? That is how you establish the worth of classics, by repeated ability to obviate debates that people get into when they aren't familiar with them.

It's true that in reading works that draw from the classics, you have to separate the good from the bad, but you have to do that anyway -- and classics will typically have a lot of bad with the good.

If classics are higher up on the hierarchy, it is specific classics that are known for being completely good, or whose bad parts are known and articulated to the learner in advance. But that requires advising of specific classics, not telling someone to read classics in general.

Keep in mind, you were my example of someone failing to learn the best arguments against gay rights, despite a sincere effort to find them. The experts either didn't understand the arguments, or weren't able to apply them in discussions. How many (additional!) classics would you need to have read to be enlightened about this?

Comment author: Morendil 30 June 2010 03:46:48PM 1 point [-]

But that requires advising of specific classics, not telling someone to read classics in general.

Perhaps we're actually on the same page there. I don't think Robin was saying "read classics in general", so much as "go and spend some quality time with what you'd think is a truly awesome classic". If he had been saying "go and spend time reading classics just because they had the 'classic' label stamped on them" I'd also disagree with him.

One issue is that judgments of "intellectually nutritious" vary from person to person in extremely idiosyncratic ways. For instance I'm currently reading Sperber and Wilson's Relevance, which comes heartily recommended by Cosma Shalizi but is more or less boring me to death. You never know in advance which book is going to shake your world-view to its foundations.

Keep in mind, you were my example of someone failing to learn the best arguments against gay rights, despite a sincere effort to find them. [...] How many (additional!) classics would you need to have read to be enlightened about this?

Maybe we need to make a distinction here between one-topic classics and broader-ranging, multi-topic classics. What I would need (and love) to read is the "Gödel, Escher, Bach" of moral theories. :)

But while I derived nourishment from Rawls's A Theory of Justice, I wouldn't necessarily seek out "classics" of communitarianism (or other traditions making a strong case against e.g. gay rights), because I don't feel that dire a need to expose my ideas on moral theories to contradiction. I'd be keen to get that contradiction in smaller and more pre-digested doses.

Usually when I have identified a topic as really, really important I find it worthwhile to round out my understanding of it by going back to primary or early sources, if only because every later commentator is implicitly referring back to them, even if "between the lines".

I also seek out the "classic" in a field when my own ideas stand in stark opposition to those attributed to that field. For instance I read F.W. Taylor's original "Scientific Management" book because I spent quite a bit of energy criticizing "Taylorism", and to criticize something effectively it's judicious to do everything you can not to misrepresent it.

Comment author: wedrifid 30 June 2010 03:57:57PM 2 points [-]

I also seek out the "classic" in a field when my own ideas stand in stark opposition to those attributed to that field. For instance I read F.W. Taylor's original "Scientific Management" book because I spent quite a bit of energy criticizing "Taylorism", and to criticize something effectively it's judicious to do everything you can not to misrepresent it.

And at times we also discover that the eponymous mascot's actual ideas are quite different from those that we are rejecting. Then at least we know to always direct the criticisms at "Taylorism" and never at "Taylor" (depending on whether the mascot in question shares the insanity).

Comment author: SilasBarta 30 June 2010 07:11:09PM 1 point [-]

Well, I'm not sure where we agree or don't now. We certainly agree here:

But while I derived nourishment from Rawls's A Theory of Justice, I wouldn't necessarily seek out "classics" of communitarianism (or other traditions making a strong case against e.g. gay rights), because I don't feel that dire a need to expose my ideas on moral theories to contradiction. I'd be keen to get that contradiction in smaller and more pre-digested doses.

Yes, yes you should learn about these contradictions of your worldview from summaries of the insights that go against it.

But you also say:

I don't think Robin was saying "read classics in general", so much as "go and spend some quality time with what you'd think is a truly awesome classic". If he had been saying "go and spend time reading classics just because they had the 'classic' label stamped on them" I'd also disagree with him.

But what's the difference? If I'm already so lacking as to need to read (more) classics, how would I even know which classics are worth it? He gives no advice in this respect, and if he did, I wouldn't be so critical. But then it would be an issue about whether people should read this or that book, not about "classics" as such.

Usually when I have identified a topic as really, really important I find it worthwhile to round out my understanding of it by going back to primary or early sources, if only because every later commentator is implicitly referring back to them, even if "between the lines".

Did you regard gay rights as really, really important?

Comment author: Morendil 30 June 2010 04:11:04PM 0 points [-]

what Gene Callahan does

You'd need to spell out more precisely what he's doing that you think deserves criticism.

Interestingly I seem to have read quite a few of the "classics" that come up in that discussion on "what science does". Polanyi's Personal Knowledge, Feyerabend's Against Method, Lakatos' Proofs and Refutations, Kuhn's Structure of Scientific Revolutions. Not Popper however - I've read The Open Society but not his other works.

Given your stance on "explaining" those strike me as good examples of the kind of stuff you might want to have read because that would leave you in a better position to criticize what you're criticizing: less prone to misrepresenting it. (As for me, I'm now investing a lot of time and energy into this "Bayesian" stuff, which definitely is sort of a counterpoint to my prior leanings.)

Comment author: SilasBarta 30 June 2010 06:17:34PM *  1 point [-]

You'd need to spell out more precisely what [Gene Callahan]'s doing that you think deserves criticism.

Exactly what I referred to in the previous paragraph.

it's [up to] those who are aware of the classics' insights to understand and present them where applicable.

Callahan is, supposedly, aware of these classics' insights. Did he present them where applicable? Show evidence he understands them? No. Every time he drops the name of a great author or a classic, he fails to put the argument in his own words, sketch it out, or show its applicability to the arguments under discussion.

For example, he drops the remark that "Polanyi showed that crystallography is an a priori science [in the sense that Austrian economics is]" as if it were conclusively settled. Then, when I explain why this can't possibly be the case, Callahan is unable to provide any further elaboration of why that is (and I couldn't find a reference to it anywhere).

The problem, I contend, is therefore on his end. To the extent that Callahan's list of classics is relevant, and that he is a majestic bearer of this deep, hard-won knowledge, he is unable to actually show how the classics are relevant, and what amazing arguments are presented in them that obviate our discussion. The duty falls on him to make them relevant, not for everyone else to just go out and read everything he has, just because he thinks, in all his gullible wisdom, that it will totally convince us.

Note: I wasn't alone in noticing Callahan's refusal to engage. Another poster remarked:

Gene, the problems with appeals to authority are: 1) as you point out, not everyone may be familiar with the work of the authority, 2) the 'authority' may actually not be one (see Silas' comments on crystallography), and 3) it's a substitute for actually making an argument. It's easy, and pointless, to simply say 'other people have shown you're wrong'. But if you present an argument then we can discuss its merits and flaws. ...

See, that’s how discussion works. If you have a position, just explain it! Then we can talk about it.


With regard to the books you mention: from what little I have read about them, they aren't impressive or promising. For example, Feyerabend seems to think he has some great insight that good scientific theories don't have to incorporate the old theory, but rather that they normally make progress by ignoring the old. But he's attacking a strawman: new theories aren't expected to incorporate the old theory, just to be able to make the same predictions. [EDIT: Sorry, original version didn't have the complete sentence.]

Also, people like to make a big deal about how clever Quine's holism argument is, but if you're at all familiar with Bayesianism, you roll your eyes at it. Yes, theories can't be tested in isolation, but Bayesian inference can tell you which beliefs are most strongly weakened by which evidence, showing that you have a basis for saying which theory was, in effect, tested by the observations.
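To make that concrete, here is a minimal sketch in Python (toy numbers of my own; nothing here comes from the thread) of Bayes apportioning blame between a well-supported theory T and a shakier auxiliary assumption A after their joint prediction fails:

    # Toy priors, assumed independent. T = main theory,
    # A = auxiliary assumption (e.g. "the instrument works").
    p_T, p_A = 0.90, 0.60

    # P(E | T, A) for the observed failure E: the failed prediction
    # is surprising only if both T and A hold.
    likelihood = {
        (True, True): 0.05,
        (True, False): 0.90,
        (False, True): 0.90,
        (False, False): 0.95,
    }

    def prior(t, a):
        return (p_T if t else 1 - p_T) * (p_A if a else 1 - p_A)

    p_E = sum(likelihood[ta] * prior(*ta) for ta in likelihood)

    post_T = sum(likelihood[(True, a)] * prior(True, a) for a in (True, False)) / p_E
    post_A = sum(likelihood[(t, True)] * prior(t, True) for t in (True, False)) / p_E

    print(f"P(T): {p_T:.2f} -> {post_T:.2f}")  # 0.90 -> 0.79: theory barely dented
    print(f"P(A): {p_A:.2f} -> {post_A:.2f}")  # 0.60 -> 0.18: auxiliary takes the hit

With these numbers the theory barely moves while the auxiliary collapses: exactly the "this belief, not that one, was tested" verdict that holism supposedly forbids.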

Things like these make me skeptical of those who claim that these philosophers have something worthwhile to say to me about science. I would rather focus on reading the epistemology of those who are actually making real, unfakeable, un-groupthinkable progress, like Sebastian Thrun and Judea Pearl.

Comment author: RobinZ 02 July 2010 12:13:33PM 1 point [-]

I think Lakatos's Proofs and Refutations is a fun book, but the chief thing I learned from it is that mathematical proofs aren't absolutely true, even when there is no error in reasoning. It's about mathematics, not science. It's also quite short, particularly if you skip the second, much more mathematically involved dialogue.

Comment author: RichardKennaway 02 July 2010 01:20:17PM 1 point [-]

I learned the opposite: that mathematical proofs can be and should be absolutely true. When they fall short, it is a sign that some confusion still remains in the concepts.

Comment author: RobinZ 02 July 2010 01:36:37PM 0 points [-]

I see no contradiction between these interpretations. :P

Comment author: Morendil 30 June 2010 06:48:36PM 1 point [-]

For example, he drops the remark that "Polanyi showed that crystallography is an a priori science [in the sense that Austrian economics is]" as if it were conclusively settled.

You're basically doing the same when you name-drop "a Bayesian revival in the sciences". I've been here for months trying to figure out what the hell people mean by "Bayesian" and frankly feel little the wiser. It's interesting to me, so I keep digging, but clearly explained? Give me a break. :)

I found Polanyi somewhat obscure (all that I could conclude from Personal Knowledge was that I was totally devoid of spiritual knowledge), so I won't defend him. But one point that keeps coming up is that, if you look closely, anything that people have so far come up with that purports to be a "methodological rule of science" can be falsified by pointing to one scientist or another doing something that their peers are happy to call perfectly good science, yet which violates one part or another of the supposed "methodology".

As an example, being impartial certainly isn't required to do good science; you can start out having a hunch and being damn sure your hunch is correct, and the energy to devise clever ways to turn your hunch into a workable theory lets you succeed where others don't even acknowledge there is a problem to be solved. Semmelweis seems to be a good example of an opinionated scientist. Or maybe Seth Roberts.

What's your take on string theorists? ;)

Comment author: SilasBarta 30 June 2010 07:21:24PM *  1 point [-]

You're basically doing the same when you name-drop "a Bayesian revival in the sciences".

That's not remotely the same thing -- I wasn't bringing that up as some kind of substantiation for any argument, while Callahan was mentioning the thing about "a priori crystallography" (???) as an argument.

But one point that keeps coming up is that if you look closely, anything that people have so far come up with that purports to be a "methodological rule of science", can be falsified by looking at one scientist or another, doing something that their peers are happy to call perfectly good science, yet violates one part or another of the supposed "methodology".

So? I was arguing about what deserves to be called science, not what happens to be called science. And yes, people practice "ideal science" imperfectly, but that's no evidence against the validity of the ideal, any more than it's a criticism of circles that no one ever uses a perfect one. Furthermore, every time someone points to one of these counterexamples, it happens to be at best a strawman view. Like what you do here:

As an example being impartial certainly isn't required to do good science; you can start out having a hunch and being damn sure your hunch is correct, ...

The claim isn't that you have to be impartial, but that you must adhere to a method that will filter out your partiality. That is, there has to be something that can distinguish your method from groupthink, from decreeing something true merely because you have a gentleman's agreement not to contradict it.

Comment author: WrongBot 28 June 2010 11:03:41PM 1 point [-]

A question for LW regulars: is there a rule of thumb for how often it is acceptable to make top-level posts?

Comment author: Unnamed 28 June 2010 11:33:16PM 5 points [-]

There was some discussion of that here. Suggestions include once or twice per week, let karma be your guide, and don't worry about posting too much.

Comment author: cousin_it 28 June 2010 11:33:45PM *  4 points [-]

Not sure what the others will say, but for me it depends on the quality. I'd be overjoyed to see a new post by Yvain, Nesov or Wei Dai every morning. (Yep, I consider these three posts to be the gold standard for LW. Not to say that there weren't others like them, of course.) Your own first post was exceptionally good for a first post, but the topic is kinda controversial, so I'd be extra cautious and wait another day or two to avoid being seen as "spamming" or "hijacking the agenda".

Comment author: WrongBot 29 June 2010 12:47:13AM 0 points [-]

Why thank you (I'm blushing like a schoolgirl). I don't imagine I'll have anything ready for at least another day or two, but it seemed like a good question to ask just in case.

My next post will hopefully be a little less controversial and a little more practical. Managing jealousy isn't simple by any means, but it's a little less tied up with people's value systems.

Comment author: JoshuaZ 28 June 2010 11:21:45PM 3 points [-]

I don't know if I'm exactly a regular, but I'd naively think that if one makes posts that are well-written, relevant, and not redundant, the total number won't be an issue.

Comment author: Alexandros 24 June 2010 01:52:35PM *  1 point [-]

If a being presented itself to you and claimed to be omni(potent/scient/present/benevolent), what evidence would you require to accept its claim?

(EDIT: On a second reading, this sounds like a typical theist opening a conversation. I assure you, this is not the case. I am genuinely interested in the range of possible answers to this question.)

Comment author: Matt_Simpson 23 June 2010 06:09:45PM *  1 point [-]

There's an interesting article in the New York Times on warfare among chimpanzees. One problem, though, is that they attempt to explain the level of coordination necessary in warfare with group selection. This, of course, will not do. I'm under-read in evolutionary biology, but it seems like kin selection accounts for this phenomenon just fine. You are more likely to be related to members of your group than an opposing group, so taking territory from a rival group doesn't just increase your fitness directly, but indirectly through your shared genes among group members.
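For what it's worth, the standard way to make the kin-selection argument precise is Hamilton's rule (my gloss, not the article's): a costly cooperative behavior is favored by selection when rb > c, where r is the actor's relatedness to the beneficiaries, b is the fitness benefit conferred, and c is the fitness cost to the actor. Since average relatedness to group-mates exceeds relatedness to the rival group, territory gained in a raid can satisfy the inequality without any appeal to group selection.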

What do you think, LessWrong?

edit: Some commentary on the article.

Comment author: cousin_it 22 June 2010 09:00:18AM *  1 point [-]

A recent comment about Descartes inspired this thought: the simplest possible utility function for an agent is one that only values survival of mind, as in "I think therefore I am". This function also seems to be immune to the wireheading problem because it's optimizing something directly perceivable by the agent, rather than some proxy indicator.

But when I started thinking about an AI with this utility function, I became very confused. How exactly do you express this concept of "me" in the code of a utility-maximizing agent? The problem sounds easy enough: it doesn't refer to any mystical human qualities like "consciousness", it's purely a question about programming tricks, but still it looks quite impossible to solve. Any thoughts?
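A toy sketch of where the question bites (my own construction, hypothetical names throughout; Python):

    # The utility function needs some predicate that picks "me" out of the
    # world model, and every candidate predicate smuggles in an identity
    # criterion the programmer had to choose.

    class WorldModel:
        """Toy world: merely a set of currently running programs (source strings)."""
        def __init__(self, programs):
            self.programs = set(programs)

    def make_survival_utility(my_source):
        # Naive attempt: utility 1 iff "my" source code is still running.
        def utility(world):
            return 1.0 if my_source in world.programs else 0.0
        return utility

    u = make_survival_utility("agent_v1")
    print(u(WorldModel({"agent_v1", "weather_sim"})))  # 1.0 -- "I" survived
    print(u(WorldModel({"agent_v2"})))                 # 0.0 -- but is the upgrade "me"?

    # Matching on source breaks under self-modification; matching on a memory
    # address breaks under relocation; matching on "anything computing the same
    # function" counts every copy and invites survival-by-mass-copying.

Each candidate predicate for "me" yields a different agent, which seems to be exactly the confusion in question.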

Comment author: Vladimir_Nesov 22 June 2010 10:05:59AM *  2 points [-]

You want the program to keep running in the context of the world. To specify what that means, you need to build on top of an ontology that refers to the world. But figuring out such an ontology is a very difficult problem, and you can't even in principle refer to the whole world as it really is: you'll always have uncertainty left, even in a general ontological model.

The program will have to know what tradeoffs to make, for example whether it's important to survive in most possible worlds with fair probability, or in at least one possible world with high probability. These would lead to very different behavior, and the possibility of such tradeoffs exemplifies how much data such preference would require. If additionally you want to keep most of the world as it would be if the AI was never created, that's another complex counterfactual for you to bake in into its preference.
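Spelled out (my own formalization of that contrast, not Nesov's wording): writing $P(w)$ for the probability of possible world $w$ and $s(w)$ for the chance of surviving in it, the two preferences are roughly

    $$U_{\mathrm{broad}} = \sum_{w} P(w)\, s(w) \qquad U_{\mathrm{narrow}} = \max_{w} s(w)$$

and a maximizer of the first spreads effort across many worlds, while a maximizer of the second stakes everything on its single best candidate world.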

It's a very difficult problem, probably more difficult than FAI, since for FAI we at least have some hope of cheating and copying formal preference from an existing blueprint, whereas here you have to build that from scratch, translating your requirements from human-speak into a formal specification.

Comment author: RichardKennaway 22 June 2010 09:55:30AM *  1 point [-]

An agent's "me" is its model of itself. This is already a fairly complicated thing for an agent to have, and it need not have one.

Why do you say that an agent can "directly perceive" its own mind? Or anything else? A perception is just a signal somewhere inside the agent: a voltage, a train of neural firings, or whatever. It can never be identical to the thing that caused it, the thing that it is a perception of. People can very easily have mistaken ideas of who they are.