
Rational Me or We?

106 Post author: RobinHanson 17 March 2009 01:39PM

Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment.  If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense.  But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts.  Similarly, while "survivalists" plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.

The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info and assuming the worst about other folks.  In this context, a good rationality test is a publicly-visible personal test, applied to your personal beliefs when you are isolated from others' assistance and info.  

I'm much more interested in how we can join together to believe truth, and it actually seems easier to design institutions which achieve this end than to design institutions to test individual isolated general tendencies to discern truth.  For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.  We don't each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it. 
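The "subsidized" part can be made concrete with the logarithmic market scoring rule (LMSR) that Hanson himself proposed for such markets. A minimal sketch, where the liquidity parameter `b` and the example trade are chosen purely for illustration:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Current price of each outcome (the market's consensus probability)
    is the marginal cost, i.e. softmax(q / b)."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def buy(quantities, outcome, shares, b=100.0):
    """Return the new share quantities and the cost a trader pays
    to buy `shares` of `outcome` (cost after minus cost before)."""
    before = lmsr_cost(quantities, b)
    quantities = list(quantities)
    quantities[outcome] += shares
    return quantities, lmsr_cost(quantities, b) - before

# A two-outcome market starts at 50/50; a confident trader moves the price.
q = [0.0, 0.0]
print(lmsr_prices(q))        # [0.5, 0.5]
q, cost = buy(q, 0, 50.0)
print(lmsr_prices(q), cost)  # outcome 0 now priced above 0.5; cost is positive
```

The subsidy is the market maker's guaranteed maximum loss (at most `b * log(n)` for `n` outcomes), which is what pays traders for contributing information on the topics they know best.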

Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage.  But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus.

Comments (127)

Comment author: Sideways 17 March 2009 06:26:22PM 28 points [-]

Robin Hanson has identified a breakdown in the metaphor of rationality as martial art: skillful violence can be more or less entirely deferred to specialists, but rationality is one of the things that everyone should know how to do, even if specialists do it better. Even though paramedics are better trained and equipped than civilians at the scene of a heart attack, a CPR-trained bystander can do more to save the victim's life because of the paramedics' response time. Prediction markets are great for governments, corporations, or communities, but if an individual's personal life has gotten bad enough to need the help of a professional rationalist, a little training in "cartography" could have nipped the problem in the bud.

To put it another way, thinking rationally is something I want to do, not have done for me. I would bet that Robin Hanson, and indeed most people, respect the opinions of others in proportion to how rational those others are. So the individual impulse toward learning to be less wrong is not only a path to winning, but a basic value of a rationalist community.

Comment author: mark_spottswood 17 March 2009 08:51:10PM 7 points [-]

One can think that individuals can profit from being more rational, while also thinking that improving our social epistemic systems or participating in them actively will do more to increase our welfare than focusing on increasing individual rationality.

Comment author: ciphergoth 17 March 2009 11:49:28PM 3 points [-]

Another thing that you must do for yourself is politics; sadly EY is right that we can't start discussing that here.

Comment author: anonym 17 March 2009 05:28:41PM 8 points [-]

One way that I think OB could have been better, and that LW could be more helpful, is to put a greater emphasis on practice and practical techniques for improving rationality in the writings here, and to give many more real-life examples than we do.

When making a post that hints at any kind of a practical technique, posters could really make an effort to clearly identify the practical implications and techniques, to put all the practical parts together in the essay rather than mixing them throughout 15 paragraphs of justification and reasoning, and to highlight that practical part of the post.

The practical parts could be extracted and placed together somewhere in order to have one single place that people can go to easily find them. Perhaps the LW software could provide some kind of support for distinguishing the practice sections of a post, and the extraction and aggregation of the practical howto sections could be automated.

Comment author: Court_Merrigan 18 March 2009 02:09:48AM 1 point [-]

Hear, hear. Practice and practical techniques. Isn't that what we're after here?

Comment author: Emile 17 March 2009 02:46:16PM 6 points [-]

Maybe personal finance is a better analogy than Martial Arts. It's useful for nearly anybody to know about personal finance, yet many people are lacking even in the basics. Some high-falutin stock market concepts may not be useful to the average joe, the same way advanced rationality ("better than Einstein") may not be needed, but still, education about the basics is useful.

Comment author: RobinHanson 17 March 2009 05:00:04PM 2 points [-]

Sure, most prediction market traders could stand to review some rationality basics.

Comment author: markrkrebs 26 February 2010 01:37:39PM 5 points [-]

I love your thesis and metaphor, that the goal is for us all jointly to become rational, seek, and find truth. But I do not "respect the opinions of enough others." I have political/scientific disagreements so deep and frequent that I frequently just hide them and worry. I resonated best with your penultimate sentence: "humanity's vast stunning cluelessness" does seem to be the problem. Has someone written on the consequences of taking over the world? The human genome, presumptively adapted to forward its best interests in a competitive world, may have only limited rationality, inadequate to the tasks of altruism, global thinking, and numerical analysis. By this last phrase I refer to our overreaction to a burning skyscraper, when an equal number of deaths monthly on freeways, by being less spectacular or poignant, motivates a disproportionately low response. Surely the difference there is a "gut" reaction, not a cogent one. We need to change what we care about, but we're hardwired to worry about spectacle, perhaps?

Comment author: wedrifid 26 February 2010 02:19:09PM *  3 points [-]

We need to change what we care about, but we're hardwired to worry about spectacle, perhaps?

Unusual threat by a rival tribe. Retaliation necessary. Excuse to take politically self serving moves by surfing a tide of patriotic sentiment. That sort of thing. What you would expect monkeys to care about.

Comment deleted 17 March 2009 05:02:38PM *  [-]
Comment author: scientism 17 March 2009 07:32:50PM 1 point [-]

Can you offer any examples of generalists (and/or rationalists) who have produced significant insights besides Eliezer? When I look at history, I see subject specialists successfully branching out into new areas and making significant progress, whereas generalists/rationalists have failed to produce any significant work (look at philosophy).

Comment author: anonym 17 March 2009 11:47:22PM *  15 points [-]

Leibniz, Da Vinci, Pascal, Descartes, and John von Neumann spring immediately to mind for me.

There's also Poincaré, often considered the last universalist. Kant is famous as a philosopher, but also worked in astronomy. Bertrand Russell did work in philosophy as well as mathematics, and was something of a generalist. Noam Chomsky is the linguist of the 20th century, and if you consider any of his political and media analysis outside of linguistics to be worthwhile, he's another. Bucky Fuller. Charles Peirce. William James. Aristotle. Goethe. Thomas Jefferson. Benjamin Franklin. Omar Khayyám.


Just thought of Gauss, who in addition to his work in mathematics did considerable work in physics.

Herbert Simon: psychology and computer science (got an economics Nobel).

Alan Turing: don't know how I could have forgotten him.

Norbert Wiener.

Comment author: Yvain 17 March 2009 11:57:48PM 8 points [-]

Good answers. Also, Pierre-Simon Laplace, one of the inventors of Bayesian statistics, was also an excellent astronomer and physicist (and briefly the French Minister of the Interior, of all things)

Comment author: anonym 18 March 2009 12:02:11AM 1 point [-]

Yeah, Laplace certainly belongs close to the top of any such list.

Comment author: scientism 18 March 2009 12:13:38AM 2 points [-]

There's probably a few in there. I won't try to dispute them on a case by case basis. There are, on the other hand, literally thousands of specialists who have achieved more impressive feats in their fields than many of the people you cite. (I take straightforward exception to Chomsky, who founded a school of linguistics that's explicitly anti-empirical.)

Comment author: komponisto 18 March 2009 12:44:37AM 4 points [-]

Not to defend anything specific about Chomsky's program, but "anti-empirical" is unfair. "Anti-empiricist" would be more reasonable (though still missing the point, in my opinion).

Comment deleted 18 March 2009 11:47:00AM *  [-]
Comment author: astray 18 March 2009 04:14:03PM 1 point [-]

Another method may be to list the top 10 achievements first and then check whether each was made by a specialist or a generalist. I imagine Prometheus was a generalist.

Comment author: anonym 18 March 2009 04:36:41PM *  1 point [-]

This is a good idea. But I think 10 is too few. It would be better to pick the top 100 or 200, and see how many people who contributed to multiple fields are on the list.

I've not created such a list first; instead I thought about which of the people I listed above did something that would belong on it, so feel free to take possible confirmation bias on my part into account. Even after trying to correct for that, though, I think many of the following accomplishments would be on the list:

  • Calculus: Leibniz, Newton
  • Physics: Newton [forgot about Newton originally, but he was a generalist]
  • Entscheidungsproblem, Turing machine: Turing
  • Too much important math to list: Gauss
  • Contributions to quantum mechanics, economics & game theory, computer science (we're using a von Neumann-style computer), set theory, logic, and much else: von Neumann
Comment author: scientism 18 March 2009 06:22:10PM 2 points [-]

It's worth remembering that what we're looking for is not just people who contributed to multiple fields but generalists/rationalists: people who took a "big picture" view. (I'm willing to set aside the matter of whether their specific achievements were related to their "big picture" view of things since it will probably just lead to argument without resolution.) Leibniz would definitely fall into that category, for example, but I'm not sure Newton would. He had interests outside of physics (religion/mysticism) but they weren't really related to one another.

Comment author: thomblake 18 March 2009 06:46:18PM *  1 point [-]

It should be noted that Turing and Shannon both studied with Norbert Wiener, and he might have come up with most of their interesting ideas (and possibly von Neumann's as well). Also, Wiener founded the study of cybernetics, made notable contributions to gunnery, and made the first real contribution to the field of computer ethics.

ETA: not to discredit the work of Turing, Shannon, and von Neumann, but rather to note that Wiener is definitely someone who made major contributions and should be on the 'generalists' list.

Comment author: anonym 19 March 2009 06:24:54AM *  2 points [-]

Wiener is on the original list I gave a couple of posts up.

Do you have a reference for Turing studying with Wiener and Turing getting his ideas from him? I checked all pages in Hodges's biography of Turing that mention Wiener, and none of them mention that he studied with Wiener.

Turing's Entscheidungsproblem paper (which also introduced the Turing machine) was published in 1936. The only (in-person) connection between them I found (though I didn't search other than checking the bio) is that Wiener spoke with Turing about cybernetics in 1947 while passing by on his way to Nancy.

Are there specific discoveries you believe are falsely attributed to Turing, von Neumann, or Shannon, and can you provide any evidence?

Comment deleted 17 March 2009 08:09:03PM *  [-]
Comment author: gwern 18 March 2009 04:51:20PM 5 points [-]

Roko: rather than picking names at random, it'd be better to start with a survey of the historical literature. Fortunately, the search and statistical ranking have already been done in Human Accomplishment.

For the combined science index, we get:

  • Newton
  • Galileo
  • Aristotle
  • Kepler
  • Lavoisier
  • Descartes
  • Huygens
  • Laplace
  • Einstein
  • Faraday

It's a list that seems reasonable to me, as surprising as Lavoisier, Huygens, and Faraday may be.

Comment author: astray 18 March 2009 04:11:51PM 4 points [-]

Darwin was almost preempted by Wallace. Newton and Leibniz arrived at the same calculus independently, and similar work was done by Seki Kowa at the same time. They were merely there first and most prominently, but not uniquely. I think to satisfy importance, we want cut vertex scientists and academics.

Comment author: Eliezer_Yudkowsky 18 March 2009 06:09:05PM 2 points [-]

What constitutes a "cut vertex" here depends entirely on how far you want to take the counterfactual. Who do you shoot so that humanity makes no further progress, ever?

Comment author: MichaelHoward 18 March 2009 08:31:41PM 9 points [-]

Stanislav Yevgrafovich Petrov?

Comment author: Michelle 19 March 2009 07:32:15AM 3 points [-]

I think an important issue in this generalist/specialist debate and this attempt to create a list of the most important figures is that the historical time frame may be very relevant.

As the world becomes increasingly complex and fields of study, old and new, become increasingly specialized, would this not affect the ability of a generalist/specialist to produce a significant insight or make a significant contribution?

Perhaps it makes more sense to consider much more recent people as examples if we want to apply this to society as it stands now.

Comment author: thomblake 18 March 2009 06:04:35PM 3 points [-]

Socrates is an odd fellow to have on the list, since there aren't any works by Socrates. If you think Plato should be on the list, feel free to kick Socrates off.

Comment author: MBlume 17 March 2009 11:29:28PM 3 points [-]

As a physicist, I've always been partial to Maxwell's work -- he deduced the induction of a curled magnetic field by a changing electric field solely from mathematical considerations, and from this, was able to guess the nature of light before any other human.

I've mixed feelings about Descartes. The pull of the Cartesian Theater has muddling effects in serious cognitive philosophy. On the other hand, by making the concept explicit, he did make it easier for others to point out that it was wrong.

Comment author: thomblake 18 March 2009 06:12:08PM 2 points [-]

Regarding the Cartesian Theater, I think it obviously had an impact on Global Workspace Theory, which actually seems to be going in the right direction.

And let's not forget Descartes's many other contributions. The coordinate grid and analytic geometry, anyone?

Comment author: Court_Merrigan 18 March 2009 02:15:45AM 2 points [-]

Exactly. Descartes laid the foundation for future progress.

Comment author: rhollerith 17 March 2009 10:55:04PM 3 points [-]

The top-ten list needs Galileo. Galileo > Newton. Galileo > Einstein.

And Berners-Lee? If he had never started the WWW, within 2 years of when he did start it, someone else would have started something very similar. (And his W3C does dumb things.) If you want a contributor to the internet on the list, I humbly suggest J.C.R. Licklider, his protégé Roberts, or one of the four authors of "The End-to-end Argument".

Comment author: Eliezer_Yudkowsky 17 March 2009 10:43:27PM 3 points [-]

Berners-Lee? Recency effect much?

Comment author: John_Maxwell_IV 17 March 2009 11:15:32PM -2 points [-]

Darwin? Seriously? The essential kernel of his theory is so easy to understand that I'm reluctant to give him much credit for inventing it.

Comment author: MBlume 17 March 2009 11:17:29PM 10 points [-]

Massive hindsight bias. Whether we, as a race, are proud of it or not, it wasn't until Darwin, only 150 years ago, that someone seriously suggested and developed it.

Comment author: John_Maxwell_IV 18 March 2009 07:33:03PM *  3 points [-]

Natural selection is the combination of two ideas: 1. Population characteristics change over time if members of the population are systematically disallowed reproduction. 2. Nature systematically disallows reproduction.
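Those two ideas really are short enough to run as a toy simulation. In this sketch, the numeric "trait", the cull-the-bottom-half rule, and the mutation noise are all invented for illustration, not a model of any real population:

```python
import random

def generation(population, noise=0.05):
    """One round of selection: only the larger-trait half of the population
    reproduces (idea 2: nature systematically disallows reproduction), and
    each offspring inherits its parent's trait plus a little variation."""
    survivors = sorted(population)[len(population) // 2:]
    return [p + random.gauss(0.0, noise) for p in survivors for _ in range(2)]

random.seed(0)
pop = [random.gauss(0.0, 1.0) for _ in range(200)]
start = sum(pop) / len(pop)
for _ in range(50):
    pop = generation(pop)

# Idea 1 falls out: systematically disallowing reproduction shifts
# the population's characteristics over time.
print(sum(pop) / len(pop) - start)  # the mean trait has drifted sharply upward
```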

I'm willing to accept that I'm suffering from hindsight bias. But will you at least give me that his theory is much easier to understand than any of the others? And maybe a few guesses on the topic of why it was so hard to think of?

Also, even if an insight is rare, that doesn't mean its bearer deserves credit. Many inventors made important accidental discoveries, and I imagine luck must have factored into Darwin's discovery somehow as well. If 1% of biologists who had gone on the voyage Darwin went on also would have developed the theory, does he still deserve to be on the list of the top ten intellectuals?

Addendum: Here is an argument that ancient scientists and mathematicians don't deserve as much credit as we give them: they were prolific. We have no modern equivalent of Euler or Gauss; John von Neumann was called "the last of the great mathematicians". There are two possibilities here: either the ancient thinkers were smarter than we were, or their accomplishments were more important and less difficult than those of modern thinkers. The Flynn effect suggests that IQs are rising over time, so I'm inclined to believe that their accomplishments were genuinely less difficult.

And even if making new contributions to these fields isn't getting more difficult, surely you must grant that it must become more difficult at some point, assuming that to make a new contribution to a field you must understand all the concepts your contribution relies on, and all the concepts those concepts rely on, etc.

Comment author: MBlume 21 March 2009 04:33:13AM *  6 points [-]

Natural selection is the combination of two ideas: 1. Population characteristics change over time if members of the population are systematically disallowed reproduction. 2. Nature systematically disallows reproduction.

I'm willing to accept that I'm suffering from hindsight bias. But will you at least give me that his theory is much easier to understand than any of the others? And maybe a few guesses on the topic of why it was so hard to think of?

Extraordinarily so, yes -- it does astonish me that no one hit it before. Nonetheless, the empirical fact remains, so...

I suppose the sense of "mystery" people attached to life played into it somewhat.

People were breeding animals, people were selecting them, and... socially there was already some idea of genetic fitness. Men admired men who could father many children. The idea of heredity was there.

Honestly, the more I think of it, the more I share your confusion. It is deeply odd that we were blinded for so long. Perhaps we should work to figure out how this happened, and whether we can avoid it in the future.

I don't think luck can factor in quite as much as you imagine though. We're not attempting to award credit, so much as we are attempting to identify circumstances which tend to produce people who tend to produce important insights. Darwin's insight was incredibly important, and had gone unseen for centuries. To me, that qualifies him.

Even if you put it at a remove, even if you say, well, Darwin was uniquely inspired by his voyage, another biologist could have done the same, then the voyage becomes important. Why didn't another biologist wind up on a voyage like that? What can we do to ensure that inspiring experiences like that are available to future intellectuals? In this way, Darwin's life remains an important data point, even if -- especially if -- we deny that there was anything innately superior about the man.

Addendum: Here is an argument that ancient scientists and mathematicians don't deserve as much credit as we give them: they were prolific.

Agreed, completely -- they pulled the low-hanging fruit from the search space.

Comment author: thomblake 18 March 2009 07:42:07PM *  2 points [-]

I'm confused - do you mean that deism, specifically, made it hard to think of, or easy? And I'm not sure many were deists - I can't find numbers, but I was under the impression deism was always a really small movement.

EDIT: nevermind, reference to deism was removed in an edit.

Comment author: John_Maxwell_IV 19 March 2009 09:35:13PM 3 points [-]

I meant that I thought the fact that so many took for granted the fact that God created the animals was one of the factors that made evolution hard to think of, and Darwin shouldn't get genius status just because he overcame it. But then I remembered Lamarck and thought better of it. I still think it is a weak argument in favor of Darwin not being a genius, though.

Comment deleted 18 March 2009 02:47:18AM [-]
Comment author: VAuroch 17 December 2013 11:44:25PM 4 points [-]

Most truly great insights feel obvious in retrospect.

Comment author: teageegeepea 17 March 2009 11:26:39PM 8 points [-]

I like Eliezer's writing, but I think he himself has described his work as "philosophy of AI". He's been a great popularizer (and kudos to folks like him and Dawkins), but that's different from having "produced significant insights". Or perhaps his insight is supposed to be "We are really screwed unless we resolve certain problems requiring significant insights!".

Comment author: JulianMorrison 18 March 2009 09:12:09PM 1 point [-]

Aubrey De Grey hasn't yet been proved right, so he's a tentative example, but he is a rare biological theorist where most biologists are specialized experimenters.

Comment author: CronoDAS 17 March 2009 08:29:43PM 0 points [-]

Isaac Asimov was a generalist.

Make of that what you will.

Comment deleted 17 March 2009 09:22:02PM [-]
Comment author: scientism 17 March 2009 10:46:20PM 0 points [-]

I think philosophy is a good example. Philosophers are supposed to be more logical/rational than other people and have been generalists until recently (many still are). They have also failed to produce a single significant piece of work on par with anything found in science. Now, some people might disagree with that assessment, but I suspect their counterexamples would be chiefly in specialist sub-disciplines: formal logic, for example. I think to the degree that there has been "good philosophy" it's found under the model of specialists working under the kind of robust institutional framework Robin alludes to rather than individual theorists taking a global perspective (philosophy as martial arts). I can't think of any systematizers I'd credit with discovering truth. I do not think Socrates, Plato, Aristotle and Descartes discovered any substantial truths (Descartes' mathematical work aside) so we probably differ there. Regardless, I think there's a good argument to be made that historically truth has come from robust institutions involving many specialists (such as science) rather than brilliant lone thinkers taking a global perspective.

Comment deleted 17 March 2009 11:24:12PM *  [-]
Comment author: scientism 18 March 2009 12:01:34AM 3 points [-]

There's a huge difference between being considered historically important and having discovered substantial truth. The Bible is historically important. It helped lay the foundations of Western culture. This is hardly disputable. It does not, however, contain much in the way of truth. Nor do the works of Plato and Aristotle.

Comment author: Court_Merrigan 18 March 2009 02:14:49AM 4 points [-]

To take one example: Aristotle laid down the foundation of what became modern science. Modern science became modern science as we think of it by rebelling against Aristotle's a priori assumptions; without Aristotle, what science we have today would be very different, indeed.

I don't think you can so easily dismiss Plato, Aristotle, Descartes, et al: without them we wouldn't be where we are today.

This is part of the problem I often detected at OB and see again here at LW: people with little respect for intellectual history.

Comment author: Eliezer_Yudkowsky 17 March 2009 05:28:16PM 1 point [-]

Roko, great comment, but you should've just Edited. Why delete and repost?

Comment author: RobinHanson 17 March 2009 05:49:50PM 0 points [-]

FYI, I had replied to the previous version of the comment.

Comment author: xamdam 01 July 2010 08:12:31PM 1 point [-]

I'll bet at long odds that a prediction market containing an expert on evo-psych, an expert on each of five narrow AI specialisms, an expert on quantum mechanics, an expert on human biases, an expert on ethics and an expert on mathematical logic would not even have produced FAI as an idea to be bet upon.

I think Robin already pre-answered this, though perhaps with a touch of sarcasm: "Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage."

Comment author: Z_M_Davis 17 March 2009 06:16:17PM 11 points [-]

One problem with trusting the experts rather than trying to think things through for yourself is that you need a certain amount of expertise just to understand what the experts are saying. The experts might be able to tell you that "all symmetric matrices are orthonormally diagonalizable," and you might have perfect trust in them, but without a lot of personal study and inquiry, the mere words don't help you very much.
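For the curious, the quoted claim is the spectral theorem, and a toy instance shows what the words are pointing at. This sketch diagonalizes an arbitrary 2x2 symmetric matrix with a single Jacobi rotation; the example matrix is made up:

```python
import math

def eigh_2x2(a, b, c):
    """Orthonormally diagonalize the symmetric matrix A = [[a, b], [b, c]]:
    find an angle t so that Q^T A Q is diagonal, where Q is the rotation
    [[cos t, -sin t], [sin t, cos t]] (whose columns are orthonormal)."""
    # The Jacobi rotation angle that zeroes the off-diagonal entry.
    t = 0.5 * math.atan2(2.0 * b, a - c)
    c0, s0 = math.cos(t), math.sin(t)
    # Diagonal entries of Q^T A Q: the eigenvalues of A.
    l1 = a * c0 * c0 + 2.0 * b * s0 * c0 + c * s0 * s0
    l2 = a * s0 * s0 - 2.0 * b * s0 * c0 + c * c0 * c0
    return (l1, l2), t

# The symmetric matrix [[2, 1], [1, 2]] has eigenvalues 3 and 1.
(l1, l2), t = eigh_2x2(2.0, 1.0, 2.0)
print(l1, l2)  # approximately 3.0 and 1.0
```

Which, of course, only reinforces the comment's point: running the check is easy, but knowing why it always works for symmetric matrices still takes personal study.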

Comment author: John_Maxwell_IV 17 March 2009 11:18:47PM 5 points [-]

That doesn't matter if the expert can say "hire this guy", "invest in this company", "vote for this guy", or "donate to this charity". If you're doing some sort of complicated action with careful integration of expert advice, then it's probably worthwhile becoming at least a semi-expert yourself.

Comment author: mark_spottswood 17 March 2009 08:53:49PM 4 points [-]

Experts don't just tell us facts; they also offer recommendations as to how to solve individual or social problems. We can often rely on the recommendations even if we don't understand the underlying analysis, so long as we have picked good experts to rely on.

Comment deleted 18 March 2009 02:53:03AM [-]
Comment author: mark_spottswood 18 March 2009 03:20:14PM 1 point [-]

True. But it is still easier in many cases to pick good experts than to independently assess the validity of expert conclusions. So we might make more overall epistemic advances by a twin focus: (1) Disseminate the techniques for selecting reliable experts, and (2) Design, implement and operate institutions that are better at finding the truth.

Note also that your concern can also be addressed as one subset of institutional design questions: How should we reform fields such as medicine or economics so that influence will better track true expertise?

Comment author: Vladimir_Nesov 17 March 2009 07:57:05PM 4 points [-]

All the worse if you are convinced that God hates diagonalizable matrices, and so you prefer not to believe the heathen.

Comment deleted 17 March 2009 07:34:15PM [-]
Comment author: Court_Merrigan 18 March 2009 02:07:35AM 0 points [-]

But there are simply far too many areas of life involving putative "orthonormally diagonalizable matrices" for any one individual to be able to rationally investigate. At some point you have to take someone's word for it; so rather than taking one expert's word, you're likely better off trusting a community of experts. A current example might be with global warming - most scientists seem to feel it's a major issue.

Unfortunately, though, radical changes in thinking usually come from the margin, e.g., Galileo. The hard part, it seems to me, is to distinguish between mere status quo convention and genuine expert agreement.

Comment author: jimmy 17 March 2009 11:05:24PM 8 points [-]

Following the martial arts analogy, I guess that makes Robin a supporter of "Rationalist Gangs".

Comment author: Alan 17 March 2009 04:25:09PM 3 points [-]

Robin wrote: "Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment." But this does not mean that martial arts cannot also be good training if you assume a more benign environment. Environments are known to be unpredictable.

One of the most important insights a person gains from martial arts training is to understand one's limits--which relates directly to the bias of overconfidence. If martial arts training enables a person to project an honestly greater degree of self confidence, then the signaling benefit alone may merit the effort. Does rationality training confer analogous signaling benefits?

Comment author: Nebu 17 March 2009 05:25:38PM 2 points [-]

One of the most important insights a person gains from martial arts training is to understand one's limits--which relates directly to the bias of overconfidence.

Good point. Fortunately, I think the OB and LW blogs have helped me understand my limits, in the sense that it showed me many errors-in-rationality in the ways I used to (and unfortunately, currently do still) think.

If martial arts training enables a person to project an honestly greater degree of self confidence, then the signaling benefit alone may merit the effort. Does rationality training confer analogous signaling benefits?

It probably does. If you go to cocktail parties tossing around terms like "Bayesian updating with Occam priors" or "Epistemic rationality" and sound like you really know what you're talking about, then you'll probably exude this signal of being a fairly smart person.

But you have to ask yourself if your goal is to sound smart, or to actually be smart.

Comment author: Eliezer_Yudkowsky 17 March 2009 05:48:40PM 24 points [-]

Yes, it would be silly to think of ourselves as isolated survivalists in a society where so many people are signed up for cryonics, where Many-Worlds was seen as retrospectively obvious as soon as it was proposed, and no one can be elected to public office if they openly admit to believing in God. But let us be realistic about which Earth we actually live in.

I too am greatly interested in group mechanisms of rationality - though I admit I put more emphasis on individuals; I suspect you can build more interesting systems out of smarter bricks. The obstacles are in many ways the same: testing the group, incentivizing the people in it. In most cases if you can test a group you can test an individual and vice versa.

But any group mechanism of that sort will have the character of a band of survivalists getting together to grow carrots. Prediction markets are lonely outposts of light in a world that isn't so much "gone dark" as having never been illuminated to begin with; and the Policy Analysis Markets were burned by a horde of outraged barbarians.

We have always been in the Post-Apocalyptic Rationalist Environment, where even scientists and academics are doing it wrong and Dark Side Epistemology howls through the street; I don't even angst about this, I just take it for granted. Any proposals for getting a civilization started need to take into account that it doesn't already exist.

Comment author: RobinHanson 17 March 2009 11:02:05PM 10 points [-]

Sounds like you do think of yourself as an isolated survivalist in a world of aliens with which you cannot profitably coordinate. Let us know if you find those more interesting systems you suspect can be built from smarter bricks.

Comment author: Eliezer_Yudkowsky 18 March 2009 12:28:58AM 15 points [-]

It's pretty hard to be isolated in a world of six billion people. The key questions are rather the probability of coordinating with any randomly selected person on a rationalist topic of fixed difficulty, and the total size of the community available to support some number of institutions.

To put it bluntly, if you built the ideal rationalist institution that requires one million supporters, you'd be in trouble because the 99.98th percentile of rationality is not adequate to support it (and also such rationalists may have other demands on their time).
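(The population arithmetic behind that percentile figure, sketched under the assumption of roughly six billion people at the time:)

```python
# Back-of-the-envelope check: an institution needing one million
# supporters, drawn from a world population of ~6 billion, must
# recruit everyone above roughly the 99.98th percentile.
world_population = 6_000_000_000
supporters_needed = 1_000_000

percentile = 100 * (1 - supporters_needed / world_population)
print(f"{percentile:.2f}")  # 99.98
```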

But if you can build institutions that grow starting from small groups even in a not-previously-friendly environment, or upgrade rationalists starting from the 98th percentile to what we would currently regard as much higher levels, then odds look better for such institutions.

We both want to live in a friendly world with lots of high-grade rationalists and excellent institutions with good tests and good incentives, but I don't think I already live there.

Comment author: AndySimpson 18 March 2009 08:44:23AM 8 points [-]

Even in the most civilized civilizations, barbarity takes place on a regular basis. There are some homicides in dark alleys in the safest countries on earth, and there are bankruptcies, poverty, and layoffs even in the richest countries.

In the same way, we live in a flawed society of reason, which has been growing and improving with starts and fits since the scientific revolution. We may be civilized in the arena of reason in the same way you could call Northern Europe in the 900s civilized in the arena of personal security: there are rules that nearly everyone knows and that most obey to some extent, but they are routinely disrespected, and the only thing that makes people really take heed is the theater of enforcement, whether that's legally-sanctioned violence against notorious bandits or a dressing-down of notorious sophists.

Right now we are only barely scraping together a culture of rationality; it may have a shaky foundation and many dumber bricks, but it seems a bit much to say we don't have one.

Comment author: RobinHanson 18 March 2009 12:41:21PM 3 points [-]

Let us distinguish "truth-seekers", people who respect and want truth, from "rationalists", people who personally know how to believe truth. We can build better institutions that produce truth if only we have enough support from truth-seekers; we don't actually need many rationalists. And having rationalists without good institutions may not produce much more shared accessible truth.

Comment author: Yvain 18 March 2009 02:22:40PM *  15 points [-]

I'm not sure I can let you make that distinction without some more justification.

Most people think they're truth-seekers and honestly claim to be truth-seekers. But the very existence of biases shows that thinking you're a truth-seeker doesn't make it so. Ask a hundred doctors, and they'll all (without consciously lying!) say they're looking for the truth about what really will help or hurt their patients. But give them your spiel about the flaws in the health system, and in the course of what they consider seeking the truth, they'll dismiss your objections in a way you consider unfair. Build an institution that confirms your results, and they'll dismiss the institution as biased or flawed or "silly". These doctors are not liars or enemies of truth or anything. They're normal people whose search for the truth is being hijacked in ways they can't control.

The solution: turn them into rationalists. They don't have to be black belt rationalists who can derive Bayes' Theorem in their sleep, but they have to be rationalist enough that their natural good intentions towards truth-seeking correspond to actual truth-seeking and allow you to build your institutions without interference.

Comment author: MichaelBishop 18 March 2009 03:29:17PM 5 points [-]

"The solution: turn them into rationalists."

You don't say how to accomplish this. Would it require (or at least benefit greatly from) institutional change?

Comment author: RobinHanson 24 March 2009 12:48:39PM 3 points [-]

I had in mind that you might convince someone abstractly to support, e.g., prediction markets because they promote truth, and that they would then accept the results of such markets even if those results disagreed with their intuitions. They don't have to know how to bet well in such markets to accept that they are a better truth-seeking institution. But yes, being a truth-seeker can be very different from believing that you are one.

Btw, I only just discovered the "inbox" that lets me find responses to my comments.

Comment author: Eliezer_Yudkowsky 18 March 2009 06:12:45PM 8 points [-]

This sounds like you're postulating people who have good taste in rationalist institutions without having good taste in rationality. Or you're postulating that it's easy to push on the former quantity without pushing on the latter. How likely is this really? Why wouldn't any such effort be easily hijacked by institutions that look good to non-rationalists?

Comment deleted 18 March 2009 05:33:46AM [-]
Comment author: MichaelBishop 18 March 2009 03:07:38PM 2 points [-]

I'm surprised to see this go negative.

Granted, Marshall didn't explain his position in any detail. But his position is not indefensible, and I'm glad he's willing to share it.

Comment deleted 18 March 2009 01:45:04PM *  [-]
Comment deleted 18 March 2009 06:48:51PM [-]
Comment deleted 18 March 2009 08:15:01PM *  [-]
Comment author: pjeby 18 March 2009 10:45:32PM 2 points [-]

It doesn't take much - just one jerk systematically downvoting a page or two of your existing comments. I lost like 37 points in less than an hour that way a few days ago. We really need separate up/down counts, or better yet ups and downs per voter, so you can ignore systematic friend upvotes and foe downvotes.

Comment author: Eliezer_Yudkowsky 18 March 2009 10:53:07PM 1 point [-]

Are we already getting this behavior? I'll have to start looking into voting patterns... Sigh.

Comment author: ciphergoth 18 March 2009 11:10:05PM 1 point [-]

Have you looked at Raph Levien's work on attack resistant trust metrics?

Comment author: Emile 19 March 2009 10:56:54AM *  0 points [-]

Couldn't it also be due to a change in the karma calculation rules in order to, say, not take your own upvote into account in karma calculations? I remember that was mentioned, but don't know if it has been implemented in the meantime.

Edit: Well, it seems that it isn't implemented yet, since posting this got me a karma point :)

Comment author: ciphergoth 17 March 2009 11:38:17PM 9 points [-]

Putting so much work into talking about these things isn't the act of an isolated survivalist, though.

Comment deleted 17 March 2009 02:39:29PM [-]
Comment author: RobinHanson 17 March 2009 02:43:32PM 5 points [-]

Yes, you are right that designing need not be the hard part. So I just changed "design" to "design and field."

Comment author: vizikahn 17 March 2009 03:32:10PM 4 points [-]

As a martial arts enthusiast I have to concur that the practical survivability impact of my training is somewhat limited. In fact, I would go as far as to say that my martial arts training is far less likely to save my life than my previous sporting hobby, running.

My krav maga instructor (a bouncer) used to emphasize that 90% of realistic self-defense is about avoiding trouble, and running is a battle-tested survival technique. I think running was the best way to keep your sanity in Cthulhu role-playing too. So, the first line of self-defense: don't open that old book; run away and read what people at LW are saying.

Comment author: nazgulnarsil 17 March 2009 06:29:06PM 5 points [-]

90% of actual self-defense confrontations involve extremely simple techniques. Hard-core martial arts training is about beating other martial artists. If you just want practical survival skills, you learn the control techniques cops use and just practice.

Comment author: ciphergoth 17 March 2009 02:03:28PM 6 points [-]

On this point, we should also be talking about effective evangelism for rationality.

Comment author: John_Maxwell_IV 17 March 2009 11:24:26PM 1 point [-]

One thing I thought of is to print out a bunch of copies of this paper and start giving it to the Greenpeace activists I see around my community college.

Comment author: mark_spottswood 17 March 2009 02:08:23PM 5 points [-]

Another good example is the legal system. Individually it serves many participants poorly on a truth-seeking level; it encourages them to commit strongly to an initial position and make only those arguments that advance their cases, while doing everything they can to conceal their cases' flaws short of explicit misrepresentation. They are rewarded for winning, whether or not their position is correct. On the other hand, this set-up (combined with modern liberalized disclosure rules) works fairly well as a way of aggregating all the relevant evidence and arguments before a decisionmaker. And that decisionmaker is subject to strong social pressures not to seek to affiliate with the biased parties. Finally, in many instances the decisionmaker must provide specific reasons for rejecting the parties' evidence and arguments, and make this reasoning available for public scrutiny.

The system, in short, works by encouraging individual bias in service of greater systemic rationality.

Comment author: RobinHanson 17 March 2009 02:44:54PM 5 points [-]

The legal system does supposedly encourage individual bias to aggregate evidence; I'm more of a skeptic about how well it actually does this in practice though.

Comment author: mark_spottswood 17 March 2009 02:51:42PM 1 point [-]

Care to explain the basis for your skepticism?

Interestingly, there may be a way to test this question, at least partially. Most legal systems have procedures in place to allow judgments to be revisited upon the discovery of new evidence that was not previously available. There are many procedural complications in making cross-national comparisons, but it would be interesting to compare the rate at which such motions are granted in systems that are more adversarially driven versus more inquisitorial systems (in which a neutral magistrate has more control over the collection of evidence).

Comment author: mtraven 17 March 2009 08:00:20PM 6 points [-]

For whatever reason, the community here (so-called "rationalists") is heavily influenced by overly-individualistic ideologies (libertarianism, or in its more extreme forms, objectivism). This leads to ignoring entire realms of human phenomena (social cognition) and the people who have studied them (Vygotsky, sociologists of science, ethnomethodology). It's not that social approaches to cognition provide a magic bullet -- they just provide a very different perspective on how minds work. Imagine if you stop believing that beliefs are in the head and instead locate them in a community or institution. If interested, you could start with How Institutions Think by Mary Douglas.

Comment author: Yvain 17 March 2009 11:52:35PM *  14 points [-]

I am guilty as charged in being much more familiar with individualistic than socially oriented ideologies.

Why don't you write some posts about techniques or discoveries from socially-oriented science that could help rationalists?

Comment author: topynate 17 March 2009 11:31:14PM *  6 points [-]

I would say Robin Hanson's views on status fit quite well into the gap you perceive. I do find it interesting that status isn't talked about more on Less Wrong.

Maybe I can tie this into what I think about the article. LW's articles do currently take an individualist stance on rationality (although I doubt objectivism has any role in this). The "refinements" they propose are mostly alterations of cognitive habits, not suggested ways of changing group dynamics. But LW as a whole is not simply a bunch of iconoclasts. Rather, there appears to be a clear attempt to collectively change patterns of thought. People write stuff, get +/- karma, feel good/bad, update their beliefs and try again. So even though the content of LW is individually applicable, posters will naturally develop preferred topics of expertise, subjects on which they know enough to benefit the community by what they write. And developing expertise does benefit from the martial arts analogy.

Comment author: wedrifid 20 November 2009 03:00:19AM 4 points [-]

I would say Robin Hanson's views on status fit quite well into the gap you perceive. I do find it interesting that status isn't talked about more on Less Wrong.

Was there a time when we neglected status as a topic? wow. I don't remember that.

Comment author: Carinthium 24 November 2010 09:08:20AM 1 point [-]

The flaw in that is that it ignores dissenters: to some extent, minorities in a community can dissent from the common belief.

Comment author: Vaniver 28 October 2010 01:44:58AM *  1 point [-]

Imagine if you stop believing that beliefs are in the head and instead locate them in a community or institution. If interested, you could start with How Institutions Think by Mary Douglas.

This sounds to me a lot like "Imagine if you stop believing that information is in the genes and locate it in a species."

I don't think institutional effects on thought are a bad thing to study- institutions definitely have massive effects on the environments individuals operate in- but I think assigning thinking entity status to institutions is a bad way to approach that study. Thinking about information stored in species has a long and storied history of making worse predictions than thinking about information stored in genes.

But institutions certainly apply selection pressure on memes, and influence how memes replicate themselves and propagate. The analogy is also somewhat tenuous- institutions are far more fluid (almost by definition) in their boundaries than species. Because of their tremendous impact, institutional design deserves comparable attention to environmental design (architecture, agriculture, lots of smaller fields).

(We do already have those fields, though; the economy is the environment commercial institutions are built for (and other institutions reside in as well), and economists try to study it and design it. Public choice theorists help study the design of (primarily democratic) political institutions.)

Comment author: MichaelBishop 17 March 2009 02:49:48PM 8 points [-]

Robin was kind enough not to say what overemphasizing the heroic individual rationalist implies about our true motivations.

Comment author: anonym 17 March 2009 10:23:33PM *  3 points [-]

That's overly simplistic. Two people might have the same motivations and goals but disagree about the most effective way of achieving those goals. If you think that's not the case, you should give an argument to that effect. If you think it doesn't apply in the particular case that we all know you have in mind, you should give an argument to that effect.

I'm surprised the parent is rated up to 10 points. It indulges in armchair psychologizing with no supporting evidence or reasoning, and it interprets the situation in the least intellectually charitable way and assumes the worst of motivations.

Comment author: Matt_Simpson 17 March 2009 04:50:31PM *  4 points [-]

The rationality dojo seems to be part of a world where "we" work together for truth, at least if you don't take the dojo metaphor too seriously. I assume that training individuals to be more rational is part of your optimal strategy. So I take it that your argument is that we should emphasize individual training less relative to designing institutions which facilitate truth-finding despite our biases. Am I understanding you correctly?

Comment author: RobinHanson 17 March 2009 04:54:45PM 1 point [-]

Yup.

Comment author: James_Miller 17 March 2009 02:18:05PM 4 points [-]

We should learn how to identify trustworthy experts. Is there some general way, or do you have to rely on specific rules for each category of knowledge?

Two examples of rules are: never trust someone's advice about which specific stocks to buy unless the advisor has material non-public information, and be extremely skeptical of statistical evidence presented in Women's Studies journals. Although both rules are probably true, you obviously couldn't trust financial advisers or Women's Studies professors to give them to you.

Comment author: RobinHanson 17 March 2009 02:36:22PM 5 points [-]

Prediction markets can forecast the accuracy or fame of purported experts. But preferably you'd accept the market estimate on your question and so not need to know who is an expert.

Comment author: igoresque 29 March 2009 01:11:28AM 2 points [-]

This is of course exactly the point. People will be people. The solution is to depersonalize, not to pick some fine guy and put faith in him. Trying to find out which experts to trust feels to me like asking which tyrants can best be trusted. Experts are valuable (unlike tyrants), but trust is better placed in a market than in individual people.

Comment author: PhilGoetz 17 March 2009 04:07:26PM 5 points [-]

Have you evaluated statistical evidence in Women's Studies journals?

Comment author: mark_spottswood 17 March 2009 02:26:25PM *  2 points [-]

Obviously it helps if the experts are required to make predictions that are scoreable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.
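(One standard way to score such track records is a proper scoring rule such as the Brier score. A minimal sketch; the forecasts and outcomes below are invented purely for illustration:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and
    binary outcomes; lower is better, 0 is a perfect record."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records of an expert and a layman on the
# same five events (1 = event happened, 0 = it didn't).
outcomes = [1, 0, 1, 1, 0]
expert = [0.9, 0.2, 0.7, 0.8, 0.1]
layman = [0.6, 0.5, 0.5, 0.6, 0.4]

print(brier_score(expert, outcomes))  # lower score: expertise adds value
print(brier_score(layman, outcomes))
```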

Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitted to testify at trial and the rate at which their conclusions are accepted or rejected by courts, for instance. We could similarly track the frequency with which authors have their articles accepted or rejected by journals engaged in blind peer-review (although if the review is less than truly blind, the data might be a better indication of status than of expertise, to the degree the two are not correlated). Finally, citation counts could serve as a weak proxy for trustworthiness, to the degree the citations are from recognized experts and indicate approval.

Comment author: Eliezer_Yudkowsky 18 March 2009 06:22:07PM 2 points [-]

The suggestions from the second paragraph all seem rather incestuous. Propagating trust is great but it should flow from a trustworthy fountain. Those designated "experts" need some non-incestuous test as their foundation (a la your first paragraph).

Comment author: mark_spottswood 18 March 2009 07:30:19PM 4 points [-]

Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field's practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.

Note too that this method could be useful if we think a field is epistemically rotten. If someone is especially trusted by literary theorists, we might want to downgrade our trust in them, solely on that basis.

So the two inquiries complement each other: We want to be able to grade different institutions and fields on the basis of overall trustworthiness, and then pick out particularly good experts from within those fields we trust in general.

p.s. Peer review and citation counting are probably incestuous, but I don't think the charge makes sense in the expert witness evaluation context.

Comment deleted 17 March 2009 04:48:02PM [-]
Comment author: RobinHanson 17 March 2009 04:57:12PM 1 point [-]

Combining two or even three particular topics can be the thing that you specialize in.

Comment author: JulianMorrison 17 March 2009 03:41:21PM 2 points [-]

An attempt to even find Einsteins is doomed unless they make up a large enough fraction of the population. (cf: Eliezer's introduction to Bayes.)

On the other hand, a purely aggregate approach is a dirty hack that somehow assumes no (irrational) individual is ever able to be a bottleneck to (aggregate) good sense. It's also fragile to societal breakdown.

It seems evident to me that what's really urgent is to "raise the tide" and have it "lift all boats". Because then, tests start working and the individual bottleneck is rational.

Comment author: Andrew 17 March 2009 04:47:29PM *  11 points [-]

I predict that aggregate approaches are going to be more common in the future than waiting around for an Einstein-level intelligence to be born.

For example, Timothy Gowers recently began a project (Polymath1) to solve an open problem in combinatorics through distributed proof methods. Current opinion is that they were probably successful; unfortunately, the math is too hard for me to render judgment.

Now, it's possible that they were successful because the project attracted the notice of Terence Tao, who probably qualifies as an Einstein-level mathematician. If you look at the discussion, Tao and Gowers both dominate it. On the other hand, many of the major breakthroughs in the project didn't come from either of them directly, but from other anonymous or pseudo-anonymous comments.

The time of an Einstein or Tao is too valuable for them to do all the thinking by themselves. We agree that raising the tide is absolutely necessary for this kind of project to grow.

Comment author: xrchz 31 October 2009 11:18:23AM *  1 point [-]

For Polymath the kind of desired result of collaboration is clear to me: a (new) (dis-) proof of a mathematical statement.

What is the kind of desired result of collaborating rationalists?

From the talk about prediction markets it seems that "accurate predictions" might be one answer. But predictions of what? Would we need to aggregate our values to decide what we want to predict?

The phrase in Robin's post was "join together to believe truth", so perhaps the desired result is more true beliefs (in more heads)? Did you envision making things that are more likely to be true more visible, so that they become defaults? In other words, caching the results of truth-seeking so they can be easily shared by more people?

Comment deleted 17 March 2009 07:01:29PM [-]
Comment author: ciphergoth 17 March 2009 11:55:13PM 5 points [-]

Spending so much time in front of the screen does not seem sensible or rational.

If you didn't have a better plan for making the world a better place already, then spending time thinking about how to improve the general level of optimisation for good things seems like one of the more productive ways to waste time on the Internet.

Comment author: John_Maxwell_IV 17 March 2009 11:19:36PM 1 point [-]

Several weeks ago I unsuccessfully resolved to start doing community service every weekend.

Comment author: CannibalSmith 17 March 2009 03:33:21PM *  0 points [-]

Aren't we the supposed martial rationalists of humanity? Aren't we the ones being paid (I wish) to protect the neighborhoods from the marauding apologists? Aren't we the ones to go to the wild places and battle dragons?

Comment author: PhilGoetz 17 March 2009 04:04:42PM *  0 points [-]

The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info ...

A very good point!

But I can't easily explain why it is a good point without violating the ban on mention of AI.

This observation doesn't invalidate Less Wrong. Someone still has to study these things. But the emphasis on individualism here can diminish awareness of the big picture.

Comment author: zaph 17 March 2009 01:59:47PM 0 points [-]

I think it was just brainstorming based on Eliezer's post; he also wrote about the sanity waterline, which I see your rational-society approach fitting in with. Maybe a dojo is a bit extreme, but I think a zendo isn't implausible, with people working on rationality koans. Or maybe rationality group therapy, where people can express potential irrationality and receive non-judgemental feedback on it. Grassroots, bottom-up approaches could work with larger top-down approaches to create the rational society, or whatever word Yvain might find less taboo :)

Comment deleted 17 March 2009 05:19:52PM *  [-]
Comment author: steven0461 17 March 2009 05:26:31PM 6 points [-]

No, you should still be swayed, you just shouldn't represent the swaying as being independent analysis. You also should take into account that the opinions of other group members may have been caused by swaying rather than independent analysis, but that was already true in the individual accuracy case.

Comment deleted 18 March 2009 09:28:49PM *  [-]
Comment author: Eliezer_Yudkowsky 18 March 2009 10:04:10PM 0 points [-]

Roko, when you run into a case of "group win / individual loss" on epistemic rationality you should consider that a Can't Happen, like violating conservation of momentum or something.

In this case, you need to communicate one kind of information (likelihood ratios) and update on the product of those likelihood ratios, rather than trying to communicate the final belief. But the Can't Happen is a general rule.
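(The prescription above can be sketched in a few lines. A toy model, assuming each member reports an independent likelihood ratio for the same hypothesis:)

```python
from math import prod

def pooled_posterior_odds(prior_odds, likelihood_ratios):
    """Multiply independent likelihood ratios into the prior odds,
    rather than averaging members' final beliefs."""
    return prior_odds * prod(likelihood_ratios)

def odds_to_prob(odds):
    return odds / (1 + odds)

# Three members each observe independent evidence favoring H at 2:1.
prior_odds = 1.0  # 50/50 prior
ratios = [2.0, 2.0, 2.0]

posterior = odds_to_prob(pooled_posterior_odds(prior_odds, ratios))
print(posterior)  # 8/9, about 0.889

# Naively averaging each member's individual posterior (2/3 each)
# would give only 2/3: the group loses information.
```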

Comment author: MichaelHoward 18 March 2009 10:47:07PM 1 point [-]

Roko, when you run into a case of "group win / individual loss" on epistemic rationality you should consider that a Can't Happen, like violating conservation of momentum or something.

Really!? No exceptions?

This doesn't feel right. If it is right, it sounds important. Please could you elaborate?

Comment author: John_Maxwell_IV 17 March 2009 11:25:39PM 0 points [-]

Not being swayed means not taking advantage of your group membership.

Comment author: JamesAndrix 17 March 2009 03:57:12PM 1 point [-]

I was just about to respond by asking if you would advocate a website in which the beliefs of the members are aggregated based on their reliability; then I remembered: prediction markets.

I'm guessing you're not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?

My first thought was slightly different: have testable predictions, as in a market, but the system treats each person's likelihood ratios as evidence (as well as the tags for the prediction, to account for each person's area of expertise).

It seems to me that the real issue still is a supply of testable problems.

Comment author: RobinHanson 17 March 2009 04:58:32PM 1 point [-]

It does take work to create judgeable claims, but there are other real issues as well.

Comment author: steven0461 17 March 2009 03:58:32PM 1 point [-]

I'm guessing you're not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?

Foresight Exchange

Comment author: MBlume 17 March 2009 07:53:41PM 0 points [-]

I'm guessing you're not pushing a real prediction market due to legal issues, but why not create one that uses token points instead of real money?

Laws can, and in this case should, be changed.

Comment author: b1shop 15 August 2010 10:35:49AM 0 points [-]

For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.

Why do these prediction markets have to be subsidized? In the U.S., online prediction markets are currently considered internet gambling and are hampered. Is there a reason legal, laissez-faire prediction markets couldn't take hold?

Comment author: gwern 15 August 2010 10:48:17AM *  3 points [-]
  • Prediction markets are currently immature and controversial, and so might have trouble bootstrapping.
  • Their legality is problematic. (The IEM had to get a special exemption from the SEC to run.)
  • Prediction markets like Intrade currently are structured in ways bad for financial return. (IIRC, the issue is that Intrade offers a very low or no interest rate on deposited funds - the float is a source of profit for it.)
  • Long-run prediction markets, such as those on many scientific or academic questions, are not financially viable (see 'opportunity cost'), while sports and gambling bets are inherently short-term, taking no more than a year.
  • A succession of short-term markets might help, but then you have the problem that with the natural low prices on 'success' shares, it's hard to make any profit. (eg. imagine a 'cold fusion in 2010' market - it'd be at a penny or two. Suddenly shares double due to a new paper! But because it's so lightly traded, you only made a dime on your prescient long position.)

(Did I miss any?)

Hence, subsidies. Peter McCluskey ran a market-maker bot (OB coverage). Some traders discuss bots; note that they say it's hard to arbitrage Intrade & Betfair in part due to low volume and fees and costs (McCluskey's page mentions that Intrade "agreed not to charge any trading or expiry fees".)
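(The thin-market point in the list above can be made concrete; the numbers here are invented for illustration:)

```python
# Toy example of why low-priced, thinly traded contracts pay little
# even for a prescient bet.
buy_price = 0.01   # 'cold fusion by 2010' share at a penny
sell_price = 0.02  # price doubles on a surprising new paper
shares = 10        # thin market: you could only get a few shares

profit = shares * (sell_price - buy_price)
print(f"${profit:.2f}")  # a dime, despite a 100% return
```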

Comment author: b1shop 15 August 2010 11:04:44AM *  0 points [-]

Thanks. I've been curious about the interest question for a while.

Comment author: gwern 20 May 2012 04:35:43AM 0 points [-]
Comment author: gwern 15 August 2010 11:16:58AM 0 points [-]

Googling some more, relevant links are http://www.overcomingbias.com/2007/11/intrade-fee-str.html and http://bb.intrade.com/intradeForum/posts/list/4471.page

Probably could find more examples of how Intrade is not an optimal prediction market using this tag: http://www.overcomingbias.com/tag/prediction-markets

Comment author: Xaq 20 November 2009 12:54:38AM -1 points [-]

If one goes off the notions of others without coming to conclusions for themselves, they're just as blind as an evangelical Christian. True insight can only come from within. That's why reason is of premium importance.

It is important to note the difference between insight and belief, however; for insight is based on rationality and logic, whereas belief is based on primal emotions and instincts.

Comment author: wedrifid 20 November 2009 02:51:40AM 3 points [-]

If one goes off the notions of others without coming to conclusions for themselves they're just as blind as an evangelical christian.

Evangelical Christians sometimes form their own insights and conclusions, even about things with religious significance.