[This is a version of a first-draft essay I wrote for my blog. I intend to write another version, but it is going to take some time to research, and I want to get this out where I can start getting some feedback and sources for further research.]

The responses to the recent leaking of the CRU's information and emails have led me to a changed understanding of science and of how it is viewed by various people, especially people who claim to be scientists. Among people who actually do or consume science there seem to be two broad views - distinguished by what they "believe" about science rather than by what they normally "say" about science when asked.

The classical view, what I have begun thinking of as the idealistic view, is science as the search for reliable knowledge. This is the version most scientists (and many non-scientists) espouse when asked, but increasingly many scientists, when their beliefs are judged by their actions, actually hold another view.

The other is the signaling and control view of science - the "social network" view that has been developed by many sociologists of science.

For an extended example of the two views in conflict, see the recent 369-comment thread "Facts to fit the theory? Actually, no facts at all!". PhysicistDave is the best exemplar of the idealistic view, while pete and several others hold extreme signaling and control viewpoints.

I wonder how much of the fact that there haven't been any fundamental breakthroughs in the last fifty years has to do with the effective takeover of science by academics and government - that is, by the signaling and control view. Maybe we have too many "accredited" scientists, and they are too beholden to government and, to a lesser extent, to other grant-making organizations - and they have crowded out or controlled real, idealistic science.

This can also explain why some extol peer review, despite its many flaws, while downplaying open-source science. They are controlling-view scientists protecting their turf, power, and prerogatives. Anyone thinking about the ideals of science, the classical view of science, immediately realizes that open-sourcing the arguments and data will serve the end of extending knowledge much better than peer review, now that doing so is possible. Peer review was a stopgap means of getting a quick review of a paper, necessary when the costs of distributing information were high, but it is now obsolescent at best.

Instead, the senior scientists and journal editors are protecting their power by protecting peer review.

Bureaucrats, and especially teachers, will tend strongly toward the signaling and control view.

Economics and other social "sciences" will tend toward the signaling and control view - for examples, see Robin Hanson's and Tyler Cowen's takes on the CRU leak, with their claims that this is just how academia really works, and pete, who claims a Master's in economics, in the comment thread linked above.

From Robin Hanson's It's News on Academia, Not Climate:

Yup, this behavior has long been typical when academics form competing groups, whether the public hears about such groups or not. If you knew how academia worked, this news would not surprise you nor change your opinions on global warming. I’ve never done this stuff, and I’d like to think I wouldn’t, but that is cheap talk since I haven’t had the opportunity. This works as a “scandal” only because of academia’s overly idealistic public image.

And from Tyler Cowen's The lessons of "Climategate":

In other words, I don't think there's much here, although the episode should remind us of some common yet easily forgotten lessons.

Of course, both Hanson and Cowen believe in AGW, so these might just be attempts to avoid facing anything they don't want to look at.

As I discussed earlier, those who continue to advocate the general use of peer review will tend strongly toward the signaling and control view.

Newer scientists will tend more toward the classical, idealistic view, while more mature scientists, as they gain stature and power (especially as they enter administration and editing), will become increasingly signaling and control oriented.

Comments (58)

Separate comment so it can be voted on separately: I don't know how you get this:

the fact that there haven't been any fundamental breakthroughs in the last fifty years

I think you can only justify it by arbitrarily relabeling the progress of the last 50 years as "engineering" rather than science. This would be unfair, because the new technologies and capabilities did require new scientific advances to overcome the specific practical problems of getting them to work against everything Nature may throw at them.

Such advances may individually have less theoretical generality, but add them up, count the impact on our lives, and it's huge.

To avoid a long debate about this or that recent breakthrough, let me just borrow a point from (the usually angering) Steven Landsburg, who discusses a book written and set in 1991 -- less than 20 years ago -- with the following plot elements:

1) A door-to-door saleswoman pitches (hardcopy) encyclopedias to customers who eagerly seek easy access to vast quantities of information.

2) A man is eager to read an obscure novel he’s heard about, so he scours used book stores, hoping to find a copy. In the meantime, he’s not sure what the novel is about, and has no way to find out.

3) A comedian stores his collection of jokes on notecards, filling two rooms worth of file cabinets.

4) A collector of sound effects stores her collection on cassette tapes, and has no cost-effective way to create backups.

5) A man is unable to stay in close contact with his (adult) children, because long distance calling rates are prohibitively high.

Notice how archaic all of that looks to us?

The tendency to think that the golden age of scientific progress is past seems to me like an example of pessimistic bias. This particular bias is extremely common but not something I've seen discussed much here.

I wrote fundamental breakthroughs. And it isn't just engineering. Lasers and semiconductors, to take two examples, were new science, but they were still working out the implications of earlier breakthroughs.

What is the difference between a fundamental breakthrough and a not fundamental breakthrough? What method can I use to tell if a breakthrough is fundamental?

How about: a fundamental breakthrough enables new techniques/manipulations?

So all of the 'archaic' examples are for tasks we could already do but perhaps not as fast or as easily. The obscure-novel guy could resort to techniques honed by centuries of librarians & researchers to find out about it; the sound-effects collector could invest in the magnetic reels or whatever 'professionals' used. The man could easily stay in close contact - with a little more money. The customers who seek encyclopedic information get pretty much the same thing today. In all of these cases, the Internet & computers don't enable new things but cheaper, quicker versions of what we already had.

The fundamental breakthrough computing represents is letting us calculate things we could never afford to calculate even with the global GDP, and the paradigm shift towards representing everything as a computation (and not, say, differential equations).

So, cracking the atom is a fundamental breakthrough, because we simply couldn't do that before. No matter how much money you spent, you could only exploit natural atom-cracking in radioactive decay - you could not vary the rate. So that was a fundamental breakthrough. Going from A-bomb to H-bomb, not so much (we could always just use a couple A-bombs where we could now use an H-bomb).

H-bombs would seem to be a different fundamental breakthrough than atom splitting. The similarity is in their engineering application more than in their fundamentals.

Atom combining, as opposed to atom splitting?

Hm; you're right that that is a bad example - H-bombs are man-caused fusion, not fission.

Although, I'm not sure we couldn't fuse before the first H-bomb: sonoluminescence, which might be caused by bubble fusion, was first produced in 1934.

Well one thing is that this standard only works for engineering breakthroughs. What new manipulation techniques did natural selection give us? Or the Copernican revolution? Or even Newton's laws of motion? Better ballistics would appear to fall into the non-fundamental category, no?

Also, it all still looks like a matter of degree to me. Does heavier-than-air flight count? We could already fly before the Wright brothers, just not as fast and not as heavy. Twenty years ago I couldn't have had back-and-forth written communication with hundreds of people in real time. That seems pretty new to me. What about the light bulb... surely a huge breakthrough, but oil lamps worked pretty damn well before then.

A fundamental breakthrough is one that could not be developed from earlier knowledge (that required a new idea) and that formed the basis for further developments. That is, not an incremental advance.

The laser, for example, was not a fundamental breakthrough, because it was a direct development of quantum electrodynamics (which is the last fundamental breakthrough I can think of).

ADDED: QCD may be, but I can't think of any further developments it has contributed to, nor, the last time I checked, had there been any definitive tests of its accuracy.

QED wasn't totally original; we obviously needed some earlier knowledge - like, say, about the photoelectric effect, black-body radiation, and Maxwell's wave theory of light. Maybe the conceptual jump to QED was bigger than the jump to lasers, and so maybe it is fair to say that we haven't had as big a breakthrough since. But I'm not sure what justifies putting a very small set of breakthroughs in a special category and only counting those. Is there a long enough list of breakthroughs as big as QED to even justify looking at the frequency with which they occur?

the fact that there haven't been any fundamental breakthroughs in the last fifty years

I think you can only justify it by arbitrarily relabeling the progress of the last 50 years as "engineering" rather than science.

The advance of engineering during 1900-1950 is much more impressive than during 1950-2000. Likewise, 1850-1900 is more impressive than 1900-1950.

That seems a very subjective standard. Personally I find modern computer power a lot more impressive than any dang highway, however cheap. The Romans had highways. And before you accuse me of cherry-picking, they had steam engines too, and railroads. Drawn by elephants because it didn't occur to anyone to make a steam engine do it.

Yes, it is difficult to make these comparisons, but let me try. Most of Silas's examples were telecommunications. I think the incremental improvements in telegraphs 1850-1900 trump computers in changing the world. The incremental improvements in radio and telephones 1900-1950 probably don't. I don't expect to convince you of those comparisons, but they are swamped by a lot of other things 1850-1950, in contrast to practically nothing else 1950-2000.

I'm not sure what your point is about the Romans. I guess by the standards of "fundamental breakthroughs" steam engines get credited to them, but by Silas's standard, they largely get credited to the first half of the 19th century. Railroads to the second half, and that's what I'm talking about.

Honestly, I'm astounded. I agree that 1950-2000 has nothing comparable to telecommunications, while 1850-1900 and 1900-1950 did, but I think it's obvious that the telecommunications/computation effects from 1950-2000 swamp 1900-1950, which crushingly swamps 1850-1900. A tiny number of telegraph lines surely had very great impact given what they were, but WTF?!?
Also, it seems to me that the telecommunications of 1900-1950 remain the single biggest element of tech change during that time for all the impact of everything else.

A major question regarding the rate of change is "for whom". Things have changed less for elites than for the masses, as much tech consists of inferior goods, substitutes for things that elites accomplished via human labor or via the ability to pay high rents. For a Chinese commoner, things have changed more in the last 40 years than since the first cities. For ordinary non-intellectual Americans, the last 40 years have seen little significant change and what change has happened may actually be dominated by the improvement in food quality!

"Changing the world" seems like a rather poorly quantified metric.

It's hard to disagree with you when it's not very clear what you are saying.

Thanks for posting this. I'm reminded of the Politics is the Mind-Killer phenomenon. I attempt to generalize it as:

"Once the question of resource allocation starts to hinge on certain facts, people have a huge impetus to argue for the facts being in whatever way serves them, no matter how logically independent those facts are from their values."

So if some issue of public debate hinged on whether 1+1=2, you would find amazingly good arguments for why it's wrong, if people stood to gain from an implication of it being wrong.

The two important lessons to draw are:

1) In the pursuit of truth, you must always be on the lookout for the motive force of the resource-seeking that hinges on not finding the truth.

2) As in the link above, human thought does not naturally and neatly divide into beliefs and values: the values can yank beliefs "along for the ride", so to speak.

1) In the pursuit of truth, you must always be on the lookout for the motive force of the resource-seeking that hinges on not finding the truth.

I think this sums up the "follow the money" axiom quite nicely.

This puzzle - the apparent conflict between the truth-seeking understanding of science (e.g. Popper), and the sociology approach (which as I understand it doesn't predict that scientists do find even approximations of the truth) - is very interesting.

A philosopher that I spoke with at the Singularity Summit made the observation that everyone there seemed to be familiar and comfortable with Popper, but there were almost no mentions of Kuhn. My understanding of why the scientists and technologists at the summit didn't mention Kuhn is that his theory of how science works isn't (obviously) usable. There will be paradigms, and paradigm shifts - but as a practitioner, what do you suggest I actually do?

Pickering's "Mangle" concept may be applicable. I've been trying to digest his "The Mangle of Practice" into a top-level LW post, but other priorities keep getting in the way.

Pickering is a sociologist (and therefore writes in a style that I find annoying and off-putting), but he includes as "actors" non-human entities (like microscopes or bubble chambers). This makes his theory less human-society-centric and more recognizable and sensible to a non-sociologist. Unlike Kuhn's theory, I think Pickering's mangle might be applicable to improving methodology.

As a programmer, the best way I can explain the "Mangle" (Pickering's theory) is by reference to programming.

In trying to do something with a computer, you start with a goal, a desired "capture of non-human agency" - that is, something that you want the computer to do. You interact with the computer, alternating between human-acts-on-computer (edit) phases, and computer-acts-on-human (run) phases. In this process, the computer may display "resistances" and, as a consequence, you might change your goals. Not all things are possible or feasible, and one way that we discover impossibilities and infeasibilities is via these resistances. Pickering would say that your goals have been "mangled". Symmetrically, the computer program gets mangled by your agency (mangled into existence, even).
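To make the analogy concrete, here is a minimal sketch of that edit/run loop in Python. Everything in it (the function name, parameters, and the satisfies/resists hooks) is hypothetical, invented purely for illustration; it is not taken from Pickering's book, just the shape of the back-and-forth described above.

    # Toy model of Pickering's "mangle" as an edit/run loop (illustration only;
    # all names are hypothetical, and the callables are supplied by the caller).

    def mangle(goal, program, run, satisfies, resists, revise_goal, edit_program,
               max_rounds=20):
        """Alternate machine agency (run) and human agency (edit) until things stabilize."""
        for _ in range(max_rounds):
            result = run(program)                    # computer-acts-on-human phase
            if satisfies(goal, result):
                return goal, program                 # a temporary stabilization
            if resists(goal, result):
                goal = revise_goal(goal, result)     # the goal itself gets "mangled"
            program = edit_program(program, result)  # human-acts-on-computer phase
        return goal, program                         # no stabilization within the budget

The only point of the sketch is that both the goal and the program come out changed; neither side's agency is fixed in advance.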

Pickering says that all of science and technology can be described by an actor network including both human and non-human components, mangling each other over time, and in his book he has some carefully worked-out examples (e.g., he applies his theory to Hamilton's invention of quaternions) which seem pretty convincing to me.

Sounds interesting. FWIW, I encourage you to write up that top-level post.

Science is practiced by people, therefore our knowledge about how people act, in particular how they act in situations of interdependence, is directly applicable to scientists.

Thus, I don't find it surprising at all that when we ask the question, "what did we learn about reality and when", the answers include both references to the truth (or reality-correspondence) of scientific facts, and references to the social construction of these very same facts.

Some sociologists of science once came up with an astute observation about historians of science: their accounts exhibited an interesting asymmetry. Whenever a scientist was vindicated, his work would be accounted for on the basis of correspondence with reality. Whenever a scientist was proved wrong, his mistakes would be accounted for on the basis of "social forces" at work.

This asymmetry can only be an artefact of reconstruction after the fact: before a scientific fact has become "knowledge", while it is still in controversy, both reality and social forces are at work on all scientists working on the issue. In fact, "social forces" are merely a name for some aspects of "reality".

Pickering, Latour and others are saying that if the process of science is itself to become an object of knowledge, we need a symmetric account of it, not one which has The Scientist somehow immune to social forces, immune to bias, immune to reality.

That strikes me as entirely unobjectionable.

EDIT: removed "equally" per Yvain's feedback - I just meant to stress that you can't a priori distinguish a good from a bad scientist, they're subject to roughly comparable sets of forces - the word implies stronger symmetry than that, but I don't really need it.

Both reality and social forces are equally at work on all scientists working on the issue

You lost me at "equally" and "all".

Why not just say that both social forces and the part of the natural world under study influence a scientist's decisions, and a scientist becomes a good scientist who draws correct conclusions about the natural world only when ze keeps the ratio of social influence to natural world influence low?

This leads naturally to the conclusion that yes, a disproportionate amount of correct science will be the result of correspondence with reality, and a disproportionate amount of incorrect science will be the result of social forces.

That doesn't quite work: how would you keep that ratio low? In practice, the only way is by countering social influences which might lead a scientist astray with other social influences. The total amount of "social" stays roughly the same.

Consider the LHC, or any particle accelerator. It takes a good deal of "social influence" to get it built, compared to an infinitesimal fraction of its total mass for what scientists hope to observe.

To a very good approximation, any given quark exerts the same influence on a "bad" scientist as it does on a "good" scientist. It takes exceptional and patient work to set up circumstances where the behaviour of a quark, through a long chain of mediating physical influences, results in noticeably different behaviour for a particular scientist.

Generally, there is an enormous amount of "leveraging", for lack of a better word, that needs to happen between some relevant bit of reality under scrutiny at one end, and the kind of scientific consensus on the other end which affords building something like the LHC.

If you wish to study these leveraging effects accurately, you must adopt a symmetrical stance; you have to bear down and study precisely the nature of these enormously long chains of mediation that bridge the gap between reality and our knowledge of it.

Latour, for instance, does a great job of this kind of description. Pickering's study of Morpurgo in the case of quarks is interesting; I got the sense that Morpurgo is a perfectly good scientist who just failed to discover quarks. This doesn't jibe with the asymmetric account. I have yet to read "Leviathan and the Air-Pump", which I understand is the original inspiration for the symmetric approach, but apparently Shapin and Schaffer trace these issues all the way back to the debate between Hobbes and Boyle.

This kind of approach gives you a sense of the reality of science as opposed to its mythology - which is largely a product of scientists themselves, for reasons which Latour also outlines convincingly.

It's a messier, more complicated story than the myth - but then reality always is.

That doesn't quite work: how would you keep that ratio low? In practice, the only way is by countering social influences which might lead a scientist astray with other social influences. The total amount of "social" stays roughly the same.

Well, depends if you want to define "desire to find truth" as a social force. A scientist motivated by a desire to find the truth is a better scientist and more likely to get an accurate result than a scientist motivated by a desire to confirm the tenets of zir religion or political system, or to fit in, or to get a promotion, or to get home early, or any of those other social forces.

The stronger the motivation to find the truth, the less we would expect other, more traditionally "social" forces to influence a scientist, and the more likely that the scientist's results would be accurate.

Because the direction of the motivation to find truth varies along with the evidence, it seems fair to say the scientist motivated primarily by truth-seeking is influenced by the evidence and not by the social situation ze's in.

There may not be any human motivated entirely by truth seeking (except of course Eliezer pbuh), but some people are more than others, and that makes those scientists better.

Well, depends if you want to define "desire to find truth" as a social force.

For the purposes of this conversation, we are using "social" as a shorthand for any influence on the scientist's behaviour that isn't linked (through a verifiable publication trail) to the effect under study. That does include "desire to find truth", if the object of study is (say) the cosmic microwave background.

The stronger the motivation to find the truth, the less we would expect other, more traditionally "social" forces to influence a scientist, and the more likely that the scientist's results would be accurate.

Do we now? Some motivation to advance your own career will definitely be required in very competitive fields. (See Latour's interview with Pierre Kernowicz, "Portrait of a Biologist as Wild Capitalist".) Given the high degree of specialization in science today, how much do you expect "desire to find truth" to resist a realization that you don't, after all, care that much about molecular biology? Science is a job, and we may expect that people motivated by "traditional" social forces, such as keeping their boss happy, making promotion, getting tenure, and so on, will contribute to getting accurate results.

We have demonstrable evidence that working scientists are required to submit to certain non-truth-related conventions in order to be permitted to carry out science. You have to write papers in a form acceptable to journal editors, you have to work on subjects acceptable to your thesis advisor to get your PhD, and so on; and if you refuse to comply with these kinds of requirements you may well be able to do science of some kind, in spare time left over from your day job, but certainly not, say, experimental physics.

What gets you accurate results in experimental physics isn't "desire to find truth", it is a particle accelerator.

Thanks for the reference - that's just the kind of thing I hoped to get by posting this so early. I just ordered a copy of Pickering's Mangle of Practice from Amazon. You might want to consider leaving a review there (http://www.amazon.com/Mangle-Practice-Time-Agency-Science/dp/0226668037/ref=sr_1_3?ie=UTF8&s=books&qid=1260135413&sr=8-3), especially since there aren't any reviews yet.

Even though I wanted to get something out quickly, I did do a little reading first. I saw the view you are discussing on Wikipedia (http://en.wikipedia.org/wiki/Sociology_of_scientific_knowledge), where it was referred to as "the French school called Actor-network theory (ANT)". (I do not recommend this Wikipedia page; it does not fit with what I read about this subject about a decade ago.)

I'll contribute my hypothesis for why science hasn't made as much progress since 1920, even though I have no special conviction in it. I just thought about the problem recently and it was the best I came up with.

First, I asked a few people to see whether they, too, thought that science hasn't been progressing 'lately'. The small set of answers was unanimous with respect to basic science (e.g. physics), but it was pointed out that plenty of progress has been made in biology and medicine (e.g., the genome project) and in technology fields.

My hypothesis is that we evolved a paradigm for science and thinking in the 18th/19th centuries that was really good at systematizing, and we quickly did all the systematizing we readily could. (All the low-hanging fruit.) For another big jump in progress, we need a new, qualitatively different way of thinking about science that is not based on systematizing.

(I personally don't like this hypothesis because I like systematizing and would hope it could be ever-productive.)

A second hypothesis I heard from one of the people I asked is that all the progress has in fact been indirectly due to Gauss; he laid the seeds for all the progress we've made, and we just need to wait for another systematizing genius to come along. I consider this remotely possible, because Gauss significantly touched so many fields.

With respect to the hypothesis of this post, I think there are certain inefficiencies built into the system (endless grant writing, publication gymnastics, etc.) and possibly more publications than necessary clogging up the system, but that there are good scientists doing good science, so it's not really such a problem as to be an explanation for the lack of productivity. I guess what I'm saying is that I believe science is both incremental hard work and progress with big game-changing ideas. We've been doing the incremental work well enough, perhaps, but it all seems incremental lately (after calculus, classical mechanics, relativity, etc.).

(My SO criticizes over my shoulder -- what about Quantum Field Theory in the 1950s?)

Later edit: was this down-voted for being off-topic, chatty, noisy or proposing unlikely hypotheses?

I voted the post down for, in a nutshell, flamebait.

The post starts off with observations about the CRU emails, but makes little use of these observations. The CRU-related reasoning appears to be the following: "Hanson dismisses the CRU leaked emails as not a big deal, supporting the hypothesis that economists are less interested in searching for reliable knowledge than in protecting their turf and signaling seniority". This is a) peripheral to the central claims of the post, and b) using anecdotal evidence about anecdotal evidence in support of a strong overarching claim about science in general.

(Mark Twain once noted: "There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact." This post gets a nice return of speculation out of a small investment of anecdote.)

The central claims are interesting, if caricatural, i.e. that "there haven't been any fundamental breakthroughs in the last fifty years" and that this "has to do with the effective takeover of science by academics and government - that is, by the signaling and control view".

There would be value in a post that tried to build an actual argument in support of these theses, and such a post would require no reference to (commentary on commentary on) current events which are likely to be soon forgotten anyway.

What sociologists of science are actually saying about science is much more subtle and interesting than caricatures of "idealistic vs signaling". The two "views" of science are not mutually exclusive, and it's not a matter of one view being the Good Guys' view and the other being the Bad Guys' view.

I mentioned them because they are current, and because it was thinking about them that got me considering the problem again. Not "flamebait", just the most current application of the problem I am addressing. A decade ago I might have used Plasma Cosmology (http://www.plasmacosmology.net/), since I had then recently read Lerner's "The Big Bang Never Happened" (http://bigbangneverhappened.org/).

There is a fantastic 24-part CBC podcast called How to Think About Science (mp3 format here). It interviews 24 different research scientists and philosophy-of-science experts on the history of and different views on the scientific process, historical trends, and the role of science in society. It is well worth the time to listen to.

I have found that the series confirms what scientists have already known: researchers as a group rarely behave differently from any other profession, yet most present them as an unbiased, objective, homogeneous group (of course there are always outliers). Indeed, the sciences are much more social than most would admit, and I think, as you point out, peer review demonstrates the "social networking" best.

This is nothing new; after all, theories and their acceptance have meant nothing without a strong group of well-respected researchers around them.

I vigorously second the recommendation for How to Think About Science.

EDIT: removed the acronym (HTTAS). Sometimes trying to save time results in a net loss... :(

I vigorously second the recommendation for HTTAS.

What is HTTAS?

From the preceding comment, I'm guessing that it's How to think about science.

Bureaucrats, and especially teachers, will tend strongly toward the signaling and control view.

The U.S. government now requires that research it sponsors be placed into open-access databases. You could say that this was driven by legislatures via public pressure, but it's still a case of government-backed open access.

I believe it's only the NIH. Also, in practice, the resulting republications are published in open-access journals; but the software and data produced are often not made available. Often their guardians pretend that they want to make them available, but always give one excuse or another for not making them available right now.

Could you comment on journals that require publication of data and software? I read an economics paper that claimed that econ journals with such rules simply ignored them, but that biology had high compliance rates.

All the PLoS journals are open-access. Not sure what their requirements are. Other journals typically let authors opt in for their articles to be open-access (free) if the authors pay a large fee (e.g. $2000 IIRC). The journals must be quite a racket; the authors pay the publishers, and the subscribers pay the publishers, and the advertisers pay the publishers, and the editors and reviewers work for free.

Do any such journals exist?

Is it the entire US government? I was under the impression that it'd been only the NIH so far. A quick look in Wikipedia seems to confirm this, though it does mention an act towards this that was proposed in 2006.

NIH. Patient groups that want to read the medical journals played a role.

I think it's important not to conflate two separate issues.

The term 'science' is used to denote both the scientific method and the social structure that performs science. It's critical to separate these in one's mind.

What you call "idealistic science" is the scientific method; what you call "social network" science is essentially a human construct aimed at getting science done. I think this is basically what you said.

The key point, and where I seem to disagree with you, is that these views are not mutually exclusive. I see 'social network' science as a reasonably successful mechanism to take humans, with all their failings, and end up with, at least some, 'idealistic science' as an output.

You do that by awarding people higher status when they show a more detailed understanding of nature. I would agree that this process is subject to all kinds of market failures, but I don't think that it's as bad as you make out. And I certainly don't think that it has anything to do with why we haven't discovered quantum gravity (which, it appears, is the only discovery that would satisfy your definition of progress). There is literally no field of human endeavour that isn't defined by a search for status; 'network science' accepts this and asks how we can use our rationality to structure the game so that when we win, we win from both an individual perspective (get promoted to professor) and a team perspective (humanity gets new understanding/technology/wealth).

But this in no way calls into question 'idealistic science' since 'network science' is merely the process by which we try to attain 'idealistic science' in the real world.

[full disclosure: I am a young scientist]

I have realized I worded this rather poorly, that was one of the reasons for getting it out for feedback.

All science is social - the idealistic and the signaling - the difference is whether the search for knowledge (the idealistic view) is primary or whether the signaling and social issues are primary. It is far too easy to fool yourself; feedback from other researchers is really necessary for science to advance. The problem is that too many now seem to feel excessive social pressure to conform, due at least in part to the institutional/academic/bureaucratic control over science, especially its funding.

I wonder how much of the fact that there haven't been any fundamental breakthroughs in the last fifty years has to do with the effective takeover of science by academics and government - that is, by the signaling and control view. Maybe we have too many "accredited" scientists, and they are too beholden to government and, to a lesser extent, to other grant-making organizations - and they have crowded out or controlled real, idealistic science.

If this is so, then there should be an observable effect when comparing between countries, shouldn't there? Hard to see how one country having massive academic/government control of science could affect all other countries to an equal level. Oughtn't there be research on this? It's an obvious question and has equally obvious ideological value, so funding wouldn't be such an issue.

The number of Nobel laureates per capita is one possible metric: http://www.nationmaster.com/graph/peo_nob_pri_lau_percap-nobel-prize-laureates-per-capita It looks pretty much like one would expect: a bias towards northern Europe (likely in part due to Literature & Peace), and then G8 countries. Only #3, Switzerland, is a country I've ever heard described as having a relatively small government.

Or here's a statistical study of scientific papers published per nation: http://www.timeshighereducation.co.uk/story.asp?storyCode=190149 US, then UK, then Japan, then Germany & France. All places with very large powerful governments.

I suppose you could explain both those away as being already corrupted (#1) and entirely unrelated to actual scientific productivity (especially of breakthroughs), but then you're paddling upstream...

You seem to be saying: Countries that spend more money on publishing papers to signal their expertise publish more papers. Therefore, they are not signalling.

This sounds like a fully-general counterargument: if a country is publishing few papers and those papers aren't getting cited or hailed, then obviously they have no major scientific expertise; but if they are publishing scads of highly cited papers, then they're merely spending lots of money on signaling and so have no major scientific expertise.

As I said, there being zero correlation between papers & citations and genuine scientific productivity seems unlikely to me and requires actual evidence and not hand^Wsignal-waving suggestions.

You're conflating normative and positive views of science. Hanson believes in the Signalling model, but I'm not sure he thinks this is desirable, whereas you make it sound like the senior scientists view it as desirable.

They (mostly) act as if they find it desirable, whatever their far mode preferences happen to be.

Peer review is just a slightly more formal kind of debate, but debate doesn't work and isn't about finding truth.

Traditional empirical science depends on a mechanistic (as opposed to humanistic) principle for obtaining truth: the scientific method.

The traditional scientific method provides a strong principle for evaluating theories. For some fields of inquiry (physics, chemistry), this principle works very well. But modern scientists want theories about economics, nutrition, medicine, climate change, computer vision, and so on. The traditional method does not justify theories in these fields.

To go further, we must discover new mechanistic principles of truth-seeking. We should never ask: "What would it be good to know?" That road leads to alchemy. Rather we should ask: "For what types of questions can the answers be evaluated by mechanistic principles?"

Debate does work under certain circumstances. The pretense that those circumstances aren't necessary, by those who wish to seize the status of reasoned debate, dilutes what outsiders see until it looks like debate doesn't work.

Peer review is just a slightly more formal kind of debate, but debate doesn't work and isn't about finding truth.

Absolutely!

To go further, we must discover new mechanistic principles of truth-seeking. We should never ask: "What would it be good to know?" That road leads to alchemy.

Or engineering. Perhaps even aerodynamics (wouldn't it be good to know how to fly?). The main problem with alchemy was that it was too difficult to make any genuine progress.

I'm not sure I'm with you on this one. Wanting to know stuff is a rather important motivator for finding out stuff. Often you'll even end up finding out stuff completely different to the stuff you wanted to know.

Rather we should ask: "For what types of questions can the answers be evaluated by mechanistic principles?"

Lots of really boring things that I don't particularly care about. Also, some that I do care about. It'd be good to know those ones.

What is the "scientific method" and can you point to an important scientific discovery made in history that used this method and only this method? Of course, I am asking for an actual historical event, not a rational reconstruction of a historical event.

Obviously, my question is rhetorical and is meant to point out that any such method is far from "mechanistic".

I would say Einstein's prediction of the bending of light around the sun was such an example.

Note that I'm only claiming that the actual method of deciding between theories must be mechanistic. The process Einstein followed to obtain the theory was far from mechanistic.

I'm sympathetic to the idea that science is much messier when it's actually being done than it appears in retrospect. But I think it's critical to have a mechanistic principle for choosing theories, even if in practice theories are chosen by disheveled grad students while playing beer pong. Because those grad students are kept honest by the fact that if they humanistically choose the wrong theory, someone will eventually show up and prove them wrong using the mechanistic principle.

Tyler's last name is "Cowen," not "Cowan."

Thanks, that was just sloppy. Fixed.

You missed one.

I believe that Morendil did not understand that the work of Morpurgo simply showed, on the largest quantity of material then used (a few milligrams), that FREE quarks do not exist. His result has been confirmed by others (Smith, Perl, etc.) on other materials. The result of Fairbank - also considered in Pickering's book - is wrong.