In the case of calculus, differential equations, statistics, functional analysis, linear algebra, group theory, and numerical methods, the important results for modern work were in fact developed after their usefulness could be appreciated by an intelligent observer.
This is simply untrue, unless you've rigged the definition of "intelligent observer" and added a dose of hindsight bias. It is unlikely that the true extent of the "practical" importance of calculus today could have been predicted by even the most imaginative of Newton's fellow Cambridge dons in the late 17th century. In that era, thinking about things like the orbits of celestial bodies was "idle speculation" par excellence. It's hard to appreciate this, because it seems so obvious in retrospect that we would have space rockets, doesn't it? Not to mention the use of differential equations in fields like economics, a discipline whose existence was a century away but which was just so clearly on humanity's horizon, right?
Functional analysis is a particularly interesting choice of example. The fact that its application to quantum mechanics (which is what I presume you were thinking of) arose concurrently with the development of the subject itself was largely a fortuitous (if serendipitous) coincidence. The actual "physical" roots of the subject were more indirect, via differential/integral equations and the calculus of variations (18th-century physics, in other words), and it was basically the result of mathematicians' attempt to turn these somewhat ad-hoc disciplines into nice-looking abstract theories.
As for group theory, its origins lie in the attempt to solve the quintic by radicals -- about as "useless" an undertaking as could be imagined. (The cubic and quartic formulas already being much too complicated for practical use.)
Realistically, an argument like yours, made back in the day, would have shown that Newton should have devoted his life to inventing better agricultural tools. And it might have been a good argument -- applied to someone other than Newton. (They could really have used better agricultural tools, no doubt.)
If you don't feel satisfied doing math, or think you could make a better contribution doing something else, you shouldn't be doing it. But don't make the mistake of pretending that your argument generalizes.
And see here regarding the nature of mathematics' usefulness, which doesn't reside in specific "applications".
This is simply untrue.
I spent just a little more time learning history and disagree even more strongly.
Calculus and differential equations were developed (twice) with the explicit purpose of describing the behavior of the world around us.
The theory of determinants and later of linear algebra were developed with the explicit purpose of solving systems of linear equations which arise in the problem of predicting the world around us.
The calculus of variations and later functional analysis were developed with the explicit purpose of understanding the particular differential equations which arise in the problem of predicting the world around us (laws of motion, heat equations, etc.)
Probability was developed to allow people to understand and calculate probabilities, whose usefulness was already understood (insurance predates the study of probability).
Statistics was developed to understand large quantities of demographic data, whose existence predated the study of statistics.
Group theory and representation theory were developed to understand a problem unconnected to the world around us. The actual importance of finite group theory apart from representation theory appears to be extremely close to zero. The importance of representation theory in physical developments over the last century also appears to be extremely small (although the formalism is used extensively in theoretical treatments) but I don't know enough to say with confidence. I would unquestionably have argued against the development of group theory, but I am not convinced that this would have been a bad thing.
Number theory was developed to understand a problem unconnected to reality. Number theory has apparently contributed almost nothing to society since its creation. You could argue that the development of cryptography depended on at least a rudimentary understanding of number theory, but given the existence of lattice cryptography and the early emergence of its predecessors (more or less concurrent with RSA), you would almost certainly lose this argument.
Non-Euclidean geometry was developed without connection to reality. It became applicable with the observation that the universe was best described by non-Euclidean geometry. I would unquestionably have argued against working on non-Euclidean geometry before the development of general relativity; the main question is whether the existence of non-Euclidean geometry facilitated the discovery of general relativity. It seems that Einstein explicitly suggested that spacetime may be non-Euclidean before learning that non-Euclidean geometry had been extensively studied. This leads me to suspect that work on non-Euclidean geometry was not essential in the development of general relativity, and that it could just as well have been done after it became relevant.
Of course I can provide a long list of fields unconnected to reality. I cannot think of any significant contributions from any of them. If you can think of a good counterexample here, feel free to suggest it (I think group theory is far and away the best).
Non-Euclidean geometry was developed without connection to reality.
Axiomatic hyperbolic geometry was a game about arbitrary axioms. Perhaps that's why Gauss didn't publish on it. What he did publish was his work on extrinsic differential geometry inspired by his work as a surveyor. The Gauss-Bonnet theorem for triangles answers a question a surveyor would ask. Riemann said that his intrinsic differential geometry was an attempt to understand space.
Number theory was developed to understand a problem unconnected to reality.
Yes, I think that's what Gauss meant when he called number theory the queen of mathematics. But I think number theory was a lot narrower back then. A lot of things that would now be called number theory were instead grouped under "solving equations." I think that when Abel showed that one couldn't solve the quintic by radicals and when he showed that one could solve it by hypergeometric functions, he thought he was studying the same field.
2. Applications
Group theory and number theory are endemic in CS. It's not just cryptography. Consider coding theory. For an application of 20th C math, Margulis's expanders were for decades the only explicit ones (and one does want non-random expanders for randomness extraction).
The actual importance of finite group theory apart from representation theory appears to be extremely close to zero.
I'm not sure what you mean. Perhaps that when finite group theory turned inwards and tried to classify finite simple groups, it stopped having applications? Maybe I'd buy that. It seems like an argument for the "interconnected" position against the "interesting" position, but fairly neutral for the "applied" position.
Anyhow, this seems to discount decades of 19th C struggle to clarify the meaning of an abstract group, which is clearly important if groups are important. It's hard to see in retrospect what was so difficult. One might credit this advance to set theory, the idea that one should talk about abstract sets (like the set of cosets).
So I think set theory is a quite inward-looking subject that turned out to have great clarifying impact on mathematics. But maybe it's not necessary - do physicists think about groups without it? Similarly, category theory clarified a lot of math, perhaps not in ways that have yet reached the physicists (the way set theory could), but it has been picked up for its own sake in CS, both in type theory and in parallel computation.
I don't know much history, but am inclined to disagree with most of your claims. (Your statement about group theory is completely correct. I might be able to salvage my claim by restricting to the subset of group theory I care about, which is really more linear algebra and representation theory, but I don't know if the history would support me even then. Apologies for my error.)
It is unlikely that the true extent of the "practical" importance of calculus today could have been predicted by even the most imaginative of Newton's fellow Cambridge dons in the late 17th century.
I don't understand this. Newton made the observation that calculus described not only the orbits of celestial bodies but also the behavior of the everyday objects humans interact with (and in fact described motion in general) before formally developing calculus---at least, that's how the standard version of the history goes (I have no idea how accurate it is). Are you claiming that an intelligent observer would doubt the importance of describing the motion of objects around them, or what?
The actual "physical" roots of the subject were more indirect, via differential/integral equations and the calculus of variations (18th-century physics, in other words), and it was basically the result of mathematicians' attempt to turn these somewhat ad-hoc disciplines into nice-looking abstract theories.
I was talking about the applications of functional analysis to understanding differential equations, which are (as I understand it) the actual point of functional analysis. Not coincidentally, functional analysis was developed in response to the obviously important problem of understanding differential equations. It's not like someone sat down and developed functional analysis, and then it happened to later be discovered that it was a powerful technique for understanding the world.
Happening to provide a formalization for quantum mechanics is really not important in my view. If you think that no formalization of quantum mechanics would exist if mathematicians hadn't thought of functional analysis, I think you are very confused.
If you don't feel satisfied doing math, or think you could make a better contribution doing something else, you shouldn't be doing it. But don't make the mistake of pretending that your argument generalizes.
I am actually curious to know whether I should be doing math. If I should, my life is much easier. I would like to have an honest discussion about the utility of math apart from specific applications. I tend to agree that the use of math is not in immediate applications. I also believe that you can foresee that calculus is useful, or differential equations, or any of the other things I mentioned, or even negative or imaginary numbers, and that this is not just hindsight bias but genuine discernment. This seems like a factual question which we have some hope of resolving (though not too much).
The actual "physical" roots of the subject were more indirect, via differential/integral equations and the calculus of variations (18th-century physics, in other words), and it was basically the result of mathematicians' attempt to turn these somewhat ad-hoc disciplines into nice-looking abstract theories.
I was talking about the applications of functional analysis to understanding differential equations, which are (as I understand it) the actual point of functional analysis.
This sounds like violent agreement to me.
Your disagreement is about the utility of applying functional analysis to differential equations. Is it a practical problem to know when Dirichlet's principle applies? Or, if you insist that functional analysis dates from Leray, I am told that physicists do not care about the mathematical problem of whether the Navier-Stokes equation has smooth solutions -- water flows, and that is good enough for them.
Realistically, an argument like yours, made back in the day, would have shown that Newton should have devoted his life to inventing better agricultural tools. And it might have been a good argument -- applied to someone other than Newton. (They could really have used better agricultural tools, no doubt.)
1) Are you agreeing with paulfchristiano that he should abandon pure math today and choose some more productive occupation, unless he's as exceptional as Newton was in his time?
2) Why do you think the world would be worse off now if Newton had chosen to invent agricultural tools, or otherwise maximize instrumental good in his own time? How about if everyone else used the same rule too? I think we'd have a pretty awesome world today! Isn't this the proper test of whether an argument "generalizes"?
Paul, have you tried to reverse engineer why your brain made you become interested in doing pure math in the first place? I ask because it sounds like you came up with this list of explicit arguments about the value of such research after you already became interested in it on an intuitive level.
Do you think you now understand what your intuition was doing? Does it now seem like a (subconscious) miscalculation, and if so, can you possibly explain what is the nature of that miscalculation?
Paul, have you tried to reverse engineer why your brain made you become interested in doing pure math in the first place?
No. I seem to have been interested in math since I started developing reliable memories. Most of the things I did back then don't make any sense to me now.
I'm getting a PhD in math and I have had similar thoughts. A few quick remarks about your specific arguments followed by my personal take:
In rebuttal 1, you mention number theory as an example where the application to crypto took a long time to be apparent. I'm a number theorist and I don't find this argument compelling. Much of the number theory that is used in crypto is not deep. There's nothing about Diffie-Hellman or RSA that requires the hundreds of years of number theory research that has gone into it. One could explain the algorithms for such procedures to a mathematician in the early 1800s with little effort (although the idea of having very efficient methods of arithmetic might strike them as very odd). Moreover, while other parts of number theory have turned out to be relevant it is still a very tiny fraction of all number theory.
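To make the point concrete, here is a toy Diffie-Hellman exchange; the only machinery it needs is modular exponentiation, which an early-1800s mathematician would have recognized immediately. The parameters and secret exponents below are my own illustrative choices and are far too small to be secure in practice:

```python
# Toy Diffie-Hellman key exchange (illustrative parameters, not secure).
p = 2**127 - 1   # a Mersenne prime serving as the public modulus
g = 3            # public generator

a = 123456789    # Alice's private exponent
b = 987654321    # Bob's private exponent

A = pow(g, a, p)             # Alice publishes g^a mod p
B = pow(g, b, p)             # Bob publishes g^b mod p

shared_a = pow(B, a, p)      # Alice computes (g^b)^a mod p
shared_b = pow(A, b, p)      # Bob computes (g^a)^b mod p
assert shared_a == shared_b  # both arrive at g^(ab) mod p
```

Nothing here goes beyond Gauss-era modular arithmetic; the genuinely modern ingredient is the belief that recovering a from g^a mod p is computationally hard.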
A possibly better example would be non-Euclidean geometries which really were studied in detail in the 19th century before they were found to have practical applications.
As for myself, I enjoy math a lot, and I suspect that I will be more productive in areas I enjoy than in areas I enjoy less. This might be a rationalization, but it connects to another aspect: I'm not a good utilitarian. Given the choice between having a happy life while being somewhat productive for humanity and being less happy but more productive for humanity, I'll choose being more happy. (When I phrase it that way it triggers far-mode reactions that lead me to not want to do that. But the decisions I make on a day-to-day basis about what to think about and what to do with my time are very much near mode.)
In rebuttal 1, you mention number theory as an example where the application to crypto took a long time to be apparent. I'm a number theorist and I don't find this argument compelling.
I also find this argument completely uncompelling. It gets brought up a lot though (and 4 years ago I gave it as justification for doing mathematics in a serious conversation, instead of engaging in an honest conversation about whether I should be doing math). It is slightly better than you make it sound, because without many years of number theory we would have basically no confidence about the algorithmic difficulty of number-theoretic problems.
A possibly better example would be non-Euclidean geometries which really were studied in detail in the 19th century before they were found to have practical applications.
I think this and most other positive examples suffer from a common objection; although you can do the math and later find an application, you could just as well wait until the application appears and then do the math. I think this objection is particularly strong here, because the need for the math was recognized by people who didn't know the math existed (I think?)
voted up for this:
although you can do the math and later find an application, you could just as well wait until the application appears and then do the math.
...combined with the fact that a lot of pure math has not (yet, anyway) led to applications. It pays to put effort only into math that is immediately practically useful.
We need proper counterfactuals here, cases where a practical use of math counterfactually would not have been possible without previous development as pure math. And also, what-if the pure mathematicians have been directly working on practical math instead?
We need proper counterfactuals here, cases where a practical use of math counterfactually would not have been possible without previous development as pure math.
I think a decent argument could be made that Einstein would have been unlikely to work out on his own the math he needed for special and general relativity. On the other hand, this is a much more severe issue for general relativity, and it isn't implausible that once he had constructed the basic theory others would have listened to him enough to work out the underlying math.
It is slightly better than you make it sound, because without many years of number theory we would have basically no confidence about the algorithmic difficulty of number-theoretic problems.
I'm not sure about this. RSA was published in 1978 and Diffie-Hellman in 1976. There was some work on efficiently factoring integers before that, mostly focused on numbers of special forms, but not nearly as much as there was in the next decade. And before Diffie-Hellman, there was very little work on the discrete log question. In the case of factoring, the difference in effort can be seen in the drastic improvements in the few years after (especially the number field sieve and the elliptic curve method). Similarly, determining whether a number is prime went from being almost as difficult as factoring in the mid-1970s to being provably in P 30 years later, and I don't think anyone in the late 1970s saw that coming (although Miller-Rabin did sort of point in that direction).
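For context on why Miller-Rabin "pointed in that direction": it reduces primality checking to a handful of modular exponentiations. A minimal sketch (the function name and fixed base set are my own choices; with these seven bases the test is known to be deterministic below roughly 3.4 * 10^14, and merely probabilistic above that):

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17)):
    """Miller-Rabin test with a fixed set of bases (toy sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```

The whole test runs in time polynomial in the number of digits of n, which is exactly the behavior that made provable polynomial-time primality (AKS, 2002) seem less surprising in hindsight than it did in the late 1970s.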
I feel the same way you do, including the last paragraph, with a possible exception: I do not endorse the fact that I am a bad utilitarian.
Because of this I'm currently attempting to provoke a crisis moment in which I can effectively change fields, although I suspect that I will stay in graduate school to get my Ph.D. and use the opportunity to get more of an education.
For many of us, choosing a career path has a dominant effect on our contribution to the society. For those of us who care what happens to society, this makes it one of the most important decisions we make.
Do you really see this as a one-time choice that you are stuck with for the rest of your life? I think that most people (in the US, at least) find themselves making decisions that change their career paths right up until retirement (and beyond).
A suggestion. Write down the names of three senior people in your field whose contributions you admire. Now add three people who made important discoveries in their youth thirty years ago and are still alive. Add three people who you think have made significant contributions to society in some way related to your field. And finally, three people who made significant contributions to society from any field.
Take a look at the CVs and/or biographical sketches of these people. Look particularly at the career decisions they made - at your age and later in their careers. I'm not sure what you will find, but I have my guesses:
I don't view it as a one time choice at all (if I had to pin down the point at which you made a choice, I would say mine had already passed). My point is that choosing what you do from day to day is important.
Very few people spend a lifetime doing pure research.
This appears to be true only insofar as academia is small. I know a great many people who have spent their entire lifetimes doing pure research, a great many people who are basically guaranteed to spend their entire lifetimes doing pure research, and a great many people who plan to spend their entire lifetimes doing pure research.
The people who contributed most to society did not consciously set out to do so at the start of their careers.
I think this is weak evidence for your implicit conclusion. I have to choose, for example, what I do tomorrow. Do you think that by being concerned with the effects on society I contribute less?
I don't have time right now, but perhaps I will dig through some biographies later tonight. It is certainly an interesting exercise.
I guess you feel the need to maximise your utility in society? You are certain that this is a moral necessity for you? I got into research because I thought it would be interesting, and pay reasonably well, and I didn't want to commit to a real-world job. Oh, and potentially I might make more of a difference than in other fields I'd considered. I'd also argue that it's not that easy to switch fields: I went into statistics because that's where I found my knowledge easiest to apply. I suspect I could have done as well in politics, or in historical study, or in the financial sector. I chose my sector because I felt I'd enjoy doing it.
I guess you feel the need to maximise your utility in society?
I feel a certain symmetry between my needs and others' which makes me want to try to address their needs as well as mine. Thinking about this more has caused the feeling to become less and less abstract, until recently it has acquired motive power in my decision-making process.
I am talking about altruistic justifications of research in particular because this is an argument that people have a lot, even if they don't really care about the outcome deep down. I think resolving this dissonance (if in fact it is a dissonance) will probably make at least some people apply their intelligence to furthering my values instead of doing research, which is of course something I consider important.
I feel a certain symmetry between my needs and others' which makes me want to try to address their needs as well as mine. Thinking about this more has caused the feeling to become less and less abstract, until recently it has acquired motive power in my decision-making process.
:D Awesome! Did you just think about it more in general, or was there a particular kind of thinking about it that made it more salient to your decision-making process? If all it takes to internalize abstract (far mode) philosophical intuitions is thinking about them repeatedly then I have greater hope for a few SIAI Visiting Fellows' work on meta-optimized spaced repetition techniques.
There are many types of math, with differing sorts of value, but I can say a little about the sort of math I find moving.
I agree with you. For the most part, applied souls dream up their advances and make them without relying on the mathematical machine. They invent the math they need to describe their ideas. Or perhaps they use a little of the pure mathematician's machine, but quickly develop it in ways that are more important to their work than the previous mathematical meanderings.
I think you underestimate the role of mathematics as the grand expositor. It is the tortoise that trails forever beyond the hare of applied science. It takes the insights of applications, of calculus for example, and digests them. It reworks them, understands them, connects them, rigorizes them.
The work of mathematics is not useful in your mind because a mathematician does not make a truly new applied advance. A mathematician invents and connects notations to ease the traversal, the learning, and most importantly the storage in working memory of past insights.
What is the purpose of a category? An operad? A type theory? A vector bundle? The digit 0? When these languages were introduced, it could always be claimed they were worthless because the old languages could express the same content as these new languages. But somehow the new language makes it easier to conceptualize and think about the old ideas; it increases the working human RAM.
And what of the poor student? He who must learn so many subjects is grateful when it is realized that many of those subjects are in fact the same: http://arxiv.org/abs/0903.0340 . Mathematics digests theories and rewrites them as branches of a common base. It makes it possible to learn more insights quickly and to communicate them to the next generation.
So young applied scientists, perhaps generations later, benefit by more compactly and elegantly understanding the insights of their forebears. Then, the mathematician dreams, they are freer to envision the next great ideas: http://arxiv.org/abs/1109.0955
So why the mathematician's focus on solving specific problems? Why so much energy to characterize finite groups? It is not that these problems are important. It is that they serve as testbeds for new languages, for new characterizations of old insights. The problems of pure math are invented as challenges to understand an old applied language, not to invent a new one.
After writing the third paragraph below, it would appear I think your rebuttals to Arguments 4 & 7 are most salient. I believe that fundamental research is very important for making future discoveries possible, but I have an argument for that which you don't list. The great minds that gave us theories of gravitation, evolution, and quantum mechanics all learned their fields by doing research. Some of them did basic theoretical work, and others did applied work. But even the ones who did applied work may have learned significantly from people who were more suited to, or who more enjoyed, theoretical work.
So, if you really, really love the theoretical stuff, and you're just worried that you'll feel your life has been a waste at the end, being afraid of lifelong commitment is not a good reason by itself to fail to commit. Honest curiosity about other fields is valid, and if that's at least part of what you're feeling, read on. (Well, you may read on anyway, I'm aware :)
What else have you tried doing? Have you ever worked in a position where you did something other than pure research? If the answer is "No," I would say you should definitely value trying something else, at least for a little while. If you are currently a research graduate student, you are in the perfect position to take a year off and do just that. Apply for an internship to do math modeling for an oil company (or work at a radio telescope, or something else that has a practical application). I did a one-year internship at Los Alamos National Labs as a spacecraft payload operator based on my undergrad physics degree. In a "real job," you have several different kinds of responsibilities--not just different responsibilities, but different kinds. I checked the daily health reports on the satellite, yes. But I also attended meetings of top astrophysicists, getting insight into how they think and what they do. (One of those scientists, Roger Fenimore, taught me the lesson that the people who really make important things happen often get experience from multiple disparate fields, and then notice important connections between them.) I investigated small failures in the satellite data, learning about materials science and clean room procedures along the way. I gave tours of our facility to visitors. I participated in a student council, helping to improve student life in a small, isolated town.
If you're a professor already, you're in a bit more of a pickle, because there's no guarantee of a place to come back to if you leave. Still, it might be worth the risk.
[26 Feb 2011: Edited for intended generality. I do not think working for an oil company is really your only choice. It's just an example of something a math friend of mine did.]
If you're a professor already, you're in a bit more of a pickle, because there's no guarantee of a place to come back to if you leave. Still, it might be worth the risk.
It is customary for professors to take a year of sabbatical every several years. So it would still probably be possible to take off a year with a guaranteed job at the end.
(that said, at least in the fields I'm familiar with, the sabbatical is supposed to be a working holiday and a chance to start a new project in your own field, rather than to try something fairly different)
The way most people can best contribute to society is to make as much money as possible and donate much of it to a charity that offers a high social return per dollar.
If you contribute to a charity that increases by one part in a trillion the probability of mankind surviving the next century, and if conditional on this survival mankind will colonize the universe and create a trillion times a trillion sentient lifeforms, then your donation will on average save a trillion lives.
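The arithmetic behind this claim, made explicit (exact rationals avoid floating-point noise with numbers this extreme):

```python
from fractions import Fraction

p_increase = Fraction(1, 10**12)     # one part in a trillion
future_lives = 10**12 * 10**12       # a trillion times a trillion lifeforms
expected_lives_saved = p_increase * future_lives
assert expected_lives_saved == 10**12  # a trillion lives in expectation
```

The product of a vanishingly small probability shift and an astronomically large payoff is what drives the conclusion; the whole argument stands or falls on whether one accepts multiplying numbers at these scales.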
Should we not have at least some good evidence that the world has been measurably changed by charitable actions before positing this? Can we also establish that the making of as much money as possible does not itself have costs and do damage?
It can be easily, even sleepily argued that many of the popular vehicles for becoming wealthy are quite destructive. We can happily found charities to ameliorate this damage, but what of it?
You may have excellent arguments to support this charity statement, but these are not at all apparent to me. Please do enumerate them if you have a moment.
To give my own answer, I think the single best contribution that a person can make to society is to raise a child (genetically related or adopted) educated in the sciences and in reason, and with mind strong and nimble and ready to apply this knowledge in any field she finds to be interesting.
Should we not have at least some good evidence that the world has been measurably changed by charitable actions before positing this?

By that logic, wouldn't we need good evidence that it hasn't been measurably changed before refraining from positing it?
In any case, Give Well looks into a lot of charities. There's many where the difference is quite obvious.
Can we also establish that the making of as much money as possible does not itself have costs and do damage?

It makes some difference what you do, but it's not the same order of magnitude. You don't have to kill someone to earn a thousand dollars. You don't have to blind someone for $25.
To give my own answer, I think the single best contribution that a person can make to society is to raise a child (genetically related or adopted) educated in the sciences and in reason, and with mind strong and nimble and ready to apply this knowledge in any field she finds to be interesting.
I don't know of a specific charity that does the same thing but better, which would be an ideal counterargument. That said, raising a child can cost hundreds of thousands of dollars. Is it worth hundreds of lives? Thousands of peoples' sight?
Also, it seems to be based on the idea that what you do is more important that what charity you donate to. It seems like it would be better to raise them to donate large amounts of money to charity. Or to try to convince people you know to donate.
If you contribute to a charity that increases by one part in a trillion the probability of mankind surviving the next century, and if conditional on this survival mankind will colonize the universe and create a trillion times a trillion sentient lifeforms, then your donation will on average save a trillion lives.
Alternately, if you do work that increases by one part in a trillion the probability of mankind surviving the next century...
=======
I think there is a lot of value in intelligent charity, but it's a mistake to assume that all careers have the same inherent non-monetary value to society (or to approximate the non-monetary value of all careers as zero). If I understand correctly, the underlying thinking is that the difference in salary between theoretical research and some sort of high-pay job (when multiplied by the value of donating that money to effective charities) outweighs the difference in non-monetary career value?
The way most people can best contribute to society is to make as much money as possible and donate much of it to a charity that offers a high social return per dollar.
People on LW who would be going into pure research are probably not most people. I don't think this is true of anyone smart enough to make significant mathematical contributions. For example, I believe I can contribute much (much) more to society than the benchmark you describe.
Anyone smart enough to "make significant mathematical contributions" is also smart enough to make a significant amount of money in finance.
"I believe I can contribute much (much) more to society than the benchmark you describe" How? Although I'll understand if privacy concerns cause you to not answer.
I'm not sure that this is true. Yes, someone who is smart enough to get into pure maths research will be smart enough to get a decent job in finance, and perhaps earn a few hundred thousand dollars each year, but I'm not sure they'll necessarily scale to higher levels. I suspect the skills required are not completely transferable, and the ability to have mathematical insights might not be isomorphic to making repeated sound investments. If the earning is only around 10 times as much, one could argue that in research one might well be able to achieve more.
If the earning is only around 10 times as much, one could argue that in research one might well be able to achieve more.
But then you take the higher paying job and use the extra money to hire nine researchers to study whatever really important topic you would otherwise be researching.
And where do you get these nine researchers if the smart people have all decided to go into finance?
Paul's decision to take the high-paying job doesn't cause the other smart people to do likewise. If there's currently an excess of good people wanting to do pure mathematical research over funding to pay for them, then Paul's going into finance won't change that.
That's a good point. I may be spending too much time on LW and thinking about decisions in an abstract setting where one expects all the similar actors to act essentially the same way using TDT or UDT. Or that may just be a poor excuse for me not thinking.
Right, but if you 'earned' the money as a financier in part by duping a hundred people into predatory mortgage loans that cost them their homes, and then the college enrollment rate of those hundred people's kids gets cut in half as a result, have you really caused a net increase in the amount of research being done?
Assuming your primary talents/interests/passions are in something like academic research rather than practical finance, I think it's very challenging to net $100K+ after taxes and lifestyle expenses (you're eating in restaurants and taking taxis because you are working 80+ hr weeks, etc.) without creating large negative externalities.
and then the college enrollment rate of those hundred people's kids gets cut in half as a result, have you really caused a net increase in the amount of research being done?
Yes. Most undergraduates don't become researchers, and most of those who do won't specialize in the most important topics.
Counterfactual: if you did not enter finance, would significantly fewer people get duped, lose their homes, their access to a college education? Does your marginal contribution to those negative externalities exceed the good you can do with earning that extra money?
EDIT: s/network externalities/negative externalities
Yes, unless you think that, on average, the finance-minded person who you out-competed for the job will give up and go home, or will switch to something like research.
Unlike, say, posts in a state bureaucracy, where there may be a fixed number of positions available, the supply of jobs in the finance industry is elastic with respect to the number of people seeking jobs...if you can't get a job with Goldman Sachs, you can try to convince a smaller firm that wasn't planning on hiring to take you on anyway, or you can try to raise money with friends to start your own fund.
In any given economy, there are a fixed number of arbitrage opportunities that (a) pose minimal negative externalities, (b) are lucrative enough to pay your $100K plus salary, and (c) can be discovered and exploited by a person of average talent. In most of the Western world at the moment, there are significantly more financiers than would be required to exploit these opportunities; the remainder are necessarily exploiting opportunities that fail one or more of the criteria. We are assuming, for purposes of the argument, that you want to make a lot of money but you're not a finance genius, so if you add another financier to the economy by switching careers, you must be increasing negative externalities.
You need to consider your marginal "duping". If the same amount of duping would have taken place had you not been in the industry, then your duping imposes zero social costs.
$100K+ jobs also create the enormous positive externality of generating lots of tax revenue.
Er... I agree with you about needing to consider one's marginal "duping", vs. the duping that would be done by one's substitute.
But, by the same token, surely you also need to consider one's marginal impact on tax revenue, vs. the impact of the person who would otherwise have one's job.
I'd like to add to this. If you don't care so much about society that you're willing to give up your life for it, your best bet is probably to donate some to charity. Changing your job would be giving up more and changing society less.
If you think that people working in synthetic biology and bioengineering are doing worthwhile work (and I entirely agree that they are), then go help them. Why the ennui? Set yourself to spend a month investigating these fields and find if you are able to suss out interesting ideas that might (how can you know?) be of use. If your imagination is sparked, then you should find a job in a lab on a trial basis and take your investigations further. I would encourage anyone with a good mind to go into this area of research, as it will doubtless benefit me (I cannot speak to society).
I think your arguments against the utility of mathematics can be applied generally to any science, which is why I reject them. However, the weakness of my objection (it relies on unstable induction) is also the weakness of your argument. Look, sure, you cannot KNOW that what you are doing is going to result in something useful. But I see no evidence at all that anyone who has made worthwhile discoveries knew otherwise. It just is not true, we have no evidence for it, that Newton set out to lay down the mathematical foundations of physics for the benefit of anyone. He seems to have done it for reasons of curiosity and perhaps ambition. I imagine he had a bit of fun with it. Like it or not, this is why people do things, especially when said things require years of work.
I would posit (but do not know) that if you do want to make a useful contribution, the state of ignorance is exactly the right position to be in. The x-ray, the laser, the computer, antibiotics, physics, Greek geometry, etc. etc. down the line are the result of accident, aimless research motivated only by curiosity, or people having fun with ideas. Some of these might even have been the result of chaps trying to get the girl. That is how it goes. I see almost no evidence at all (with exceptions for specific technologies, the airplane, for example) that the best way to go about making discoveries is to try to make specific discoveries. You get interested in something and, if you find something useful, good for you, but most people do not. Given this, we should expect the most successful scientists to be motivated by curiosity, playfulness, and perhaps a little ambition. A survey of the history of science reveals, I think unquestionably, that yes, this is exactly the case!
It is certainly possible, even likely, that if you do spend your time doing theoretical math, that you will do nothing of importance or use to society. The chances, I think, are, at best, only very slightly better if you switch fields to do something else. You should do what gets you excited and interested, because only then, no matter what your pursuit, can you really increase your chances of doing something useful for yourself and society. At the very least, you will be happy, and that is not nothing.
I think this is a very important topic. Clearly the real goal would be a general algorithm that would allow young people to decide on their career paths in such a way to have maximum positive benefit to society. Such an algorithm would necessarily give different answers in different cases (i.e. it obviously could not output "do math research" for everyone - that would be a catastrophe).
I don't know how to design that algorithm, but one heuristic rule I think would be useful is to ask: "are you independently wealthy?" If so, you should think more about careers in low-paying areas like mathematics, physics, literature, or art. If not, you should think about how to become wealthy (then your kids can become poets).
Overall, I think more people should be encouraged to pursue quotidian careers in areas that actually build tangible wealth, like construction, manufacturing, import/export, and traditional business (i.e. not dot-com or biotech). The fact that we don't encourage more people to do this kind of work stems from our weird cultural fascination with "education" and "knowledge industries".
Overall, I think more people should be encouraged to pursue quotidian careers in areas that actually build tangible wealth, like construction, manufacturing, import/export, and traditional business (i.e. not dot-com or biotech).
If [you are independently wealthy], you should think more about careers in low-paying areas like mathematics, physics, literature, or art.
Right now I actually believe that smart people doing foundational work in biotechnology and synthetic biology are creating wealth at a greater rate than almost anyone else in society, though this is an essentially factual question about which we could argue.
I don't think independently wealthy people should go into mathematics, physics, literature, or art. I would be strongly tempted to go into mathematics, physics, or theoretical computer science if I weren't independently wealthy, since I'm basically completely confident I can make a comfortable living in any of those fields. Having money allows you to do riskier things, or things for which society might not compensate you at all.
One aspect of fundamental research (and research in general) that I see missing from this post and many other explanations of why it is not the best use of your time is that it is incremental. With some very rare exceptions, the maths you actually need, even if developed at the time when it was needed, depends on many things that had to be found prior to that.
The example that comes to mind, and was not mentioned in the post or the comments (as far as I know), is the birth of computer science. You can say: yay, Turing "invented" (with a lot of other people) theoretical computer science to solve concrete problems, when it was needed. But that would completely obscure the fact that Turing builds heavily on top of Gödel, who solved questions of a purely mathematical nature. And among the ideas in Gödel's work essential to the birth of computer science, diagonalization goes back to Cantor, whose work concerns some of the most pure and abstract maths ever.
That being said, I do agree from experience that many arguments one makes about justifying doing maths or theoretical computer science do not hold under scrutiny. Yet for the reason I give above, I still think pure theoretical research is necessary.
There is a shortage of intelligent, rational people in pretty much every area of human activity. I would go so far as to claim this is the limiting input for most fields.
Uhhh I find this statement extremely surprising. I mean, come on: which labor markets see wages skyrocketing for the lack of intelligent, rational, trained professionals? Certainly not most of them. In fact, outside of a few bubble and rent-seeking fields like software development and investment banking, I do believe most of the developed world's job markets for professionals are in gluts right now, with few shortages of workers apparent anywhere. And it was more so in 2011, when some of the world's economies were growing a bit slower.
I'm not confusing the categories; I'm just holding (from experience) that P(intelligent, rational | diploma) > P(intelligent, rational | ~diploma)
Actually, no, hold on. While I do tend to hold that, I didn't state or use that assumption anywhere in the statement. In fact, even mentioning it reinforces my thesis: fields that genuinely have high demand for workers miraculously (rolls eyes) stop caring so much about the paper credentials in favor of real experience and productivity.
I am surprised that no one brought up the following point yet (this being LW, after all): if you believe that strong AI will be developed before applications for the theoretical research are found, then, from a purely practical point of view, you shouldn't dedicate your efforts to the research. It is more efficient to assist the development of strong AI instead of doing the research yourself.
As an applied mathematician, I disagree very strongly with these claims. See the paper "The Dawning of the Age of Stochasticity" by David Mumford:
http://www.dam.brown.edu/people/mumford/Papers/OverviewPapers/DawningAgeStoch.pdf.
I think the fundamental flaw in this line of reasoning is that you are forgetting just how recently calculus was invented. It was not at all that either Leibniz or Newton sat down at the desk and decided to develop the calculus. I can hardly think of a more apt moment to quote Newton himself: "What Descartes did was a good step. You have added much several ways, and especially in taking the colours of thin plates into philosophical consideration. If I have seen a little further it is by standing on the shoulders of Giants."
I would argue that the deterministic developments of what we currently consider to be "the great mathematicians" were, more or less, low-hanging fruit. That, in fact, turning a corner with Godel and Turing, mathematics research, especially in probabilistic modeling, is actually accelerating.
As to whether the pursuit of pure research is the right career choice for a given person, or whether that person can adequately assess how much of a contribution he or she can make in a research field, I feel that there are many additional factors to be weighed. For instance, many people who pursue research also pursue creative endeavors of other types and would prefer to have an unstructured career that affords them time. Being a research professor tends to concentrate stress and working hours into clumps throughout the year, leaving other spans of time with significantly fewer responsibilities. If a person wants to make a small mark in a research domain and also wishes to seriously pursue an art, such as writing or photography, this is a perfectly sensible career choice. If, however, they are strictly trying to maximize the social benefit of their having existed on the Earth, then research may not be the right pursuit. But, in that case, the computational effort expended to try to model one's own eventual contribution could be highly variable and depends on certain types of honesty which people, especially at the ages when long-term career choices are made, rarely employ.
I would also be interested in thinking more about a type of Moore's Law related to the dramatic effect of mathematical progress. If you normalize the "amount of improvement" due to a "tiny mathematical victory" in modern research by the amount of improvement of say calculus of variations, you certainly won't feel that you've accomplished a lot. The tiny blood vessels in the ends of your fingertips might not be seen to be as important as your femoral artery to a layman, but a medical scientist knows that, to some degree, "it don't work that way."
Very few people are good at creating a large surplus: they are the subset of people who are good at making money and bad at spending it. If there is a chance you are one of these rare individuals who can throw off excess wealth, you should pursue it and donate money to SIAI and SENS. This sort of person is much, much rarer than a decent researcher.
For many of us, choosing a career path has a dominant effect on our contribution to the society. For those of us who care what happens to society, this makes it one of the most important decisions we make. Like most decisions, this one is very often made by impulses significantly below the level of conscious recognition, with considerable intellectual effort spent on justifying a conclusion but very little spent on actually reaching one. In the case of smart, altruistic rationalists, this seems like the most tragic failure of rationality; so, whatever the outcome, I advocate much more serious consideration by smart rationalists of how our career choices affect society. For the most part this is a personal thing, but some public discussion may be valuable. I apologize (largely in advance) for anything that seems condescending.
I previously planned to do research in pure math (and more recently in theoretical computer science). I frequently justified my position with carefully constructed arguments which I no longer believe. It still may be the case that doing research is a good idea (and spending the rest of my life doing research is still the easiest possible career path for me), so I am interested in additional arguments, or reasons why anything I am about to write is wrong. Here is a basic list of my justifications, and why I no longer believe them.
Argument 1: Much math is practically important today. The math I am working on is not practically important today, but maybe it will be the math that is practically important tomorrow. How can we predict what will be useful? It seems like pushing math generally forward is the best response to this uncertainty.
Rebuttal: If we really want to evaluate this argument, it is important to understand the conditions under which the important math of today was done. In the case of calculus, differential equations, statistics, functional analysis, linear algebra, group theory, and numerical methods, the important results for modern work were in fact developed after their usefulness could be appreciated by an intelligent observer. There is very little honestly compelling evidence that pushing math for the sake of pushing math is likely to lead to practically important results more effectively than waiting until new math is needed and then developing it. Perhaps the most compelling case is number theory and its unexpected application to cryptography, which is still not nearly compelling enough to justify work on pure math (or even provide significant support).
Argument 2: Math is practically important today. The math I am working on is in a field that is practically important today, and not many people are qualified to work on it, so pushing the state of the art here is an excellent use of my time.
Rebuttal: Consider the actual marginal utility of advances in your field of choice, honestly. In the overwhelming majority of cases, the bulk of research effort is directed grotesquely inefficiently from a social perspective. In particular, a small number of largely artificial applications will typically support research programs which consume an incredible amount of intelligent mathematicians' time, compared to the time required to make fundamental progress on the actual problem that people care about. Here you have to make a different argument for every research program, which I would be happy to do if anyone offers a particular challenge.
Argument 3: Theoretical physics research advances the fundamental limits of understanding, which has led to important advances in the past and will probably continue to lead to important advances.
Rebuttal: What matters are interactions in regimes that humans can engineer---improving understanding in such regimes is responsible for every technological development I am aware of. In particular, improvements in our understanding of high energy physics or cosmology are unlikely to be useful until we can design systems which operate in those regimes. There is fundamental physics research which seems likely to have a high payoff---but if you approach theoretical physics with the honest goal of contributing to technological progress, you end up with a research program which is unrecognizably different from most physicists'.
Argument 4: Pure research is at least a little useful, and it's what I am best prepared to do.
Rebuttal: There is a shortage of intelligent, rational people in pretty much every area of human activity. I would go so far as to claim this is the limiting input for most fields. If you don't believe this, at least ask yourself why not. Do you have experience in other fields that suggests you are unable to contribute? Do you have a causal argument?
Argument 5: Society is relatively efficient. The marginal returns for work in every field are roughly comparable, so I should work wherever I have comparative advantage.
Rebuttal: Why should society be remotely efficient? I believed this for a long time, but eventually realized it was just a holdover from a point in my life when I had more faith in other people. If you are a typical LW reader, you probably believe at least half a dozen strong counterexamples to this claim already.
Argument 6: Pure research has fundamental value as an intellectual pursuit.
Rebuttal: For whom? If you are concerned exclusively with the intellectual richness of mathematicians' lives, then I can't well disagree and this argument may be completely convincing. Otherwise, if you believe that the increasing richness of human mathematics is a fundamental good which non-mathematicians can enjoy, consider the inferential distances separating modern advances from even the most intelligent layperson. If your ultimate goal is the production of mathematics, or in fact any temporally altruistic objective, then consider alternatives which may increase the future's capacity to do mathematics and which may be orders of magnitude more effective.
Argument 7: What else would I do to make a living? Research provides at least some benefit to society; alternatives seem even worse.
Rebuttal: My past self, at least, was guilty of motivated stopping. See argument 4.