Among my friends interested in rationality, effective altruism, and existential risk reduction, I often hear: "If you want to have a real positive impact on the world, grad school is a waste of time. It's better to use deliberate practice to learn whatever you need instead of working within the confines of an institution."

While I'd agree that grad school on its own won't make you do good for the world, if you're a self-driven person who can spend your time in a PhD program deliberately acquiring skills and connections for making a positive difference, I think grad school can be a highly productive path, perhaps more so than many alternatives. In this post, I want to share some advice I've been repeating a lot lately on how to do this:

  1. Find a flexible program. PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you a great deal of free time and flexibility, provided you can pass the various qualifying exams without too much studying. By contrast, sciences like biology and chemistry can require time-consuming laboratory work that you can't always speed through by being clever.


  2. Choose high-impact topics to learn about. AI safety and existential risk reduction are my favorite examples, but there are others, and I won't spend more time here arguing their case. If you can't make your thesis directly about such a topic, choosing a related, more popular topic can give you valuable personal connections, and you can still learn whatever you want during the spare time a flexible program will afford you.


  3. Teach classes. Grad programs that let you teach undergraduate tutorial classes provide a rare opportunity to practice engaging a non-captive audience. If you just want to work on general presentation skills, maybe you practice on your friends... but your friends already like you. If you want to learn to win over a crowd that isn't particularly interested in you, try teaching calculus! I've found this skill particularly useful when presenting AI safety research that isn't yet mainstream, which requires carefully stepping through arguments that are unfamiliar to the audience.


  4. Use your freedom to accomplish things. I used my spare time during my PhD program to cofound CFAR, the Center for Applied Rationality. Alumni of our workshops have gone on to do such awesome things as creating the Future of Life Institute and sourcing a $10MM donation from Elon Musk to fund AI safety research. I never would have had the flexibility to volunteer for weeks at a time if I'd been working at a typical 9-to-5 or a startup.


  5. Organize a graduate seminar. Organizing conferences is critical to getting the word out on important new research, and in fact, running a conference on AI safety in Puerto Rico is how FLI was able to bring so many researchers together on its Open Letter on AI Safety. It's also where Elon Musk made his donation. During grad school, you can get lots of practice organizing research events by running seminars for your fellow grad students. In fact, several of the organizers of the FLI conference were grad students.


  6. Get exposure to experts. A top-10 US school will have professors around who are world experts on myriad topics, and you can attend departmental colloquia to expose yourself to the cutting edge of research in fields you're curious about. I regularly attended cognitive science and neuroscience colloquia during my PhD in mathematics, which gave me many perspectives that I found useful working at CFAR.


  7. Learn how productive researchers get their work done. Grad school surrounds you with researchers, and by getting exposed to how a variety of researchers do their thing, you can pick and choose from their methods and find what works best for you. For example, I learned from my advisor Bernd Sturmfels that, for me, quickly passing a draft back and forth with a coauthor gets a paper written much faster than agonizing over each revision before I share it.


  8. Remember you don't have to stay in academia. If you limit yourself to only doing research that will get you good post-doc offers, you might find you aren't able to focus on what seems highest impact (because often what makes a topic high impact is that it's important and neglected, and if a topic is neglected, it might not be trendy enough to land you a good post-doc). But since grad school is run by professors, becoming a professor is usually the most salient path forward for most grad students, and you might end up pressuring yourself to meet the standards of that path. When I graduated, I got my top choice of post-doc, but then I decided not to take it and to instead try earning to give as an algorithmic stock trader, and now I'm a research fellow at MIRI. In retrospect, I might have done more valuable work during my PhD itself if I'd decided in advance not to do a typical post-doc.

That's all I have for now. The main sentiment behind most of this, I think, is that you have to be deliberate to get the most out of a PhD program, rather than passively expecting it to make you into anything in particular. Grad school still isn't for everyone; far from it. But if you were seriously considering it at some point, and "do something more useful" felt like a compelling reason not to go, be sure to first consider the most useful version of grad school that you could reliably make for yourself... and then decide whether or not to do it.

Please email me (lastname@thisdomain.com) if you have more ideas for getting the most out of grad school!

155 comments

It's striking how much value there is in academia that I didn't notice, and that a base-level rational person would've noticed if they'd asked "what are the main blind spots of the rationality community, and how can I steelman the opposing positions?". Not a good sign about me, certainly.

Also, is that your actual email address?

I have been talking about this very issue for ages here on LW. "Rationalists" (the tribe, not the ideal platonic type) share a ton of EY's biases, including anti-academic sentiment.

1Ben Pace8y
Question: Did you make a post of this nature before?
3IlyaShpitser8y
I don't write top level posts, but I took issue w/ Luke taking a shit on academic philosophy, for instance.
0Ben Pace8y
I don't see that the above post refutes any arguments Luke made about academic philosophy. What were the basics of your disagreements with his arguments?
3IlyaShpitser8y
Luke is not qualified to shit on academic philosophy. He simply doesn't have the background or the overview. And it's a terrible idea for social reasons, it just makes people not take LW seriously. I would be happy to accept critiques of the philosophy establishment from e.g. Clark Glymour, not from Luke. There is a ton of value in philosophy you are leaving on the table if you shit on philosophy. My other big annoyance is the "LW Bayesians" (who are similarly not qualified generally to have strong opinions about these issues, and instead should read stats/ML literature). Although I should say very sophisticated stats folks occasionally post here (but I don't count them among the "LW Bayesians" number, as they understand issues with Bayes very well).
3Ben Pace8y
Could you provide an object-level counterargument, please? A strong one would lend a lot of credence to the claim that Luke's work was not an accurate portrayal of academic philosophy. (Three would be preferred.) (An object-level argument might look like "philosophers are making useful progress by metric X" or "I expect philosophers' work to be very useful in area of science a because b" or "doing a PhD in philosophy has lots of value in the world for reasons p, q and r")
3IlyaShpitser8y
I am not very interested in convincing you. You said: So look for the value! Don't write the entire field off, lots of smart people there, probably you are missing something. ---------------------------------------- But for example quite a few very smart causal inference people are in philosophy. That conference on decision theory MIRI went to in Cambridge was hosted by philosophers. Some philosophers deal with very hard problems that do not map onto empiricism very well, etc.
3pragmatist8y
I think Luke will agree with you on what you say here, though. I remember commenting on one of his posts that was critical of philosophy, saying that his arguments didn't really apply to the area of philosophy I'm involved in (technical philosophy of science). Luke's response was essentially, "I agree. I'm not talking about philosophy of science." I think he'd probably say the same about philosophical work on decision theory and causal inference.
4IlyaShpitser8y
Isn't that motte/bailey? "Philosophy, a diseased discipline" is not a very discriminating title. The best line of his post is this: And this is definitely ok! ---------------------------------------- But again, I am not super interested in arguing with people about whether philosophy is worthwhile. I have better things to do. I was only pointing out in response to the OP that I have been harping on LW's silly anti-academic sentiment for ages, that's all.
6pragmatist8y
Not sure it's motte-and-bailey. I do think there are several serious pathologies in large swathes of contemporary philosophy. And I say this not as a dilettante, but a professional philosopher. There are areas of philosophy where these pathological tendencies are being successfully held at bay, and I do think there are promising signs that those areas are growing in influence. But much of mainstream philosophy, especially mainstream metaphysics and epistemology, does suffer from continued adherence to what I consider archaic and unhelpful methodology. And I think that's what Luke is trying to point out. He does go overboard with his rhetoric, and I think he lacks a feel for the genuine insights of the Western philosophical tradition (as smart and insightful as I think Yudkowsky is, I really find it odd that someone who purports to be reasonably familiar with philosophy would cite him as their favorite philosopher). But I think there is a sound point lurking under there, and not merely a banal "motte"-style point. I absolutely agree with you on the silliness of the anti-academic sentiment.
4[anonymous]8y
Would you mind explaining your perspective? I'm always interested to hear more angles on this, since with my current sample-size being roughly three (Dennett, Railton, Churchland), I tend to think I have an incomplete picture.
0IlyaShpitser8y
Everyone on LW should consider Francis Bacon their patron saint, imo :).
1Lumifer8y
LW seems to have picked the Presbyterian minister Thomas Bayes as its patron saint with the Franciscan friar William of Ockham running a close second :-)
3Vaniver8y
If I had to pick one, I'd go with Laplace.
4[anonymous]8y
In defense of Luke, when I've spent the time to read through philosophy books by strong-naturalist academic philosophers, they've often devoted page-counts easily equivalent in length to "Philosophy: a diseased discipline" to carefully, charitably, academically, verbosely tearing non-naturalist philosophy a new asshole. Luke's post has tended to be a breath of fresh air that I reread after reading any philosophy paper that doesn't come from a strongly naturalist perspective.

It sincerely worries me that the academics in philosophy who do really excellent work, work that does apply to the real world-that-is-made-of-atoms, work that does map-the-territory, have to spend large amounts of effort just beating down obviously bad beliefs over and over again. You should be able to shoot down a bad idea once, preferably in the peer-review phase, and not have to fight it again and again like a bad zombie. (Examples of obviously bad ideas: p-zombies, Platonism, Bayesian epistemology (the latter two may require explanation).)

Now, to signal fairness even where I'm blatantly opinionated, plenty of people on LW are indeed irritatingly "men of one idea", that usually being some variation on AIXI. And in fact, plenty of people on LW hold philosophical opinions I consider obviously bad, like mathematical Platonism. But the answer to those bad things hasn't usually been "more philosophy", as if any philosophy is good philosophy, but instead more naturalism, investing more effort to accommodate conceptual theorizing to the world-that-is-made-of-atoms.

Since significant portions of academic philosophy (for instance, Thomas Nagel) are instead devoted to the view - one that I once expected to be contrarian but which I now find depressingly common - that science and naturalism are wrong, or that they are unjustified, or that they are necessarily incapable of answering some-or-another important question - having one page on a contrarian intellectual-hipsters' website devoted to raggin
8Lumifer8y
That word, "obviously", I don't think it means what you think it means :-)
6iarwain18y
Could you provide that explanation?
0[anonymous]8y
Sure. If we take Platonism to be the belief that abstract objects (take, for instance, the objects of ZFC set theory) actually exist in a mind-independent way, if not in a particularly well-specified way, then it occurs because people mistake the contents of their mental models of the world for being real objects, simply because those models map the world well and compress sense-data well. In fact, those models often compress most sense-data better than the "more physicalist" truth would: they can be many orders of magnitude smaller (in bits of program devoted to generative or discriminative modelling). However, just because they're not "real" doesn't mean they don't causally interact with the real world! The point of a map is that it corresponds to the territory, so the point of an abstraction is that it corresponds to regularities in the territory. So naive nominalism isn't true either: the abstractions and what they abstract over are linked, so you really can't just move names around willy-nilly.

In fact, some abstractions will do better or worse than others at capturing the regularities in sense-data (and in states of the world, of course), so we end up saying that abstractions can exist on a sliding scale from "more Platonic" (those which appear to capture regularities we've always seen in all our previous data) to "more nominalist" (those which capture spurious correlations).

Now, for "Bayesian epistemology", I'm taking the Jaynesian view, which is considered extreme but stated very clearly and precisely, that reasoning consists in assigning probabilities to propositions. People who oppose Bayesianism will usually then raise the Problem of the Prior, and the problem of limited model classes, and so on and so forth. IMHO, the better criticism is simply: propositions are not first-order, actually-existing objects (see above on Platonism)! Consider a proposition to be a set of states some model can be in or not be in, and we can still use Bayesian statistics,
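
To make the "propositions are just sets of model states" framing concrete, here is a minimal Python sketch; the weather example, the state space, and all the numbers are invented for illustration, not taken from the comment:

```python
# A "state" is one complete way a toy model says the world could be;
# the probabilities are made-up illustrative numbers.
states = {
    ("rain", "wet"): 0.30,
    ("rain", "dry"): 0.05,
    ("sun", "wet"): 0.10,
    ("sun", "dry"): 0.55,
}

# A "proposition" is not a primitive object here: it is just a subset
# of the state space (an event, in measure-theoretic terms).
def prob(proposition):
    """P(A) = total probability of the states belonging to A."""
    return sum(p for s, p in states.items() if s in proposition)

def conditional(a, b):
    """P(A | B) = P(A and B) / P(B): conditioning as renormalized restriction."""
    return prob(a & b) / prob(b)

raining = {s for s in states if s[0] == "rain"}
ground_wet = {s for s in states if s[1] == "wet"}

print(prob(raining))                     # 0.35
print(conditional(raining, ground_wet))  # 0.30 / 0.40 = 0.75
```

On this reading, Bayesian updating needs no first-order "propositions" at all: conditioning is just restricting the distribution to a subset of states and renormalizing.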
2TheAncientGeek8y
The motivation actually seems to be the Correspondence Theory of Truth, which is mentioned several times in subsequent comments.
2[anonymous]8y
Indeed; when you use a Lossy-Correspondence/Compression Theory of Truth, abstract objects become perfectly sensible as descriptions of regularities in concrete objects.
-1TheAncientGeek8y
Not really, because most maths is unphysical, i.e., physics is picking out the physically applicable parts of maths, and the rest has nothing to correspond to.
4Kaj_Sotala8y
If I remember my Lakoff & Núñez correctly, they were arguing that even the most abstract and un-physical-seeming of maths is constructed on foundations that derive from the way we perceive the physical world. Let me pick up the book again... ah, right. They define two kinds of conceptual metaphor:

Their argument is that for any kind of abstract mathematics, if you trace back its origin for long enough, you finally end up at some grounding and linking metaphors that have originally been derived from our understanding of physical reality.

As an example of the technique, they discuss the laws of arithmetic as having been derived from four grounding metaphors: Object Collection (if you put one and one physical objects together, you have a collection of two objects), Object Construction (physical objects are made up of smaller physical objects; used for understanding expressions like "five is made up of two plus three" or "you can factor 28 into 7 times 4"), Measuring Stick (physical distances correspond to numbers; gave birth to irrational numbers, when the Pythagorean theorem was used to prove their existence by assuming that there's a number that corresponds to the length of the hypotenuse), and Motion Along A Path (used in the sixteenth century to invent the concept of the number line, and the notion of a number as lying between two other numbers).

Now, they argue that these grounding metaphors, each by themselves, are not sufficient to define the laws of arithmetic for negative numbers. Rather you need to combine them into a new metaphor that uses parts of each, and then define your new laws in terms of that newly-constructed metaphor. Defining negative numbers is straightforward using these metaphors: if you have the concept of a number line, you can define negative numbers as "point-locations on the path on the side opposite the origin from positive numbers", so e.g. -5 is the point five steps to the left of the origin point, symmetrical to +5 which is five s
-2TheAncientGeek8y
To take a step back: the discussion is about mathematical Platonism, a theory of mathematical truth which is apparently motivated by the Correspondence Theory of Truth. That is being rivaled by another theory, also motivated by the CToT, wherein the truth-makers of mathematical statements are physical facts, not some special realm of immaterial entities. The relevance of my claim that there are unphysical mathematical truths is that it is an argument against the second claim. Lakoff and Nunez give an account of the origins and nature of mathematical thought that, while firmly anti-Platonic, doesn't back a rival theory of mathematical truth, because that is not in fact their area of interest... their interest is in mathematical thinking.
0[anonymous]8y
Who said that? Actual formal systems run on a coherence theory of truth: if the theory is consistent (and I do mean consistent according to a meta-system, so Goedel and Loeb aren't involved right now), then it's a theory. It may also be a totally uninteresting theory, or a very interesting theory. The truth-maker for a mathematical statement is just whether it has a model (and if you really wanted to, you could probably compile that into something about computation via the Curry-Howard Correspondence and some amount of Turing oracles). But the mere truth of a statement within a formal system is not the interesting thing about the statement!
2TheAncientGeek8y
Who said that CToT motivates mathematical Platonism, or who said that CToT is the outstanding theory of mathematical truth? I couldn't agree more that coherence is the best description of mathematical practice.
1[anonymous]8y
This one. Or rather, who claimed that the truth-makers of mathematical statements are physical facts?
0[anonymous]8y
Insofar as logic consists in information-preserving operations, the non-physically-applicable parts of math still correspond to the real world, in that they preserve the information about the real world which was put into formulating/locating the starting formal system in the first place. This is what makes mathematics so wondrously powerful: formality = determinism, and determinism = likelihood functions of 0 or 1. So when doing mathematics, you get whole formal systems where the theorems are always at least as true as the axioms. As long as any part of the system corresponds to the real world (and many parts of it do) and the whole system remains deterministic, then the whole system compresses information about the real world.
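
One way to spell out the "likelihood functions of 0 or 1" claim as an equation (this formalization is mine, not the commenter's): for a deterministic map f, the likelihood is an indicator,

```latex
P(y \mid x) =
\begin{cases}
  1 & \text{if } y = f(x), \\
  0 & \text{otherwise,}
\end{cases}
```

so conditioning on the output of a formal derivation never injects noise; it only keeps or discards possibilities, which is one way to read "the theorems are always at least as true as the axioms".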
-1TheAncientGeek8y
Whereas the physically inapplicable parts don't retain real-world correspondence. Correspondence isn't an intrinsic, essential part of maths.
-3[anonymous]8y
Sure, you can come up with a formal system that bears no correspondence to the real world whatsoever. Mathematicians just won't consider it very interesting most of the time.
3Richard_Kennaway8y
They call it "pure mathematics".
3entirelyuseless8y
Transfinite mathematics is very interesting and currently has no correspondence to the physical world, at least not in any way that anyone knows about. And you can make the argument that even if there is a correspondence, we will never know about it, because you would have to be sure that actual infinities exist in the physical world, and that would seem pretty hard to confirm.
1IlyaShpitser8y
What do cardinals correspond to? ---------------------------------------- I suppose it's about the language, not about the model. Still.
-2[anonymous]8y
Screw it. I'll just go do a PhD thesis on how abstraction works, and then anyone who wants to actually understand can read that.
0IlyaShpitser8y
A lot of pure math takes the form of: "let's take something in the real world, like 'notion of containment in a bag' and run off with it." So it's abstracting, but then it's not about the real world anymore. There are no cardinals in the real world, but there are bags.
0[anonymous]8y
Yes it is. It still consists in information from the real world. The precise structure was chosen out of an infinite space of possible structures based precisely on its ability to generalize scenarios from "real life". Consider, for instance, real numbers and continuity. The real world is not infinitely divisible -- we know this now! But at one time, when these mathematical theories were formulated, that was a working hypothesis, and in fact, people could not divide things small enough to actually find where they became discrete. So continuity, as a mathematical construct, started out trying to describe the world, and was later found to have more interesting implications even when it was also found to be physically wrong.
3IlyaShpitser8y
Ok, so you would say inaccessible cardinals, versions of the continuum hypothesis etc. are "about the real world." At this point, I am bowing out of this conversation as we aren't using words in the same way.
1Richard_Kennaway8y
I think you're engaging in deepities here. It is clearly true that all of mathematics historically descends from thoughts about the real world. It is clearly false that all of mathematics is directly about the real world. Using the same words for both claims, "mathematics is about the real world", is the deepity. That is news to me. Physicists, even fundamental physicists, still talk about differential geometry and Hilbert spaces and so on. There are speculations about an underlying discrete structure on the Planck scale or below, but has anyone refounded physics on that basis yet? Stephen Wolfram made some gestures in that direction in his magnum opus; but I read a physicist writing a review of it saying that Wolfram's idea of explaining quantum entanglement that way was already known not to work.
0[anonymous]8y
I'm actually only using it for the former.
0gjm8y
You may be thinking of Scott Aaronson's review of "A new kind of science".
-1gjm8y
I think it's at least arguable that there are plenty of cardinals in the real world (e.g., three) even though there very likely aren't the "large cardinals" that set theorists like to speculate about. (Of course there are also cardinals and cardinals.)
0IlyaShpitser8y
What was the point of writing that? Do you think I was talking about "3"?
0gjm8y
I (1) was genuinely unsure whether you were asserting that numbers (even small positive integers) are too abstract to "live" in the real world -- a reasonable assertion, I think, though I thought it probably wasn't your position -- and (2) thought it was amusing even if you weren't.
0TheAncientGeek8y
But that's not at all relevant. The existence of unphysical maths is a robust argument against the theory that mathematical truths are true by correspondence to the physical world. The interestingness of such maths is neither here nor there.
0Richard_Kennaway8y
I found a partial answer to the question I asked in the sibling comment. By chance I happened to need to generate random chords of a circle covering the circle uniformly. In searching on the net for Jaynes' solution I came across a few fragments of Jaynes' views on infinity. In short, he insists on always regarding continuous situations as limits of finite ones (e.g. as when the binomial distribution tends to the normal), which is unproblematic for all the mathematics he wants to do. That is how the real numbers are traditionally formalised anyway. All of analysis is left unscathed. His wider philosophical objections to such things as Cantor's transfinite numbers can be ignored, since these play no role in statistics and probability anyway.

I don't know about the technicalities regarding Cox's Theorem, but I do notice a substantial number of papers arguing about exactly what hypotheses it requires or does not require, and other papers discussing counterexamples (even to the finite case). The Wikipedia article has a long list of references, and a general search shows more.

Has anyone written an up-to-date review of what Cox-style theorems are known to be sound and how well they suffice to found the mathematics of probability theory? I can google /"Cox's theorem" review/ but it is difficult for me to judge where the results sit within current understanding, or indeed what the current understanding is.
0[anonymous]8y
I don't know. But I will say this: I am distrustful of a foundation which takes "propositions" to be primitive objects. If the Cox's Theorem foundation for probability requires that we assume a first-order logic foundation of mathematics in general, in which propositions cannot be considered as instances of some larger class of things (as they can in, to name a personal favorite, type theory), then I'm suspicious. I'm also suspicious of how Cox's Theorem is supposed to map up to continuous and non-finitary applications of probability -- even discrete probability theory, as when dealing with probabilistic programming or the Solomonoff measure. In these circumstances we seem to need the measure-theoretic approach.

Further: if "the extension of classical logic to continuous degrees of plausibility" and "rational propensities to bet" and "measure theory in spaces of normed measure" and "sampling frequencies in randomized conditional simulations of the world" all yield the same mathematical structure, then I think we're looking at something deeper and more significant than any one of these presentations admits. In fact, I'd go so far as to say there isn't really a "Bayesian/Frequentist dichotomy" so much as a "Bayesian-Frequentist Isomorphism", in the style of the Curry-Howard Isomorphism. Several things we thought were different are actually the same.
0Richard_Kennaway8y
I don't follow your argument re Bayesian epistemology; in fact, I find it not at all obvious. The argument looks like insisting on a different vocabulary while doing the same things, and then calling it statistics rather than epistemology. Can you give a pointer to where he disbelieves in these? He does refer to them apparently unproblematically here and there, e.g. in deducing what a noninformative prior on the chords of a circle should be.
0[anonymous]8y
1) Dissolving epistemology to get statistics of various kinds underneath is a good thing, especially since the normal prescription of Bayesian epistemology is, "Oh, just calculate the posterior", while in Bayesian statistics we usually admit that this is infeasible most of the time and use computational methods to approximate well.

2) The difference between Bayesian statistics and Bayesian epistemology is slight, but the difference between Bayesian statistics and the basic nature of traditional philosophical epistemology that the Bayesian epistemologists were trying to fit Bayesianism into is large.

3) The differences start to become large when you stop using spaces composed of N mutually-exclusive logical propositions arranged into a Boolean algebra. For instance, computational uncertainty and logical omniscience are nasty open questions in Bayesian epistemology, while for an actual statistician it is admitted from the start that models do not yield well-defined answers where computations are infeasible.

I can't, since the precise page number would have to be a location number in my Kindle copy of Jaynes' book.
0Richard_Kennaway8y
A brief quote will do, enough words to find them in my copy.
-1Richard_Kennaway8y
How do you account for the fact that numbers are the same for everyone? Of course, not everyone knows the same things about numbers, but neither does everyone know the same things about Neptune. Nevertheless, the abstract objects of mathematics have the same ineluctability as physical objects. Everyone who looks at Neptune is looking at the same thing, and so is everyone who studies ZFC. These abstract objects can be used to make models of things, but they are not themselves those models.
2[anonymous]8y
Two correct maps of the same territory, designed to highlight the same regularities and obscure the same sources of noise, will be either completely the same or, in the noisy case, will approximate each other. Just because there's no Realm of Forms doesn't mean that numbers can be different for different people without losing their ability to compressively predict regularities in the environment.
1Richard_Kennaway8y
1. What is the territory, that numbers are a map of? I can use them to assemble a map, for example, s=0.5at^2 as a map, or model, of uniformly accelerating bodies, but the components of this are more like the ink and paper used to make a map than they are like a map.

2. I have a bunch of maps, literal printed maps of various places, and the maps certainly exist as physical objects, alongside the places that they are maps of. They exist independently of me, and independently of whether anyone uses them as a map or as wrapping paper. Likewise, it seems to me, numbers.
-1VoiceOfRa8y
If there is no Realm of Forms, what territory are you referring to?
4TheAncientGeek8y
The ordinary physical universe, presumably.
0[anonymous]8y
As TheAncientGeek said, the ordinary physical universe. "Abstract" objects abstract over concrete objects.
-3VoiceOfRa8y
And where in the ordinary physical universe do these abstractions live?
3[anonymous]8y
Again: they abstract over concrete objects. You get a map that represents lots of territories at the same time by capturing their common regularities and throwing out the details that make them different.
-1VoiceOfRa8y
So do you claim these abstractions actually exist?
0[anonymous]8y
The abstract maps exist. The abstract territory does not.
-2VoiceOfRa8y
In which case we're back to the question of why numbers are the same for everyone. You said: Except you claim there's no same territory. The aliens in the Andromeda galaxy will have the same numbers as us, as will the sentient Turing machines that evolved in a cellular automaton. So what common territory are they all looking at?
8Kaj_Sotala8y
The physical world: see here.
1TheAncientGeek8y
You could equally say that everyone who looks at the rules of chess sees the same thing. In order to show some inevitability to ZFC, you have to show that unconnected parties arrive at it independently.
1Richard_Kennaway8y
On the one hand, why? I'm quite happy to say that chess exists. Not everyone will ever see chess, but not everyone will ever see Neptune. Among all the games that could be played, chess is but one grain of sand on the beach. But the grain of sand exists regardless of whether anyone sees it. On the other hand, there has been, I believe, a substantial tendency for people devising alternative axioms for the concepts of sets to come up with things equiconsistent to ZFC or to subsets of ZFC, and with fairly direct translations between them. Compare also the concept of computability, where there is a very strong tendency for different ways to answer the question "what is computation?" to come up with equivalent definitions.
2gjm8y
It is (I think) true that if you try to come up with an alternative foundation for mathematics you are likely to get something that's equivalent to some subset of ZFC perhaps augmented with some kind of large cardinal axiom. But that doesn't mean that ZFC is inevitable, it means that if you construct two theories both intended to "support" all of mathematics without too much extravagance, you can often more or less implement one inside the other. But that doesn't mean that ZFC specifically has any particular inevitability.

Consider, e.g., NFU + Infinity + Choice (as used e.g. in Randall Holmes's book "Elementary set theory with a universal set") which I'll call NFUIC henceforward. This is consistent relative to ZFC, and indeed relative to something rather weaker than ZFC, and NFUIC + "all Cantorian sets are strongly Cantorian" (never mind exactly what that means) is equiconsistent with ZFC + some reasonably plausible large-cardinal axioms. OK, fine, so there's a sense in which NFUIC is ZFC-like, at least as regards consistency strength. But NFUIC's sets are most definitely not the same as ZFC's sets. NFUIC has a universal set and ZFC doesn't; ZFC's sets are the same sizes as their sets-of-singletons and NFUIC's often aren't; NFU has lots and lots of urelements and ZFC has just the single empty set; etc. NFUIC is very unlike ZFC despite these relationships in terms of consistency strength.

[EDITED to add:] Here's an analogy. You get the same computable functions whether you start with (1) Turing machines, (2) register machines, (3) lambda calculus, or (4) Post production systems. But those are still four very different foundations for computing, they suggest quite different possible hardware realizations and different kinds of notation, they have quite different performance characteristics, etc. (The execution times are admittedly all bounded by polynomials in one another. We could add (5) quantum Turing machines, in which case that would no longer be known to be t
-1Richard_Kennaway8y
Yes, ZFC may be not quite such a starkly isolated landmark of thinginess as computability is, which is why I said "a strong tendency". And anyway, these alternative formalisations of set theory mostly have translations back and forth. Even ZFA (which has sets-within-sets-within-etc infinitely deep) can be modelled in ZFC. It's not a subject I've followed for a long time, but back when I did, Quine's NF was the only significant system of set theory for which this had not been done. I don't know if progress has been made on that since. (ETA: I found this review of NF from 2011. Its consistency was still open then.) As for computable functions, yes, the different ways of getting at the class have different properties, but that just makes them different roads leading to the same Rome.
0gjm8y
Randall Holmes says he has a proof of the consistency of NF relative to ZFC (and in fact something weaker, I think). He's said this for a while, he's published a few versions of his proof (mostly different in presentation in the interests of clarity, rather than patching bugs), and I think the general feeling is that he probably does have a proof but it hasn't yet been thoroughly checked by others. (Who may be holding off because he's still changing his mind about the best way of writing it down.)
1TheAncientGeek8y
The question is whether the rules of chess have mind-independent existence. Where are these grains, i.e. the rules of every possible game? Are they in our universe, or some heavenly library of babel? So how do we cash out the idea that these things are converging on an abstract object, rather than just converging? One way is to put forward the counterfactual that if the abstract object were different, then the convergence would occur differently. But that seems rather against the spirit of what you are aiming at.
-1Richard_Kennaway8y
Ask Max Tegmark. :) I don't believe in his Level IV multiverse, though. That is, I do draw a distinction between physical and abstract objects. That they are converging is enough. To quote an old saw variously attributed, in mathematics existence is freedom from contradiction.
1TheAncientGeek8y
If you don't know what your theory is, why undertake to defend it? Vague again. Realists think mathematical existence is a real, literal existence for which non-contradiction is a criterion, whereas antirealists think mathematical existence is a mere metaphor with no more content than non-contradiction. Perhaps I should finish with my usual comment that studying philosophy is useful because it allows you to articulate your theories, or, failing that, to notice when there are no clear concepts behind your words.
2Richard_Kennaway8y
Vague and empty slogans that might be picked over endlessly, and have been, to no useful purpose. Don't bother expanding them; it's a dead end. Much of the source material has failed to learn that lesson, and is useful in this regard primarily as negative examples. Some philosophers even say so. As a discipline for forcing you to discover the fallibility of subjectively clear and distinct ideas, and to arrive at actually clear ideas that demonstrably work, there are better fields of study, for those who are able to learn. Software design, for example.
2TheAncientGeek8y
Before, you seemed to be pitching in with an opinion on anti-realism versus realism; now you seem to be saying the whole debate is meaningless. Which is your true belief? It isn't clear. Says who? As a software engineer, and philosopher (and scientist), I have found philosophy to be the best training for expressing abstract ideas clearly. Are you a software engineer? Do you believe software engineering has taught you to be clear?
2Richard_Kennaway8y
These two, for example, from opposite ends of the academic/professional spectrum:

E.W. Dijkstra, "The Humble Programmer"

E.S. Raymond, "How To Become A Hacker"

Well, I have not. Which philosophers would you particularly recommend for this purpose? What in philosophy will assist our most gifted fellow humans in thinking previously impossible thoughts, or is worth learning for the profound enlightenment experience I will have when I finally get it?

Software design and implementation has been a large part of my job and my recreations for all of my adult years. I have never taken a course of study in the subject. For example, I was familiar with the fallacy of suggestively named tokens long before Eliezer wrote of it on LessWrong, the fallacy of taking the subjective feeling that a task is simple for its actual simplicity, and probably various other things that are now just a part of my mental furniture.

While the lessons are there to be learned, that does not mean that everyone will learn them. I have rolled my eyes many times over what I would call junk XML languages, where the creators have done no more than write down English names for every concept they can think of in some domain of discourse, sprinkle pointy brackets over them, write a DTD, and believe they've achieved something. They have not.

In the field of procedural humanoid animation, in which I have worked, there have been many attempts to generate animation from a human-written script specifying the movements, but, well, it would take too long to say what I think is wrong with most of them and what my own efforts do better. I once heard a distinguished researcher in the field even say "I am not interested in stupid implementation", as if the real work was in thinking up a structure and the "stupid implementation" could be left to a few graduate students.
-2TheAncientGeek8y
And are either Dijkstra or ESR in a position to directly compare the efficacy of software engineering as a means of clearly expressing philosophical ideas to the efficacy of philosophy as a means of clearly expressing philosophical ideas, i.e., do they know anything about philosophy? It's not news that when two or more STEM types are gathered together they will recite the Mantra Against the Philosophers, in the expectation of reaping agreement and maybe even applause. It's just not very significant. It's also not news that software engineering teaches you to express software engineering concepts clearly, and it's not very relevant since the topic is expressing philosophical concepts. Pointing out that someone else is unclear doesn't make you clear. Pointing out that you can be clear about X doesn't make you clear about Y. Getting into conversations where there is mutual commitment to clear communication is the best practice, because you get instant feedback. Learning the jargon of philosophy -- there are a number of dictionary-style works -- is also helpful: after all, the jargon is tailored to just the kind of topic you were discussing. That's a different topic, but it happens... philosophers have been criticised for entertaining ideas that are too weird, among other things. That's a different topic again.
0ChristianKl8y
Weirdness is not the same thing as previously impossible thought. In the strongest form of "impossible thought" you will not even understand the claim being made enough that it registers with you. I'm not sure it makes sense to label people like E.S. Raymond, who proclaim hacker values, as STEM types. Raymond isn't following the popular science narrative of logical positivism that you find with the typical STEM person.
0TheAncientGeek8y
Whatever. It's about the third or fourth change of topic.
0TheAncientGeek8y
I'm not a fan of mathematical Platonism, but physical realists, however hardline, face some very difficult problems regarding the ontological status of physical law, which make Platonism hard to rule out. (And no, the perennially popular "laws are just descriptions" isn't a good answer). P-zombies as a subject worth discussing, or as something that can exist in our universe? But most of the people who discuss PZs don't think they can exist in our universe. There is some poor-quality criticism of philosophy about as well. The problems with Bayes are sufficiently non-obvious to have eluded many or most at LW.
3[anonymous]8y
On the one hand, I think that page in particular is actually based on outdated Bayesian methods, and there's been a lot of good work in Bayesian statistics for complex models and cognitive science in recent years. On the other hand, I freaking love that website, despite its weirdo Buddhist-philosophical leanings and one or two things it gets Wrong according to my personal high-and-mighty ideologies.

And on the gripping hand, he is very, very right that the way the LW community tends to phrase things in terms of "just Bayes it" is not only a mischaracterization of the wide world of statistics, it's even an oversimplification of Bayesian statistics as a subfield. Bayes' Law is just the update/training rule! You also need to discuss marginalization; predictive distributions; maximum-entropy priors, structural simplicity priors, and Bayesian Occam's Razor, and how those are three different views of Occam's Razor that have interesting similarities and differences; model selection; the use of Bayesian point-estimates and credible-hypothesis tests for decision-making; equivalent sample sizes; conjugate families; and computational Bayes methods. Then you're actually learning and doing Bayesian statistics.

On the miniature nongripping hand, I can't help but feel that the link between probability, thermodynamics, and information theory means Eliezer and the Jaynesians are probably entirely correct that, as a physical fact, real-world event frequencies and movements of information obey Bayes' Law with respect to the information embodied in the underlying physics, whether or not I can model any of that well or calculate posterior distributions feasibly.
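
As a minimal, self-contained illustration of the point that the update rule is only one moving part among several, here is a Beta-Binomial sketch in Python (the coin example and all numbers are invented for illustration):

```python
# Beta-Binomial coin model: conjugate updating, posterior prediction,
# and marginal-likelihood model comparison (Bayesian Occam's Razor).
from math import comb

def beta_binomial_update(a, b, heads, tails):
    """Conjugate update: Beta(a, b) prior + Binomial data -> Beta posterior."""
    return a + heads, b + tails

# 1. The update rule itself: start from a uniform Beta(1, 1) prior and
#    observe 7 heads, 3 tails.
a, b = beta_binomial_update(1, 1, heads=7, tails=3)
posterior_mean = a / (a + b)  # 8/12, roughly 0.667

# 2. Posterior predictive: P(next flip is heads), found by marginalizing
#    over the unknown bias. For a Beta(a, b) posterior this is a/(a+b);
#    for several future flips the predictive is Beta-Binomial, not a
#    plug-in Binomial, so prediction is not point estimation in general.
p_next_heads = a / (a + b)

# 3. Model comparison: marginal likelihood of the data under "fair coin"
#    vs. "unknown bias with uniform prior".
def evidence_fair(heads, tails):
    return comb(heads + tails, heads) * 0.5 ** (heads + tails)

def evidence_uniform(heads, tails):
    # Integrating the Binomial likelihood against a uniform prior gives
    # C(n, h) * B(h + 1, t + 1) = 1 / (n + 1).
    return 1.0 / (heads + tails + 1)

print(posterior_mean)          # ~0.667
print(evidence_fair(7, 3))     # ~0.117
print(evidence_uniform(7, 3))  # ~0.091
```

Note that the simpler fair-coin model assigns 7-of-10 heads a higher marginal likelihood than the flexible uniform-prior model: an Occam penalty for spreading prior mass over all possible biases, with no extra machinery required.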
0entirelyuseless8y
Starting out by expecting a view opposed to your own to be contrarian is a typical form of overconfidence, and not just overconfidence about other people's opinions.
-1[anonymous]8y
Sometimes, yes. However, I rather expect that naturalism should be the consensus.
0TheAncientGeek8y
He could have saved himself some trouble by writing "Philosophy: a Partly Diseased Discipline" or "Philosophy: a Bit of a Curate's Egg".
0PhilGoetz8y
How about:

* a link to the article by Luke that you're talking about
* the names of some good current philosophy journals
2gjm8y
I think the article Ilya has in mind is this one: Philosophy, a diseased discipline.
1pragmatist8y
I can help with the second request: The British Journal for the Philosophy of Science
0PhilGoetz8y
That seems to be entirely analytic philosophy. My problem is that analytic philosophy is culturally irrelevant. Anthropologists, sociologists, art theorists, and artists talk about continental philosophy, Saussurian (!) linguistics, and psychoanalytic theory. The only things they use from analytic philosophy are arguments like Godel's incompleteness theorem, Wittgenstein's later stuff, or Quine's ontological relativism, which they interpret as saying that analytic philosophy doesn't work.
0IlyaShpitser8y
Analytic philosophers will find the strength to carry on, somehow.
0PhilGoetz8y
Good for analytic philosophy, but my real concern is with literature. Literature today is captive to bad philosophy. Poetry, even more explicitly so.
1SanguineEmpiricist8y
Love this. Luke is actually well-read, so maybe it's a bit tough on him, but the casual dismissal and elitist posturing are pretty dumb and cringe-inducing. Philosophy is underrated around these parts.
0PhilGoetz8y
Someone who's studied stats and ML is much more qualified to talk about philosophy than someone who's studied academic philosophy. My comment may be irrelevant. You didn't provide a link to Luke's article, so I don't have the context, and am only guessing at your meaning.
0IlyaShpitser8y
^ this is what I am talking about. For some reason I think Luke has a bachelor's degree with a major in cognitive science (but I don't remember exactly).
2Kaj_Sotala8y
I was under the impression that he studied psychology, but dropped out before graduating. (An old interview has him mentioning that "I studied psychology in university but quickly found that I learn better and faster as an autodidact", and back when he was still employed at MIRI, his profile on the staff page didn't mention any degree whereas it did for almost everyone else.)
1IlyaShpitser8y
Just so we are clear -- I am not really attacking Luke. I met him, we talked on skype, etc. He's a sensible dude. I am just not weighing his opinion of philosophy very highly. "Mixture of experts" and all that.
6[anonymous]8y
Well, to bloat my own ego, one of my most consistently banged-on themes has been, "HEY ACADEMIA BASICALLY TELLS US THINGS FOR FREE!"
0[anonymous]8y
Well, not for free exactly. Textbooks and the internet can tell us most of the same things, for much cheaper :). (I'm not arguing against academia in general, but I do think that's a weak argument for it).
4[anonymous]8y
I had actually meant that academia provides research to the public more-or-less "for free", in the sense of "free at point of use". Textbooks and the internet are not actually of much use, either, when most of the knowledge of how they actually go together is tribal knowledge among university professors and never gets written down for non-student self-studiers.
1[anonymous]8y
I'm not convinced that "free at point of use" is a useful concept - it's more useful to figure out when and where costs are snuck in, and then decide if the price is worth it and if the right people are paying for it. In terms of self learning, that hasn't been my experience as an autodidact. A course catalog and a good textbook are more than enough to provide context for learning, with google filling in the gaps.
13[anonymous]8y

I'm not convinced that "free at point of use" is a useful concept - it's more useful to figure out when and where costs are snuck in, and then decide if the price is worth it and if the right people are paying for it.

Go ahead and object that "nothing is really free", but "free at point of use", once we're being specific, is useful. It means, "This service is accessible without paying up-front, because the costs are being paid elsewhere." Of course there are still costs to be paid, but there are a couple of whole fields of study devoted to finding the most socially desirable ways of paying them.

So for instance, we have reason to believe that if, on top of the existing journal-subscription-and-paywall system, we added additional up-front fees for reading academic research papers, this would raise the costs of scientific research, in terms of dollars and labor-hours spent to obtain the outputs we care about.

Also, please, not every LW comment necessitates conceptual nitpicking. If I start with "academia publishes a lot of useful research which can be obtained and read for free by people who know how to do literature searches", please do not respond with, "Well what is free anyway? Shouldn't we digress into the entire field of welfare economics?"

3Kaj_Sotala8y
Upvoted for the last paragraph.
0[anonymous]8y
Fair point. It wasn't a "gotcha, you're technically wrong" comment. It was central to the point we're arguing, which is whether academia is a net benefit. If the comment was meant in a tongue-in-cheek "I'm not actually making a real argument for academia, just saying something silly" way, it wasn't clear to me. If you were actually advancing an argument that academia is useful because it publishes research, you need to prove that the research does enough good to justify the costs it extracts from its students. It is a meta point, but it's not an irrelevant nitpick - it's central to your argument.
1[anonymous]8y
Ok, there's a confusion here I feel a need to correct: research is almost entirely not funded by students. Teaching is funded by students. Administration is (gratuitously and copiously, beyond anything necessary) funded by students. Teaching and administration are also often funded by endowments and state block grants. Research is (by and large) funded by research grants, and in fact, the level of research output required to justify each dollar of grant has gone solidly up.
1ChristianKl8y
The word textbook implies an academic publisher.
-1[anonymous]8y
Well, if you're including "anyone who sells things to schools" in academia, then yes, my argument doesn't really make much sense. But, for the sake of steel-manning, let's pretend that instead of meaning that, I meant academia as the broad collection of things typically associated with it - formal schooling, tenure, teachers, students, classes, etc. Even if you include textbooks as a NARROW part of academia, the point is that you can forgo all that other stuff and JUST take the textbooks, and still be told basically the same things.
1interstice8y
I think the idea is that you're supposed to deduce the last name and domain name from identifying details in the post.

I have some questions about step 1 (find a flexible program):

My understanding is that there are two sources of inflexibility for PhD programs: A. Requirements for your funding source (e.g. TA-ing) and B. Vague requirements of the program (e.g. publish X papers). I'm excluding Quals, since you just have to pass a test and then you're done.

Elsewhere in the comments, someone wrote:

"Grad school is free. At most good PhD programs in the US, if you get in then they will offer you funding which covers tuition and pays you a stipend on the order of $25K

... (read more)
6EHeller8y
How hard your quals are depends on how well you know your field. I went to a top 5 physics program, and everyone passed their qualifying exams, roughly half of whom opted to take the qual their first year of grad school. Obviously, we weren't randomly selected though.

Fellowships are a crapshoot that depend on a lot of factors outside your control, but getting funding is generally pretty easy in the sciences. When you work as an "RA" you are basically just doing your thesis research. TAing can be time consuming, but literally no one cares if you do it poorly, so it's not high pressure.

But this is a red flag: That isn't how research works, at least in the sciences. Research is generally 1% "big idea" and 99% slowly grinding it out to see if it works. Your adviser, if he/she is any good, will help you find a big idea that you can make some progress on, and you'll be grinding it out every week and meeting with your adviser or other collaborators if you've gotten stuck.

That said, a bad adviser probably won't pay any attention to you. So you can do whatever you want for about 7 years until people realize you've made no progress and the wheels come off the bus (at which point they'll probably hand you a master's degree and send you on your way).
8gjm8y
I have heard rumours that students are actually people, and that they care about the quality of the teaching they receive.
4EHeller8y
You'd think so, but office hours and TA sections without attendance grades are very sparsely attended.
4IlyaShpitser8y
Not in my class.
3anna_macdonald8y
When I was in college, I almost never went to office hours or TA hours... except for one particular class, where the professor was a probably-brilliant guy who was completely incapable of giving a straight explanation or answer to anything. TA hours were packed full; most of the class went, and the TA explained all the stuff the teacher hadn't.
2Gram_Stone8y
Do you incur debt if this happens, due to the cost of stipends and tuition waivers to the institution?
4[anonymous]8y
In the Ukrainian academy of sciences, if your institution doesn't let you go in peace, you pay back your stipend; if you finished your PhD program without defending, you have to either work for 3 years in your department or in another governmental institution (doesn't matter which, just not for a private business), or pay back the stipend. Not Fun :)
5Vika8y
How much TAing is allowed or required depends on your field and department. I'm in a statistics department that expects PhD students to TA every semester (except their first and final year). It has taken me some effort to weasel out of around half of the teaching appointments, since I find teaching (especially grading) quite time-consuming, while industry internships both pay better and generate research experience. On the other hand, people I know from the CS department only have to teach 1-2 semesters during their entire PhD.
6[anonymous]8y

PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you **a great deal of free time** and flexibility, provided you can pass the various qualifying exams **without too much studying**.

Bolding the parts to which I object.

I have never seen anyone in a rigorous postgraduate program who had a lot of free time and could pass their quals without large amounts of studying.

Of course, I could just be, like magic, on the lower part of the intelligence curve for graduate school, but given that my actual measured IQ numbers ar...

5jsteinhardt8y
Are you talking about free time pre- or post-quals? And do you count work that goes towards your thesis but that you "have" to do (e.g. for a conference or internal deadline) as free time or non-free time?

My experience (and I would guess that of many of my labmates, though I don't know for sure) is that quals are really easy to pass, you spend at most 2 weeks of your life studying for them, and otherwise you're just doing research plus a few classes. Stanford is an outlier in that it has particularly few class requirements compared to other top CS departments, but it seemed like MIT grad students also often started doing research fairly early on, from my perspective as an undergrad there.

Depending on your funding situation, your actual time spent doing research may be more or less beholden to the grants your advisor has to do work towards. I'm on a fellowship and so can do whatever I want; the only consequence is that if my research after 5 years is uninteresting, I'll have trouble getting academic jobs.
5[anonymous]8y
I've only gotten as far as an MSc (currently volunteering for Vikash Mansinghka in my Copious Free Time), but I do know a hell of a lot of academics. From my (second-hand) knowledge, easy quals are an artifact of something very like economic privilege: your school is very prestigious and doesn't need to cull its grad-student herd as much as others, so quals are allowed to be easy. In other places, quals are used to evict many grad students from their PhD programs because resources are more scarce.

I don't know of anywhere where grad students don't start doing research as early as possible. Do some programs really involve whole years of just classes?
8IlyaShpitser8y
A lot of it is historical accidents + inertia. When I was a greenhorn at UCLA, the CS quals (the WQE, aka "the wookie") were two 5-hour tests (on two consecutive days) covering all of CS (e.g. networking, databases, AI, theory, systems, everything). They were not easy at all. At some point it was realized this was stupid, washes out good people, and neither teaches nor prepares one for research. So it was changed. UCLA math quals are... formidable.
3[anonymous]8y
That sounds a lot like Technion's course exams, which were usually designed to be two "levels" harder than the entire rest of the course, and could only really be studied for by obtaining graded copies of old exams.
6jsteinhardt8y
In Berkeley CS there are enough course requirements that I don't think people do serious research until their second year (although I'm sure they do some preliminary reading / thinking in year one).
3PhilGoetz8y
Absolutely. It's a function of grant money. MIT has more grant money than anyplace else on Earth, so it's easy to start grad students on research projects. At the U. of Buffalo, there were only 2 professors in my department who had grants, so getting onto someone's project was hard, and anyway you were taking 4 classes a semester and TAing at least one for the first 2 years, while studying for the qualifiers. I don't know anything about this "free time" the OP talks about.

Few students did research before their third year, after passing their qualifiers at the end of the second. A prof wouldn't really be interested in a student who hadn't passed the qualifiers. That's why the average CS PhD there took 8 years of grad school.

You could do your own research, of course. I did that, but I eventually had to throw it all out, because I couldn't get anyone interested enough in it to be my advisor.
2PhilGoetz8y
At the U. of Buffalo, just taking the quals took at least a week. They were, if I recall, 7 exams: 5 taking half a day each, and 2 taking 24 hours each.
3PhilGoetz8y
Agree. The lab work in CS is also substantial, though it comes in huge blocks rather than on a steady schedule.
0Transfuturist8y
Quals are the GRE, right?
3[anonymous]8y
Nope. No.
0Transfuturist8y
8(

I am of the opinion that if you do grad school and you don't attach yourself to a powerful and wise mentor in the form of your academic adviser, you're doing it wrong. Mentorship is a highly underrated phenomenon among rationalists.

I mean, if you're ~22, you really don't know what the hell you're doing. That's why you're going to grad school, basically. To get some further direction in how to cultivate your professional career.

If you happen to have access to an adviser who won a Nobel or whose adviser won a Nobel, they would make a good choice. The implici...

8satt8y
I mostly agree, but would add two caveats.

Relying too much on getting one very specific advisor is risky. Most advisors are middle-aged (or outright old), especially those with Nobel Prizes, and they do sometimes die or move away with little notice. If that happens, universities can be very bad about finding replacements (let alone comparably brilliant replacements) for any students cast adrift.

Also, an advisor's personality & schedule are as important as their research skills: a Nobel Prize winner who's usually away giving speeches, and is a raging, neglectful arsehole when they are around, is likely to be more of a hindrance than a help in getting a PhD. Put like that, what I just wrote is obvious, but I can imagine it being the kind of thing potential applicants would overlook.
3PhilGoetz8y
Yes... but is it about mentorship, or connections?

Anyway, one problem is that powerful and wise mentors don't have anything to say to you until you've got a dissertation topic, and the curriculum is structured so that this seldom happens before someone's third year in grad school.

My experience at the U. of Buffalo was that there were 2 kinds of student-advisor relationships: the exploitative kind, where the "mentor" gets students to do gruntwork on the advisor's project, and write a whole bunch of code for him, and keeps them around, ungraduated, as long as he can; and the pro-forma kind, where the advisor cheers the student on in whatever the student is doing, then puts his or her name on the resulting papers. The idea that a dissertation advisor teaches something did not correspond to the reality I observed.
7moridinamael8y
Yeah, it's complicated. "A friend" (cough) had an exploitative advisor. But this friend also learned a tremendous amount doing all the gruntwork, writing the code, writing the papers. Yes, "my friend" did take over six years to graduate, but "my friend" was pushed harder than he'd ever been pushed in his life and probably harder than he'll ever be pushed again, and he learned the limits of his own abilities, which were far greater than he would have believed otherwise. Overall, he's glad he did his PhD even if there was a lot of suffering and struggle.

An (actual) close friend of mine had an advisor who had himself been a student of a Nobel laureate. The relationship was primarily of the second type that you describe: lots of cheerleading and encouragement. But there was certainly an element of discernment which I think was passed along. I remember distinctly that my friend was extremely skeptical that his paper would be accepted by Science (the journal), but the advisor instructed him to submit it; the paper was accepted. So now my friend has a publication in Science basically just because his advisor had the judgement to know when something is important enough to submit to Science. This may seem like a small thing, but having a Science publication is not a small thing, I think.

And I realize this is all highly anecdotal, but I can definitely attest that I have neither seen nor experienced any kind of mentoring relationship similar to either of the above since I left academia.
4[anonymous]8y
My experience having an advisor wasn't quite either of those. I was certainly working on his research, but he wasn't trying to keep me around as long as possible. He also didn't want me gone as soon as possible. He seemed to have something he was trying to teach me, and I dare say I learned a few things, but I'm still not sure if they were the things he intended me to learn. He would often directly articulate things, but they weren't learnable or understandable principles, just sort of... mottoes. Things like, "Look. At. The data."

The whole thing taught me the most about how many ways there are for noise and human error to creep into an experiment, and how very much prior knowledge and information you actually need just to be sure that your data is at all real in the first place. It also combined with my exposure to LW-y stuff to spark an interest in machine learning and statistics. Oh, and I learned a lot about the importance of using very nice LaTeX to make papers look properly professional.

Adviser's mission accomplished? Fuck if I know, but I did still manage to pass a thesis defense (on what I think was sheer politics: my manuscript was quite unpolished, but nobody wanted to speak the impolitic fact that I should have had more guidance in certain aspects before submitting, so I passed with an entirely acceptable grade nonetheless).

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality, or by whether they have faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program, it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

4Vika8y
I think it depends more on specific advisors than on the university. If you're interested in doing AI safety research in grad school, getting in touch with professors who got FLI grants might be a good idea.
3iarwain18y
Why do you say Carnegie Mellon? I'm assuming it's because they have the Center for Formal Epistemology and a very nice-looking degree program in Logic, Computation and Methodology. But don't some other universities have comparable programs? Do you have direct experience with the Carnegie Mellon program? At one point I was seriously considering going there because of the logic & computation degree, and I might still consider it at some point in the future.
5IlyaShpitser8y
Confirmed, re: CMU phil. Email me for details (ilyas at cs.jhu.edu); I know a few people there. I think Katja Grace went there at one point(?).
1fowlertm8y
I mentioned CMU for the reasons you've stated, and because Lukeprog endorsed their program once (no idea what evidence he had that I don't). I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there. I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them; that was the point of my original comment.

email me (lastname@thisdomain.com)

That makes good sense over on your own domain whence this is cross-posted, but not here on LW. Here you might either want to describe your email address differently, or encourage people to PM you using the LW message system instead of emailing you.

0PhilGoetz8y
Is Academician's identity supposed to be secret?
2gjm8y
No; there's a link from his profile page to his website, where his name is in plain sight.
-1PhilGoetz8y
If there is, it's well-hidden.
4Vaniver8y
In the top-right section of the userpage, where the "send message" button is, are the username, karma, karma in the last 30 days, location, and website link.

PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you a great deal of free time and flexibility, provided you can pass the various qualifying exams without too much studying.

Economics is also well regarded among the EA/rationality crowd:

Is there any way to do these things without paying the large price tag? Could you just lurk around campus or something? Only half-joking here.

be sure to first consider the most useful version of grad school that you could reliably make for yourself... and then decide whether or not to do it.

The planning fallacy is going to eat you alive if you use this technique.

6Unnamed8y
Grad school is free. At most good PhD programs in the US, if you get in then they will offer you funding which covers tuition and pays you a stipend on the order of $25K per year. In return, you may have to do some work as a TA or in a professor's lab. The real cost is the ~5 years of your life.
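To put a rough number on "the real cost is the years of your life", here is a back-of-the-envelope sketch; the $80K counterfactual industry salary is an assumed figure for illustration, not something from the comment above:

```latex
% Opportunity-cost sketch: stipend is the $25K quoted above;
% the $80K counterfactual salary is an assumption for illustration.
\text{real cost} \approx (\text{counterfactual salary} - \text{stipend}) \times \text{years}
                 \approx (\$80\mathrm{K} - \$25\mathrm{K}) \times 5 = \$275\mathrm{K}
```

In other words, "free" means no tuition and no debt; the cost shows up as forgone earnings plus the five years themselves.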
5[anonymous]8y
This is most assuredly the case in the biological sciences. And you DO have to do work with your mentor, and sometimes also TA.
1jsteinhardt8y
I don't think lurking around campus is going to lead to the same results as being immersed in a research environment full-time (especially if you're not doing research yourself). I generally think that a large amount of useful knowledge is tacit and hard to absorb without being pretty directly involved. Also, as others have noted, a PhD is free / paid for, so the (economic) cost isn't much of a consideration.
0nyralech8y
Moving to Europe (though maybe not Great Britain) should for the most part allow you to do that.

Teach classes.

Yeah, this was much more valuable than I realized at the time. I think it's a better way than most to learn public speaking, because you have something to communicate, and you get to measure later how well you communicated it. You don't have time to worry about being nervous.

Me previously on the topic of getting a PhD.