It happens every now and then that someone encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.
If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!”
And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who expressed disbelief, saying, “Come back when you have a PhD.”
Now, there are good and bad reasons to get a PhD. This is one of the bad ones.
There are many reasons why someone might actually have an initial adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis calls to mind an associated category like “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth.”1 Immediately, at the speed of perception, the idea is rejected.
If someone afterward says, “Why not?” this launches a search for justification, but the search won’t necessarily hit on the true reason. By “true reason,” I don’t mean the best reason that could be offered. Rather, I mean whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.
Instead, the search for justification hits on the justifying-sounding fact, “This speaker does not have a PhD.” But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?
More to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.
They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.”
But do people actually believe arbitrary professors at Harvard who say weird things? Of course not.
If you’re saying things that sound wrong to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and if the hearer is a stranger, unfamiliar with you personally and unfamiliar with the subject matter of your field; then I suspect that the point at which the average person will actually start to grant credence overriding their initial impression, purely because of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as “beyond the mundane.”
This is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, “Where are the technical details?” or “Come back when you have a PhD!” And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And Nanosystems is a great book. But did the same people who said, “Come back when you have a PhD,” actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.
This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection may not make the real difference; and you should ponder that carefully before spending huge efforts. If the venture capitalist says, “If only your sales were growing a little faster!” or if the potential customer says, “It seems good, but you don’t have feature X,” that may not be the true rejection. Fixing it may, or may not, change anything.
And it would also be something to keep in mind during disagreements. Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.2
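To make the “cannot agree to disagree” claim a bit more concrete: below is a minimal sketch, in Python, of the Geanakoplos-Polemarchakis posterior-exchange dialogue that underlies Aumann-style agreement results. It assumes a common prior, a finite state space, and common knowledge of both agents’ partitions; the particular states, partitions, and event are toy values chosen purely for illustration, not anything taken from the essay or its references.

```python
# A toy simulation of the Geanakoplos-Polemarchakis "we can't disagree forever"
# dialogue: two agents with a common prior repeatedly announce their posteriors
# for an event and update on each other's announcements until they agree.
from fractions import Fraction

# Finite state space with a common (here uniform) prior.
STATES = range(8)
PRIOR = {w: Fraction(1, 8) for w in STATES}

# The event both agents care about.
EVENT = {0, 1, 2, 5}

# Each agent's private information is a partition of the state space
# (toy partitions, made up for this example).
PARTITION_A = [{0, 1, 2, 3}, {4, 5, 6, 7}]
PARTITION_B = [{0, 4}, {1, 5}, {2, 6}, {3, 7}]


def cell(partition, state):
    """Return the partition cell containing `state`."""
    return next(c for c in partition if state in c)


def posterior(info_set):
    """P(EVENT | info_set) under the common prior."""
    total = sum(PRIOR[w] for w in info_set)
    hit = sum(PRIOR[w] for w in info_set if w in EVENT)
    return hit / total


def dialogue(true_state, max_rounds=20):
    # Track each agent's information set as a function of the state, so that
    # each agent can work out what the other *would* have announced in any state.
    info_a = {w: cell(PARTITION_A, w) for w in STATES}
    info_b = {w: cell(PARTITION_B, w) for w in STATES}
    for round_no in range(1, max_rounds + 1):
        say_a = {w: posterior(info_a[w]) for w in STATES}
        say_b = {w: posterior(info_b[w]) for w in STATES}
        print(f"round {round_no}: A says {say_a[true_state]}, "
              f"B says {say_b[true_state]}")
        if say_a[true_state] == say_b[true_state]:
            return  # agreement: both have publicly announced the same posterior
        # Each agent rules out every state in which the other agent would have
        # announced a different posterior than the one just heard.
        info_a = {w: info_a[w] & {v for v in STATES if say_b[v] == say_b[w]}
                  for w in STATES}
        info_b = {w: info_b[w] & {v for v in STATES if say_a[v] == say_a[w]}
                  for w in STATES}


if __name__ == "__main__":
    dialogue(true_state=1)
```

Run as-is, the two agents open with posteriors 3/4 and 1 and agree on 1 in the second round. The point of the sketch is that under these idealized assumptions disagreement cannot survive honest exchange, so persistent real-world disagreement has to come from somewhere outside the model, which is what the list below is about.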
I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.:
- Uncommon, but well-supported, scientific knowledge or math;
- Long inferential distances;
- Hard-to-verbalize intuitions, perhaps stemming from specific visualizations;
- Zeitgeists inherited from a profession (that may have good reason for it);
- Patterns perceptually recognized from experience;
- Sheer habits of thought;
- Emotional commitments to believing in a particular outcome;
- Fear that a past mistake could be disproved;
- Deep self-deception for the sake of pride or other personal benefits.
If the matter were one in which all the true rejections could be easily laid on the table, the disagreement would probably be so straightforward to resolve that it would never have lasted past the first meeting.
“Is this my true rejection?” is something that both disagreers should surely be asking themselves, to make things easier on the other person. However, attempts to directly, publicly psychoanalyze the other may cause the conversation to degenerate very fast, from what I’ve seen.
Still—“Is that your true rejection?” should be fair game for Disagreers to humbly ask, if there’s any productive way to pursue that sub-issue. Maybe the rule could be that you can openly ask, “Is that simple straightforward-sounding reason your true rejection, or does it come from intuition-X or professional-zeitgeist-Y?” While the more embarrassing possibilities lower on the table are left to the Other’s conscience, as their own responsibility to handle.
1See “Science as Attire” in Map and Territory.
2See Hal Finney, “Agreeing to Agree,” Overcoming Bias (blog), 2006, http://www.overcomingbias.com/2006/12/agreeing_to_agr.html.
As a current grad student myself, I could not disagree with poke's comment and this comment more. I work for a very respected adviser in computer vision at a very prestigious university. The reason I was accepted to this lab is that I am an NDSEG fellow; many other qualified people lost out because my attendance here frees up a lot of my adviser's money for more students. In the meantime, I have a lot of pretty worthwhile ideas in physical vision and theories of semantic visual representations. However, I spend most of my days building Python GUI widgets for a group of collaborating sociologists. They collect really mundane data by annotating videos, and no off-the-shelf software does quite what they want... so guess who gets to do that grunt work for a summer? Grad students.
You should really read the Economist article "The Disposable Academic"; it's good. Graduate studentships are business acquisitions in all but the most theoretical fields. Advisers want the most non-linear things imaginable. For example, I am a pure math guy, with heavy emphasis on machine learning methods and probability theory, yet my day job is seriously creative-energy-draining Python programming. The programming isn't even related to original math; it's just a novel tool for some sociologists to use. My adviser doesn't want me splitting my time between this and reading background theory; he wants me to develop this code because it makes him look good in front of collaborators.
Academia is mostly a bad, bad place. I think Eliezer's desire to circumvent all the crap of grad school is totally right. The old way was a real, true apprenticeship. It isn't like that anymore. Engineering is especially notorious for this: minimize the number of tenured positions, and balloon the number of grad students in order to farm out the work that profs don't want to do. Almost all of these people will just go through the motions, do mundane bullshit, and write a thesis not really worth the paper it gets printed on. The few who don't follow this route usually take it upon themselves to read on their own, become experts across many different disciplines, and then make interconnections between previously independent fields or results. Eliezer has certainly done this with discussions of Newcomb-like problems and Friendly A.I. from a philosophical perspective. He's done more honest academic work here than almost anyone I know in academia.
When I used to work at MIT Lincoln Laboratory, a colleague of mine had a great saying about grad school: "Grad school is 99% about putting your ass in the chair." It is indeed about spending X years in building Y and getting Z publications. Pure mathematics is somewhat of an exception at elite schools. To boot, people don't take you any more seriously when you finish. It continues on as a political/social process where you must win grants and provide widgets for collaborators and funding agencies.
There is much more to be said, and of course, I am a grad student, so I must feel that, at least for myself, it is a good decision despite all of the issues. Well, that's not quite true. Part of it is that as an undergrad, all of my professors just paid attention to the fact that I was energetic and attentive when they talked about topics they liked, and it created a cumulative jazzed-up feeling for those topics. I expected grad school to be very different from what it really is. I should also add that I have been in two different PhD programs, both Ivy League (I can talk more specifically in private). I transferred because the adviser situation at the first school was pretty grim. There was only one faculty member doing things close to what I wanted to study, and he was such a famous name that he didn't have the time of day for me. For example, I once scheduled a meeting with him to discuss possible research, and when I arrived he let me know it was going to be a jointly held meeting with his current doctoral student. While that student wrote on a chalkboard, I got to ask questions. When the student was finished, the prof addressed the student for various intervals of time and then came back to me. This sort of pedantic garbage is the rule rather than the exception.
I find myself having to constantly fight the urge to go home from a long, wrist-achingly terrible day of mundane Python programming and just mentally check out. Instead, I read about stuff here on LW, or I read physics books, or now A.I. complexity books. Hopefully my thesis will be a contribution that I enjoy and find interesting. Even better if it helps move science along in a "meaningful way," but the standard PhD process is absolutely not going to let that happen unless I actively intervene and do things all on my own.
Anyone considering a PhD should consider this heavily. My experience is that it is nothing like the description above or poke's comment. I think Bostrom should advise a thesis with Eliezer because it would be a great addition to philosophy, and I don't want Eliezer burdened with nuisance coursework requirements. We should be unyoking uncommonly smart people when we find them, not forcing them to jump through extra hoops just for the pedantic sake of standardization.
Ok, so - I hear what you're saying, but a) that is not the way it's supposed to be, and b) you are missing the point.
First, a): even in academia as it currently is, you are in a bad position. If I were you, I would switch mentors or programs ASAP.
I understand where you're coming from perfectly. I had a very similar experience: I spent three years in a failed PhD (the lab I was working in went under at the same time as the department I was in), and I ended up getting an MS instead. But even in that position, which was all tedious gruntwork, I understood the hypot...