Best career models for doing research?
Ideally, I'd like to save the world. One way to do that is by contributing academic research, which raises the question: what's the most effective way of doing so?
The traditional wisdom says that if you want to do research, you should get a job at a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas. After that you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.
I suspect I would have more time to actually dedicate to research, and could start doing it sooner, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend one week each month working and three weeks doing research, assuming, of course, that I can pull off the freelance journalism part.
What (dis)advantages does this have compared to the traditional model?
Some advantages:
- Can spend more time on actual research.
- A lot more freedom with regard to what kind of research one can pursue.
- Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
- Easier to take time off from research if feeling stressed out.
Some disadvantages:
- Harder to network effectively.
- Need to get around journal paywalls somehow.
- Journals might be biased against freelance researchers.
- Easier to take time off from research if feeling lazy.
- Harder to combat akrasia.
- It might actually be better to spend some time doing research under others before doing it on your own.
EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.
Aieee! The stupid! It burns!
Last Wednesday (2010 Dec 01), BBC Radio 4 broadcast a studio discussion on the question: "should we actively try to extend life itself?" The programme can be listened to from the BBC here for one week from broadcast, and is also being repeated tomorrow (Saturday Dec 04) at 22:15 BST. (ETA: not BST, GMT.)
All of the dreadful arguments for why death is good came out. For uninteresting reasons I missed a few minutes here and there, but in what I heard, not one of the speakers on any side of the question said anything like, "This is a no-brainer! Death is evil. Disease is evil. The less of both we have, the better. There is nothing good about death, at all, and all the arguments to the contrary are moral imbecility."
Instead, I heard people saying that work on life extension is disrespectful to the old, that to prolong life would be like prolonging an opera, which has a certain natural size and shape, that the old are wise, so if we make them physically young then old people won't be old, so they won't be wise. Whatever cockeyed argument you can construct by scattering into a Deeply Wise template the words "old", "young", "wise", "decrepit", "healthy", "natural", "unnatural", "boredom", "inevitable", "denial", I heard worse.
If I can bear to listen again to the whole thing just to check I didn't miss anything important, I may write something on their discussion board.
"Nahh, that wouldn't work"
After having it recommended to me for the fifth time, I finally read through Harry Potter and the Methods of Rationality. It didn't seem like it'd be interesting to me, but I was really mistaken. It's fantastic.
One thing I noticed is that Harry threatens people a lot. My initial reaction was, "Nahh, that wouldn't work."
My reaction wasn't to scrutinize my own experience. It wasn't to do a Google search to see whether there's literature available. It wasn't to ask a few friends what their experiences were like and compare them.
After further thought, I came to a realization - almost every time I've threatened someone (which is rarely), it's worked. Now, I'm kind of tempted to write that off as "well, I had the moral high ground in each of those cases" - but:
1. Harry usually or always has the moral high ground when he threatens people in MOR.
2. I don't have any personal anecdotes or data about threatening people from a non-moral high ground, but history provides a number of examples, and the threats often work.
This gets me to thinking - "Huh, why did I write that off so fast as not accurate?" And I think the answer is because I don't want the world to work like that. I don't want threatening people to be an effective way of communicating.
It's just... not a nice idea.
And then I stop, and think. The world is as it is, not as I think it ought to be.
And going further, this makes me consider all the times I've tried to explain something I understood to someone, but where they didn't like the answer. Saying things like, "People don't care about your product features, they care about what benefit they'll derive in their own life... your engineering here is impressive, but 99% of people don't care that you just did an amazing engineering feat for the first time in history if you can't explain the benefit to them."
Of course, highly technical people hate that, and tend not to adjust.
Or explaining to someone how clothing is a tool that changes people's perceptions of you, and by studying the basics of fashion and aesthetics, you can achieve more of your aims in life. Yes, it shouldn't be like that in an ideal world. But we're not in that ideal world - fashion and aesthetics matter, and people react to them.
I used to rebel against that until I wised up, studied a little fashion and aesthetics, and started dressing to produce outcomes. So I ask, what's my goal here? Okay, what kind of first impression furthers that goal? Okay, what kind of clothing helps make that first impression?
Then I wear that clothing.
And yet, when confronted with something I don't like - I dismiss it out of hand, without even considering my own past experiences. I think this is incredibly common. "Nahh, that wouldn't work" - because the person doesn't want to live in a world where it would work.
What is Evidence?
"The sentence 'snow is white' is true if and only if snow is white."
—Alfred Tarski
"To say of what is, that it is, or of what is not, that it is not, is true."
—Aristotle, Metaphysics IV
If these two quotes don't seem like a sufficient definition of "truth", read this. Today I'm going to talk about "evidence". (I also intend to discuss beliefs-of-fact, not emotions or morality, as distinguished here.)
As you walk along the street, your shoelaces come untied. Shortly thereafter, for some odd reason, you start believing your shoelaces are untied. Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace. There is a sequence of events, a chain of cause and effect, within the world and your brain, by which you end up believing what you believe. The final outcome of the process is a state of mind which mirrors the state of your actual shoelaces.
The Trolley Problem: Dodging moral questions
The trolley problem is one of the more famous thought experiments in moral philosophy, and studies by psychologists and anthropologists suggest that the response distributions to its major permutations remain roughly the same throughout all human cultures. Most people will permit pulling the lever to redirect the trolley so that it will kill one person rather than five, but will balk at pushing one fat person in front of the trolley to save the five, even if that is the only available means of stopping it.
However, in informal settings, where the dilemma is posed by a peer rather than a teacher or researcher, it has been my observation that there is another major category which accounts for a significant proportion of respondents' answers. Rather than choosing to flip the switch, push the fat man, or remain passive, many people will reject the question outright. They will attack the improbability of the premise, attempt to invent third options, appeal to their emotional state in the provided scenario ("I would be too panicked to do anything"), or some combination of the above, in order to opt out of answering the question on its own terms.
What I've learned from Less Wrong
Related to: Goals for which Less Wrong does (and doesn’t) help
I've been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the back-log of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.
1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”
2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.
Should I believe what the SIAI claims?
Major update here.
The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear.
Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimate action. Here much is uncertain, to the extent that I'm not able to judge any of the nested probability estimates. Even if you tell me the estimates, where is the data on which you base them?
There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.
Information Hazards
Nick Bostrom recently posted the article "Information Hazards", which is about the myriad ways in which information can harm us.
You can read it at his website: Direct PDF Link
No Universally Compelling Arguments
Followup to: The Design Space of Minds-in-General, Ghosts in the Machine, A Priori
What is so terrifying about the idea that not every possible mind might agree with us, even in principle?
For some folks, nothing—it doesn't bother them in the slightest. And for some of those folks, the reason it doesn't bother them is that they don't have strong intuitions about standards and truths that go beyond personal whims. If they say the sky is blue, or that murder is wrong, that's just their personal opinion; and that someone else might have a different opinion doesn't surprise them.
For other folks, a disagreement that persists even in principle is something they can't accept. And for some of those folks, the reason it bothers them is that it seems to them that if you allow that some people cannot be persuaded, even in principle, that the sky is blue, then you're conceding that "the sky is blue" is merely an arbitrary personal opinion.
Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space. If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.
This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it.
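A minimal sketch of the counting behind this, assuming, as above, that each mind is specified by a bitstring of at most a trillion bits:

```latex
% Upper bound on the number of minds specifiable in at most a trillion bits
% (2^k strings of each length k, summed over k <= 10^12):
N \;\le\; \sum_{k=0}^{10^{12}} 2^{k} \;<\; 2^{10^{12}+1}

% A universal generalization must hold for every one of them:
\forall m : X(m) \;\equiv\; X(m_1) \wedge X(m_2) \wedge \dots \wedge X(m_N)
% -- roughly N separate chances to be false.

% Its dual needs only a single witness:
\exists m : X(m) \;\equiv\; X(m_1) \vee X(m_2) \vee \dots \vee X(m_N)
% -- roughly N separate chances to be true.
```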
And the surprise and/or horror of this prospect (for some) has a great deal to do, I think, with the intuition of the ghost-in-the-machine—a ghost with some irreducible core that any truly valid argument will convince.
The Curve of Capability
or: Why our universe has already had its one and only foom
In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The latter upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.
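To spell out the arithmetic (the base memory figures are my own assumptions, consistent with the "factor of two" claim: a 512 KB Amiga 500 and a 2 GB Dell):

```latex
% Ratio of the absolute upgrade sizes:
\frac{2048\ \text{MB}}{0.5\ \text{MB}} = 4096 \approx 4000

% Relative increase in each case -- the same single doubling:
\frac{(0.5 + 0.5)\ \text{MB}}{0.5\ \text{MB}} = 2,
\qquad
\frac{(2048 + 2048)\ \text{MB}}{2048\ \text{MB}} = 2
```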
That's a pretty important rule, so let's test it by looking at some more examples.