Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of how to do so most effectively.
The conventional wisdom says that if you want to do research, you should get a job at a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas. After that you can lead your own group, but then most of your time will be spent applying for grants and handling other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors need to spend time teaching as well, so that's another time sink.
I suspect I would have more time to actually dedicate to research, and could start doing it sooner, if I took a part-time job and did the research in my spare time. For example, the recommended rates for freelance journalists in Finland would allow me to spend one week each month working and three weeks doing research, assuming of course that I can pull off the freelance journalism part.
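To make that ratio concrete, here's a rough back-of-envelope sketch. All the numbers are made-up placeholders, not actual Finnish freelance rates or living costs:

```python
# Back-of-envelope: how many paid freelance days per month are needed
# to cover living costs? All figures below are hypothetical.

day_rate = 400           # euros earned per freelance day (assumed)
monthly_expenses = 1800  # euros of living costs per month (assumed)
working_days = 21        # typical working days in a month

days_needed = monthly_expenses / day_rate
print(f"Freelance days needed per month: {days_needed:.1f}")
print(f"Days left for research: {working_days - days_needed:.1f}")
# With these placeholder numbers: ~4.5 paid days, ~16.5 research days,
# i.e. roughly one week of work and three weeks of research per month.
```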
What (dis)advantages does this have compared to the traditional model?
Some advantages:
- Can spend more time on actual research.
- A lot more freedom with regard to what kind of research one can pursue.
- Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
- Easier to take time off from research if feeling stressed out.
Some disadvantages:
- Harder to network effectively.
- Need to get around journal paywalls somehow.
- Journals might be biased against freelance researchers.
- Easier to take time off from research if feeling lazy.
- Harder to combat akrasia.
- It might actually be better to spend some time doing research under others before doing it on your own.
EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.