Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of the most effective way to do so.
Traditional wisdom says that if you want to do research, you should get a job at a university. But for the most part, the system seems to be set up so that you first spend a long time working for someone else, researching their ideas. After that you can lead your own group, but then most of your time will be spent applying for grants and on other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.
I suspect I would have more time to actually dedicate to research, and could start doing it sooner, if I took a part-time job and did the research in my spare time. For example, the recommended rates for a freelance journalist in Finland would allow me to spend one week each month working and three weeks doing research, assuming of course that I can pull off the freelance journalism part.
What (dis)advantages does this have compared to the traditional model?
Some advantages:
- Can spend more time on actual research.
- A lot more freedom with regard to what kind of research one can pursue.
- Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
- Easier to take time off from research if feeling stressed out.
Some disadvantages:
- Harder to network effectively.
- Need to get around journal paywalls somehow.
- Journals might be biased against freelance researchers.
- Easier to take time off from research if feeling lazy.
- Harder to combat akrasia.
- It might actually be better to spend some time doing research under others before doing it on your own.
EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.
If you believe that "decision is a failure" is evidence that the decision theory is not adequate, you believe that "decision is a success" is evidence that the decision theory is adequate.
Since a decision theory's adequacy is determined by how successful its decisions are, you appear to be saying "if a decision theory makes a bad decision, it is a bad decision theory" which is tautologically true.
Correct me if I'm wrong, but Vladimir_Nesov is not interested in whether the decision theory is good or bad, so restating an axiom of decision theory evaluation is irrelevant.
The decision was made by a certain decision theory. The factual question "was the decision-maker holding to this decision theory in making this decision?" is entirely unrelated to the question "should the decision-maker hold to this decision theory given that it makes bad decisions?". To suggest otherwise blurs the prescriptive/descriptive divide, which is what Vladimir_Nesov is referring to when he says
I believe that if the decision theory clearly led to an incorrect result (which it clearly did in this case, despite Vladimir Nesov's energetic equivocation), then it is important to examine the limits of the decision theory.
As I understand it, the purpose of bothering to advocate TDT is that it beats CDT in the hypothetical case of dealing with Ome...