Stuart_Armstrong comments on The Doomsday argument in anthropic decision theory - Less Wrong Discussion

5 Post author: Stuart_Armstrong 31 August 2017 01:44PM

Comments (54)

Comment author: Stuart_Armstrong 31 August 2017 10:39:50PM *  9 points [-]

Nothing particularly riveting... Submitted it to a large number of philosophy journals in sequence, got either rejections or requests to modify and resubmit, did the rewrites, and these were all eventually rejected too. A few times it looked like it might get accepted, but then it was borderline rejected. Basically, the reviewers felt it was either not important enough, or they were committed to a probability view of anthropics and didn't like the decision-based approach.

Or possibly I was explaining it badly - but the reviewers were not consistent about what was bad about the explanations.

Comment author: Wei_Dai 01 September 2017 12:16:04AM *  7 points [-]

Thanks for posting this! I wonder how much negative utility academia causes just in terms of this kind of frustrating experience, and how many kids erroneously start down an academic path because they never hear people tell stories like this.

Here's my own horror story with academic publishing. I was an intern at an industry research lab, and came up with a relatively simple improvement to a widely used cryptographic primitive. I spent a month or two writing it up (along with relevant security arguments) as well as I could using academic language and conventions, etc., with the help of a mentor who worked there and who used to be a professor. Submitted to a top crypto conference and weeks later got back a rejection with comments indicating that all of the reviewers completely failed to understand the main idea. The comments were so short that I had no way to tell how to improve the paper and just got the impression that the reviewers weren't interested in the idea and made little effort to try to understand it. My mentor acted totally unsurprised and just said something like, "let's talk about where to submit it next." That's the end of the story because I decide if that's how academia works I wanted to have nothing to do with it when there's, from my perspective, an obviously better way to do things, i.e., writing up the idea informally, posting it to a mailing list and getting immediate useful feedback/discussions from people who actually understand and are interested in the idea.

Comment author: Stuart_Armstrong 01 September 2017 06:51:21AM 4 points [-]

writing up the idea informally, posting it to a mailing list and getting immediate useful feedback/discussions from people who actually understand and are interested in the idea.

Note that I was doing that as well. And many academics similarly do both routes.

Comment author: Wei_Dai 01 September 2017 08:26:20AM 2 points [-]

What's your and their motivation to also do the academic publishing route (or motivation to go into or remain in academia which forces them to also do the academic publishing route)? I guess I can understand people at FHI doing it for the prestige associated with being in academia which they can convert into policy influence, but why do, say, academic decision theorists do it? Do they want the prestige as a terminal goal? Is it the easiest way for them to make a living while also doing research? Did they go down the academic path not knowing it would be like this and it was too late when they found out?

Assuming the above exhausts the main reasons, it seems like a good idea for someone who doesn't care much about the prestige, and can make money more easily elsewhere, to skip academic publishing and use the time instead for more research, making more money, or just leisure?

Comment author: Stuart_Armstrong 01 September 2017 11:36:27AM 4 points [-]

If you're a published academic in field X, you're part of the community of published academics in field X, which, in many but not all cases, is the entirety of people doing serious work in field X. "Prestige" mainly translates to "people who know stuff in this area take me seriously".

Comment author: Wei_Dai 01 September 2017 01:10:48PM 5 points [-]

When I was involved in crypto there were forums that both published academics and unpublished hobbyists participated in, and took each other seriously. If this isn't true in a field, it makes me doubt that intellectual progress is still the highest priority in that field. If I were a professional philosopher working in anthropic reasoning, I don't see how I can justify not taking a paper about anthropic reasoning seriously unless it passed peer review by anonymous reviewers whose ideas and interests may be very different from my own. How many of those papers can I possibly come across per year, that I'd justifiably need to outsource my judgment about them to unknown peers?

(I think peer review does have a legitimate purpose in measuring people's research productivity. University admins have to count something to determine whom to hire and promote, and the number of papers that pass peer review is perhaps one of the best measures we have. It can also help outsiders know who can be trusted as experts in a field, which is what I was thinking of by "prestige". But there's no reason for people who are already experts in a field to rely on it instead of their own judgment.)

Comment author: Stuart_Armstrong 01 September 2017 01:23:17PM 3 points [-]

If I were a professional philosopher working in anthropic reasoning, I don't see how I can justify not taking a paper about anthropic reasoning seriously

Depends on how many cranks there are in anthropic reasoning (lots) and how many semi-serious people post ideas that have already been addressed or refuted in papers already (in philosophy in general, this is huge; in anthropic reasoning, I'm not sure).

Comment author: Wei_Dai 01 September 2017 05:10:58PM *  0 points [-]

Lots of places attract cranks and semi-serious people, including the crypto forums I mentioned, LW, and everything-list (a mailing list I created with anthropic reasoning as one of its main topics), and they're not that hard to deal with. Basically, it doesn't take a lot of effort to detect cranks and previously addressed ideas, and everyone can ignore the cranks while the more experienced hobbyists educate the less experienced ones.

EDIT: For anyone reading this, the discussion continues here.

Comment author: Stuart_Armstrong 02 September 2017 06:23:45PM 0 points [-]

Basically it doesn't take a lot of effort to detect cranks and previously addressed ideas

This is news to me. Encouraging news.

Comment author: turchin 02 September 2017 12:11:01PM *  0 points [-]

If a dedicated journal for anthropic reasoning existed (or, say, for AI risks and other x-risks), would it improve the quality of peer review and research? Or would it be no more useful than Less Wrong?

Comment author: Stuart_Armstrong 02 September 2017 06:27:13PM 2 points [-]

If I were a professional philosopher working in anthropic reasoning, I don't see how I can justify not taking a paper about anthropic reasoning seriously

But there are no/few philosophers working in "anthropic reasoning" - there are many working in "anthropic probability", to which my paper is an interesting irrelevance. It's essentially asking and answering the wrong question, while claiming that their question is meaningless (and doing so without citing some of the probability/decision theory material that might back up the "anthropic probabilities don't exist/matter" claim from first principles).

I expected the paper would get published, but I always knew it was a bit of a challenge, because it didn't fit inside the right silos. And the main problem with academia here is that people tend to stay in their silos.

Comment author: Wei_Dai 05 September 2017 02:47:50PM 1 point [-]

But there are no/few philosophers working in "anthropic reasoning" - there are many working in "anthropic probability", to which my paper is an interesting irrelevance. It's essentially asking and answering the wrong question, while claiming that their question is meaningless

Seems like a good explanation of what happened to this paper specifically.

(and doing so without quoting some of the probability/decision theory stuff which might back up the "anthropic probabilities don't exist/matter" claim from first principles)

I guess that would be the thing to try next, if one was intent on pushing this stuff back into academia.

And the main problem with academia here is that people tend to stay in their silos.

By doing that, they can better know what the fashionable topics are, what referees want to see in a paper, etc., which helps them maximize their chances of getting papers published. This seems to be another downside of the current peer review system, as well as of the larger publish-or-perish academic culture.