Adam Mastroianni writes about "actually caring about stuff, and for the right reasons", rather than just LARPing. The opening is excerpted below.
I once saw someone give a talk about a tiny intervention that caused a gigantic effect, something like, “We gave high school seniors a hearty slap on the back and then they scored 500 points higher on the SAT.”
Everyone in the audience was like, “Hmm, interesting, I wonder if there were any gender effects, etc.”
I wanted to get up and yell: “EITHER THIS IS THE MOST POTENT PSYCHOLOGICAL INTERVENTION EVER, OR THIS STUDY IS TOTAL BULLSHIT.”
If those results are real, we should start a nationwide backslapping campaign immediately. We should be backslapping astronauts before their rocket launches and Olympians before their floor routines. We should be running followup studies to see just how many SAT points we can get—does a second slap get you another 500? Or just another 250? Can you slap someone raw and turn them into a genius?
Or—much more likely—the results are not real, and we should either be a) helping this person understand where they screwed up in their methods and data analysis, or b) kicking them out for fraud.
Those are the options. Asking a bunch of softball questions (“Which result was your favorite?”) is not a reasonable response. That’s like watching someone pull a rabbit out of a hat actually for real, not a magic trick, and then asking them, “What’s the rabbit’s name?”
I can't speak to what the OP meant by that. But scientific publishing does require spin, at least if you're aiming for a good journal. There is no magic axis along which people care about some things and not others, so it's your job as an author to persuade people to care about your results. This shifts the dial in all sorts of little ways.
"Well, in the end it seems like we learned nothing." If that is the conclusion, you don't get to publish the paper, which is not good for your career. Whereas "In conclusion, we have shown {really important result} beyond any shadow of a doubt" is good. But real results are in the middle. You have something, but there are caveats, assumptions, details that you don't think are important, but who knows, maybe they are? For any particular weakness, how much emphasis does it get? A paragraph? A sentence? A footnote? A line in the supplementary information? Nothing at all? How much emphasis would you give that weakness in your methodology if the publishing process were not incentivising you to push it as far down that list as possible?
"The manufacturing process was not reliable, but the 32nd device tested worked fairly well, as plotted in Fig. 3."
"Why mention the 31 devices that the paper is not about?"
"Assumption 1, assumption 2..."

"It's nice that you understand and explain these assumptions so well. But they are all pretty standard in the field. I think we can drop these paragraphs and just say 'using standard approximations'. Actually, 'standard methods' sounds better."
The end result:
PhD student's draft: "It is possible that quantum information technology might be important at some point in the future. One aspect of that is 2-bit operations, but they need to be robust. One particular 2-bit operation is the CNOT gate. In this paper we demonstrate a CNOT quantum gate that only worked on the 32nd tested device. It's nowhere near good enough for a useful quantum computer, but it worked OK on Tuesday, which is something (when I returned to get more data on Wednesday, it had permanently stopped working for inexplicable reasons)."
Final (published) paper after the professor's edit: "Quantum technology will soon revolutionise all aspects of human society, bringing vast social and economic benefits. The key obstacle to realising these enormous gains is a reliable 2-bit quantum gate. In this paper we propose a novel design for such a gate, and find that a high level of reliability can be achieved simultaneously with improved device speed."