Neuroscientist Tal Yarkoni denounces many of his colleagues' tendency to appeal to publish-or-perish incentives as an excuse for sloppy science (October 2018, ~4600 words). Perhaps read as a complement to our recent discussion of Moral Mazes?
There is a lot of arguing in the comments about what the 'tradeoffs' are for individuals in the scientific community and whether making those tradeoffs is reasonable. I think what's key in the quoted article is that fraudsters are trading so much for so little. They are actively obscuring and destroying scientific progress while contributing to the norm of obscuring and destroying scientific progress — potentially preventing cures for diseases, time- and life-saving technology, etc. This is REALLY BAD. And for what? A few dollars and an ego trip? An 11% instead of a 7% chance at a few dollars and an ego trip? I do not think it is unreasonable to judge this behavior as reprehensible, regardless of whether it is the 'norm'.
Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value. If 100% of scam artists steal people's money, I don't forgive a scam artist for stealing less money than the average scam artist. They are not 'making things better' by in theory reducing the average amount of money stolen per scam artist. They are still stealing money. DO NOT BECOME...
Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value.
This seems to imply that you think that the world would be better off without academia at all. Do you endorse that?
Perhaps you only mean that if the world would be better off without academia at all, and nearly everyone in it is net negative / destroying value, then no one could justify joining it. I can agree with the implication, but I disagree with the premise.
I do not like this post. I think it gets most of its rhetorical oomph from speaking in a very moralizing tone, with effectively no data, and presenting everything in the worst light possible; I also think many of its claims are flat-out false. Let's go through each point in order.
1. You can excuse anything by appealing to The Incentives
No, seriously—anything. Once you start crying that The System is Broken in order to excuse your actions (or inactions), you can absolve yourself of responsibility for all kinds of behaviors that, on paper, should raise red flags. Consider just a few behaviors that few scientists would condone:
- Fabricating data or results
- Regularly threatening to fire trainees in order to scare them into working harder
- Deliberately sabotaging competitors’ papers or grants by reviewing them negatively
Wow, that would be truly shocking; indeed that would be truly an indictment of academia. What's the evidence?
When Diederik Stapel confessed to fabricating the data used in over 50 publications, he didn’t explain his actions by saying “oh, you know, I’m probably a bit of a psychopath”; instead, he placed much of the blame squarely on The Incentives:
... Did you expect people who...
I agree with most of this review, and also didn't really like this post when it came out.
I think the first one could plausibly be a reason that we would want to promote this on LW. Unfortunately, I think it is wrong: I do not think that people should usually feel upon themselves the burden of bucking bad incentives. There are many, many bad incentives in the world; you cannot buck them all simultaneously and make the world a better place. Rather, you need to conform with the bad incentives, even though it makes your blood boil, and choose a select few areas in which you are going to change the world, and focus on those.
Just for the record, and since I think this is actually an important point, my perspective is that indeed people cannot take on themselves the burden of bucking all bad incentives, but that there are a few domains of society where not following these incentives is much worse than others and where I currently expect the vast majority of contributors to be net-negative participants because of those incentives (and as such establishing standards of "deal with it or leave it" is a potentially reasonable choice).
I think truth-seeking institutions are one of thos...
The discussion around It's Not the Incentives, It's You, was pretty gnarly. I think at the time there were some concrete, simple mistakes I was making. I also think there were 4-6 major cruxes of disagreement between me and some other LessWrongers. The 2019 Review seemed like a good time to take stock of that.
I've spent around 12 hours talking with a couple people who thought I was mistaken and/or harmful last time, and then 5-10 hours writing this up. And I don't feel anywhere near done, but I'm reaching the end of the timebox so here goes.
I think this post and the surrounding commentary (at least on the “pro” side) was making approximately these claims:
...A. You are obligated to buck incentives. You might be tempted sometimes to blame The Incentives rather than take personal responsibility for various failures of virtue (epistemic or otherwise). You should take responsibility.
B. Academia has gotten more dishonest, and academics are (wrongly) blaming “The Incentives” instead of taking responsibility.
C. Epistemics are the most important thing. Epistemic Integrity is the most important virtue. Improving societal epistemics is the top cause area.
- Possible stronger claim:
Personal Anecdote:
The forceful, moralizing tone of the article was helpful for me to internalize that I need the skill of noticing, and then bucking, incentives.
Just a few days ago, on Dec 31st, I found myself trying to rush an important blogpost out before 2020 ended, so it could show up next year in the 2020 LW Review. I found myself writing to some people, tongue-in-cheekly saying “Hey guys, um, the incentives say I should try to publish this today. Can you give feedback on it, and/or tell me that it’s not ready and I should take more time?”
And… well, sure I can hide behind the tongue-in-cheekness. And, "Can you help review a blogpost?" is a totally reasonable thing to ask my friends to do.
But, also, as I clicked ‘send’ on the email, I felt a little squirming in my heart. Because I knew damn well the post wasn’t ready. I was just having trouble admitting it to myself because I’d be sad if it were delayed a year from getting into the next set of LW Books. And this was a domain where I literally invented the incentives I was responding to.
It was definitely not the Incentives, It Was Me.
I still totally should have asked my fri...
Cognitive processes vs right answers; Median vs top thinkers
My frame here is “what cognitive strategy is useful for the median person to find the right answers”.
I think that people I’ve argued against here were focused more directly on “What are the right answers?” or “What should truthseekers with high standards and philosophical sophistication do?”.
I expect there to be a significant difference between the median academic and the sort of person participating in this conversation.
I think the median academic is running on social cognition, which is very weak. Fixing that should be their top priority. I think fixing that is cognitively very different from “not being academically dishonest.” (Though this may depend somewhat on what sort of academic dishonesty we’re talking about, and how prevalent it is)
I think the people I’ve argued with probably disagree about that, and maybe think that ‘be aligned with the truth’ is a central cognitive strategy that is entangled across the board. This seems false to me for most people, although I can imagine changing my mind.
Arranging coordinated-efforts-that-work (i.e. Stag Hunts) is the most important thing, most ...
I can't upvote this strongly enough. It's the perfect followup to discussion and analysis of Moloch and imperfect equilibria (and Moral Mazes) - goes straight to the heart of "what is altruism?" If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices, only economic ones.
Very nice. Few notes:
1. Wrong incentives are no excuse for bad behaviour; people should quit their jobs rather than engage in it.
2. The world isn't black and white; sometimes there is a gray zone where you contribute enough to be net positive, while cutting some corners to get your contribution accepted.
3. People tend to overestimate their contribution and underestimate the impact of their behaviour, so 2. is quite dangerous.
4. In an environment with sufficiently strong wrong incentives, the only result is that only those with weak morals survive. Natural selection.
5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.
Is this specific to research? Given unaligned incentives and Goodheart, I think you could make an argument that nothing important should be a source of income. All long-term values-oriented work should be undertaken as hobbies.
This is an interesting argument for funding something like the EA Hotel over traditional EA orgs.
There's been a great deal of discussion of the EA Hotel on the EA Forum. Here's one relevant thread:
Here's another:
It's possible the hotel's funding troubles have more to do with weirdness aversion than anything else.
I personally spent 6 months at the hotel, thought it was a great environment, and felt the time I spent there was pretty helpful for my career as an EA. The funding situation is not as dire as it was a little while ago. But I've donated thousands of dollars to the project and I encourage others to donate too.
Is it necessarily so? Today science means you spend a considerable portion of your time doing bullshit instead of actual research. Wouldn't you be in a much better position to do quality research if you were earning a good salary, saving a big portion of it, and doing science as a hobby?
Note that Wei Dai also notes that he chose exit from academia, as did many others on Less Wrong and in our social circles (combined with surprising non-entry).
If this is the model of what is going on, that quality and useful research is much easier without academia, but academia is how one gains credibility, then destroying the credibility of academia would be the logical useful action.
I thought peer review had always been a core part of science in some form or another. I think you might be confusing external peer review and editorial peer review. As this Wikipedia article says:
The first record of an editorial pre-publication peer-review is from 1665 by Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society at the Royal Society of London.[2][3][4]
The first peer-reviewed publication might have been the Medical Essays and Observations, published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process,[5] began to involve external reviewers in the mid-19th century,[6] and did not become commonplace until the mid-20th century.[7]
Peer review became a touchstone of the scientific method, but until the end of the 19th century was often performed directly by an editor-in-chief or editorial committee.[8][9][10] Editors of scientific journals at that time made publication decisions without seeking outside input, i.e. an external panel of reviewers, giving established authors latitude in their journalistic discretion. For example, Albert Einstein's four revolutionary Annus Mirabil...
I disagree with most of the post and most of the comments here. I think most academics are not explicitly committing fraud, but bad science results anyway. I also think that for the vast majority of (non-tenured) academics, if you don't follow the incentives, you don't make it in academia. If you intervened on ~100 entering PhD students and made them committed to always not following the incentives where they are bad, I predict that < 10% of them will become professors -- maybe an expected 2 of them would. So you can't say "why don't the academics just not follow the incentives"; any such person wouldn't have made it into academia. I think the appropriate worlds to consider are: science as it exists now with academics following incentives or ~no academia at all.
It is probably correct that each individual instance of having to deal with bad incentives doesn't make that much of a difference, but there are many such instances. Probably there's an 80-20 thing to do here where you get 80% of the benefit by not following the worst 20% of bad incentives, but it's actually quite hard to identify these, and it requires you to be able to p...
Survey and other data indicate that in these fields most people were doing p-hacking/QRPs (running tests selected ex post, optional stopping, reporting and publication bias, etc), but a substantial minority weren't, with individual, subfield, and field variation. Some people produced ~100% bogus work while others were ~0%. So it was possible to have a career without the bad practices Yarkoni criticizes, aggregating across many practices to look at overall reproducibility of research.
And he is now talking about people who have been informed about the severe effects of the QRPs (that they result in largely bogus research at large cost to science compared to reproducible alternatives that many of their colleagues are now using and working to reward) but choose to continue the bad practices. That group is also disproportionately tenured, so it's not a question of not getting a place in academia now, but of giving up on false claims they built their reputation around and reduced grants and speaking fees.
I think the core issue is that even though the QRPs that lead to mostly bogus research in fields such as social psych and neuroimaging often started off without intentional bad conduct, their bad effects have now become public knowledge, and Yarkoni is right to call out those people on continuing them and defending continuing them.
The former is a statement about outcomes while the latter is a statement about intentions.
My model for how most academics end up following bad incentives is that they pick up the incentivized bad behaviors via imitation. Anyone who doesn't do this ends up doing poorly and won't make it in academia (and in any case such people are rare, imitation is the norm for humans in general). As part of imitation, people come up with explanations for why the behavior is necessary and good for them to do. (And this is also usually the right thing to do; if you are imitating a good behavior, it makes sense to figure out why it is good, so that you can use that underlying explanation to reason about what other behaviors are good.)
I think that I personally am engaging in bad behaviors because I incorrectly expect that they are necessary for some goal (e.g. publishing papers to build academic credibility). I just can't tell which ones really are necessary and which ones aren't.
I did not follow the Moral Mazes discussion as it unfolded. I came across this article context-less. So I don't know that it adds much to Lesswrong. If that context is relevant, it should get a summary before diving in. From my perspective, its inclusion in the list was a jump sideways.
It's written engagingly. I feel Yarkoni's anger. Frustration bleeds off the page, and he has clearly gotten on a roll. Not performing moral outrage, just *properly, thoroughly livid* that so much has gone wrong in the science world.
We might need that.
What he wrote does not o...
This post substantially updated my thinking about personal responsibility. While I totally disagree with the one-sided framing of the post, it made me see that the "personal responsibility" vs. "incentives" debate wasn't really about beliefs at all, but was in fact about the framing.
I think it articulates the "personal responsibility" frame particularly well, and helps see how choosing "individuals" as the level of abstraction naturally leads to a personal responsibility framing.
In general, I think this post does a great job of articulating a single, incomplete frame. Others in the review take umbrage with the moralizing tone, but I think the moralizing tone is actually quite useful to give an inside view of this frame.
I believe this frame is incomplete, but gives an important perspective that is often ignored in the Lesswrong/Gray tribe.
While I don't think this post is actually eligible for the Best of LW 2019 book (since it's written offsite and is only a linkpost here), I think it's reasonable to nominate the comments here for some kind of "what do we collectively feel about this 1.5 years later?" discussion.
Definitely think this is an important point in the conversation.
I think my take is something like "The incentives are the problem" is a useful frame for how to look at systems and (often but not always) other people, but should throw up a red flag when you use it as an excuse for your own behavior.
I'm not sure I endorse this post precisely as written, because "take ownership of your behavior" is a cause that will be Out To Get You for everything you've got (while leaving you vulnerable to Asymmetric Justice in the meanwhile). ...
If you're an academic and you're using fake data or misleading statistics, you are doing harm rather than good in your academic career. You are defrauding the public, you are making our academic norms be about fraud, you are destroying both public trust in academia in particular and knowledge in general, and you are creating justified reasons for this destruction of trust. You are being incredibly destructive to the central norms of how we figure things out about the world - one of many of which is whether or not it is bad to eat meat, or how we should uphold moral standards.
And you're doing it in order to extract resources from the public, and grab your share of the pie.
I would not only rather you eat meat. I would rather you literally go around robbing banks at gunpoint to pay your rent.
If one really, really did think that personally eating meat was worse than committing academic fraud - which boggles my mind, but supposing that - what the hell are you doing in academia in the first place, and why haven't you quit yet? Unless your goal now is to use academic fraud to prevent people from eating meat, which I'd hope is something you wouldn't endorse, and not what 99%+ of these people are doing. As the author of OP points out, if you can make it in academia, you can make more money outside of it, and have plenty of cash left over for salads and for subsidizing other people's salads, if that's what you think life is about.
It's fair to say that fake data is a Boolean and a Rubicon, where once you do it once, at all, all is lost. Whereas there are varying degrees of misleading statistics versus clarifying statistics, and how one draws conclusions from those statistics, and one can engage in some amount of misleading without dooming the whole enterprise, so long as (as you note) the author is explicit and clear about what the data was and what tests were applied, so anyone reading can figure out what was actually found.
However, I think it's not that hard for it to pass a threshold where it's clearly fraud, although still a less harmful/dangerous fraud than fake data, if you accept that an opinion columnist cherry-picking examples is fraud (e.g. for it to be more fraudulent than that, especially if the opinion columnist isn't assumed to be claiming that the examples are representative). And I like that example more the more I think about it, because that's an example of where I expect to be softly defrauded, in the sense that I assume the examples and arguments are soldiers, chosen to make a point slash sell papers, rather than an attempt to create common knowledge and seek truth. If scientific papers are in the same reference class as that...
I am very surprised that you still endorse this comment on reflection, but given that you do, it's not unreasonable to ask: Given that most people lie a lot, that you think personally not eating meat is more important than not lying, your track record of actually not eating meat, and your claim that it's reasonable to be a 51st-percentile moral person — why should we then trust your statements to be truthful? Let alone in good faith. I mean, I don't expect you to lie because I know you, but if you actually believed the above for real, wouldn't my expectation be foolish?
I'm trying to square your above statement and make it make sense for you to have said it and I just... can't?
True. But I do think we've run enough experiments on 'don't say anyone is a bad person, only point out bad actions and bad logic and false beliefs' to know that people by default read that as claims about who is bad, and we need better tech for what to do about this.
The thing is, I don't think that shorthand (along with similar things like "You're an idiot") ever stays understood outside of very carefully maintained systems of people working closely together in super high trust situations, even if it starts out understood.
I think two different things are going on here:
1. The OP read as directly moralizing to me. I do realize it doesn't necessarily spell it out directly, but moralizing language rarely is. I don't know the author of the OP. There are individuals I trust on LW to be able to have this sort of conversation without waging subtle-or-unsubtle wars over who is a bad person, but they are rare. I definitely don't assume that for random people on the internet.
2. My "Be in the top 50% morally" statement was specifically meant to be in the context of the full Scott Alexander post, which is explicitly about (among other things) people being worried about being a good person.
And, yes, I brought the second point up (and I did bring it up in an offhand way without doing much to establish the context, which was sloppy. I do apologize for that).
But after providing the link, it seemed like people were still criticizing that point. And... I'm not sure I have a good handle on how this played out. But my impression is something like you and maybe a couple others were criticizing the 50% comment as if it were part of a different context, whereas if you read the original po...
Note: I may not be able to weigh in on this more until Sunday.
Clarifying some things all at once since a few people have brought up related points. I'm probably not going to get to address the "which is worse – lying or eating meat" issue until Sunday (in the meanwhile, to be clear, I think "don't lie" is indeed one of the single most important norms to coordinate on, and to create from scratch if you don't have such a norm, regardless of whether there are other things that are as or more important)
A key clause in the above comment was:
If the norm in academia is to use bad statistics, or fake data (I don't know whether it is or not, or how common it is)
In a world where the norm in academia is to not use bad statistics, or not to fake data, then absolutely the correct thing is to uphold that norm.
In a world where the norm is explicitly not to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.
I don't actually know the state of academia en...
In a world where the norm is explicitly not to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.
I think this claim is a hugely important error.
One scientist unilaterally deciding to stop faking data isn't going to magically make the whole world come around. But the idea that it doesn't help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn't make it worse?
I don't understand how one can think that.
That's not unique to the example of faking data. That's true of anything (at least partially) observable that you'd like to change.
One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don't ...
One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don't pretend it doesn't matter.
This reads as enormously uncharitable to Raemon, and I don't actually know where you're getting it from. As far as I can tell, not a single person in this conversation has made the claim that it "doesn't matter"--and for good reason: such a claim would be ridiculous. That you seem willing to accuse someone else in the conversation of making such a claim (or "pretending" it, which is just as bad) doesn't say good things about the level of conversation.
What has been claimed is that "doing the thing that reinforces good norms" is ineffective, i.e. it doesn't actually reinforce the good norms. The claim is that without a coordinated effort, changes in behavior on an individual level have almost no effect on the behavior of the field as a ...
We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it.
No. No. Big No. A thousand times no.
(We all agree with that first sentence, everyone here knows these things are bad, that's just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)
I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I'm happy to know that this is not a straw man, that this is not going to get the Motte and Bailey treatment.
I'm still worried that such treatment will mostly occur...
There is a position, that seems to be increasingly held and openly advocated for, that if someone does something according to their local, personal, short-term amoral incentives, that this is, if not automatically praiseworthy (although I believe I have frequently seen this too, increasingly explicitly, but not here or by anyone in this discussion), at least an immunity from being blameworthy, no matter the magnitud...
Here's another further-afield steelman, inspired by blameless postmortem culture.
When debriefing / investigating a bad outcome, it's better for participants to expect not to be labeled as "bad people" (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.
More social pressure against admitting publicly that one is contributing poorly contributes to systematic hiding/obfuscation of information about why people are making those choices (e.g. incentives). And we need all that information to be out in the clear (or at least available to investigators who are committed & empowered to solve the systemic issues), if we are going to have any chance of making lasting changes.
In general, I'm curious what Zvi and Ben think about the interaction between "I expect people to yell at me if I say I'm doing this" and promoting/enabling "honest accounting".
Trying to steelman the quoted section:
If one were to be above average but imperfect (e.g. not falsifying data or p-hacking but still publishing in paid access journals) then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive but if they don’t consider themselves able to afford the penalty of being perfect then they leave and the field suffers.
I’m not sure I endorse the specific example there but in a personal example:
My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.
I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.
If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.
Thank you.
I read your steelman as importantly different from the quoted section.
It uses the weak claim that such action 'could be bad' rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.
It changes the standard of behavior from 'any behavior that responds to local incentives is automatically all right' to 'behaviors that are above average and net helpful, but imperfect.'
This is an example of the kind of equivalence/transformation/Motte and Bailey I've observed, and am attempting to highlight — not that you're doing it (you're not, because this is explicitly a steelman), but that I've seen. Compare the claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally, personally incentivized to do — which in this case means meeting explicit targets and avoiding things you would be blamed for, no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).
That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.
I don't endorse the quoted statement, I think it's just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it's wrong to shame someone for violating a norm they didn't explicitly agree to follow. If you call me out for falsifying data, you're not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you're simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.
(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don't see it that way.)
But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change.
I haven't said 'bad person' unless I'm missing something. I've said things like 'doing net harm in your career' or 'making it worse' or 'not doing the right thing.' I'm talking about actions, and when I say 'right thing' I mean shorthand for 'that which moves things in the directions you'd like to see' rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.
It's a strange but consistent thing that people's brains flip into assuming that anyone who thinks some actions are better than other actions is accusing those who don't take the better actions of being bad people. Or even, as you say, 'exceptionally bad' people.
Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.
On two levels.
Level one is the one where some level of endorsement of something means that I'm making the accusations in it. At some of the levels at which this happens in the wild, that's clearly reasonable; at other levels at which it happens, it's clearly unreasonable.
Level two is that the OP doesn't make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for 'science' or 'the world' but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.
That is importantly different from claiming that these are bad people.
Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?
I actually am asking, because I don't know.
I've touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust for it, that people have bought into. Or you need to continuously invest effort in it. And yes, that sucks. It's hugely inefficient. But I don't actually see alternatives.
It sucks even more because it's probably anti-inductive: as some phrases become commonly understood, they later become carrier waves for subtle barbs and political manipulations. (I'm not confident how common this is. I think a more prototypical example is "southern politeness", as in "Oh, bless your heart".)
So I don't think there's a permanent answer for public discourse. There's just costly signaling via phrasing things carefully in a way that suggests you're paying attention to your reader's mental state (including their mental map ...
Optimizing for anything is costly if you’re not counting the thing itself as a benefit.
Suppose I do count the thing itself (call it X) as a benefit. Given that I'm also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or "call out" someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line, such as just making up data out of thin air). I think this probably underlies some people's intuitions that calling people out for this is bad.
Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.
What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the
...I think these are (at least some of) the right questions to be asking.
The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?
Which I won't answer here, because it's a hard question, but my current best guess on question one is: It's the natural endpoint if you don't create a culture that explicitly opposes it (e.g. any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better unless you have a dramatic upheaval which usually means starting over entirely) and also that the more other large organizations around you are immoral mazes, the faster and harder such pressures will be, and the more you need to push back to stave them off.
My best guess on question two is: Quite a lot. At least right here, right now any sufficiently large organization, be ...
Let me try to be a little clearer here.
If someone defrauds me, and I object, and they explain that the incentive structure society has set up for them pays more on net for fraud than for honest work, then this is at least a relevant reply, and one that is potentially consistent with owning one's decision to participate in corruption rather than fighting it or opting out. (Though I think the article makes a pretty good case that in the specific case of academia, "fighting it or opting out" is better for most reasonable interests.)
If someone defrauds me, and I object, and they explain that they're instead spending their goodness budget on avoiding eating meat, this is not a relevant reply in the same sense. Factory farmed animals aren't a party we're negotiating with or might want to win the trust of, and the public interest in accurate information is different in kind from the public interest in people not causing animals to suffer.
This is especially important in the light of a fairly recent massive grass-roots effort in academia - originated by academics in multiple disciplines volunteering their spare time - to do the work that led to the replication crisis, because academics in many fields are actually still trying to get the right answer along some dimensions and are willing to endure material costs (including reputational damage to their own fields) to do so. So, that's not actually a proposal to decline to initiate a stag hunt, that's a proposal to unilaterally choose Rabbit in a context where close to a critical quorum might be choosing Stag.
Another distinction I think is important, for the specific example of "scientific fraud vs. cow suffering" as a hypothetical:
Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.
I have a guess that "science, specifically" as a career-with-harmful-impacts in the hypothetical was not specifically important to Ray, but that it was very important to Ben. And that if the example career in Ray's "which harm is highest priority?" thought experiment had been "high-frequency-trading" (or something else that some folks believe has harms when ordinarily practiced, but is lucrative and thus could have benefits worth staying for, and is not specifically a role of stewardship over our communal epistemics) that Ben would have a different response. I'm curious to what extent that's true.
You're right that I'd respond to different cases differently. Doing high frequency trading in a way that causes some harm - if you think you can do something very good with the money - seems basically sympathetic to me, in a sufficiently unjust society such as ours.
Any info good (including finance and trading) is on some level pretending to involve stewardship over our communal epistemics, but the simulacrum level of something like finance is pretty high in many respects.
One distinction I see getting elided here:
I think one's limited resources (time, money, etc) are a relevant question in one's behavior, but a "goodness budget" is not relevant at all.
For example: In a world where you could pay $50 to the electric company to convert all your electricity to renewables, or pay $50 more to switch from factory to pasture-raised beef, then if someone asks "hey, your household electrical bill is destroying the environment, why didn't you choose the green option", a relevant reply is "because I already spent my $50 on cow suffering".
However, if both options cost $0, then "but I already switched to pasture-raised beef" is just irrelevant in its entirety.
I almost wrote a reply to that post when it came up (but didn't, because one should not respond too much when Someone Is Wrong On The Internet, even Scott), because this neither seemed like an economic perspective on moral standards, nor did it work under any equilibrium (it causes a moral purity cascade, or it does little, rarely anything in between), nor did it lead to useful actions on the margin in many cases, as it ignores cost/benefit questions entirely. Strictly dominated actions become commonplace. It seems more like a system for avoiding being scapegoated and feeling good about oneself, as Benquo suggests.
(And of course, >50% of people eat essentially the maximum amount of quality meat they can afford.)
I don't actually understand how to be "more charitable" or "less charitable" here - I'm trying to make sense of what you're saying, and don't see any point in making up a different but similar-sounding opinion which I approve of.
If I try to back out what motives lead to tracking the average level of morality (as opposed to trying to do decision theory on specific cases), it ends up to be about managing how much you blame yourself for things (i.e. trying to "be" "good"); I actually don't see how thinking about global outcomes would get you there.
If you have a different motivation that led you there, you're in a better position to explain it than I am.
Writing posts a certain way to get more karma on lesswrong is an area of application for this stance.
As a synthesis of points 1 and 4: it is both the incentives and you. The incentives explain why the game is so bad, but you have to ask yourself why you still keep playing it.
A researcher with more personal integrity would resist the temptation/pressure to do sloppy science... and perhaps lose their job as a result. The sloppy science itself would remain, only done by someone else.