
Comment author: paulfchristiano 08 May 2017 02:13:28AM 2 points [-]

I think this is very unlikely.

Comment author: Benquo 08 May 2017 06:56:07AM 0 points [-]

I think this would be valuable to work out eventually, but this probably isn't the right time and place, and in the meantime I recognize that my position isn't obviously true.

Comment author: paulfchristiano 08 May 2017 02:11:54AM 1 point [-]

a. Private discussion is nearly as efficient as public discussion for information-transmission, but has way fewer political consequences. On top of that, the political context is more collaborative between participants and so it is less epistemically destructive.

b. I really don't want to try to use collectively-enforced norms to guide epistemology and don't think there are many examples of this working out well (whereas there seem to be examples of avoiding epistemically destructive norms by moving into private).

Comment author: Benquo 08 May 2017 06:49:49AM *  0 points [-]

Private discussion is nearly as efficient as public discussion for information-transmission, but has way fewer political consequences.

If this is a categorical claim, then what are academic journals for? Should we ban the printing press?

If your claim is just that some public forums are too corrupted to be worth fixing, not a categorical claim, then the obvious thing to do is to figure out what went wrong, coordinate to move to an uncorrupted forum, and add the new thing to the set of things we filter out of our new walled garden.

Comment author: entirelyuseless 06 May 2017 02:22:52PM *  0 points [-]

I would be a bit surprised if that was explicitly what Nate meant, but it is what we should be concerned about, in terms of being concerned about whether someone is a bad person.

To make my general claim clearer: "doing evil to bring about good is still doing evil" is necessarily true, for exactly the same reason that "blue objects touching white objects are still blue objects" is true.

I agree that many utilitarians understand their moral philosophy to recommend doing evil for the sake of good. To the extent that it does, their moral philosophy is mistaken. That does not necessarily mean that utilitarians are bad people, because you can be mistaken without being bad. But this is precisely the reason that when you present scenarios where you say, "would you be willing to do such and such a bad thing for the sake of good," many utilitarians will reply, "No! That's not the utilitarian thing to do!" And maybe it is the utilitarian thing, and maybe it isn't. But the real reason they feel the impulse to say no, is that they are not bad people, and therefore they do not want to do bad things, even for the sake of good.

This also implies, however, that if someone understands utilitarianism in this way and takes it too seriously, they will indeed start down the road towards becoming a bad person. And that happened even in the context of the present discussion (understood more broadly to include its antecedents) when certain people insisted, saying in effect, "What's so bad about lying and other deceitful tactics, as long as they advance my goals?"

Comment author: Benquo 06 May 2017 05:43:57PM 0 points [-]

I agree that this exists, and claim that it ought to be legitimate discourse to claim that someone else is doing it.

Comment author: Raemon 03 May 2017 02:59:14AM 1 point [-]

I agree that this is a big and complicated deal and "never resort to sensationalist tactics" isn't a sufficient answer for reasons close to what you describe. I'm not sure what the answer is, but I've been thinking about ideas.

Basically, I think we automatically fail if we have no way to punish defectors, and we also automatically fail if controversy/sensationalism-as-normally-practiced is our main tool for doing so.

I think the threat of sensationalist tactics needs to be real. But it needs to be more like Nuclear Deterrence than it is like tit-for-tat warfare.

We've seen where sensationalism/controversy leads - American journalism. It is a terrible race to the bottom of inducing as much outrage as you can. It is anti-epistemic, anti-instrumental, anti-everything. Once you start down the dark path, forever will it dominate your destiny.

I am very sympathetic to the fact that Ben tried NOT doing that, and it didn't work.

Comment author: Benquo 06 May 2017 02:40:08AM *  1 point [-]

Comments like this make me want to actually go nuclear, if I'm not already getting credit for avoiding doing so.

I haven't really called anyone in the community names. I've worked hard to avoid singling people out, and instead tried to make the discussion about norms and actions, not persons. I haven't tried to organize any material opposition to the interests of the organizations I'm criticizing. I haven't talked to journalists about this. I haven't made any efforts to widely publicize my criticisms outside of the community. I've been careful to bring up the good points as well as the bad of the people and institutions I've been criticizing.

I'd really, really like it if there were a way to get sincere constructive engagement with the tactics I've been using. They're a much better fit for my personality than the other stuff. I'd like to save our community, not blow it up. But we are on a path towards enforcing norms to suppress information rather than disclose it, and if that keeps going, it's simply going to destroy the relevant value.

(On a related note, I'm aware of exactly one individual who's been accused of arguing in bad faith in the discourse around Nate's post, and that individual is me.)

Comment author: entirelyuseless 05 May 2017 01:14:00PM 1 point [-]

This seems completely false. Most people think that Hitler and Stalin were intrinsically bad, and they would be likely to think this with or without systems of dominance.

Kant and Thomas Aquinas explain it quite well: we call someone a "bad person" when we think they have bad will. And what does bad will mean? It means being willing to do bad things to bring about good things, rather than wanting to do good things period.

Comment author: Benquo 05 May 2017 08:22:28PM *  0 points [-]

Do you think Nate's claim was that we oughtn't so often jump to the conclusion that people are willing to do bad things in order to bring about good things? That this is the accusation that's burning the commons? I'm pretty sure many utilitarians would say that this is a fair description of their attitude at least in principle.

Comment author: paulfchristiano 01 May 2017 03:10:35PM *  11 points [-]

I agree that most relevant bad behavior isn't going to feel from the inside like an attempt to mislead, and I think that rationalists sometimes either ignore this or else have an unfounded optimism about nominal alignment.

It would be surprising, if bad intent were so rare in the relevant sense, that people would be so quick to jump to the conclusion that it is present. Why would that be adaptive?

In the evolutionary context, our utterances and conscious beliefs are optimized for their effects on others, and not merely for accuracy. Believing and claiming bad things about competitors is a typical strategy. Prima facie, accusations of bad faith are particularly attractive since they can be levied on sparse evidence yet are rationally compelling. Empirically, accusations of bad faith are particularly common.

Acting in bad faith doesn’t make you intrinsically a bad person, because there’s no such thing.

This makes an interesting contrast with the content of the post. The feeling that some people are bad is a strong and central social intuition. Do you think you've risen to the standard of evidence you are requesting here? It seems to me that you are largely playing the same game people normally play, and then trying to avoid norms that regulate the game by disclaiming "I'm not playing the game."

For the most part these procedural issues seem secondary to disputes about facts on the ground. But all else equal they're a reason to prefer object-level questions to questions about intent, logical argument and empirical data to intuition, and private discussion to public discussion.

Comment author: Benquo 05 May 2017 06:31:26AM 0 points [-]

Why would these procedural issues be a reason to prefer private discourse?

Comment author: Raemon 01 May 2017 11:54:19PM *  10 points [-]

I don't actually know how not to play the same old game yet, but I am trying to construct a way.

I see you aiming to construct a way and making credible progress, but I worry that you're trying to do too many things at once and are going to cause lasting damage by the time you figure it out.

Specifically, the "confidence game" framing of the previous post moved it from "making an earnest good faith effort to talk about things" to "the majority of the post's content is making a status move"[1] (particularly in the context of your other recent posts, and exacerbated by this one), and if I were using the framing of this current post I'd say both the previous post and this one have bad intent.

I don't think that's a good framing - I think it's important that you (and folk at OpenPhil and at CEA) do not just have an internally positive narrative but are actually trying to do things that cash out to "help each other" (in a broad sense of "help each other"). But I'm worried that this will not remain the case much longer if you continue on your current trajectory.

A year ago, I was extremely impressed with the work you were doing and points you were making, and frustrated that those points were not having much impact.

My perception was that "EA Has A Lying Problem" was an inflection point where a) yeah, people started actually paying attention to the class of criticism you're doing, but b) the mechanism by which people started paying attention was by critics invoking rhetoric and courting controversy, which was approximately as bad as the problem it was trying to solve (or at least, within an order of magnitude as bad).

[1] I realize there was a whole lot of other content of the Confidence Game post that was quite good. But, like, the confidence game part is the part I remember easily. Which is the problem.

Comment author: Benquo 05 May 2017 06:27:48AM *  2 points [-]

Overall, a lot of this feels to me like asking me to do more work, with no compensation, and no offers of concrete independent help, and putting the burden of making the interaction go well on the critic.

A year ago, I was extremely impressed with the work you were doing and points you were making, and frustrated that those points were not having much impact.

It would have been very, very helpful at that time to have public evidence that anyone at all agreed or at least thought that particular points I was making needed an answer. I'm getting that now, I wasn't getting that then, so I find it hard to see the appeal in going back to a style that wasn't working.

My perception was that "EA Has A Lying Problem" was an inflection point where a) yeah, people started actually paying attention to the class of criticism you're doing, but b) the mechanism by which people started paying attention was by critics invoking rhetoric and courting controversy, which was approximately as bad as the problem it was trying to solve (or at least, within an order of magnitude as bad).

That was a blog post by Sarah Constantin. I am not Sarah Constantin. I wrote my own post in response and about the same things, which no one is bringing up here because no one remembers it. It got a bit of engagement at the time, but I think most of that was spillover from Sarah's post.

If you want higher-quality discourse, you can engage more publicly with what you see as the higher-quality discourse. My older posts are still available to engage with on the public internet, and were written to raise points that would still be relevant in the future.

Comment author: Raemon 03 May 2017 02:43:35AM *  4 points [-]

Things I notice you doing:

  1. Meta discussion of how to have conversations / high quality discourse / why this is important
  2. Evaluating OpenPhil and CEA as institutions, in a manner that's aiming to be evenhanded and fair
  3. Making claims about and discussing OpenPhil and CEA in ways that seem pretty indistinguishable from "punishing them and building public animosity towards them."

Because of #3, I think it's a lot harder to have credibility when doing #1 or #2. I think there is now enough history with #3 (perceived or actual, whatever your intent) that if you want to be able to do #1 or #2 you need to signal pretty hard that you're not doing #3 anymore, and specifically take actions aiming to rebuild trust. (And if you were doing #3 by accident, this includes figuring out why your process was outputting something that looked like #3.)

I have thoughts about "what to do to cause OpenPhil and CEA to change their behavior" which'll be a response to tristanm's comment.

Comment author: Benquo 05 May 2017 06:23:38AM 0 points [-]

Sometimes a just and accurate evaluation shows that someone's not as great as they said they were. I'm not trying to be evenhanded in the sense of never drawing any conclusions, and I don't see the value in that.

Comment author: Benquo 05 May 2017 06:21:23AM 0 points [-]

My intuition around whether some people are intrinsically bad (as opposed to bad at some things) is that it's an artifact of systems of dominance, like schools designed to create insecure attachment, and not a thing non-abused humans will think of on their own.

Comment author: Stuart_Armstrong 03 May 2017 05:28:53AM 2 points [-]

This suggests it's more useful to compare human groups and see how they manage the problem, rather than trying to parse the ins and outs of evolutionary psychology.

Comment author: Benquo 04 May 2017 09:01:35PM 1 point [-]

Agreed.
