Presumably the reason why people are roleplaying everything in the first place is that you'll be seen badly if you stop roleplaying, and being seen badly hurts if you don't have enough emotional resilience. Here's my best attempt at how to break people out of this.
Man, most people are roleplaying everything. It's not fixable by just telling them what concrete stuff they're doing wrong, because they're still running on the algorithm of roleplaying things. Which is why rationality, an attempted account of how to not do stuff wrong, ended in a social club, because it didn't directly address that people are roleplaying everything anyways.
Nice, but the second paper is less on track, as the idea is more "people, society etc. coerce you to do things you don't want" than "long vs short term preferences".
Not something you'll see in papers, but the point of willpower is to limit the amount of time you spend doing stuff you don't want to do. So, your community has some morality that isn't convenient for you? That's why it costs willpower to follow that morality. Your job is tiring? Maybe deep down you don't believe it's serving your interests.
If you have a false belief about what you want, e.g. "I actually want to keep this prestigious position because yay prestige, even though I get tired all the time at work", well, that'...
If you want to spend time predictably spinning in circles in your analysis because you can't bring yourself to believe someone is lying, be my guest.
As for the specific authors: the individual reports written seem fine in themselves, and as for the geoengineering one, I know a guy who did a PhD under the author and said he's generally trustworthy (I recall Vaniver was in his PhD program too). Like what I'm saying is the specific reports, e.g. Bickel's report on geoengineering, seem fine, but Lomborg's synthesis of them is shit, and you're obscuring things with your niceness-and-good-faith approach.
b/c of doing the analysis and then not ranking shit in order.
> Further down the list, we find a very controversial project, that is geo-engineering to reduce the intensity of incoming solar radiation to counteract global warming. According to a background paper, such investments would give a return rate of about 1,000. In spite of this enormous return rate, this is given moderate priority, apparently because it is deemed rather uncertain if this will actually work as intended.
> The lowest ranking accepted project, project no. 16, is called "B...
I prefer to reserve "literally lying" for when people intentionally say things that are demonstrably false. It's useful to have words for that kind of thing. As long as things are plausibly defensible, it seems better to say that he made "misleading statements", or something like that.
Actually, I'm not even sure that this was a particularly egregious error. Given that they never say they're going to rank things by the explicit cost-effectiveness estimates, not doing that seems quite reasonable to me. See for example g...
Responding to your Dehaene book review and IFS thoughts as well as this:
On Dehaene: I read the 2018 version of Dehaene's Consciousness and the Brain a while ago and would recommend it as a good intro to cognitive neurosci; your summary looks correct.
On meditation: it's been said before, but >90% of people reading this are going to be high on "having models of how their brain works", and low on "having actually sat down and processed their emotions through meditation or IFS or whatevs". Double especially true for all the dep...
For the love of the spark, fucking don't. At least separate yourself from the social ladder of EA and learn the real version of rationality first.
Or: ignore that advice, but at least don't do the actual MCB implementation worldwide that costs a billion a year, talk with the scientists who worked on it and figure out the way that MCB could be done most efficiently. And then get things to the point of having a written plan, like, "hey government, here's exactly how you can do MCB if you want, now you can execute this plan as written if/...
I'll give an answer that considers the details of the Copenhagen Consensus Center (CCC) and geoengineering, rather than being primarily a priori. I've spent a day and a half digging around, starting from zero prior knowledge. In retrospect, I spent too much time reading Lomborg and the CCC, so I mention him disproportionately relative to other sources.
Here's what I notice:
1. Lomborg and his CCC seem very cost-benefit focused in their analysis. A few others are too, but see point 4. Basically, it's easy to compare climate in...
Thank you for looking into this! <3
I do think you might have put too much energy into thinking about the CCC though, haha. Maybe I should apologise for having mentioned them without also mentioning that I knew they'd taken money from dirty energy and that I never got good epistemic vibes from them.
When I saw that stuff, I just read that as one of the many things we'd expect to see if MCB was legit, like, there would be a think-tank funded by dirty energy singing its praises, and even if that thinktank were earnest, I would still expect anyone who actua...
I'd say: stop wanting MCB to work out so much. Don't just hope that it's gonna get approved, mate. Convincing people of stuff is fricking impossible. I think you're seriously overestimating how likely this is.
> Instead we just have a bunch of moderate liberal democracies who are institutionally incapable of doing anything significant.
Awesome burn! :D
> a group of nations can do it without needing very much political energy.
I mean, if your plan is "convince people or governments to do a thing" rather than "do this thing myself", you're gonna have a bad time. It's probably within the scope of an individual NGO or maybe a hella determined individual to pull this sort of thing off, no? I guess you'd have to try, and see if anyone decid...
Edit: I ended up spending a bit over a day looking into geoengineering and the Copenhagen Consensus Center after writing this, so go look at my answer for a more informed take that includes what I learned from doing that. My two long-form comments below are not exactly wrong, just more poorly informed than that answer.
---
Awesome! I'd wanted to know what the actually useful geoengineering stuff was.
I do buy the claim that public support for any sort of emission control will evaporate the moment geoengineering is realised as a tolerable alternative... Ma...
> because such discussion would make it harder to morally pressure people into reducing carbon emissions. I don’t know how to see this as anything other than an adversarial action against reasonable discourse
ffs, because incentives. You're playing tragedy of the commons, and your best move is to make there be more shared resources people can just take?
Basically, don't let your thinking on what is useful affect your thinking on what's likely.
> It's a pretty clear way of endorsing something to call it "honest reporting".
Sure, if you just call it "honest reporting". But that was not the full phrase used. The full phrase used was "honest reporting of unconsciously biased reasoning".
I would not call trimming that down to "honest reporting" a case of honest reporting! ;-)
If I claim, "Joe says X, and I think he honestly believes that, though his reasoning is likely unconsciously biased here", then that does not at all seem to me like an endorsement of X, and certainly not a clear endorsement.
It also seems like there's an argument for weighting urgency in planning that could lead to 'distorted' timelines while being a rational response to uncertainty.
It's important to do the "what are all the possible outcomes and what are the probabilities of each" calculation before you start thinking about weightings of how bad/good various outcomes are.
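To make the ordering concrete, here's a minimal sketch in Python with entirely made-up outcomes and numbers, just to illustrate the discipline I mean: first enumerate the outcomes and estimate a probability for each, and only then, as a separate second step, attach good/bad weightings and combine them into an expected value.

```python
# Minimal sketch with made-up numbers: estimate the outcome distribution first,
# then apply value weightings as a separate step.

# Step 1: enumerate outcomes and estimate their probabilities (should sum to 1).
outcome_probs = {
    "works as intended": 0.5,
    "partially works": 0.3,
    "fails harmlessly": 0.15,
    "backfires": 0.05,
}
assert abs(sum(outcome_probs.values()) - 1.0) < 1e-9

# Step 2: only now weight how good/bad each outcome is (arbitrary utility units).
outcome_values = {
    "works as intended": 100,
    "partially works": 40,
    "fails harmlessly": -5,
    "backfires": -200,
}

expected_value = sum(p * outcome_values[o] for o, p in outcome_probs.items())
print(expected_value)  # 51.25 with these made-up numbers
```

The point of keeping the two steps separate is that your probability estimates shouldn't shift just because some outcomes are scarier than others; the weightings only come in afterwards.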
> I'm wary of using words like "lie" or "scam" to mean "honest reporting of unconsciously biased reasoning"
When someone is systematically trying to convince you of a thing, do not be like, "nice honest report", but be like, "let me think for myself whether that is correct".
> but be like, "let me think for myself whether that is correct".
From my perspective, describing something as "honest reporting of unconsciously biased reasoning" seems much more like an invitation for me to think for myself whether it's correct than calling it a "lie" or a "scam".
Calling your opponent's message a lie and a scam actually gets my defenses up that you're the one trying to bamboozle me, since you're using such emotionally charged language.
Maybe others react to these words differently though.
Yeah, 10/10 agreement on this. Like it'd be great if you could "just" donate to some AI risk org and get the promised altruistic benefits, but if you actually care about "stop all the fucking suffering I can", then you should want to believe AI risk research is a scam if it is a scam.
At which point you go oh fuck, I don't have a good plan to save the world anymore. But not having a better plan shouldn't change your beliefs on whether AI risk research is effective.
> Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term.
As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than one where they aren't. Look at what happened to LW.
> Moreover, the cost is not the same for everyone
It's fairly common for this cost to go down with practice. Moreover, it seems like there's an incentive gradient at work here; the only way to gauge how costly it is for someone t...
> communities where conversations are abrasive attract a lower caliber of person than one where they aren't. Look at what happened to LW.
To whatever extent this is accurate and not just a correlation-causation conversion, this very dynamic is the kind of thing that LW exists (existed) to correct. To yield to it is essentially to give up the entire game.
What it looks like to me is that LW and its associated "institutions" and subcultures are in the process of dissolving and being absorbed into various parts of general society. You are basically ...
I appreciate your offer to talk things out together! To the extent that I'm feeling bad and would feel better after talking things out, I'm inclined to say that my current feelings are serving a purpose, i.e. to encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed, though that wouldn't have been at all true of the old version of myself. This algorithm is a bit new to me, and I'm not sure if it'll stick.
Overall, I'm not aware that I've caused the balance of the dis...
Your comment was perfectly fine, and you don't need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there's a strong chance I'll be without internet for several days and likely won't be able to further engage with this topic.
Duncan's original wording here was fine. The phrase "telling the humans I know that they're dumb or wrong or sick or confused" is meant in the sense of "socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect".
To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that's a good thing to do for a number of reasons, and hav...
Your principal mistake lies here:
"socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect
Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term. Moreover, the cost is not the same for everyone: for some people "diplomatic" communication comes much more naturally than for others; as I indicate in another comment, this often has to do with their status, which, the higher it is, the less necessary dire...
> assess why the community has not yet shunned them
Hi! I believe I'm the only person to try shunning them, which happened on Facebook a month ago (since Zack named himself in the comments, see here, and here). The effort more or less blew up in my face; it got a few people to publicly say they were going to exclude me, or try to get others to exclude me, from future community events, and it was also a large (but not the only) factor in getting me to step down from a leadership position in a project I'm spending about half of my time on. To be fair, there are...
This all sounds right, but the reasoning behind using the wording of "bad faith" is explained in the second bullet point of this comment.
Tl;dr the module your brain has for detecting things that feel like "bad faith" is good at detecting when someone is acting in ways that cause bad consequences in expectation but don't feel like "bad faith" to the other person on the inside. If people could learn to correct a subset of these actions by learning, say, common social skills, treating those actions like they're taken in "bad...
nod. This does seem like it should be a continuous thing, rather than System 1 solely figuring things out in some cases and System 2 figuring it out alone in others.
Good observation.
Amusingly, one possible explanation is that the people who gave Gleb pushback on here were operating on bad-faith-detecting intuitions--this is supported by the quick reaction time. I'd say that those intuitions were good ones, if they led to those folks giving Gleb pushback on a quick timescale, and I'd also say that those intuitions shaped healthy norms to the extent that they nudged us towards establishing a quick reality-grounded social feedback loop.
But the people who did give Gleb pushback more frequently framed things in terms othe...
I'm very glad that you asked this! I think we can come up with some decent heuristics:
I think the burden of evidence is on the side disagreeing with the intuitions behind this extremely common defensive response
Note also that most groups treat their intuitions about whether or not someone is acting in bad faith as evidence worth taking seriously, and that we're remarkable in how rarely we tend to allow our bad-faith-detecting intuitions to lead us to reach the positive conclusion that someone is acting in bad faith. Note also that we have a serious problem with not being able to effectively deal with Gleb-like people, sexual predators, e...
For more explanation on how incentive gradients interact with and allow the creation of mental modules that can systematically mislead people without intent to mislead, see False Faces.
Well, that's embarrassing for me. You're entirely right; it does become visible again when I log out, and I hadn't even considered that as a possibility. I guess I'll amend the paragraph of my above comment that incorrectly stated that the thread had been hidden on the EA Forum; at least I didn't accuse anyone of anything in that part of my reply. I do still stand by my criticisms, though knowing what I do now, I would say that it wasn't necessary for me to post this here, given that my original comment and the original post on the EA Forum are still publicly visible.
Some troubling relevant updates on EA Funds from the past few hours:
> ...We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should contin
GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach's scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program.
This seems particularly horrifying; if everyone already knows that you're incentivized to play up the effectiveness of the charities you're recommendin...
It seems to me that GiveWell has already acknowledged perfectly well that VillageReach is not a top effective charity. It also seems to me that there are lots of reasons one might take GiveWell's recommendations seriously, and that getting "particularly horrified" about their decision not to research exactly how much impact their wrong choice didn't have is a rather poor way to conduct any sort of inquiry on the accuracy of organizations' decisions.
Ok, thank you, this helps a lot and I feel better after reading this, and if I do start crying in a minute it'll be because you're being very nice and not because I'm sad. So, um, thanks. :)
Second edit: Dagon is very kind and I feel ok; for posterity, my original comment was basically a link to the last paragraph of this comment, which talked about helping depressed EAs as some sort of silly hypothetical cause area.
Edit: since someone wants to emphasize how much they would "enjoy watching [my] evaluation contortions" of EA ideas, I elect to delete what I've written here.
I'm not crying.
There's actually a noteworthy passage on how prediction markets could fail in one of Dominic's other recent blog posts, which I've been wanting to get a second opinion on for a while:
> ...NB. Something to ponder: a) hedge funds were betting heavily on the basis of private polling [for Brexit] and b) I know at least two ‘quant’ funds had accurate data (they had said throughout the last fortnight their data showed it between 50-50 and 52-48 for Leave and their last polls were just a point off), and therefore c) they, and others in a similar position, had a strong ince
The idea that there's much to be gained by crafting institutions, organizations, and teams which can train and direct people better seems like it could flower into an EA cause, if someone wanted it to. From reading the first post in the series, I think that that's a core part of what Dominic is getting at:
> We could significantly improve the decisions of the most powerful 100 people in the UK or the world for less than a million dollars (~£10^6) and a decade-long project on a scale of just ~£10^7 could have dramatic effects.
Regarding tone specifically, you have two strong options: one would be to send strong "I am playing" signals, such as by dropping the points which men's rights people might make, and, say, parodying feminist points. Another would be to keep the tone as serious as it currently is, but qualify things more; in some other contexts, qualifying your arguments sounds low-status, but in discussions of contentious topics on a public forum, it can nudge participants towards cooperative truth-seeking mode.
Amusingly, I emphasized the points of your comment t...
Fair enough! I am readily willing to believe your statement that that was your intent. It wasn't possible to tell from the comment itself, since the metric regarding sexual harassment report handling is much more serious than the other metrics.
(This used to be a gentle comment which tried to very indirectly defend feminism while treating James_Miller kindly, but I've taken it down for my own health)
Let's find out how contentious a few claims about status are.
Lowering your status can be simultaneously cooperative and self-beneficial. [pollid:1186]
Conditional on status games being zero-sum in terms of status, it’s possible/common for the people participating in or affected by a status game to end up much happier or much worse off, on average, than they were before the status game. [pollid:1187]
Instinctive trust of high status people regularly obstructs epistemic cleanliness outside of the EA and rationalist communities. [pollid:1188]
Most of my friends can immediately smell when a writer using a truth-oriented approach to politics has a strong hidden agenda, and will respond much differently than they would to truth-oriented writers with weaker agendas. Some of them would even say that, conditional on you having an agenda, it's dishonest to note that you believe that you're using a truth-oriented approach; in this case, claiming that you're using a truth-oriented approach reads as an attempt to hide the fact that you have an agenda. This holds regardless of whether your argument is cor...
It helps that you shared the dialogue. I predict that Jane doesn't System-2-believe that Trump is trying to legalize rape; she's just offering the other conversation participants a chance to connect over how much they don't like Trump. This may sound dishonest to rationalists, but normal people don't frown upon this behavior as often, so I can't tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do i...
I think that Merlin and Alicorn should be praised for Merlin's good behavior. :)
I was happy with the Berkeley event overall.
Next year, I suspect that it would be easier for someone to talk to the guardian of a misbehaving child if there was a person specifically tasked to do so. This could be one of the main event organizers, or perhaps someone directly designated by them. Diffusion of responsibility is a strong force.
I've noticed that sometimes, my System 2 starts falsely believing there are fewer buckets when I'm being socially confronted about a crony belief I hold, and that my System 2 will snap back to believing that there are more buckets once the confrontation is over. I'd normally expect my System 1 to make this flavor of error, but whenever my brain has done this sort of thing during the past few years, it's actually been my gut that has told me that I'm engaging in motivated reasoning.
"Epistemic status" metadata plays two roles: first, it can be used to suggest to a reader how seriously they should consider a set of ideas. Second, though, it can have an effect on signalling games, as you suggest. Those who lack social confidence can find it harder to contribute to discussions, and having the ability to qualify statements with tags like "epistemic status: not confident" makes it easier for them to contribute without feeling like they're trying to be the center of attention.
"Epistemic effort" metadata fulfill...
It was good of you to write this post out of a sense of civic virtue, Anna. I'd like to share a few thoughts on the incentives of potential content creators.
Most humans, and most of us, appreciate being associated with prestigious groups, and receiving praise. However, when people speak about LessWrong being dead, or LessWrong having been taken over by new folks, or about LessWrong simply not being fun, this socially implies that the people saying these things hold LessWrong posters in low esteem. You could reasonably expect that replacing these sorts of r...
Gleb, given the recent criticisms of your work on the EA forum, it would be better for your mental health, and less wasteful of our time, if you stopped posting this sort of thing here. Please do take care of yourself, but don't expect the average rationalist to be more sympathetic to you than the average EA.
What the hell? It's just a more specific version of the point in Inadequate Equilibria, and don't you want to know if you can do something better?