If you're talking about recruiting new EAs, it sounds like you mean people that agree enough with the entire meme set that they identify as EAs. Have there been any polls on what percentage of self-identifying EAs hold which beliefs? It seems like the type of low-hanging fruit .impact could pick off. That poll would give you an idea of how common it is for EAs to believe only small portions of the meme set. I expect that people agree with the majority of the meme set before identifying as EA. I believe a lot more than most people and I only borderline identify as EA.
So you expect movement building / outreach to be a lot less successful than community building ("inreach", if you will)?
Yes, especially if the same strategies are expected to accomplish both. They're two very different tasks.
Some of this comes down to what counts as an "EA". What kind of conversion do we need to do, and how much? I also think I'll be pretty unsuccessful at getting new core EAs, but what can I get? How hard is it? These are things I'd like to know, and things I believe would be valuable to know.
I think you can con...
"A stronger community for the effective altruist movement should better encourage existing EAs to contribute more and better attract new people to consider becoming EA. By building the EA Community, we hope to indirectly improve recruitment and retention in the effective altruist movement, which in turn indirectly results in more total altruistic effort, in turn resulting in more reduced suffering and increased happiness."
I'm going to predict that .impact struggles to meet this objective.
I think you're taking a naive view of how movement building...
You're right. I think scientific thinkers can sometimes misinterpret skepticism as meaning that nothing short of peer-reviewed, well-executed experiments can be considered evidence. I think sometimes anecdotal evidence is worth taking seriously. It isn't the best kind of evidence, but it falls above 0 on the continuum.
The good news is that our higher cognitive abilities also allow us to overcome depression in many situations. In Stumbling on Happiness, Daniel Gilbert explains how useful it is that we can rationalize away bad events in our lives (such as rejection). This capability, which Gilbert refers to as our psychological immune system, explains why people are able to bounce back from negative events much more quickly than they expect to.
I think speaking in terms of probabilities also clears up a lot of epistemological confusion. "Magical" thinkers tend to believe that a lack of absolute certainty is more or less equivalent to total uncertainty (I know I did). At the same time, they'll understand that a 50% chance is not a 99% chance even though neither of them is 100% certain. It might also be helpful to point out all the things they are intuitively very certain of (that the sun will rise, that the floor will not cave in, that the carrot they put in their mouth will taste like c...
"Keeping busy" is based mainly on my personal experience and from what I've heard other people say. But in the book Flow: The Psychology of Optimal Experience (which I didn't cite because I assume you're familiar with it), it's suggested based on self-reports on subjective wellbeing that people are, on average, happier while at work than they are in their leisure time - even though they don't feel as if this is the case.
In Stumbling on Happiness, Daniel Gilbert also suggests that when making decisions about the future, we rely on our own specula...
If you start a PR campaign about AI risk that ends up bringing a lot of Luddites into the AGI debate, it might be harder, not easier, for MIRI to convince AI researchers to treat UFAI as a serious risk, because the average AI person might think the Luddites oppose AGI for all the wrong reasons. He's not a Luddite, so why should he worry about UFAI?
Fair enough. I still believe there could be benefits to gaining wider support but I agree that this is an area that will be mainly determined by the actions of elite specialized thinkers and the very power...
Oprah doesn't need everyone to like her. She wants the largest viewership possible. MIRI doesn't need everyone to support it. It wants the most supporters possible.
They don't need to appeal to everyone, but they probably should appeal to a wider audience of people than they currently do (as evidenced by there being only ~10 FAI researchers in the world) - and a different audience requires a different presentation of the ideas in order to be optimally effective.
I don't think pointing new people toward Less Wrong would be as effective as just creating a new pitch just fo...
"Nudge" says more people means more eating.
http://ajcn.nutrition.org/content/90/2/282.full.pdf
http://europepmc.org/articles/PMC2276563/reload=0;jsessionid=pcefhmQohIp1CPVmEpqG.2
This article suggests it's more complicated:
http://www.ncbi.nlm.nih.gov/pubmed/14599286
It's also believed that obesity is cont...
I think we might be using different definitions of "radical" and "moderate, socially acceptable." I'm not referring to things that massively impact society, but to things that clash with widely held values and attitudes.
3D printing doesn't strike me as an idea most people negatively associate with "radical." More importantly, even if it were, it is possible to present a "radical" or "weird" or "unfamiliar" idea in a way that makes it appear not to clash with people's values and attitudes.
That's what I s...
That sounds naive. If you ask yourself whether there is disinformation in the coverage of topic X in the mainstream media, the answer is "yes" no matter the issue. Journalists write stories under tight timetables without much time to fact-check. They are also under all sorts of other pressures that aren't about telling the truth as it happens to be.
Yes, it's ubiquitous, but some fields and issues are more affected than others, usually due to politicization. Tight timetables may apply to all stories but not all pressures do.
...From what I personally
I think the reason for the downvotes is that people on LW have generally already formulated their ethical views past the point of wanting to speculate about entirely new normative theories.
Your post probably would have received a better reaction had you framed it as a question ("What flaws can you guys find in a utilitarian theory that values the maximization of the amount of computation energy causes before dissolving into high entropy?") rather than as some great breakthrough in moral reasoning.
As for constructive feedback, I think Creutzer's r...
Upvoted.
I meant to say that if you believe a scientific claim to be legitimate, there are (or should be) implications for other parts of your worldview. When we misjudge what a belief implies, we can believe it while simultaneously rejecting something it implies. (That's what reductio ad absurdum arguments are for.)
I was under the impression that GPS was such a technology. I also don't see much room for reasonably believing in evolutionary medicine without accepting macro-evolution - but that's a bit of a stretch from my original point. After struggling to find examples, I'm going to downshift my probability of there being many around.
Good point, thanks. Skepticism of specific scientific claims is fully consistent with a "pro-science" outlook. I would maintain, though, that people who reject legitimate scientific claims are often inconsistent. Case in point: Young Earth Creationists who completely trust technology and medication that could only work if Young Earth Creationism is false.
Calling anti-vaccination people "anti-science" is a transparently bad persuasion tactic. Leave a social line of retreat.
Also, it probably isn't even true that they're anti-science. It's more likely their stances on science are inconsistent, trusting it to varying degrees in different situations depending on the political and social implications of declaring belief.
I'm not sure it would lead to better politicians so much as it would lead to politicians adapting their bullshit skills to better fit the new interview setup.
Many of the bullshit explanations politicians give are perceived as perfectly acceptable to the wider public.
MODERATOR: Should gay marriage be legal?
POLITICIAN: Nope.
MODERATOR: Why not?
POLITICIAN: It goes against the teachings of my religion. It says in passage X:YZ of the Bible that homosexuality is a sin. I refuse to go against the command of God in my time in office.
That answer is fine to many, ma...
"In general, this suggests that we should give relatively more weight to tastes and values that we expect to be more universal among civilizations across the multiverse."
This is a pretty interesting idea to me, Brian. It makes intuitive sense but when would we apply it? Can it only be used as a tiebreaker? It's difficult for me to imagine scenarios where this consideration would sway my decision.
To clarify, I also don't think EA has much potential as a social movement even if marketed properly. Specific EA beliefs are much more spreadable memes down the line IMO.