Never claimed to be - I have long argued for the most effective communication techniques to promote EA ends.
I don't believe I am wrong here. My rich uncle doesn't read Less Wrong. However, people who have rich uncles do read Less Wrong. If I can sway even a single individual to communicate effectively, rather than to maximize transparency, when persuading people to give money effectively, I'll be glad to have done so.
You seem to be suggesting that I had previously advocated being as transparent as possible. On the contrary - I have long advocated for the most effective communication techniques to achieve EA ends.
Why should anyone believe you?
Since to you the ends justify the means, why should we accept that your ends are EA ends? You might well be lying about it, and by your set of criteria that's fine.
Let's consider the hypothesis that what you want is money and social status. These ends would justify the means of setting up an "EA" charity and collecting donations from gullible people, wouldn't they? It's just what you believe to be an effective method of reaching your goals. Since things like integrity and honesty are subservient to reaching your goals, there is no problem here, is there?
Sarah's post highlights some of the essential tensions at the heart of Effective Altruism.
Do we care about "doing the most good that we can" or "being as transparent and honest as we can"? These are two different value sets. They will sometimes overlap, and in other cases will not.
And please don't say that "we do the most good that we can by being as transparent and honest as we can" or that "being as transparent and honest as we can" is best in the long term. Just don't. You're simply lying to yourself and to ever...
"I got caught lying — again — so now I'm going to tell you why lying is actually better than telling the truth."
Seriously ... just stop already.
"Would you do so, whether lying by omission or in any other way, in order to get much more money for AMF, given that no one else would find out about this situation?"
No, I would not. Because if I did, they would find out about the situation - not by investigating those facts, but by checking my comments on Less Wrong, where I said I would do that. In other words, if you are ever talking to a billionaire uncle in real life, they may well have read your comments, and so there will be no chance of persuading them to do what you want even if you re...
Thank you!
This is probably too complex to hash out in comments - lots of semantics issues and some strategic/tactical information that might be best to avoid discussing publicly. If you're interested in getting involved in the project and want to chat on Skype, email me at gleb [at] intentionalinsights [dot] org
We chose the issue of lies specifically because it is something a bunch of people can get behind opposing, across the political spectrum. Otherwise, we have to choose political virtues, and it's always a trade-off. So the two fundamental orientations of this project are utilitarianism and anti-lies.
FYI, we plan to tackle sloppy thinking too, as I did in this piece, but that's more complex, and it's important to start with simple messages first. Heck, if we can get people to realize the simple difference between truth and comfort, I'd be happy.
Agreed about the issues around measuring lies, and I note the concession of the point - LW gold to you for making it.
I hear you about "rationalism in politics." The public-facing aspect of this project will be using terms like "post-lies movement" and so on. We're using "Rational Politics" as the internal and provisional name for now, while we are gathering allies and spreading word about the project rather than doing much public outreach.
I'm talking about prioritizing the good of the country as a whole, not necessarily distant strangers - although in my personal value stance, that would be nice. Like I said, it's an EA project :-)
At this point, I'm finished engaging with you, since you're clearly not making statements based on reality. Good luck with growing more rational!
I'm going with the official definition of post-truth here, and am comfortable standing by it.
Nice, didn't know that - thanks for pointing it out! Updated slightly on credibility of NYTimes on this basis.
I see the situation right now as liberals being closer to rational thinking than conservatives, but that hasn't been the case in the past. I don't know how this document would read if it were conservatives who were closer to rational thinking.
Regarding the Muslim issue, you might want to check out the radio interview I linked in the document. It shows very clearly how I got a conservative talk show host to update toward being nicer to Muslims.
If you're interested in participating in this project, email me at gleb [at] intentionalinsights [dot] org
Agree that the attempts to rid academia of conservatives are bad.
Can you be comfortable saying that Trump lies more often, and more intensely, than prominent liberal politicians; usually does not back away from lies when called out; slams the credibility of those who call him out on lies; focuses on appealing to emotions over facts; tends to avoid providing evidence for assertions (such as that Russia was not behind the hack); etc.? This is what is meant by post-truth in the Oxford Dictionaries definition of the term.
Yup, agreed that it may well not be wise for those who have racist beliefs to be open about them. The same applies to the global warming stuff.
This is why I say this is a project informed by EA values - it comes from the perspective that voting is like donating thousands of dollars to charity and that voters care about the public good. It's not meant to target those who don't care about the public good - just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and so...
Yup, agreed that it may well not be worthwhile for voters who vote for reasons not oriented toward the most social good to vote rationally. This is why I say this is a project informed by EA values - it comes from the perspective that voting is like donating thousands of dollars to charity. For those who are purely self-interested, it's really not rational to vote.
So to be clear, it's not meant to target those who don't care about the public good - just those mistaken about what is the best way to achieve the public good. For instance, plenty of ...
I am comfortable with saying that my post is anti-post-truth politics. I think most LWers would agree that Trump relies more on post-truth tactics than other politicians. Note that I called out Democrats for doing so as well.
Um, Breitbart News is hardly a credible site to use to attack PolitiFact. Besides, those citations also included The Washington Post and The New York Times - do you call them fake news as well?
This is described in the "How Is This Project Different From Others Trying To Do Somewhat Similar Things?" and "Do You Have Any Evidence That This Will Work?" sections in the document linked above - here's the link for convenience.
I hear you about the interesting articles.
This piece was not aimed at folks who want interesting articles, but at the smaller proportion of folks who are concerned about the election outcome and want to do something to help out.
I'm very comfortable with people downvoting my posts, as long as the posts reach the minority of folks receptive to them.
I was invited on a radio show to talk further about this piece: https://www.youtube.com/watch?v=RNXw6ifqcNg
A number of other venues republished this piece as well, showing general interest in making politics less irrational:
Salon
Fact-checking doesn’t matter: Human biases control whether or not we’re going to believe politicians
The Dallas Morning News in Dallas, Texas
It's not what Trump and Clinton say, but how they say i...
Thanks for your good words about my insights on EA marketing, really appreciate it!
Regarding having InIn in the video, the goal is not to establish any sort of equivalence. In fact, it would be hard to compare the other organizations with each other as well. For instance, GiveWell has a huge budget and vastly more staff than any of the other organizations mentioned in the video. The goal is to point people to various venues where they can get different types of information. For example, ACE is there for people who care about animal rights, and...
I like those other examples for labeling others, though - might be a nice general strategy to employ.
I agree that it does produce dissociation, but I don't think, for me, it's about dissociating from emotions. It's dissociation from an identity label. It helps keep my identity small in a way that speaks well to my System 1.
Weird works for me, and I actually associate positive value with weirdness. But of course your mileage may vary. Any term that viscerally signals distance from an identity label to one's System 1 will do, as Gram_Stone pointed out.
Agreed, to me it also makes no sense to do cash transfers to people with above average income. I see basic income as mainly about a social safety net.
Here's my piece in Salon about updating my beliefs about basic income. The goal of the piece was to demonstrate the rationality technique of updating beliefs in the hard mode of politics. Another goal was to promote GiveDirectly, a highly effective charity, and its basic income experiment. Since it had over 1K shares in less than 24 hours and the comment section is surprisingly decent, I'm cautiously optimistic about the outcome.
I'm curious about why this got downvoted, if anyone would like to explain.
Applying probabilistic thinking to fears about terrorism in this piece for the 16th largest newspaper in the US, reaching over 320K with its printed version and over 5 million hits on its website per month. The title was chosen by the newspaper, and somewhat occludes the points. The article is written from a liberal perspective to play into the newspaper's general bent, and its main point was to convey the benefits of applying probabilistic thinking to evaluating political reality.
Edit: Updated somewhat based on a conversation with James Miller here.
Consider reposting this on the EA Forum; it might get more hits that way.
Speed Giving Games involve having people make a decision between two charities. In SGGs, participants who come to the table are given a 1-minute introduction to the concept of effective giving and to the two charities involved, and are then invited to decide which of the two charities to support. Each participant's vote directs a dollar, sponsored by an outside party (usually The Life You Can Save), to the charity they choose. For this SGG, we chose GiveDirectly as the effective charity and the Mid-Ohio Food Bank as a local, less effective charity.
Will keep in mind about the photo, thanks for the feedback.
Yeah, I totally hear you about the file drawer effect, which is why I found two separate citations besides the Center for Policing Equity one I cited in the piece - this one and this one. One is a poll, and the other is a government statistical analysis of traffic stops that includes race information. Neither is something to which the file drawer effect (publication bias) would apply.
An article based on rationality-informed strategies of probabilistic thinking and de-anchoring to deal with police racial profiling. Note that the data on racial profiling is corrected for the higher rate of crimes committed by black people. This is a very by-the-numbers piece.
Eugine strikes again - this is creating a great deal of noise and reversing any indication of which posts are salient. Previously he mainly gave a single downvote; now he's doing ten at a time, if the -20 karma that appeared in the last hour on my two comments is anything to judge by. He also seems to be targeting not only posts he dislikes but specific people he dislikes, such as Elo and me. It makes it really hard to judge the quality of my posts, since who knows who is actually downvoting them. Frustrating.
Also good to keep in mind this article by Danny Kahneman: "Why Moving to California Won’t Make You Happy".
BTW, sad to see this post downvoted; it's a pretty good post.
It's Eugine.
This video discusses the most effective science-based strategies for communicating AI Risk to a broad audience. It focuses on issues such as minimizing the inference gap, using emotional engagement, avoiding pattern-matching to sci-fi narratives and instead pattern-matching to unemployment narratives and other topics that the audience would find realistic. It's unlisted, so you can watch and share it with others only if you have a link. Feel free to pass it on to those who you think might benefit from it.
An article in Psychology Today on the map and territory and the fundamental attribution error, and another one on the false consensus effect.
Agreed on the benefits of trying things, such as links and an additional Open section. That will give us additional data to go on.
For those interested in longevity research, on the Intentional Insights videocast we interviewed the project leader and outreach coordinator for the Major Mouse Testing Project, which focuses on how we can advance the science of longevity.
We also published a blog on strategies to resist impulsive temptations, which I think some here might find interesting.
Nice ideas! I think you highlighted well the fundamental problem: the lack of social rewards for writing content for LW, combined with the strong criticism one gets for doing so.
Regarding changing things, I think it makes sense to work with people like Scott who have a lot of credibility, and figure out what would work for them.
However, it also seems that LW itself has a certain brand, and attracts a sizable community. I would like to see a version of the voting system you described implemented here, where people with more karma have votes that carry more weight. I'd also l...
Interesting, didn't think of it that way. The purpose for the threads is to organize in one place the things we do to advance rationality. I can see where it might pattern-match to bragging. So what would be another alternative to organizing in one venue the things done to advance rationality outreach?
Perhaps this is something best determined by CFAR staff rather than by you - they have certain standards for scholarships.
Yeah, one of the big failure modes is that people think that attending the workshop will magically result in internalizing all the benefits of CFAR materials. It's vital to keep working on them afterward, as I described in my post. For instance, in about an hour I will attend a weekly Google hangout with CFAR staff following up on some of the materials from the workshop. I'm not sure how many others from the workshop will be there; we'll see. Besides, as Kaj_Sotala noted here, you can get your money back as well.
I have plenty of social status, and sufficient money, as a professor. I don't need any more personally. In fact, I've donated about $38K to charity over the last 2 years. My goal is EA ends. You can choose to believe me or not :-)