All of Gleb_Tsipursky's Comments + Replies

I have plenty of social status, and sufficient money, as a professor. I don't need any more personally. In fact, I've donated about $38K to charity over the last 2 years. My goal is EA ends. You can choose to believe me or not :-)

Never claimed to be - I have long argued for the most effective communication techniques to promote EA ends.

I don't believe I am wrong here. My rich uncle doesn't read Less Wrong. However, those who have rich uncles do read Less Wrong. If I can sway even a single individual to communicate effectively, as opposed to maximizing transparency, in swaying people to give money effectively, I'll be glad to have done so.

You seem to be suggesting that I had previously advocated being as transparent as possible. On the contrary - I have long advocated for the most effective communication techniques to achieve EA ends.

Lumifer (120)

Why should anyone believe you?

Since to you the ends justify the means, why should we accept that your ends are EA ends? You might well be lying about it and by your set of criteria that's fine.

Let's consider the hypothesis that what you want is money and social status. These ends would justify the means of setting up an "EA" charity and collecting donations from gullible people, wouldn't they? It's just what you believe to be an effective method of reaching your goals. Since things like integrity and honesty are subservient to reaching your goals, there is no problem here, is there?

Sarah's post highlights some of the essential tensions at the heart of Effective Altruism.

Do we care about "doing the most good that we can" or "being as transparent and honest as we can"? These are two different value sets. They will sometimes overlap, and in other cases will not.

And please don't say that "we do the most good that we can by being as transparent and honest as we can" or that "being as transparent and honest as we can" is best in the long term. Just don't. You're simply lying to yourself and to ever... (read more)

"I got caught lying — again — so now I'm going to tell you why lying is actually better than telling the truth."

Seriously ... just stop already.

bogus (3)
The EA movement does not really have to be "as transparent and honest as we can" - that's an unrealistic standard for any real-world organization, for reasons that have very little to do with any sort of 'lying' or 'dishonesty'. It only has to be markedly better than the bulk of the charitable-aid industry, which is not a very high bar at all. That still does not justify many of the things reported in Sarah's article (I've tried to explain my view about these in a different comment on this post). It may be true that "we're just haggling over the price" at this point, but I think I can tell when something is a bad deal.

As for the billionaire-uncle hypothetical: I would lie, because my billionaire uncle is smart enough to discount some things I say as exaggerations, and doing anything else might just be too confusing given the short timeframe. :-P If you think the EA movement is in a similar position, by all means feel free to advocate for the same choice!

"Would you do so, whether lying by omission or in any other way, in order to get much more money for AMF, given that no one else would find out about this situation?"

No, I would not. Because if I would, they would find out about the situation - not by investigating those facts, but by checking my comments on Less Wrong, where I said I would do that. In other words, if you are ever talking to a billionaire uncle in real life, they may well have read your comments, and so there will be no chance of persuading them to do what you want even if you re... (read more)

Thank you!

This is probably too complex to hash out in comments - lots of semantics issues and some strategic/tactical information that might be best to avoid discussing publicly. If you're interested in getting involved in the project and want to chat on Skype, email me at gleb [at] intentionalinsights [dot] org

bogus (2)
No worries - I trust you to get the strategic/tactical side right, and it's quite promising to see that you're aware of these issues as well. I now think that this can be a very promising project, since you're clearly evading the obvious pitfalls I was concerned about when I read the initial announcement!

We chose the issue of lies specifically because it is something a bunch of people can get behind opposing, across the political spectrum. Otherwise, we have to choose political virtues, and it's always a trade-off. So the two fundamental orientations of this project are utilitarianism and anti-lies.

FYI, we plan to tackle sloppy thinking too, as I did in this piece, but that's more complex, and it's important to start with simple messages first. Heck, if we can get people to realize the simple difference between truth and comfort, I'd be happy.

bogus (2)
Utilitarianism is nice, of course, but since you're operating in a political context here, it's important to go for a politically-mindful variety of utilitarianism - one that treats other people's existing political stances as representative of, or at least as useful evidence for, what they actually care about. Virtues like adaptation, compromise, conciliation - even humor, sometimes - can be seen as ways to operationalize this sort of utilitarianism in practice, and also to promote it to average folks who generally don't know what "utilitarianism" is actually about!

Agreed with the issues around measuring lies, and noting the concession of the point - LW gold to you for highlighting the concession.

I hear you about "rationalism in politics." The public-facing aspect of this project will be using terms like "post-lies movement" and so on. We're using "Rational Politics" as the internal and provisional name for now, while we are gathering allies and spreading word about the project rather than doing much public outreach.

bogus (0)
Good call - this will give folks a much better idea of what the project is actually about. (Of course, ordinary sloppy thinking is just as problematic as overt "lies", and I still think your project could usefully expand to encompass other facets of "wise decision-making" in a political context. There are many other "dark patterns" - particularly, black-and-white rhetoric that blatantly rejects any sort of compromise or adaptation as possible virtues - which are just as foolish and dangerous in practice (and there's basically no controversy that this is the case, at least in the abstract). Again, Donald Trump's campaign provides the best example of this as of late, but we've seen similar rhetoric from the "left" in the past - and some people would even say that the current campaign was no exception!)

I'm talking about prioritizing the good of the country as a whole, not necessarily distant strangers - although in my personal value stance, that would be nice. Like I said, it's an EA project :-)

Decius (0)
A political group composed only of people who prioritize the good of the country over their own subtribe or self will lack the support needed to flourish. It's not that people disagree or don't know about the object level facts. It's that people are actively fighting to gain relative advantage over others. And that is a cultural problem, not a political one.

At this point, I'm finished engaging with you, since you're clearly not making statements based on reality. Good luck with growing more rational!

I'm going with the official definition of post-truth here, and am comfortable standing by it.

Science (0)
Note the implicit inference in your linked definition of 'post-truth': that the circumstances it describes are more common now than in the past, when this is almost certainly not true.
The_Jaded_One (0)
I didn't realise the term "post truth" had a precise, official meaning? Anyway I would still say there is a bit of an issue measuring lies, but I definitely concede the point that Donald is very, very far from a truth teller.

Nice, didn't know that - thanks for pointing it out! Updated slightly on credibility of NYTimes on this basis.

I see the situation right now as liberals being closer to rational thinking than conservatives, though that hasn't been the case in the past. I don't know how this document would read if conservatives were the ones closer to rational thinking.

Regarding the Muslim issue, you might want to check out the radio interview I linked in the document. It shows very clearly how I got a conservative talk show host to update toward being nicer to Muslims.

If you're interested in participating in this project, email me at gleb [at] intentionalinsights [dot] org

Agree that the attempts to rid academia of conservatives are bad.

Can you be comfortable saying that Trump lies more often, and more intensely, than prominent liberal politicians; usually does not back away from lies when called out; slams the credibility of those who call him out on lies; focuses on appealing to emotions over facts; tends to avoid providing evidence for assertions (such as that Russia was not behind the hack); etc.? This is what is meant by post-truth in the Oxford Dictionary definition of the term.

The_Jaded_One (2)
The problem I have is a measurement problem. How are we measuring lies? If I say that whales are fish and then I say all birds can fly, and you say the Holocaust didn't happen, that's 2 for me and 1 for you, so I'm the worse liar?
Science (1)
I'm not sure about The_Jaded_One; he seems to be willing to assert false things under peer pressure. However, that statement is in fact false - where by "false" I mean it doesn't map to external observable reality. Specifically, I mean that Trump's statements tend to map to reality better than those of liberal politicians.

Yup, agreed that it may well not be wise for those who have racist beliefs to be open about them. The same applies to the global warming stuff.

This is why I say this is a project informed by EA values - it comes from the perspective that voting is like donating thousands of dollars to charity and that voters care about the public good. It's not meant to target those who don't care about the public good - just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and so... (read more)

Science (0)
In particular, you're not interested in reaching the voters who don't want, say, Muslim migrants raping and occasionally murdering girls in their neighborhoods. Good to know.
The_Jaded_One (0)
I feel like these requirements are kind of contradictory. What if a lot of people are selfish, racist (in a broad sense), and want to procrastinate on global warming? Are we saying that "rational" political debate benefits them, or that it doesn't? If it doesn't benefit them, then why should they be on board? Are we saying that you have to be a globalist effective altruist who puts the needs of distant strangers above those of their own family to benefit from rational politics? Very few people have values like that! IIRC even Peter Singer struggled with that! Do you have to care about the public good of the whole world, or is it OK if you only care about the public good of your tribe/country/race?

Yup, agreed that it may well not be worthwhile for voters who vote for reasons not oriented toward the most social good to vote rationally. This is why I say this is a project informed by EA values - it comes from the perspective that voting is like donating thousands of dollars to charity. For those who are purely self-interested, it's really not rational to vote.

So to be clear, it's not meant to target those who don't care about the public good - just those mistaken about what is the best way to achieve the public good. For instance, plenty of ... (read more)
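
To make the "voting is like donating thousands of dollars to charity" comparison concrete, here is a rough back-of-the-envelope version of the underlying expected-value argument. The decomposition is standard; the specific numbers are illustrative assumptions, not figures taken from the comment.

```latex
% Expected social value of one vote:
%   p_decisive = probability that your vote changes the election outcome
%   B          = difference in total social value between the two outcomes
\[
  \mathbb{E}[\text{social value of a vote}] \;\approx\; p_{\text{decisive}} \cdot B
\]
% Illustrative plug-in: p_decisive ~ 1 in 10 million, B ~ $100 billion
\[
  10^{-7} \times \$10^{11} \;=\; \$10{,}000
\]
```

Under assumptions of roughly this magnitude, a vote cast for the better outcome is worth thousands of dollars of expected social value - which is the sense in which the comment treats voting like a sizable charitable donation, and why the argument only applies to voters who actually care about the public good.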

I am comfortable with saying that my post is anti-post-truth politics. I think most LWers would agree that Trump relies more on post-truth tactics than other politicians. Note that I also called out Democrats for doing so as well.

Decius (0)
I would only agree that every major political party uses post-truth rhetorical methods and it is sad that each of them does. If you want to propose a unit of measurement for truthiness I'd consider comparing them.
Science (0)
Are you comfortable providing actual evidence for the claim that "Trump relies more on post-truth tactics than other politicians" or are you trying to argue for an epistemology of truth based on whatever the consensus by "experts" is?
The_Jaded_One (1)
Personally I think that Trump abuses epistemology in different ways than the left/PC establishment, rather than more. For example, how much weight do we put on very successful liberal attempts to rid social-science academia of conservatives? Is it worse to lie about global warming, or to purge universities of all conservative academics, so that every paper that comes out of academia has a liberal bias? What is the most common political affiliation of people who work for, for example, the IPCC? Probably quite liberal.

Um, Breitbart News is hardly a credible site to use to attack PolitiFact. Besides, that citation also included the Washington Post and The New York Times - do you call them fake news as well?

Science (2)
Yes, as I mentioned in my other comment. Now care to explain why you cited them in an article supposedly devoted to opposing fake news? Or is your definition of "fake news" news that contradicts something written in an "official true news source", as opposed to something that contradicts reality?

This is described in the "How Is This Project Different From Others Trying To Do Somewhat Similar Things?" and "Do You Have Any Evidence That This Will Work?" sections in the document linked above - here's the link for convenience.

I hear you about the interesting articles.

This piece was not aimed at folks who want interesting articles, but at the smaller proportion of folks who are concerned about the election outcome and want to do something to help out.

I'm very comfortable with people downvoting my posts, if they reach the minority of folks receptive to them.

I was invited on a radio show to talk further about this piece: https://www.youtube.com/watch?v=RNXw6ifqcNg

A number of other venues republished this piece as well, showing general interest in making politics less irrational:

Salon

Fact-checking doesn’t matter: Human biases control whether or not we’re going to believe politicians

http://www.salon.com/2016/10/26/fact-checking-doesnt-matter-human-biases-control-whether-or-not-were-going-to-believe-politicians_partner/

The Dallas Morning News in Dallas, Texas

It's not what Trump and Clinton say, but how they say i... (read more)

Thanks for your good words about my insights on EA marketing, really appreciate it!

Regarding having InIn in the video, the goal is not to establish any sort of equivalence. In fact, it would be hard to compare the other organizations with each other as well. For instance, GiveWell has a huge budget and vastly more staff than any of the other organizations mentioned in the video. The goal is to give people information on various venues where they could get different types of information. For example, ACE is there for people who care about animal rights, and... (read more)

Raemon (8)
Seconding resuf's comments: both that this is a pretty good, professional-looking video, and that it's another instance of you seeming to follow the exact letter of the request when people ask you to stop or do things differently, without understanding the underlying reasons why people are upset. And that this is especially important if your goal is to be a public-facing outreach organization.

I like those other examples for labeling others, though - might be a nice general strategy to employ.

I agree that it does produce disassociation, but I don't think, for me, it's about disassociating from emotions. It's a disassociation from an identity label. It helps keep my identity small in a way that speaks to my System 1 well.

Weird works for me, and I actually associate positive value with weirdness. But of course your mileage may vary. Any term that viscerally signals distance from an identity label to one's System 1 will do, as Gram_Stone pointed out.

ChristianKl (1)
I can find plenty of people who report that chakra healing worked for them. There are self-reports for a lot of things working for people. That doesn't mean those things are necessarily good. In this case it likely works for you in the sense that it produces a disassociation. Disassociating from emotions is, however, generally not an optimal strategy for dealing with them; mainstream psychology is generally against it. There are advantages to becoming a psychopath, but doing disassociative techniques that move in that direction is still not something I would recommend.

Agreed, to me it also makes no sense to do cash transfers to people with above average income. I see basic income as mainly about a social safety net.

Here's my piece in Salon about updating my beliefs about basic income. The goal of the piece was to demonstrate the rationality technique of updating beliefs in the hard mode of politics. Another goal was to promote GiveDirectly, a highly effective charity, and its basic income experiment. Since it had over 1K shares in less than 24 hours and the comment section is surprisingly decent, I'm cautiously optimistic about the outcome.

delton137 (2)
Decent article but pretty basic. Still, a glimmer of reason in the dark pit of Salon. Didn't know Y Combinator was doing a pilot. They don't mention how many people will be in the pilot in the announcement, but it will be interesting to see. One thing I never understood is why it makes sense to do cash transfers to people who are already wealthy - or even above average income. A social safety net consisting solely of cash income (while admittedly more difficult to manage) seems to make more sense. I guess the issue is with the practical implementation details of managing the system and making sure everyone who needs to be enrolled is.

I'm curious about why this got downvoted, if anyone would like to explain.

Applying probabilistic thinking to fears about terrorism in this piece for the 16th largest newspaper in the US, reaching over 320K with its printed version and over 5 million hits on its website per month. The title was chosen by the newspaper, and somewhat occludes the points. The article is written from a liberal perspective to play into the newspaper's general bent, and its main point was to convey the benefits of applying probabilistic thinking to evaluating political reality.

[Edit] Updated somewhat based on conversation with James Miller here

Consider reposting this on the EA Forum, might get more hits that way.

James_Miller (2)
I tried. I guess it wasn't accepted.

Speed Giving Games involve having people make a decision between two charities. In SGGs, participants who come to the table are given a 1-minute introduction to the concept of effective giving and to the two charities involved, and are then invited to decide which of the two charities to support. Each vote directs a dollar, sponsored by an outside party (usually The Life You Can Save), to the charity the participant chooses. For this SGG, we chose GiveDirectly as the effective charity and the Mid-Ohio Food Bank as a local, less effective charity.

Will keep in mind about the photo, thanks for the feedback.

A videotaped virtual meeting on effective ways of marketing EA to a broad audience.

Yeah, totally hear you about the file drawer effect, which is why I found two separate citations besides the Center for Policing Equity, which I cited in the piece - this one, and this one. One is a poll, and the other is a government statistical analysis on traffic stops that includes race information. Neither of these is something to which the file drawer effect (publication bias) would apply.

James_Miller (1)
OK, good point.

An article based on rationality-informed strategies of probabilistic thinking and de-anchoring to deal with police racial profiling. Note that the data on racial profiling is corrected for the higher rate of crimes committed by black people. This is a very by-the-numbers piece.

James_Miller (6)
Good article overall, but if the people at the Center for Policing Equity had found no racial bias in police shootings, do you think they would have published the results? And if they did publish such a study, would Salon have let you publish an article uncritically citing its conclusions? In short: shouldn't file drawer effects cause us to be wary of this part of your article?

Eugine strikes again - this is really creating a great deal of noise and reversing any indication of salience for posts. Previously he mainly did only one downvote; now he's doing ten at a time, if the -20 karma that appeared in the last hour for the two comments I made is anything to judge by. He also seems to be targeting not only posts he dislikes but specific people he dislikes, such as Elo and me. Makes it really hard to judge the quality of my posts, as who knows who actually downvotes them. Frustrating.

Also good to keep in mind this article by Danny Kahneman: "Why Moving to California Won’t Make You Happy".

BTW, sad to see this post downvoted, pretty good post.

Elo (100)

It's Eugine.

This video discusses the most effective science-based strategies for communicating AI Risk to a broad audience. It focuses on issues such as minimizing the inference gap, using emotional engagement, avoiding pattern-matching to sci-fi narratives and instead pattern-matching to unemployment narratives and other topics that the audience would find realistic. It's unlisted, so you can watch and share it with others only if you have a link. Feel free to pass it on to those who you think might benefit from it.

Did some rationality-informed commenting for my university television about guns and racism.

An article on Psychology Today on the map and territory and the fundamental attribution error, and another one on the false consensus effect.

Agreed on the benefits of trying things, such as links and an additional Open section. That will give us additional data to go on.

For those interested in longevity research, on the Intentional Insights videocast, we interviewed the project leader and outreach coordinator for the Major Mouse Testing Project, which focuses on how we can advance the science on longevity.

We also published a blog post on strategies to resist impulsive temptations, which I think some here might find interesting.

Nice ideas! I think you highlighted well the fundamental problem: the lack of social rewards for writing content for LW, combined with the strong criticism one faces for doing so.

Regarding changing things, I think it makes sense to work with people like Scott who have a lot of credibility, and figure out what would work for them.

However, it also seems that LW itself has a certain brand, and attracts a sizable community. I would like to see a version of the voting system you described implemented here, with people who have more karma having votes that weigh more. I'd also l... (read more)
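
As a side note on the karma-weighted voting idea mentioned above, here is a minimal sketch of what such a scheme could look like. The logarithmic weighting function and the example numbers are illustrative assumptions; the comment doesn't specify how a vote's weight should scale with karma.

```python
import math

def vote_weight(karma: int) -> float:
    """Illustrative weighting: grows slowly (logarithmically) with karma,
    so high-karma users count for more, but not unboundedly more."""
    return 1.0 + math.log10(1 + max(karma, 0))

def weighted_score(votes):
    """votes: iterable of (karma, direction) pairs, where direction is +1 or -1."""
    return sum(direction * vote_weight(karma) for karma, direction in votes)

# Example: one high-karma upvote (weight ~5.1) slightly outweighs
# two low-karma downvotes (combined weight ~4.5).
print(weighted_score([(10, -1), (25, -1), (12000, +1)]))  # ~0.6
```

Any concave weighting (square root, capped linear, and so on) would serve the same purpose: letting established contributors' votes count for more without letting a handful of accounts dominate the score.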

Interesting, didn't think of it that way. The purpose of the threads is to organize in one place the things we do to advance rationality. I can see where it might pattern-match to bragging. So what would be an alternative way of organizing in one venue the things done to advance rationality outreach?

Sithlord_Bayesian (5)
It doesn't help your case that you're the main one posting in these threads. Just post in the bragging thread that's already posted monthly. Thanks.

Perhaps this is something best for CFAR staff to determine rather than yourself - they have certain standards for scholarships.

Yeah, one of the big failure modes is that people think that attending the workshop will magically result in internalizing all the benefits of CFAR materials. It's vital to keep working on them afterward, as I described in my post. For instance, in about an hour I will attend a weekly Google hangout with CFAR staff following up on some of the materials from the workshop. I'm not sure how many others from the workshop will be there, we'll see. Besides, as Kaj_Sotala noted here, you can get your money back as well.
