"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism"
The lead article on everydayfeminism.com on March 25:
3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism
The scenario is always the same: I say we should abolish prisons, police, and the American settler state — someone tells me I’m irrational. I say we need decolonization of the land — someone tells me I’m not being realistic.... When those who are the loudest, the most disruptive — the ones who want to destroy America and all of the oppression it has brought into the world — are being silenced even by others in social justice groups, that is unacceptable.
(The link from "decolonization" is to "Decolonization is not a metaphor", to make it clear s/he means actually giving the land back to the Native Americans.)
I regularly see people accused of setting up a straw man when they describe how social justice activists act. This article shows that the bias of some SJWs against reason is impossible to strawman. The author argues at length that rationality is bad, and that justice arguments shouldn't be rational or be defended rationally. Ze is, or was, confused about what "rationality" means, but clearly now means it to include reason-based argumentation.
This isn't just some wacko's blog; it was chosen as the headline article for the website. I had to click around to a few other articles to make sure it wasn't a parody site.
But it isn't just a sign of how irrational the social justice movement is—it has clues to how it got that way.
Market Failure: Sugar-free Tums
In theory, the free market and democracy both work because suppliers are incentivized to provide products and services that people want. Economists consider it a perverse situation when the market does not provide what people want, and look for explanations such as government regulation.
The funny thing is that sometimes the market doesn't work, and I look and look for the reason why, and all I can come up with is, People are stupid.
I've written before about the market's apparent failure to provide cup holders in cars. I saw another example this week in the latest Wired magazine, a piece on page 42 about a start-up called Thinx that makes reusable women's underwear that absorbs menstrual fluid (all of it), so women don't have to slip out in the middle of meetings to change tampons. The piece's angle was that venture capitalists rejected the idea because they were mostly men and so didn't "get it".
I'd guess they "got it". It isn't a complicated idea. The thing is, there are already 3 giant companies battling for that market. The first thing a VC would say when you tell him you're going to make something better than a tampon is, "Why haven't Playtex, Kotex, or Tampax already done that?"
So Thinx ran a Kickstarter and has now sold hundreds of thousands of pairs of absorbent underwear for about $30 each.
The failure in this case is not that VCs are sexist, but that Playtex and the others never developed this product, even though there evidently is demand for it and no evident reason it couldn't have been produced 20 years ago. The belief that the market doesn't fail then almost led to a further failure: because that belief implied the product could not be profitable, it almost went undeveloped in the present as well.
I just now came across an even clearer case of market failure: Sugar-free Tums.
The AI That Pretends To Be Human
The hard part about containing an AI is restricting its output. The AI can lie, manipulate, and trick. Some speculate that it might be able to do far worse, inventing infohazards like hypnosis or brain hacking.
A major goal of the control problem is preventing AIs from doing that, and ensuring that their output is safe and useful.
A while ago I wrote about an approach to do this. The idea was to require the AI to use as little computing power as it needs to perform a task. This prevents the AI from over-optimizing: the AI won't use the full power of superintelligence unless it really needs it.
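In sketch form (my own rendering, not code from the original post; the names here are hypothetical), the idea is a task score minus a price on compute used:

```python
# A hypothetical compute-penalized objective: solve the task, but pay
# for every unit of computing power used, so over-optimizing doesn't pay.
LAMBDA = 1.0  # arbitrary price per unit of compute

def penalized_score(task_score, compute_used):
    return task_score - LAMBDA * compute_used

# An AI maximizing this uses superintelligent effort only when the extra
# task score outweighs the compute it costs.
```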
The above method isn't perfect: a superintelligent AI may still have super abilities even on a limited compute budget, and it will still try to do all those bad things mentioned above unless it can find a better way that uses less computing power.
We don't really want to minimize the AI's compute budget; we just want it to act more human-like. When we ask it to solve a problem, we want it to try to solve the problem as a human would. All of the problems arise from the AI behaving totally differently from a human, not necessarily from it using more computing power.
Is there any way to make an AI act like a human? Or is that as impossible as just solving FAI?
Stuart Armstrong posted an idea on how to achieve that. It involves an AI that tries to produce output indistinguishable from what a human would produce, conditional on that output also solving the problem. However, his idea also involves whole brain emulation and a complicated method for the AI to predict the future and avoid changing it too much.
We don't need that. The idea for the AI to try to be indistinguishable from a human is actually a really interesting control idea. It significantly limits the AI's ability to Do Sneaky Things, since humans don't do those things.
Here is my idea on how to achieve this. It's the same as my limited computing power idea mentioned earlier, but applied to "appearing human" as a goal, instead of minimizing computing power.
We start with two AIs. The first AI tries to appear human. The second AI administers a sort of Turing test, and tries to tell the first AI's output apart from an actual human's.
The human-mimicking AI's first goal is to produce a correct solution. It predicts how likely its solution is to be accepted, and tries to produce a solution with a >50% chance of working. This is a satisficing goal: it doesn't optimize without end to create the best solution possible, it just has to meet some minimum threshold, then stop.
Its second goal is to fool the second AI. It will do everything in its power to make the solution appear as though it was produced by a human.
The second AI tries to predict how likely it is that its input was produced by a human. It is also conditioned on the input being a solution to the problem, so the fact that the first AI solved a problem that was really hard for humans does not count against it.
This prevents the first AI from doing evil things with its output. If it tries to insert complicated infohazards or subagents into its output stream, it will be easily detected as an AI. Instead it needs to mimic humans as closely as possible.
Note that the first AI only cares about the outcome of this test. The test doesn't need to actually be performed; it can just be a threat that the AI is conditioned on. (The ability to condition AIs on acting as if they were in counterfactual worlds is a critical piece of this and deserves its own writeup somewhere. It's an important tool in the FAI toolkit, and a relatively simple method that can work even on current AIs.)
Another point is that the second AI should have more computing power than the first. That way the first AI can't get a slight advantage over it and hack or manipulate it somehow. Instead it has to focus its efforts on being indistinguishable from a human, even to an agent smarter than itself.
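Here is a minimal sketch of how the two goals combine, assuming hypothetical stand-ins for the solver's acceptance estimate and the second AI's judgment. Nothing here is a real AI; the point is just the satisficing-then-mimic structure:

```python
# A minimal sketch of the two-AI protocol. All function names are
# hypothetical placeholders, not real components.

def solver_objective(p_accept, p_human):
    """Score a candidate solution for the first AI.

    p_accept: the solver's own estimate that the solution works.
    p_human:  the discriminator's estimate that a human produced it.
    """
    # Goal 1 is satisficing: any solution clearing the 50% bar is "good
    # enough", so correctness contributes nothing beyond the threshold.
    if p_accept <= 0.5:
        return float("-inf")  # reject candidates below the threshold
    # Goal 2 is the only thing optimized: appear human to the judge.
    return p_human

def choose_output(candidates, estimate_acceptance, discriminator):
    """Pick the acceptable candidate that best fools the (stronger) second AI."""
    return max(
        candidates,
        key=lambda s: solver_objective(estimate_acceptance(s), discriminator(s)),
    )
```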
This method also relies on AI boxing. The first AI needs to be contained in a box so that its only output goes to a text terminal or to the second AI. However, it doesn't require any of the more complicated machinery of other reduced-impact methods.
The AIs also aren't agents, in the sense that they don't need to maximize any long-term utility function. The first AI only cares about answering the question it has been asked, and the second AI only does prediction.
An example use of this is to create an oracle AI that answers your questions. The human judges somehow whether the answer is correct, or at least good. The AI tries to produce answers which the human judge will probably like, but which also are indistinguishable from answers humans would produce.
Such an oracle would be tremendously useful. The human could ask the AI to produce new AI and FAI papers, which would immensely speed up our progress towards true FAI.
Similar techniques could be applied to very different domains, for example controlling a robot: the AI would try to produce outputs that look like a human is controlling it.
This would require some modifications to the above method, so that the AI doesn't try to use the robot to hack its own brain or anything. This could be done by conditioning it on a counterfactual world where its outputs go to the second AI rather than a real robot.
Another point: the AI doesn't need to mimic average humans given average resources. It could be conditioned on the human having had tons of time to come up with an answer, e.g. producing an answer a human would have come up with given a year, or controlling the robot the way a human would given tons of time to practice, or in a very slowed-down simulation.
I would like to note a parallel with a method in current AI research: Generative Adversarial Networks. A GAN consists of two networks, one of which tries to produce output that fools the second, while the other tries to predict which samples were produced by the first and which came from the actual distribution.
This is quite similar to my proposal. GANs have been used successfully to create images that look like real images, which is a hard problem in AI research. In the future, GANs might be used to produce text that is indistinguishable from human writing (the current method for doing that, predicting the next character a human would type, is rather crude).
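To make the adversarial structure concrete, here is a minimal GAN sketch in PyTorch, fitting a toy 1-D Gaussian. The library choice, architecture, sizes, and learning rates are all my own illustration choices, not anything from the original post:

```python
# A minimal GAN on a toy 1-D Gaussian, to make the adversarial setup concrete.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # samples from the "true" distribution
    fake = G(torch.randn(64, 8))            # the generator's attempted mimicry

    # Discriminator: learn to tell real samples from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce samples the discriminator labels as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```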
Consequences of the Non-Existence of Perfect Theoretical Rationality
Caveats: Dependency (Assumes truth of the arguments against perfect theoretical rationality made in the previous post), Controversial Definition (perfect rationality as utility maximisation, see previous thread)
This article is a follow-up to The Number Choosing Game: Against the existence of perfect theoretical rationality. It discusses the consequences of the Number Choosing Game, in which, roughly, you name any number (by its decimal representation) and gain that much utility. It takes place in a theoretical world with no real-world limitations on how large a number you can name and no costs for naming it. We can also assume that this game takes place outside of regular time, so there is no opportunity cost. Needless to say, this was all rather controversial.
Update: Originally I was trying to separate the consequences from the arguments, but it seems that this blog post slipped away from it.
What does this actually mean for the real world?
This was one of the most asked questions in the previous thread. I will answer this, but first I want to explain why I was reluctant to answer. I agree that it is often good to tell people what the real world consequences are as this isn't always obvious. Someone may miss out on realising how important an idea is if this isn't explained to them. However, I wanted to fight against the idea that people should always be spoonfed the consequences of every argument. A rational agent should have some capacity to think for themselves - maybe I tell you that the consequences are X, but they are actually Y. I also see a great deal of value from discussing the truth of ideas separate from the practical consequences. Ideally, everyone would be blind to the practical consequences when they were first discussing the truth of an idea as it would lead to a reduction in motivated reasoning.
The consequences of this idea are in one sense quite modest. If perfect rationality doesn't exist in at least some circumstances (edited), then if you want to assume it, you have to prove that the relevant class of problems admits a perfectly rational agent. For example, if there are a finite number of options, each with a measurable, finite utility, we can assume that a perfectly rational agent exists. I'm sure we can prove that such agents exist for a variety of situations involving infinite options as well. However, there will also be some weird theoretical situations where it doesn't apply. This may seem irrelevant to some people, but if you are trying to understand strange theoretical situations, knowing that perfect rationality doesn't exist for some of these situations will allow you to provide an answer when someone hands you something unusual and says, "Solve this!". Now, I know my definition of rationality is controversial, but even if you don't accept it, it is still important to realise that the question, "What would a utility maximiser do?" doesn't always have an answer, as sometimes there is no maximum utility. Assuming perfectly rational agents as defined by utility maximisation is incredibly common in game theory and economics. This is helpful for a lot of situations, but after you've used it for a few years in situations where it works, you tend to assume it will work everywhere.
Is missing out on utility bad?
One of the commentators on the original thread can be paraphrased as arguing, "Well, perhaps the agent only wanted a million utility". This misunderstands the nature of utility. Utility is a measure of things that you want, so it is something you want more of by definition. It may be that there's nothing else you want, so you can't actually receive any more utility, but you always want more utility (edited).
Here's one way around this problem. The original problem assumed that you were trying to optimise for your own utility, but let's pretend now that you are an altruistic agent and that when you name the number, that much utility is created in the world by alleviating some suffering. We can assume an infinite universe so that there is infinite suffering to alleviate, but that isn't strictly necessary: no matter how large a number you name, it might turn out that there is more suffering than that in the universe (while still being a finite amount). So let's suppose you name a million, million, million, million, million, million (by its decimal representation, of course). The gamemaker then takes you to a planet where the inhabitants are suffering the most brutal torture imaginable, inflicted by a dictator using the planet's advanced neuroscience knowledge to maximise their suffering. The gamemaker tells you that if you had added an extra million on the end, these people would have had their suffering alleviated. If rationality is winning, does a planet full of tortured people count as winning? Sure, the rules of the game prevent you from completely winning, but nothing in the game stopped you from saving those people. The agent that also saved those people is a more effective agent and hence a more rational agent than you are. Further, if you accept that there is no difference between acts and omissions, then there is no moral difference between torturing those people yourself and failing to say the higher number. (Actually, I don't really believe this last point. I think this is more a flaw with arguing that acts and omissions are the same in the specific case of an unbounded set of options. I wonder if anyone has made this argument before; I wouldn't be surprised if someone has and there's a philosophy paper in it.)
But this is an unwinnable scenario, so won't a perfectly rational agent just pick a number arbitrarily? Sure, you don't get the most utility, but why does this matter?
If we say that the only requirement for an agent to deserve the title of "perfectly rational" is to pick an arbitrary stopping point, then there's no reason why we can't declare the agent that arbitrarily stops at 999 to be "perfectly rational". If I gave an agent the option of picking a utility of 999 or a utility of one million, the agent who picked 999 would be quite irrational. But suddenly, when given even more options, the agent who only gets 999 utility counts as rational. It actually goes further than this: there's no objective reason the agent can't just stop at 1. The alternative is to declare any agent who picks at least a "reasonably large" number to be rational. The problem is that there is no objective definition of "reasonably large". This would make our definition of "perfectly rational" subjective, which is precisely what the idea of perfect rationality was created to avoid. It gets worse still. Let's pretend that before the agent plays the game it loses a million utility (and that the first million utility it gains from the game goes towards reversing these effects; time travel is possible in this universe). Our "perfectly rational" agent who stops at 1 then ends up a million (minus one) utility in the red, i.e. suffering a horrible fate, which it could easily have chosen to avoid. Is it really inconceivable that the agent who ends up at positive one million utility instead of negative one million could be more rational?
What if this were a real-life situation? Would you really go "meh" and accept the torture because you think a rational agent can pick an arbitrary number and still be perfectly rational? (edit)
The argument that you can't choose infinity, so you can't win anyway, is just a distraction. Suppose perfect rationality didn't exist for a particular scenario: what would this imply about that scenario? It would imply that there was no way of conclusively winning, because, if there were, an agent following that strategy would be perfectly rational for the scenario. Yet somehow people try to twist this around and conclude that it disproves my argument. You can't disprove an argument by proving exactly what it predicts. (edit)
What other consequences are there?
The fact that there is no perfectly rational agent for these situations means that any agent will seem to act rather strangely. Let's suppose that a particular agent who plays this game will always stop at a certain number X, say a googolplex. If we try to sell them the right to play the game for more than that, they would refuse, as they wouldn't make money on it, despite the fact that they could make money if they chose a higher number.
Where this gets interesting is that the agent might have special code to buy the right to play the game for any price P, and then choose the number X+P. It seems that sometimes it is rational to have path-dependent decisions despite the fact that the amount paid doesn't affect the utility gained from choosing a particular number.
Further, with this code, you could buy the right to play the game back off the agent (before it picks the number) for X+P+1. You could then sell it back to the agent for X+P+one billion and repeatedly buy and sell the right to play the game back to the agent. (If the agent knows that you are going to offer to buy the game off it, then it could just simulate the sale by increasing the number it asks for, but it has no reason not to simulate the sale and also accept a higher second offer)
Further, if the agent was running code to choose the number 2X instead, we would end up with a situation where it might be rational for the agent to pay you money to charge it extra for the right to play the game.
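A toy ledger makes this concrete, assuming the hypothetical path-dependent policy described above ("name X plus whatever I have paid in total"):

```python
# A toy ledger for the buy-and-resell exploit, assuming a hypothetical
# agent whose rule is "name X + total amount paid for the game".
X = 10**100  # the agent's arbitrary stopping number (a googol here)

def agent_number(total_paid):
    # Path-dependent policy: recoup whatever was paid, on top of X.
    return X + total_paid

agent_paid = X + 1           # the agent buys the right to play for X + 1
agent_paid -= X + 2          # you buy it back for X + 2 (agent nets +1 so far)
agent_paid += X + 10**9      # you resell it to the agent for X + 10**9
# When it finally plays, the agent names X + agent_paid and receives that
# much utility, so its profit is always X no matter how many swaps happened:
print(agent_number(agent_paid) - agent_paid)  # prints X
```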
Another property is that you can sell the right to play the game to any number of agents, add up all their numbers, and add your profit on top and ask for that much utility.
It seems like the choices for these games obey rather unusual rules. If these choices are allowed to count as "perfectly rational", as per the people who disagree with me and maintain that perfect rationality exists, then at the very least perfect rationality behaves very differently from what we might expect.
At the end of the day, I suppose whether you agree with my terminology regarding rationality or not, we can see that there are specific situations where it seems reasonable to act in a rather strange manner.
Variations on the Sleeping Beauty
This post won't directly address the Sleeping Beauty problem, so you may want to read the above link first to understand what the Sleeping Beauty problem is.
Half*-Sleeping Beauty Problem
The asterisk is because it is only very similar to half of the sleeping beauty problem, not exactly half.
A coin is flipped. If it is heads, you are woken up with 50% chance and interrogated about the probability of the coin having come up heads. The other 50% of the time you are killed. If it is tails you are woken up and similarly interrogated. Given that you are being interrogated, what is the probability that the coin came up heads? And have you received any new information?
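A quick Monte Carlo sketch (my own check, not part of the original puzzle statement) shows what straightforward conditioning gives here:

```python
# Monte Carlo check of the Half* problem: condition on surviving to be
# interrogated, and count how often the coin was heads.
import random

heads_count = interrogations = 0
for _ in range(1_000_000):
    heads = random.random() < 0.5
    if heads and random.random() < 0.5:
        continue  # heads plus bad luck: killed, never interrogated
    interrogations += 1
    heads_count += heads

print(heads_count / interrogations)  # ~ 1/3: being alive is evidence of tails
```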
Double-Half*-Sleeping Beauty problem
A coin is flipped. If it is heads, a coin is flipped again: if this second coin is heads, you are woken up and interrogated on Monday; if it is tails, you are woken up and interrogated on Tuesday. If the first coin is tails, then you are woken up on Monday and Tuesday and interrogated both days (having no memory of your previous interrogation). If you are being interrogated, what is the chance the first coin came up heads? And have you received any new information?
Double-Half*-Sleeping Beauty problem with Known Day Variation
Sleeping Couples Problem
A man and his wife have identical values and have lived together for so many years that they have reached Aumann agreement on all of their beliefs, including core premises, so that they always make the same decision in every situation.
A coin is flipped. If it is heads, one of the couple is randomly woken up and interrogated about the probability of the coin having come up heads. The other is killed. If it is tails, both are woken up separately and similarly interrogated. If you are being interrogated, what is the probability that the coin came up heads? And have you received any new information?
Sleeping Clones Problem
A coin is flipped. If it is heads, you are woken up and interrogated about the probability of the coin having come up heads. If it is tails, then you are cloned and both copies are interrogated separately without knowing whether they are the clone or not. If you are being interrogated, what is the probability that the coin came up heads? And have you received any new information?
My expectation is that the Double-Half* Sleeping Beauty and Sleeping Clones problems will be controversial, but I am optimistic that there will be a consensus on the other three.
Solutions (or at least what I believe to be the solutions) will be forthcoming soon.
The Number Choosing Game: Against the existence of perfect theoretical rationality
In order to ensure that this post delivers what it promises, I have added the following content warnings:
Content Notes:
Pure Hypothetical Situation: The claim that perfect theoretical rationality doesn't exist is restricted to a purely hypothetical situation. No claim is being made that this applies to the real world. If you are only interested in how things apply to the real world, then you may be disappointed to find out that this is an exercise left to the reader.
Technicality Only Post: This post argues that perfectly theoretical rationality doesn't exist due to a technicality. If you were hoping for this post to deliver more, well, you'll probably be disappointed.
Contentious Definition: This post (roughly) defines perfect rationality as the ability to maximise utility. This is based on Wikipedia, which defines a rational agent as one that "always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions".
We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number; naming infinity is not allowed.
Clearly, the agent that names x+1 is more rational than the agent that names x (and behaves the same in every other situation). However, there does not exist a completely rational agent, because there does not exist a number that is higher than every other number. Instead, the agent who picks 1 is less rational than the agent who picks 2 who is less rational than the agent who picks 3 and so on until infinity. There exists an infinite series of increasingly rational agents, but no agent who is perfectly rational within this scenario.
Furthermore, this hypothetical doesn't take place in our universe, but in a hypothetical universe where we are all celestial beings with the ability to choose any number, however large, without any additional time or effort, no matter how long it would take a human to say that number. Since this statement doesn't appear to have been clear enough (judging from the comments), we are explicitly considering a theoretical scenario and no claims are being made about how this might or might not carry over to the real world. In other words, I am claiming that the existence of perfect rationality does not follow purely from the laws of logic. If you are going to be difficult and argue that this isn't possible and that even hypothetical beings can only communicate a finite amount of information, we can imagine that there is a device that provides you with utility the longer you speak, and that the utility it provides is exactly equal to the utility you lose by having to go to the effort of speaking, so that overall you are indifferent to the required speaking time.
In the comments, MattG suggested that the issue was that this problem assumed unbounded utility. That's not quite the problem. Instead, we can imagine that you can name any number less than 100, but not 100 itself. Further, as above, saying a long number either doesn't cost you utility or you are compensated for it. Regardless of whether you name 99 or 99.9 or 99.9999999, you are still choosing a suboptimal decision. But if you never stop speaking, you don't receive any utility at all.
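In symbols (my formalisation of the two versions, not the original post's):

```latex
% Unbounded game: utility u(x) = x over finite numbers x has no maximiser,
% since every choice is dominated by a larger one:
\forall x \;\exists x' : u(x') > u(x), \qquad \text{e.g. } x' = x + 1.
% Bounded variant: over admissible choices x < 100 with u(x) = x,
\sup_{x < 100} u(x) = 100, \qquad \text{but } u(x) < 100 \text{ for every admissible } x,
% so the supremum is never attained and, again, no optimal action exists.
```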
I'll admit that in our universe there is a perfectly rational option which balances speaking time against the utility you gain, given that we only have a finite lifetime and that you want to avoid dying in the middle of speaking the number, which would result in no utility gained. However, it is still notable that a perfectly rational being cannot exist within this hypothetical universe. How this result applies to our universe isn't exactly clear, but that's the challenge I'll set for the comments: are there any realistic scenarios where the non-existence of perfect rationality has important practical applications?
Furthermore, there isn't an objective line between rational and irrational. You or I might consider someone who chose the number 2 to be stupid. Why not at least go for a million or a billion? But, such a person could have easily gained a billion, billion, billion utility. No matter how high a number they choose, they could have always gained much, much more without any difference in effort.
I'll finish by providing some examples of other games. I'll call the first game the Exploding Exponential Coin Game. We can imagine a game where you can choose to flip a coin any number of times. Initially you have 100 utility. Every time it comes up heads, your utility triples, but if it comes up tails, you lose all your utility. Furthermore, let's assume that this agent isn't going to raise the Pascal's Mugging objection. We can see that the agent's expected utility will increase the more times they flip the coin, but if they commit to flipping it unlimited times, they can't possibly gain any utility. Just as before, they have to pick a finite number of times to flip the coin, but again there is no objective justification for stopping at any particular point.
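A short sketch of the arithmetic, assuming the game exactly as stated (triple on heads, lose everything on tails):

```python
# Expected value vs. actual payoff in the Exploding Exponential Coin Game.
# After n flips: payoff is 100 * 3**n with probability (1/2)**n, else 0.
for n in [0, 1, 5, 10, 50]:
    expected = 100 * (3 / 2) ** n   # grows without bound as n grows
    p_nonzero = 0.5 ** n            # chance of walking away with anything
    print(n, expected, p_nonzero)
# Committing to unlimited flips drives the win probability to 0, so, as in
# the number choosing game, some finite n must be chosen, yet every n is
# dominated by n + 1 in expectation.
```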
Another example I'll call the Unlimited Swap game. At the start, one agent has an item worth 1 utility and another has an item worth 2 utility. At each step, the agent with the item worth 1 utility can choose to accept the situation and end the game, or can swap items with the other player. If they choose to swap, then the player who now has the 1-utility item has an opportunity to make the same choice. In this game, waiting forever is actually an option. If your opponents all have finite patience, then this is the best option. However, there is a chance that your opponent has infinite patience too, in which case you'll both miss out on the 1 utility as you wait forever. I suspect that an agent could do well by having a chance of waiting forever, but also a chance of stopping after a high finite number of steps. Increasing this finite number will always make you do better, but again, there is no maximum waiting time.
(This seems like such an obvious result that I imagine there's extensive discussion of it somewhere in the game theory literature. If anyone has a good paper, a pointer would be appreciated.)
Link to part 2: Consequences of the Non-Existence of Perfect Theoretical Rationality
"The Difference Between Medicine and Poison is Dosage" Shirts and Bags
I'm happy to share that, after multiple rounds of feedback, here is the set of rationality-themed t-shirts and bags with the slogan that got the most support from the community: "The Difference Between Medicine and Poison is Dosage."

These t-shirts and bags are an effort to update on the feedback and optimization suggestions we received on the original set of rationality shirts. This includes using graphic images in the words, working with more professional designers, etc. We went back to the drawing board and tried to design a new shirt around a slogan that was quite popular. We ran the design by the Less Wrong FB group a couple of times (1, 2), and this is the final product.
As you can see, there are two styles available, one with "Dosage" empty and with the image of the dropper, and one with "Dosage" filled in and without the dropper. Various colors, sizes, and materials are available for each shirt/bag.
The first style is available for purchase on CafePress here.
The second style is available for purchase on CafePress here.
We look forward to hearing about your experience and thoughts about these t-shirts and bags, and the impact they make in carrying good rationality-themed memes into the world! We will be making more shirts and running them by the community as we did with these. All revenue will go into promoting rational thinking strategies to a broad audience.
Effective Giving vs. Effective Altruism
This is mainly of interest to Effective Altruists, and was cross-posted on the EA forum
Why separate effective giving from Effective Altruism? Isn't the whole point of EA about effective giving, meaning giving to the most impactful charities to advance human flourishing? Sure, effective giving is the point of EA, but there might be a lot of benefit to drawing a distinct line between the movement of Effective Altruism itself, and the ideas of effective giving that it promotes. That's something that Kerry Vaughn, the Executive Director of Effective Altruism Outreach, and I, the President of Intentional Insights, discussed in our recent phone call, after having an online discussion on this forum. To be clear, Kerry did not explicitly endorse the work of Intentional Insights, and is not in a position to do so - this just reflects my recollection of our conversations.
Why draw that line? Because rapid movement growth carries quite a bit of danger: it can attract people who might dilute the EA movement and impair the building of good infrastructure down the road (see this video and paper). This exemplifies the dangers of promoting Effective Altruism indiscriminately and simply trying to grow the movement as fast as possible.
Thus, what we can orient toward is using modern marketing strategies to spread the ideas of effective altruism - what Kerry and I labeled effective giving in our conversations - without necessarily trying to spread the movement. We can spread the notion of giving not simply from the heart, but also using the head. We can talk about fighting the drowning child problem. We can talk about researching charities and using GiveWell, The Life You Can Save, and other evidence-based charity evaluators to guide one's giving. We can build excitement about giving well, and encourage people to think of themselves as Superdonors or Mega-Lifesavers. We can use effective marketing strategies such as speaking to people's emotions and using stories, and contributing to meta-charities such as EA Outreach and others that do such work. That's why we at Intentional Insights focus on spending our resources on spreading the message of effective giving, as we believe that getting ten more people to give effectively is more impactful than us giving of our resources to effective charities ourselves. At the same time, Kerry and I spoke of avoiding heavily promoting effective altruism as a movement or using emotionally engaging narratives to associate positive feelings with it - instead, just associating positive feelings with effective giving, and leaving bread crumbs for people who want to explore Effective Altruism through brief mentions and links.
Let's get specific and concrete. Here's an example of what I mean: an article in The Huffington Post that encourages people to give effectively and only briefly mentions Effective Altruism. Doing so balances the benefits of using marketing tactics to channel money to effective charities against the dangers of rapid movement growth that come with heavily promoting EA itself.
Check out the sharing buttons on it, and you'll see it was shared quite widely, over 1K times. As you'll see from this Facebook comment on my personal page, it helped convince someone to donate to effective charities. Furthermore, that comment is from someone who leads a large secular group in Houston, and he thus has an impact on a number of other people. Since people rarely leave actual comments, and far from all readers are fans of my Facebook page, we can estimate that many more made similar decisions but chose not to comment.
Another example. Here is a link to the outcome of an Intentional Insights collaboration with The Life You Can Save to spread effective giving to the reason-oriented community through Giving Games. In a Giving Game, participants in a workshop learn about a few pre-selected charities, think about and discuss their relative merits, and choose which charity will get a real donation, $10 per participant. We have launched a pilot program with the Secular Student Alliance to bring Giving Games to over 300 secular student groups throughout the world, with The Life You Can Save dedicating $10,000 to the pilot program, and easily capable of raising more if it works well. As you'll see from the link, it briefly mentions Effective Altruism, and focuses mainly on education in effective giving itself.
Articles such as the one in The Huffington Post, shared widely on social media, attest to the popularity of effective giving as a notion separate from Effective Altruism itself. As you saw, it is immediately impactful in getting some people to give to effective charities, and it highly likely gets others to think in this direction. I have talked with a number of leaders of local EA groups, for example Alfredo Parra in Munich, who are excited about the possibility of translating and adapting this article to their local context, and all of you are free to do so as well. I encourage you to cite me/Intentional Insights in doing so, but if you can't, that's fine as well.
That gets to another point that Kerry and I discussed, namely the benefits of having some EAs who specialize in promoting ideas about effective giving, and more broadly integrating promotion of effective giving as something that EAs do in general. Some EAs can do the most good by working hard and devoting 10% of their money to charity. Some can do the most good by thinking hard about the big issues. Some can do the most good by growing the internal capacity and infrastructures of the movement, and getting worthy people on board. Others can do the most good by getting non-EAs to channel their money toward effective charities through effective marketing and persuasion tactics.
Intentional Insights orients toward providing the kind of content that can be easily adapted and shared by these EAs widely. It's a work in progress, to create and improve this content. We are also working with other EA meta-charities such as The Life You Can Save and others. Another area to work on is not only content creation, but content optimization and testing - I talked with Konrad Seifert from Geneva about testing our content at a university center there. Moreover, we should develop the infrastructure to integrate spreading effective giving into EA activities, something EA Outreach may potentially collaborate with us on, depending on further discussions.
So these are some initial thoughts, which I wanted to bring to the community for discussion. What do you think of this line of work, and what are your ideas for optimization? Thanks!
**EDIT** Edited to clarify that Kerry Vaughn did not explicitly endorse the work of Intentional Insights.
LINK: Most of EvoPsych is pseudoscience
The evolutionary origin of human behavior is doubtless a valuable scientific field, but the way the research is currently being conducted raises several concerns.
By request from readers, I've added some excerpts:
EvoPsych’s most common failing is its fallacious methodology, often consisting of not even acknowledging the need to describe, much less pass, any adequate falsification test.
(1) This is most commonly the case in its frequent failure to even confirm that a behavior widely exists cross-culturally [...]
(2) EvoPsych also rarely finds any genetic correlation to a behavior [...]
(3) More problematic still is the rarity of ever even acknowledging the need to rule out accidental (byproduct) explanations of a behavior [...]
(4) And one of the most common confounding factors for creating accidental behavior effects will be the sudden radical changes in our environment caused by civilization and technology.
[...] This makes EvoPsych almost impossible to practice as a genuine science. What it wants to know, is almost always simply impossible to know (at least currently).
First, EvoPsych imagines such a vast repertoire of evolved stimulus-response psychological mechanisms as to require a vast genetic apparatus that simply isn’t found in the human genome.
Second, [...] EvoPsych needs to test the non-adaptive hypothesis for any claim first. It should not be assuming every human behavior is a product of biological adaptation.
[...]
(1) The evidence actually suggests human evolution may operate at a faster pace than EvoPsych requires, such that its assumption of ancient environments being wholly determinative of present biology is false.
(2) “Neuroscientists have been aware since the 1980s that the human brain has too much architectural complexity for it to be plausible that genes specify its wiring in detail,”
(3) “The view that a universal genetic programme underpins human cognition is also not fully consistent with current genetic evidence.”
(4) “Human behavioral genetics has also identified genetic variation underlying an extensive list of cognitive and behavioural characteristics,” thus challenging any claim that certain traits were adaptively selected for—when, after tens of thousands of years, the variance itself was clearly adaptively selected for.
(5) “The thesis of massive modularity is not supported by the neuroscientific evidence,”
(6) “Evolutionary psychologists rarely examine whether their hypotheses regarding evolved psychological mechanisms are supported by what is known about how the brain works.”
(7) EvoPsych needs to start doing experiments in social learning, to see what can and can’t be unlearned by a change in culture and cognition, so as to isolate what actually is biological, and what is actually instead just picked up [...]
(8) [...] such studies do not test the evolutionary hypotheses themselves [...] by failing to rule out plausible alternative explanations for all of its results, EvoPsych has actually failed to prove anything at all.
[Link] A rational response to the Paris attacks and ISIS
Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, considering the alternative, orienting toward our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer, a major newspaper (16th largest in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience in order to raise the sanity waterline.