What's the right way to think about how much to give to charity?
I'd like to hear from people about the process they use to decide how much to give to charity. I have a very high income, and while we donate a significant amount in absolute terms, in relative terms it is less than 1% of our post-tax income. That seems too little to me, but I have no moral intuition about what the right amount is.
I have a good intuition on how to allocate the money, so that's not a problem.
Background: I have a wife and two kids, one with significant health issues (i.e., medical bills, possibly for life). Most of what we spend goes to private school tuition for both kids, the above-mentioned medical bills, the mortgage, and miscellaneous living expenses. We also max out retirement savings.
If you have some sort of quantitative system where you figure out how much to spend on charity, please share. If you just use vague feelings, and you think there can be no reasonable quantitative system, please tell me that as well.
Update: as suggested in the comments, I'll make it more explicit: please also share how you determine how much to give.
Circular belief updating
This article is going to be in the form of a story, since I want to lay out all the premises in a clear way. There's a related question about religious belief.
Let's suppose that there's a country called Faerie. I have a book about this country which describes all people living there as rational individuals (in a traditional sense). Furthermore, it states that some people in Faerie believe that there may be some individuals there known as sorcerers. No one has ever seen one, but they may or may not interfere in people's lives in subtle ways. Sorcerers are believed to be such that there can't be more than one of them around and they can't act outside of Faerie. There are 4 common belief systems present in Faerie:
- Some people believe there's a sorcerer called Bright who (among other things) likes people to believe in him and may be manipulating people or events to do so. He is not believed to be universally successful.
- Or, there may be a sorcerer named Invisible, who interferes with people only in such ways as to provide no information about whether he exists or not.
- Or, there may be an (obviously evil) sorcerer named Dark, who would prefer that people don't believe he exists, and interferes with events or people for this purpose, likewise not universally successfully.
- Or, there may be no sorcerers at all, or perhaps some other sorcerers that no one knows about, or perhaps some other state of things holds, such as there being multiple sorcerers, or sorcerers who don't obey the above rules. However, everyone in Faerie who falls into this category simply believes there's no such thing as a sorcerer.
This is completely exhaustive, because everyone believes there can be at most one sorcerer. Of course, some individuals within each group have different ideas about what their sorcerer is like, but within each group they all absolutely agree with their dogma as stated above.
Since I don't believe in sorcery, a priori I assign a very high probability to case 4, and very low (and equal) probabilities to the other three.
I can't visit Faerie, but I am permitted to conduct a scientific phone poll. I call a random person named Bob. It turns out he believes in Bright. Since P(Bob believes in Bright | case 1 is true) is higher than the unconditional probability, by Bayes' rule I should adjust the probability of case 1 upward. Does everyone agree? Likewise, the probability of case 3 should go up, since disbelief in Dark is evidence for the existence of Dark in exactly the same way, though perhaps to a smaller degree. Cases 2 and 4 then have to lose some probability, since the total must sum to 1. If I call a second person, Daisy, who turns out to believe in Dark, I should adjust all the probabilities in the opposite direction. Note that I am not asking either of them about the actual evidence they have, just what they believe.
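The update after polling Bob can be sketched numerically. This is only an illustration: the priors and likelihoods below are invented numbers, not anything implied by the setup.

```python
# Illustrative Bayesian update after polling Bob, who believes in Bright.
# All priors and likelihoods are made-up numbers for the sketch.

priors = {"Bright": 0.01, "Invisible": 0.01, "Dark": 0.01, "None": 0.97}

# P(a random resident believes in Bright | each case). Under case 1 Bright
# promotes belief in himself, so the likelihood is higher; under case 3 Dark
# suppresses belief in Dark, slightly inflating every other belief group.
likelihood = {"Bright": 0.50, "Invisible": 0.25, "Dark": 0.28, "None": 0.25}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

assert posterior["Bright"] > priors["Bright"]  # case 1 gains
assert posterior["Dark"] > priors["Dark"]      # case 3 gains a little
assert posterior["None"] < priors["None"]      # cases 2 and 4 pay for it
```

With these numbers, one believer in Bright barely dents the dominant case 4, which matches the intuition that a single poll response is weak evidence.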
I think this is straightforward so far. Here's the confusing part. It turns out that both Bob and Daisy are themselves aware of this argument. So Bob says that one of the reasons he believes in Bright is that his belief is itself positive evidence for Bright's existence. And Daisy believes in Dark despite her belief being evidence against his existence (presumably because some other evidence is overwhelming).
Here are my questions:
- Is it sane for Bob and Daisy to be in such a positive or negative feedback loop? How is this resolved?
- If Bob and Daisy took the evidence provided by their belief into account already, how does this affect my own evidence updating? Should I take it into account regardless, or not at all, or to a smaller degree?
I am looking forward to your thoughts.
Reasons to believe
I've been thinking recently that I believe in the Theory of Evolution on about the same level as in the Theory of Plate Tectonics. I have grown up being taught that both are true, and I am capable of doing research in either field, or at least reading the literature to examine them for myself. I have not done so in either case, to any reasonable extent.
I am not swayed by the fact that some people consider the former (though not so much the latter) to be controversial, primarily because those people aren't scientists. I tend to be self-congratulatory about this, but then I realize that I am not actually interested in examining the evidence; I am essentially taking it on faith (which the creationists are quick to point out). I think I have good Bayesian reasons to take science on faith (rather than, say, the mythology offered in its stead), but do I therefore have good reasons to accept a particular well-established scientific theory on faith, or is it incumbent upon me to examine it, if I think its conclusions are important to my life?
In other words, is it epistemologically wrong to rely on an authority that has produced a number of correct statements (that I could and did verify) to be more or less correct in the future? If I think of this problem as a sort of belief network, with a parent node that has causal connections to hundreds of children, I think such a reliance is reasonable, once you establish that the authority is indeed accurate. On the other hand, appeal to authority is probably the most famous fallacy there is.
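For what it's worth, the "reliable authority" intuition can be given a simple Bayesian form: treat each of the authority's verifiable claims as a trial and estimate its accuracy from the track record. A minimal sketch with a Beta-Bernoulli model; the counts are hypothetical, not anything I have actually tallied:

```python
# Estimate an authority's accuracy from its verified track record using a
# Beta-Bernoulli model. The counts here are hypothetical.
alpha, beta = 1.0, 1.0      # Beta(1, 1): uniform prior over accuracy
checked_correct = 50        # claims I verified and found correct
checked_wrong = 1           # claims I verified and found wrong

# Conjugate update: each verified claim adds one count.
alpha += checked_correct
beta += checked_wrong

# Posterior mean accuracy: the probability that the next, unchecked claim
# (say, a well-established theory) is correct, given only the track record.
expected_accuracy = alpha / (alpha + beta)
assert expected_accuracy > 0.9
```

On this model, relying on a well-tested authority isn't the appeal-to-authority fallacy; it's an ordinary inference from a strong track record, which degrades gracefully if the authority starts being wrong.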
Any thoughts? If Eliezer or other people have written on this exact topic, a reference would be appreciated.
Truth & social graces
I've seen an article on LW about Santa Claus, and most people were very keen on not lying to their kids (I agree). I have a little kid who is generally quite truthful, innocent enough not to lie in most cases. I noticed recently that when someone asks him, "How are you?", he usually answers in detail because, well, you asked, didn't you? When I was a teenager I hated people who lied, and I ignored these unwritten social rules to the extent I could: I didn't ask if I didn't want to know, and people thought I was rude. So, my question is, should I teach him to lie on these occasions?
More broadly, why am I committed to being truthful in general? I suppose because I would hate to be lied to myself. Maybe that's a kind of magical thinking, or maybe it's part of the social contract. This sort of lying arguably promotes social well-being, because answering truthfully places an unwelcome burden on an interlocutor who asked out of politeness and isn't actually interested in the answer. But it still feels wrong to lie, and even more wrong to teach your kid to do so.
On self-deception
(Meta-note: First post on this site)
I have read the sequence on self-deception/doublethink and I have some comments for which I'd like to solicit feedback. This post focuses on the idea that it's impossible to deceive oneself, i.e., to make oneself believe something one knows a priori to be wrong. I think Eliezer believes this to be true, e.g. as discussed here. I'd like to propose a contrary position.
Let's suppose that a super-intelligent AI has been built, and that it knows plenty of tricks no human ever thought of for presenting a false argument whose flaws are not easily detected. Whether it does so by presenting subtly wrong premises, by incorrect generalization, by word tricks, or by who knows what, is not important. It can, in any case, present an argument in a Socratic manner, and like Socrates' interlocutors, you find yourself agreeing with things you didn't expect to agree with. I now come to this AI and request that it make a library of books for me (personally). Each is to be such that if I (specifically) were to read it, I would very likely come to believe a certain proposition. It should take into account that I may initially be opposed to the proposition, and that I am aware I am being manipulated. The AI produces such a library on the topic of religion, covering all major known religions, A to Z. It contains a book called "You should be an atheist", and "You should be a Christian", etc., up to "You should be a Zoroastrian".
Suppose, I now want to deceive myself. I throw fair dice, and end up picking a Zoroastrian book. I now commit to reading the entire book and do so. In the process I become convinced that indeed, I should be a Zoroastrian, despite my initial skepticism. Now my skeptical friend comes to me:
Q: You don't really believe in Zoroastrianism.
A: No, I do. Praise Ahura Mazda!
Q: You can't possibly mean it. You know that you didn't believe it and you read a book that was designed to manipulate you, and now you do? Don't you have any introspective ability?
A: I do. I didn't intend to believe it, but it turns out that it is actually true! Just because I picked this book up for the wrong reason doesn't mean I can't now be genuinely convinced. There are many examples of people who studied the religion of their enemies in order to discredit it and became convinced of its truth in the process. I think St. Augustine was a somewhat similar case.
Q: But you know the book is written in such a way as to convince you, whether it's true or not.
A: I took that into account, and my prior was really low that I would ever believe it. But the evidence presented in the book was so significant and convincing that it overcame my skepticism.
Q: But the book is a rationalization of Zoroastrianism. It's not an impartial analysis.
A: I once read a book trying to explain and prove Gödel's theorem. It was written explicitly to convince the reader that the theorem was true. It started with the conclusion and built all arguments to prove it. But the book was in fact correct in asserting this proposition.
Q: But the AI is a clever arguer. It only presents arguments that are useful to its cause.
A: So is the book on Gödel's theorem. It never presented any arguments against Gödel, and I know there are some, at least philosophical ones. It's still true.
Q: You can't make a new decision based on such a book which is a rationalization. Perhaps it can only be used to expand one's knowledge. Even if it argues in support of a true proposition, a book that is a rationalization is not really evidence for the proposition's truth.
A: You know that our AI created a library of books to argue for most theological positions. Do you agree that with very high probability one of the books in the library argues for a true proposition? E.g. the one about atheism? If I were to read it now, I'd become an atheist again.
Q: Then do so!
A: No, Ahura Mazda will punish me. I know I would think he's not there after I read it, but he'll punish me anyway. Besides, at present I believe that book to be intentionally misleading. Anyway, if one of the books argues for a true proposition, it may also use a completely valid argument without any tricks. I think this is true of this book on Zoroastrianism, and is false of all other books in AI's library.
Q: Perhaps I believe the Atheism book argues for a true proposition, but it is possible that all the books written by the AI use specious reasoning, even the one that argues for a true proposition. In this case, you can't rely on any of them being valid.
A: Why should the AI do that? Valid argument is the best way to demonstrate the truth of something that is in fact true. If tricks are used, this may be uncovered which would throw doubt onto the proposition being argued.
Q: If you picked a book "You should believe in Zeus", you'd believe in Zeus now!
A: Yes, but I would be wrong. You see, I accidentally picked the right one. Actually, it's not entirely accidental. You see, if Ahura Mazda exists, he would with some positive probability interfere with the dice and cause me to pick the book on the true religion because he would like me to be his worshiper. (Same with other gods, of course). So, since P(I picked the book on Zoroastrianism|Zoroastrianism is a true religion) > P(I picked the book on Zoroastrianism|Zoroastrianism is a false religion), I can conclude by Bayes' rule that me picking that book up is evidence for Zoroastrianism. Of course, if the prior P(Zoroastrianism is a true religion) is low, it's not a lot of evidence, but it's some.
Q: So you are really saying you won the lottery.
A: Yes. A priori, the probability is low, of course. But I actually have won the lottery: some people do, you know. Now that I have won it, the probability is close to 1 (It's not 1, because I recognize that I could be wrong, as a good Bayesian should. But the evidence is so overwhelming, my model says it's really close to 1).
Q: Why don't you ask your super-intelligent AI directly whether the book's reasoning is sound?
A: According to the book, I am not supposed to do it because Ahura Mazda wouldn't like it.
Q: Of course, the book is written by the superintelligent AI in such a way that there's no trick I can think of that it didn't cover. Your ignorance is now invincible.
A: I still remain a reasonable person and I don't like being denied access to information. However, I am now convinced that while having more information is useful, it is not my highest priority anymore. I know it is possible for me to disbelieve again if given certain (obviously false!) information, but my estimate of the chance that any further true information could change my opinion is very low. In fact, I am far more likely to be deceived by false information about Ahura Mazda, because I am not superintelligent. This is why Ahura Mazda (who is superintelligent, by the way) advises that one should not tempt oneself into sin by reading any criticism of Zoroastrianism.
Q: Just read that atheist book and become normal again!
A: You are possessed by demons! Repent and become the follower of Ahura Mazda!
So, are you now convinced that ~~you should be a Zoroastrian~~ one can really intentionally deceive oneself and remain in a mentally consistent (though weird) state?
To answer one potential objection: an ordinary person may well not need a super-intelligent AI to fool him, just a very convincing book or a very clever interlocutor. As to why someone would want to submit to this, I'll discuss that in a separate post.
Update:
Here are some points that I think are useful to add from various comments.
- Require the AI to use tricks as little as possible. In particular, every true proposition it argues should be argued for in an entirely logically valid fashion. This shortcuts the part of the Q&A where I discuss whether the book you picked can possibly be entirely valid.
- From the above point, and assuming that really only one religion of our set can be "true", P(Book on X is valid) = P(X is true). If more than one religion can be true at the same time, it's possible that P(Book on X is valid) > P(X is true).
- P(X is true) for any religion X is some value strictly between 0 and 1. It's obvious, but just to ensure we are not ruling out anything a priori. This is my ordinary probability of belief in X.
- P(Book on X will convince me to believe X with probability > 1-epsilon) is some value close to 1-f(epsilon), where f(epsilon) is small but presumably increases as epsilon -> 0. In other words, it's harder for the AI to convince me beyond a shadow of a doubt. Epsilon is picked by the AI to keep f(epsilon) reasonably small. I don't know what value was picked; it clearly depends on X: the more ridiculous the proposition, the larger the epsilon. If the AI is infinitely more intelligent than I am, perhaps it can keep f(epsilon) small even as epsilon -> 0.
- However, whatever P(Book on X will convince me to believe X with probability > 1-epsilon) for X=Zoroastrianism is, it was high enough that the book succeeded in my case.
- I do not think it is valid to make a meta-statement about the value of the posterior P(X is true | I have read the book on X) without actually reading the book. In particular, the book has at least this probability of being valid: P(Book on X is valid) >= P(X is true) > 0, so you cannot claim that the posterior equals the prior just because you believe the book will convince you of X and it does. Additionally, any meta-argument clearly depends on f(epsilon), which I don't know.
- The book can convince me to adjust my world view in such a way as to rule out the invisible-elephant problem, at least where modern science is concerned. I will remember what science says, of course, but where it conflicts with my religion I will really believe what the religion says, even if it says it's turtles all the way down, and I will really be afraid of falling off the edge of the Earth if that's what my religion teaches.
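The dice argument from the dialogue can also be put into numbers. Everything here is hypothetical: 26 books (one per religion, A to Z), a made-up prior, and a made-up chance that the true god rigs the draw; the point is only the shape of the odds-form update.

```python
# Odds-form Bayes for the dialogue's dice argument. All numbers are
# hypothetical assumptions, not claims from the post.
n_books = 26
p_rig = 0.5                   # assumed chance the true god interferes
prior = 1e-6                  # assumed prior P(Zoroastrianism is true)

# P(pick the Zoroastrian book | Zoroastrianism true): a rigged draw hits it
# for certain, an unrigged draw hits it 1 time in 26.
p_pick_given_true = p_rig + (1 - p_rig) / n_books
# P(pick it | Zoroastrianism false): uniform draw over the books.
p_pick_given_false = 1.0 / n_books

# Posterior odds = prior odds * likelihood ratio.
likelihood_ratio = p_pick_given_true / p_pick_given_false  # 13.5
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

# Picking the book is evidence, but with a tiny prior the posterior stays tiny.
assert posterior > prior
assert posterior < 1e-4
```

As the dialogue itself concedes, the update from the dice is real but small; under these assumptions, essentially all of the work is done by the book, not the draw.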
Any thoughts on whether I should post this on the main site?