Proofs, Implications, and Models
Followup to: Causal Reference
From a math professor's blog:
One thing I discussed with my students here at HCSSiM yesterday is the question of what is a proof.
They’re smart kids, but completely new to proofs, and they often have questions about whether what they’ve written down constitutes a proof. Here’s what I said to them.
A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing.
This is not quite the definition I would give of what constitutes "proof" in mathematics - perhaps because I am so used to isolating arguments that are convincing, but ought not to be.
Or here again, from "An Introduction to Proof Theory" by Samuel R. Buss:
There are two distinct viewpoints of what a mathematical proof is. The first view is that proofs are social conventions by which mathematicians convince one another of the truth of theorems. That is to say, a proof is expressed in natural language plus possibly symbols and figures, and is sufficient to convince an expert of the correctness of a theorem. Examples of social proofs include the kinds of proofs that are presented in conversations or published in articles. Of course, it is impossible to precisely define what constitutes a valid proof in this social sense; and, the standards for valid proofs may vary with the audience and over time. The second view of proofs is more narrow in scope: in this view, a proof consists of a string of symbols which satisfy some precisely stated set of rules and which prove a theorem, which itself must also be expressed as a string of symbols. According to this view, mathematics can be regarded as a 'game' played with strings of symbols according to some precisely defined rules. Proofs of the latter kind are called "formal" proofs to distinguish them from "social" proofs.
In modern mathematics there is a much better answer that could be given to a student who asks, "What exactly is a proof?", which does not match either of the above ideas. So:
Meditation: What distinguishes a correct mathematical proof from an incorrect mathematical proof - what does it mean for a mathematical proof to be good? And why, in the real world, would anyone ever be interested in a mathematical proof of this type, or obeying whatever goodness-rule you just set down? How could you use your notion of 'proof' to improve the real-world efficacy of an Artificial Intelligence?
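The "formal" view Buss describes can be made concrete in a few lines. The following is my own toy illustration, not anything from the post: a purely syntactic checker over propositional formulas written as plain strings, which accepts a proof exactly when every line is an axiom or follows from earlier lines by modus ponens. No "convincingness" is consulted anywhere.

```python
def check_proof(axioms, proof):
    """Return True iff every line is an axiom or follows by modus ponens."""
    derived = []
    for formula in proof:
        ok = formula in axioms
        if not ok:
            # Modus ponens: from A and "A -> B", conclude B.
            for a in derived:
                if ("%s -> %s" % (a, formula)) in derived:
                    ok = True
                    break
        if not ok:
            return False
        derived.append(formula)
    return True

axioms = {"P", "P -> Q", "Q -> R"}
print(check_proof(axioms, ["P", "P -> Q", "Q", "Q -> R", "R"]))  # True
print(check_proof(axioms, ["P", "R"]))  # False: R has no derivation yet
```

Note that this checker implements only the second, "game played with strings" view; nothing in it answers the meditation's question of why such a game should track truth about the real world.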
Constructing fictional eugenics (LW edition)
Yvain asked:
So if you had to design a eugenics program, how would you do it? Be creative.
I'm asking because I'm working on writing about a fictional society that practices eugenics. I want them to be interesting and sympathetic, and not to immediately pattern-match to a dystopia that kills everyone who doesn't look exactly alike.
My reply was too long for LiveJournal, so I'm posting it here:
1. The real step 1 in any program like this would be to buy the 3 best modern textbooks on animal breeding and read them. (My grandfather is a researcher in this field so I'm unusually aware that it exists.)
2. If you give me genetic selection on multiple possible embryos where I can read off the genome of each one, I can do much better, much faster, than if I'm only allowed to look at the mother and father's genome and predict on that basis. If I can only look at the mother and father's relatives and life achievements, I do worse, but modern tech is very rapidly advancing to be able to read off the parents' genome cheaply.
3. If society's utility has a large component for genius production, then you probably want a very diverse mix of different high-IQ genes combined into different genotypes and phenotypes. (Although some recent research suggests that the most important thing for IQ may be avoiding mutational load, i.e., the modal genome would be super-von-Neumann. Even so, we'd want a diverse mix of everything else cognitive that wasn't about modality.)
4. Doing a Bayesian value-of-information calculation on rare alleles and potentially interesting allele combinations will automatically build a value for diversity into your eugenics program, based on the value of promoting a gene / combo in much larger numbers if that gene or gene combo is found to be successful. You would get much *more* interesting diversity in the next generation automatically, as many previously low-frequency alleles were combined in greater numbers and greater diversity than before. *Not* doing a value-of-info calculation accounts for a lot of the dystopic load of alleged dystopias.
5. The obvious basic instrument in a society depicted as well-intentioned would be an economic policy of trying to internalize the externalities of a child, just like a well-intentioned society might try to internalize the externalities of e.g. carbon dioxide emissions, instead of regulating/capping them directly, in order to maximize net social welfare. There would be a tax or benefit based on how much your child is expected to cost society (not just governmental costs in the form of health care, schooling etc., but costs to society in general, including foregone labor of a working parent, etc.) and how much that child is expected to benefit society (not lifetime tax revenue or lifetime earnings, but lifetime value generated - most economic actors only capture a fraction of the value they create). If it looks like you're going to have a valuable child, you get your benefit in the form of a large cash bonus up-front (love that hyperbolic discounting) and lots of free childcare so you can go on having more children. The marketed social goal would be to avert the modern trope where parenthood is this dreadful burdensome inconvenience compared to playing video games, and this is bad for society because society runs out of valuable future workers whose benefits-to-society the parents mostly don't capture. Probably the hard part from a marketing standpoint would be the proposal to do actual genetic calculations, even if it's to allegedly increase social benefit and prevent the system from being "exploited" (i.e. going dysgenic-Malthusian).
6. As suggested in an earlier comment, financializing progressive shares of future income (as diverted from tax streams, maybe) is an obvious way to privatize prediction, but only of tax streams, or at best revenue earned by the prospective individual. (I hadn't thought of this until I read that comment, so credit where it's due.)
7. Taxes on expected-negative kids are more icky but would still have the obvious economic justification. A nicer-sounding way of framing it would be requiring parents to post bond corresponding to the baseline government cost of each child in schooling and healthcare, with expected value potentially helping to make up the bond. An interesting question is whether anyone would really work out to expected-net-negative under this system, which question is isomorphic to asking whether it ever makes selfish sense for a country to restrict immigration. But adding at least some burden here makes sense from a cognitive perspective, because adding a cost is better at shaping behavior than adding a potentially foregone benefit.
8. The incentive for e.g. taking advantage of sperm banks is automatic in this system - you can either pay a bunch of money to have a kid with your current husband, or you can be paid thousands of dollars and get free child care to be inseminated by the sperm of a Nobel winner who never had to diet. I think that, in practice, the basic test of a system like this would be whether it could get people to go over the inconvenience threshold of actually using sperm banks and egg donors.
9. More interestingly, there's a built-in incentive for most people to have daughters rather than sons under this system. If we take the expected externalities of grandchildren into account in calculating the expected externalities of a child, then daughters can bear children using the best sperm via gene banks, while men have a harder time getting at the best eggs, making the grandchildren of daughters much more valuable if you assume they'll all be Nobel-laureate descendants. Daughters also add more marginal children to society than sons, since adding another son does not increase the marginal reproductive capacity of society unless single women aren't willing to reproduce using sperm banks (even taking into account subsidized childcare) and the polyamory factor has gone over what women with children are willing to tolerate. So if grandchildren are net positive, daughters are more marginally valuable to society until the sex ratio has gone well over 1:1. This is leaving aside the generally larger criminal downsides of men, the fact that men do worse in school (which may be a mere artifact of our horror of a school system), and so on. However, if the sex ratio becomes very extreme and the system is supposed to stick around for many generations, then most of the males born will be to people defying system incentives; and unless very few women reproduce with those males, there will be a large selective advantage for having sons outside the system. I.e., the system will be selecting for those who defy its incentives, which is a failure mode any such design should avoid.
(Though on yet further reflection, if there are many males with suboptimal genetics being produced and then reproducing, child-value calculations would rapidly yield the social advice to start birthing more above-average males even if they won't win the sperm-bank contest; and if women have a strong preference for present fathers, you could directly calculate that as social value as well as a factor in calculating expected genetic impact of males.)
10. In the end, all of this just adds up to, "If you can correctly internalize these externalities, the following social welfare factor will be increased..." and the key part is of course that "If".
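Point 2 above can be sketched numerically. This is a toy model with entirely made-up assumptions: a "genome" is a list of 0/1 alleles, the "trait score" is just their sum (real genetics is nothing like this additive), and the only point preserved is the order-statistics one: reading the genomes of several embryos and picking the best beats predicting from the parents alone, where every embryo looks like the mid-parent average.

```python
import random

random.seed(0)
N_LOCI = 100     # hypothetical number of relevant loci
N_EMBRYOS = 10   # hypothetical number of embryos to select among

def trait_score(genome):
    # Crude additive stand-in for a polygenic trait prediction.
    return sum(genome)

def make_embryo(mother, father):
    # Each locus inherits from one parent at random.
    return [random.choice(pair) for pair in zip(mother, father)]

mother = [random.randint(0, 1) for _ in range(N_LOCI)]
father = [random.randint(0, 1) for _ in range(N_LOCI)]
embryos = [make_embryo(mother, father) for _ in range(N_EMBRYOS)]

# Parent-only prediction: every embryo is expected to hit the mid-parent mean.
mid_parent = (trait_score(mother) + trait_score(father)) / 2

# Genome-reading selection: keep the best of the batch.
best = max(embryos, key=trait_score)
print(mid_parent, trait_score(best))
```

The gap between the mid-parent mean and the best-of-batch score is the selection gain per generation that parent-level prediction alone cannot capture.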
Stuff That Makes Stuff Happen
Followup to: Causality: The Fabric of Real Things
Previous meditation:
"You say that a universe is a connected fabric of causes and effects. Well, that's a very Western viewpoint - that it's all about mechanistic, deterministic stuff. I agree that anything else is outside the realm of science, but it can still be real, you know. My cousin is psychic - if you draw a card from his deck of cards, he can tell you the name of your card before he looks at it. There's no mechanism for it - it's not a causal thing that scientists could study - he just does it. Same thing when I commune on a deep level with the entire universe in order to realize that my partner truly loves me. I agree that purely spiritual phenomena are outside the realm of causal processes that can be studied by experiments, but I don't agree that they can't be real."
Reply:
Fundamentally, a causal model is a way of factorizing our uncertainty about the universe. One way of viewing a causal model is as a structure of deterministic functions plus uncorrelated sources of background uncertainty.
Let's use the Obesity-Exercise-Internet model (reminder: which is totally made up) as an example again:
We can also view this as a set of deterministic functions Fi, plus uncorrelated background sources of uncertainty Ui:
This says that the value x3 - how much someone exercises - is a function of how obese they are (x1), how much time they spend on the Internet (x2), plus some other background factors U3 which don't correlate to anything else in the diagram, all of which collectively determine, when combined by the mechanism F3, how much time someone spends exercising.
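The functional view can be sketched directly as code. The specific functions and coefficients below are made up, just like the Obesity-Exercise-Internet model itself; what matters is the shape: each variable is a deterministic function of its parents plus an independent noise term.

```python
import random

random.seed(1)

def sample():
    # Independent background uncertainty: U1, U2, U3 are uncorrelated.
    u1, u2, u3 = (random.gauss(0, 1) for _ in range(3))
    x1 = u1                            # obesity:      x1 = F1(U1)
    x2 = u2                            # internet use: x2 = F2(U2)
    x3 = -0.5 * x1 - 0.3 * x2 + u3     # exercise:     x3 = F3(x1, x2, U3)
    return x1, x2, x3

samples = [sample() for _ in range(1000)]
```

All the correlation between x3 and its parents comes from the deterministic mechanism F3; all the residual scatter comes from U3, which by assumption correlates with nothing else in the diagram.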
EY "Politics is the Mind Killer" sighting at Washington Examiner and Reason.com
Original at Washington Examiner
http://washingtonexaminer.com/down-with-politics/article/2508882#.UGSscI0iYZm
...
Politics makes us worse because "politics is the mindkiller," as intelligence theorist Eliezer Yudkowsky puts it. "Evolutionary psychology produces strange echoes in time," he writes, "as adaptations continue to execute long after they cease to maximize fitness." We gorge ourselves sick on sugar and fat, and we indulge our tribal hard-wiring by picking a political "team" and denouncing the "enemy."
But our atavistic Red/Blue tribalism plays to the interests of "individual politicians in getting you to identify with them instead of judging them," Yudkowsky writes.
...
Examiner Columnist Gene Healy is a vice president at the Cato Institute and the author of "The Cult of the Presidency."
Repost at Reason.com
http://reason.com/archive/2012/09/25/why-politics-are-bad-for-us
Eliezer's Sequences and Mainstream Academia
Due in part to Eliezer's writing style (e.g. not many citations), and in part to Eliezer's scholarship preferences (e.g. his preference to figure out much of philosophy on his own), Eliezer's Sequences don't accurately reflect the close agreement between the content of The Sequences and work previously done in mainstream academia.
I predict several effects from this:
- Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
- Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
- If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.
I'd like to counteract these effects by connecting the Sequences to the professional literature. (Note: I sort of doubt it would have been a good idea for Eliezer to spend his time tracking down more references and so on, but I realized a few weeks ago that it wouldn't take me much effort to list some of those references.)
I don't mean to minimize the awesomeness of the Sequences. There is much original content in them (edit: probably most of their content is original), they are engagingly written, and they often have a more transformative effect on readers than the corresponding academic literature.
I'll break my list of references into sections based on how likely I think it is that a reader will have missed the agreement between Eliezer's articles and mainstream academic work.
(This is only a preliminary list of connections.)
Under-acknowledged Value Differences
I've been reading a lot of the recent LW discussions on politics and gender, and noticed that people rarely bring up or explicitly acknowledge that different people affected by some political or gender issue have different values/preferences, and therefore solving the problem involves a strong element of bargaining and is not just a matter of straightforward optimization. Instead, we tend to talk as if there is some way to solve the problem that's best for everyone, and that rational discussion will bring us closer to finding that one best solution.
For example, when discussing gender-related problems, one solution may be generally better for men, while another solution may be generally better for women. If people are selfish, then they will each prefer the solution that's individually best for them, even if they can agree on all of the facts. (It's unclear whether people should be selfish, but it seems best to assume that most are, for practical purposes.)
Unfortunately, in bargaining situations, epistemic rationality is not necessarily instrumentally rational. In general, convincing others of a falsehood can be useful for moving the negotiated outcome closer to one's own preferences and away from others', and this may be done more easily if one honestly believes the falsehood. (One of these falsehoods may be, for example, "My preferred solution is best for everyone.") Given these (subconsciously or evolutionarily processed) incentives, it seems reasonable to think that the more solving a problem resembles bargaining, the more likely we are to be epistemically irrational when thinking and talking about it.
If we do not acknowledge and keep in mind that we are in a bargaining situation, then we are less likely to detect such failures of epistemic rationality, especially in ourselves. We're also less likely to see that there's an element of Prisoner's Dilemma in participating in such debates: your effort to convince people to adopt your preferred solution is costly (in time and in your and LW's overall sanity level) but may achieve little because someone else is making an opposite argument. Both of you may be better off if neither engaged in the debate.
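The Prisoner's Dilemma structure of such debates can be made explicit with a toy payoff matrix. The numbers below are entirely hypothetical; the point is only the shape: arguing is a dominant strategy for each side, yet mutual arguing leaves both worse off than mutual abstention.

```python
# Rows: your move; columns: opponent's move; entries: (your payoff, theirs).
payoffs = {
    ("argue", "argue"):     (-2, -2),  # both burn time and sanity, little net movement
    ("argue", "abstain"):   ( 3, -3),  # you pull the outcome toward your preference
    ("abstain", "argue"):   (-3,  3),
    ("abstain", "abstain"): ( 0,  0),  # neither pays the cost of debate
}

def best_response(opponent_move):
    # Pick the move maximizing your own payoff against a fixed opponent move.
    return max(["argue", "abstain"],
               key=lambda mine: payoffs[(mine, opponent_move)][0])

print(best_response("argue"), best_response("abstain"))  # argue argue
```

Whatever the other side does, each debater's best response is to argue, and yet the (argue, argue) outcome is worse for both than (abstain, abstain) - exactly the dilemma described above.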
Jews and Nazis: a version of dust specks vs torture
This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.
Setup: suppose the world is populated by two groups of people, one just wants to be left alone (labeled Jews), the other group hates the first one with passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first one (loves their relatives, their country and is known to be in general quite rational). They just can't help but hate the other guys (this condition is to forestall the objections like "Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's just assume, for the sake of argument, that there is no changing that hatred.
Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number of Nazis large enough that exterminating the Jews would be a net positive utility for the world? Umm... Not sure... I'd like to think that probably not, human life is sacred! What if some day their society invents immortality, then every death is like an extremely large (infinite?) negative utility!
Fine then, not exterminating. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
This logic is completely analogous to that in the dust specks vs torture discussions, only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring changing the Nazis' terminal value of hating Jews, the rational behavior is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.
This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").
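For concreteness, the aggregation step that drives the conclusion can be written out with made-up numbers: if each of N Nazis gains a small utility epsilon from the camps and each of k Jews loses a large utility L, a naive total-utilitarian sum flips sign once N exceeds k * L / epsilon - exactly the dust-specks arithmetic with relabeled variables.

```python
# All numbers are hypothetical stand-ins, chosen only to show the sign flip.
epsilon = 0.001    # small per-Nazi utility gain from the camps
L = 1_000_000.0    # large per-Jew utility loss
k = 1_000          # number of Jews

threshold = k * L / epsilon  # population of Nazis at which the sum flips

for N in (10**11, 10**13):
    total = N * epsilon - k * L
    print(N, total > 0)
```

No finite L escapes this: for any fixed per-victim loss, a large enough beneficiary population makes the naive sum come out positive, which is why the post argues the repugnance can't be patched just by making L bigger.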
EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog.
Dragon Ball's Hyperbolic Time Chamber
A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses and none that will change the world, due to the severe penalties humans would incur while using it, and basic constraints like Amdahl's law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except most of the time dilation disadvantages do not apply or can be ameliorated and hence any speedups could be quite effectively exploited. I suggest that skeptics of the idea that speedups give advantages are implicitly working off the crippled time dilation tool and not making allowance for the disanalogies.
Master version on gwern.net
Let's be friendly to our allies
Less Wrong was created to produce rationalists, so that many causes could benefit from the efforts of those rationalists. The point is not just to have a nice place to talk about rationality, but to really make ourselves stronger, to apply the lessons that we learn here to improve our own lives, and to improve the world.
80,000 Hours is an organization created to provide direct domain specific help to people who want to support charitable causes, the same causes Less Wrong is supposed to produce rationalists to support. 80,000 Hours has goals clearly aligned with ours. Provided we think they are pursuing their aligned goals effectively, we should be excited about this. We should be happy when they reach out to us, to see how we can work together.
So, I am very disappointed to see the negative reception of a Less Wrong post by 80,000 Hours member Benjamin Todd, asking us what questions we would like 80,000 Hours to answer for us. They are basically offering to do free research for us on things that we care about, because our goals are aligned. And yet, as of this writing, that post has a score of -7, and it has received comments complaining that it is an ad. To be clear, ads of the sort that we want to avoid do not offer free services relevant to a core purpose of our community. I won't argue whether or not the post was an ad, but I will say that it belongs on Less Wrong and we should give it a good reception.
I would like to thank Benjamin Todd and others at 80,000 Hours for their work in helping people be more effective philanthropists and otherwise support important causes, and for engaging Less Wrong in this project. I also thank everyone who responded to the post with their actual questions about making a difference.
And, please, can we be nice to people who help us?
Neuroscience basics for LessWrongians
The origins of this article are in my partial transcript of the live June 2011 debate between Robin Hanson and Eliezer Yudkowsky. While I still feel like I don't entirely understand his arguments, a few of his comments about neuroscience made me strongly go, "no, that's not right."
Furthermore, I've noticed that while LessWrong in general seems to be very strong on the psychological or "black box" side of cognitive science, there isn't as much discussion of neuroscience here. This is somewhat understandable. Our current understanding of neuroscience is frustratingly incomplete, and too much journalism on neuroscience is sensationalistic nonsense. However, I think what we do know is worth knowing. (And part of what makes much neuroscience journalism annoying is that it makes a big deal out of things that are totally unsurprising, given what we already know.)
My qualifications to do this: while my degrees are in philosophy, for a while in undergrad I was a neuroscience major, and ended up taking quite a bit of neuroscience as a result. This means I can assure you that most of what I say here is standard neuroscience which could be found in an introductory textbook like Nichols, Martin, Wallace, & Fuchs' From Neuron to Brain (one of the textbooks I used as an undergraduate). The only things that might not be totally standard are the conjecture I make about how complex currently-poorly-understood areas of the brain are likely to be, and also some of the points I make in criticism of Eliezer at the end (though I believe these are not a very big jump from current textbook neuroscience).
One of the main themes of this article will be specialization within the brain. In particular, we know that the brain is divided into specialized areas at the macro level, and we understand some (though not very much) of the micro-level wiring that supports this specialization. It seems likely that each region of the brain has its own micro-level wiring to support its specialized function, and in some regions the wiring is likely to be quite complex.