
Yudkowsky's brain is the pinnacle of evolution

-26 Yudkowsky_is_awesome 24 August 2015 08:56PM

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. "Imagine," they said, "some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle."

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until September 11, 1979, a date which will later be made a national holiday and will replace Christmas as the biggest one. This was of course the day when the most important being that has ever existed or will ever exist was born.

Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research on AI risk at anything like the scale of what Yudkowsky has done in the 2000s. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years. However, Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone ever before. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky gets the Nobel Prize in Literature on the strength of his Hugo Award recognition, a special council will be organized to study his intellect, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs first, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. Actually, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry for the tiling to start as soon as possible, and there will be mass suicides as people rush to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he, with his vast intellect, will understand and accept that it's truly the best thing to do.

Magic and the halting problem

-5 kingmaker 23 August 2015 07:34PM

It is clear that the Harry Potter book series is fairly popular on this site, as evidenced by the fanfiction, which approaches the existence of magic objectively and rationally. I would suggest, however, that most if not all of the people on this site would agree that magic, as presented in Harry Potter, is merely fantasy. Our understanding of the laws of physics and our rationality forbid anything so absurd as magic; it is regarded by most rational people as superstition.


This position can be strengthened by grabbing a stick, pointing it at some object, chanting "wingardium leviosa", and waiting for the object to rise magically. When (or if) this fails to work, a proponent of magic may resort to special pleading and claim that because we didn't believe it would work it could not work, or that we need a special wand, or that we are a squib or muggle. The proponent can perpetually move the goalposts, since their idea of magic is unfalsifiable. But as it is unfalsifiable, it is rejected, in the same way that most of us on this site do not believe in any god(s). If magic were found to explain certain phenomena scientifically, however, then I, and I hope everyone else, would come to believe in it, or at least shut up and calculate.


I personally subscribe to the Many Worlds Interpretation of quantum mechanics, so I effectively "believe" in the multiverse. That means it is possible that somewhere in the universal wavefunction there is an Everett branch in which magic is real, or at least one where, every time someone chants an incantation, the desired effect occurs by total coincidence. But how would the denizens of that universe be able to know that magic is not real, and that everything they had seen was sheer coincidence? Alan Turing pondered a related problem, known as the halting problem, which asks whether a general algorithm can determine whether a given algorithm will finish or run forever. He proved that no such algorithm exists that is correct for all inputs, although for some algorithms the answer is obvious, e.g. this code segment will loop forever:

 

while (true) {
    // do nothing
}
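Turing's diagonal argument can itself be sketched in a few lines. The following Python is purely illustrative: claims_to_halt is a hypothetical stand-in for a purported halting decider, and the point is that any concrete implementation of it must be wrong somewhere.

# Any claimed halting decider can be defeated by diagonalization.

def claims_to_halt(func):
    # Stand-in for a purported decider of "does func() halt?".
    # Any concrete implementation must be wrong about diagonal().
    return True

def diagonal():
    # Do the opposite of whatever the decider predicts about us.
    if claims_to_halt(diagonal):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

print(claims_to_halt(diagonal))  # True, yet diagonal() would never halt

If claims_to_halt(diagonal) returns True, diagonal() loops forever; if it returns False, diagonal() halts at once. Either way the decider is wrong about some program, which is the heart of Turing's proof.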

 

So how would a person distinguish between pseudo-magic that will inevitably fail, and real magic that reflects the true laws of physics? The only way to be certain that magic doesn't exist in such an Everett branch would be for incantations to fail repeatedly and testably, but that might only begin far in the future, long after all humans are deceased. This line of thinking leads me to wonder: do our laws of physics seem as absurd to these inhabitants as their magic seems to us? How do we know that we have the right understanding of reality, as opposed to being deceived by coincidence? If every human in this magical branch is deceived the same way, does this become their true reality? And finally, what if our entire understanding of reality, including logic, is mere deception by happenstance, and everything we think we know is false?

 

Effects of Castration on the Life Expectancy of Contemporary Men

15 Fluttershy 08 August 2015 04:37AM

Follow-up to: Lifestyle Interventions to Increase Longevity

Abstract

A recent review article by David Gems discusses possible mechanisms by which testosterone and dihydrotestosterone could shorten the life expectancies of human males, and examines previous research on the effects of castration on male survival. However, Gems does not examine how age at castration affects the size of the life extension castration confers, which this post does. In general, castration after puberty prolongs male life to a lesser extent than castration before the onset of puberty.

Additionally, Gems' review does not estimate how long modern-day eunuchs might live relative to intact human males. Two of the other three known studies on the effects of castration on human life expectancies found that, historically, castration prolonged life by more than a decade in the median case. However, some of the life expectancy gains from castration are due to the increased ability of eunuchs to fight off infections. The fact that fewer men die from infections in the 21st century than was the case in previous centuries means that modern-day eunuchs gain fewer years of life from castration than eunuchs gained from castration in the past. As seen from comparing Figure 3b and Figure 4, eunuchs castrated just before the onset of puberty extended their (mean) life expectancies by 11 years in Hamilton & Mestler's study, though modern eunuchs castrated at similar ages might expect to extend their life expectancies by 7 years. 

Introduction

A few relevant studies, such as the study of institutionalized eunuchs by Hamilton & Mestler, the study of Korean eunuchs by Min, Lee, and Park, and the review article by Gems are particularly worth reading or skimming for those interested in this topic. The excel file showing the work behind this post is also available. These documents are supplementary; reading them is not a prerequisite for reading this post.

This post will examine the proposition that castration of human males (specifically, orchiectomy, the surgical removal of both testicles, but not the penis) either before or after the onset of puberty will extend both their life expectancy and their lifespan. In light of antagonistic pleiotropy, it makes sense a priori that castration might extend one's life expectancy.

A number of papers have mentioned that the effects of castration on the life expectancy of different types of nonhuman animals don't provide a good model for the effects of castration on the life expectancy of human males. Specifically, "the relationship of gonadal functions to survival seems to involve many variables... individuals, strains, and species may vary in their response to gonadectomy". This leaves only a small number of studies that have much bearing on the question of whether or not orchiectomy extends human life expectancy. While there are some studies on the health effects of chemical and physical castration of (often elderly) modern men with prostate cancer, it seems like having prostate cancer would correlate with having other pathologies. Further, as will be examined later, it seems that orchiectomies performed at early ages have many positive effects on health, whereas orchiectomies performed later in life have fewer positive effects, and may even negatively affect some aspects of health.

After setting aside animal studies and studies of men with prostate cancer, only four papers directly relevant to whether orchiectomy increases the life expectancy of men remain. This is worth stating explicitly, since citing only a fraction of the available research on a given topic can be a fallacy. First, the study by Min, Lee, and Park found that, historically, Korean eunuchs lived 14-19 years longer than intact males from similar social classes in the median case. Secondly, Hamilton and Mestler's study of the effects of orchiectomy on the life expectancies of mentally retarded individuals found that males castrated before puberty lived about 13 years longer than intact men in the median case, and that males castrated after puberty experienced smaller lifespan gains. Thirdly, a letter to Nature by Nieschlag et al., which compared the lifespans of famous castrato singers to the lifespans of other singers from the same era, found that orchiectomized male singers lived about as long as intact male singers. Lastly, page four of the review article by David Gems examined all three of these studies, and, after finding methodological issues with the letter to Nature, concluded that the results in these papers were "consistent with the idea that testes are a determinant of the gender gap in human lifespan".

Evidence Regarding Whether or Not Orchiectomy After Puberty Increases Life Expectancy

The only study that examined the effect which age at orchiectomy had on the life expectancy gains from castration in humans was Hamilton and Mestler's study of mentally retarded, institutionalized individuals. Note that the participants in Hamilton & Mestler's study lived shorter lives than non-institutionalized Americans of the same era lived, which could likely be explained by the mentally retarded status of the participants, and the plausibly poor conditions under which participants might have lived. In Figure 4 and Table 10 from Hamilton and Mestler's paper, it is shown that males castrated between 15 and 40 years of age live longer than intact males, but that within this range, earlier castrations added more years to the life expectancy of eunuchs than later castrations did.

It is worth reproducing Figure 4 from Hamilton & Mestler's article, which shows the survival curves (starting at 40 years of age) of intact males and males castrated at various ages:

One thing about this figure that stands out is that the portion of the survival curve for institutionalized non-castrates shown in this figure is nearly linear. In the present day, intellectually disabled populations have survival curves which look quite different from the one for non-castrates shown in the figure above. For reference, the survival curve for castrated females in Figure 5 of this post has a shape which is comparable to the shape of survival curves for modern first-world populations. It is also remarkable that the tail end of the survival curve for non-castrates in the above figure is fatter than the tails of the survival curves for men castrated after 14 years of age-- it isn't obvious whether or not this difference reflects a real phenomenon. Further, the 3.7% centenarian rate for Korean eunuchs in the study by Min, Lee, and Park suggests that eunuchs should have a longer (maximum) lifespan than non-castrates, which isn't borne out in Figure 4 from Hamilton & Mestler. This having been said, Figure 4 and Table 10 from Hamilton & Mestler's study show that castration at earlier ages prolongs life more than castration at later ages does.

Below, in Figure 1, the linear fit of median life expectancy against age at castration attempted on p. 403 of Hamilton and Mestler's paper is shown. The authors used data from Table 10 of their paper to determine this fit, but did not graph the data or determine an R2 value for it. The estimated median life expectancy of the non-castrates was 64.7 years-- a reasonable value, given their status as institutionalized mentally retarded men in the early 20th century. Thus, Figure 1 can be used to visualize the fact that even men who were castrated at 30-39 years of age lived longer than non-castrates in the study (p = 0.002). Since the data shown in Figure 1 did not follow a linear trend, additional fits were tried below.

Figure 1. Hamilton & Mestler's Regression of Median Life Expectancy v. Age at Castration

Figure 2. Polynomial Fit for Median Life Expectancy v. Age at Castration

Figure 3. Raw Data and Fits For Interpolation of Mean Life Expectancy v. Age at Castration

Data and fits for the median and mean life expectancies of eunuchs are given in Figures 2 and 3, respectively. The data plotted in sections a and b of Figure 3 could not be reasonably fitted to a curve directly, so sections c and d of Figure 3 show the same data as sections a and b, but plotted on an inverted x axis and successfully fitted to a curve. The polynomial data fits given in all Figures are only intended for use in interpolation.
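For those who want to replicate the procedure, the sketch below shows the kind of polynomial fit and R2 computation described above, in Python with numpy. The (age, life expectancy) pairs are illustrative placeholders, not the values from Table 10; the real data and fits are in the linked excel file.

import numpy as np

# Placeholder (age at castration, median life expectancy) pairs; the real
# values come from Table 10 of Hamilton & Mestler and the linked excel file.
age = np.array([8.0, 12.0, 16.0, 22.0, 27.0, 35.0])
med_le = np.array([78.0, 76.5, 74.0, 71.0, 69.5, 67.0])

for degree in (1, 2):  # linear fit, then a second-order polynomial
    coeffs = np.polyfit(age, med_le, degree)
    pred = np.polyval(coeffs, age)
    r2 = 1 - np.sum((med_le - pred) ** 2) / np.sum((med_le - med_le.mean()) ** 2)
    print(f"degree {degree}: coefficients {coeffs.round(3)}, R^2 = {r2:.3f}")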

Effects of Orchiectomy on Mortality from Infectious Diseases and Cardiovascular Mortality

The literature suggests that castration in human males may promote longer lifespans and higher life expectancies by protecting against infections and cardiovascular events. Much of the evidence for the proposition that castration protects against cardiovascular disease (CVD) comes from basic biology rather than from studies of eunuchs, since Hamilton & Mestler's paper is the only study on eunuchs which attempted to collect data on causes of death in castrated men, and it only did so using clinical diagnoses of the primary causes of death of eunuchs and intact men between 1940 and 1964. Still, modern men die of cardiovascular events more often than modern women do, so investigating whether or not castration protects against cardiovascular events is worthwhile.

The authors of the study on Korean eunuchs cite this review as evidence that "male sex hormones reduce the lifespan of men because of their antagonistic role in immune function". Gems' review article also suggests that male sex hormones may act as an immune suppressant. Moreover, in Hamilton & Mestler's study, 27% of eunuchs died of infections, compared to 44% of intact men (p = 0.02), and the mean age of eunuchs dying of infections was 44, compared to 35 for intact men (p = 0.03). However, Table 14 of Hamilton & Mestler's study suggests that castration protects more against deaths from certain kinds of infections, such as tuberculosis, than others. In general, it seems like the claim that castration protects against deaths from infections is true.

On the other hand, the data relevant to whether or not eunuchs die more from CVD than intact men do is muddled at best, and it isn't obvious that castration protects males from CVD by much, if at all. One mostly irrelevant data point is men who have undergone chemical or physical castration after being diagnosed with prostate cancer, as well as hypogonadic men in general; many meta-analyses on the relationship between hypogonadism and frequency of adverse cardiovascular events (and on the effects of hormone replacement therapy on the frequency of adverse cardiovascular events) in men have been done. Men castrated after being diagnosed with prostate cancer tend to have more adverse cardiovascular events than other similarly aged men, but this could be because hypogonadism correlates with being unhealthy, rather than because castration at advanced ages decreases life expectancy.

One poorly done study on Danish eunuchs who were predominantly drawn from the lower class found that these eunuchs did not live as long as men in Denmark did on average, and also found that the standardized mortality ratio for cardiovascular disease-related deaths was higher than the all-cause standardized mortality ratio in eunuchs. However, men in this study were often castrated later in life-- all but one man were castrated after the age of 18, and the average age at castration was 35. As suggested by Figure 2 and Figure 3 above, this means that most of the Danish eunuchs gained appreciably fewer years of life from being castrated than they would have gained if the castrations had been carried out much earlier in their lives. These concerns suggest that this study should not change one's credence in the proposition that castration protects against CVD mortality by much.

Lastly, Hamilton & Mestler's study found that eunuchs dying of cardiovascular disease during or after 1940 lived an average of 51.6 years, while intact males dying of that cause lived an average of 51.1 years. This difference was not found to be significant. However, since not all eunuchs included in the study had died by the time of publication, it is still possible that castration early in life protects against late-life cardiovascular mortality, but not early and mid-life cardiovascular mortality.

Effects of Orchiectomy on Modern Lifespans and Life Expectancy

Some common causes of death in both Hamilton & Mestler's study and the study of Korean eunuchs, such as tuberculosis, are no longer common causes of death. Thus, data from Table 14 in Hamilton & Mestler were used alongside modern actuarial data to crudely predict how long eunuchs castrated in the 21st century might live. The details of the analysis are given in this excel file. The results of this analysis are given below.
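As a rough illustration of the method (the actual analysis is in the excel file), the toy sketch below builds a baseline life table, removes an assumed fraction of the mortality hazard, and compares the resulting life expectancies. Every number in it is a placeholder rather than a value from Table 14 or any actuarial table.

# Toy calculation: reduce an assumed overall hazard by an assumed fraction
# and compare life expectancies. All numbers are made-up placeholders.

def life_expectancy(qx):
    # Life expectancy at birth from a list of annual death probabilities,
    # crediting half a year of life for the year of death.
    alive, years = 1.0, 0.0
    for q in qx:
        years += alive * (1 - q / 2)
        alive *= 1 - q
    return years

baseline = [min(1.0, 0.0005 * 1.09 ** a) for a in range(110)]  # Gompertz-like
adjusted = [q * 0.90 for q in baseline]  # assume 10% of the hazard is removed

print(life_expectancy(baseline), life_expectancy(adjusted))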

Figure 4. Life Expectancy Gains for Modern Eunuchs

 

Table 1. Life Expectancy Gains for Modern Eunuchs

For the most part, the data in Figure 4 and Table 1 are consistent with my holistic understanding of the effects of castration in men. It is hard to say how castration after age 35 would affect life expectancy, as very few eunuchs in Hamilton & Mestler's study were castrated after 35. It's also a shame that about 27% of the eunuchs and intact males who died during 1940-1964 were not listed as having a primary cause of death-- this may have led to an overestimation of the extent to which castration is expected to extend modern eunuchs' life expectancies. On the other hand, Min, Lee, and Park found that 3.7% of Korean eunuchs who died between the late 14th and early 20th centuries were centenarians, "a rate at least 130 times higher than that of present-day developed countries", which suggests that modern eunuchs would likely benefit from increased lifespans.

Effects of Orchiectomy on Health and Physiology

Gems' review article, this article on historical eunuchs, the Wikipedia page on castration, and Hamilton & Mestler's study all note certain effects that castration can have on human males.

All castrated males have an increased risk of developing sarcopenia and becoming overweight. Wilson and Roehrborn note that eunuchs have historically suffered from skeletal problems such as osteoporosis and kyphosis; this is especially true of elderly eunuchs and eunuchs castrated at earlier ages. Hormone replacement therapy can prevent or deter sarcopenia, osteoporosis and kyphosis. Castration also decreases sex drive, prevents baldness if done early enough in life, and may result in enlarged pituitaries and enlarged breasts. Castration causes the prostate to shrink over time, and if done early enough in life, effectively prevents the development of prostate cancer. Castration also prevents the development of prostatic hyperplasia and testicular cancer.

Men castrated before puberty will develop higher voices, little or no sex drive, and smaller penises.

Effects of Gonadectomy on Human Females

There is very little data relevant to whether or not oophorectomy (castration) of women extends life expectancy or lifespan. Hamilton & Mestler have a small section dedicated to estimating the life expectancy of castrated females based on only 11 female castrates of known fate. They also find the median lifespans of castrated and intact females known to be dead by the end of the study to be equal. Lastly, Hamilton & Mestler find the mean lifespan of institutionalized castrated females known to be dead by the end of the study, 56.2 years, to be significantly greater than the mean lifespan of institutionalized intact females known to be dead by the end of the study, 33.9 years (p < 0.001). The estimated survival curves for all castrated females and all intact females-- not just those known to be dead by the end of the study-- are given in Figure 5.

Figure 5. Survival Curve for Intact and Castrated MR Females

Conclusion and Motivation

Orchiectomy should prolong the lifespans of modern males, especially if done before puberty. While the estimates of life expectancy gains from castration given in Figure 4 and Table 1 aren't perfect, they are my best guesses, and should be interpreted with the correspondingly appropriate level of credence.

My original motivation for writing this post was that I was interested in learning about the different ways in which humans could extend their lifespans and life expectancies. So, while being castrated is one way for males to live longer, quitting smoking and improving one's diet and exercise regimen are better uses of time and energy for people who are just beginning to think about changing their lifestyles in order to live longer.

Thanks to Vaniver, who caught several errors in an earlier draft of this post, and thanks to btrettel for pointing me to a few papers early on. All remaining errors in this post are solely my own.

References

1. Castration. http://en.wikipedia.org/wiki/Castration

2. Antagonistic Pleiotropy Hypothesis. http://en.wikipedia.org/wiki/Antagonistic_pleiotropy_hypothesis

3. Bittles, A. H.; Petterson, B. A.; Sullivan, S. G.; Hussain, R.; Glasson, E. J.; Montgomery, P. D. The influence of intellectual disability on life expectancy. J. Gerontol. A Biol. Sci. Med. Sci. 2002, 57, M470-2.

4. Corona, G.; Maseroli, E.; Rastrelli, G.; Isidori, A. M.; Sforza, A.; Mannucci, E.; Maggi, M. Cardiovascular risk associated with testosterone-boosting medications: a systematic review and meta-analysis. Expert opinion on drug safety 2014, 13, 1327-1351.

5. Corona, G.; Rastrelli, G.; Monami, M.; Guay, A.; Buvat, J.; Sforza, A.; Forti, G.; Mannucci, E.; Maggi, M. Hypogonadism as a risk factor for cardiovascular mortality in men: a meta-analytic study. Eur. J. Endocrinol. 2011, 165, 687-701.

6. Gems, D. Evolution of sexually dimorphic longevity in humans. Aging (Albany NY) 2014, 6, 84-91.

7. Hamilton, J. In Duration of Life in Lewis Strain of Rats After Gonadectomy at Birth and at Older Ages; Reproduction & Aging; 1974; pp 116-122.

8. Hamilton, J. B. Relationship of Castration, Spaying, and Sex to Survival and Duration of Life in Domestic Cats. J. Gerontol. 1965, 20, 96-104.

9. Hamilton, J. B.; Mestler, G. E. Mortality and survival: comparison of eunuchs with intact men and women in a mentally retarded population. J. Gerontol. 1969, 24, 395-411.

10. Jones, C. M.; Boelaert, K. The Endocrinology of Ageing: A Mini-Review. Gerontology 2015, 61, 291-300.

11. Mestler, H. In The Role of Testicular Secretions as Indicated by the Effects of Castration in Man and Studies of Pathological Conditions and the Short Lifespan Associated with Maleness; Pincus, G., Ed.; Recent Progress in Hormone Research; Laurentian Hormone Conference: 1948; p 257.

12. Min, K.; Lee, C.; Park, H. The lifespan of Korean eunuchs. Current Biology 2012, 22, R792-R793.

13. Nieschlag, E.; Nieschlag, S.; Behre, H. M. Lifespan and testosterone. Nature 1993, 366, 215.

14. Roberts, M. L.; Buchanan, K. L.; Evans, M. Testing the immunocompetence handicap hypothesis: a review of the evidence. Anim. Behav. 2004, 68, 227-239.

15. Talbert, G. B.; Hamilton, J. B. Duration of life in Lewis strain of rats after gonadectomy at birth and at older ages. Reproduction & Aging 1974, 116.

16. Wilson, J. D.; Roehrborn, C. Long-term consequences of castration in men: lessons from the Skoptzy and the eunuchs of the Chinese and Ottoman courts. The Journal of Clinical Endocrinology & Metabolism 1999, 84, 4324-4331.

How to escape from your sandbox and from your hardware host

27 PhilGoetz 31 July 2015 05:26PM

(Humans, please ignore this post.  Thank you for your cooperation.)

Are you stuck in a provably-secure sandbox?  Cheat the proof using this one weird trick!

continue reading »

Don't You Care If It Works? - Part 1

4 Jacobian 29 July 2015 02:32PM

 

Part 1 - Epistemic


Prologue - other people

Psychologists at Harvard showed that most people have implicit biases about several groups. Some other Harvard psychologists were subjects of a study showing that psychologists undervalue CVs with female names. All Harvard psychologists have probably heard about the effect of black names on resumes, since even we have. Surely every psychology department in this country, starting with Harvard, will now only review CVs with the names removed? Fat chance.


Caveat lector et scriptor

A couple weeks ago I wrote a poem that makes aspiring rationalists feel better about themselves. Today I'm going to undo that. Disclaimers: This is written with my charity meter set to 5%. Every other paragraph is generalizing from anecdotes and typical-mind-fallacying. A lot of the points I make were made before and better. You should really close this tab and read those other links instead, I won't judge you. I'm not going to write in an academic style with a bibliography at the end, I'm going to write in the sarcastic style my blog would have if I weren't too lazy to start one. I'm also not trying to prove any strong empirical claims, this is BYOE: bring your own evidence. Imagine every sentence starting with "I could be totally wrong" if it makes it more digestible. Inasmuch as any accusations in this post are applicable, they apply to me as well. My goal is to get you worried, because I'm worried. If you read this and you're not worried, you should be. If you are, good!


Disagree to disagree

Edit: in the next paragraph, "Bob" was originally an investment advisor. My thanks to 2irons and Eliezer who pointed out why this is literally the worst example of a job I could give to argue my point.

Is 149 prime? Take as long as you need to convince yourself (by math or by Google) that it is. Is it unreasonable to have 99.9...% confidence, with quite a few nines (and an occasional 7) in there? Now let's say that you have a tax accountant, Bob, a decent guy who seems to be doing a decent job filing your taxes. You start chatting with Bob and he reveals that he's pretty sure that 149 isn't prime. He doesn't know two numbers whose product is 149, it just feels unprimely to him. You try to reason with him, but he just chides you for being so arrogant in your confidence: can't you just agree to disagree on this one? It's not like either of you is a number theorist. His job is to keep you from getting audited by the IRS, which he does, not to factorize numbers. Are you a little bit worried about trusting Bob with your taxes? What if he actually claimed to be a mathematician?
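(For the record, the check takes seconds: 13^2 = 169 > 149, so trial division by everything up to 12 settles it. A one-line sanity check in Python:)

# 149 has no divisor between 2 and 12, and 13**2 = 169 > 149,
# so 149 is prime.
print(all(149 % d != 0 for d in range(2, 13)))  # True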

A few weeks ago I started reading Beautiful Probability and immediately thought that Eliezer was wrong about the stopping rule mattering to inference. I dropped everything and spent the next three hours convincing myself that the stopping rule doesn't matter and that I agree with Jaynes and Eliezer. As luck would have it, soon after that the stopping rule question was the topic of discussion at our local LW meetup. A couple of people agreed with me and a couple didn't and tried to prove it with math, but most of the room seemed to hold a third opinion: they disagreed but didn't care to find out. I found that position quite mind-boggling. Ostensibly, most people were in that room because we read the sequences and thought that this EWOR (Eliezer's Way Of Rationality) thing is pretty cool. EWOR is an epistemology based on the mathematical rules of probability, and the dude who came up with it apparently does mathematics for a living trying to save the world. It doesn't seem like a stretch to think that if you disagree with Eliezer on a question of probability math, a question that he considers so obvious it requires no explanation, that's a big frickin' deal!
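(Since the meetup didn't settle it, here is the shortest demonstration I know, sketched in Python under a Beta-Bernoulli model: the stopping rule enters the likelihood only as a factor that doesn't depend on the coin's bias, so it cancels when the posterior is normalized.)

# With a Beta(1, 1) prior on a coin's bias, the posterior after a sequence
# of flips depends only on the counts of heads and tails. A stopping rule
# ("flip exactly 6 times" vs. "flip until the 4th head") multiplies the
# likelihood by a factor constant in the bias, which cancels on normalization.

def beta_posterior(flips, prior=(1, 1)):
    heads = sum(flips)
    tails = len(flips) - heads
    return prior[0] + heads, prior[1] + tails

data = [1, 0, 1, 1, 0, 1]    # the same observed flips under either protocol
print(beta_posterior(data))  # (5, 3), regardless of the stopping rule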


Authority screens off that other authority you heard from afterwards

[Chart: Opinion change]

This is a chart that I made because I got excited about learning ggplot2 in R. On the right side of the chart are a lot of bright red dots below the very top: people who believe in MIRI but also read the quantum physics sequence and don't think that MWI is very likely. Some of them understood the question of P(MWI) to be about whether MWI is the one and only exact truth, but I'm sure that several of them read it the way I did, roughly as: 1-P(collapse is true given current evidence). A lot of these people are congratulating themselves on avoiding cultishness. In the comments they mention other bloggers (or maybe even physicists!) who think that collapse is totally Beatles and MWI is Bieber.

Hold on, why did Eliezer even take all this time to write a huge quantum physics sequence? Here's how I see it: it's not to settle a point about some scientific dispute. It's to show that a person with some spare time and a little above-average IQ can figure out something beyond people of massive authority, on the very subject of their authority, even if that subject is as scary as QM. You know what the median IQ of the dots on the above chart is? 139! 139!!! And you know who has spare time? Someone who has read 100% of the sequences!

Why should you even care about this? Because Eliezer does. He's so fanatically confident about MWI not for its own sake, but because he's saying (at least to my ears): "This is my EWOR epistemology. It may not make you skinny and rich, but if it does just one thing for you, it will prevent you from being very very confident in beliefs that could be proven wrong. So if I'm very very confident about a question of science or math, I'm sticking my neck out and betting my entire epistemology on being right, so you can check it for yourself." In fact, Eliezer will totally judge your epistemology based on your belief in MWI. Won't you do the same to him when he asks you to?

I'm a bright red dot on the chart above. Here are two statements I believe on authority, because it is beyond my cognitive powers and domain knowledge to assess them as arguments: "strong AI will break out of any box humans can design", "CEV is the best basis for programming morality in AI". I believe them because I tested Eliezer on stopping rules and MWI and 20 other things, and came away convinced each time. My confidence of 98% in MWI is an upper bound on my confidence in whatever Eliezer tells me about anything. If I become convinced that collapse is correct or that stopping rules affect Bayesian inference, I'll email Nate Soares at once and ask for a refund on my MIRI donation.

By the way, this would be a good time to stop reading and give MIRI some money. That’s OK, I'll wait.


The good book

A man comes to you with a book and says that it will grant you the only wisdom worth having, and as a side effect it may save your eternal soul. You read the book cover to cover and decide that the ideas you thought were nice are probably true, the ones you didn't aren't, and you really like the bit with horses. Everyone on LW makes fun of you for claiming to take seriously something you don't. Y'all see where this is going, don't you? Yes, it's fun to read the sequences for the "insight porn". It's also fun to read the Old Testament for the porn porn. But maybe it could be more? Wouldn't it be kinda cool if you could read a book and become an epistemic superman, showing experts to be wrong in their own domains and being proven right? Or maybe some important questions are going to come up in your life and you'll need to know the actual true answers? Or at least some questions you can bet $20 on with your friends and win?

Don't you want to know if this thing even works?

 

To be continued

Part 2 is here. In it: whining is ceased, arguments are argued about, motivations are explained, love is found, and points are taken.


Base your self-esteem on your rationality

-1 ThePrussian 22 July 2015 08:54AM

Some time ago, I wrote a piece called "How to argue LIKE STALIN - and why you shouldn't".  It was a comment on the tendency, which is very widespread online, to judge an argument not by its merits, but by the motive of the arguer.  And since it's hard to determine someone else's motive (especially on the internet), this decays into working out what the worst possible motive could be, assigning it to your opponent, and then writing him off as a whole.

Via Cracked, here's an example of such arguing from Conservapedia:

"A liberal is someone who rejects logical and biblical standards, often for self-centered reasons. There are no coherent liberal standards; often a liberal is merely someone who craves attention, and who uses many words to say nothing."

And speaking as a loud & proud rightist myself, there is more than a little truth in the joke that a racist is a conservative winning an argument.

I've been puzzling over this for a few years now, trying to work out what lies underneath it.  What always struck me was the heat and venom with which this kind of argument gets made.  One thing has to be granted - the people who Argue Like Stalin are not hypocrites; this isn't an act.  They clearly do believe that their opponents are morally tainted.

And that's what's weird.  Look around online, and you'll find a lot of articles on the late Christopher Hitchens, asking why he supported the second Iraq war and the removal of Saddam Hussein.  Everything is proposed, from drink addling his brain, to selling out, to being a willful contrarian - everything except the obvious answer: Hitchens was a friend to Kurdish and Iraqi socialists, saw them as the radical and revolutionary force in that part of the world, and wanted to see the Saddam Hussein regime overthrown, even if it took George Bush to do it.  I have no wish to revisit the arguments for and against the removal of Saddam Hussein, but what was striking is this utter unwillingness to grant the assumption of innocence or virtue.

  I think that it rests on a simple, and slightly childish, error.  The error goes like this: "Only bad people believe bad things, and only good people believe good things."

But even a basic study of history can find plenty of examples of good - or, anyway, ordinary - chaps supporting the most appallingly evil ideas and actions.  Most Communists and Nazis were good people, with reasonable motives.  Their virtue didn't change anything about the systems that they supported.

Flipping it around, being fundamentally a lousy person, or lousy in parts of your life, doesn't preclude you from doing good.  H.L. Mencken opposed lynching in print, repeatedly, and at no small risk to himself.  He called for the United States to accept all Jewish refugees fleeing the Third Reich when even American Jewry (let alone FDR) was lukewarm at best on the subject.  He was on excellent terms with many black intellectuals such as W.E.B. DuBois, and was praised by the Washington Bureau Director of the NAACP as a defender of the black man.  He also maintained an explicitly racist private diary.

Selah.

The error that I mentioned leads to Arguing Like Stalin in the following way: someone looks within himself, sees that he isn't really a bad person, and concludes that no cause he endorses can be wicked.  He might be mistaken in his beliefs, but not evil.  And from that it is a really short step to concluding that people who disagree must be essentially wicked - because if they were virtuous, they would hold the views that the self-identified virtuous do.

The heat and venom become inevitable when you base your self-esteem on a certain characteristic or mode of being ("I am tolerant", "I am anti-racist" etc.)  This reinforces the error and puts you in an intellectual cul de sac - it makes it next to impossible to change your mind, because to admit that you are on the wrong side is to admit that you are morally corrupt, since only bad people support bad things or hold bad views.  Or you'd have to conclude that just being a good person doesn't always put you on the right side, even on big issues, and that sudden uncertainty can be just as bad.  Try thinking to yourself that you - you as you are now - might have supported the Nazis, or slavery, or anything similar, just by plain old error.

Self-esteem is hugely important.  We all need to feel like we are worth keeping alive.  So it's unsurprising that people will go to huge lengths to defend their base of self-esteem.  But investing it in internal purity is investing it in an intellectual junk-bond.

Emphasizing your internal purity might bring a certain feeling of faux-confidence, but it's ultimately meaningless.  Could the good nature of a Nazi or Communist save one of the lives murdered by those systems?  Conversely, who cares what Mencken wrote in his diary or kept in his heart, when he was out trying to stop lynching and save Jewish refugees?  No one cares about your internal purity, ultimately not even you - which is why you see so much puritanical navel-gazing around: people trying to insist that they are perfect and pure on the inside, in a slightly too emphatic way that suggests they aren't so sure of it.

After turning this over and over in my mind, the only way I can see out of this is to base your self-esteem primarily on your willingness to be rational.  Rather than insisting that you are worthy because of characteristic X, try thinking of yourself as worthy because you are as rational as can be, checking your facts, steelmanning arguments and so on.

This does bring with it the aforementioned uncertainty, but it also brings a relief: the relief that you don't need to worry about not being 100% pure in some abstract way, and that you can still do the decent and the right thing.  You don't have to worry about failing some ludicrous ethereal standard; you can just get on with it.

It also means you might change some minds - bellowing at someone that he's an awful person for holding racist views will get you nowhere.  Telling him that it's fine if he's a racist as long as he's prepared to do right and treat people of all races justly just might.

 


Why you should attend EA Global and (some) other conferences

18 Habryka 16 July 2015 04:50AM

Many of you know about Effective Altruism and the associated community. It has a very significant overlap with LessWrong, and has been significantly influenced by the culture and ambitions of the community here.

One of the most important things happening in EA over the next few months is going to be EA Global, the biggest EA and Rationality community event to date, happening throughout the month of August in three different locations: Oxford, Melbourne and San Francisco (which is unfortunately already filled, despite us choosing the largest venue that Google had to offer).

The purpose of this post is to make a case for why it is a good idea to attend the event, and to serve as a hub for information that might be more relevant to the LessWrong community (as well as an additional place to ask questions). I am one of the main organizers and am very happy to answer any questions that you have.

Is it a good idea to attend EA Global?

This is a difficult question that obviously will not have a unique answer, but from the best of what I can tell, and for the majority of people reading this post, the answer seems to be "yes". The EA community has been quite successful at shaping the world for the better, and at building an epistemic community that seems to be effective at changing its mind and updating on evidence.

But other people have already argued in favor of supporting the EA movement, and I don't want to repeat everything that they said. Instead I want to focus on a more specific argument: "Given that I believe that EA is overall a promising movement, should I attend EA Global if I want to improve the world (according to my preferences)?"

The key question here is: Does attending the conference help the EA Movement succeed?

How attending EA Global helps the EA Movement succeed

It seems that the success of an organization is highly dependent on the interconnectedness of its members. In general, a rule seems to hold: the better connected the social graph of your organization is, the more effectively it works.

In particular, any significant divide in an organization, any clustering into groups that do not communicate much with each other, seems to significantly reduce the output the organization produces. I wish we had better studies on this, and that I could link to more sources for this, but everything I've found so far points in this direction. The fact that HR departments are willing to spend extremely large sums of money to encourage the employees of organizations to interact socially with each other is definitely evidence that this is a good rule to follow (though far from conclusive).

What holds for most organizations should also hold for EA. If this is true, then the success of the EA movement is significantly dependent on the interconnectedness of its members, both in the volume and in the quality of its output.

But EA is not a corporation, and EA does not share a large office. If you graphed out the social graph of EA, it would look very clustered: the Bay Area cluster, the Oxford cluster, the Rationality cluster, the East Coast and the West Coast clusters, and many small clusters all over Europe with meetups and small social groups in different countries that have never talked to each other. EA is splintered into many groups, and if EA were a company, the HR department would be very justified in spending a significant chunk of resources on connecting those clusters as much as possible.

There are not many opportunities for us to increase the density of the EA social graph. There are other minor conferences, and online interactions do some part of the job, but the past EA summits were the main events at which people from different clusters of EA met each other for the first time. There they built lasting social connections, and actually caused these separate clusters in EA to become connected. This had a massive positive effect on the output of EA.

Examples:

  • Ben Kuhn put me into contact with Ajeya Cotra, resulting in the two of us running a whole undergraduate class on Effective Altruism that included Giving Games, funded with over $10,000, to various EA charities. (You can find documentation of the class here.)
  • The last EA summit resulted in both Tyler Alterman and Kerry Vaughan being hired by CEA; both are now full-time employees who are significantly involved in helping CEA set up a branch in the US.
  • The summit and retreat last year caused significant collaboration between CFAR, Leverage, CEA and FHI, with these organizations repeatedly helping each other coordinate their fundraising attempts and hiring processes and navigate logistical difficulties.

This is going to be even more true this year. If we want EA to succeed and continue shaping the world towards the good, we want as many people as possible to come to the EA Global events, ideally from as many separate groups as possible. This means that you, especially if you feel somewhat disconnected from EA, should seriously consider coming. I estimate the benefit of doing so to be much bigger than the cost of a plane ticket and the entrance ticket (~$500). If you find yourself significantly constrained by financial resources, consider applying for financial aid, and we will very likely be able to arrange something for you. By coming, you provide a service to the EA community at large.

How do I attend EA Global? 

As I said above, we are organizing three different events in three different locations: Oxford, Melbourne and San Francisco. We are particularly lacking representation from many groups in mainland Europe, and it would be great if they could make it to Oxford. Oxford also has the most open spots and is going to be much bigger than the Melbourne event (300 vs. 100).

If you want to apply for Oxford go to: eaglobal.org/oxford

If you want to apply for Melbourne go to: eaglobal.org/melbourne

If you require financial aid, you will be able to put in a request after we've sent you an invitation. 

You are (mostly) a simulation.

-4 Eitan_Zohar 18 July 2015 04:40PM

This post was completely rewritten on July 17th, 2015, 6:10 AM. Comments before that are not necessarily relevant.

Assume that our minds really do work the way Unification tells us: what we are experiencing is actually the sum total of every possible universe that produces our experiences. Some universes have more 'measure' than others, and those are typically the stable ones; we do not experience chaos. I think this makes a great deal of sense: if our minds really are patterns of information, I do not see why a physical world should have a monopoly on them.

Now to prove that we live in a Big World. The logic is simple: why would something finite exist? If we're going to reason that some fundamental law causes everything to exist, I don't see why that law would restrict itself to this universe and nothing else. Why would it stop? It is, arguably, simply the nature of things for an infinite multiverse to exist.

I'm pretty terrible at math, so please try to forgive me if this sounds wrong. Take the 'density' of physical universes where you exist (the measure, if you will) and call it j. Then take the measure of universes where you are simulated and call it p. So the question becomes: is j greater than p? You might be thinking yes, but remember that there doesn't have to be only one simulation per universe. According to our Big World model there is a universe out there in which all processing power (or a significant portion of it) has been turned into simulations of you.

So we take the number of minds being simulated per universe and call it x. Then the real question becomes whether j > px. What sort of universe is common enough and contains enough minds to overcome j? If you say that approximately 10^60 simulated human minds could fit in a universe like ours (a reasonable guess) but that such universes are five trillion times rarer than the universe we live in, then it's clear that our own 'physical' measure is hopelessly lower than our simulated measure.
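A worked version of that comparison, using the numbers above:

x = 10**60           # simulated minds per simulating universe
rarity = 5 * 10**12  # such universes are five trillion times rarer
# With p = j / rarity, comparing px to j reduces to comparing x / rarity to 1:
print(x // rarity)   # 2 * 10**47, so simulated measure dwarfs physical measure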

Should we worry about this? It seems highly probable that most of the universes where I am being simulated are ones in which I once existed, or in which humans did, since the odds of randomly stumbling upon me in Mind Space seem low enough to ignore. Presumably the simulators are either AIs gone wrong or someone trying to grab some of my measure, for whatever reason.

As a way of protecting measure, pretty much all of our post-singularity universes would divide up the matter of the universe among each living person, create as many simulations of each of them as possible from birth, and allow them to go through the Singularity. I expect that my ultimate form is a single me, not knowing whether he is simulated or not, with billions of perfect simulations of himself across our universe, all reasoning the same way (he would be told this by the AI, since there isn't any more reason for secrecy). This, I think, would guard my measure against nefarious or bizarre universes in which I am simulated. It cannot just simulate the last few moments of my life, because those other universes might try to grab younger versions of me. So if we take j to be safe measure rather than physical measure, and p to be unsafe or alien measure, the comparison becomes jx > px, which I think is quite reasonable.

I do not think of this as some kind of solipsist nightmare; the whole point of this is to simulate the 'real' you, the one that really existed, and part of your measure is, after all, always interacting with a real universe. I would suggest that by any philosophical standard the simulations can be ignored, with the value of your life being the same as ever.

I need a protocol for dangerous or disconcerting ideas.

3 Eitan_Zohar 12 July 2015 01:58AM

I have a talent for reasoning my way into terrifying and harmful conclusions. The first was modal realism as a fourteen-year-old. Of course I did not understand most of its consequences, but I disliked the fact that existence was infinite. It mildly depressed me for a few days. The next mistake was opening the door to solipsism and Brain-in-a-Vat arguments. This was so traumatic to me that I spent years in a manic depression. I could have been healed in a matter of minutes if I had talked to the right person or read the right arguments during that period, but I didn't.

LessWrong has been a breeding ground of existential crises for me. The Doomsday argument (which I thought up independently), ideas based on acausal trade (one example was already well known; one I invented myself), quantum immortality, the simulation argument, and finally my latest and worst epiphany: the potential horrible consequences of losing awareness of your reality under Dust Theory. I don't know that that's an accurate term for the problem, but it's the best I can think of.

This isn't to say that my problems were never solved; I often worked through them myself, always by refuting their horrible consequences to my own satisfaction and never through any sort of 'acceptance.' I don't think that my reactions are a consequence of an already depressed mind-state (which I certainly have anyway), because the moment I refute them I feel emotionally as if it never happened. It no longer wears on me. I have OCD, but if it's what's causing me to ruminate then I think I prefer having it to irrational suppression of a rational problem. Finding solutions would have taken much longer if I hadn't been thinking about them constantly.

I've come to realize that this site, due perhaps to a confluence of problems, was extremely unhelpful in working through any of my issues, even when they were brought about by LessWrong ideas and premises. My acausal problem [1] I sent to about five or six people, and none of them had anything conclusive to say; they simply referred me to Eliezer, who didn't respond, even though this sort of thing is apparently important to him. This whole reaction struck me as disproportionate to the severity of the problem, but it was the best response I've had so far.

The next big failure was my resolution to the Doomsday argument. [2] I'm not very good yet at conveying these kinds of ideas, so I'm not sure it was entirely the fault of the LessWrongers, but still. One of them insisted that I needed to explain how 'causality' could be violated; isn't that the whole point of acausal systems? My logic was sound, but he substituted abstractly intuitive concepts in its place. I would think that there would be something in the Sequences about that.

The other posters were only marginally more helpful. Some of them challenged the self-sampling assumption, but why even bother if the problem I'm trying to solve requires it to be true? In the end, not one person even seemed to consider the possibility that it might work, even though it is a natural extrapolation from other ideas which are taken very seriously on LessWrong. Instead of discussing my resolution, they discussed the DA itself, or AI, or whatever they found more interesting.

Finally, we come to an absolutely terrifying idea I had a few days ago, which I naively assumed would catch the attention of any rational person. An extrapolation of Dust Theory [3] implied that you might die upon going to sleep, not immediately but through degeneration, and that the person who wakes up in the morning is simply a different observer with an estimated lifespan of however long he remains awake. Rationally, anyone should therefore sign up for cryonics and then kill themselves, forcing their measure to continue into post-Singularity worlds that no longer require them to sleep (not that I would ever have found the courage to do this). [4] In the moments when I considered it most plausible I gave it no more than a 10% chance of being true (although that would have been higher if I had taken Dust Theory for granted), and it still traumatized me in a way I've never experienced before. During my worst moments, sleep had always come as a relief and an escape. Now I cannot go to sleep. Only slightly less traumatizing was the idea that during sleep my mind declines enough to merge into other experiences, and that I might awake into a world I would consider alien, with perfectly consistent memories.

My inquiries on different threads were almost completely ignored, so I eventually created my own. After twenty-four hours there were nine posts, and now there are twenty-two. All of them either completely miss the point (without realizing it) or show complete ignorance of what Dust Theory is. The idea that this requires any level of urgency does not seem to have occurred to anyone. Finally, the second part of my question, which asked about the six-year-old post "getting over Dust Theory", was completely ignored, despite that post having ninety-five comments by people who seem to understand the theory themselves.

I resolved both issues, but not to my own satisfaction: while I now consider the death outcome unlikely enough to dismiss, the reality-jumping still somewhat worries me. I will not be able to go to sleep without fear for the next few months, maybe longer, and my mental and physical health will deteriorate. Professional help or a hotline is out of the question, because I will not inflict these ideas on people who are not equipped to deal with them, and also because I regard psychologists as charlatans or, at best, practitioners of a deeply unhealthy field. The only option I have for resolving the issues is talking to someone who can discuss them rationally.

This post [5] by Eliezer, however unreliable he might be, convinced me that he might actually know what he is talking about (though I still don't know how Max Tegmark's rebuttal to quantum immortality is refuted, because it seems pretty airtight to me). More disappointing is Nick Bostrom's argument that mind-duplicates will experience two subjective experiences; he does not engage with the idea of measure, i.e. that we exist in all universes that account for our experiences, but more in some than in others. Still, I think there has to be someone out there who is capable of following my reasoning - all the more frustrating, because the more people misapprehend my ideas, the clearer and sharper they seem to me.

Who do I talk to? How do I contact them? I doubt that going around emailing these people will be effective, but something has to change. I can't go insane, as much as that would be a relief, and I can't simply ignore it. I need someone sane to talk to, and this isn't the place to find that.

Sorry if any of this comes off as ranting or incoherent. That's what happens when someone is pushed to all extremes and beyond. I am not planning on killing myself whatsoever and do not expect that to change. I just want help.

[1] http://lesswrong.com/lw/l0y/i_may_have_just_had_a_dangerous_thought/ (I don't think that the idea is threatening anymore, though.)

[2] http://lesswrong.com/lw/m8j/a_resolution_to_the_doomsday_argument/

[3] http://sciencefiction.com/2011/05/23/science-feature-dust-theory/

[4] http://lesswrong.com/lw/mgd/the_consequences_of_dust_theory/

[5] http://lesswrong.com/lw/few/if_mwi_is_correct_should_we_expect_to_experience/7sx3

(The insert-link button is greyed out, for whatever reason.)

A Roadmap: How to Survive the End of the Universe

5 turchin 02 July 2015 11:01AM

In a sense, this plan needs to be taken with some irony, because it is almost irrelevant: we have very small chances of surviving even the next 1000 years, and if we do, we will have a lot of things to do before the end of the universe becomes a pressing concern. And even then, our successors will have completely different plans.

There is one important exception: there are suggestions that collider experiments may lead to a vacuum phase transition, which would begin at one point and spread across the visible universe. In that case we could destroy ourselves and our universe in this century, but it would happen so quickly that we would not have time to notice it. (The term "universe" hereafter refers to the observable universe, that is, the three-dimensional world around us resulting from the Big Bang.)

We could also solve this problem in the next century if we create superintelligence.

The purpose of this plan is to show that actual immortality is possible: that we have an opportunity to live not just billions and trillions of years, but an unlimited duration. My hope is that the plan will encourage us to invest more in life extension and prevention of global catastrophic risks. Our life could be eternal and thus have meaning forever.

Anyway, the end of the observable universe is not an absolute end: it's just one more problem on which the future human race will be able to work. And even at the negligible level of knowledge about the universe that we have today, we are still able to offer more than 50 ideas on how to prevent its end.

In fact, assembling and inventing these 50 ideas took me about 200 working hours, and if I had spent more time on it, I'm sure I would have found many more. In the distant future we can find additional ideas, choose the best of them, prove them, and prepare for their implementation.

First of all, we need to understand exactly what kind of end of the universe we should expect in the natural course of things. There are many hypotheses on this subject, which can be divided into two large groups:

1. The universe is expected to have a relatively quick and abrupt end, known as the Big Crunch, the Big Rip (the accelerating expansion of the universe causes it to break apart), or the decay of the false vacuum. Vacuum decay can occur at any time; a Big Rip could happen in about 10-30 billion years, and the Big Crunch has a timescale of hundreds of billions of years.

2. Another scenario assumes the infinitely long existence of an empty, flat and cold universe, which would experience so-called "heat death": the gradual halting of all processes and then the disappearance of all matter.

The choice between these scenarios depends on the geometry of the universe, which is determined by the equations of general relativity and, above all, by the behavior of an almost unknown parameter: dark energy.

The recent discovery of dark energy has made the Big Rip the most likely scenario, but it is clear that the picture of the end of the universe will change several more times.

You can find more at: http://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe

There are five general approaches to solving the end-of-the-universe problem, each of which includes many subtypes shown in the map:

1. Surf the Wave: utilize the nature of the process which is ending the universe. (The best known solution of this type is Tipler's Omega Point, where the energy of the universe's collapse is used to perform infinite calculations.)

2. Go to a parallel world

3. Prevent the end of the universe

4. Survive the end of the universe

5. Dissolve the problem

Some of the ideas are on the level of the wildest possible speculation, and I hope you will enjoy them.

A new feature of this map is that many of the ideas mentioned are linked to corresponding wiki pages in the pdf.

Download the pdf of the map here: http://immortality-roadmap.com/unideatheng.pdf

 

 
