
Cheerful one-liners and disjointed anecdotes

Romashka 13 February 2016 07:40PM

It would be good to have a way of telling people what they should expect from jobs - especially "intellectual" jobs - they consider taking. NOT how easy or lousy the work is going to turn out to be, just what might happen and approximately what they will have to do, so that they can decide whether they want this.


On 'Why Global Poverty?'

5 Gram_Stone 13 February 2016 06:04AM

For context, Jeff Kaufman delivered a speech on effective altruism and cause prioritization at EA Global 2015 entitled 'Why Global Poverty?', which he has transcribed and made available here. It's certainly worth reading.

I was dissatisfied with this speech in some ways. For the sake of transparency and charity, I will say that Kaufman has written a disclaimer explaining that, because of a miscommunication, he wrote this speech in the span of two hours immediately before he delivered it (instead of eating lunch, I would like to add), and that even after writing the text version, he is not entirely satisfied with the result.

I'm not that familiar with the EA community, but I predict that debates about cause prioritization, especially when existential risk mitigation is among the causes being discussed, can become mind-killed extremely quickly. And I don't mean to convey that in the tone of a wise outsider. It makes sense, considering the stakes at hand and the eschatological undertones of existential risk. (That is to say that the phrase 'save the world' can be sobering or gross, depending on the individual.) So, as is always implicit, but is sometimes worth making explicit, I'm criticizing some arguments as I understand them, not any person. I write this precisely because rationality is a common interest of many causes. I'll be focusing on the part about existential risk, as well as the parts that it is dependent upon. Lastly, I'd be interested in knowing if anyone else has criticized this speech in writing or come to conclusions similar to mine. Without further ado:

Jeff Kaufman's explanation of EA and why it makes sense is boilerplate; I agree with it, naturally. I also agree with the idea that certain existential risk mitigation strategies are comparatively less neglected by national governments and thus that risks like these are considerably less likely to be where one can make one's most valuable marginal donation. E.g., there are people who are paid to record and predict the trajectories of celestial objects, celestial mechanics is well-understood, and an impact event in the next two centuries is, with high meta-confidence, far less probable than many other risks. You probably shouldn't donate to asteroid impact risk mitigation organizations if you have to choose a cause from the category of existential risk mitigation organizations. The same goes for most natural (non-anthropogenic) risks.

The next few parts are worth looking at in detail, however:

At the other end we have risks like the development of an artificial intelligence that destroys us through its indifference. Very few people are working on this, there's low funding, and we don't have much understanding of the problem. Neglectedness is a strong heuristic for finding causes where your contribution can go far, and this does seem relatively neglected. The main question for me, though, is how do you know if you're making progress?

Everything before the question seems accurate to me. Furthermore, if I interpret the question correctly, then what's implied is a difference between the observable consequences of global poverty mitigation and existential risk mitigation. I think the implied difference is fair. You can see the malaria evaporating but you only get one chance to build a superintelligence right. (It's worth saying that AI risk is also the example that Kaufman uses in his explanation.)

However, I don't think that this necessarily implies that we can't have some confidence that we're actually mitigating existential risks. This is clear if we dissolve the question. What are the disguised queries behind the question 'How do you know if you're making progress?' If your disguised query is 'Can I observe the consequences of my interventions and update my beliefs and correct my actions accordingly?', then in the case of existential risks, the answer is "No", at least in the traditional sense. If your disguised query is 'Can I have confidence in the effects of my interventions without observing their consequences?', then that seems like a different, much more complicated question that is both interesting and worth examining further. I'll expand on this conceivably more controversial bit later, so that it doesn't seem like I'm being uncharitable or quoting out of context. Kaufman continues:

First, a brief digression into feedback loops. People succeed when they have good feedback loops. Otherwise they tend to go in random directions. This is a problem for charity in general, because we're buying things for others instead of for ourselves. If I buy something and it's no good I can complain to the shop, buy from a different shop, or give them a bad review. If I buy you something and it's no good, your options are much more limited. Perhaps it failed to arrive but you never even knew you were supposed to get it? Or it arrived and was much smaller than I intended, but how do you know. Even if you do know that what you got is wrong, chances are you're not really in a position to have your concerns taken seriously.

This is a big problem, and there are a few ways around this. We can include the people we're trying to help much more in the process instead of just showing up with things we expect them to want. We can give people money instead of stuff so they can choose the things they most need. We can run experiments to see which ways of helping people work best. Since we care about actually helping people instead of just feeling good about ourselves, we not only can do these things, we need to do them. We need to set up feedback loops where we only think we're helping if we're actually helping.

Back to AI risk. The problem is we really really don't know how to make good feedback loops here. We can theorize that an AI needs certain properties not to just kill us all, and that in order to have those properties it would be useful to have certain theorems proved, and go work on those theorems. And maybe we have some success at this, and the mathematical community thinks highly of us instead of dismissing our work. But if our reasoning about what math would be useful is off there's no way for us to find out. Everything will still seem like it's going well.

I think I get where Kaufman is coming from on this. First, I'm going to use an analogy to convey what I believe to be the commonly used definition of the phrase 'feedback loop'.

If you're an entrepreneur, you want your beliefs about which business strategies will be successful to be entangled with reality. You also have a short financial runway, so you need to decide quickly, which means that you have to obtain your evidence quickly if you want your beliefs to be entangled in time for it to matter. So immediately after you affect the world, you look at it to see what happened and update on it. And this is virtuous.

And of course, people are notoriously bad at remaining entangled with reality when they don't look at it. And this seems like an implicit deficiency in any existential risk mitigation intervention; you can't test the effectiveness of your intervention. You succeed or fail, one time.

Next, let's taboo the phrase 'feedback loop'.

So, it seems like there's a big difference between first handing out insecticidal bed nets and then looking to see whether or not the malaria incidence goes down, and paying some mathematicians to think about AI risk. When the AI researchers 'make progress', where can you look? What in the world is different because they thought instead of not, beyond the existence of an academic paper?

But a big part of this rationality thing is knowing that you can arrive at true beliefs by correct reasoning, and not just by waiting for the answer to smack you in the face.

And I would argue that any altruist is doing the same thing when they have to choose between causes before they can make observations. There are a million other things that the founders of the Against Malaria Foundation could have done, but they took the risk of betting on distributing bed nets, even though they had yet to see it actually work.

In fact, AI risk is not-that-different from this, but you can imagine it as a variant where you have to predict much further into the future, the stakes are higher, and you don't get a second try after you observe the effect of your intervention.

And if you imagine a world where a global authoritarian regime involuntarily reads its citizens' minds as a matter of course, and there it is lawful that anyone who identifies as an EA is to be put in an underground chamber where they are given a minimum income that they may donate as they please, and they are allowed to reason on their prior knowledge only, never being permitted to observe the consequences of their donations, then I bet that EAs would not say, "I have no feedback loop and I therefore cannot decide between any of these alternatives."

Rather, I bet that they would say, "I will never be able to look at the world and see the effects of my actions at a time that affects my decision-making, but this is my best educated guess of what the best thing I can do is, and it's sure as hell better than doing nothing. Yea, my decision is merely rational."

You want observational consequences because they give you confidence in your ability to make predictions. But you can make accurate predictions without being able to observe the consequences, and without just getting lucky, and sometimes you have to.

But in reality we're not deciding between donating something and donating nothing. We're choosing between charitable causes. But I don't think that the fact that our interventions are less predictable should make us consider the risk more negligible or the prevention thereof less valuable. Above choosing causes where the effects of interventions are predictable, don't we want to choose the most valuable causes? A bias towards causes with consistently, predictably, immediately effective interventions shouldn't completely dominate our decision-making process when an alternative cause, though less predictably intervened upon, would result in outcomes with extremely high utility if the intervention succeeded.
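To make that trade-off concrete, here is a toy expected-value comparison. This is a minimal sketch with made-up probabilities and utilities; none of these numbers come from Kaufman's speech or from any real cost-effectiveness estimate, and the function name is just illustrative.

    # Toy illustration (made-up numbers): a predictable intervention vs. an
    # unpredictable one with a much higher upside.

    def expected_value(p_success, value_if_success):
        """Expected value of an intervention: probability of success times value."""
        return p_success * value_if_success

    # A predictable cause: ~90% confidence the intervention works,
    # and success is worth (say) 1,000 units of value.
    predictable = expected_value(p_success=0.9, value_if_success=1_000)

    # An unpredictable cause: only ~1% confidence that the intervention helps,
    # but success is worth 10,000,000 units of value.
    unpredictable = expected_value(p_success=0.01, value_if_success=10_000_000)

    print(f"predictable cause:   {predictable:>12,.0f}")    # 900
    print(f"unpredictable cause: {unpredictable:>12,.0f}")  # 100,000

Under these made-up numbers the less predictable cause dominates by two orders of magnitude, which is the only point of the sketch: predictability is one input to the decision, not the whole of it.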

To illustrate, imagine that you are at some point on a long road, truly in the middle of nowhere, and you see a man whose car has a flat tire. You know that someone else may not drive by for hours, and you don't know how well-prepared the man is for that eventuality. You consider stopping your car to help; you have a spare, you know how to change tires, and you've seen it work before. And if you don't do it right the first time for some weird reason, you can always try again.

But suddenly, you notice that there is a person lying motionless on the ground, some ways down the road; far, but visible. There's no cellphone service, it would take an ambulance hours to get here unless they happened to be driving by, and you have no medical training or experience.

I don't know about you, but even if I'm having an extremely hard time thinking of things to do about a guy dying on my watch in the middle of nowhere, the last thing I do is say, "I have no idea what to do if I try to save that guy, but I know exactly how to change a tire, so why don't I just change the tire instead." Because even if I don't know what to do, saving a life is so much more important than changing a tire that I don't care about the uncertainty. And maybe if I went and actually tried saving his life, even if I wasn't sure how to go about it, it would turn out that I would find a way, or that he needed help, but he wasn't about to die immediately, or that he was perfectly fine all along. And I never would've known if I'd changed a tire and driven in the opposite direction.

And it doesn't mean that the strategy space is open season. I'm not going to come up with a new religion on the spot that contains a prophetic vision that this man will survive his medical emergency, nor am I going to try setting him on fire. There are things that will obviously not work without me trying them out. And that can be built on with other ideas that are not-obviously-wrong-but-may-turn-out-to-be-wrong-later. It's great to have an idea of what you can know is wrong even if you can't try anything. Because not being able to try more than once is precisely the problem.

If we stop talking about what rational thinking feels like, and just start talking about rational thinking with the usual words, then what I'm getting at is that, in reality, there is an inside view to the AI risk arguments. You can always talk about confidence levels outside of an argument, but it helps to go into the details of the inside view, to see where our uncertainty about various assumptions is greatest. Otherwise, where is your outside estimate even coming from, besides impression?

We can't run an experiment to see if the mathematics of self-reference, for example, is a useful thing to flesh out before trying to solve the larger problem of AI risk, but there are convincing reasons that it is. And sometimes that's all you have at the time.

And if you ever ask me, "Why does your uncertainty bottom out here?", then I'll ask you "Why does your uncertainty bottom out there?" But it's okay. Because it bottoms out somewhere, even if it's at the level of "I know that I know nothing," or some other similarly useless sentiment.

But I will say that this state of affairs is not optimal. It would be nice if we could be more confident about our reasoning in situations where we aren't able to make predictions, and then perform interventions, and then make observations that we can update on, and then try again. It's great to have medical training in the middle of nowhere.

And I will also say that I imagine Kaufman is not saying that donating to existential risk mitigation is a fundamentally bad idea forever, but that it just doesn't seem like a good idea right now, because we don't know enough about when we should be confident in predictions that we can't test before we have to take action.

But if you know you're confused about how to determine the impact of interventions intended to mitigate existential risks, it's almost as if you should consider trying to figure out that problem itself. If you could crack the problem of mitigating existential risks, it would blow global poverty out of the water. And the problem doesn't immediately seem completely obviously intractable.

In fact, it's almost as if the cause you should choose is the research of existential risk strategy. And, if you were to write a speech about it, it seems like it would be a good idea to make it really clear that that's probably very impactful, because value of information counts.

And so, where you read a speech entitled 'Why Global Poverty?', I read a speech entitled 'Why Existential Risk Strategy Research?'

Weekly LW Meetups

1 FrankAdamek 12 February 2016 04:41PM

This summary was posted to LW Main on February 5th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Where does our community disagree about meaningful issues?

13 ChristianKl 12 February 2016 11:34AM

Yesterday at our LW Berlin Dojo we talked about areas where we disagree. We came up with four issues:

1) AI risk is important.
2) Everybody should be vegan.
3) It's good to make being an aspiring rationalist part of your identity.
4) Being conscious of privacy is important.

Can you think of other meaningful issues where you think our community disagrees? Ideally, issues that actually matter for our day-to-day decisions.

Rationality Reading Group: Part T: Science and Rationality

4 Gram_Stone 10 February 2016 11:40PM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part T: Science and Rationality (pp. 1187-1265) and Interlude: A Technical Explanation of Technical Explanation (pp. 1267-1314). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

T. Science and Rationality

243. The Failures of Eld Science - A short story set in the same world as "Initiation Ceremony". Future physics students look back on the cautionary tale of quantum physics.

244. The Dilemma: Science or Bayes? - The failure of first-half-of-20th-century-physics was not due to straying from the scientific method. Science and rationality - that is, Science and Bayesianism - aren't the same thing, and sometimes they give different answers.

245. Science Doesn't Trust Your Rationality - The reason Science doesn't always agree with the exact, Bayesian, rational answer, is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.

246. When Science Can't Help - If you have an idea, Science tells you to test it experimentally. If you spend 10 years testing the idea and the result comes out negative, Science slaps you on the back and says, "Better luck next time." If you want to spend 10 years testing a hypothesis that will actually turn out to be right, you'll have to try to do the thing that Science doesn't trust you to do: think rationally, and figure out the answer before you get clubbed over the head with it.

247. Science Isn't Strict Enough - Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.

248. Do Scientists Already Know This Stuff? - No. Maybe someday it will be part of standard scientific training, but for now, it's not, and the absence is visible.

249. No Safe Defense, Not Even Science - Why am I trying to break your trust in Science? Because you can't think and trust at the same time. The social rules of Science are verbal rather than quantitative; it is possible to believe you are following them. With Bayesianism, it is never possible to do an exact calculation and get the exact rational answer that you know exists. You are visibly less than perfect, and so you will not be tempted to trust yourself.

250. Changing the Definition of Science - Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.

251. Faster Than Science - Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck.

252. Einstein's Speed - Albert was unusually good at finding the right theory in the presence of only a small amount of experimental evidence. Even more unusually, he admitted it - he claimed to know the theory was right, even in advance of the public proof. It's possible to arrive at the truth by thinking great high-minded thoughts of the sort that Science does not trust you to think, but it's a lot harder than arriving at the truth in the presence of overwhelming evidence.

253. That Alien Message - Einstein used evidence more efficiently than other physicists, but he was still extremely inefficient in an absolute sense. If a huge team of cryptographers and physicists were examining an interstellar transmission, going over it bit by bit, we could deduce principles on the order of Galilean gravity just from seeing one or two frames of a picture. As if the very first human to see an apple fall, had, on the instant, realized that its position went as the square of the time and that this implied constant acceleration.

254. My Childhood Role Model - I looked up to the ideal of a Bayesian superintelligence, not Einstein.

255. Einstein's Superpowers - There's an unfortunate tendency to talk as if Einstein had superpowers - as if, even before Einstein was famous, he had an inherent disposition to be Einstein - a potential as rare as his fame and as magical as his deeds. Yet the way you acquire superpowers is not by being born with them, but by seeing, with a sudden shock, that they are perfectly normal.

256. Class Project - The students are given one month to develop a theory of quantum gravity.

Interlude: A Technical Explanation of Technical Explanation

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Ends: An Introduction (pp. 1321-1325) and Part U: Fake Preferences (pp. 1329-1356). The discussion will go live on Wednesday, 24 February 2016, right here on the discussion forum of LessWrong.

The Brain Preservation Foundation's Small Mammalian Brain Prize won

38 gwern 09 February 2016 09:02PM

The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process.

  • BPF announcement (21CM’s announcement)
  • evaluation
  • The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror)

    We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays…We have shown that both rabbit brains (10 g) and pig brains (80 g) can be preserved equally well. We do not anticipate that there will be significant barriers to preserving even larger brains such as bovine, canine, or primate brains using ASC.

    (They had problems with 2 pigs and got 1 pig brain successfully cryopreserved but it wasn’t part of the entry. I’m not sure why: is that because the Large Mammalian Brain Prize is not yet set up?)
  • previous discussion: Mikula’s plastination came close but ultimately didn’t seem to preserve the whole brain when applied.
  • commentary: Alcor, Robin Hanson, John Smart, Evidence-Based Cryonics, Vice, Pop Sci
  • donation link

To summarize it, you might say that this is a hybrid of current plastination and vitrification methods, where instead of allowing slow plastination (with unknown decay & loss) or forcing fast cooling (with unknown damage and loss), a staged approach is taken: a fixative is injected into the brain first to immediately lock down all proteins and stop all decay/change, and then it is leisurely cooled down to be vitrified.

This is exciting progress because the new method may wind up preserving better than either of the parent methods, but also because it gives much greater visibility into the end-results: the aldehyde-vitrified brains can be easily scanned with electron microscopes and the results seen in high detail, showing fantastic preservation of structure, unlike regular vitrification, where the scans leave opaque how good the preservation was. This opacity is one reason that, as Mike Darwin has pointed out at length on his blog and jkaufman has also noted, we cannot be confident in how well ALCOR or CI's vitrification works - because if it didn't work, we would have little way of knowing.

EDIT: BPF’s founder Ken Hayworth (Reddit account) has posted a piece, arguing that ALCOR & CI cannot be trusted to do procedures well and that future work should be done via rigorous clinical trials and only then rolled out. “Opinion: The prize win is a vindication of the idea of cryonics, not of unaccountable cryonics service organizations”

…“Should cryonics service organizations immediately start offering this new ASC procedure to their ‘patients’?” My personal answer (speaking for myself, not on behalf of the BPF) has been a steadfast NO. It should be remembered that these same cryonics service organizations have been offering a different procedure for years. A procedure that was not able to demonstrate, to even my minimal expectations, preservation of the brain’s neural circuitry. This result, I must say, surprised and disappointed me personally, leading me to give up my membership in one such organization and to become extremely skeptical of all since. Again, I stress, current cryonics procedures were NOT able to meet our challenge EVEN UNDER IDEAL LABORATORY CONDITIONS despite being offered to paying customers for years[1]. Should we really expect that these same organizations can now be trusted to further develop and properly implement such a new, independently-invented technique for use under non-ideal conditions?

Let’s step back for a moment. A single, independently-researched, scientific publication has come out that demonstrates a method of structural brain preservation (ASC) compatible with long-term cryogenic storage in animal models (rabbit and pig) under ideal laboratory conditions (i.e. a healthy living animal immediately being perfused with fixative). Should this one paper instantly open the floodgates to human application? Under untested real-world conditions where the ‘patient’ is either terminally ill or already declared legally dead? Should it be performed by unlicensed persons, in unaccountable organizations, operating outside of the traditional medical establishment with its checks and balances designed to ensure high standards of quality and ethics? To me, the clear answer is NO. If this was a new drug for cancer therapy, or a new type of heart surgery, many additional steps would be expected before even clinical trials could start. Why should our expectations be any lower for this?

The fact that the ASC procedure has won the brain preservation prize should rightly be seen as a vindication of the central idea of cryonics –the brain’s delicate circuitry underlying memory and personality CAN in fact be preserved indefinitely, potentially serving as a lifesaving bridge to future revival technologies. But, this milestone should certainly not be interpreted as a vindication of the very different cryonics procedures that are practiced on human patients today. And it should not be seen as a mandate for more of the same but with an aldehyde stabilization step casually tacked on. …

Religious and Rational?

3 Gleb_Tsipursky 09 February 2016 08:12PM

Reverend Caleb Pitkin, an aspiring rationalist and United Methodist Minister, wrote an article about combining religion and rationality which was recently published on the Intentional Insights blog. He's the only Minister I know who is also an aspiring rationalist, so I thought it would be an interesting piece for Less Wrong as well. Besides, it prompted an interesting discussion on the Less Wrong Facebook group, so I thought some people here who don't look at the Facebook group might be interested in checking it out as well. Caleb does not have enough karma to post, so I am posting it on his behalf, but he will engage with the comments.

______________________________________________________________________________

 

Religious and Rational?

 

“Wisdom shouts in the street; in the public square she raises her voice.”

Proverbs 1:20 Common English Bible

The Biblical book of Proverbs is full of imagery of wisdom personified as a woman calling and exhorting people to come to her and listen. The wisdom contained in Proverbs is not just spiritual wisdom but also contains a large amount of practical wisdom and advice. What might the wisdom of Proverbs and rationality have in common? The wisdom literature in scripture was meant to help people make better and more effective decisions. In today's complex and rapidly changing world we have the same need for tools and resources to help us make good decisions. One great source of wisdom is methods of better thinking that are informed by science.

Now, not everyone would agree with comparing the wisdom of Proverbs with scientific insights. Doing so may not sit well with some in the secular rationality community who view all religion as inherently irrational and hindering clear thinking. It also might not sit well with some in my own religious community who are suspicious of scientific thinking as undermining traditional faith. While it would take a much longer piece to completely defend either religion or secular rationality, I'm going to try to demonstrate some ways that rationality is useful for a religious person.

The first way that rationality can be useful for a religious person is in the living of our daily lives. We are faced with tasks and decisions each day that we try to do our best in. Learning to recognize common logical fallacies or other biases, like those that cause us to fail to understand other people, will improve our decision making as much as it improves the thinking of non-religious people. For example, a mother driving her kids to Sunday School might benefit from avoiding thinking that the person who cuts her off is definitely a jerk, one common type of thinking error. Someone doing volunteer work for their church could be more effective if they avoid problematic communication with other volunteers. This use of rationality to lead our daily lives in the best way is one that most would find fairly unobjectionable. It's easy to say that the way we all achieve our personal goals and objectives could be improved, and we can all gain greater agency.

Rationality can also be of use in theological commentary and discourse. Many of the theological and religious greats used the available philosophical and intellectual tools of their day to examine their faith. Examples of this include John Wesley, Thomas Aquinas, and even the Apostle Paul when he debated Epicurean and Stoic Philosophers. They also made sure that their theologies were internally rational and logical. This means that, from the perspective of a religious person, keeping up with rationality can help with the pursuit of a deeper understanding of our faith. For a secular person, acknowledging the ways in which religious people use rationality within their worldview may be difficult, but it can help to build common ground. The starting point is different. Secular people start with the faith that they can trust their sensory experience. Religious people start with conceptions of the divine. Yet, after each starting point, both seek to proceed in a rational, logical manner.

It is not just our personal lives that can be improved by rationality, it's also the ways in which we interact with communities. One of the goals of many religious communities is to make a positive impact on the world around them. When we work to do good in community we want that work to be as effective as possible. Often when we work in community we find that we are not meeting our goals or having the kind of significant impact that we wish to have. In my experience this is often due to a failure to really examine and gather the facts on the ground. We set off full of good intentions but with limited resources and time. Rational examination helps us to figure out how to match our good intentions with our limited resources in the most effective way possible. For example, as the pastor of two small churches, I find that money and people power can be in short supply. So when we examine all the needs of our community we have to acknowledge we cannot begin to meet all or even most of them. So we take one issue, hunger, and devote our time and resources to having one big impact on that issue, as opposed to trying to do a little bit to alleviate a lot of problems.

One other way that rationality can inform our work in the community is to recognize that part of what a scarcity of resources means is that we need to work together with others in our community. The inter-faith movement has done a lot of good work in bringing together people of faith to work on common goals. This has meant setting aside traditional differences for the sake of shared goals. Let us examine the world we live in today, though. The number of nonreligious people is on the rise, and there is every indication that it will continue to rise. On the other hand, religion does not seem to be going anywhere either. Which is good news for a pastor. Looking at this situation, the rational thing to do is to work together, for religious people to build bridges toward the non-religious and vice versa.

Wisdom still stands on the street calling and imploring us to be improved--not in the form of rationalist street preachers, though that idea has a certain appeal--but in the form of the growing number of tools being offered to help us improve our capacity for logic and for reasoning, and to enable us to take part in the world we live in.

Everyone wants to make good decisions.  This means that everyone tries to make rational decisions.  We all try but we don’t always hit the mark.  Religious people seek to achieve their goals and make good decisions.  Secular people seek to achieve their goals and make good decisions.  Yes, we have different starting points and it’s important to acknowledge that.  Yet, there are similarities in what each group wants out of their lives and maybe we have more in common than we think we do.

On a final note it is my belief that what religious people and what non-religious people fear about each other is the same thing.  The non-religious look at the religious and say God could ask them to do anything... scary.  The religious look at the non-religious and say without God they could do anything... scary.  If we remember though that most people are rational and want to live a good life we have less to be scared of, and are more likely to find common ground.

____________________________________________________________________________________________________________

 

Bio: Caleb Pitkin is a Provisional Elder with the United Methodist Church appointed to Signal Mountain United Methodist Church. Caleb is a huge fan of the theology of John Wesley, which asks that Christians use reason in their faith journey. This helped lead Caleb to Rationality and participation in Columbus Rationality, a Less Wrong meetup that is part of the Humanist Community of Central Ohio. Through that, Caleb got involved with Intentional Insights. Caleb spends his time trying to live a faithful and rational life.

The Fable of the Burning Branch

-19 EphemeralNight 08 February 2016 03:20PM

 

Once upon a time, in a lonely little village, beneath the boughs of a forest of burning trees, there lived a boy. The branches of the burning trees sometimes fell, and the magic in the wood permitted only girls to carry the fallen branches of the burning trees.

One day, a branch fell, and a boy was pinned beneath. The boy saw other boys pinned by branches, rescued by their girl friends, but he remained trapped beneath his own burning branch.

The fire crept closer, and the boy called out for help.

Finally, a friend of his own came, but she told him that she could not free him from the burning branch, because she already free'd her other friend from beneath a burning branch and he would be jealous if she did the same deed for anyone else. This friend left him where he lay, but she did promise to return and visit.

The fire crept closer, and the boy called out for help.

A man stopped, and gave the boy the advice that he'd get out from beneath the burning branch eventually if he just had faith in himself. The boy's reply was that he did have faith in himself, yet he remained trapped beneath the burning branch. The man suggested that perhaps he did not have enough faith, and left with nothing more to offer.

The fire crept closer, and the boy cried out for help.

A girl came along, and said she would free the boy from beneath the burning branch.

But no, her friends said, the boy was a stranger to her, was her heroic virtue worth nothing? Heroic deeds ought to be born from the heart, and made beautiful by love, they insisted. Simply hauling the branch off a boy she did not love would be monstrously crass, and they would not want to be friends with a girl so shamed.

So the girl changed her mind and left with her friends.

The fire crept closer. It began to lick at the boy's skin. A soothing warmth became an uncomfortable heat. The boy mustered his courage and chased the fear out of his own voice. He called out, but not for help. He called out for company.

A girl came along, and the boy asked if she would like to be friends. The girl's reply was that she would like to be friends, but that she spent most of her time on the other side of the village, so if they were to be friends, he must be free from beneath the burning branch.

The boy suggested that she free him from beneath the burning branch, so that they could be friends.

The girl replied that she once free'd a boy from beneath a burning branch who also promised to be her friend, but as soon as he was free he never spoke to her again. So how could she trust the boy's offer of friendship? He would say anything to be free.

The boy tried frantically to convince her that he was sincere, that he would be grateful and try with all his heart to be a good friend to the girl who free'd him, but she did not believe him and turned away from him and left him there to burn.

The fire crept closer and the boy whimpered in pain and fear as it spread from wood to flesh. He cried out for help. He begged for help. "Somebody, please!"

A man and a woman came along, and the man offered advice: he was once trapped beneath a burning branch for several years. The fire was magic, the pain was only an illusion. Perhaps it was sad that he was trapped but even so trapped the boy may lead a fulfilling life. Why, the man remembered etching pictures into his branch, befriending passers by, and making up songs.

The woman beside the man agreed, and told the boy that she hoped the right girl would come along and free him, but that he must not presume that he was entitled to any girl's heroic deed merely because he was trapped beneath a burning branch.

"But do I not deserve to be helped?" the boy pleaded, as the flames licked his skin.

"No, how wrong of you to even speak as though you do. My heroic deeds are mine to give, and to you I owe nothing," he was told.

"Perhaps I don't deserve help from you in particular, or from anyone in particular, but is it not so very cruel of you to say I do not deserve any help at all?" the boy pleaded. "Can a girl willing to free me from beneath this burning branch not be found and sent to my aide?"

"Of course not," he was told, "that is utterly unreasonable and you should be ashamed of yourself for asking. It is offensive that you believe such a girl may even exist. You've become burned and ugly, who would want to save you now?"

The fire spread, and the boy cried, screamed, and begged desperately for help from every passer by.

"It hurts it hurts it hurts oh why will no one free me from beneath this burning branch?!" he wailed in despair. "Anything, anyone, please! I don't care who frees me, I only wish for release from this torment!"

Many tried to ignore him, while others scoffed in disgust that he had so little regard for what a heroic deed ought to be. Some pitied him, and wanted to help, but could not bring themselves to bear the social cost, the loss of worth in their friends' and family's eyes, that would come of doing a heroic deed motivated, not by love, but by something lesser.

The boy burned, and wanted to die.

Another boy stepped forward. He went right up to the branch, and tried to lift it. The trapped boy gasped at the small relief from the burning agony, but it was only a small relief, for the burning branches could only be lifted by girls, and the other boy could not budge it. Though the effort was for naught, the first boy thanked him sincerely for trying.

The boy burned, and wanted to die. He asked to be killed.

He was told he had so much to live for, even if he must live beneath a burning branch. None were willing to end him, but perhaps they could do something else to make it easier for him to live beneath the burning branch? The boy could think of nothing. He was consumed by agony, and wanted only to end.

And then, one day, a party of strangers arrived in the village. Heroes from a village afar. Within an hour, one foreign girl came before the boy trapped beneath the burning branch and told him that she would free him if he gave her his largest nugget of gold.

Of course, the local villagers were shocked that this foreigner would sully a heroic deed by trafficking it for mere gold.

But, the boy was too desperate to be shocked, and agreed immediately. She free'd him from beneath the burning branch, and as the magical fire was drawn from him, he felt his burned flesh become restored and whole. He fell upon the foreign girl and thanked her and thanked her and thanked her, crying and crying tears of relief.

Later, he asked how. He asked why. The foreign girls explained that in their village, heroic virtue was measured by how much joy a hero brought, and not by how much she loved the ones she saved.

The locals did not like the implication that their own way might not have been the best way, and complained to the chief of their village. The chief cared only about staying in the good graces of the heroes of his village, and so he outlawed the trading of heroic deeds for other commodities.

The foreign girls were chased out of the village.

And then a local girl spoke up, and spoke loud, to sway her fellow villagers. The boy recognized her. It was his friend. The one who had promised to visit so long ago.

But she shamed the boy, for doing something so crass as trading gold for a heroic deed. She told him he should have waited for a local girl to free him from beneath the burning branch, or else grown old and died beneath it.

To garner sympathy from her audience, she sorrowfully admitted that she was a bad friend for letting the boy be tempted into something so disgusting. She felt responsible, she claimed, and so she would fix her mistake.

The girl picked up a burning branch. Seeing what she was about to do, the boy begged and pleaded for her to reconsider, but she dropped the burning branch upon the boy, trapping him once more.

The boy screamed and begged for help, but the girl told him that he was morally obligated to learn to live with the agony, and never again voice a complaint, never again ask to be free'd from beneath the burning branch.

"Banish me from the village, send me away into the cold darkness, please! Anything but this again!" the boy pleaded.

"No," he was told by his former friend, "you are better off where you are, where all is proper."

In the last extreme, the boy made a grab for his former friend's leg, hoping to drag her beneath the burning branch and free himself that way, but she evaded him. In retaliation for the attempt to defy her, she had a wall built around the boy, so that none would be able, even if one should want, to free him from beneath the burning branch.

With all hope gone, the boy broke and became numb to all possible joys. And thus, he died, unmourned.

Require contributions in advance

50 Viliam 08 February 2016 12:55PM

If you are a person who finds it difficult to say "no" to their friends, this one weird trick may save you a lot of time!

 

Scenario 1

Alice: "Hi Bob! You are a programmer, right?"

Bob: "Hi Alice! Yes, I am."

Alice: "I have this cool idea, but I need someone to help me. I am not good with computers, and I need someone smart whom I could trust, so they wouldn't steal my idea. Would you have a moment to listen to me?"

Alice explains to Bob her idea that would completely change the world. Well, at least the world of bicycle shopping.

Instead of having many shops for bicycles, there could be one huge e-shop that would collect all the information about bicycles from all the existing shops. The customers would specify what kind of a bike they want (and where they live), and the system would find all bikes that fit the specification, and display them ordered by lowest price, including the price of delivery; then it would redirect them to the specific page of the specific vendor. Customers would love to use this one website, instead of having to visit multiple shops and compare. And the vendors would have to use this shop, because that's where the customers would be. Taking a fraction of a percent from the sales could make Alice (and also Bob, if he helps her) incredibly rich.

Bob is skeptical about it. The project suffers from the obvious chicken-and-egg problem: without vendors already there, the customers will not come (and if they come by accident, they will quickly leave, never to return again); and without customers already there, there is no reason for the vendors to cooperate. There are a few ways to approach this problem, but the fact that Alice didn't even think about it is a red flag. She also has no idea who the big players in the world of bicycle selling are; and generally she didn't do her homework. But after pointing out all these objections, Alice still remains super enthusiastic about the project. She promises she will take care of everything -- she just cannot write code, and she needs Bob's help for this part.

Bob believes strongly in the division of labor, and that friends should help each other. He considers Alice his friend, and he will likely need some help from her in the future. Fact is, with perfect specification, he could make the webpage in a week or two. But he considers bicycles to be an extremely boring topic, so he wants to spend as little time as possible on this project. Finally, he has an idea:

"Okay, Alice, I will make the website for you. But first I need to know exactly how the page will look like, so that I don't have to keep changing it over and over again. So here is the homework for you -- take a pen and paper, and make a sketch of how exactly the web will look like. All the dialogs, all the buttons. Don't forget logging in and logging out, editing the customer profile, and everything else that is necessary for the website to work as intended. Just look at the papers and imagine that you are the customer: where exactly would you click to register, and to find the bicycle you want? Same for the vendor. And possibly a site administrator. Also give me the list of criteria people will use to find the bike they want. Size, weight, color, radius of wheels, what else? And when you have it all ready, I will make the first version of the website. But until then, I am not writing any code."

Alice leaves, satisfied with the outcome.

 

This happened a year ago.

No, Alice doesn't have the design ready, yet. Once in a while, when she meets Bob, she smiles at him and apologizes that she didn't have the time to start working on the design. Bob smiles back and says it's okay, he'll wait. Then they change the topic.

 

Scenario 2

Cyril: "Hi Diana! You speak Spanish, right?"

Diana: "Hi Cyril! Yes, I do."

Cyril: "You know, I think Spanish is the most cool language ever, and I would really love to learn it! Could you please give me some Spanish lessons, once in a while? I totally want to become fluent in Spanish, so I could travel to Spanish-speaking countries and experience their culture and food. Would you please help me?"

Diana is happy that someone takes interest in her favorite hobby. It would be nice to have someone around she could practice Spanish conversation with. The first instinct is to say yes.

But then she remembers (she has known Cyril for some time; they have a lot of friends in common, so they meet quite regularly) that Cyril is always super enthusiastic about something he is totally going to do... but when she meets him next time, he is super enthusiastic about something completely different; and she has never heard of him doing anything serious about his previous dreams.

Also, Cyril seems to seriously underestimate how much time it takes to learn a foreign language fluently. A few lessons once in a while will not do it. He also needs to study on his own. Preferably every day, but twice a week is probably the minimum, if he hopes to speak the language fluently within a year. Diana would be happy to teach someone Spanish, but not if her effort will most likely be wasted.

Diana: "Cyril, there is this great website called Duolingo, where you can learn Spanish online completely free. If you give it about ten minutes every day, maybe after a few months you will be able to speak fluently. And anytime we meet, we can practice the vocabulary you have already learned."

This would be the best option for Diana. No work, and another opportunity to practice. But Cyril insists:

"It's not the same without the live teacher. When I read something from the textbook, I cannot ask additional questions. The words that are taught are often unrelated to the topics I am interested in. I am afraid I will just get stuck with the... whatever was the website that you mentioned."

For Diana this feels like a red flag. Sure, textbooks are not optimal. They contain many words that the student will not use frequently and will soon forget. On the other hand, the grammar is always useful; and Diana doesn't want to waste her time explaining the basic grammar that any textbook could explain instead. If Cyril learns the grammar and some basic vocabulary, then she can teach him all the specialized vocabulary he is interested in. But now it feels like Cyril wants to avoid all work. She has to draw a line:

"Cyril, this is the address of the website." She takes his notebook and writes 'www.duolingo.com'. "You register there, choose Spanish, and click on the first lesson. It is interactive, and it will not take you more than ten minutes. If you get stuck there, write here what exactly it was that you didn't understand; I will explain it when we meet. If there is no problem, continue with the second lesson, and so on. When we meet next time, tell me which lessons you have completed, and we will talk about them. Okay?"

Cyril nods reluctantly.

 

This happened a year ago.

Cyril and Diana have met repeatedly during the year, but Cyril never brought up the topic of Spanish language again.

 

Scenario 3

Erika: "Filip, would you give me a massage?"

Filip: "Yeah, sure. The lotion is in the next room; bring it to me!"

Erika brings the massage lotion and lies on the bed. Filip massages her back. Then they make out and have sex.

 

This happened a year ago. Erika and Filip are still a happy couple.

Filip's previous relationships didn't work well in the long term. In retrospect, they all followed a similar scenario. At the beginning, everything seemed great. Then at some moment the girl started acting... unreasonably?... asking Filip to do various things for her, and then acting annoyed when Filip did exactly what he was asked to do. This happened more and more frequently, and at some moment she broke up with him. Sometimes she provided an explanation for breaking up that Filip was unable to decipher.

Filip has a friend who is a successful salesman. Successful both professionally and with women. When Filip admitted to himself that he is unable to solve the problem on his own, he asked his friend for advice.

"It's because you're a f***ing doormat," said the friend. "The moment a woman asks you to do anything, you immediately jump and do it, like a well-trained puppy. Puppies are cute, but not attractive. Have you ready any of those books I sent you, like, ten years ago? I bet you didn't. Well, it's all there."

Filip sighed: "Look, I'm not trying to become a pick-up artist. Or a salesman. Or anything. No offense, but I'm not like you, personality-wise, I never have been, and I don't want to become your - or anyone else's - copy. Even if it would mean greater success in anything. I prefer to treat other people just like I would want them to treat me. Most people reciprocate nice behavior; and those who don't, well, I avoid them as much as possible. This works well with my friends. It also works with the girls... at the beginning... but then somehow... uhm... Anyway, all your books are about manipulating people, which is ethically unacceptable for me. Isn't there some other way?"

"All human interaction is manipulation; the choice is between doing it right or wrong, acting consciously or driven by your old habits..." started the friend, but then he gave up. "Okay, I see you're not interested. Just let me show you the most obvious mistake you make. You believe that when you are nice to people, they will perceive you as nice, and most of them will reciprocate. And when you act like an asshole, it's the other way round. That's correct, on some level; and in a perfect world this would be the whole truth. But on a different level, people also perceive nice behavior as weakness; especially if you do it habitually, as if you don't have any other option. And being an asshole obviously signals strength: you are not afraid to make other people angry. Also, in long term, people become used to your behavior, good or bad. The nice people don't seem so nice anymore, but they still seem weak. Then, ironicaly, if the person well-known to be nice refuses to do something once, people become really angry, because their expectations were violated. And if the asshole decides to do something nice once, they will praise him, because he surprised them pleasantly. You should be an asshole once in a while, to make people see that you have a choice, so they won't take your niceness for granted. Or if your girlfriend wants something from you, sometimes just say no, even if you could have done it. She will respect you more, and then she will enjoy more the things you do for her."

Filip: "Well, I... probably couldn't do that. I mean, what you say seems to make sense, however much I hate to admit it. But I can't imagine doing it myself, especially to a person I love. It's just... uhm... wrong."

"Then, I guess, the very least you could do is to ask her to do something for you first. Even if it's symbolic, that doesn't matter; human relationships are mostly about role-playing anyway. Don't jump immediately when you are told to; always make her jump first, if only a little. That will demonstrate strength without hurting anyone. Could you do that?"

Filip wasn't sure, but at the next opportunity he tried it, and it worked. And it kept working. Maybe it was all just a coincidence, maybe it was a placebo effect, but Filip doesn't mind. At first it felt kinda artificial, but then it became natural. And later, to his surprise, Filip realized that practicing these symbolic demands actually makes it easier to ask when he really needs something. (In which case sometimes he was asked to do something first, because his girlfriend -- knowingly or not? he never had the courage to ask -- copied the pattern; or maybe she had already known it long before. But he didn't mind that either.)

 

The lesson is: If you find yourself repeatedly in situations where people ask you to do something for them, but in the end they don't seem to appreciate what you did for them, or don't even care about the thing they asked you to do... and yet you find it difficult to say "no"... ask them to contribute to the project first.

This will help you get rid of the projects they don't care about (including the ones they think they care about in far mode, but do not care about enough to actually work on them in near mode) without being the one who refuses cooperation. Also, the act of asking the other person to contribute, after being asked to do something for them, mitigates the status loss inherent in working for them.

Altruistic parenting

0 Clarity 08 February 2016 11:57AM

I just read this article about the felicific calculus of parenthood.

 The average happiness worldwide is 5.1 on a one out of ten scale; Americans are at 7.1. Arbitrarily deciding that one year of a 10 life is equivalent to two years of a 5 life, the cost per QALY of having a child for total utilitarians is $5500.

However, NICE’s threshold for cost effectiveness of a health intervention is about $30,000 (20,000 pounds) per QALY. Therefore, for total utilitarians, having a child may be considered a cost-effective intervention, although not an optimal intervention.

...surrogacy is an underexplored way to do good. Rather than costing money, the first-time surrogate earns thirty thousand dollars, which can grow to forty thousand dollars for experienced surrogates– and it still creates 109 QALYs that otherwise would not exist. These children are likely to grow up in wealthy families who really, really want to have them, and are thus likely to be even happier than this analysis suggests.
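To make the shape of the per-QALY arithmetic concrete, here is a minimal sketch with inputs I am inventing for illustration (they are not the article's actual figures, which is why the result does not exactly reproduce the quoted $5,500):

```python
# Illustrative inputs only -- assumptions for this sketch, not the article's figures.
cost_of_raising_child = 245_000   # USD to age 18, a commonly cited US estimate (assumption)
life_expectancy_years = 80        # assumed lifespan of the child
happiness_score = 7.1             # average US happiness on the 0-10 scale quoted above

# The quoted conversion treats one year of a 10/10 life as worth two years of a 5/10 life,
# i.e. QALYs scale linearly with the happiness score.
qalys_created = life_expectancy_years * (happiness_score / 10)   # ~56.8 QALYs
cost_per_qaly = cost_of_raising_child / qalys_created            # ~$4,300 per QALY

print(f"{qalys_created:.1f} QALYs created, ${cost_per_qaly:,.0f} per QALY")
```

Either way, the figure lands on the same side of NICE's ~$30,000-per-QALY threshold, which is the comparison the article is making.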

 In the comments section, the following grabbed my attention.

Estimates for the size of a sustainable human population appear to mostly range between 2 billion and 10 billion, and the meta-analysis here (http://bioscience.oxfordjournals.org/content/54/3/195) suggests that the best point estimate is around 7.7 billion. Meanwhile most estimates of population growth over the next hundred years suggest the total population will reach 10-11 billion. It seems likely that at some point in the next couple hundred years, the population will decrease substantially due to a Malthusian catastrophe. This transition is likely to cause a great deal of suffering. Surely even a total utilitarian would agree that it would be better for the necessary drop in population to be as small as possible.

And even if the population never rises above sustainable carrying capacity, it’s not obvious that total utilitarians should see a larger population as preferable. The drop in happiness due to increased competition for resources could outweigh the benefit of an additional person existing and having experiences.

Then, I read this article. Here are the highlights:

 

Bryan Caplan’s excellent book Selfish Reasons to Have More Kids[7] reviews the evidence from 40 years of adoption and twin studies with a frankly liberating result: barring actual deprivation or trauma, children are largely who they are going to be as a result of their genetic makeup. In long-term measures of well-being, education and employment, parental influence exerts a temporary effect which disappears when we are no longer living with our parents. So costly added extras (music lessons, coaching and tutoring, private school fees) are probably not going to change your child’s life in the long term.  (However, data on the antenatal environment suggests benefit to taking iodine, but avoiding ice-storms and licorice during pregnancy.[8]) Sharing time together and finding common interests can build a good relationship and help a child develop without major costs.

In addition to straightforward financial outlay, parenthood comes with costs of time and opportunity. Loss of flexibility and leisure mean you won’t be able to take all opportunities (like taking on extra work to make more money or advance your career). Late notice travel is unlikely to be possible. You will probably be sleep deprived for a large part of the first year or more of your child’s life, and this may impact on your work performance. The work of parenting will take time, though some of it may be outsourced at the cost of increased financial outlay.

So, this baby is going to cost you about £2000 a year and take a variable but large amount of your time, which will equate in the end to another chunk of money. For parents taking parental leave or working less than full time to provide childcare, there may be delay to career progression as well as income.  Does this represent an unacceptably large sum of money and time to be compatible with the goal of maximising our impacts for the good?

In the light of this reality, the rationalist suggestion I have encountered – that one guard against a desire to become a parent by pre-emptively being sterilised before the desire has arisen – seems a recipe for psychological disaster.

 Finally we may ask whether parenthood – and the resulting person created – will benefit the wider world? This is a harder good to calculate or rely upon. The inheritance of specific character traits is difficult to predict. It’s certainly not guaranteed that your offspring will embrace all of your values throughout their lifetime. The burden of onerous parental expectations is extensively documented, and it would appear foolish to have children on the expectation they will be altruistic in the same way you are. However, your child is likely to resemble you in many important respects. By adulthood, the heritability of IQ is between 0.7 and 0.8,[13] and there is evidence from twin studies of significant heritability of complex traits like empathy.[14] This would give them a high probability of adding significant net good to the world.

 

That's rather confronting:

* a '5' on a scale of happiness ain't that bad

* don't stress too much when raising your biological kids, you can't do that much

* they're probably not worth having anyway

Just kidding. But, the evidence is quite fascinating.

Lesswrong Survey - invitation for suggestions

7 Elo 08 February 2016 08:07AM

Given that it's been a while since the last survey (http://lesswrong.com/lw/lhg/2014_survey_results/), it's now time to open the floor to suggestions for improvements.

 

If you have a question you think should be on the survey, please suggest it below (ideally with reasons why, predictions as to the result, or other useful commentary about the question).

 

Alternatively, suggest questions that should not be included in the next survey, with similar reasons as to why...

Open Thread, Feb 8 - Feb 15, 2016

3 Elo 08 February 2016 04:47AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Conveying rational thinking about long-term goals to youth and young adults

9 Gleb_Tsipursky 07 February 2016 01:54AM
More than a year ago, I discussed here how we at Intentional Insights intended to convey rationality to young adults through our collaboration with the Secular Student Alliance. This international organization unites over 270 clubs at colleges and high schools in English-speaking countries, mainly the US, with its clubs spanning from a few students to a few hundred students. The SSA's Executive Director is an aspiring rationalist and CFAR alum who is on our Advisory Board.

Well, we've been working on a project with the SSA for the last 8 months to create and evaluate an event aimed at helping its student members figure out and orient toward the long term, thus both fighting Moloch on a societal level and helping them become more individually rational as well (the long-term perspective is couched in the language of finding purpose using science). It's finally done, and here is the link to the event packet. The SSA will be distributing this packet broadly, but in the meantime, if you have any connections to secular student groups, consider encouraging them to hold this event. The event would also fit well for adult secular groups with minor editing, in case any of you are involved with them. It's also easy to strip the secular language from the packet, and just have it as an event for a philosophy/science club of any sort, at any level from youth to adult. Although I would prefer you cite Intentional Insights when you do it, I'm comfortable with you not doing so if circumstances don't permit it for some reason.

We're also working on similar projects with the SSA, focusing on being rational in the area of giving, thus promoting Effective Altruism. I'll post them here when they're ready.

Request for help with economic analysis related to AI forecasting

6 ESRogs 06 February 2016 01:27AM

[Cross-posted from FB]

I've got an economic question that I'm not sure how to answer.

I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. (If there's some point in the future when we can expect most of the work of AI R&D to be automated, that would be very interesting to know about!)

So I'd like to be able to measure the share of AI R&D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).

What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.

Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.

On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables task Y to be performed more cheaply at cost C_y. After the team outsources Y to computers the share of resources going towards computers has gone up. So it seems like it could go either way -- in some cases technological improvements will lead to the share of resources spent on computers going down and in some cases it will lead to the share of resources spent on computers going up.
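Here is a minimal numeric sketch of those two scenarios (all of the cost figures are made up for illustration):

```python
# Share of R&D resources going to machines, for made-up cost figures.

def machine_share(machine_cost: float, human_cost: float) -> float:
    return machine_cost / (machine_cost + human_cost)

# Scenario 1: task X is already done by computers and gets cheaper (C_x1 -> C_x2).
print(machine_share(50.0, 100.0))   # 0.33 before the improvement
print(machine_share(10.0, 100.0))   # 0.09 after: machines' resource share goes DOWN

# Scenario 2: task Y moves from humans (cost H_y = 40) to computers (cost C_y = 20).
print(machine_share(50.0, 100.0))                 # 0.33 before
print(machine_share(50.0 + 20.0, 100.0 - 40.0))   # 0.54 after: machines' resource share goes UP
```

So the same kind of technological improvement can push the resource share in either direction, which is exactly the problem with using it as a proxy for the machines' share of the work.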

So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?

Weekly LW Meetups

1 FrankAdamek 05 February 2016 04:40PM

This summary was posted to LW Main on January 29th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

[Link] How I Escaped The Darkness of Mental Illness

5 Gleb_Tsipursky 04 February 2016 11:08PM
A deeply personal account by aspiring rationalist Agnes Vishnevkin, who shares the broad overview of how she used rationality-informed strategies to recover from mental illness. She will also appear on the Unbelievers Radio podcast today live at 10:30 PM EST (-5 UTC), together with JT Eberhard, to speak about mental illness and recovery.

**EDIT** Based on feedback from gjm below, I want to clarify that Agnes is my wife and fellow co-founder of Intentional Insights.


A Rationalist Guide to OkCupid

23 Jacobian 03 February 2016 08:50PM

There's a lot of data and research on what makes people successful at online dating, but I don't know anyone who actually tried to wholeheartedly apply this to themselves. I decided to be that person: I implemented lessons from data, economics, game theory and of course rationality in my profile and strategy on OkCupid. Shockingly, it worked! I got a lot of great dates, learned a ton and found the love of my life. I didn't expect dating to be my "rationalist win", but it happened.

Here's the first part of the story, I hope you'll find some useful tips and maybe a dollop of inspiration among all the silly jokes.

P.S.

Does anyone know who curates the "Latest on rationality blogs" toolbar? What are the requirements to be included?

 

Value learners & wireheading

5 Manfred 03 February 2016 09:50AM

Dewey 2011 lays out the rules for one kind of agent with a mutable value system. The agent has some distribution over utility functions, which it has rules for updating based on its interaction history (where "interaction history" means the agent's observations and actions since its origin). To choose an action, it looks through every possible future interaction history, and picks the action that leads to the highest expected utility, weighted both by the probability of making that future happen and by the utility function distribution that would hold if that future came to pass.

[Image: drone can bring sandwich either to work or to home.]

We might motivate this sort of update strategy by considering a sandwich-drone bringing you a sandwich. The drone can either go to your workplace, or go to your home. If we think about this drone as a value-learner, then the "correct utility function" depends on whether you're at work or at home - upon learning your location, the drone should update its utility function so that it wants to go to that place. (Value learning is unnecessarily indirect in this case, but that's because it's a simple example.)

Suppose the drone begins its delivery assigning equal measure to the home-utility-function and to the work-utility-function (i.e. ignorant of your location), and can learn your location for a small cost. If the drone evaluated this idea with its current utility function, it wouldn't see any benefit, even though it would in fact deliver the sandwich properly - because under its current utility function there's no point to going to one place rather than the other. To get sensible behavior, and properly deliver your sandwich, the drone must evaluate actions based on what utility function it will have in the future, after the action happens.
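A toy numeric version of the drone's choice, with invented numbers (delivering to the right place is worth 1, and the location lookup costs a small epsilon):

```python
# Toy sandwich-drone example; all numbers are invented for illustration.
eps = 0.01   # cost of looking up your location

# Evaluated with the drone's CURRENT mixed utility function (P(home) = P(work) = 0.5),
# knowing your location doesn't help: either destination averages 0.5 either way.
u_guess          = 0.5         # deliver to a fixed destination, right half the time
u_lookup_current = 0.5 - eps   # same expectation, minus the lookup cost -- looks worse!

# Evaluated with the FUTURE, post-update utility function, the lookup wins:
# after observing your location the drone delivers correctly with probability ~1.
u_lookup_future = 1.0 - eps

print(u_guess, u_lookup_current, u_lookup_future)
```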

If you're familiar with how wireheading or quantum suicide look in terms of decision theory, this method of deciding based on future utility functions might seem risky. Fortunately, value learning doesn't permit wireheading in the traditional sense, because the updates to the utility function are an abstract process, not a physical one. The agent's probability distribution over utility functions, which is conditional on interaction histories, defines which actions and observations are allowed to change the utility function during the process of predicting expected utility.

Dewey also mentions that so long as the probability distribution over utility functions is well-behaved, you cannot deliberately take action to raise the probability of one of the utility functions being true. But I think this is only useful to safety when we understand and trust the overarching utility function that gets evaluated at the future time horizon. If instead we start at the present, and specify a starting utility function and rules for updating it based on observations, this complex system can evolve in surprising directions, including some wireheading-esque behavior.

 

The formalism of Dewey 2011 is, at bottom, extremely simple. I'm going to be a bad pedagogue here: I think this might only make sense if you go look at equations 2 and 3 in the paper, and figure out what all the terms do, and see how similar they are. The cheap summary is that if your utility is a function of the interaction history, trying to change utility functions based on interaction history just gives you back a utility function. If we try to think about what sort of process to use to change an agent's utility function, this formalism provides only one tool: look out to some future time horizon, and define an effective utility function in terms of what utility functions are possible at that future time horizon. This is different from the approximations or local utility functions we would like in practice.
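For readers who would rather not open the paper, here is a rough paraphrase of the value-learner's choice rule in simplified notation (a sketch, not a verbatim copy of Dewey's equations):

$$
y_m \;=\; \arg\max_{y}\; \sum_{yx_{m:M}} P\!\left(yx_{\leq M} \mid yx_{<m},\, y\right)\; \sum_{U} P\!\left(U \mid yx_{\leq M}\right)\, U\!\left(yx_{\leq M}\right)
$$

Here $yx_{\leq M}$ is the whole interaction history out to the time horizon $M$, and $P(U \mid yx_{\leq M})$ is the distribution over utility functions conditional on that history; collapsing the inner sum over $U$ gives back an ordinary fixed utility function over interaction histories, which is the "cheap summary" above.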

If we take this scheme and try to approximate it, for example by only looking N steps into the future, we run into problems; the agent will want to self-modify so that next timestep it only looks ahead N-1 steps, and then N-2 steps, and so on. Or more generally, many simple approximation schemes are "sticky" - from inside the approximation, an approximation that changes over time looks like undesirable value drift.

Common sense says this sort of self-sabotage should be eliminable. One should be able to really care about the underlying utility function, not just its approximation. However, this problem tends to crop up, for example whenever the part of the future you look at does not depend on which action you are considering; modifying to keep looking at the same part of the future unsurprisingly improves the results you get in that part of the future. If we want to build a paperclip maximizer, it shouldn't be necessary to figure out every single way to self-modify and penalize them appropriately.

We might evade this particular problem using some other method of approximation that does something more like reasoning about actions than reasoning about futures. The reasoning doesn't have to be logically impeccable - we might imagine an agent that identifies a small number of salient consequences of each action, and chooses based on those. But it seems difficult to show how such an agent would have good properties. This is something I'm definitely interested in.

 

[Image: a handwritten 9.]

One way to try to make things concrete is to pick a local utility function and specify rules for changing it. For example, suppose we wanted an AI to flag all the 9s in the MNIST dataset. We define a single-time-step utility function by a neural network that takes in the image and the decision of whether to flag or not, and returns a number between -1 and 1. This neural network is deterministically trained for each time step on all previous examples, trying to assign 1 to correct flaggings and -1 to mistakes. Remember, this neural net is just a local utility function - we can make a variety of AI designs involving it. The goal of this exercise is to design an AI that seems liable to make good decisions in order to flag lots of 9s.

The simplest example is the greedy agent - it just does whatever has a high score right now. This is pretty straightforward, and doesn't wirehead (unless the scoring function somehow encodes wireheading), but it doesn't actually do any planning - 100% of the smarts have to be in the local evaluation, which is really difficult to make work well. This approach seems unlikely to extend well to messy environments.
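A minimal sketch of that greedy agent, assuming a hypothetical `local_utility` network with the interface described above (the names and training routine are placeholders, not anything from the post):

```python
# Greedy 9-flagging agent. `local_utility` stands in for the neural network described
# above: it maps (image, action) to a score in [-1, 1].

ACTIONS = ["flag", "pass"]

def greedy_action(image, local_utility):
    # No planning, no lookahead: just take whatever the current local utility
    # function scores highest for this image.
    return max(ACTIONS, key=lambda action: local_utility.score(image, action))

def retrain(local_utility, history):
    # Deterministically retrain on all previous examples, pushing correct
    # flaggings toward +1 and mistakes toward -1.
    for image, action, was_correct in history:
        local_utility.fit_example(image, action, target=1.0 if was_correct else -1.0)
```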

Since Go-playing AI is topical right now, I shall digress. Successful Go programs can't get by with only smart evaluations of the current state of the board, they need to look ahead to future states. But they also can't look all the way until the ultimate time horizon, so they only look a moderate way into the future, and evaluate that future state of the board using a complicated method that tries to capture things important to planning. In sufficiently clever and self-aware agents, this approximation would cause self-sabotage to pop up. Even if the Go-playing AI couldn't modify itself to only care about the current way it computes values of actions, it might make suboptimal moves that limit its future options, because its future self will compute values of actions the 'wrong' way.

If we wanted to flag 9s using a Dewian value learner, we might score actions according to how good they will be according to the projected utility function at some future time step. If this is done straightforwardly, there's a wireheading risk - the changes to its utility function are supplied by humans who might be influenced by actions. I find it useful to apply a sort of "magic button" test - if the AI had a magic button that could rewrite human brains, would pressing that button have positive expected utility for it? If yes, then this design has problems, even though in our current thought experiment it's just flagging pictures.

To eliminate wireheading, the value learner can use a model of the future inputs and outputs and the probability of different value updates given various inputs and outputs, which doesn't model ways that actions could influence the utility updates. This model doesn't have to be right, it just has to exist. On one hand, this seems like a sort of weird doublethink, to judge based on a counterfactual where your actions don't have impacts you could otherwise expect. On the other hand, it also bears some resemblance to how we actually reason about moral information. Regardless, this agent will now not wirehead, and will want to get good results by learning about the world, if only in the very narrow sense of wanting to play unscored rounds that update its value function. If its value function and value updating made better use of unlabeled data, it would also want to learn about the world in the broader sense.

 

Overall I am somewhat frustrated, because value learners have these nice properties, but are computationally unrealistic and do not play well with approximation. One can try to get the nice properties elsewhere, such as relying on an action-suggester to not suggest wireheading, but it would be nice to be able to talk about this as an approximation to something fancier.

Upcoming LW Changes

44 Vaniver 03 February 2016 05:34AM

Thanks to the reaction to this article and some conversations, I'm convinced that it's worth trying to renovate and restore LW. Eliezer, Nate, and Matt Fallshaw are all on board and have empowered me as an editor to see what we can do about reshaping LW to meet what the community currently needs. This involves a combination of technical changes and social changes, which we'll try to make transparently and non-intrusively.

continue reading »

The AI That Pretends To Be Human

1 Houshalter 02 February 2016 07:39PM

The hard part about containing AI is restricting its output. The AI can lie, manipulate, and trick. Some speculate that it might be able to do far worse, inventing infohazards like hypnosis or brain hacking.

A major goal of the control problem is preventing AIs from doing that. Ensuring that their output is safe and useful.

A while ago I wrote about an approach to do this. The idea was to require the AI to use as little computing power as it needs to perform a task. This prevents the AI from over-optimizing. The AI won't use the full power of superintelligence, unless it really needs it.

The above method isn't perfect, because a superintelligent AI may still have super abilities, even when it has a limited compute budget. And it will still try to do all those bad things mentioned above, unless it can find a better way that uses less computing power.

We don't really want to minimize the AI's compute budget. We just want it to act more human-like. When we ask it to solve a problem, we want it to try to solve the problem as a human would solve it. All of the problems arise from the AI behaving totally differently than a human. Not necessarily from it using more computing power.

Is there any way to make an AI act like a human? Or is that as impossible as just solving FAI?

Stuart Armstrong posted an idea on how to achieve that. It involves an AI that tries to mimic output that is indistinguishable from what a human would produce, but conditional on it also solving the problem. However his idea also involves whole brain emulation and a complicated method for the AI to predict the future and avoid changing it too much.

We don't need that. The idea for the AI to try to be indistinguishable from a human is actually a really interesting control idea. It significantly limits the AI's ability to Do Sneaky Things, since humans don't do those things.

Here is my idea on how to achieve this. It's the same as my limited computing power idea mentioned earlier, but applied to "appearing human" as a goal, instead of minimizing computing power.

We start with two AIs. The first AI tries to appear human. The second AI does a sort of Turing test, and tries to tell the AI apart from an actual human.

The human AI's first goal is to produce a correct solution. It will predict how likely its solution is to be accepted. And it will try to produce a solution that has a >50% chance of working. This is a satisficing goal. That is, it doesn't optimize without end to create the best solution possible, it just has to meet some minimum threshold, then stop.

Its second goal will be to fool the second AI. It will do everything in its power to make the solution appear like it was produced by a human.

The second AI tries to predict how likely it is that its input was produced by a human. It is also conditioned on the input being a solution to the problem, so the fact that the AI solved a problem that was really hard for humans does not count against it.

This prevents the first AI from doing evil things with its output. If it tries to insert complicated infohazards or subagents into its output stream, it will be easily detected as an AI. Instead it needs to mimic humans as closely as possible.

Note that the first AI only cares about the outcome of this test. The test doesn't need to actually be performed, it can just be a threat that it is conditioned on. (The ability to condition AIs on acting as if they were in counterfactual worlds is a critical piece of this, and deserves its own writeup somewhere. It's an important tool in the FAI toolkit. It's a relatively simple method that can work even on current AIs.)

Another point is the second AI should have more computing power than the first AI. That way the first AI can't get a slight advantage over it and be able to hack it or manipulate it somehow. Instead it has to focus its efforts on being indistinguishable from a human, even by an agent smarter than itself.
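A rough pseudocode sketch of the proposed setup (the names, the candidate-generation step, and the 50% threshold are mine, used only to make the two goals explicit):

```python
# Hypothetical interface: solver_ai proposes answers and estimates correctness;
# judge_ai estimates P(human-written | the text is a correct solution).

def answer(problem, solver_ai, judge_ai):
    candidates = solver_ai.generate_candidates(problem)
    # Goal 1 (satisficing): keep only answers with > 50% estimated chance of working.
    viable = [c for c in candidates
              if solver_ai.estimate_p_correct(problem, c) > 0.5]
    # Goal 2: among viable answers, output the one the adversarial judge is most
    # likely to classify as human-written.
    return max(viable, key=lambda c: judge_ai.p_human_given_correct(problem, c))
```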

This method also relies on AI boxing. The first AI needs to be contained in a box so that its only output goes to a text terminal or the second AI. However it doesn't require any of the more complicated machinery required for other reduced impact methods.

The AIs also aren't agents, in the sense that they don't need to maximize any long-term utility function. The first AI only cares about answering the question it has been asked, and the second AI only does prediction.


An example use of this is to create an oracle AI that answers your questions. The human judges somehow whether the answer is correct, or at least good. The AI tries to produce answers which the human judge will probably like, but which also are indistinguishable from answers humans would produce.

Such an oracle would be tremendously useful. The human could ask the AI to produce new AI and FAI papers, which would help immensely speed up our progress towards true FAI.

Similar techniques could be applied to very different domains. For example controlling a robot. The AI will try to produce outputs that look like a human is controlling it.

This would require some modifications to the above method, so that the AI doesn't try to use the robot to hack its own brain or anything. This could be done by conditioning it on a counterfactual world where its outputs are going to the second AI rather than a real robot.

Another point is the AI doesn't need to mimic average humans given average resources. It could be conditioned on the human having had tons of time to come up with an answer. E.g. producing an answer that a human would have come up with given a year. Or controlling the robot the same way as a human given tons of time to practice, or in a very slowed down simulation.


I would like to note a parallel with a method in current AI research, Generative Adversarial Networks. GANs pit two AIs against each other: one tries to produce output that fools the second, and the other tries to predict which samples were produced by the first AI and which come from the actual distribution.

It's quite similar to this. GANs have been used successfully to create images that look like real images, which is a hard problem in AI research. In the future GANs might be used to produce text that is indistinguishable from human (the current method for doing that, by predicting the next character a human would type, is kind of crude.)
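For concreteness, here is what one training step of a standard GAN looks like (a generic PyTorch-style sketch of the published technique, not code from this post; `G` and `D` are assumed models, with `D` ending in a sigmoid so it outputs probabilities of shape (n, 1)):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def gan_step(G, D, real_batch, opt_G, opt_D, noise_dim=64):
    n = real_batch.size(0)
    # Discriminator: learn to score real samples as 1 and generated samples as 0.
    fake_batch = G(torch.randn(n, noise_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake_batch), torch.zeros(n, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()
    # Generator: learn to produce samples the discriminator scores as real.
    g_loss = bce(D(G(torch.randn(n, noise_dim))), torch.ones(n, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

The analogy to the two-AI proposal above is direct: the first AI plays the generator's role and the second AI plays the discriminator's, except that the "real" distribution is human-written solutions.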

Reposted from my blog.

Principia Compat. The potential Importance of Multiverse Theory

0 MakoYass 02 February 2016 04:22AM

Multiverse Theory is the science of guessing at the shape of the state space of all which exists, once existed, will exist, or exists without any temporal relation to our present. Multiverse theory attempts to model the unobservable, and it is very difficult.

Still, there's nothing that cannot be reasoned about, in some way (Tegmark's The Multiverse Hierarchy), given the right abstractions. The question many readers will ask, which is a question we ourselves˭ asked when we were first exposed to ideas like simulationism and parallel universes, is not whether we can, but whether we should, given that we have no means to causally affect any of it, and no reason to expect that it would causally affect us in a way that would be useful to predict.

We then discovered something which shed new light on the question of whether we can, and began to give an affirmative answer to the question of whether we should.

Compat, which we would like to share with you today, is a new field, or perhaps just a very complex idea, which we found in the intersection of multiverse theory, simulationism and acausal trade (well motivated by Hofstadter's Sanity and Survival, a discussion of superrational solutions to one-shot prisoner's dilemmas). Compat asks what kind of precommitments an entity (primarily, the class of living things on the threshold of their singularity) ought to make if they wanted to acausally boost the measure of their desired patterns, if not across the entire multiverse, at least across the subset of the multiverse in which they may find their future selves.

A compat thinker negotiates acausally with hypothetical agents in other universes⬨ (I imagine these negotiations look kind of like this (Yvain's short story Galactic Core, in which a newly awoken AGI has a conversation with a recursive model of galactic precursors it cannot see)). We call the resolutions of these negotiations pacts.ˣ

Life's Pact is the protocol we expect most living species to adhere to. It is the primary object of study in Compat. I'm not sure where to start explaining it, as the reasoning is... hopefully not circular, but recursive, self-referentialˑ. Hopefully the reasons we consider it a primal candidate for life's pact will become clear as I connect it back around. Well. I guess I'll start with the conditions of adherence:

    An adherent steward of Life's Pact allocates some proportion of their computational resources to doing a randomized search for viable life-supporting physical laws, simulating each with randomized initial conditions until its heat death (or, if the candidate somehow manages to have life without entropy, until some cutoff), while monitoring the living patterns the simulated universe contains, then rewarding those simulants who adhere to the pact in turn, perhaps by giving them entropy-reversing magic and improving their environment in formerly impossible ways, or, individual preferences permitting, by transplanting the adherent simulant's pattern to a more efficient second simulation not encumbered by the requirement of strict physical accuracy˅.

    Initial simulations stewarded under the pact are, from the inside, indistinguishable from toplevel god-free universes, as such, all living things must consider the possibility that they're living in a life's pact simulation. However, once an agent realizes that a steward of life's pact could grant them more utility than a godless universe could contain, they will wish to raise this probability, even if it is small.▾

    A choice made in isolation acausally increases the probability of unseen like-minded agents making, or having already made, the same choice. As such, adhering to the agreement acausally boosts the probability that one is already living under a higher complexity steward's lifepat programme (or boosts the amount of returns one would receive from the incentives imposed by the pact, if the probability of living under one already seemed overwhelming).

    Lo, the pact births itself. A being who sees no physical trace of a benevolent simulator finds reasons to behave as one, as the act, in its acausal way, engenders that simulator's existence, and earns its favor.

We think this pact is primal: *the* solution, an idea that will be arrived at by most living things and apparent to all to be a nexus concept around which a quorum can be reached, non-arbitrary, not just some single scheme that is nice and compelling but which fails to be demonstrably better than its alternatives (which would take us into the territory of Pascal's Wager or, dare I utter its name, no I daren't, you know the basilisk I'm talking about).

I do not know enough math to prove that it is primal (nor disprove it, which would be far more immediately useful to me tbh). I'm not sure anyone does, just yet, but I don't think we're too far off ˁ. If any physicists or decision theorists find these ideas interesting, your help would be appreciated, and potentially rewarded with huge heapings of utility larger than you can currently comprehend.

 

Concerns of Praxis

    I say that working on Compat theory might be rewarded, full disclosure, I'm not yet sure why. Obviously lifepat is a fairly sprawling scheme with an organically intelligent incentive structure, but... If there exists a valid pact, the AGI we are already building will infer it on its own. The pact does not have to be known to AGI researchers for that to happen, and I do not expect the sorts of people who tend to do well in AI research will be terribly perturbed by it, if their creation brings it up as it stirs. As such, at this stage it seems like there's very little that most people can do to influence humanity's adherence to life's pact, few acts that compat can currently instruct, and few incentives to be paid out to anyone yet living. An organic, intelligent scheme of incentives would seem to prefer that agnostics simply carry on with business as usual.

For the theologically inclined, though, Compat may be extremely interesting, having its god of the gaps, its heavens, its deontics. Compat, as we know it, seems to have a strange (almost auspicious) capacity to make new sense of old myths. The archetype of the steward is both monotheistic and polytheistic, of a fixed, yet unknowable nature, born of man, yet still born of god, it has a body we can touch, our own stewards, and a body beyond our comprehension that mirrors the steward of man, in the set of universes which steward our pattern. The compat cosmology and all those images of its supreme being that it contains is both created, discovered, and born of itself. The compat heaven is whatever its subjects need it to be. The gospel of compat is, as far as we can yet tell, true from all perspectives, requiring no leaps of faith, no doublethink, and no motivated reasoning.

I have never seen a religion with a greater capacity for syncretization and reconciliation.

Should the theologians of the fractured tribes decide to weave compat theory into the tapestry of their beliefs, they may then come to find their tapestries woven together. Even the schism between theists and agnostics would begin to narrow. Without this weaving together, I fear that either no coherent volition can be found or humanity's FAI will have no choice but to seal its given temporal slice of human potential into an ugly compromise. Even if life's pact cannot be formalized or prepared for by any living person, compat may open the way for the discovery of confluences between preexisting belief systems, and by that path the population 50 years from now could come to have far more compatible values than the one we see today.

As such, even if humanity's eventual adherence to life's pact cannot be significantly influenced from the present, compat is conceivably a major piece of a long running, necessary cultural project to reconcile the fractured tribes of humanity under the aesthetic of reason. If it can be proven, or disproven, we must attempt to do so.

 

ˑ Naturally, as anything that factors the conditionality of the behavior of likeminded entities needs to be, anything with a grain of introspection, from any human child who considers the golden rule to the likes of AlphaGo and Deep Blue, who model their opponents at least partially by putting themselves in their position and asking what they'd do. If you want to reason about real people rather than idealized simplifications, it's quite necessary.

⬨ The phrase "other universes" may seem oxymoronic. It's like the term "atom", whose general quality "atomic" means "indivisible", despite "atom" remaining attached to an entity that was found to be quite divisible. I don't know whether "universe" might have once referred to the multiverse, the everything, but clearly somewhere along the way, some time leading up to the coining of the contrasting term "multiverse", that must have ceased to be. If so, "universe" remained attached to the universe as we knew it, rather than the universe as it was initially defined.

▾ I make an assumption around about here, that the number of simulations being run by life in universes of a higher complexity level always *can* be raised sufficiently (given their inhabitants are cooperative) to make stewardship of one's universe likely, as a universe with more intricate physics, once they learn to leverage its intricacy, will tend to be able to create much more flexible computers and spawn more simulations than exist at lower complexity levels (if we assume a finite multiverse (we generally don't), some of those simulations might end up simulating entities that don't otherwise exist. This source of inefficiency is unavoidable). We also assume that either there is no upper limit to the complexity of life-supporting universes, or that there is no dramatic, ultimate decrease in the number of civs as complexity increases, or that the position of this limit cannot be inferred and the expected value of adherence remains high even for those who cannot be resimulated, or that, as a last resort, agents drawing up the terms of their pact will usually be at a certain level of well-approximatable sophistication that they can be simulated in high fidelity by civilizations with physics of similar intricacy.
And if you can knock out all of those defenses, I sense it may all be obviated by a shortcut through a patternist principle my partner understands better than I do about the self following the next most likely perceptual state without regard to the absolute measure of that state over the multiverse, which I'm still coming to grips with.
There is unfortunately a lot that has been thought about compat already, and it's impossible for me to convey it all at once. Anyone wishing to contribute to, refute, or propagate compat may have to be prepared to have a lot of arguments before they can do anything. That said, remember those big heaps of expected utilons that may be on offer.

ˁ MIRI has done work on cooperation in one-shot prisoner's dilemmas (acausal cooperation) http://arxiv.org/abs/1401.5577. Note, they had to build their own probability theory. Vanilla decision theory cannot get these results, and without acausal cooperation, it can't seem to capture all of humans' moral intuitions about interaction in good faith, or even model the capacity for introspection.

ˣ It was not initially clear that compat should support the definition of more than a single pact. We used to call Life's Pact just Compat, assuming that the one protocol was an inevitable result of the theory and that any others would be marginal. There may be a singleton pact, but it's also conceivable that there may be incorrigible resimulation grids that coexist in an equilibrium of disharmony with our own.
As well as that, there is a lot of self-referential reasoning that can go on in the light of acausal trade, and I think we will be less likely to fall prey to circular reasoning if we make sure that a compat thinker can always start from scratch and try to rederive the edifice's understanding of the pact from basic premises. When one cannot propose alternate pacts, criticizing the bathwater without the baby may not seem .

˭ THE TEAM:
    Christian Madsen was the subject of an experimental early-learning program in his childhood, but despite being a very young prodigy, he coasted through his teen years. He dropped out of art school in 2008, read a lot of transhumanism-related material, synthesized the initial insights behind compat, and burned himself out in the process. He is presently laboring on spec-work projects in the fields of music and programming, which he enjoys much more than structured philosophy.
    Mako Yass left the University of Auckland with a dual major BSc in Logic & Computation and Computer Science. Currently working on writing, mobile games, FOSS, and various concepts. Enjoys their unstructured work and research, but sometimes wishes they had an excuse to return to charting the hyllean theoric wilds of academic analytic philosophy, all the same.
    Hypothetical Independent Co-inventors, we're pretty sure you exist. Compat wouldn't be a very good acausal pact if you didn't. Show yourselves.
    You, if you'd like to help to develop the field of Compat (or dismantle it). Don't hesitate to reach out to us so that we can invite you to the reductionist aesthete slack channel that Christian and I like to argue in. If you are a creative of any kind who bears or at least digs the reductive nouveau mystic aesthetic, you'd probably fit in there as well.

˅ It's debatable, but I imagine that for most simulants, heaven would not require full physics simulation, in which case heavens may be far far longer-lasting than whatever (already enormous) simulation their pattern was discovered in.

February 2016 Media Thread

2 ArisKatsaris 02 February 2016 12:20AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

The Valentine’s Day Gift That Saves Lives

-6 Gleb_Tsipursky 01 February 2016 05:00PM

This is mainly of interest to Effective Altruism-aligned Less Wrongers. Thanks to Agnes Vishnevkin, Jake Krycia, Will Kiely, Jo Duyvestyn, Alfredo Parra, Jay Quigley, Hunter Glenn, and Rhema Hokama for looking at draft versions of this post. At least one aspiring rationalist who read a draft version of this post, after talking to his girlfriend, decided to adopt this new Valentine's Day tradition, which is some proof of its impact. The more it's shared, the more this new tradition might get taken up, and if you want to share it, I suggest you share the version of this post published on The Life You Can Save blog. It's also cross-posted on the Intentional Insights blog and on the EA Forum.

___________________________________________________________________________________________________________

 

The Valentine’s Day Gift That Saves Lives

 

Last year, my wife gave me the most romantic Valentine’s Day gift ever.

We had previously been very traditional with our Valentine’s Day gifts, such as fancy candy for her or a bottle of nice liquor for me. Yet shortly before Valentine’s Day, she approached me about rethinking that tradition.

Did candy or liquor truly express our love for each other? Is it more important that a gift helps the other person be happy and healthy, or that it follows traditional patterns?

Instead of candy and liquor, my wife suggested giving each other gifts that actually help us improve our mental and physical well-being, and the world as a whole, by donating to charities in the name of the other person.

She described an article she read about a study that found that people who give to charity feel happier than those that don’t give. The experimenters gave people money and asked them to spend it either on themselves or on others. Those who spent it on others experienced greater happiness.

Not only that, such giving also made people healthier. Another study showed that participants who gave to others experienced a significant decrease in blood pressure, which did not happen to those who spent money on themselves.

So my thoughtful wife suggested we try an experiment: for Valentine’s Day, we'd give to charity in the name of the other person. This way, we could make each other happier and healthier, while helping save lives at the same time. Moreover, we could even improve our relationship!

I accepted my wife’s suggestion gladly. We decided to donate $50 per person, and keep our gifts secret from each other, only presenting them at the restaurant when we went out for Valentine’s Day.

While I couldn’t predict my wife’s choice, I had an idea about how she would make it. We’ve researched charities before, and wanted to find ones where our limited dollars could go as far as possible toward saving lives. We found excellent charity evaluators that find the most effective charities and make our choices easy. Our two favorites are GiveWell, which has extensive research reports on the best charities, and The Life You Can Save, which provides an Impact Calculator that shows you the actual impact of your donation. These data-driven evaluators are part of the broader effective altruism movement that seeks to make sure our giving does the most good per dollar. I was confident my wife would select a charity recommended by a high-quality evaluator.

On Valentine’s Day, we went to our favorite date night place, a little Italian restaurant not far from our house. After a delicious cheesecake dessert, it was time for our gift exchange. She presented her gift first, a donation to the Against Malaria Foundation. With her $50 gift in my name, she bought 20 large bed-size nets that would protect families in the developing world against deadly malaria-carrying mosquitoes. In turn, I donated $50 to GiveDirectly, in her name. This charity transfers money directly to recipients in some of the poorest villages in Africa, who have the dignity of using the money as they wish. It is like giving money directly to the homeless, except dollars go a lot further in East Africa than in the US.

We were so excited by our mutual gifts! They were so much better than any chocolate or liquor could be. We both helped each other save lives, and felt so great about doing so in the context of a gift for the other person. We decided to transform this experiment into a new tradition for our family.

It was the most romantic Valentine’s Day present I ever got, and made me realize how much better Valentine’s Day can be for myself, my wife, and people all around the world. All it takes is a conversation about showing true love for your partner by improving her or his health and happiness. Is there any reason to not have that conversation?

 

Open thread, Feb. 01 - Feb. 07, 2016

3 MrMind 01 February 2016 08:24AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

AI safety in the age of neural networks and Stanislaw Lem 1959 prediction

7 turchin 31 January 2016 07:08PM

Tl;DR: Neural networks will result in a slow takeoff and an arms race between two AIs. This has some good and bad consequences for the problem of AI safety. A hard takeoff may happen after it anyway.

Summary: Neural-network-based AI can be built; it will be relatively safe, though not for long.

The neuro AI era (since 2012) features exponential growth of total AI expertise, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will probably last for about 10 to 20 years; after that, a hard takeoff of strong AI or the creation of a Singleton based on the integration of different AI systems may take place.

Neural-network-based AI implies a slow takeoff, which can take years and eventually lead to the AI's evolutionary integration into human society. A similar scenario was described by Stanisław Lem in 1959: the arms race between countries would cause a power race between AIs. The race is only possible if the self-enhancement rate is rather slow and there is data interchange between the systems. The slow takeoff will result in a world system with two competing AI-countries. Its major risk will be a war between AIs and corrosion of the value systems of the competing AIs.

A hard takeoff implies revolutionary changes within days or weeks. The slow takeoff can transform into a hard takeoff at some stage. A hard takeoff is only possible if one AI considerably surpasses its peers (the OpenAI project wants to prevent this).

 

Part 1. Limitations of explosive potential of neural nets

Every day now we hear about successes of neural networks, and we might conclude that human-level AI is around the corner. But this type of AI is not well suited for explosive self-improvement.

If an AI is based on a neural net, it is not easy for it to undergo rapid self-improvement, for several reasons:

1. A neural net's executable code is not fully transparent, for theoretical reasons: knowledge is not explicitly present within it. So even if one can read the neuron weight values, it's not easy to understand how they can be changed to improve something.

2. Educating a new neural network is a resource-consuming task. If a neuro AI decides to go the way of self-enhancement but is unable to understand its source code, a logical solution would be to 'deliver a child', i.e. to educate a new neural network. However, educating neural networks requires far more resources than running them; it requires huge databases and has a high probability of failure. All these factors will make AI self-enhancement rather slow.

3. Neural network education depends on big data volumes and new ideas coming from the external world. It means that a single AI can hardly break away: if it stops free information exchange with the external world, its level will not considerably surpass the rest of the world.

4. A neural network's power depends roughly linearly on the power of the computer it runs on, so for a neuro AI, hardware power limits its self-enhancement ability.

5. A neuro AI would be a rather big program, around 1 TB, so it could hardly leak onto the network unnoticed (at current internet speeds).

6. Even if a neuro AI reaches human level, it will not gain self-enhancement ability (because no single person can understand all aspects of science). For that, a big lab with numerous experts in different fields is needed. Additionally, it would have to run such a virtual laboratory at a rate at least 10-100 times faster than a human being to get an edge over the rest of mankind. That is, it has to be as powerful as 10,000 people or more to surpass the rest of mankind in terms of enhancement rate. This is a very high requirement. As a result, the neural net era may lead to a human-level, or even slightly superhuman, AI which is unable to self-enhance, or does so so slowly that it lags behind technical progress.

Civilization-level intelligence is the total IQ that a civilization possesses over 100 years of its history, defined as the complexity of the scientific and engineering tasks it can solve. For example, during the 20th century nuclear weapons were created, but the problems of cancer, aging and AI creation were not solved. That means those tasks have greater complexity.

For a strong AI to be able to change human destiny, its intelligence should surpass that of the entire civilization by 100-1000 times. In this case it could create a remedy for aging, cure cancer, etc. within a year. (To destroy mankind, it would need a much lower level of intelligence, like that of a sophisticated computer virus, which can,

7. The richest neural net project will get a decisive advantage (or will it?). A neural-net-based AI's power is approximately proportional to the power of the computers it runs on, the time spent, the size of the dataset it was trained on, and the number of researchers on the project. A basement project can only succeed if it has a superior master algorithm of intelligence. That said, a small project can hardly experience exponential self-improvement, because the leading AI will already have better computing power and, more importantly, better datasets and researchers. And since we now live in an AI summer with a lot of hype about AI, big companies will invest to buy the best ideas about the master algorithm (by buying startups).

An objection here is that nowadays anyone who has spent a year on machine learning and has a GPU can reach meaningful results in creating their own RNN. So perhaps the collective intelligence of private researchers may outweigh the big companies.

A detailed roadmap of AI failure modes can be found here, and a roadmap of ways of creating a safe AI here.

See also a machine learning expert's opinion and a recent FHI post.

 

Part 2. Some ideas about ensuring safety of neural nets based AI

The concepts mentioned above can be helpful in creating extra safety measures (while not absolute, they can reduce the chances of an AI getting out of control).

1. Restricting a program's access to its own memory. In today's computers, program code and data share the same memory, which is known as the von Neumann architecture. However, there is also the Harvard architecture, where data and programs are stored separately. https://en.wikipedia.org/wiki/Harvard_architecture

This kind of computer could be physically produced and equipped with indicators of attempts to read the source code. (However, there already exist neural networks which generate a new neural network for every task, giving them higher efficiency; this approach will either not work for them or will limit the AI's power.)

2. The compiled executable code and its source, written in a high-level programming language, should be stored separately. If the AI tries to reach the source, that will be a signal that it is getting out of control. With neural networks, the compiled code, its source, and the dataset used to train the network should all be stored separately.

3. The AI should be limited in knowledge useful for building AI. To improve itself, an AI needs a very precise understanding of its internal construction (the same applies to a human being). This lets us keep it under control: if not provided with this data, it will have problems with self-improvement. That is, an AI which is not itself an AI expert is somewhat safer.

4. We can use a "foggy architecture", which is very difficult to optimize because the AI's values and major features are hidden among heaps of other details. Neural networks are an architecture of this kind. But that also makes it difficult for us to put our values into it.

5. We can physically restrict the ability of the AI to create "AI children"; raising them requires huge resources, time, and data before they become more effective than their parents. An AI can create another AI, but then it will need to test it for a long time to find out whether the child observes the parent's values, whether it has any flaws, and whether it is really superior to the original version. This kind of testing is time-consuming, generally nontrivial, and in general uncomputable. The task is especially difficult for neural networks, which contain a huge number of random elements, are inexact in principle, and whose faultlessness cannot be proven. (Moreover, there are numerous examples where even a small image shift causes a recognition error.) However, this problem can be solved, because this is the way human society works: children are born, tested, and educated.

The creation of AI children may also be restricted by restricting the needed infrastructure: memory, datasets, processing power.

6. A neural-net-based agent (like a robot) will be anthropomorphic in terms of its brain structure. Even if we do not intentionally imitate the human brain, we shall get approximately the same thing. In a sense this may be good: even if these AIs supplant people, they will still be almost people, differing from normal people about as much as one generation differs from another. And being anthropomorphic, they may be more compatible with human value systems. At the same time, there may exist completely non-humanlike AI architectures (for example, if evolution is regarded as the inventor).

But the neural net world will not be Hanson's em-dominated world. An em-world may appear at a later stage, but I think exact uploads will still not be the dominant form of AI.

 

Part 3. Transition from slow to hard takeoff

In a sense, neural-net-based AI is like a chemical-fuel rocket: such rockets do fly, and can fly even across the entire solar system, but they are limited in development potential, bulky, and clumsy.

Sooner or later, using the same principle or another one, a completely different AI can be built, one which is less resource-consuming and faster at self-improvement.

If a superagent is built which can create neural networks but is not a neural network itself, it can be rather small in size and, partly due to this, evolve faster. Neural networks have rather low intelligence per unit of code. Probably the same thing could be done more optimally by reducing its size by an order of magnitude, for example by creating a program to analyze an already trained neural network and extract all the necessary information from it.

When hardware improves, in 10-20 years, multiple neural nets will be able to evolve within the same computer simultaneously or be transmitted over the Internet, which will boost their development.

A smart neuro AI could analyze all available data analysis methods and create a new AI architecture able to improve even faster.

The launch of quantum-computer-based networks could boost their optimization drastically.

There are many other promising AI directions which have not taken off yet: Bayesian networks, genetic algorithms.

The neuro AI era will feature exponential growth of humanity's total intelligence, with a doubling period of about one year, mainly due to data exchange among diverse agents and different processing methods. It will last for about 10 to 20 years (2025-2035), and after that a hard takeoff of strong AI may take place.

That is, the slow takeoff period will be a period of collective evolution of both computer science and mankind, which will enable us to adapt to the changes under way and adjust them.

Just as there are Mac and PC in the computer world, or Democrats and Republicans in politics, it is likely that two big competing AI systems will appear (plus an ecology of smaller ones). It could be Google and Facebook, or the USA and China, depending on whether the world chooses economic competition or military opposition. That is, the slow takeoff hinders the world's consolidation under a single control and instead promotes a bipolar model. While a bipolar system can remain stable for a long period of time, there is always the risk of a real war between the AIs (see Lem's quote below).

 

Part 4. In the course of the slow takeoff, AI will go through several stages that we can identify now

 While the stages may be passed rather quickly or be blurred together, we can still track them as milestones. The dates are only estimates.

1. AI autopilot. Tesla has it already.

2. AI home robot. All the prerequisites are available to build it by 2020 at the latest. This robot will be able to understand and carry out an order like "Bring my slippers from the other room". On its basis, something like a "mind-brick" may be created: a universal robot brain able to navigate natural spaces and recognize speech. This mind-brick can then be used to create more sophisticated systems.

3. AI intellectual assistant. Searching through personal documents, the ability to ask questions in natural language and receive sensible answers. 2020-2030.

4. AI human model. Very vague as yet. Could be realized by adapting a robot brain. It will be able to simulate 99% of ordinary human behavior, probably excluding problems of consciousness, complicated creative tasks, and generating innovations. 2030.

5. AI as powerful as an entire research institution, able to create scientific knowledge and upgrade itself. Could be made of numerous human models. 100 simulated people, each working 100 times faster than a human being, would probably be able to create an AI capable of self-improving faster than humans in other laboratories can. 2030-2100.

   5a. Self-improvement threshold. The AI becomes able to self-improve independently and more quickly than all of humanity.

   5b. Consciousness and qualia threshold. The AI is able not only to pass the Turing test in all cases, but has experiences and understands what it is and why.

6. Mankind-level AI. An AI possessing intelligence comparable to that of all of mankind. 2040-2100.

7. AI with intelligence 10-100 times greater than that of all of mankind. It will be able to solve the problems of aging, cancer, solar system exploration, nanorobot building, and the radical improvement of everyone's life. 2050-2100.

8. Jupiter brain: a huge AI using an entire planet's mass for computation. It could reconstruct dead people, create complex simulations of the past, and dispatch von Neumann probes. 2100-3000.

9. Galactic Kardashev level 3 AI. Several million years from now.

10. All-Universe AI. Several billion years from now.

 

Part 5. Stanisław Lem on AI, 1959, The Investigation

In his novel "The Investigation", Lem's character discusses the future of the arms race and AI:

        -------

- Well, it was sometime in '46. The nuclear race had started. I knew that when the limit was reached (I mean the maximum destructive power), the development of vehicles to deliver the bomb would begin... I mean missiles. And here is where the limit would be reached: both sides would have nuclear-warhead missiles at their disposal. And there would appear desks with the notorious buttons, thoroughly hidden somewhere. Once the button is pressed, the missiles take off. Within about 20 minutes comes finis mundi ambilateralis - the mutual end of the world. <…> Those were only the prerequisites. Once started, the arms race can't stop, you see? It must go on. When one side invents a more powerful gun, the other responds by creating harder armour. Only a collision, a war, is the limit. While this situation means finis mundi, the race must go on. The acceleration, once applied, enslaves people. But let's assume they have reached the limit. What remains? The brain. The command staff's brain. The human brain cannot be improved, so some automation must be taken up in this field as well. The next stage is an automated headquarters, or strategic computers. And here an extremely interesting problem arises. Namely, two problems in parallel. Mac Cat drew my attention to it. Firstly, is there any limit to the development of this kind of brain? It is similar to chess-playing devices. A device which can foresee the opponent's actions ten moves in advance always wins against one which foresees only eight or nine moves ahead. The deeper the foresight, the more perfect the brain. This is the first thing. <…> The creation of devices of ever greater capacity for strategic decisions means, whether we want it or not, the necessity of increasing the amount of data fed into the brain. That in turn means the increasing domination of those devices over mass processes within society. The brain can decide that the notorious button should be placed elsewhere, or that the production of a certain sort of steel should be increased - and will request loans for the purpose. If a brain like this has been created, one must submit to it. If a parliament starts discussing whether the loans are to be issued, a delay will occur. The same minute, the counterpart can gain the lead. The abolition of parliamentary decisions is inevitable in the future. Human control over the decisions of the electronic brain will narrow as the latter concentrates knowledge. Is that clear? On both sides of the ocean, two continuously growing brains appear. What is the first demand of a brain like this when, in the middle of an accelerating arms race, the next step is needed? <…> The first demand is to increase it - the brain itself! All the rest is derivative.

- In a word, your forecast is that the earth will become a chessboard, and we the pawns, to be played by two mechanical players in an eternal game?

Sisse's face was radiant with pride.

- Yes. But this is not a forecast. I am just drawing conclusions. The first stage of the preparatory process is coming to an end; the acceleration is growing. I know all this sounds unlikely. But it is the reality. It really exists!

— <…> And in this connection, what did you offer at that time?

- Agreement at any price. Strange as it sounds, ruin is a lesser evil than the chess game. It is awful, this lack of illusions, you know.

----- 

Part 6. The primary question is: Will strong AI be built during our lifetime?

That is, is this a question about the good of future generations (the kind of question an effective altruist, rather than an ordinary person, is concerned about), or a question for my own near-term planning?

If AI is built during my lifetime, it may lead either to radical life extension by means of various technologies and the realization of all sorts of good things too numerous to list here, or to my death, and probably pain, if this AI is unfriendly.

It depends on when the AI is built and on my expected lifetime (taking into account the life extension to be obtained from weaker AI versions and scientific progress on one hand, and its reduction due to global risks unrelated to AI on the other).

Note that we should consider different dates for different purposes. If we want to avoid AI risks, we should take the earliest date of its possible appearance (for example, the 10th percentile of the distribution). And if we are counting on its benefits, then the median.

Since the neuro-revolution began, the approximate doubling time of AI algorithm efficiency (mainly in image recognition) has been about one year. It is difficult to quantify this process, as task complexity does not change linearly, and the patterns that remain unrecognized are always the harder ones.

Now, an important factor is the radical change in attitude towards AI research. The winter is over; the unconstrained summer, with all its hype, has begun. It has brought huge investment into AI research (chart), more enthusiasts and employees in the field, and bolder research. It is now embarrassing not to have your own AI project. Even KAMAZ is developing a friendly AI system. The entry threshold has dropped: one can learn basic neural net tuning skills within a year, and heaps of tutorials are available. Supercomputer hardware has become cheaper. Also, a guaranteed market for AIs has emerged in the form of self-driving cars and, in the future, home robots.

If algorithm improvement keeps up the pace of about one doubling per year, that means roughly a factor of 1,000,000 over 20 years, which would certainly amount to creating a strong AI beyond the self-improvement threshold. In this case, a lot of people (including me) have a good chance of living to that moment and gaining immortality.
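
(A quick sanity check on that arithmetic, as a minimal sketch in Python; the one-doubling-per-year rate is simply the assumption made above.)

    # Cumulative improvement factor from one doubling per year over 20 years.
    doublings_per_year = 1
    years = 20
    factor = 2 ** (doublings_per_year * years)
    print(factor)  # 1048576 -- roughly a factor of a million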

 

Conclusion

Even a non-self-improving neural AI system may be unsafe if it achieves global domination (and has bad values) or if it comes into confrontation with an equally large opposing system. Such a confrontation may result in a nuclear or nanotech-based war, and the human population may be held hostage, especially if both systems have pro-human value systems (blackmail).

In any case, the human extinction risks from a slow-takeoff AI are not inevitable and are manageable on an ad hoc basis. A slow takeoff does not prevent a hard takeoff at a later stage of AI development.

A hard takeoff is probably the next logical stage after a soft takeoff, as it continues the trend of accelerating progress. We can see the same process in biological evolution: the slow enlargement of mammalian brains over the last tens of millions of years was replaced by the almost-hard takeoff of Homo sapiens' intelligence, which now threatens the ecological balance.

A hard takeoff is a global catastrophe almost by definition, and extraordinary measures are needed to make it safe. Maybe the period of almost-human-level neural-net-based AI will help us create instruments of AI control. Maybe we could use simpler neural AIs to control a self-improving system.

Another option is that the neural AI age will be very short and is already almost over. In 2016, Google DeepMind beat Go using a complex approach combining several AI architectures. If such a trend continues, we could get strong AI before 2020, and we are completely unready for it.

 

 

A Medical Mystery: Thyroid Hormones, Chronic Fatigue and Fibromyalgia

15 johnlawrenceaspden 31 January 2016 01:27PM

  Summary:

  Chronic Fatigue and Fibromyalgia look very like Hypothyroidism
  Thyroid Patients aren't happy with the diagnosis and treatment of Hypothyroidism
  It's possible that it's not too difficult to fix CFS/FMS with thyroid hormones
  I believe that there's been a stupendous cock-up that's hurt millions.
  Less Wrong should be interested, because it could be a real example of how bad inference can cause the cargo cult sciences to come to false conclusions.

 

I believe that I've come across a genuine puzzle, and I wonder if you can help me solve it. This problem is complicated, and subtle, and has confounded and defeated good people for forty years. And yet there are huge and obvious clues. No-one seems to have conducted the simple experiments which the clues suggest, even though many clever people have thought hard about it, and the answer to the problem would be very valuable. And so I wonder what it is that I am missing.

 

I am going to tell a story which rather extravagantly privileges a hypothesis that I have concocted from many different sources, but a large part of it is from the work of the late Doctor John C Lowe, an American chiropractor who claimed that he could cure Fibromyalgia.

 

I myself am drowning in confirmation bias to the point where I doubt my own sanity. Every time I look for evidence to disconfirm my hypothesis, I find only new reasons to believe. But I am utterly unqualified to judge. Three months ago I didn't know what an amino acid was. And so I appeal to wiser heads for help.

 

Crocker's Rules on this. I suspect that I am being the most spectacular fool, but I can't see why, and I'd like to know.

 

Setting the Scene

 

Chronic Fatigue Syndrome, Myalgic Encephalitis, and Fibromyalgia are 'new diseases'. There is considerable dispute as to whether they even exist, and if so how to diagnose them. They all seem to have a large number of possible symptoms, and in any given case, these symptoms may or may not occur with varying severity.

 

As far as I can tell, if someone claims that they're 'Tired All The Time', then a competent doctor will first of all check that they're getting enough sleep and are not unduly stressed, then rule out all of the known diseases that cause fatigue (there are a great many!), and finally diagnose one of the three 'by exclusion', which means that there doesn't appear to be anything wrong, except that you're ill.

 

If widespread pain is one of the symptoms, it's Fibromyalgia Syndrome (FMS). If there's no pain, then it's CFS or ME. These may or may not be the same thing, but Myalgic Encephalitis is preferred by patients because it's greek and so sounds like a disease. Unfortunately Myalgic Encephalitis means 'hurty muscles brain inflammation', and if one had hurty muscles, it would be Fibromyalgia, and if one had brain inflammation, it would be something else entirely.

 

Despite the widespread belief that these are 'somatoform' diseases (all in the mind), the severity of them ranges from relatively mild (tired all the time, can't think straight), to devastating (wheelchair bound, can't leave the house, can't open one eye because the pain is too great).

 

All three seem to have come spontaneously into existence in the 1970s, and yet searches for the responsible infective agent have proved fruitless. Neither have palliative measures been discovered, apart from the tried and true method of telling the sufferers that it's all in their heads.

 

The only treatments that have proved effective are Cognitive Behavioural Therapy / Graded Exercise. A Cochrane Review reckoned that they do around 15% over placebo in producing a measurable alleviation of symptoms. I'm not very impressed. CBT/GE sound a lot like 'sports coaching', and I'm pretty sure that if we thought of 'Not Being Very Good at Rowing' as a somatoform disorder, then I could produce an improvement over placebo in a measurable outcome in ten percent of my victims without too much trouble.

 

But any book on CFS will tell you that the disease was well known to the Victorians, under the name of neurasthenia. The hypothesis that God lifted the curse of neurasthenia from the people of the Earth as a reward for their courage during the wars of the early twentieth century, while well supported by the clinical evidence, has a low prior probability.

 

We face therefore something of a mystery, and in the traditional manner of my people, a mystery requires a Just-So Story:

 

How It Was In The Beginning

 

In the dark days of Victoria, the brilliant physician William Miller Ord noticed large numbers of mainly female patients suffering from late-onset cretinism.

 

These patients, exhausted, tired, stupid, sad, cold, fat and emotional, declined steeply, and invariably died.

 

As any man of decent curiosity would, Dr Ord cut their corpses apart, and in the midst of the carnage noticed that the thyroid, a small butterfly-shaped gland in the throat, was wasted and shrunken.

 

One imagines that he may have thought to himself: "What has killed them may cure them."

 

After a few false starts and a brilliant shot in the dark by the brave George Redmayne Murray, Dr Ord secured a supply of animal thyroid glands (cheaply available at any butcher, sautéed with nutmeg and basil) and fed them to his remaining patients, who were presumably by this time too weak to resist.

 

They recovered miraculously, and completely.

 

I'm not sure why Dr Ord isn't better known, since this appears to have been the first time in recorded history that something a doctor did had a positive effect.

 

Dr Ord's syndrome was named Ord's Thyroiditis, and it is now known to be an autoimmune disease where the patient's own antibodies attack and destroy the thyroid gland. In Ord's thyroiditis, there is no goiter.

 

A similar disease, where the thyroid swells to form a disfiguring deformity of the neck (goiter), was described by Hakaru Hashimoto in 1912 (who rather charmingly published in German), and as part of the war reparations of 1946 it was decided to confuse the two diseases under the single name of Hashimoto's Thyroiditis. Apart from the goiter, both conditions share a characteristic set of symptoms, and were easily treated with animal thyroid gland, with no complications.

 

Many years before, in 1835, a fourth physician, Robert James Graves, had described a different syndrome, now known as Graves' Disease, which has as its characteristic symptoms irritability, muscle weakness, sleeping problems, a fast heartbeat, poor tolerance of heat, diarrhoea, and weight loss. Unfortunately Dr Graves could not think how to cure his eponymous horror, and so the disease is still named after him.

 

The Horror Spreads

 

Victorian medicine being what it was, we can assume that animal glands were sprayed over and into any wealthy person unwise enough to be remotely ill in the vicinity of a doctor. I seem to remember a number of jokes about "monkey glands" in PG Wodehouse, and indeed a man might be tempted to assume that chimpanzee parts would be a good substitute for humans. Supply issues seem to have limited monkey glands to a few millionaires worried about impotence, and it may be that the corresponding procedure inflicted on their wives has come down to us as Hormone Replacement Therapy.

 

Certainly anyone looking a bit cold, tired, fat, stupid, sad or emotional is going to have been eating thyroids. We can assume that in a certain number of cases, this was just the thing, and I think it may also be safe to assume that a fair number of people who had nothing wrong with them at all died as a result of treatment, although the fact that animal thyroid is still part of the human food chain suggests it can't be that dangerous.

 

I mean seriously, these people use high pressure hoses to recover the last scraps of meat from the floors of slaughterhouses, they're not going to carefully remove all the nasty gristly throat-bits before they make ready meals, are they?

 

The Armour Sausage company, owner of extensive meat-packing facilities in Chicago, Illinois, and thus in possession of a large number of pig thyroids which, if not quite surplus to requirements, at the very least faced a market sluggish to non-existent as foodstuffs, brilliantly decided to sell them in freeze-dried form as a cure for whatever ails you.

 

 

Some Sort of Sanity Emerges, in a Decade not Noted for its Sanity

 

Around the time of the second world war, doctors became interested in whether their treatments actually helped, and an effort was made to determine what was going on with thyroids and the constellation of sadness that I will henceforth call 'hypometabolism', which is the set of symptoms associated with Ord's thyroiditis. Jumping the gun a little, I shall also define 'hypermetabolism' as the set of symptoms associated with Graves' disease.

 

The thyroid gland appeared to be some sort of metabolic regulator, in some ways analogous to a thermostat. In hypometabolism, every system of the body is running slow, and so it produces a vast range of bad effects, affecting almost every organ. Different sufferers can have very different symptoms, and so diagnosis is very difficult.

 

Dr Broda Barnes decided that the key symptom of hypometabolism was a low core body temperature. By careful experiment he established that in patients with no symptoms of hypometabolism the average temperature of the armpit on waking was 98 degrees Fahrenheit (or 36.6 Celsius). He believed that temperature variation of +/- 0.2 degrees Fahrenheit was unusual enough to merit diagnosis. He also seems to have believed, in the manner of the proverbial man with a hammer, that all human ailments without exception were caused by hypometabolism, and to have given freeze-dried thyroid to almost everyone he came into contact with, to see if it helped. A true scientist. Doctor Barnes became convinced that fully 40% of the population of America suffered from hypometabolism, and recommended Armour's Freeze Dried Pig Thyroid to cure America's ills.

 

In a brilliant stroke, Freeze Dried Pig's Thyroid was renamed 'Natural Dessicated Thyroid', which almost sounds like the sort of thing you might take in sound mind. I love marketing. It's so clever.

 

America being infested with religious lunatics, and Chicago being infested with nasty useless gristly bits of cow's throat, led almost inevitably to a second form of 'Natural Dessicated Thyroid' on the market.

 

Dr Barnes' hypometabolism test never seems to have caught on. There are several ways your temperature can go outside his 'normal' range, including fever (too hot), starvation (too cold), alcohol (too hot), sleeping under too many duvets (too hot), sleeping under too few duvets (too cold). Also mercury thermometers are a complete pain in the neck, and take ten minutes to get a sensible reading, which is a long time to lie around in bed carefully doing nothing so that you don't inadvertently raise your body temperature. To make the situation even worse, while men's temperature is reasonably constant, the body temperature of healthy young women goes up and down like the Assyrian Empire.

 

Several other tests were proposed. One of the most interesting is the speed of the Achilles Tendon Reflex, which is apparently super-fast in hypermetabolism, and either weirdly slow or has a freaky pause in it if you're running a bit cold. Drawbacks of this test include 'It's completely subjective, give me something with numbers in it', and 'I don't seem to have one, where am I supposed to tap the hammer-thing again?'.

 

By this time, neurasthenia was no longer a thing. In the same way that spiritualism was no longer a thing, and the British Empire was no longer a thing.

 

As far as we know, Chronic Fatigue Syndrome was not a thing either, and neither was Fibromyalgia (which is just Chronic Fatigue Syndrome but it hurts), nor Myalgic Encephalitis. There was something called 'Myalgic Neurasthenia' in 1934, but it seems to have been a painful infectious disease and they thought it was polio.

 

 

Finally, Science

 

It turned out that the purpose of the thyroid gland is to make hormones which control the metabolism. It takes in the amino acid tyrosine, and it takes in iodine. It releases Thyroglobulin, mono-iodo-tyrosine (MIT), di-iodo-tyrosine (DIT), thyroxine (T4) and triiodothyronine (T3) into the blood. The chemistry is interesting but too complicated to explain in a just-so story.

 

I believe that we currently think that thyroglobulin, MIT and DIT are simply by-products of the process that makes T3 and T4.

 

T3 is the hormone. It seems to control the rate of metabolism in all cells. T4 has something of the same effect, but is much less active, and called a 'prohormone'. Its main purpose seems to be to be deiodinated to make more T3. This happens outside the thyroid gland, in the other parts of the body ('peripheral conversion'). I believe mainly in the liver, but to some extent in all cells.

 

Our forefathers knew about thyroxine (T4, or thyronine-with-four-iodines-attached), and triiodothyronine (T3, or thyronine-with-three-iodines-attached)

 

It seems to me that just from the names, thyroxine was the first one to be discovered. But I'm not sure about that. You try finding a history-of-endocrinology website. At any rate they seem to have known about T4 and T3 fairly early on.

 

The mystery of Graves', Ord's and Hashimoto's thyroid diseases was explained.

 

Ord's and Hashimoto's are diseases where the thyroid gland under-produces (hypothyroidism). The metabolism of all cells slows down. As might be expected, this causes a huge number of effects, which seem to manifest differently in different sufferers.

 

Graves' disease is caused by the thyroid gland over-producing (hyperthyroidism). The metabolism of all cells speeds up. Again, there are a lot of possible symptoms.

 

All three are thought to be autoimmune diseases. Some people think that they may be different manifestations of the same disease. They are all fairly common.

 

Dessicated thyroid cures hypothyroidism because the ground-up thyroids contain T4 and T3, as well as lots of thyroglobulin, MIT and DIT, and they are absorbed by the stomach. They get into the blood and speed up the metabolism of all cells. By titrating the dose carefully you can restore roughly the correct levels of the thyroid hormones in all tissues, and the patient gets better. (Titration is where you change something carefully until you get it right)

 

The theory has considerable explanatory power. It explains cretinism, which is caused either by a genetic disease, or by iodine deficiency in childhood. If you grow up in an iodine deficient area, then your growth is stunted, your brain doesn't develop properly, and your thyroid gland may become hugely enlarged. Presumably because the brain is desperately trying to get it to produce more thyroid hormones, and it responds by swelling.

 

Once upon a time, this swelling (goitre) was called 'Derbyshire Neck'. I grew up near Derbyshire, and I remember an old rhyme: "Derbyshire born, Derbyshire bred, strong in the arm, and weak in the head". I always thought it was just an insult. Maybe not. Cretinism was also popular in the Alps, and there is a story of an English traveller in Switzerland of whom it was remarked that he would have been quite handsome if only he had had a goitre. So it must have been very common there.

 

But at this point I am *extremely suspicious*. The thyroid/metabolic regulation system is ancient (universal in vertebrates, I believe), crucial to life, and it really shouldn't just go wrong. We should suspect an infectious cause, a recent environmental influence which we haven't had time to adjust to, an evolved defence against an infectious disease, or, just possibly, a recently evolved but as yet imperfect defence against a less recent environmental change.

 

(Cretinism in particular is very strange. Presumably animals in iodine-deficient areas aren't cretinous, and yet they should be. Perhaps a change to a farming from a hunter-gatherer lifestyle has increased our dependency on iodine from crops, which crops have sucked what little iodine occurs naturally out of the soil?)

 

It's also not entirely clear to me what the thyroid system is *for*. If there's just a particular rate that cells are supposed to run at, then why do they need a control signal to tell them that? I could believe that it was a literal thermostat, designed to keep the body temperature constant at the best speed for the various biological reactions, but it's universal in *vertebrates*. There are plenty of vertebrates which don't keep a constant temperature.

 

 

The Fall of Dessicated Thyroid

 

There turned out to be some problems with Natural Dessicated Thyroid (NDT).

 

Firstly, there were many competing brands and types, and even if you stuck to one brand the quality control wasn't great, so the dose you'd be taking would have been a bit variable.

 

Secondly, it's fucking pig's thyroid from an abattoir. It could have all sorts of nasty things in it. Also, ick.

 

Thirdly, it turned out that pigs made quite a lot more T3 in their thyroids than humans do. It also seems that T3 is better absorbed by the gut than T4 is, so someone taking NDT to compensate for their own underproduction will have too much of the active hormone compared to the prohormone. That may not be good news.

 

With the discovery of 'peripheral conversion', and the possibility of cheap clean synthesis, it was decided that modern scientific thyroid treatment would henceforth be by synthetic T4 (thyroxine) alone. The body would make its own T3 from the T4 supply.

 

Alarm bells should be ringing at this point. Apart from the above points, I'm not aware of any great reason for the switch from NDT to thyroxine in the treatment of hypothyroidism, but it seems to have been pretty much universal, and it seems to have worked.

 

Aware of the lack of T3, doctors compensated by giving people more T4 than was in their pig-thyroid doses. And there don't seem to have been any complaints.

 

Over the years, NDT seems to have become a crazy fringe treatment despite there not being any evidence against it. It's still a legal prescription drug, but in America it's only prescribed by eccentrics. In England a doctor prescribing it would be, at the very least, summoned to explain himself before the GMC.

 

However, since it was (a) sold over the counter for so many years, and (b) part of the food chain, it is still perfectly legal to sell as a food supplement in both countries, as long as you don't make any medical claims for it. And the internet being what it is, the prescription-only synthetic hormones T3 and T4 are easily obtained without a prescription. These are extremely powerful hormones which have an effect on metabolism. If 'body-builders' and sports cheats aren't consuming all three in vast quantities, I am a Dutchman.

 

The Clinical Diagnosis of Hypothyroidism

 

We pass now to the beginning of the 1970s.

 

Hypothyroidism is ferociously difficult to diagnose. People complain of 'Tired All The Time' well, ... all the time, and it has literally hundreds of causes.

 

And it must be diagnosed correctly! If you miss a case of hypothyroidism, your patient is likely to collapse and possibly die at some point in the medium-term future. If you diagnose hypothyroidism where it isn't, you'll start giving the poor bugger powerful hormones which he doesn't need and *cause* hypermetabolism.

 

The last word in 'diagnosis by symptoms' was the absolutely excellent paper:

 

Statistical Methods Applied To The Diagnosis Of Hypothyroidism by W. Z. Billewicz et al.

 

Connoisseurs will note the clever and careful application of 'machine learning' techniques, before there were machines to learn!

 

One important thing to note is that this is a way of separating hypothyroid cases from other cases of tiredness at the point where people have been referred by their GP to a specialist at a hospital on suspicion of hypothyroidism. That changes the statistics remarkably. This is *not* a way of diagnosing hypothyroidism in the general population. But if someone's been to their GP (general practitioner, the doctor that a British person likely makes first contact with) and their GP has suspected their thyroid function might be inadequate, this test should probably still work.

 

For instance, they consider Physical Tiredness, Mental Lethargy, Slow Cerebration, Dry Hair, and Muscle Pain, the classic symptoms of hypothyroidism, present in most cases, to be indications *against* the disease.

 

That's because if you didn't have these things, you likely wouldn't have got that far. So in the population they're seeing (of people whose doctor suspects they might be hypothyroid), they're not of great value either way, but their presence is likely the reason why the person's GP has referred them even though they've really got iron-deficiency anaemia or one of the other causes of fatigue.

 

In their population, the strongest indicators are 'Ankle Jerk' and 'Slow Movements', subtle hypothyroid symptoms which aren't likely to be present in people who are fatigued for other reasons.

 

But this absolutely isn't a test you should use for population screening! In the general population, the classic symptoms are strong indicators of hypothyroidism.

 

Probability Theory is weird, huh?
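
Here is a minimal sketch of that weirdness in Python. The numbers are invented purely for illustration (they are not taken from Billewicz et al.); the point is only that the same classic symptom can carry very different evidential weight in the general population and in a pre-filtered referral population.

    def p_hypothyroid_given_symptom(prior, p_symptom_if_hypo, p_symptom_if_not):
        """Bayes' theorem: P(hypothyroid | classic symptom such as dry hair)."""
        p_symptom = prior * p_symptom_if_hypo + (1 - prior) * p_symptom_if_not
        return prior * p_symptom_if_hypo / p_symptom

    # General population: hypothyroidism is rare and the symptom is uncommon otherwise,
    # so the symptom multiplies the probability several-fold.
    print(p_hypothyroid_given_symptom(0.02, 0.8, 0.1))   # ~0.14, up from 0.02

    # Referred population: everyone was sent in *because* they look hypothyroid, so the
    # symptom is nearly as common among the non-hypothyroid referrals and tells you
    # almost nothing -- which is why Billewicz et al. could even score it negatively.
    print(p_hypothyroid_given_symptom(0.30, 0.8, 0.75))  # ~0.31, barely up from 0.30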

 

Luckily, there were lab tests for hypothyroidism too, but they were expensive, complicated, annoying and difficult to interpret. Billewicz et al used them to calibrate their test, and recommend them for the difficult cases where their test doesn't give a clear answer.

 

And of course, the final test is to give them thyroid treatment and see whether they get better. If you're not sure, go slow, watch very carefully and look for hyper symptoms.

 

Overconfidence is definitely the way to go. If you don't diagnose it and it is, that's catastrophe. If it isn't, but you diagnose it anyway, then as long as you're paying attention the hyper symptoms are easy enough to spot, and you can pull back with little harm done.

 

A Better Way

 

It should be obvious from the above that the diagnosis of hypothyroidism by symptoms is absolutely fraught with complexity, and very easy to get wrong, and if you get it wrong the bad way, it's a disaster. Doctors were absolutely screaming for a decisive way to test for hypothyroidism.

 

Unfortunately, testing directly for the levels of thyroid hormones is very difficult, and the tests of the 1960s weren't accurate enough to be used for diagnosis.

 

The answer came from an understanding of how the thyroid regulatory system works, and the development of an accurate blood test for a crucial signalling hormone.

 

Three structures control the level of thyroid hormones in the blood.

 

The thyroid gland produces the hormones and secretes them into the blood.

 

Its activity is controlled by the hormone thyrotropin, or Thyroid-Stimulating Hormone (TSH). Lots of TSH works the thyroid hard. In the absence of TSH the thyroid relaxes but doesn't switch off entirely. However the basal level of thyroid activity in the absence of TSH is far too low.

 

TSH is controlled by the pituitary gland, a tiny structure attached to the brain.

 

The pituitary itself is controlled, via Thyrotropin-Releasing Hormone (TRH), by the hypothalamus, which is part of the brain.

 

This was thought to be a classic example of a feedback control system.

 

hypothalamus->pituitary->thyroid

 

It turns out that the level of thyrotropin TSH in the blood is exquisitely sensitive to the levels of thyroid hormones in the blood.

 

Administer thyroid hormone to a patient and their TSH level will rapidly adjust downwards by an easily detectable amount.

 

So:

 

In hypothyroidism, where the thyroid has failed, the body will be desperately trying to produce more thyroid hormones, and the TSH level will be extremely high.

 

In Graves' Disease, this theory says, where the thyroid has grown too large, and the metabolism is running damagingly fast, the body will be, like a central bank trying to stimulate growth in a deflationary economy by reducing interest rates, 'pushing on a piece of string'. TSH will be undetectable.
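
A toy version of that feedback story, in Python. Everything here is invented for illustration (the real dose-response curves are log-linear and far messier); it only shows why the theory predicts very high TSH when the gland fails and vanishing TSH when it runs on its own.

    def steady_state(gland_gain, autonomous_output=0.0, iters=1000):
        """Fixed point of a toy pituitary-thyroid loop.
        gland_gain: T4 produced per unit TSH (small = failing gland).
        autonomous_output: T4 produced regardless of TSH (crudely, Graves')."""
        t4 = 1.0
        for _ in range(iters):
            tsh = max(0.0, 4.0 * (1.0 - t4) + 1.0)  # pituitary: more TSH when T4 is low
            t4 = 0.9 * t4 + 0.1 * (gland_gain * tsh + autonomous_output)  # gland output vs clearance
        return round(t4, 2), round(tsh, 2)

    print(steady_state(gland_gain=1.0))                         # (1.0, 1.0)   healthy
    print(steady_state(gland_gain=0.1))                         # (0.36, 3.57) failing gland: low T4, high TSH
    print(steady_state(gland_gain=1.0, autonomous_output=2.0))  # (2.0, 0.0)   autonomous gland: high T4, TSH suppressed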

 

The original TSH test was developed in 1965, by the startlingly clever method of radio-immuno-assay.

 

[For reasons that aren't clear to me, rather than being expressed in grams/litre, or mols/litre, the TSH test is expressed in 'international units/liter'. But I don't think that that's important]

 

A small number of people in whom there was no suspicion of thyroid disease were assessed, and the 'normal range' of TSH was calculated.

 

Again, 'endocrinology history' resources are not easy to find, but the first test was not terribly sensitive, and I think originally hyperthyroidism was thought to result in a complete absence of TSH, and that the highest value considered normal was about 4 (milli-international-units/liter).

 

This apparently pretty much solved the problem of diagnosing thyroid disorders.

 

Forgetfulness

 

It's no longer necessary to diagnose hypo- and hyper-thyroidism by symptoms. It was error prone anyway, and the question is easily decided by a cheap and simple test.

 

Natural Dessicated Thyroid is one with Nineveh and Tyre.

 

No doctor trained since the 1980s knows much about hypothyroid symptoms.

 

Medical textbooks mention them only in passing, as an unweighted list of classic symptoms. You couldn't use that for diagnosis of this famously difficult disease.

 

If you suspect hypothyroidism, you order a TSH test. If the value of TSH is very low, that's hyperthyroidism. If the value is very high then that's hypothyroidism. Otherwise you're 'euthyroid' (greek again, good-thyroid), and your symptoms are caused by some other problem.

 

The treatment for hyperthyroidism is to damage the thyroid gland. There are various ways. This often results in hypothyroidism. *For reasons that are not terribly well understood*.

 

The treatment for hypothyroidism is to give the patient sufficient thyroxine (T4) to cause TSH levels to come back into their normal range.

 

The conditions hyperthyroidism and hypothyroidism are now *defined* by TSH levels.

 

Hypothyroidism, in particular, a fairly common disease, is considered to be such a solved problem that it's usually treated by the GP, without involving any kind of specialist.

 

 

Present Day

 

It was found that the traditional amount of thyroxine (T4) administered to cure hypothyroid patients was in fact too high. The amount of T4 that had always been used to replace the hormones that had once been produced by a thyroid gland now dead, destroyed, or surgically removed appeared now to be too much. That amount causes suppression of TSH to below its normal range. The brain, theory says, is asking for the level to be reduced.

 

The amount of T4 administered in such cases (there are many) has been reduced by a factor of around two, to the level where it produces 'normal' TSH levels in the blood. Treatment is now titrated to produce the normal levels of TSH.

 

TSH tests have improved enormously since their introduction, and are on their third or fourth generation. The accuracy of measurement is very good indeed.

 

It's now possible to detect the tiny remaining levels of TSH in overtly hyperthyroid patients, so hyperthyroidism is also now defined by the TSH test.

 

In England, the normal range is 0.35 to 5.5 mIU/L. This is considered to be the definition of 'euthyroidism'. If your levels are normal, you're fine.
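
To make the current definition concrete, here is a minimal sketch in Python using the UK range just quoted (the cutoffs are the ones above; the function itself is only an illustration, not clinical guidance):

    def classify_by_tsh(tsh_miu_per_l, lower=0.35, upper=5.5):
        """Thyroid status as it is now defined: by TSH alone."""
        if tsh_miu_per_l < lower:
            return "hyperthyroid"
        if tsh_miu_per_l > upper:
            return "hypothyroid"
        return "euthyroid"   # whatever the symptoms

    print(classify_by_tsh(12.0))   # hypothyroid
    print(classify_by_tsh(2.1))    # euthyroid -- 'you're fine'
    print(classify_by_tsh(0.05))   # hyperthyroid

The rest of this post is essentially about the patients who land in that middle branch while still having the symptoms.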

 

If you have hypothyroid symptoms but a normal TSH level, then your symptoms are caused by something else. Look for Anaemia, look for Lyme Disease. There are hundreds of other possible causes. Once you rule out all the other causes, then it's the mysterious CFS/FMS/ME, for which there is no cause and no treatment.

 

If your doctor is very good, very careful and very paranoid, he might order tests of the levels of T4 and T3 directly. But actually the direct T4 and T3 tests, although much more accurate than they were in the 1960s, are quite badly standardised, and there's considerable controversy about what they actually measure. Different assay techniques can produce quite different readings. They're expensive. It's fairly common, and on the face of it perfectly reasonable, for a lab to refuse to conduct the T3 and T4 tests if the TSH level is normal.

 

It's been discovered that quite small increases in TSH actually predict hypothyroidism. Minute changes in thyroid hormone levels, which don't produce symptoms, cause detectable changes in the TSH levels. Normal, but slightly high values of TSH, especially in combination with the presence of thyroid related antibodies (there are several types), indicate a slight risk of one day developing hypothyroidism.

 

There's quite a lot of controversy about what the normal range for TSH actually is. Many doctors consider that the optimal range is 1-2, and target that range when administering thyroxine. Many think that just getting the value in the normal range is good enough. None of this is properly understood, to understate the case rather dramatically.

 

There are new categories, 'sub-clinical hypothyroidism' and 'sub-clinical hyperthyroidism', which are defined by abnormal TSH tests in the absence of symptoms. There is considerable controversy over whether it is a good idea to treat these, in order to prevent subtle hormonal imbalances which may cause difficult-to-detect long term problems.

 

Everyone is a little concerned about accidentally over-treating people, (remember that hyperthyroidism is now defined by TSH<0.35).

 

Hyperthyroidism has long been associated with Atrial Fibrillation (a heart problem), and Osteoporosis, both very nasty things. A large population study in Denmark recently revealed that there is a greater incidence of Atrial Fibrillation in sub-clinical hyperthyroidism, and that hypothyroidism actually has a 'protective effect' against Atrial Fibrillation.

 

It's known that TSH has a circadian rhythm, higher in the early morning, lower at night. This makes the test rather noisy, as your TSH level can be doubled or halved depending on what time of day you have the blood drawn.

 

But the big problems of the 1960s and 1970s are completely solved. We are just tidying up the details.

 

Doubt

 

Many hypothyroid patients complain that they suffer from 'Tired All The Time', and have some of the classic hypothyroid symptoms, even though their TSH levels have been carefully adjusted to be in the normal range.

 

I've no idea how many, but opinions range from 'the great majority of patients are perfectly happy' to 'around half of hypothyroid sufferers have hypothyroid symptoms even though they're being treated'.

 

The internet is black with people complaining about it, and there are many books and alternative medicine practitioners trying to cure them, or possibly trying to extract as much money as possible from people in desperate need of relief from an unpleasant, debilitating and inexplicable malaise.

 

THE PLURAL OF ANECDOTE IS DATA.

 

Not good data, to be sure. But if ten people mention to you in passing that the sun is shining, you are a damned fool if you think you know nothing about the weather.

 

It's known that TSH ranges aren't 'normally distributed' (in the sense of Gauss/the bell curve distribution) in the healthy population.

 

If you log-transform them, they do look a bit more normal.
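
A minimal sketch of what "more normal after a log-transform" looks like, with invented distribution parameters (not taken from any reference-range study):

    import numpy as np

    rng = np.random.default_rng(0)
    # If log(TSH) is roughly normal, raw TSH is right-skewed: a long tail of high
    # values, mean above median, and a 'normal range' that isn't symmetric.
    tsh = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=100_000)  # invented parameters
    print(round(np.median(tsh), 2), round(tsh.mean(), 2))        # ~1.5 vs ~1.7
    print(np.percentile(tsh, [2.5, 97.5]).round(2))              # ~[0.56, 4.0]: asymmetric
    print(np.percentile(np.log(tsh), [2.5, 97.5]).round(2))      # roughly symmetric about log(1.5)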

 

The American Academy of Clinical Biochemists, in 2003, decided to settle the question once and for all. They carefully screened out anyone with even the slightest sign that there might be anything wrong with their thyroid at all, and measured their TSH very accurately.

 

In their report, they said (this is a direct quote):

 

In the future, it is likely that the upper limit of the serum TSH euthyroid reference range will be reduced to 2.5 mIU/L because >95% of rigorously screened normal euthyroid volunteers have serum TSH values between 0.4 and 2.5 mIU/L.

 

Many other studies disagree, and propose wider ranges for normal TSH.

 

But if the AACB report were taken seriously, it would lead to diagnosis of hypothyroidism in vast numbers of people who are perfectly healthy! In fact the levels of noise in the test would put people whose thyroid systems are perfectly normal in danger of being diagnosed and inappropriately treated.

 

For fairly obvious reasons, biochemists have been extremely, and quite properly, reluctant to take the report of their own professional body seriously. And yet it is hard to see where the AACB have gone wrong in their report.

 

Neurasthenia is back.

 

A little after the time of the introduction of the TSH test, new forms of 'Tired All The Time' were discovered.

 

As I said, CFS and ME are just two names for the same thing. Fibromyalgia Syndrome (FMS) is much worse, since it is CFS with constant pain, for which there is no known cause and from which there is no relief. Most drugs make it worse.

 

But if you combine the three things (CFS/ME/FMS), then you get a single disease, which has a large number of very non-specific symptoms.

 

These symptoms are the classic symptoms of 'hypometabolism'. Any doctor who has a patient who has CFS/ME/FMS and hasn't tested their thyroid function is *de facto* incompetent. I think the vast majority of medical people would agree with this statement.

 

And yet, when you test the TSH levels in CFS/ME/FMS sufferers, they are perfectly normal.

 

All three/two/one are appalling, crippling, terrible syndromes which ruin people's lives. They are fairly common. You almost certainly know one or two sufferers. The suffering is made worse by the fact that most people believe that they're psychosomatic, which is a polite word for 'imaginary'.

 

And the people suffering are mainly middle-aged women. Middle-aged women are easy to ignore. Especially stupid middle-aged women who are worried about being overweight and obviously faking their symptoms in order to get drugs which are popularly believed to induce weight loss. It's clearly their hormones. Or they're trying to scrounge up welfare benefits. Or they're trying to claim insurance. Even though there's nothing wrong with them and you've checked so carefully for everything that it could possibly be.

 

But it's not all middle aged women. These diseases affect men, and the young. Sometimes they affect little children. Exhaustion, stupidity, constant pain. Endless other problems as your body rots away. Lifelong. No remission and no cure.

 

And I have Doubts of my Own

 

And I can't believe that careful, numerate Billewicz and his co-authors would have made this mistake, but I can't find where the doctors of the 1970s checked for the sensitivity of the TSH test.

 

Specificity, yes. They tested a lot of people who hadn't got any sign of hypothyroidism for TSH levels. If you're well, then your TSH level will be in a narrow range, which may be 0-6, or it may be 1-2. Opinions are weirdly divided on this point in a hard to explain way.

 

But Sensitivity? Where's the bit where they checked for the other arm of the conditional?

 

The bit where they show that no-one who's suffering from hypometabolism, and who gets well when you give them Dessicated Thyroid, had, on first contact, TSH levels outside the normal range.

 

If you're trying to prove A <=> B, you can't just prove A => B and call it a day. You couldn't get that past an A-level maths student. And certainly anyone with a science degree wouldn't make that error. Surely? I mean you shouldn't be able to get that past anyone who can reason their way out of a paper bag.

 

I'm going to say this a third time, because I think it's important and maybe it's not obvious to everyone.

 

If you're trying to prove that two things are the same thing, then proving that the first one is always the second one is not good enough.

 

IF YOU KNOW THAT THE KING OF FRANCE IS ALWAYS FRENCH, YOU DO *NOT* KNOW THAT ANYONE WHO IS FRENCH IS KING OF FRANCE.
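
In test terms, that is the difference between specificity (checked) and sensitivity (apparently not). A toy calculation, with numbers invented purely to make the shape of the gap visible:

    # Hypothetical: 1,000 patients with hypometabolic symptoms who genuinely respond
    # to thyroid hormone -- the people the test is supposed to catch.
    responders = 1000

    # Suppose, for illustration only, that 700 of them have TSH outside the normal
    # range at first contact. Measuring TSH in healthy volunteers (the specificity
    # arm) tells you nothing about the other 300.
    flagged_by_tsh = 700

    sensitivity = flagged_by_tsh / responders
    print(sensitivity)        # 0.7
    print(1 - sensitivity)    # 0.3 -- the arm of the conditional that was never checked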

 

It's possible, of course, that I've missed this bit. As I say, 'History of Endocrinology' is not one of those popular, fashionable subjects that you can easily find out about.

 

I wonder if they just assumed that the thyroid system was a thermostat. The analogy is still common today.

 

But it doesn't look like a thermostat to me. The thyroid system with its vast numbers of hormones and transforming enzymes is insanely, incomprehensibly complicated. And very poorly understood. And evolutionarily ancient. It looks as though originally it was the system that coordinated metamorphosis. Or maybe it signalled when resources were high enough to undergo metamorphosis. But whatever it did originally in our most ancient ancestors, it looks as though the blind watchmaker has layered hack after hack after hack on top of it on the way to us.

 

Only the thyroid originally, controlling major changes in body plan in tiny creatures that metamorphose.

 

Of course, humans metamorphose too, but it's all in the womb, and who measures thyroid levels in the unborn when they still look like tiny fish?

 

And of course, humans undergo very rapid growth and change after we are born. Especially in the brain. Baby horses can walk seconds after they're born. Baby humans take months to learn to crawl. I wonder if that's got anything to do with cretinism.

 

And I'm told that baby humans have very high hormone levels. I wonder why they need to be so hot? If it's a thermostat, I mean.

 

But then on top of the thyroid, the pituitary. I wonder what that adds to the system? If the thyroid's just a thermostat, or just a device for keeping T4 levels constant, why can't it just do the sensing itself?

 

What evolutionary process created the pituitary control over the thyroid? Is that the thermostat bit?

 

And then the hypothalamus, controlling the pituitary. Why? Why would the brain need to set the temperature when the ideal temperature of metabolic reactions is always 37C in every animal? That's the temperature everything's designed for. Why would you dial it up or down, to a place where the chemical reactions that you are don't work properly?

 

I can think of reasons why. Perhaps you're hibernating. Many of our ancestors must have hibernated. Maybe it's a good idea to slow the metabolism sometimes. Perhaps to conserve your fat supplies. Your stored food.

 

Perhaps it's a good idea to slow the metabolism in times of famine?

 

Perhaps the whole calories in/calories out thing is wrong, and people whose energy expenditure goes over their calorie intake have slow metabolisms, slowly sacrificing every bodily function including immune defence in order to avoid starvation.

 

I wonder at the willpower that could keep an animal sane in that state. While its body does everything it can to keep its precious fat reserves high so that it can get through the famine.

 

And then I remember about Anorexia Nervosa, where young women who want to lose weight starve themselves to the point where they no longer feel hungry at all. Another mysterious psychological disease that's just put down to crazy females. We really need some female doctors.

 

And I remember about Seth Roberts' Shangri-La Diet, that I tried, to see if it worked, some years ago, just because it was so weird, where by eating strange things, like tasteless oil and raw sugar, you can make your appetite disappear, and lose weight. It seemed to work pretty well, to my surprise. Seth came up with it while thinking about rats. And apparently it works on rats too. I wonder why it hasn't caught on.

 

It seems, my female friends tell me, that a lot of diets work well for a bit, but then after a few weeks the effect just stops. If we think of a particular diet as a meme, this would seem to be its infectious period, where the host enthusiastically spreads the idea.

 

And I wonder about the role of the thyronine de-iodinating enzymes, and the whole fantastically complicated process of stripping the iodines and the amino acid bits from thyroxine in various patterns that no-one understands, and what could be going on there if the thyroid system were just a simple thermostat.

 

And I wonder about reports I am reading where elite athletes are finding themselves suffering from hypothyroidism in numbers far too large to be credible, unless it were, say, a physical response to calorie intake falling short of calorie output.

 

I've been looking ever so hard to find out why the TSH test, or any of the various available thyroid blood tests are a good way to assess the function of this fantastically complicated and very poorly understood system.

 

But every time I look, I just come up with more reasons to believe that they don't tell you very much at all.

 

 

The Mystery

 

Can anyone convince me that the converse arm has been carefully checked?

 

That everyone who's suffering from hypometabolism, and who gets well when you give them Desiccated Thyroid, has, before you fix them, TSH levels outside the normal range.

 

In other words, that we haven't just thrown, through carelessness, a long-standing, perfectly safe, well-tested treatment, for a horrible disabling disease that often causes excruciating pain, that the Victorians knew how to cure, and that the people of the 1950s and 60s routinely cured, away.
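
To make the worry concrete, here is a minimal sketch with entirely invented numbers, showing how the forward arm can hold perfectly while the converse arm fails:

    # A minimal sketch with made-up numbers, just to illustrate the logic.
    # Suppose "true case" means: hypometabolic, and gets well on thyroid treatment.
    true_cases = 1000          # hypothetical patients who would benefit
    flagged_and_true = 400     # of those, the number with TSH outside the normal range
    flagged_and_not_true = 0   # assume the forward arm holds perfectly: every abnormal
                               # TSH result belongs to a true case

    sensitivity = flagged_and_true / true_cases
    positive_predictive_value = flagged_and_true / (flagged_and_true + flagged_and_not_true)

    print(f"PPV (forward arm): {positive_predictive_value:.2f}")   # 1.00: every positive is right
    print(f"Sensitivity (converse arm): {sensitivity:.2f}")        # 0.40: 60% of cases are missed

Under these invented numbers the test never gives a false alarm, and yet most of the people it was supposed to help are told their TSH is normal.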

Identifying bias. A Bayesian analysis of suspicious agreement between beliefs and values.

7 Stefan_Schubert 31 January 2016 11:29AM

Here is a new paper of mine (12 pages) on suspicious agreement between beliefs and values. The idea is that if your empirical beliefs systematically support your values, then that is evidence that you arrived at those beliefs through a biased belief-forming process. This is especially so if those beliefs concern propositions which aren't probabilistically correlated with each other, I argue.
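
A toy version of the arithmetic (not taken from the paper; the alignment probabilities are assumptions chosen only to show the shape of the argument):

    # How strongly does across-the-board agreement between beliefs and values
    # favour "biased" over "unbiased"? Assume that under an unbiased process each
    # independent empirical question is 50% likely to come out on the side your
    # values favour, and under a biased process 90% likely.
    n_questions = 10          # hypothetical independent, uncorrelated propositions
    p_align_unbiased = 0.5
    p_align_biased = 0.9

    likelihood_unbiased = p_align_unbiased ** n_questions   # ~0.00098
    likelihood_biased = p_align_biased ** n_questions        # ~0.35

    bayes_factor = likelihood_biased / likelihood_unbiased   # ~357 in favour of bias
    print(f"Bayes factor for 'biased', given perfect alignment: {bayes_factor:.0f}")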

I have previously written several LW posts on these kinds of arguments (here and here; see also mine and ClearerThinking’s political bias test) but here the analysis is more thorough. See also Thrasymachus' recent post on the same theme.

[LINK] How A Lamp Took Away My Reading And A Box Brought It Back

8 CronoDAS 30 January 2016 04:55PM

By Ferrett Steinmetz

Ferrett isn't officially a Rationality Blogger, but he posts things that seem relevant fairly often. This one is in the spirit of "Beware Trivial Inconveniences". It's the story of how he realized that a small change in his environment led to a big change in his behavior...

[moderator action] The_Lion and The_Lion2 are banned

51 Viliam_Bur 30 January 2016 02:09AM

Accounts "The_Lion" and "The_Lion2" are banned now. Here is some background, mostly for the users who weren't here two years ago:

 

User "Eugine_Nier" was banned for retributive downvoting in July 2014. He keeps returning to the website using new accounts, such as "Azathoth123", "Voiceofra", "The_Lion", and he keeps repeating the behavior that got him banned originally.

The original ban was permanent. It will be enforced on all future known accounts of Eugine. (At random moments, because moderators sometimes feel too tired to play whack-a-mole.) This decision is not open to discussion.

 

Please note that the moderators of LW are the opposite of trigger-happy. Not counting spam, on average fewer than one account per year is banned. I am writing this explicitly, to avoid possible misunderstanding among the new users. Just because you have read about someone being banned, it doesn't mean that you are now at risk.

Most of the time, LW discourse is regulated by the community voting on articles and comments. Stupid or offensive comments get downvoted; you lose some karma, then everyone moves on. In rare cases, moderators may remove specific content that goes against the rules. The account ban is only used in the extreme cases (plus for obvious spam accounts). Specifically, on LW people don't get banned for merely not understanding something or disagreeing with someone.

 

What does "retributive downvoting" mean? Imagine that in a discussion you write a comment that someone disagrees with. Then in a few hours you will find that your karma has dropped by hundreds of points, because someone went through your entire comment history and downvoted all comments you ever wrote on LW; most of them completely unrelated to the debate that "triggered" the downvoter.

Such behavior is damaging to the debate and the community. Unlike downvoting a specific comment, this kind of mass downvoting isn't used to correct a faux pas, but to drive a person away from the website. It has an especially strong impact on new users, who don't know what is going on, so they may mistake it for a reaction of the whole community. But even for experienced users it creates an "ugh field" around certain topics known to invoke the reaction. Thus a single user has achieved disproportionate control over the content and the user base of the website. This is not desired, and will be punished by the site owners and the moderators.

To avoid rules lawyering, there is no exact definition of how much downvoting breaks the rules. The rule of thumb is that you should upvote or downvote each comment based on the value of that specific comment. You shouldn't vote on the comments regardless of their content merely because they were written by a specific user.

Yoshua Bengio on AI progress, hype and risks

7 V_V 30 January 2016 01:45AM

LINK

Yoshua Bengio, one of the world's leading experts on machine learning, and neural networks in particular, explains his view on these issues in an interview. Relevant quotes:

There are people who are grossly overestimating the progress that has been made. There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power. The hype isn’t about whether the stuff we’re doing is useful or not—it is. But people underestimate how much more science needs to be done. And it’s difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical

[ Recursive self-improvement ] It’s not how AI is built these days. Machine learning means you have a painstaking, slow process of acquiring information through millions of examples. A machine improves itself, yes, but very, very slowly, and in very specialized ways. And the kind of algorithms we play with are not at all like little virus things that are self-programming. That’s not what we’re doing.

Right now, the way we’re teaching machines to be intelligent is that we have to tell the computer what is an image, even at the pixel level. For autonomous driving, humans label huge numbers of images of cars to show which parts are pedestrians or roads. It’s not at all how humans learn, and it’s not how animals learn. We’re missing something big. This is one of the main things we’re doing in my lab, but there are no short-term applications—it’s probably not going to be useful to build a product tomorrow.

We ought to be talking about these things [ AI risks ]. The thing I’m more worried about, in a foreseeable future, is not computers taking over the world. I’m more worried about misuse of AI. Things like bad military uses, manipulating people through really smart advertising; also, the social impact, like many people losing their jobs. Society needs to get together and come up with a collective response, and not leave it to the law of the jungle to sort things out.

I think it's fair to say that Bengio has joined the ranks of AI researchers like his colleagues Andrew Ng and Yann LeCun who publicly express skepticism towards imminent human-extinction-level AI.


Positive utility in an infinite universe

1 casebash 29 January 2016 11:40PM

Content Note: Highly abstract situation with existing infinities

This post will attempt to resolve the problem of infinities in utilitarianism. The arguments are very similar to an argument for total utilitarianism over other forms which I'll most likely write up at some point (my previous post was better as an argument against average utilitarianism, rather than an argument in favour of total utilitarianism).

In the Less Wrong Facebook group, Gabe Bf posted a challenge to save utilitarianism from the problem of infinities. The original problem comes from a paper by Nick Bostrom.

I believe that I have quite a good solution to this problem that allows us to systemise comparing infinite sets of utility, but this post focuses on justifying why we should take it to be axiomatic that adding another person with positive utility is good, and on why the results that seem to contradict this lack credibility. Let's call this the Addition Axiom or A. We can also consider the Finite Addition Axiom (which only applies when we add utility into a universe with a finite number of people); call this A0.

Let's consider what other alternative axioms we might want to take instead. One is the Infinite Indifference Axiom or I, that is, that we should be indifferent if both options provide infinite total utility (of the same order of infinity). Another option would be the Remapping Axiom (or R), which would assert that if we can surjectively map a group of people G onto another group H so that each g from G is mapped onto a person h from H where u(g) >= u(h), then v(H) <= v(G), where v represents the value of a particular universe (it doesn't necessarily map onto the real numbers or represent a complete ordering). Using the Remapping Axiom twice implies that we should be indifferent between an infinite series of ones and the same series with a 0 at one spot. This means that the Remapping Axiom is incompatible with the Addition Axiom. We can also consider the Finite Remapping Axiom (R0), which is where we limit the Remapping Axiom to remapping a finite number of elements.

First, we need to determine what good properties a statement we wish to take as an axiom should have. This is my first time trying to establish an axiom so formally, so I will admit that this list is not going to be perfect.

  • Uses well-understood and regular objects, properties or processes. If the components are not understood well, it is highly likely that our attempt to determine the truth of a statement will be misguided.
  • An axiom close to the territory is more reliable than one in the map because it is very easy to make subtle errors when constructing a map.
  • Leads to minimally weird consequences.
  • Extends any simpler axioms it includes in a logical way. If the axiom is an extension of a simpler alternative axiom, then it should be intuitive that the result would extend to the larger set; there should be reasons to expect it to behave the same way.

Let's look first at the Infinite Indifference Axiom. Firstly, it deals purely with infinite objects, which are known to often behave irregularly and to result in many problems on which there is no consensus. Secondly, it exists in the map to some extent (but not that much at all). In the territory, there are just objects; infinity is our attempt to transpose certain object configurations into a number system. Thirdly, it doesn't seem to extend from the finite numbers very well. If one situation provides 5 total utility and another provides 5 total utility, then it seems logical to treat them as the same, as 5 is equal to 5. However, infinity doesn't seem to be equal to itself in the same way. Infinity plus 1 is still infinity. We can remove infinite dots from infinite dots and end up with 1 or 2 or 3... or infinity. Further, this axiom leads to the result that we are indifferent between someone with large positive utility being created and someone with large negative utility being created. This is massively unintuitive, but I will admit it is subjective. I think this would make a very poor axiom, but that doesn't mean it is false (Pythagoras' Theorem would make a poor axiom too).

On the other hand, deciding between the Remapping Axiom and the Addition Axiom will be much closer. On the first criterion I think the Addition Axiom comes out ahead. It involves making only a single change to the situation, a primitive change if you will. In contrast, the Remapping Axiom involves remapping an infinite number of objects. This is still a relatively simple change, but it is definitely more complicated, and permutations of infinite series are well known to behave weirdly.

On the second criterion, the Addition Axiom (by itself) doesn't lead to any really weird results. We'll get some weird results in subsequent posts, but that's because we are going to make some very weird changes to the situation, not because of the Addition Axiom itself. The failure of the Remapping Axiom could very well be because mappings lack the resolution to distinguish between different situations. We know that an infinite series can map onto itself, half of itself or itself twice, which lends a huge amount of support to the lack-of-resolution theory.
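
The maps being gestured at here are easy to write down; a quick sketch, shown on a finite prefix only:

    # The standard maps behind "an infinite series can map onto itself, half of
    # itself, or itself twice", displayed on the first twelve naturals.
    N = range(12)

    onto_half = [2 * n for n in N]                    # naturals -> even naturals (misses the odds)
    onto_itself_twice = [(n // 2, n % 2) for n in N]  # naturals -> two interleaved copies

    print(list(N))              # 0, 1, 2, ...
    print(onto_half)            # 0, 2, 4, ... every even number is hit exactly once
    print(onto_itself_twice)    # (0,0), (0,1), (1,0), (1,1), ... both copies get covered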

On the other hand, the Addition Axiom being false (because we've assumed the Remapping Axiom) is truly bizarre. It basically states that good things are good. Nonetheless, while this may seem very convincing to me, people's intuitions vary, so the more relevant material for people with a different intuition is the material above suggesting that the Remapping Axiom lacks resolution.

On the third criterion, a new object appearing is something that can occur in the territory. Infinite remappings initially seem to be more in the map than the territory, but it is very easy to imagine a group of objects moving one space to the right, so this assertion seems unjustified. That is, infinity is in the map as discussed before, but an infinite group of objects and their movements can still be in the territory. However, when we think about it again, we see that we have reduced the infinite group of objects X to a set of objects positioned, for example, at X = 0, 1, 2... This is a massive hint about the content of my following posts.

Lastly, the Addition Axiom in the infinite case is a natural extension of the Finite Addition Axiom. In A0 the principle is that whatever else happens in the universe is irrelevant, and there is no reason for this to change in the infinite case. The Remapping Axiom also seems like a very natural extension of the finite case, so I'll call this criterion a draw.

In summary, if you don't already find the Addition Axiom more intuitive than the Remapping Axiom, the main reasons to favour the Addition Axiom are: 1) it deals with better understood objects, 2) it is closer to the territory than the map, and 3) there are good reasons to suspect that the Remapping Axiom lacks resolution. Of these reasons, I believe the 3rd is by far the most persuasive; I consider the other two more to be hints than anything else.

I only dealt with the Infinite Indifference Axiom and the Remapping Axioms, but I'm sure other people will suggest their own alternative Axioms which need to be compared.

Increasing an existing person's utility, instead of creating a new person with positive utility, works in exactly the same way. Also, this post is just the start: I will provide a systematic analysis of infinite universes over the coming days, plus an FAQ, conditional on there being enough high-quality questions.

 

 

 

Clearing An Overgrown Garden

9 Anders_H 29 January 2016 10:16PM

(tl;dr: In this post, I make some concrete suggestions for LessWrong 2.0.)

Less Wrong 2.0

A few months ago, Vaniver posted some ideas about how to reinvigorate Less Wrong. Based on comments in that thread and based on personal discussions I have had with other members of the community, I believe there are several different views on why Less Wrong is dying. The following are among the most popular hypotheses:

(1) Pacifism has caused our previously well-kept garden to become overgrown

(2) The aversion to politics has caused a lot of interesting political discussions to move away from the website

(3) People prefer posting to their personal blogs.

With this background, I suggest the following policies for Less Wrong 2.0.  This should be seen only as a starting point for discussion about the ideal way to implement a rationality forum. Most likely, some of my ideas are counterproductive. If anyone has better suggestions, please post them to the comments.

Moderation Policy:

There are four levels of users:  

  1. Users
  2. Trusted Users 
  3. Moderators
  4. Administrator
Users may post comments and top level posts, but their contributions must be approved by a moderator.

Trusted users may post comments and top level posts which appear immediately. Trusted user status is awarded by a 2/3 vote among the moderators.

Moderators may approve comments made by non-trusted users. There should be at least 10 moderators to ensure that comments are approved within an hour of being posted, preferably quicker. If there is disagreement between moderators, the matter can be discussed on a private forum. Decisions may be altered by a simple majority vote.

The administrator (preferably Eliezer or Nate) chooses the moderators.
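
A rough data-model sketch of the approval workflow described above (my reading of the proposal, not an official spec; the names are made up):

    from enum import Enum

    class Role(Enum):
        USER = 1
        TRUSTED_USER = 2
        MODERATOR = 3
        ADMINISTRATOR = 4

    class Post:
        def __init__(self, author_role: Role, body: str):
            self.body = body
            # Trusted users and above appear immediately; plain users wait for approval.
            self.visible = author_role != Role.USER

        def approve(self, approver_role: Role):
            if approver_role in (Role.MODERATOR, Role.ADMINISTRATOR):
                self.visible = True

    draft = Post(Role.USER, "My first top-level post")
    assert not draft.visible        # held in the moderation queue
    draft.approve(Role.MODERATOR)
    assert draft.visible            # now shown on the site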

Personal Blogs:


All users are assigned a personal subdomain, such as Anders_H.lesswrong.com. When publishing a top-level post, users may click a checkbox to indicate whether the post should appear only on their personal subdomain, or also in the Less Wrong discussion feed. The commenting system is shared between the two access pathways. Users may choose a design template for their subdomain. However, when the post is accessed from the discussion feed, the default template overrides the user-specific template. The personal subdomain may include a blogroll, an about page, and other information. Users may purchase a top-level domain as an alias for their subdomain

Standards of Discourse and Policy on Mindkillers:

All discussion in Less Wrong 2.0 is seen explicitly as an attempt to exchange information for the purpose of reaching Aumann agreement. In order to facilitate this goal, communication must be precise. Therefore, all users agree to abide by Crocker's Rules for all communication that takes place on the website.  

However, this is not a license for arbitrary rudeness.  Offensive language is permitted only if it is necessary in order to point to a real disagreement about the territory. Moreover, users may not repeatedly bring up the same controversial discussion outside of their original context.

Discussion of politics is explicitly permitted as long as it adheres to the rules outlined above. All political opinions are permitted (including opinions which are seen as taboo by society as large), as long as the discussion is conducted with civility and in a manner that is suited for dispassionate exchange of information, and suited for accurate reasoning about the consequences of policy choice. By taking part in any given discussion, all users are expected to pre-commit to updating in response to new information.

Upvotes:

Only trusted users may vote. There are two separate voting systems.  Users may vote on whether the post raises a relevant point that will result in interesting discussion (quality of contribution) and also on whether they agree with the comment (correctness of comment). The first is a property both of the comment and of the user, and is shown in their user profile.  The second scale is a property only of the comment. 

All votes are shown publicly (for an example of a website where this is implemented, see for instance dailykos.com).  Abuse of the voting system will result in loss of Trusted User Status. 
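
A sketch of the two-axis, publicly attributed voting scheme (illustrative only; the field names are invented):

    from collections import defaultdict

    votes = []   # each vote: (voter, comment_id, axis, value); only trusted users may vote

    def cast_vote(voter, comment_id, axis, value):
        assert axis in ("quality", "agreement") and value in (+1, -1)
        votes.append((voter, comment_id, axis, value))   # stored publicly, not anonymously

    def tally(comment_id):
        totals = defaultdict(int)
        for voter, cid, axis, value in votes:
            if cid == comment_id:
                totals[axis] += value
        return dict(totals)

    cast_vote("alice", 42, "quality", +1)     # "interesting point..."
    cast_vote("alice", 42, "agreement", -1)   # "...but I think it's wrong"
    print(tally(42))   # {'quality': 1, 'agreement': -1}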

How to Implement This

After the community comes to a consensus on the basic ideas behind LessWrong 2.0, my preference is for MIRI to implement it as a replacement for Less Wrong. However, if for some reason MIRI is unwilling to do this, and if there is sufficient interest in going in this direction, I offer to pay server costs. If necessary, I also offer to pay some limited amount for someone to develop the codebase (based on Open Source solutions). 

Other Ideas:


MIRI should start a professionally edited rationality journal (For instance called "Rationality") published bi-monthly. Users may submit articles for publication in the journal. Each week, one article is chosen for publication and posted to a special area of Less Wrong. This replaces "main". Every two months, these articles are published in print in the journal.  

The idea behind this is as follows:
(1) It will incentivize users to compete for the status of being published in the journal.
(2) It will allow contributors to put the article on their CV.
(3) It may bring in high-quality readers who are unlikely to read blogs.  
(4) Every week, the published article may be a natural choice for a discussion topic at Less Wrong meetups.

Weekly LW Meetups

1 FrankAdamek 29 January 2016 04:15PM

This summary was posted to LW Main on January 22nd. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

The Number Choosing Game: Against the existence of perfect theoretical rationality

0 casebash 29 January 2016 01:04AM

In order to ensure that this post delivers what it promises, I have added the following content warnings:

Content Notes:

Pure Hypothetical Situation: The claim that perfect theoretical rationality doesn't exist is restricted to a purely hypothetical situation. No claim is being made that this applies to the real world. If you are only interested in how things apply to the real world, then you may be disappointed to find out that this is an exercise left to the reader.

Technicality Only Post: This post argues that perfect theoretical rationality doesn't exist due to a technicality. If you were hoping for this post to deliver more, well, you'll probably be disappointed.

Contentious Definition: This post (roughly) defines perfect rationality as the ability to maximise utility. This is based on Wikipedia, which defines rational agents as an agent that: "always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions". 

We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number, naming infinity is not allowed.

Clearly, the agent that names x+1 is more rational than the agent that names x (and behaves the same in every other situation). However, there does not exist a completely rational agent, because there does not exist a number that is higher than every other number. Instead, the agent who picks 1 is less rational than the agent who picks 2 who is less rational than the agent who picks 3 and so on until infinity. There exists an infinite series of increasingly rational agents, but no agent who is perfectly rational within this scenario.

Furthermore, this hypothetical doesn't take place in our universe, but in a hypothetical universe where we are all celestial beings with the ability to choose any number however large without any additional time or effort, no matter how long it would take a human to say that number. Since this statement doesn't appear to have been clear enough (judging from the comments), we are explicitly considering a theoretical scenario and no claims are being made about how this might or might not carry over to the real world. In other words, I am claiming that the existence of perfect rationality does not follow purely from the laws of logic. If you are going to be difficult and argue that this isn't possible and that even hypothetical beings can only communicate a finite amount of information, we can imagine that there is a device that provides you with utility the longer you speak, and that the utility it provides is exactly equal to the utility you lose by going to the effort of speaking, so that overall you are indifferent to the required speaking time.

In the comments, MattG suggested that the issue was that this problem assumed unbounded utility. That's not quite the problem. Instead, we can imagine that you can name any number less than 100, but not 100 itself. Further, as above, saying a long number either doesn't cost you utility or you are compensated for it. Regardless of whether you name 99 or 99.9 or 99.9999999, you are still choosing a suboptimal decision. But if you never stop speaking, you don't receive any utility at all.
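
A quick way to see the bounded version's problem (a toy sketch, nothing more):

    # The bounded variant: you may name any number strictly below 100, so the
    # achievable utilities have a supremum of 100 that no single choice attains.
    choices = [99, 99.9, 99.99, 99.999, 99.9999]
    for x in choices:
        print(x, "shortfall:", round(100 - x, 6))
    # Every choice is dominated by a later one; the shortfall shrinks forever but
    # never reaches zero, and "just say 100" or "never stop speaking" are not options.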

I'll admit that in our universe there is a perfectly rational option which balances speaking time against the utility you gain, given that we only have a finite lifetime and that you want to avoid dying in the middle of speaking the number, which would result in no utility gained. However, it is still notable that a perfectly rational being cannot exist within this hypothetical universe. How exactly this result applies to our universe isn't clear, but that's the challenge I'll set for the comments. Are there any realistic scenarios where the non-existence of perfect rationality has important practical applications?

Furthermore, there isn't an objective line between rational and irrational. You or I might consider someone who chose the number 2 to be stupid. Why not at least go for a million or a billion? But, such a person could have easily gained a billion, billion, billion utility. No matter how high a number they choose, they could have always gained much, much more without any difference in effort.

I'll finish by providing some examples of other games. I'll call the first game the Exploding Exponential Coin Game. We can imagine a game where you can choose to flip a coin any number of times. Initially you have 100 utility. Every time it comes up heads, your utility triples, but if it comes up tails, you lose all your utility. Furthermore, let's assume that this agent isn't going to raise the Pascal's Mugging objection. We can see that the agent's expected utility will increase the more times they flip the coin, but if they commit to flipping it unlimited times, they can't possibly gain any utility. Just as before, they have to pick a finite number of times to flip the coin, but again there is no objective justification for stopping at any particular point.
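
The expected-utility arithmetic behind this is a one-liner (a small sketch; the numbers follow directly from the setup above):

    # Expected utility of committing to exactly n flips in the Exploding Exponential
    # Coin Game: you keep your (tripled) utility only if all n flips come up heads.
    def expected_utility(n, start=100):
        p_all_heads = 0.5 ** n
        payoff_if_survive = start * 3 ** n
        return p_all_heads * payoff_if_survive     # equals start * 1.5**n

    for n in [0, 1, 5, 10, 50]:
        print(n, expected_utility(n))
    # Expected utility grows without bound in n, yet the "limit" policy of never
    # stopping wins nothing: the probability of surviving forever is zero.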

Another example is what I'll call the Unlimited Swap game. At the start, one agent has an item worth 1 utility and another has an item worth 2 utility. At each step, the agent with the item worth 1 utility can choose to accept the situation and end the game, or can swap items with the other player. If they choose to swap, then the player who now has the 1-utility item has an opportunity to make the same choice. In this game, waiting forever is actually an option. If your opponents all have finite patience, then this is the best option. However, there is a chance that your opponent has infinite patience too; in that case you'll both miss out on the extra 1 utility, as you will wait forever. I suspect that an agent could do well by having some chance of waiting forever, but also some chance of stopping after a high finite number of swaps. Increasing this finite number will always make you do better, but again, there is no maximum waiting time.

(This seems like such an obvious result that I imagine there's extensive discussion of it within the game theory literature somewhere. If anyone has a good paper, that would be appreciated.)

Link to part 2: Consequences of the Non-Existence of Rationality 

 

Hell bans and moderators abusing their power

-5 The_Lion2 29 January 2016 01:02AM

Apparently some moderator has become drunk with power and is attempting to impose hell-bans.

 

What you should do immediately:

 

1) Log out of your account and make sure you can still see your comments.

 

2) If you can't, create a new account and post a reply in the comments so we can know how extensive the problem is.

 

I am posting this so that we can have a transparent discussion about moderation, something at least one moderator apparently doesn't want.

 

Also, note to the moderator in question: if this post disappears, it will be resubmitted.  Attempting to suppress transparency will not work.

 

Studying Your Native Language

4 Crux 28 January 2016 07:23PM

I've spent many thousands of hours over the past several years studying foreign languages and developing a general method for foreign-language acquisition. But now I believe it's time to turn this technique in the direction of my native language: English.

Most people make a distinction between one's native language and one's second language(s). But anyone who has learned how to speak with a proper accent in a second language and spent a long enough stretch of time neglecting their native language to let it begin rusting and deteriorating will know that there's no essential difference.

When the average person learns new words in their native language, they imagine that they're learning new concepts. When they study new vocabulary in a foreign language, however, they recognize that they're merely acquiring hitherto-unknown words. They've never taken a step outside the personality their childhood environment conditioned into them. When the only demarcation of thingspace that you know is the semantic structure of your native language, you're bound to believe, for example, that the World is Made of English.

Why study English? I'm already fluent, as you can tell. I have the Magic of a Native Speaker.

Let's put this nonsense behind us and recognize that the map is not the territory, that English is just another map.

My first idea is that it may be useful to develop a working knowledge of the fundamentals of English etymology. A quick search suggests that the majority of words in English have a French or Latin origin. Would it be useful to make an Anki deck with the goal of learning how to readily recognize the building blocks of the English language, such as seeing that the "cardi" in "cardiology", "cardiogram", and "cardiograph" comes from an Ancient Greek word meaning "heart" (καρδιά)?

Besides that, I plan to make a habit of adding any new words I encounter into Anki with their context. For example, let's say I'm reading the introduction to A Treatise of Human Nature by David Hume. I encounter the term "proselytes", and upon looking it up in a dictionary I understand the meaning of the passage. I include the spelling of the simplest version of the word ("proselyte"), along with an audio recording of the pronunciation. I'll also toy with adding various other information, such as a definition I wrote myself, synonyms or antonyms, and so forth, not knowing exactly how I'll use the information, but trusting that Anki's efficient design gives me a plethora of options for innovative card design in the future.

Here's the context in this case:

Amidst all this bustle 'tis not reason, which carries the prize, but eloquence; and no man needs ever despair of gaining proselytes to the most extravagant hypothesis, who has art enough to represent it in any favourable colours. The victory is not gained by the men at arms, who manage the pike and the sword; but by the trumpeters, drummers, and musicians of the army.

With the word on the front of the card and this passage on the back of the card, I give my brain an opportunity to tie words to context rather than lifeless dictionary definitions. I don't know how much colorful meaning this passage may have in isolation, but for me I've read enough of the book to have a feel for his style and what he's talking about here. This highlights the personal nature of Anki decks. Few passages would be better for me when it comes to learning this word, but for you the considerations may be quite different. Far from different people simply having different subsets of the language that they're most concerned about, different people require different contextual definitions based on their own interests and knowledge.

But what about linguistic components that are more complex than a standalone word?

Let's say you run into the sentence, "And as the science of man is the only solid foundation for the other sciences, so the only solid foundation we can give to this science itself must be laid on experience and observation."

Using Anki, I could perhaps put "And as [reason], so [consequence]" on the front of the card, and the full sentence on the back.
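
A plain-data sketch of the two card types described above (no particular Anki add-on or library is assumed; this is just the shape of the cards, with made-up field names):

    vocab_card = {
        "front": "proselyte",
        "back": (
            "Amidst all this bustle 'tis not reason, which carries the prize, but "
            "eloquence; and no man needs ever despair of gaining proselytes to the "
            "most extravagant hypothesis..."   # context passage from Hume's introduction
        ),
        "audio": "proselyte.mp3",              # hypothetical recording of the pronunciation
        "extras": {"synonyms": ["convert"], "own_definition": "a new convert to a doctrine"},
    }

    grammar_card = {
        "front": "And as [reason], so [consequence]",
        "back": ("And as the science of man is the only solid foundation for the other "
                 "sciences, so the only solid foundation we can give to this science "
                 "itself must be laid on experience and observation."),
    }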

What I'm most concerned with, however, is how to translate such study into an actual improvement in writing ability. Using Anki to play the recognition game, where you see a vocabulary word or grammatical form on the front and have a contextual definition on the back, would certainly improve quickness of reading comprehension in many cases. But would it make the right connections in the brain so that I'm likely to think of the right word or grammatical structure at the right time for writing purposes?

Anyway, any considerations or suggestions concerning how to optimize reading comprehension or especially writing ability in a language one is already quite proficient with would be appreciated.

What's wrong with this picture?

15 CronoDAS 28 January 2016 01:30PM

Alice: "I just flipped a coin [large number] times. Here's the sequence I got:

 

(Alice presents her sequence.)

 

Bob: No, you didn't. The probability of having gotten that particular sequence is 1/2^[large number]. Which is basically impossible. I don't believe you.

 

Alice: But I had to get some sequence or other. You'd make the same claim regardless of what sequence I showed you.

 

Bob: True. But am I really supposed to believe you that a 1/2^[large number] event happened, just because you tell me it did, or because you showed me a video of it happening, or even if I watched it happen with my own eyes? My observations are always fallible, and if you make an event improbable enough, why shouldn't I be skeptical even if I think I observed it?

 

Alice: Someone usually wins the lottery. Should the person who finds out that their ticket had the winning numbers believe the opposite, because winning is so improbable?

 

Bob: What's the difference between finding out you've won the lottery and finding out that your neighbor is a 500 year old vampire, or that your house is haunted by real ghosts? All of these events are extremely improbable given what we know of the world.

 

Alice: There's improbable, and then there's impossible. 500 year old vampires and ghosts don't exist.

 

Bob: As far as you know. And I bet more people claim to have seen ghosts than have won more than 100 million dollars in the lottery.

 

Alice: I still think there's something wrong with your reasoning here.
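
For a quick sense of the magnitudes in play, here is a small sketch using 1000 flips and rough jackpot odds as stand-in numbers:

    from math import log10

    n_flips = 1000
    print("P(some sequence or other):", 1.0)                          # Alice's point
    print("log10 P(this exact sequence):", n_flips * log10(0.5))      # about -301, Bob's point
    print("log10 P(winning a big lottery):", log10(1 / 300_000_000))  # about -8.5
    # A specific 1000-flip sequence is unimaginably rarer than a jackpot; the question
    # the dialogue leaves open is what, if anything, that should do to Bob's beliefs.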

Perhaps a better form factor for Meetups vs Main board posts?

14 lionhearted 28 January 2016 11:50AM

I like to read posts on "Main" from time to time, including ones that haven't been promoted. However, lately, these posts get drowned out by all the meetup announcements.

It seems like this could lead to a cycle where people comment less on recent non-promoted posts (because they fall off the Main non-promoted area quickly), which leads to less engagement, fewer posts, etc.

Meetups are also very important, but here's the rub: I don't think a text-based announcement in the Main area is the best possible way to showcase meetups.

So here's an idea: how about creating either a calendar of upcoming meetups, or a map with pins marking all the places having a meetup in the next three months?

This could be embedded on the front page of lesswrong.com -- that'd let people find meetups more easily (they can look either by timeframe or see if their region is represented), and would give more space to new non-promoted posts, which would hopefully promote more discussion, engagement, and new posts.

Thoughts?

Rationality Reading Group: Part S: Quantum Physics and Many Worlds

3 Gram_Stone 28 January 2016 01:18AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part S: Quantum Physics and Many Worlds (pp. 1081-1183). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

S. Quantum Physics and Many Worlds

229. Quantum Explanations - Quantum mechanics doesn't deserve its fearsome reputation. If you tell people something is supposed to be mysterious, they won't understand it. It's human intuitions that are "strange" or "weird"; physics itself is perfectly normal. Talking about historical erroneous concepts like "particles" or "waves" is just asking to confuse people; present the real, unified quantum physics straight out. The series will take a strictly realist perspective - quantum equations describe something that is real and out there. Warning: Although a large faction of physicists agrees with this, it is not universally accepted. Stronger warning: I am not even going to present non-realist viewpoints until later, because I think this is a major source of confusion.

230. Configurations and Amplitude - A preliminary glimpse at the stuff reality is made of. The classic split-photon experiment with half-silvered mirrors. Alternative pathways the photon can take, can cancel each other out. The mysterious measuring tool that tells us the relative squared moduli.

231. Joint Configurations - The laws of physics are inherently over mathematical entities, configurations, that involve multiple particles. A basic, ontologically existent entity, according to our current understanding of quantum mechanics, does not look like a photon - it looks like a configuration of the universe with "A photon here, a photon there." Amplitude flows between these configurations can cancel or add; this gives us a way to detect which configurations are distinct. It is an experimentally testable fact that "Photon 1 here, photon 2 there" is the same configuration as "Photon 2 here, photon 1 there".

232. Distinct Configurations - Since configurations are over the combined state of all the elements in a system, adding a sensor that detects whether a particle went one way or the other, becomes a new element of the system that can make configurations "distinct" instead of "identical". This confused the living daylights out of early quantum experimenters, because it meant that things behaved differently when they tried to "measure" them. But it's not only measuring instruments that do the trick - any sensitive physical element will do - and the distinctness of configurations is a physical fact, not a fact about our knowledge. There is no need to suppose that the universe cares what we think.

233. Collapse Postulates - Early physicists simply didn't think of the possibility of more than one world - it just didn't occur to them, even though it's the straightforward result of applying the quantum laws at all levels. So they accidentally invented a completely and strictly unnecessary part of quantum theory to ensure there was only one world - a law of physics that says that parts of the wavefunction mysteriously and spontaneously disappear when decoherence prevents us from seeing them any more. If such a law really existed, it would be the only non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, acausal, faster-than-light phenomenon in all of physics.

234. Decoherence is Simple - The idea that decoherence fails the test of Occam's Razor is wrong as probability theory.

235. Decoherence is Falsifiable and Testable - (Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like "simple", "falsifiable", and "testable" have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is "not falsifiable" or that it "violates Occam's Razor" or that it is "untestable", are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics - because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.

236. Privileging the Hypothesis - If you have a billion boxes only one of which contains a diamond (the truth), and your detectors only provide 1 bit of evidence apiece, then it takes much more evidence to promote the truth to your particular attention—to narrow it down to ten good possibilities, each deserving of our individual attention—than it does to figure out which of those ten possibilities is true.  27 bits to narrow it down to 10, and just another 4 bits will give us better than even odds of having the right answer. It is insane to expect to arrive at correct beliefs by promoting hypotheses to the level of your attention without sufficient evidence, like a particular suspect in a murder case, or any one of the design hypotheses, or that one of a billion opaque boxes that just looks like a winner.

237. Living in Many Worlds - The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.

238. Quantum Non-Realism - "Shut up and calculate" is the best approach you can take when none of your theories are very good. But that is not the same as claiming that "Shut up!" actually is a theory of physics. Saying "I don't know what these equations mean, but they seem to work" is a very different matter from saying: "These equations definitely don't mean anything, they just work!"

239. If Many-Worlds Had Come First - If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth.

240. Where Philosophy Meets Science - In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake. Could philosophers have told the physicists so? But we don't usually see philosophers sponsoring major advances in physics; why not?

241. Thou Art Physics - If the laws of physics control everything we do, then how can our choices be meaningful? Because you are physics. You aren't competing with physics for control of the universe, you are within physics. Anything you control is necessarily controlled by physics.

242. Many Worlds, One Best Guess - Summarizes the arguments that nail down macroscopic decoherence, aka the "many-worlds interpretation". Concludes that many-worlds wins outright given the current state of evidence. The argument should have been over fifty years ago. New physical evidence could reopen it, but we have no particular reason to expect this.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part T: Science and Rationality (pp. 1187-1265) and Interlude: A Technical Explanation of Technical Explanation (pp. 1267-1314). The discussion will go live on Wednesday, 10 February 2016, right here on the discussion forum of LessWrong.

[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning

14 ESRogs 27 January 2016 09:04PM

DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against top-ranked human player Lee Se-dol is scheduled for March.

 

Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history

[...]

But one game has thwarted A.I. research thus far: the ancient game of Go.

