Meetup : Glasgow (Scotland) Meetup
Discussion article for the meetup : Glasgow (Scotland) Meetup
At least two of us will be meeting in the Curler's Rest on Byres Road at 3.30 on Sunday. We decided we might as well advertise here so that (a) anyone who is interested can come along, and (b) (more likely) anyone who thinks this is much too short notice for a meetup can let us know that they exist, and we can arrange something that better suits other people next time.
I'll bring along some sort of activity/game/something that can keep us entertained/avoid awkward silences if people do turn up.
If you are intending to come, feel free to PM me on here, and I'll get back to you with contact details so you can find us.
Selection Effects in estimates of Global Catastrophic Risk
Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously: Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem - Eliezer is an expert on AI risk because he believes that AI is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risk rather than AI risk because he had a higher-than-average estimate of the threat of nuclear war.
It seems like it might be a good idea to know what the probability of each of these risks actually is. Is there a sensible way for these people to correct for the fact that the people studying these risks are the ones who had high estimates of them in the first place?
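One way to see the problem concretely is a toy simulation (my own illustration, not from the post; the true risk, noise level, and threshold are all made-up numbers): everyone forms a noisy estimate of a risk, but only people whose estimate exceeds some threshold choose to spend their career studying it. The "experts'" average estimate is then biased upward, even though no individual is being dishonest.

```python
import random

# Toy model of the selection effect. True risk, noise standard deviation,
# and the self-selection threshold are all illustrative assumptions.
random.seed(0)
TRUE_RISK = 0.01
NOISE_SD = 0.02

# Everyone forms a noisy, unbiased estimate of the risk...
estimates = [random.gauss(TRUE_RISK, NOISE_SD) for _ in range(100_000)]
# ...but only those with estimates above the threshold become "experts".
experts = [e for e in estimates if e > 0.03]

mean_all = sum(estimates) / len(estimates)
mean_experts = sum(experts) / len(experts)
print(mean_all)      # close to the true value, 0.01
print(mean_experts)  # roughly four times the true value
```

The bias is a property of the selection process, not of any expert's reasoning, so correcting for it would require a model of how people choose their field - which is exactly what seems hard to come by.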
Rationality Boot Camp
We have been at Rationality Boot Camp for nearly a week (Jasen has decided it should be renamed "megacamp", but since everyone here has probably heard of it under the name Boot Camp, I'll stick with that).
So far, it has been quite a lot of fun, certainly a fair amount more than I expect I would have at regular Boot Camp. We have a blog about the sessions (I think it is likely to be too high-volume for posting to Less Wrong to be the ideal solution; I'm willing to change this if there's a strong consensus that I'm wrong). So far, I have made the only two posts. I think some of the other campers will be blogging at some point, and I will try to post at least a few times a week. If anyone is interested in reading it, it can be found here:
The Friendly AI Game
At the recent London meet-up someone (I'm afraid I can't remember who) suggested that one might be able to solve the Friendly AI problem by building an AI whose concerns are limited to some small geographical area, and which doesn't give two hoots about what happens outside that area. Ciphergoth pointed out that this would probably result in the AI converting the rest of the universe into a factory to make its small area more awesome. In the process, he mentioned that you can make a "fun game" out of figuring out ways in which proposed utility functions for Friendly AIs can go horribly wrong. I propose that we play.
Here's the game: reply to this post with proposed utility functions, stated as formally, or at least as precisely, as you can manage; follow-up comments explain why a super-human intelligence built with that particular utility function would do things that turn out to be hideously undesirable.
There are three reasons I suggest playing this game. In descending order of importance, they are:
- It sounds like fun
- It might help to convince people that the Friendly AI problem is hard(*).
- We might actually come up with something that's better than anything anyone's thought of before, or something where the proof of Friendliness is within grasp - the solutions to difficult mathematical problems often look obvious in hindsight, and it surely can't hurt to try
Open Thread: Mathematics
In Luke's recent post on what sort of posts we would like to see more of, one suggestion was "Open Thread: Math". This suggestion has been voted up by (at least) 12 people. Since it's going to take me less than 2 minutes to type this post, I figured I might as well just go ahead and post the thread, rather than vote up the suggestion.
So, this is an open thread on mathematics. As things stand, I have no idea what the rules should be (I don't know what the people who voted up the post suggestion expected the rules to be), but I guess the general principle should be that we have maths questions which are vaguely related to LW-type ideas, as there are plenty of more appropriate fora for general mathematical discussion already out there.
Slate article on Efficient Charity (link)
From the article:
Billions of dollars are given and spent on aid and development by individuals and companies each year. Despite this generosity, we simply do not allocate enough resources to solve all of the world's biggest problems. In a world fraught with competing claims on human solidarity, we have a moral obligation to direct additional resources to where they can achieve the most good. And that is as true of our own small-scale charitable donations as it is of governments' or philanthropists' aid budgets.
...
Guided by their consideration of each option's costs and benefits, and setting aside matters like media attention, the experts identified the best investments: those for which relatively tiny amounts of money could generate significant returns in terms of health, prosperity, and community advantages. These included: increased immunization coverage, initiatives to reduce school dropout rates, community-based nutrition promotion, and micronutrient supplementation.
Their conclusion? Micronutrients for people in poor countries. No, I don't think SIAI was considered as an option.
Simpson's Paradox
This is my first attempt at an elementary statistics post, which I hope is suitable for Less Wrong. I am going to present a discussion of a statistical phenomenon known as Simpson's Paradox. This isn't a paradox, and it wasn't actually discovered by Simpson, but that's the name everybody uses for it, so it's the name I'm going to stick with. Along the way, we'll get some very basic practice at calculating conditional probabilities.
A worked example
The example I've chosen is an exercise from a university statistics course that I have taught on for the past few years. It is by far the most interesting exercise in the entire course, and it goes as follows:
You are a doctor in charge of a large hospital, and you have to decide which treatment should be used for a particular disease. You have the following data from last month: there were 390 patients with the disease. Treatment A was given to 160 patients of whom 100 were men and 60 were women; 20 of the men and 40 of the women recovered. Treatment B was given to 230 patients of whom 210 were men and 20 were women; 50 of the men and 15 of the women recovered. Which treatment would you recommend we use for people with the disease in future?
The simplest way to represent this sort of data is to draw a table; we can then pick the relevant numbers out of the table to calculate the required conditional probabilities.
Overall
|       | A   | B   |
| lived | 60  | 65  |
| died  | 100 | 165 |
The probability that a randomly chosen person survived if they were given treatment A is 60/160 = 0.375
The probability that a randomly chosen person survived if they were given treatment B is 65/230 ≈ 0.283
So a randomly chosen person given treatment A was more likely to survive than a randomly chosen person given treatment B. Looks like we'd better give people treatment A.
However, since we were given a breakdown of the data by gender, let's look and see whether treatment A is better for both genders, or whether it gets all of its advantage from one or the other.
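The whole calculation, including the by-gender breakdown, takes only a few lines of Python (the counts are from the exercise above; the code itself is just my own sketch of the arithmetic):

```python
# Recovery counts from the hospital exercise.
# Format: (recovered, total) for each (treatment, gender) cell.
data = {
    ("A", "men"):   (20, 100),
    ("A", "women"): (40, 60),
    ("B", "men"):   (50, 210),
    ("B", "women"): (15, 20),
}

def recovery_rate(treatment, genders):
    """P(recovered | treatment), restricted to the given genders."""
    recovered = sum(data[(treatment, g)][0] for g in genders)
    total = sum(data[(treatment, g)][1] for g in genders)
    return recovered / total

# Overall, A looks better...
print(recovery_rate("A", ["men", "women"]))  # 60/160 = 0.375
print(recovery_rate("B", ["men", "women"]))  # 65/230 ≈ 0.283

# ...but B is better for men AND for women taken separately.
print(recovery_rate("A", ["men"]))    # 20/100 = 0.20
print(recovery_rate("B", ["men"]))    # 50/210 ≈ 0.238
print(recovery_rate("A", ["women"]))  # 40/60 ≈ 0.667
print(recovery_rate("B", ["women"]))  # 15/20 = 0.75
```

The reversal happens because gender is confounded with treatment: treatment B was given mostly to men, who recovered at a much lower rate than women under either treatment.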
Study shows existence of psychic powers.
According to the New Scientist, Daryl Bem has a paper to appear in the Journal of Personality and Social Psychology, which claims that the participants in psychological experiments are able to predict the future. A preprint of this paper is available online. Here's a quote from the New Scientist article:
In one experiment, students were shown a list of words and then asked to recall words from it, after which they were told to type words that were randomly selected from the same list. Spookily, the students were better at recalling words that they would later type. In another study, Bem adapted research on "priming" – the effect of a subliminally presented word on a person's response to an image. For instance, if someone is momentarily flashed the word "ugly", it will take them longer to decide that a picture of a kitten is pleasant than if "beautiful" had been flashed. Running the experiment back-to-front, Bem found that the priming effect seemed to work backwards in time as well as forwards.
Question: even assuming the methodology is sound, given experimenter bias, publication bias and your priors on the existence of psi, what sort of p-values would you need to see in that paper in order to believe with, say, 50% probability that the effect measured is real?
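One rough way to turn the question into numbers (my own sketch, not from the article, and ignoring experimenter and publication bias, which would only make things worse for psi): bound the Bayes factor a p-value can provide using the Sellke-Bayarri-Berger bound, BF ≤ 1 / (-e · p · ln p) for p < 1/e, and update your prior odds on psi.

```python
import math

def max_posterior(prior, p_value):
    """Upper bound on P(effect is real | data), given a prior and a
    p-value, via the Sellke-Bayarri-Berger bound on the Bayes factor.
    Valid for p_value < 1/e."""
    bayes_factor = 1.0 / (-math.e * p_value * math.log(p_value))
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# A p = 0.001 result supports a Bayes factor of at most ~53, so even a
# generous one-in-a-million prior on psi leaves the posterior tiny:
print(max_posterior(1e-6, 0.001))
```

To reach a 50% posterior, the Bayes factor would have to equal your prior odds against psi, so with a one-in-a-million prior you would need evidence far stronger than any single p-value in a psychology paper can deliver under this bound.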
How can we compare decision theories?
There has been a lot of discussion on LW about finding better decision theories. A lot of the reason for the various new decision theories proposed here seems to be an effort to get over the fact that classical CDT gives the wrong answer in one-shot Prisoner's Dilemmas, Newcomb-like problems and Parfit's Hitchhiker problem. While Gary Drescher has said that TDT is "more promising than any other decision theory I'm aware of", Eliezer gives a list of problems in which his theory currently gives the wrong answer (or, at least, it did a year ago). Adam Bell's recent sequence has talked about problems for CDT, and is no doubt about to move onto problems with EDT (in one of the comments, it was suggested that EDT is "wronger" than CDT).
In the Iterated Prisoner's Dilemma, it is relatively trivial to prove that no strategy is "optimal" in the sense that it gets the best possible pay-out against all opponents. The reasoning goes roughly like this: any strategy which ever cooperates does worse than it could have against, say, Always Defect. Any strategy which doesn't start off with cooperate does worse than it could have against, say Grim. So, whatever strategy you choose, there is another strategy that would do better than you against some possible opponent. So no strategy is "optimal". Question: is it possible to prove similarly that there is no "optimal" Decision Theory? In other words - given a decision theory A, can you come up with some scenario in which it performs worse than at least one other decision theory? Than any other decision theory?
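The IPD argument above can be checked with a small simulation (the strategies and payoff matrix are standard; the code is my own sketch). Always Cooperate beats Always Defect against Grim, but loses badly to it head-to-head, so neither strategy is optimal against all opponents.

```python
# Standard PD payoff matrix: (my payoff, their payoff) for (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    return "D"

def always_cooperate(my_hist, their_hist):
    return "C"

def grim(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_hist else "C"

def play(s1, s2, rounds=10):
    """Run an iterated PD between two strategies; return total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# Against Grim, the cooperator outscores the defector...
print(play(always_cooperate, grim))  # (30, 30)
print(play(always_defect, grim))     # (14, 9)
# ...but against Always Defect, the cooperator does far worse.
print(play(always_cooperate, always_defect))  # (0, 50)
print(play(always_defect, always_defect))     # (10, 10)
```

The question in the post is whether an analogous construction works one level up, for decision theories rather than IPD strategies.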
One initial try would be: Omega gives you two envelopes - the left envelope contains $1 billion iff you don't implement decision theory A in deciding which envelope to choose. The right envelope contains $1000 regardless.
Or, if you don't like Omega being able to make decisions about you based entirely on your source code (or "ritual of cognition"), how about this: in order for two decision theories to sensibly be described as "different", there must be some scenario in which they perform a different action (let's call this Scenario 1). In Scenario 1, DT A makes decision A whereas DT B makes decision B. In Scenario 2, Omega offers you the following setup: here are two envelopes, you can pick exactly one of them. I've just simulated you in Scenario 1. If you chose decision B, there's $1,000,000 in the left envelope. Otherwise it's empty. There's $1000 in the right envelope regardless.
I'm not sure if there's some flaw in this reasoning (are there decision theories for which Omega offering such a deal is a logical impossibility? It seems unlikely: I don't see how your choice of algorithm could affect Omega's ability to talk about it). But I imagine that some version of this should work - in which case, it doesn't make sense to talk about one decision theory being "better" than another; we can only talk about decision theories being better than others for certain classes of problems.
I have no doubt that TDT is an improvement on CDT, but in order for this to even make sense, we'd have to have some way of thinking about what sort of problem we want our decision theory to solve. Presumably the answer is "the sort of problems which you're actually likely to face in the real world". Do we have a good formalism for what this means? I'm not suggesting that the people who discuss these questions haven't considered this issue, but I don't think I've ever seen it explicitly addressed. What exactly do we mean by a "better" decision theory?
What should I have for dinner? (A case study in decision making)
Everyone knows that eating fatty foods is bad for you, that high cholesterol causes heart disease and that we should all do some more exercise so that we can lose weight. How do I know that everyone knows this? Well, for one thing, this government website tells me so:
We all know too much fat is bad for us. But we don't always know where it's lurking. It seems to be in so many of the things we like, so it's sometimes difficult to know how to cut down.
...kids need to do at least 60 minutes of physical activity that gets their heart beating faster than usual. And they need to do it every day to burn off calories and prevent them storing up excess fat in the body which can lead to cancer, type 2 diabetes and heart disease.
See, it's right there in black and white. We all know too much fat is bad for us. Except... there are a lot of people who don't agree. Gary Taubes is one of them. His book, Good Calories, Bad Calories (The Diet Delusion in the UK and Australia), sets out the case against what he calls the Dietary Fat Hypothesis for obesity and heart disease, and proposes instead the Carbohydrate Hypothesis: that both obesity and heart disease are caused by excessive consumption of refined carbohydrates, rather than dietary fat.
Taubes is very convincing. He explains how people have consistently recommended low-carb diets for weight-loss for the past 150 years. He explains how scientists roundly ignored studies that contradicted the link between high cholesterol and coronary disease. There are details of the mechanism by which eating refined carbohydrate affects insulin production, leading to obesity. He gives a plausible narrative for how the Dietary Fat Hypothesis came to be accepted scientific wisdom despite not actually being true (or supported by the majority of the evidence). He explains how studies of low-fat diets simply ignored overall mortality rates, reporting only deaths from heart disease, and how one study wasn't published because 'we weren't happy with the way it turned out'. All in all, the book is very convincing.
I expect a relatively large percentage of people on LW are already aware of this. Searching the LW archives for 'Taubes' gives several, mostly positive, references to his work (Eliezer seems to be convinced "Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup."). However, I do expect it to be news to some people, and I think it raises an important question. Given that everyone needs to eat something, we all need to decide whether we believe Taubes or whether we believe Change 4 Life.
Good Calories, Bad Calories is 601 pages of relatively small type, and contains 111 pages of references. Most of you probably don't want to read a book that long, and you definitely don't want to check all of its references. Even if you did, Taubes openly admits that his book is attempting to argue for the Carbohydrate Hypothesis - he is trying to convince you, so why should you be surprised if you find yourself convinced? (He claims not to be cherry-picking, but then, he would, wouldn't he?) So how can you decide whether to trust the government or whether to trust some journalist with no training in biology? Even if you do decide to assess the evidence for yourself, how exactly should you go about it?
This is the key question of rationality. How can we believe what is true? And I think this makes a great case study - it's an area in which we all have to have a belief (or at least, act as though we have a belief) and one in which there is (or at least appears to be) genuine controversy as to what is true and what is not.
If you've already thought about this, do you believe Taubes' thesis, and how did you come to this conclusion? If this is the first time you've ever heard of Taubes, how far have you shifted your probability for the Dietary Fat Hypothesis based on reading this post? What more research do you intend to do to decide whether or not to continue believing it? How much weight do you place on the fact that I believe Taubes? On the fact that Eliezer believes Taubes (Eliezer, if your position is more nuanced than this, feel free to correct me)? How much did you update your beliefs based on what other commenters have said (assuming there have been any)?