Upcoming LW Changes
Thanks to the reaction to this article and some conversations, I'm convinced that it's worth trying to renovate and restore LW. Eliezer, Nate, and Matt Fallshaw are all on board and have empowered me as an editor to see what we can do about reshaping LW to meet what the community currently needs. This involves a combination of technical changes and social changes, which we'll try to make transparently and non-intrusively.
On 'Why Global Poverty?' and Arguments from Unobservable Impacts
Related: Is Molecular Nanotechnology "Scientific"?
For context, Jeff Kaufman delivered a speech on effective altruism and cause prioritization at EA Global 2015 entitled 'Why Global Poverty?', which he has transcribed and made available here. It's certainly worth reading.
I was dissatisfied with this speech in some ways. For the sake of transparency and charity, I will say that Kaufman has written a disclaimer explaining that, because of a miscommunication, he wrote this speech in the span of two hours immediately before he delivered it (instead of eating lunch, I would like to add), and that even after writing the text version, he is not entirely satisfied with the result.
I'm not that familiar with the EA community, but I predict that debates about cause prioritization, especially when existential risk mitigation is among the causes being discussed, can become mind-killed extremely quickly. And I don't mean to convey that in the tone of a wise outsider. It makes sense, considering the stakes at hand and the eschatological undertones of existential risk. (That is to say that the phrase 'save the world' can be sobering or gross, depending on the individual.) So, as is always implicit, but is sometimes worth making explicit, I'm criticizing some arguments as I understand them, not any person. I write this precisely because rationality is a common interest of many causes. I'll be focusing on the part about existential risk, as well as the parts it depends on. Lastly, I'd be interested to know whether anyone else has criticized this speech in writing or come to conclusions similar to mine. Without further ado:
Jeff Kaufman's explanation of EA and why it makes sense is boilerplate; I agree with it, naturally. I also agree with the idea that certain existential risk mitigation strategies are comparatively less neglected by national governments and thus that risks like these are considerably less likely to be where one can make one's most valuable marginal donation. E.g., there are people who are paid to record and predict the trajectories of celestial objects, celestial mechanics is well-understood, and an impact event in the next two centuries is, with high meta-confidence, far less probable than many other risks. You probably shouldn't donate to asteroid impact risk mitigation organizations if you have to choose a cause from the category of existential risk mitigation organizations. The same goes for most natural (non-anthropogenic) risks.
The next few parts are worth looking at in detail, however:
At the other end we have risks like the development of an artificial intelligence that destroys us through its indifference. Very few people are working on this, there's low funding, and we don't have much understanding of the problem. Neglectedness is a strong heuristic for finding causes where your contribution can go far, and this does seem relatively neglected. The main question for me, though, is how do you know if you're making progress?
Everything before the question seems accurate to me. Furthermore, if I interpret the question correctly, then what's implied is a difference between the observable consequences of global poverty mitigation and existential risk mitigation. I think the implied difference is fair. You can see the malaria evaporating but you only get one chance to build a superintelligence right. (It's worth saying that AI risk is also the example that Kaufman uses in his explanation.)
However, I don't think that this necessarily implies that we can't have some confidence that we're actually mitigating existential risks. This is clear if we dissolve the question. What are the disguised queries behind the question 'How do you know if you're making progress?'
If your disguised query is 'Can I observe the consequences of my interventions and update my beliefs and correct my actions accordingly?', then in the case of existential risks, the answer is "No", at least in the traditional sense of an experiment.
If your disguised query is 'Can I have confidence in the effects of my interventions without observing their consequences?', then that seems like a different, much more complicated question that is both interesting and worth examining further. I'll expand on this conceivably more controversial bit later, so that it doesn't seem like I'm being uncharitable or quoting out of context. Kaufman continues:
First, a brief digression into feedback loops. People succeed when they have good feedback loops. Otherwise they tend to go in random directions. This is a problem for charity in general, because we're buying things for others instead of for ourselves. If I buy something and it's no good I can complain to the shop, buy from a different shop, or give them a bad review. If I buy you something and it's no good, your options are much more limited. Perhaps it failed to arrive but you never even knew you were supposed to get it? Or it arrived and was much smaller than I intended, but how do you know. Even if you do know that what you got is wrong, chances are you're not really in a position to have your concerns taken seriously.
This is a big problem, and there are a few ways around this. We can include the people we're trying to help much more in the process instead of just showing up with things we expect them to want. We can give people money instead of stuff so they can choose the things they most need. We can run experiments to see which ways of helping people work best. Since we care about actually helping people instead of just feeling good about ourselves, we not only can do these things, we need to do them. We need to set up feedback loops where we only think we're helping if we're actually helping.
Back to AI risk. The problem is we really really don't know how to make good feedback loops here. We can theorize that an AI needs certain properties not to just kill us all, and that in order to have those properties it would be useful to have certain theorems proved, and go work on those theorems. And maybe we have some success at this, and the mathematical community thinks highly of us instead of dismissing our work. But if our reasoning about what math would be useful is off there's no way for us to find out. Everything will still seem like it's going well.
I think I get where Kaufman is coming from on this. First, I'm going to use an analogy to convey what I believe to be the commonly used definition of the phrase 'feedback loop'.
If you're an entrepreneur, you want your beliefs about which business strategies will be successful to be entangled with reality. You also have a short financial runway, so you need to decide quickly, which means that you have to obtain your evidence quickly if you want your beliefs to be entangled in time for it to matter. So immediately after you affect the world, you look at it to see what happened and update on it. And this is virtuous.
And of course, people are notoriously bad at remaining entangled with reality when they don't look at it. And this seems like an implicit deficiency in any existential risk mitigation intervention; you can't test the effectiveness of your intervention. You succeed or fail, one time.
Next, let's taboo the phrase 'feedback loop'.
So, it seems like there's a big difference between first handing out insecticidal bed nets and then looking to see whether or not the malaria incidence goes down, and paying some researchers to think about AI risk. When those researchers 'make progress', where can you look? What in the world is different because they thought instead of not, beyond the existence of an academic paper?
But a big part of this rationality thing is knowing that you can arrive at true beliefs by correct reasoning, and not just by waiting for the answer to smack you in the face.
And I would argue that any altruist is doing the same thing when they have to choose between causes before they can make observations. There are a million other things that the founders of the Against Malaria Foundation could have done, but they took the risk of betting on distributing bed nets, even though they had yet to see it actually work.
In fact, AI risk is not-that-different from this, but you can imagine it as a variant where you have to predict much further into the future, the stakes are higher, and you don't get a second try after you observe the effect of your intervention.
And imagine a world where a global authoritarian regime involuntarily reads its citizens' minds as a matter of course, and where the law says that anyone who identifies as an EA is to be put in an underground chamber, given a minimum income that they may donate as they please, and allowed to reason from their prior knowledge only, never being permitted to observe the consequences of their donations. I bet that EAs in that world would not say, "I have no feedback loop, and I therefore cannot decide between any of these alternatives."
Rather, I bet that they would say, "I will never be able to look at the world and see the effects of my actions at a time that affects my decision-making, but this is my best educated guess of what the best thing I can do is, and it's sure as hell better than doing nothing. Yea, my decision is merely rational."
You want observational consequences because they give you confidence in your ability to make predictions. But you can make accurate predictions without being able to observe the consequences of your actions, and without just getting lucky, and sometimes you have to.
But in reality we're not deciding between donating something and donating nothing; we're choosing between charitable causes. And I don't think that the fact that our interventions are less predictable should make us consider the risk more negligible or its prevention less valuable. Above choosing causes where the effects of interventions are predictable, don't we want to choose the most valuable causes? A bias towards causes with consistently, predictably, immediately effective interventions shouldn't completely dominate our decision-making, even when there's an alternative cause whose interventions are less predictable but would result in outcomes with extremely high utility if successful.
To illustrate, imagine that you are at some point on a long road, truly in the middle of nowhere, and you see a man whose car has a flat tire. You know that someone else may not drive by for hours, and you don't know how well-prepared the man is for that eventuality. You consider stopping your car to help; you have a spare, you know how to change tires, and you've seen it work before. And if you don't do it right the first time for some weird reason, you can always try again.
But suddenly, you notice that there is a person lying motionless on the ground, some ways down the road; far, but visible. There's no cellphone service, it would take an ambulance hours to get here unless they happened to be driving by, and you have no medical training or experience.
I don't know about you, but even if I'm having an extremely hard time thinking of things to do about a guy dying on my watch in the middle of nowhere, the last thing I do is say, "I have no idea what to do if I try to save that guy, but I know exactly how to change a tire, so why don't I just change the tire instead." Because even if I don't know what to do, saving a life is so much more important than changing a tire that I don't care about the uncertainty. And maybe if I went and actually tried saving his life, even if I wasn't sure how to go about it, it would turn out that I would find a way, or that he needed help, but he wasn't about to die immediately, or that he was perfectly fine all along. And I never would've known if I'd changed a tire and driven in the opposite direction.
And it doesn't mean that the strategy space is open season. I'm not going to come up with a new religion on the spot that contains a prophetic vision that this man will survive his medical emergency, nor am I going to try setting him on fire. There are things that will obviously not work without me trying them out. And that can be built on with other ideas that are not-obviously-wrong-but-may-turn-out-to-be-wrong-later. It's great to have an idea of what you can know is wrong even if you can't try anything. Because not being able to try more than once is precisely the problem.
If we stop talking about what rational thinking feels like, and just start talking about rational thinking with the usual words, then what I'm getting at is that, in reality, there is an inside view to the AI risk arguments. You can always talk about confidence levels outside of an argument, but it helps to go into the details of the inside view, to see where our uncertainty about various assertions is greatest. Otherwise, where is your outside estimate even coming from, besides impression?
We can't run an experiment to see if the mathematics of self-reference, for example, is a useful thing to flesh out before trying to solve the larger problem of AI risk, but there are convincing reasons that it is. And sometimes that's all you have at the time.
And if you ever ask me, "Why does your uncertainty bottom out here?", then I'll ask you "Why does your uncertainty bottom out there?" Because it bottoms out somewhere, even if it's at the level of "I know that I know nothing," or some other similarly useless sentiment. And it's okay.
But I will say that this state of affairs is not optimal. It would be nice if we could be more confident about our reasoning in situations where we aren't able to make predictions, and then perform interventions, and then make observations that we can update on, and then try again. It's great to have medical training in the middle of nowhere.
And I will also say that I imagine Kaufman is not claiming that donating to existential risk mitigation is a fundamentally bad idea forever, but that it just doesn't seem like a good idea right now, because we don't know enough about when we should be confident in predictions that we can't test before we have to take action.
But if you know you're confused about how to determine the impact of interventions intended to mitigate existential risks, it's almost as if you should consider trying to figure out that problem itself. If you could crack the problem of mitigating existential risks, it would blow global poverty out of the water. And the problem doesn't immediately seem completely obviously intractable.
In fact, it's almost as if the cause you should choose is existential risk strategy research (a subset of cause prioritization). And if you were to write a speech about it, it seems like it would be a good idea to make it really clear that that's probably very impactful, because value of information counts.
And so, where others read a speech entitled 'Why Global Poverty?', I read a speech entitled 'Why Existential Risk Strategy Research?'
The Brain Preservation Foundation's Small Mammalian Brain Prize won
The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process.
- BPF announcement (21CM’s announcement)
- evaluation
The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror)
(They had problems with 2 pigs and got 1 pig brain successfully cryopreserved, but it wasn't part of the entry. I'm not sure why; is that because the Large Mammalian Brain Prize is not yet set up?)

We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species.
Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays…We have shown that both rabbit brains (10 g) and pig brains (80 g) can be preserved equally well. We do not anticipate that there will be significant barriers to preserving even larger brains such as bovine, canine, or primate brains using ASC.
- previous discussion: Mikula’s plastination came close but ultimately didn’t seem to preserve the whole brain when applied.
- commentary: Alcor, Robin Hanson, John Smart, Evidence-Based Cryonics, Vice, Pop Sci
To summarize it, you might say that this is a hybrid of current plastination and vitrification methods, where instead of allowing slow plastination (with unknown decay & loss) or forcing fast cooling (with unknown damage and loss), a staged approach is taken: a fixative is injected into the brain first to immediately lock down all proteins and stop all decay/change, and then the brain is leisurely cooled down to be vitrified.
This is exciting progress because the new method may wind up preserving better than either of the parent methods, but also because it gives much greater visibility into the end-results: the aldehyde-vitrified brains can be easily scanned with electron microscopes and the results seen in high detail, showing fantastic preservation of structure, unlike regular vitrification, where the scans leave it opaque how good the preservation was. This opacity is one reason that, as Mike Darwin has pointed out at length on his blog and jkaufman has also noted, we cannot be confident in how well Alcor's or CI's vitrification works: because if it didn't work, we would have little way of knowing.
EDIT: BPF’s founder Ken Hayworth (Reddit account) has posted a piece, arguing that ALCOR & CI cannot be trusted to do procedures well and that future work should be done via rigorous clinical trials and only then rolled out. “Opinion: The prize win is a vindication of the idea of cryonics, not of unaccountable cryonics service organizations”
…“Should cryonics service organizations immediately start offering this new ASC procedure to their ‘patients’?” My personal answer (speaking for myself, not on behalf of the BPF) has been a steadfast NO. It should be remembered that these same cryonics service organizations have been offering a different procedure for years. A procedure that was not able to demonstrate, to even my minimal expectations, preservation of the brain’s neural circuitry. This result, I must say, surprised and disappointed me personally, leading me to give up my membership in one such organization and to become extremely skeptical of all since. Again, I stress, current cryonics procedures were NOT able to meet our challenge EVEN UNDER IDEAL LABORATORY CONDITIONS despite being offered to paying customers for years[1]. Should we really expect that these same organizations can now be trusted to further develop and properly implement such a new, independently-invented technique for use under non-ideal conditions?
Let’s step back for a moment. A single, independently-researched, scientific publication has come out that demonstrates a method of structural brain preservation (ASC) compatible with long-term cryogenic storage in animal models (rabbit and pig) under ideal laboratory conditions (i.e. a healthy living animal immediately being perfused with fixative). Should this one paper instantly open the floodgates to human application? Under untested real-world conditions where the ‘patient’ is either terminally ill or already declared legally dead? Should it be performed by unlicensed persons, in unaccountable organizations, operating outside of the traditional medical establishment with its checks and balances designed to ensure high standards of quality and ethics? To me, the clear answer is NO. If this was a new drug for cancer therapy, or a new type of heart surgery, many additional steps would be expected before even clinical trials could start. Why should our expectations be any lower for this?
The fact that the ASC procedure has won the brain preservation prize should rightly be seen as a vindication of the central idea of cryonics –the brain’s delicate circuitry underlying memory and personality CAN in fact be preserved indefinitely, potentially serving as a lifesaving bridge to future revival technologies. But, this milestone should certainly not be interpreted as a vindication of the very different cryonics procedures that are practiced on human patients today. And it should not be seen as a mandate for more of the same but with an aldehyde stabilization step casually tacked on. …
Require contributions in advance
If you are a person who finds it difficult to say "no" to their friends, this one weird trick may save you a lot of time!
Scenario 1
Alice: "Hi Bob! You are a programmer, right?"
Bob: "Hi Alice! Yes, I am."
Alice: "I have this cool idea, but I need someone to help me. I am not good with computers, and I need someone smart whom I could trust, so they wouldn't steal my idea. Would you have a moment to listen to me?"
Alice explains to Bob her idea that would completely change the world. Well, at least the world of bicycle shopping.
Instead of having many shops for bicycles, there could be one huge e-shop that would collect all the information about bicycles from all the existing shops. The customers would specify what kind of bike they want (and where they live), and the system would find all bikes that fit the specification and display them ordered by lowest price, including the price of delivery; then it would redirect them to the specific page of the specific vendor. Customers would love to use this one website, instead of having to visit multiple shops and compare. And the vendors would have to use this shop, because that's where the customers would be. Taking a fraction of a percent from the sales could make Alice (and also Bob, if he helps her) incredibly rich.
Bob is skeptical about it. The project suffers from the obvious chicken-and-egg problem: without vendors already there, the customers will not come (and if they come by accident, they will quickly leave, never to return); and without customers already there, there is no reason for the vendors to cooperate. There are a few ways to approach this problem, but the fact that Alice didn't even think about it is a red flag. She also has no idea who the big players in the world of bicycle selling are; generally, she didn't do her homework. But after hearing all these objections, Alice still remains super enthusiastic about the project. She promises she will take care of everything -- she just cannot write code, and she needs Bob's help for that part.
Bob believes strongly in the division of labor, and that friends should help each other. He considers Alice his friend, and he will likely need some help from her in the future. The fact is, with a perfect specification, he could make the webpage in a week or two. But he considers bicycles an extremely boring topic, so he wants to spend as little time as possible on this project. Finally, he has an idea:
"Okay, Alice, I will make the website for you. But first I need to know exactly what the page will look like, so that I don't have to keep changing it over and over again. So here is the homework for you -- take a pen and paper, and sketch exactly what the website will look like. All the dialogs, all the buttons. Don't forget logging in and logging out, editing the customer profile, and everything else that is necessary for the website to work as intended. Just look at the papers and imagine that you are the customer: where exactly would you click to register, and to find the bicycle you want? Same for the vendor. And possibly a site administrator. Also give me the list of criteria people will use to find the bike they want. Size, weight, color, radius of wheels, what else? And when you have it all ready, I will make the first version of the website. But until then, I am not writing any code."
Alice leaves, satisfied with the outcome.
This happened a year ago.
No, Alice doesn't have the design ready, yet. Once in a while, when she meets Bob, she smiles at him and apologizes that she didn't have the time to start working on the design. Bob smiles back and says it's okay, he'll wait. Then they change the topic.
Scenario 2
Cyril: "Hi Diana! You speak Spanish, right?"
Diana: "Hi Cyril! Yes, I do."
Cyril: "You know, I think Spanish is the most cool language ever, and I would really love to learn it! Could you please give me some Spanish lessons, once in a while? I totally want to become fluent in Spanish, so I could travel to Spanish-speaking countries and experience their culture and food. Would you please help me?"
Diana is happy that someone takes interest in her favorite hobby. It would be nice to have someone around she could practice Spanish conversation with. The first instinct is to say yes.
But then she remembers (she has known Cyril for some time; they have a lot of friends in common, so they meet quite regularly) that Cyril is always super enthusiastic about something he is totally going to do... but when she meets him next time, he is super enthusiastic about something completely different; and she has never heard of him doing anything serious about his previous dreams.
Also, Cyril seems to seriously underestimate how much time it takes to learn a foreign language fluently. A few lessons once in a while will not do it. He also needs to study on his own: preferably every day, but twice a week is probably the minimum if he hopes to speak the language fluently within a year. Diana would be happy to teach someone Spanish, but not if her effort will most likely be wasted.
Diana: "Cyril, there is this great website called Duolingo, where you can learn Spanish online completely free. If you give it about ten minutes every day, maybe after a few months you will be able to speak fluently. And anytime we meet, we can practice the vocabulary you have already learned."
This would be the best option for Diana. No work, and another opportunity to practice. But Cyril insists:
"It's not the same without the live teacher. When I read something from the textbook, I cannot ask additional questions. The words that are taught are often unrelated to the topics I am interested in. I am afraid I will just get stuck with the... whatever was the website that you mentioned."
For Diana this feels like a red flag. Sure, textbooks are not optimal. They contain many words that the student will not use frequently and will soon forget. On the other hand, the grammar is always useful; and Diana doesn't want to waste her time explaining the basic grammar that any textbook could explain instead. If Cyril learns the grammar and some basic vocabulary, then she can teach him all the specialized vocabulary he is interested in. But now it feels like Cyril wants to avoid all work. She has to draw a line:
"Cyril, this is the address of the website." She takes his notebook and writes 'www.duolingo.com'. "You register there, choose Spanish, and click on the first lesson. It is interactive, and it will not take you more than ten minutes. If you get stuck there, write here what exactly it was that you didn't understand; I will explain it when we meet. If there is no problem, continue with the second lesson, and so on. When we meet next time, tell me which lessons you have completed, and we will talk about them. Okay?"
Cyril nods reluctantly.
This happened a year ago.
Cyril and Diana have met repeatedly during the year, but Cyril never brought up the topic of Spanish language again.
Scenario 3
Erika: "Filip, would you give me a massage?"
Filip: "Yeah, sure. The lotion is in the next room; bring it to me!"
Erika brings the massage lotion and lies on the bed. Filip massages her back. Then they make out and have sex.
This happened a year ago. Erika and Filip are still a happy couple.
Filip's previous relationships didn't work well in the long term. In retrospect, they all followed a similar pattern. At the beginning, everything seemed great. Then at some point the girl started acting... unreasonably?... asking Filip to do various things for her, and then acting annoyed when Filip did exactly what he was asked to do. This happened more and more frequently, and at some point she broke up with him. Sometimes she provided an explanation for breaking up that Filip was unable to decipher.
Filip has a friend who is a successful salesman, successful both professionally and with women. When Filip admitted to himself that he was unable to solve the problem on his own, he asked his friend for advice.
"It's because you're a f***ing doormat," said the friend. "The moment a woman asks you to do anything, you immediately jump and do it, like a well-trained puppy. Puppies are cute, but not attractive. Have you read any of those books I sent you, like, ten years ago? I bet you didn't. Well, it's all there."
Filip sighed: "Look, I'm not trying to become a pick-up artist. Or a salesman. Or anything. No offense, but I'm not like you, personality-wise, I never have been, and I don't want to become your - or anyone else's - copy. Even if it would mean greater success in anything. I prefer to treat other people just like I would want them to treat me. Most people reciprocate nice behavior; and those who don't, well, I avoid them as much as possible. This works well with my friends. It also works with the girls... at the beginning... but then somehow... uhm... Anyway, all your books are about manipulating people, which is ethically unacceptable for me. Isn't there some other way?"
"All human interaction is manipulation; the choice is between doing it right or wrong, acting consciously or driven by your old habits..." started the friend, but then he gave up. "Okay, I see you're not interested. Just let me show you the most obvious mistake you make. You believe that when you are nice to people, they will perceive you as nice, and most of them will reciprocate. And when you act like an asshole, it's the other way round. That's correct, on some level; and in a perfect world this would be the whole truth. But on a different level, people also perceive nice behavior as weakness; especially if you do it habitually, as if you don't have any other option. And being an asshole obviously signals strength: you are not afraid to make other people angry. Also, in the long term, people become used to your behavior, good or bad. The nice people don't seem so nice anymore, but they still seem weak. Then, ironically, if a person well-known to be nice refuses to do something once, people become really angry, because their expectations were violated. And if the asshole decides to do something nice once, they will praise him, because he surprised them pleasantly. You should be an asshole once in a while, to make people see that you have a choice, so they won't take your niceness for granted. Or if your girlfriend wants something from you, sometimes just say no, even if you could have done it. She will respect you more, and then she will enjoy the things you do for her more."
Filip: "Well, I... probably couldn't do that. I mean, what you say seems to make sense, however much I hate to admit it. But I can't imagine doing it myself, especially to a person I love. It's just... uhm... wrong."
"Then, I guess, the very least you could do is to ask her to do something for you first. Even if it's symbolic, that doesn't matter; human relationships are mostly about role-playing anyway. Don't jump immediately when you are told to; always make her jump first, if only a little. That will demonstrate strength without hurting anyone. Could you do that?"
Filip wasn't sure, but at the next opportunity he tried it, and it worked. And it kept working. Maybe it was all just a coincidence, maybe it was a placebo effect, but Filip didn't mind. At first it felt kind of artificial, but then it became natural. And later, to his surprise, Filip realized that practicing these symbolic demands actually made it easier to ask when he really needed something. (In which case he was sometimes asked to do something first, because his girlfriend -- knowingly or not? he never had the courage to ask -- had copied the pattern; or maybe she had known it all along. But he didn't mind that either.)
The lesson is: If you repeatedly find yourself in situations where people ask you to do something for them, but in the end they don't seem to appreciate what you did, or don't even care about the thing they asked for... and yet you find it difficult to say "no"... ask them to contribute to the project first.
This will help you get rid of the projects they don't care about (including the ones they think they care about in far mode, but do not care about enough to actually work on them in near mode) without being the one who refuses cooperation. Also, the act of asking the other person to contribute, after being asked to do something for them, mitigates the status loss inherent in working for them.
[link] "The Happiness Code" - New York Times on CFAR
http://www.nytimes.com/2016/01/17/magazine/the-happiness-code.html
Long. Mostly quite positive, though it does spend a little while rolling its eyes at the Eliezer/MIRI connection and at the craziness of taking things like cryonics and polyamory seriously.
The value of ambiguous speech
This was going to be a reply in a discussion between ChristianKl and MattG in another thread about conlangs, but their discussion seemed significant enough, independent of the original topic, to deserve a thread of its own. If I've done this correctly (this sentence is an after-the-fact update), you should be able to find the original comments that inspired this thread here: http://lesswrong.com/r/discussion/lw/n0h/linguistic_mechanisms_for_less_wrong_cognition/cxb2
Is a lack of ambiguity necessary for clear thinking? Are there times when it's better to be ambiguous? This came up in the context of the extent to which a conlang should discourage ambiguity as a means of encouraging cognitive correctness in its users. It seems to me that something is being taken for granted here: that ambiguity is necessarily an impediment to clear thinking. And I certainly agree that it can be. But if detail or specificity is the opposite of ambiguity, then surely maximal detail or specificity is undesirable when the extra information isn't relevant, so a conlang would benefit from not requiring its users to minimize ambiguity.
Moving away from the concept of conlangs, this opens up some interesting (at least to me) questions. Exactly what does "ambiguity" mean? Is there, for each speech act, an optimal level of ambiguity, and how much can be gained by achieving it? Are there reasons why a certain, minimal degree of ambiguity might be desirable beyond avoiding irrelevant information?
[LINK] Speed superintelligence?
From Toby Ord:
Tool assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to win Super Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.
Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.
Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup.
Two Growth Curves
Sometimes, it helps to take a model that part of you already believes, and to make a visual image of your model so that more of you can see it.
One of my all-time favorite examples of this:
I used to often hesitate to ask dumb questions, to publicly try skills I was likely to be bad at, or to visibly/loudly put forward my best guesses in areas where others knew more than me.
I was also frustrated with this hesitation, because I could feel it hampering my skill growth. So I would try to convince myself not to care about what people thought of me. But that didn't work very well, partly because what folks think of me is in fact somewhat useful/important.
Then, I got out a piece of paper and drew how I expected the growth curves to go.

In blue, I drew the apparent-coolness level that I could achieve if I stuck with the "try to look good" strategy. In brown, I drew the apparent-coolness level I'd have if I instead made mistakes as quickly and loudly as possible -- I'd look worse at first, but then I'd learn faster, eventually overtaking the blue line.
Suddenly, instead of pitting my desire to become smart against my desire to look good, I could pit my desire to look good now against my desire to look good in the future :)
I return to this image of two growth curves often when I'm faced with an apparent tradeoff between substance and short-term appearances. (E.g., I used to often find myself scurrying to get work done, or to look productive and not horribly behind today, rather than trying to build the biggest chunks of capital for tomorrow. I would picture these growth curves.)
[Link] 2015 modafinil user survey
I am running, in collaboration with ModafinilCat, a survey of modafinil users asking about their experiences, side-effects, sourcing, efficacy, and demographics:
https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform
This is something of a followup to the LW surveys which find substantial modafinil use, and Yvain's 2014 nootropics survey. I hope the results will be useful; the legal questions should help reduce uncertainty there, and the genetics questions (assuming any responses) may be interesting too.
The Library of Scott Alexandria
I've put together a list of what I think are the best Yvain (Scott Alexander) posts for new readers, drawing from SlateStarCodex, LessWrong, raikoth.net, and Scott's LiveJournal.
The list should make the most sense to people who start from the top and read through it in order, though skipping around is encouraged too. Rather than making a chronological list, I’ve tried to order things by a mix of "where do I think most people should start reading?" plus "sorting related posts together."
This is a work in progress; you’re invited to suggest things you’d add, remove, or shuffle around. Since many of the titles are a bit cryptic, I'm adding short descriptions. See my blog for a version without the descriptions.
I. Rationality and Rationalization
- Blue- and Yellow-Tinted Choices ····· An introduction to context-sensitive biases.
- The Apologist and the Revolutionary ····· Do separate brain processes rationalize and question ideas?
- Historical Realism ····· When reality is unrealistic.
- Simultaneously Right and Wrong ····· On self-handicapping and self-deception.
- You May Already Be A Sinner ····· Self-deception in cases where your decisions make no difference.
- Beware the Man of One Study ····· On minimum wage laws and cherry-picked evidence.
- Debunked and Well-Refuted ····· When should we say that a study has been "debunked"?
- How to Not Lose an Argument ····· How to be more persuasive in entrenched arguments.
- The Least Convenient Possible World ····· Why it's useful to strengthen arguments you disagree with.
- Bayes for Schizophrenics: Reasoning in Delusional Disorders ····· Hypotheses about the role of perception, evidence integration, and priors in delusions.
- Generalizing from One Example ····· On the typical mind fallacy: assuming other people are like you.
- Typical Mind and Politics ····· Do political disagreements stem from neurological disagreements?
II. Probabilism
- Confidence Levels Inside and Outside an Argument ····· Should you believe your own conclusions, when they're extreme?
- Schizophrenia and Geomagnetic Storms ····· When bizarre ideas turn out to be true.
- Talking Snakes: A Cautionary Tale ····· Should we dismiss all absurd claims?
- Arguments from My Opponent Believes Something ····· Ten fully general arguments.
- Statistical Literacy Among Doctors Now Lower Than Chance ····· Common errors in probabilistic reasoning.
- Techniques for Probability Estimates ····· Six methods for quantifying uncertainty.
- On First Looking into Chapman’s “Pop Bayesianism” ····· Reasons Bayesian epistemology may not be trivial.
- Utilitarianism for Engineers ····· Are there good-enough heuristics for comparing people's preferences?
- If It’s Worth Doing, It’s Worth Doing with Made-Up Statistics ····· The practical value of probabilities.
- Marijuana: Much More Than You Wanted to Know ····· Assessing marijuana's costs and benefits.
- Are You a Solar Deity? ····· On confirmation bias in the comparative study of religions.
- The "Spot the Fakes" Test ····· An approach to testing humanities hypotheses.
- Epistemic Learned Helplessness ····· What should we do when bad arguments sound convincing?
III. Science and Doubt
- Google Correlate Does Not Imply Google Causation ····· Peculiar correlations between Google search terms.
- Stop Confounding Yourself! Stop Confounding Yourself! ····· A correlational study on the effects of bullying.
- Effects of Vertical Acceleration on Wrongness ····· On evidence-based medicine.
- 90% Of All Claims About The Problems With Medical Studies Are Wrong ····· Is it the case that "90% of medical research is false"?
- Prisons are Built with Bricks of Law and Brothels with Bricks of Religion, But That Doesn’t Prove a Causal Relationship ····· Do psychiatric interventions increase suicide risk?
- Noisy Poll Results and the Reptilian Muslim Climatologists from Mars ····· Skepticism about poll results.
- Two Dark Side Statistics Papers ····· Statistical tricks for creating effects out of nothing.
- Alcoholics Anonymous: Much More Than You Wanted to Know ····· Is AA effective for treating alcohol abuse?
- The Control Group Is Out Of Control ····· Parapsychology as the "control group" for all of psychology.
- The Cowpox of Doubt ····· Focusing on easy questions inoculates against uncertainty.
- The Skeptic's Trilemma ····· Explaining mysteries, vs. worshiping them, vs. dismissing them.
- If You Can't Make Predictions, You're Still in a Crisis ····· On psychology studies' replication failures.
IV. Medicine, Therapy, and Human Enhancement
- Scientific Freud ····· How does psychoanalysis compare to cognitive behavioral therapy?
- Sleep – Now by Prescription ····· On melatonin.
- In Defense of Psych Treatment for Attempted Suicide ····· Suicide is usually not a rational, informed decision.
- Who By Very Slow Decay ····· On old age and death in the medical system.
- Medicine, As Not Seen on TV ····· What is it actually like to be a doctor?
- Searching for One-Sided Tradeoffs ····· How can we find good ideas that others haven't found first?
- Do Life Hacks Ever Reach Fixation? ····· Why aren't there more good ideas that everyone has adopted?
- Polyamory is Boring ····· Deromanticizing multi-partner romance.
- Can You Condition Yourself? ····· On shaping new habits by rewarding oneself.
- Wirehead Gods on Lotus Thrones ····· Is the future boring? Transcendently blissful? Boringly blissful?
- Don’t Fear the Filter ····· Does the Fermi Paradox mean that our species is doomed?
- Transhumanist Fables ····· Six futurist fairy tales.
V. Introduction to Game Theory
- Backward Reasoning Over Decision Trees ····· Sequential games, and why adding options can hurt you.
- Nash Equilibria and Schelling Points ····· Simultaneous games, mixed strategies, and coordination.
- Introduction to Prisoners' Dilemma ····· Why Nash equilibria are sometimes bad for everyone.
- Real-World Solutions to Prisoners' Dilemmas ····· How society and evolution ensure mutual cooperation.
- Interlude for Behavioral Economics ····· Fairness, superrationality, and self-image in real-world games.
- What is Signaling, Really? ····· Actions that convey information, sometimes at great cost.
- Bargaining and Auctions ····· Idealized models of correct bidding.
- Imperfect Voting Systems ····· Strengths and weaknesses of different voting systems.
- Game Theory as a Dark Art ····· Ways to exploit seemingly "economically rational" behavior.
VI. Promises and Principles
- Beware Trivial Inconveniences ····· Small obstacles can have a huge effect on behavior.
- Time and Effort Discounting ····· On inconsistencies in our revealed preferences.
- Applied Picoeconomics ····· Binding your future self to your present goals.
- Schelling Fences on Slippery Slopes ····· Using arbitrary thresholds to improve coordination.
- Democracy is the Worst Form of Government Except for All the Others Except Possibly Futarchy ····· Like democracy, futarchy (rule by prediction markets) has the advantage of appearing impartial.
- Eight Short Studies on Excuses ····· When should we allow exceptions to our rules?
- Revenge as Charitable Act ····· Revenge can be a personally costly way to disincentivize misdeeds.
- Would Your Real Preferences Please Stand Up? ····· Are we hypocrites, or just weak-willed?
- Are Wireheads Happy? ····· Distinguishing "wanting" something from "liking" it.
- Guilt: Another Gift Nobody Wants ····· An evolutionary, signaling-based explanation of guilt.
VII. Cognition and Association
- Diseased Thinking: Dissolving Questions about Disease ····· On verbal disagreements.
- The Noncentral Fallacy — The Worst Argument in the World? ····· Judging an entire category by an emotional association that only applies to typical category members.
- The Power of Positivist Thinking ····· Focus on statements' empirical content.
- When Truth Isn't Enough ····· It's possible to agree denotationally while disagreeing connotationally.
- Ambijectivity ····· When a question is both subjective and objective.
- The Blue-Minimizing Robot ····· A parable on agency.
- Basics of Animal Reinforcement ····· A primer on classical and operant conditioning.
- Wanting vs. Liking Revisited ····· Distinguishing motivation to act from reinforcement.
- Physical and Mental Behavior ····· Behaviorism meets thinking.
- Trivers on Self-Deception ····· The conscious mind as a self-serving social narrative.
- Ego-Syntonic Thoughts and Values ····· On endorsed vs. non-endorsed mental behavior.
- Approving Reinforces Low-Effort Behaviors ····· Using your self-image to blackmail yourself.
- To What Degree Do We Have Goals? ····· Are our unconscious drives like an agent?
- The Limits of Introspection ····· Are we good at directly perceiving our cognition?
- Secrets of the Eliminati ····· Reducing phenomena to simpler parts, vs. eliminating them.
- Tendencies in Reflective Equilibrium ····· Aspiring to become more consistent.
- Hansonian Optimism ····· If ego-syntonic goals are about signaling, is goodness a lie?
VIII. Doing Good
- Newtonian Ethics ····· Satirizing moral parochialism and sloppy systematizations of ethics.
- Efficient Charity: Do Unto Others... ····· How should we act when our decisions matter most?
- The Economics of Art and the Art of Economics ····· Should Detroit sell its publicly owned artwork?
- A Modest Proposal ····· Using dead babies as a unit of currency.
- The Life Issue ····· What are the consequences of drone warfare?
- What if Drone Warfare Had Come First? ····· A thought experiment.
- Nefarious Nefazodone and Flashy Rare Side-Effects ····· On choosing between drug side-effects.
- The Consequentialism FAQ ····· Argues for assessing actions based on how they help or harm people.
- Doing Your Good Deed for the Day ····· Doing some good can reduce people's willingness to do more good.
- I Myself Am A Scientismist ····· Why apply scientific methods to non-scientific domains?
- Whose Utilitarianism? ····· Questioning the objectivity and uniqueness of utilitarianism.
- Book Review: After Virtue ····· On virtue ethics, a reaction against modern moral philosophy.
- Read History of Philosophy Backwards ····· Historical texts reveal our implicit assumptions.
- Virtue Ethics: Not Practically Useful Either ····· Is virtue ethics useful prescriptively or descriptively?
- Last Thoughts on Virtue Ethics ····· What claims do virtue ethicists make?
- Proving Too Much ····· If an argument sometimes proves falsehoods, it can't be valid.
IX. Liberty
- The Non-Libertarian FAQ (aka Why I Hate Your Freedom)
- A Blessing in Disguise, Albeit a Very Good Disguise
- Basic Income Guarantees
- Book Review: The Nurture Assumption
- The Death of Wages is Sin
- Thank You For Doing Something Ambiguously Between Smoking And Not Smoking
- Lies, Damned Lies, and Facebook (Part 1 of ∞)
- The Life Cycle of Medical Ideas
- Vote on Values, Outsource Beliefs
- A Something Sort of Like Left-Libertarian-ist Manifesto
- Plutocracy Isn’t About Money
- Against Tulip Subsidies
- SlateStarCodex Gives a Graduation Speech
X. Progress
- Intellectual Hipsters and Meta-Contrarianism
- A Signaling Theory of Class x Politics Interaction
- Reactionary Philosophy in an Enormous, Planet-Sized Nutshell
- A Thrive/Survive Theory of the Political Spectrum
- We Wrestle Not With Flesh And Blood, But Against Powers And Principalities
- Poor Folks Do Smile… For Now
- Apart from Better Sanitation and Medicine and Education and Irrigation and Public Health and Roads and Public Order, What Has Modernity Done for Us?
- The Wisdom of the Ancients
- Can Atheists Appreciate Chesterton?
- Holocaust Good for You, Research Finds, But Frequent Taunting Causes Cancer in Rats
- Public Awareness Campaigns
- Social Psychology is a Flamethrower
- Nature is Not a Slate. It’s a Series of Levers.
- The Anti-Reactionary FAQ
- The Poor You Will Always Have With You
- Proposed Biological Explanations for Historical Trends in Crime
- Society is Fixed, Biology is Mutable
XI. Social Justice
- Practically-a-Book Review: Dying to be Free
- Drug Testing Welfare Users is a Sham, But Not for the Reasons You Think
- The Meditation on Creepiness
- The Meditation on Superweapons
- The Meditation on the War on Applause Lights
- The Meditation on Superweapons and Bingo
- An Analysis of the Formalist Account of Power Relations in Democratic Societies
- Arguments About Male Violence Prove Too Much
- Social Justice for the Highly-Demanding-of-Rigor
- Against Bravery Debates
- All Debates Are Bravery Debates
- A Comment I Posted on “What Would JT Do?”
- We Are All MsScribe
- The Spirit of the First Amendment
- A Response to Apophemi on Triggers
- Lies, Damned Lies, and Social Media: False Rape Accusations
- In Favor of Niceness, Community, and Civilization
XII. Politicization
- Right is the New Left
- Weak Men are Superweapons
- You Kant Dismiss Universalizability
- I Can Tolerate Anything Except the Outgroup
- Five Case Studies on Politicization
- Black People Less Likely
- Nydwracu’s Fnords
- All in All, Another Brick in the Motte
- Ethnic Tension and Meaningless Arguments
- Race and Justice: Much More Than You Wanted to Know
- Framing for Light Instead of Heat
- The Wonderful Thing About Triggers
- Fearful Symmetry
- Archipelago and Atomic Communitarianism
XIII. Competition and Cooperation
- Galactic Core
- Book Review: The Two-Income Trap
- Just for Stealing a Mouthful of Bread
- Meditations on Moloch
- Misperceptions on Moloch
- The Invisible Nation — Reconciling Utilitarianism and Contractualism
- Freedom on the Centralized Web
- Book Review: Singer on Marx
- Does Class Warfare Have a Free Rider Problem?
- Book Review: Red Plenty
If you liked these posts and want more, I suggest browsing the SlateStarCodex archives.