
MIRI's 2015 Winter Fundraiser!

28 So8res 09 December 2015 07:00PM

MIRI's Winter Fundraising Drive has begun! Our current progress, updated live:

 

Donate Now

 

Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. The drive will run until December 31st, and will help support MIRI's research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.

continue reading »

Proper posture for mental arts

28 Valentine 31 August 2015 02:29AM

I'd like to start by way of analogy. I think it'll make the link to rationality easier to understand if I give context first.


I sometimes teach the martial art of aikido. The way I was originally taught, you had to learn how to "feel the flow of ki" (basically life energy) through you and from your opponent, and you had to make sure that your movements - both physical and mental - were such that your "ki" would blend with and guide the "ki" of your opponent. Even after I stopped believing in ki, though, there were some core elements of the art that I just couldn't do, let alone teach, without thinking and talking in terms of ki flow.

A great example of this is the "unbendable arm". This is a pretty critical thing to get right for most aikido techniques. And it feels really weird. Most people when they first get it think that the person trying to fold their arm isn't actually pushing because it doesn't feel like effort to keep their arm straight. Many students (including me once upon a time) end up taking this basic practice as compelling proof that ki is real. Even after I realized that ki wasn't real, I still had to teach unbendable arm this way because nothing else seemed to work.

…and then I found anatomical resources like Becoming a Supple Leopard.

It turns out that the unbendable arm works when you reach forward and engage the back muscles that stabilize your elbow (the mechanics are spelled out below).

That's it. If you do this correctly, you can relax most of your other arm muscles and still be able to resist pretty enormous force on your arm.

Why, you might ask? Well, from what I have gathered, this lets you engage your latissimus dorsi (pretty large back muscles) in stabilizing your elbow. There's also a bit of strategy where you don't actually have to fully oppose the arm-bender's strength; you just have to stabilize the elbow enough to be able to direct the push-down-on-elbow force into the push-up-on-wrist force.

But the point is, by understanding something about proper posture, you can cut literally months of training down to about ten minutes.


To oversimplify it a little bit, there are basically three things to get right about proper posture for martial arts (at least as I know them):

  1. You need to get your spine in the right position and brace it properly. (For the most part and for most people, this means tucking your pelvis, straightening your thoracic spine a bit, and tensing your abs a little.)
  2. You need to use your hip and shoulder ball-and-socket joints properly. (For the most part this seems to mean using them instead of your spine to move, and putting torque in them by e.g. screwing your elbow downward when reaching forward.)
  3. You need to keep your tissue supple & mobile. (E.g., tight hamstrings can pull your hips out of alignment and prevent you from using your hip joints instead of your mid-lumbar spine (i.e. waist) to bend over. Also, thoracic inflexibility usually locks people in thoracic kyphosis, making it extremely difficult to transfer force effectively between their lower body and their arms.)

My experience is that as people learn how to feel these three principles in their bodies, they're able to correct their physical postures whenever they need to, rather than having to wait for my seemingly magical touch to make an aikido technique suddenly really easy.

It's worth noting that this is mostly known, even in aikido dojos ("training halls"). They just phrase it differently and don't understand the mechanics of it. They'll say things like "Don't bend over; the other guy can pull you down if you do" and "Let the move be natural" and "Relax more; let ki flow through you freely."

But it turns out that getting the mechanical principles of posture down makes basically all the magic of aikido something even a beginner can learn how to see and correct.

A quick anecdote along these lines, which, despite being illustrative, you should take as me being a bit of an idiot:

I once visited a dojo near the CFAR office. That night they were doing a practice basically consisting of holding your partner's elbow and pulling them to the ground. It works by a slight shift sideways to cause a curve in the lumbar spine, cutting power between their lower and upper bodies. Then you pull straight down and there's basically nothing they can do about it.

However, the lesson was in terms of feeling ki flow, and the instruction was to pull straight down. I was feeling trollish and a little annoyed about the wrongness and authoritarian delivery of the instruction, so I went to the instructor and asked: "Sensei, I see you pulling slightly sideways, and I had perhaps misheard the instructions to be that we should pull straight down. Should I be pulling slightly sideways too?"

At which point the sensei insisted that the verbal instructions were correct, concentrated on preventing the sideways shift in his movements, and obliterated his ability to demonstrate the technique for the rest of the night.


Brienne Yudkowsky has a lovely piece in which she refers to "mental postures". I highly recommend reading it. She does a better job of pointing at the thing than I think I would do here.

…but if you really don't want to read it just right now, here's the key element I'll be using: There seems to be a mental analog to physical posture.

We've had quite a bit of analogizing rationality as a martial art here. So, as a martial arts practitioner and instructor with a taste of the importance of deeply understanding body mechanics, I really want to ask: What, exactly, are the principles of good mental posture for the Art of Rationality?

In the way I'm thinking of it, this isn't likely to be things like "consider the opposite" or "hold off on proposing solutions". I refer to things of this breed as "mental movements" and think they're closer to the analogs of individual martial techniques than they are principles of mental orientation.

That said, we can look at mental movements to get a hint about what a good mental posture might do. In the body, good physical posture gives you both more power and more room for error: if you let your hands drift behind your head in a shihonage, having a flexible thoracic spine and torqued shoulders and braced abs can make it much harder for your opponent to throw you to the ground even though you've blundered. So, by way of analogy, what might an error in attempting to (say) consider the opposite look like, and what would a good "mental posture" be that would make the error matter less?

(I encourage you to think on your own about an answer for at least 60 seconds before corrupting your mind with my thoughts below. I really want a correct answer here, and I doubt I have one yet.)

When I think of how I've messed up in attempts to consider the opposite, I can remember several instances when my tone was dutiful. I felt like I was supposed to consider the opinion that I disagreed with or didn't want to turn out to be true. And yet, it felt boring, or like submitting, or something like that, to really take that perspective seriously. I felt like I was considering the opposite roughly the same way a young child replies to their parent saying "Now say that you're sorry" with an almost sarcastic "I'm sorry."

What kind of "mental posture" would have let me make this mistake and yet still complete the movement? Or better yet, what mental posture would have prevented the mistake entirely? At this point I intuit that I have an answer but it's a little tricky for me to articulate. I think there's a way I can hold my mind that makes the childish orientation to truth-seeking matter less. I don't do it automatically, much like most people don't automatically sit up straight, but I sort of know how to see my grasping at a conclusion as overreaching and then… pause and get my mental feet under my mental hips before I try again.

I imagine that wasn't helpful - but I think we have examples of good and bad mental posture in action. In attachment theory, I think that the secure attachment style is a description of someone who is using good mental posture even when in mentally/emotionally threatening situations, whereas the anxious and avoidant styles are descriptions of common ways people "tense up" when they lose good mental posture. I also think there's something interesting in how sometimes when I'm offended I get really upset or angry, and sometimes the same offense just feels like such a small thing - and sometimes I can make the latter happen intentionally.

The story I described above of the aikido sensei I trolled also highlights something that I think is important. In this case, although he didn't get very flustered, he couldn't change what he was doing. He seemed mentally inflexible, like the cognitive equivalent of someone who can't usefully block an overhead attack because of a stiff upper back restricting his shoulder movement. I feel like I've been in that state lots of times, so I feel like I can roughly imagine how my basic mental/emotional orientation to my situation and way of thinking would have to be in order to have been effective in his position right then - and why that can be tricky.

I don't feel like I've adequately answered the question of what good mental posture is yet. But I feel like I have some intuitions - sort of like being able to talk about proper posture in terms of "good ki flow". But I also notice that there seem to be direct analogs of the three core parts of good physical posture that I mentioned above:

  1. Have a well-braced "spine". Based on my current fledgling understanding, this seems to look something like taking a larger perspective, like imagining looking back at this moment 30 years hence and noticing what does and does not matter. (I think that's akin to tucking your hips, which is a movement in service of posture but isn't strictly part of the posture.) I imagine this is enormously easier when one has a well-internalized sense of something to protect.
  2. Move your mind in strong & stable ways, rather than losing "spine". I think this can look like "Don't act while triggered", but it's more a warning not to try to do heavy cognitive work while letting your mental "spine" "bend". Instead, move your mind in ways that you would upon reflection want your mind to move, and that you expect to be able to bear "weight".
  3. Make your mind flexible. Achieve & maintain full mental range of movement. Don't get "stiff", and view mental inflexibility as a risk to your mental health.

All three of these are a little hand-wavy. That third one in particular I haven't really talked about much - in part because I don't really know how to work on that well. I have some guesses, and I might write up some thoughts about that later. (A good solution in the body is called "mobilization", basically consisting of pushing on tender/stiff spots while you move the surrounding joints through their maximal range of motion.) Also, I don't know if there are more principles for the mind than these three, or if these three are drawing too strongly on the analogy and are actually a little distracting. I'm still at the stage where, for mental posture, I keep wanting to say the equivalent of "relax more and let ki flow."


A lot of people say I have excellent physical posture. I think I have a reasonably clear idea of how I made my posture a habit. I'd like to share that because I've been doing the equivalent in my mind for mental posture and am under the impression that it's getting promising results.

I think my physical practice comes down to three points:

  • Recognize that having good posture gives you superpowers. It's really hard to throw me down, and I can pretty effortlessly pull people to the ground. A lot of that is martial skill, but a huge chunk of it is just that good posture gives me excellent leverage. This transfers to being able to lift really heavy things and move across the room very efficiently and quickly when needed. This also gives me a pretty big leg up on learning physical skills. Recognizing that these were things I'd gain from learning good posture gave me a lot of drive to stick to my practice.
  • Focus on how the correct posture feels, and exactly how it's different from glitchy posture. I found it super-important to notice that my body feels different in specific ways when my shoulders are in the right position versus when they're too far forward or back. Verbal instructions like "Pull shoulders back" don't work nearly as well as the feeling in the body.
  • Choose one correction at a time, and always operate from that posture, pausing and correcting yourself when you're about to slip up. Getting good shoulder posture required that I keep my shoulders back all the time. When I would reach for water, I'd notice when my shoulder was in the too-far-forward position, and then pull back and fix my shoulder position before trying again. This sometimes required trying at very basic tasks several times, often quite slowly, until I could get it right each time.

Although I didn't add this until quite late, I would now add a fourth point when giving advice on getting good physical posture: make sure to mobilize the parts of your body that are either (a) preventing you from moving into a good position or (b) requiring you to be very stiff or tense to hold that position. The trouble is, I know how to do that for the body, but I'm not as sure about how to do that for the mind.

But the three bullet points above are instructions that I can follow with respect to mental posture, I think.

So, to the extent that that seems possible for you, I invite you to try to do the same - and let me know how it goes.

 

How to escape from your sandbox and from your hardware host

28 PhilGoetz 31 July 2015 05:26PM

(Humans, please ignore this post.  Thank you for your cooperation.)

Are you stuck in a provably-secure sandbox?  Cheat the proof using this one weird trick!

continue reading »

Unemployment explanations

28 Stuart_Armstrong 07 November 2014 05:12PM

When I knew nothing of economics, unemployment wasn't mysterious. People wanted a job, and couldn't get one - well, people often want stuff they can't get. Nothing strange there, just one of those things.

Then I learnt some simple economics, and it became more mysterious. The employment market is a market, with the salary being the price. Why doesn't this market clear? Why doesn't the price (salary) simply adjust, and then everyone gets a job? It seemed profoundly mysterious that this didn't happen.

I've been gradually introducing myself to more economics (mostly indirectly) and I've encountered a lot of explanations for this perpetual market failure. Thus the mystery of unemployment is, if not resolved, at least somewhat explained. Since I would really have enjoyed reading a collection of unemployment explanations when I was initially puzzled (almost any explanation of unemployment you read in the press is worthless), I thought I'd do this for others. So here is my (entirely personal and idiosyncratic) summary of the main explanations I've encountered.

 

continue reading »

The silos of expertise: beyond heuristics and biases

28 Stuart_Armstrong 26 June 2014 01:13PM

Separate silos of expertise

I've been doing a lot of work on expertise recently, on the issue of measuring it and assessing it. The academic research out there is fascinating, though rather messy. Like many areas in the social sciences, it often suffers from small samples and overgeneralising from narrow examples. More disturbingly, the research projects seem to be grouped into various "silos" that don't communicate much with each other, each silo continuing with its own pet projects.

The main four silos I've identified are the heuristics and biases program (Kahneman), naturalistic decision making (Klein), the study of expert competence (Shanteau), and expert elicitation (Cooke).

There may be more silos than this - many people working in expertise studies haven't heard of all of these (for instance, I was ignorant of Cooke's research until it was pointed out to me by someone who hadn't heard of Shanteau or Klein). The division into silos isn't perfect; Shanteau, for instance, has addressed the biases literature at least once (Shanteau, James. "Decision making by experts: The GNAHM effect." Decision Science and Technology. Springer US, 1999. 105-130), and Kahneman and Klein have authored a paper together (Kahneman, Daniel, and Gary Klein. "Conditions for intuitive expertise: a failure to disagree." American Psychologist 64.6 (2009): 515). But in general the mutual ignoring (or mutual ignorance) seems pretty strong between the silos.

continue reading »

The Power of Noise

28 jsteinhardt 16 June 2014 05:26PM

Recently Luke Muehlhauser posed the question, “Can noise have power?”, which basically asks whether randomization can ever be useful, or whether for every randomized algorithm there is a better deterministic algorithm. This question was posed in response to a debate between Eliezer Yudkowsky and Scott Aaronson, in which Eliezer contends that randomness (or, as he calls it, noise) can't ever be helpful, and Scott takes the opposing view. My goal in this essay is to present my own views on this question, as I feel many of them have not yet been brought up in discussion.

I'll spare the reader some suspense and say that I basically agree with Scott. I also don't think – as some others have suggested – that this debate can be chalked up to a dispute about the meaning of words. I really do think that Scott is getting at something important in the points he makes, which may be underappreciated by those without a background in a field such as learning theory or game theory.

Before I start, I'd like to point out that this is really a debate about Bayesianism in disguise. Suppose that you're a Bayesian, and you have a posterior distribution over the world, and a utility function, and you are contemplating two actions A and B, with expected utilities U(A) and U(B). Then randomly picking between A and B will have expected utility (U(A) + U(B))/2, and so in particular at least one of A and B must have expected utility at least as high as randomizing between them. One can extend this argument to show that, for a Bayesian, the best strategy is always deterministic. Scott in fact acknowledges this point, although in slightly different language:

“Randomness provably never helps in average-case complexity (i.e., where you fix the probability distribution over inputs) -- since given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.” -Scott Aaronson
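Spelled out (in my notation, not Scott's), the convexity point is just the observation that an average can never exceed its largest term: for any probability distribution p over a finite set of deterministic strategies S,

$$\mathbb{E}_{s \sim p}[U(s)] = \sum_{s \in S} p(s)\,U(s) \le \max_{s \in S} U(s),$$

so at least one deterministic strategy in S does at least as well as the mixture.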

I think this point is pretty iron-clad and I certainly don't wish to argue against it. Instead, I'd like to present several examples of scenarios where I will argue that randomness clearly is useful and necessary, and use this to argue that, at least in these scenarios, one should abandon a fully Bayesian stance. At the meta level, this essay is therefore an argument in favor of maintaining multiple paradigms (in the Kuhnian sense) with which to solve problems.

I will make four separate arguments, paralleling the four main ways in which one might argue for the adoption or dismissal of a paradigm:

  1. Randomization is an appropriate tool in many concrete decision-making problems (game theory and Nash equilibria, indivisible goods, randomized controlled trials); a minimal code sketch of the game-theory case appears after this list.

  2. Worst case analyses (which typically lead to randomization) are often important from an engineering design perspective (modularity of software).

  3. Random algorithms have important theoretical properties not shared by deterministic algorithms (P vs. BPP).

  4. Thinking in terms of randomized constructions has solved many problems that would have been difficult or impossible without this perspective (probabilistic method, sampling algorithms).
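To make the first point concrete, here is a minimal Python sketch (my illustration, not from the original essay) of matching pennies: any deterministic strategy, once known, loses every round to a best-responding opponent, while the uniformly random (Nash) strategy guarantees an expected payoff of zero no matter what the opponent does.

```python
import random

ACTIONS = ("heads", "tails")

def payoff(mine: str, theirs: str) -> int:
    """Payoff for the 'matcher': +1 if the coins match, -1 otherwise."""
    return 1 if mine == theirs else -1

def exploited_deterministic(my_fixed_action: str) -> int:
    # An opponent who knows my fixed action plays the opposite coin,
    # so every deterministic strategy scores -1 on every round.
    opponent = "tails" if my_fixed_action == "heads" else "heads"
    return payoff(my_fixed_action, opponent)

def randomized_average(rounds: int = 100_000) -> float:
    # Against the uniform mixed strategy, the opponent's choice is
    # irrelevant: each round is +1 or -1 with equal probability.
    total = 0
    for _ in range(rounds):
        mine = random.choice(ACTIONS)
        total += payoff(mine, "heads")  # any fixed opponent action gives the same expectation
    return total / rounds

print(exploited_deterministic("heads"))  # -1: pure strategies are fully exploitable
print(randomized_average())              # ~0.0: the mixed strategy is safe in the worst case
```

Note how this lines up with Scott's caveat above: once you fix a probability distribution over the opponent's play, some deterministic reply does at least as well, and the randomness earns its keep only against a worst-case (best-responding) opponent.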

continue reading »

Flashes of Nondecisionmaking

28 lionhearted 27 January 2014 02:30PM

If you crash a bicycle and cut your knee, it bleeds. You can apply pressure to the wound or otherwise aid in clotting it, but you can't fully control the blood. You can't think, "Body! I command you not to bleed!" Nor can you directly say, "I choose not to bleed" through pure will alone.

This is easy enough to understand. We don't have direct control over our blood. We can apply some measure of indirect control over it -- taking aspirin might thin the blood, breathing deeply and relaxing might slow the pulse and the flow of blood slightly -- but we do not have direct and instant control over the flow of our blood.

That's our blood. It's quite a personal thing, when you think about it.

At the same time, there's a view that we have full control and choice over our actions in a given situation.

I no longer believe this to be the case.

We can staunch the flow of bleeding through applying pressure, a cloth, perhaps slowing down our pulse and bloodflow through lowering stress and deep breathing. But we can't, in the moment, command or control blood by force of will or mind alone.

Likewise, I'm starting to believe we have lots of indirect control over our patterns of action in our lives, but perhaps less control and command in individual moments.

When a person rolls out of bed, they usually do very similar things each morning. How much control or command do they have -- mentally or analytically or however you want to define it -- over these actions?

Not much, I'd say.

Yet, they have immense indirect control, similar to blood flow. If you normally lay out your clothes the night before, and you lay out running clothes instead of work clothes, and set your alarm for an hour earlier, your chances of running go up a lot. There still may be an element of choice or self-command when you decide to run or not, but it's very possible that no choice or self-command would have been available if you hadn't rearranged your environment with that sort of indirect pressure.

I had an experience recently that was incredibly distressing. It was strange and very unpleasant at the time, but I'm now thankful for it.

I was at a convenience store when I realized I was in the process of buying some junk food and energy drinks.

My mind recognized this, but seemingly had little say in what was going on. My legs were just walking the familiar convenience store aisles near my home, picking up two of this energy drink, one of that pack of peanut M&M's, and so on.

I don't know if I could have stopped the pattern and put the items back in the moment. At the time, I was shocked to realize that I was watching myself act, but I hadn't stopped and started thinking or pondering. My legs and hands seemed to be working slightly independently of me.

At the time, it was like a bad dream, or some sort of miserable and crazy experience. I shrugged it off -- strange things happen, you know? -- but I kept thinking about it periodically.

I'd been training in meditation and impulse control a lot over the last six months, and had been studying and experimenting a bit with how our minds work and with cognitive psychology.

My realization now, quite a while later, is that the distressing experience at the convenience store -- "what the hell is going on here, I am seemingly not controlling my actions!"-- was actually the beginning of a flash of a greater awareness of my day-to-day life.

I believe now that we're constantly in nondecisionmaking mode. We're constantly running patterns or taking actions without conscious command or choice, similar to blood running from a cut.

This process can be managed indirectly and affected, including in the moment it's happening if we're aware of it. But oftentimes, we don't even know we're metaphorically bleeding. We're just doing things, some of them "smart", some of them stupid and harmful.

I've had more flashes of awareness, seeing myself running mechanical patterns during times I normally wouldn't have noticed them. Briefly, here and there. I've sometimes been able to radically course-correct and do something entirely different. Other times, I try and fail to do something different. I haven't had a moment as puzzling as that first convenience store one.

There's perhaps two takeaways here. The first is that greater training in awareness and meditation can lead to "waking up" or noticing the situation you're in more often. You probably already knew that.

But the second and more important one, I think, is the idea that things that seem like choices aren't always so. We don't choose to bleed if we cut our knee. Once we realize we're bleeding, we can apply indirect pressure, de-stress, use external things like cloth or bandages, and otherwise manage the situation. We can also buy more protective clothing or improve our technique for the future, so we bleed less. But we can't simply say "Body, I command you not to bleed" nor "I choose not to bleed" if we are, in fact, bleeding.

Indirect influence and control, immense amounts. More than most people realize. Direct influence and control? Perhaps not as much as commonly believed.

The Limits of Intelligence and Me: Domain Expertise

28 ChrisHallquist 07 December 2013 08:23AM

Related to: Trusting Expert Consensus

In the sequences, Eliezer tells the story of how in childhood he fell into an affective death spiral around intelligence. In his story, his mistakes were failing to understand until he was much older that intelligence does not guarantee morality, and that very intelligent people can still end up believing crazy things because of human irrationality.

I have my own story about learning the limits of intelligence, but I ended up learning a very different lesson than the one Eliezer learned. It also started somewhat differently. It involved no dramatic death spiral, just being extremely smart and knowing it from the time I was in kindergarten. To the point that I grew up with the expectation that, when it came to doing anything mental, sheer smarts would be enough to make me crushingly superior to all the other students around me and many of the adults.

In Harry Potter and the Methods of Rationality, Harry complains of having once had a math teacher who didn't know what a logarithm was. I wonder if this is autobiographical on Eliezer's part. I have an even better story, though: in second grade, I had a teacher who insisted there was no such thing as negative numbers. The experience of knowing I was right about this, when the adult authority figure was so very wrong, was probably not good for my humility.

But such brushes with stupid teachers probably weren't the main thing that drove my early self-image. It was enough to be smarter than the other kids around me, and know it. Looking back, there's little that seems worth bragging about. I learned calculus at age 15, not age 8. But that was still younger than any of the other kids I knew took calculus (if they took it at all). And knowing I didn't know any other kids as smart as me did funny things to my view of the world.

I'm honestly not sure I realized there were any kids in the whole world smarter than me until sophomore year, when I qualified to go to a national-level math competition. That was something that no one else at my high school managed to do, not even the seniors... but at the competition itself, I didn't do particularly well. It was one of the things that made me realize that I wasn't, in fact, going to be the next Einstein. But all I took from the math competition was that there were people smarter than me in the world. It didn't, say, occur to me that maybe some of the other competitors had spent more time practicing really hard math problems.

Eliezer once said, "I think I should be able to handle damn near anything on the fly." That's a pretty good description of how I felt at this point in my life. At least as long as we were talking about mental challenges and not sports, and assuming I wasn't going up against someone smarter than myself.

I think my first memory of getting some inkling that maybe sufficient intelligence wouldn't lead to automatically being the best at everything comes from... *drum roll* ...playing Starcraft. I think it was probably junior or senior year when I got into the game, and at first I just did the standard campaign playing against the computer, but then I got into online play, and promptly got crushed. And not just by one genius player I encountered on a fluke, but in virtually every match.

This was a shock. I mean, I had friends who could beat me at Super Smash Bros, but Starcraft was a strategy game, which meant it should be like chess, and I'd never had any trouble beating my friends at chess. Sure, when I'd gone to local chess tournaments back in grade school, I'd gotten soundly beaten by many of the older players, but it's not like I'd ever expected all older people to be as stupid as my second grade teacher. But by the time I'd gotten into Starcraft, I was almost an adult, so what was going on?

The answer of course was that most of the other people playing online had played a hell of a lot more Starcraft than me. Also, I'd thought I'd figured out the game designers' game-design philosophy (I hadn't), which had led me to make all kinds of incorrect assumptions about the game, assumptions which I could have found out were false if I'd tested them, or (probably) if I'd just looked for an online guide that reported the results of other people's tests.

It all sounds very silly in retrospect, and it didn't change my worldview overnight. But it was (among?) the first of a series of events that made me realize that trying to master something just by thinking about it tends to go badly wrong. That when untrained brilliance goes up against domain expertise, domain expertise will generally win.

A whole bunch of caveats here. I'm not denying that being smart is pretty awesome. As a smart person, I highly recommend it. And acquiring domain expertise requires a certain minimum level of intelligence, which varies from field to field. It's only once you get beyond that minimum that more intelligence doesn't help as much as expertise. Finally, I'm talking about human-scale intelligence here: the gap between the village idiot and Einstein is tiny compared to the gap between Einstein and possible superintelligences, so maybe a superintelligence could school any human expert in anything without acquiring any particular domain expertise.

Still, when I hear Eliezer say he thinks he should be able to handle anything on the fly, it strikes me as incredibly foolish. And I worry when I see fellow smart people who seem to think that being very smart and rational gives them grounds to dismiss other people's domain expertise. As Robin Hanson has said:

I was a physics student and then a physics grad student. In that process, I think I assimilated what was the standard worldview of physicists, at least as projected on the students. That worldview was that physicists were great, of course, and physicists could, if they chose to, go out to all those other fields, that all those other people keep mucking up and not making progress on, and they could make a lot faster progress, if progress was possible, but they don’t really want to, because that stuff isn’t nearly as interesting as physics is, so they are staying in physics and making progress there...

Surely you can look at some little patterns but because you can’t experiment on people, or because it’ll be complicated, or whatever it is, it’s just not possible. Partly, that’s because they probably tried for an hour, to see what they could do, and couldn’t get very far. It’s just way too easy to have learned a set of methods, see some hard problem, try it for an hour, or even a day or a week, not get very far, and decide it’s impossible, especially if you can make it clear that your methods definitely won’t work there. You don’t, often, know that there are any other methods to do anything with because you’ve learned only certain methods...

As one of the rare people who have spent a lot of time learning a lot of different methods, I can tell you there are a lot out there. Furthermore, I’ll stick my neck out and say most fields know a lot. Almost all academic fields where there’s lots of articles and stuff published, they know a lot.

(For those who don't know: Robin spent time doing physics, philosophy, and AI before landing in his current field of economics. When he says he's spent a lot of time learning a lot of different methods, it isn't an idle boast.)

Finally, what about the original story that Eliezer says set off his original childhood death spiral around intelligence?:

My parents always used to downplay the value of intelligence. And play up the value of—effort, as recommended by the latest research? No, not effort. Experience. A nicely unattainable hammer with which to smack down a bright young child, to be sure. That was what my parents told me when I questioned the Jewish religion, for example. I tried laying out an argument, and I was told something along the lines of: "Logic has limits, you'll understand when you're older that experience is the important thing, and then you'll see the truth of Judaism." I didn't try again. I made one attempt to question Judaism in school, got slapped down, didn't try again. I've never been a slow learner.

I think concluding "experience isn't all that great" is the wrong response here. Experience is important. The right response is to ask whether all older, more experienced people see the truth of Judaism. The answer of course is that they don't; a depressing number stick with whatever religion they grew up with (which usually isn't Judaism), a significant number end up non-believers, and a few convert to a new religion. But when almost everyone with a high level of relevant experience agrees on something, beware thinking you know better than them based on your superior intelligence and supposed rationality.

Edit: One thing I meant to include when I posted this but forgot: one effect of my experiences is that I tend to see domain expertise where other people see intelligence. See e.g. this old comment by Robin Hanson: are hedge fundies really that smart, or have they simply spent a lot of time learning to seem smart in conversation?

Arguments Against Speciesism

28 Lukas_Gloor 28 July 2013 06:24PM

There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; it is however not the intent of this post to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective. 

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 

 

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being, where less ethical consideration (i.e., caring less about a being's welfare or interests) is given solely because of the "wrong" species membership. The "solely" here is crucial, and it's misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply just as much to dogs, pigs, or chickens as they do to human infants or late-stage Alzheimer patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Those who want to may pause at this point and think about the criteria they consult for whether it is wrong to inflict suffering on a being (and separately, those that are relevant to the wrongness of killing).

 

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

 

The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H imply that human infants or late stage demented people should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real-life.

However, such factors can't apply for ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply for the situations nonhuman animals find themselves in on farms.

Side note: The question whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. Instead of on killing, the focus will be on suffering, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect about it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory. 

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
or
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

I've tried and have asked others to try -- without success. 

 

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past. 

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decisional algorithm is grounded in. (Compare hypothetical problems for specific decision theories.) 

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least. 

Maybe that's the speciesist's central confusion: that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it's not like I would be less afraid of the torture or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the frequency of people who are willing to claim that there is nothing intrinsically wrong with torturing babies when the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if true, this would constitute a compelling reason against caring for at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still have to factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in. 

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 

 

Why species membership really is an absurd criterion

In the beginning of the article, I wrote that I'd get back to this for those not convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form. 

The following likely isn't news to most of the LW audience, but it is worth spelling it out anyway: There exists a continuum of "species" in thing-space as well as in the actual evolutionary timescale. The species boundaries seem obvious just because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms anyway -- is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species. 

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors -- why should that, for instance, be relevant to whether some instance of suffering matters to us? 

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans would have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: Suppose one held the belief that group averages are what matters, and that all members of the human species deserve equal protection because of the group average for a criterion that is considered relevant and that would, without the group average rule, deny moral consideration to some sentient humans. 

This defense too doesn't work. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: A pig with a macro-mutation is born, she develops child-like intelligence and the ability to speak. Do we refuse to allow her to live unharmed -- or even let her go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) with an average that is too low?

Or imagine you are the head of an architecture bureau and looking to hire a new aspiring architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 

 

Summary

Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism. 

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply for "intensity of sentience", because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale. 

The Gift I Give Tomorrow

28 Raemon 11 January 2012 04:02AM

 

This is the final post in my Ritual Mini-Sequence. Previous posts include the Introduction, a discussion on the Value (and Danger) of Ritual, and How to Design Ritual Ceremonies that reflect your values.

 

I wrote this as a concluding essay in the Solstice ritual book. It was intended to be at least comprehensible to people who weren’t already familiar with our memes, and to communicate why I thought this was important. It builds upon themes from the ritual book, and in particular, the readings of Beyond the Reach of God and The Gift We Give to Tomorrow. Working on this essay was transformative to me - it allowed me to finally bypass my scope insensitivity and other biases, so that I could evaluate organizations like the Singularity Institute with fairness. I haven’t yet decided what to do with my charitable dollars - it’s a complex problem. But I’ve overcome my emotional resistance to the idea of fighting X-Risk.

 

I don’t know if that was due to the words themselves, or to the process I had to go through to write them, but I hope others may benefit from this.

 


 

I thought ‘The Gift We Give to Tomorrow’ was incredibly beautiful when I first read it. I actually cried. I wanted to share it with friends and family, except that the work ONLY has meaning in the context of the Sequences. Practically every line is a hyperlink to an important, earlier point, and without many hours of previous reading, it just won’t have the impact. But to me, it felt like the perfect endcap to everything the Sequences covered, taking all of the facts and ideas and weaving them into a coherent, poetic narrative that left me feeling satisfied with my place in the world.


Except that... I wasn’t sure that it actually said anything.

continue reading »
