The Great Brain is Located Externally

25 Alicorn 25 June 2009 10:29PM

[Dilbert cartoon]

How many of the things you "know" do you have memorized?

Do you remember how to spell all of those words you let the spellcheck catch?  Do you remember what fraction of a teaspoon of salt goes into that one recipe, or would you look at the list of ingredients to be sure?  Do you remember what kinds of plastic they recycle in your neighborhood, or do you delegate that task to a list attached with a magnet to the fridge?

If I asked you what day of the month it is today, would you know, or would you look at your watch/computer clock/the posting date of this post?

Before I lost my Palm Pilot, I called it my "external brain".  It didn't really fit the description; with no Internet access, it mostly held my contact list, class schedule, and grocery list.  And a knockoff of Minesweeper.  Still, in a real enough sense, it remembered things for me.

continue reading »

Well-Kept Gardens Die By Pacifism

105 Eliezer_Yudkowsky 21 April 2009 02:44AM

Previously in series: My Way
Followup to: The Sin of Underconfidence

Good online communities die primarily by refusing to defend themselves.

Somewhere in the vastness of the Internet, it is happening even now.  It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing.  But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting.  (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)

So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood.  Or if there are new members, their quality also has gone down.

Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...

I am old enough to remember the USENET that is forgotten, though I was very young.  Unlike the first Internet that died so long ago in the Eternal September, in these days there is always some way to delete unwanted content.  We can thank spam for that—so egregious that no one defends it, so prolific that no one can just ignore it, there must be a banhammer somewhere.

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.

continue reading »

Average utilitarianism must be correct?

2 PhilGoetz 06 April 2009 05:10PM

I said this in a comment on Real-life entropic weirdness, but it's getting off-topic there, so I'm posting it here.

My original writeup was confusing, because I used some non-standard terminology, and because I wasn't familiar with the crucial theorem.  We cleared up the terminological confusion (thanks esp. to conchis and Vladimir Nesov), but the question remains.  I rewrote the title yet again, and have here a restatement that I hope is clearer.

  • We have a utility function u(outcome) that gives a utility for one possible outcome.  (Note the word utility.  That means your diminishing marginal utility, and all your preferences, and your aggregation function for a single outcome, are already incorporated into this function.  There is no need to analyze u further, as long as we agree on using a utility function.)
  • We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
  • The von Neumann-Morgenstern theorem indicates that, given 4 reasonable axioms about U, the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes.  This is why we constantly talk on LW about rationality as maximizing expected utility.
  • This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves.  Giving one future self u=10 and another u=0 is exactly as good as giving one u=5 and another u=5.
  • This is the same ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population; modulo the problems that population can change and that not all people are equal.  This is clearer if you use a many-worlds interpretation, and think of maximizing expected value over possible futures as applying average utilitarianism to the population of all possible future yous.
  • Therefore, I think that, if the 4 axioms are valid when calculating U(lottery), they are probably also valid when calculating not our private utility, but a social utility function s(outcome), which sums over people in a similar way to how U(lottery) sums over possible worlds.  The theorem then shows that we should set s(outcome) = the average value of all of the utilities for the different people involved. (In other words, average utilitarianism is correct).  Either that, or the axioms are inappropriate for both U and s, and we should not define rationality as maximizing expected utility.
  • (I am not saying that the theorem reaches down through U to say anything directly about the form of u(outcome).  I am saying that choosing a shape for U(lottery) is the same type of ethical decision as choosing a shape for s(outcome); and the theorem tells us what U(lottery) should look like; and if that ethical decision is right for U(lottery), it should also be right for s(outcome). )
  • And yet, average utilitarianism asserts that equity of utility, even among equals, has no utility.  This is shocking, especially to Americans.
  • It is even more shocking that it is thus possible to prove, given reasonable assumptions, which type of utilitarianism is correct.  One then wonders what other seemingly arbitrary ethical valuations actually have provable answers given reasonable assumptions.
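The indifference claim in the bullets above can be checked numerically: under expected-utility maximization, U depends only on the probability-weighted mean of u, not on how equitably u is spread across outcomes. A minimal sketch in Python (the function and variable names are illustrative, not from the post):

```python
# Expected utility of a lottery: the probability-weighted mean of the
# per-outcome utility u, as the von Neumann-Morgenstern theorem prescribes.
def expected_utility(lottery):
    """lottery: list of (probability, utility_of_outcome) pairs."""
    return sum(p * u for p, u in lottery)

# Two equally likely future selves: an inequitable split (u=10 and u=0)
# versus an equitable one (u=5 and u=5).
inequitable = [(0.5, 10.0), (0.5, 0.0)]
equitable = [(0.5, 5.0), (0.5, 5.0)]

# Both lotteries have expected utility 5, so an expected-utility maximizer
# is indifferent between them - the same verdict an average utilitarian
# gives for the analogous distributions across a population.
assert expected_utility(inequitable) == expected_utility(equitable) == 5.0
```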

Some problems with average utilitarianism from the Stanford Encyclopedia of Philosophy:

Despite these advantages, average utilitarianism has not obtained much acceptance in the philosophical literature. This is due to the fact that the principle has implications generally regarded as highly counterintuitive. For instance, the principle implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984). That total well-being should not matter when we are considering lives worth ending is hard to accept. Moreover, average utilitarianism has implications very similar to the Repugnant Conclusion (see Sikora 1975; Anglin 1977).

(If you assign different weights to the utilities of different people, we could probably get the same result by considering a person with weight W to be equivalent to W copies of a person with weight 1.)
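For integer weights, the reduction in the parenthetical above is easy to verify: a weighted average that gives one person weight W equals the plain average over a population containing W unit-weight copies of that person. A quick check (names illustrative):

```python
# Weighted average of utilities, one weight per person.
def weighted_average(utilities, weights):
    return sum(u * w for u, w in zip(utilities, weights)) / sum(weights)

# Unweighted average over a population.
def plain_average(utilities):
    return sum(utilities) / len(utilities)

# Person A: weight 3, utility 8.  Person B: weight 1, utility 4.
# Weighting A by 3 is equivalent to averaging over three copies of A.
assert weighted_average([8.0, 4.0], [3, 1]) == plain_average([8.0, 8.0, 8.0, 4.0]) == 7.0
```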

Missed Distinctions

34 jimrandomh 11 April 2009 03:15AM

When we lump unlike things together, it confuses us and opens holes in our theories. I'm not normally one to read about diets, dieting advice, or anything of that sort, but in today's article about the Shangri-La Diet, I saw an important distinction that no one's talked about. Something Eliezer said in the comments struck me as odd:

a skipped meal you wouldn't notice would have me dizzy when I stand up

And a few posts later,

I can starve or think, not both at the same time.

Reading these, I thought, that's not what being hungry feels like for me. But while being hungry doesn't feel like that, those descriptions were nonetheless familiar. And then it hit me.

He wasn't describing the symptoms of hunger. He was describing the symptoms of hypoglycemia, more commonly known as low blood sugar. Blood sugar is one of the main systems responsible for regulating appetite, so for most people, having low blood sugar and being hungry are one and the same. The main focus of the Atkins diet, for example, is reducing swings in blood sugar, thereby reducing appetite. The Shangri-La diet seems like it would have a similar effect.

continue reading »

Extenuating Circumstances

34 Eliezer_Yudkowsky 06 April 2009 10:57PM

Followup to: Tsuyoku Naritai

"Just remember, there but for a massive genetic difference, environmental factors, and conscious choices, go you or I." -- Justin Corwin

Failures don't have single causes.  We choose single causes to focus on, but nothing in the universe emerges from a single parent event.  Every assassination ever committed is the fault of every asteroid that wasn't in the right place to hit the assassin.

What good, then, does it do to blame circumstances for your failure?  What good does it do? - to look over a huge causal lattice in which your own decisions played a part, and point to something you can't control, and say:  "There is where it failed."  It might be that a surgical intervention on the past, altering some node outside yourself, would have let you succeed instead of fail.  But what good does this counterfactual do you?  Will you choose that outside reality be different on your next try?

And yet... when I look at other people, not myself, I find myself taking "extenuating circumstances" into account a great deal.  I go to great lengths to "save the world" (as I believe from my epistemic vantage point).  When I consider doing less, I consider that this would make me a horrible awful unforgivable person.  And then I cheerfully shake hands with others who aren't trying at all to save the world.  I seem to want to have my cake and eat it too - to instantiate Goetz's Paradox:  "Society tells you to work to make yourself more valuable.  Then it tells you that when you reason morally, you must assume that all lives are equally valuable.  You can't have it both ways."

Is this an inherent subjective asymmetry - does morality just look different from the outside than inside?  If so, is that okay, or is it a sign of self-contradiction?  Or is it condescension on my part - that I think less of others and so hold them to lower standards?

continue reading »

Declare your signaling and hidden agendas

19 Kaj_Sotala 13 April 2009 12:01PM

Follow-up to: It's okay to be (at least a little) irrational

Many science journals require their authors to declare any competing interests they happen to have. For instance, if you're submitting a study about the health effects of tobacco, and you happen to sit on the board of directors of a major tobacco company, you're supposed to say that out loud. 

The process obviously isn't perfect, as most journals don't have the resources to ensure their authors do actually declare all competing interests. On the whole, though, it helps protect both the readers and the authors. The readers, because they'll know to be more careful in evaluating the reports of researchers who might be biased. The authors, because by declaring any competing interests upfront, they're protected from later accusations of dishonesty. (That's the theory, at least. In practice, authors often don't declare their interests, even if they should.)

Signaling has been discussed a lot on Overcoming Bias, though a bit less on Less Wrong. A large fraction of people's behavior is actually intended to signal some qualities to others, though this isn't necessarily a conscious process. On the other hand, it often is. It seems to me that many seasoned OB/LW readers would instinctively try to avoid giving the impression of excess signaling. We're rationalists, after all! We're trying to find the truth, not show off or convince others of our worth!

As if we even could avoid trying to make a good impression on others, or avoid having other kinds of hidden agendas. We're not any less human simply because we have rallied to rationality's banner. (Not to mention that signaling isn't a bad thing by itself - humanity would be in a very poor state if we didn't have any signals about what others were like.) So, in the interest of self-honesty, I suggest we all begin explicitly declaring our (conscious) hidden agendas and signaling intentions when writing posts. As with the policy of scholarly journals, this will help both readers and writers, and in this case also serve a third and fourth function - making us more honest with ourselves, and making people realize that it's okay to have hidden agendas, and that they don't have to pretend they don't have any. I'll start out with mine.

continue reading »

"Stuck In The Middle With Bruce"

54 CronoDAS 09 April 2009 12:24AM

I was somewhat disappointed to find a lack of Magic: the Gathering players on LessWrong when I asked about it in the off-topic thread. You see, competitive Magic is one of the best, most demanding rationality battlefields that I know about. Furthermore, Magic is discussed extensively on the Internet, and many articles in which people try to explain how to become a better Magic player are, essentially, describing how to become more rational: how to better learn from experience, make judgments from noisy data, and (yes) overcome biases that interfere with one's ability to make better decisions.

Because people here don't play Magic, I can't simply link to those articles and say, "Here. Go read." I have to put everything into context, because Magic jargon has become its own language, distinct from English. Think I'm kidding? I was able to follow match coverage written in French using nothing but my knowledge of Magic-ese and what I remembered from my high school Spanish classes. Instead of simply linking, in order to give you the full effect, I'd have to undertake a project equivalent to translating a work in a foreign language.

So it is with great trepidation that I give you, untranslated, one of the "classics" of Magic literature.

Stuck In The Middle With Bruce by John F. Rizzo.

Now, John "Friggin'" Rizzo isn't one of the great Magic players. Far from it. He is, however, one of the great Magic writers, to the extent that the adjective "great" can be applied to someone who writes about Magic. His bizarre stream-of-consciousness writing style, personal stories, and strongly held opinions have made him a legend in the Magic community. "Stuck in the Middle with Bruce" is his most famous work, as incomprehensible as it may be to those who don't speak our language (and even to those that do).

So, why am I choosing to direct you to this particular piece of writing? Well, although Rizzo doesn't know much about winning, he knows an awful lot about what causes people to lose, and that's the topic of this particular piece - people's need to lose.

Does Bruce whisper into your ear, too?

The Benefits of Rationality?

18 cousin_it 31 March 2009 11:17AM

Robin wrote how being rational can harm you. Let's look at the other side: what significant benefits does rationality give?

The community here seems to agree that rationality is beneficial. Well, obviously people need common sense to survive, but does an additional dose of LessWrong-style rationality help us appreciably in our personal and communal endeavors?

Does LessWrong make us WIN?

(If we don't WIN, our evangelism rings a little hollow. Science didn't spread due to evangelism, science spread because it works. Art spreads because people love it. I want to hold my Art to this standard. Push-selling a solution while it's still inferior might be the locally optimal decision but it corrupts long-term, as many of us have seen in the IT industry. That's if the example of all religions and political movements isn't enough for you. Beware the Evangelism Death Spiral!)

We may claim internal benefits such as improved clarity of thought from each new blog insight. But religious people claim similar internal benefits that actually spill out into the measurable world, such as happiness and charitability. This fact makes us envious, and we attempt to use our internal changes to group together for world-benefiting tasks. To my mind this looks like putting the cart before the horse: why compete with religion on its terms, when we have utility functions of our own to satisfy?

No, feelings won't do. If feelings turn you on, do drugs or get religious. Rationalism needs to verifiably bring external benefit. Don't help me become pure from racism or somesuch. Help me WIN, and the world will beat a path to our door.

Okay, interpersonal relationships are out. Then the most obvious area where rationalism could help is business. And the most obvious community-beneficial application (riffing on some recent posts here) would be scientists banding together and making a profitable part-time business to fund their own research. I can see how many techniques taught here could help, e.g. PD cooperation techniques. If a "rationalism case study" of this sort ever gets launched, I for one will gladly offer my effort. Of course this is just one suggestion; everything's possible.

One thing's definite for me: rationalism needs to be grounded in real-world victories for each one of us. Otherwise what's the point?

Mind Control and Me

10 Patrick 21 March 2009 05:31PM

Reading Eliezer Yudkowsky's works has always inspired an insidious feeling in me, sort of a cross between righteousness, contempt, the fun you get from understanding something new, and gravitas. It's a feeling that I have found to be pleasurable, or at least addictive enough to go through all of his OB posts, and the feeling makes me less skeptical and more obedient than I normally would be. For instance, in an act of uncharacteristic generosity, I decided to make a charitable donation on Eliezer's advice.

Now this is probably a good idea, because the charity is probably going to help guys like me later on in life and of course it's the Right Thing to Do. But the bottom line is that I did something I normally wouldn't have because Eliezer told me to. My sociopathic selfishness was acting as a canary in the mine of my psyche.

Now this could be because Eliezer has creepy mind control powers, but I get similar feelings when reading other people, such as George Orwell, Richard Stallman or Paul Graham. I even have a friend who can inspire that insidious feeling in me. So it's a personal problem, one that I'm not sure I want to remove, but I would like to understand it better.

There are probably buttons being pushed by the style and the sort of ideas in the work that help to create the feeling, and I'll probably try to go over an essay or two and dissect it. However, I'd like to know who, if anyone at all, I should let create such feelings in me, and at what times. Can I trust anyone that much, even if they aren't aware that they're doing it?

I don't know if anyone else here has similar brain overrides, or if I'm just crazy, but it's possible that such brain overrides could be understood much more thoroughly and induced in more people.  So what are the ethics of mind control (for want of a better term), and how much effort should we put into stopping such feelings from occurring?

 

 

Edit Mar 22: Decided to remove the cryonics example due to factual inaccuracies.

Individual Rationality Is a Matter of Life and Death

24 patrissimo 21 March 2009 07:22PM

On at least two occasions - one only a year past - my life was at serious risk because I was not thinking clearly.  Both times, I was lucky (and once, the car even survived!).  As a gambler I don't like counting on luck, and I'd much rather be rational enough to avoid serious mistakes.  So when I checked the top-ranked posts here and saw Robin's Rational Me or We? arguing against rationality as a martial art I was dumbfounded.  To me, individual rationality is a matter of life and death[1].

In poker, much attention is given to the sexy art of reading your opponent, but the true veteran knows that far more important is the art of reading and controlling yourself.  It is very rare that a situation comes up where a "tell" matters, and each of my opponents is only in an occasional hand.  I and my irrationalities, however, are in every decision in every hand.  This is why self-knowledge and self-discipline are first-order concerns in poker, while opponent reading is second or perhaps even third.

And this is why Robin's post is so wrong[2].  Our minds and their irrationalities are part of every second of our lives, every moment we experience, and every decision that we make.  And contra Robin's security metaphor, few of our decisions can be outsourced.  My two bad decisions regarding motor vehicles, for example, could not have easily been outsourced to a group rationality mechanism[3].  Only a tiny percentage of the choices I make every day can be punted to experts.

continue reading »
