What big goals do we have?
Some time ago Jonii wrote:
I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.
When I'm hungry I eat, but then I don't go on eating some more just to maximize a function. Eating isn't something I want a lot of. Likewise I don't want a ton of survival, just a bounded amount every day. Let's define a goal as big if you don't get full: every increment of effort/achievement is valuable, like paperclips to Clippy. Now do we have any big goals? Which ones?
Save the world. A great goal if you see a possible angle of attack, which I don't. The SIAI folks are more optimistic, but if they see a chink in the wall, they're yet to reveal it.
Help those who suffer. Morally upright but tricky to execute: James Shikwati, Dambisa Moyo and Kevin Myers show that even something as clear-cut as aid to Africa can be viewed as immoral. Still a good goal for anyone, though.
Procreate. This sounds fun! Fortunately, the same source that gave us this goal also gave us the means to achieve it, and intelligence is not among them. :-) And honestly, what sense in making 20 kids just to play the good-soldier routine for your genes? There's no unique "you gene" anyway, in several generations your descendants will be like everyone else's. Yeah, kids are fun, I'd like two or three.
Follow your muse. Music, comedy, videogame design, whatever. No limit to achievement! A lot of this is about signaling: would you still bother if all your successes were attributed to someone else's genetic talent? But even apart from the signaling angle, there's still the worrying feeling that entertainment is ultimately useless, like humanity-scale wireheading, not an actual goal for us to reach.
Accumulate power, money or experiences. What for? I never understood that.
Advance science. As Erik Naggum put it:
The purpose of human existence is to learn and to understand as much as we can of what came before us, so we can further the sum total of human knowledge in our life.
Don't know, but I'm pretty content with my life lately. Should I have a big goal at all? How about you?
Normal Cryonics
I recently attended a small gathering whose purpose was to let young people signed up for cryonics meet older people signed up for cryonics - a matter of some concern to the old guard, for obvious reasons.
The young cryonicists' travel was subsidized. I suspect this led to a greatly different selection filter than usually prevails at conferences of what Robin Hanson would call "contrarians". At an ordinary conference of transhumanists - or libertarians, or atheists - you get activists who want to meet their own kind, strongly enough to pay conference fees and travel expenses. This conference was just young people who took the action of signing up for cryonics, and who were willing to spend a couple of paid days in Florida meeting older cryonicists.
The gathering was 34% female, around half of whom were single, and there were a few kids. This may sound normal enough, unless you've been to a lot of contrarian-cluster conferences, in which case you just spit coffee all over your computer screen and shouted "WHAT?" I did sometimes hear "my husband persuaded me to sign up", but no more frequently than "I persuaded my husband to sign up". Around 25% of the people present were from the computer world, 25% from science, and 15% were doing something in music or entertainment - with possible overlap, since I'm working from a show of hands.
I was expecting there to be some nutcases in that room, people who'd signed up for cryonics for just the same reason they subscribed to homeopathy or astrology, i.e., that it sounded cool. None of the younger cryonicists showed any sign of it. There were a couple of older cryonicists who'd gone strange, but none of the young ones that I saw. Only three hands went up that did not identify as atheist/agnostic, and I think those also might have all been old cryonicists. (This is surprising enough to be worth explaining, considering the base rate of insanity versus sanity. Maybe if you're into woo, there is so much more woo that is better optimized for being woo, that no one into woo would give cryonics a second glance.)
The part about actually signing up may also be key - that's probably a ten-to-one or worse filter among people who "get" cryonics. (I put to Bill Faloon of the old guard that probably twice as many people had died while planning to sign up for cryonics eventually as had actually been suspended; and he said "Way more than that.") Actually signing up is an intense filter for Conscientiousness, since it's mildly tedious (requires multiple copies of papers signed and notarized with witnesses) and there's no peer pressure.
For whatever reason, those young cryonicists seemed really normal - except for one thing, which I'll get to tomorrow. Except for that, then, they seemed like very ordinary people: the couples and the singles, the husbands and the wives and the kids, scientists and programmers and sound studio technicians.
It tears my heart out.
Consciousness
(ETA: I've created three threads - color, computation, meaning - for the discussion of three questions posed in this article. If you are answering one of those specific questions, please answer there.)
I don't know how to make this about rationality. It's an attack on something which is a standard view, not only here, but throughout scientific culture. Someone else can do the metalevel analysis and extract the rationality lessons.
The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness. I took this line before, but people struggled to understand my own speculations and this complicated the discussion. So the focus is going to be much more on what other people think - like you, dear reader. If you think consciousness can be reduced to some combination of the above, here's your chance to make your case.
The main exhibits will be color and computation. Then we'll talk about reference; then time; and finally the "unity of consciousness".
The Contrarian Status Catch-22
It used to puzzle me that Scott Aaronson still hadn't come to terms with the obvious absurdity of attempts to make quantum mechanics yield a single world.
I should have realized what was going on when I read Scott's blog post "The bullet-swallowers" in which Scott compares many-worlds to libertarianism. But light didn't dawn until my recent diavlog with Scott, where, at 50 minutes and 20 seconds, Scott says:
"What you've forced me to realize, Eliezer, and I thank you for this: What I'm uncomfortable with is not the many-worlds interpretation itself, it's the air of satisfaction that often comes with it."
-- Scott Aaronson, 50:20 in our Bloggingheads dialogue.
It doesn't show on my face (I need to learn to reveal my expressions more; people complain that I'm eerily motionless during these diavlogs) but at this point I'm thinking, Didn't Scott just outright concede the argument? (He didn't; I checked.) I mean, to me this sounds an awful lot like:
Sure, many-worlds is the simplest explanation that fits the facts, but I don't like the people who believe it.
And I strongly suspect that a lot of people out there who would refuse to identify themselves as "atheists" would say almost exactly the same thing:
What I'm uncomfortable with isn't the idea of a god-free physical universe, it's the air of satisfaction that atheists give off.
Value is Fragile
Followup to: The Fun Theory Sequence, Fake Fake Utility Functions, Joy in the Merely Good, The Hidden Complexity of Wishes, The Gift We Give To Tomorrow, No Universally Compelling Arguments, Anthropomorphic Optimism, Magical Categories, ...
If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:
Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.
"Well," says the one, "maybe according to your provincial human values, you wouldn't like it. But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals. And that's fine by me. I'm not so bigoted as you are. Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -"
My friend, I have no problem with the thought of a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand.
That's what the Future looks like if things go right.
If the chain of inheritance from human (meta)morals is broken, the Future does not look like this. It does not end up magically, delightfully incomprehensible.
With very high probability, it ends up looking dull. Pointless. Something whose loss you wouldn't mourn.
Seeing this as obvious is what requires that immense amount of background explanation.
One Argument Against An Army
Followup to: Update Yourself Incrementally
Yesterday I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Today I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.
Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about "heavenly vengeance".
Someone comes to you and says: "I don't think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually." "Well," you reply, "the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward." Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.
Then another comes to you and says: "I don't think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn't even have a space program." You reply, "But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!" Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.
Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3-to-1.
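A quantitative way to see the mistake, as a minimal sketch: in a Bayesian accounting, each piece of evidence should shift your log-odds exactly once, however many times it gets rehearsed. The likelihood ratios below are invented purely for illustration, not taken from the story.

```python
import math

def to_logodds(p):
    """Probability -> log-odds (natural log)."""
    return math.log(p / (1 - p))

def to_prob(lo):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-lo))

# Hypothetical log-odds shifts, invented for illustration.
prior = 0.5
support = [1.0, 1.0, 1.0]   # strikes near border, stock activity, ambassador
contrary = [-1.5, -1.5]     # billions in trade, no space program

# Correct bookkeeping: every piece of evidence moves the total exactly once.
lo = to_logodds(prior) + sum(support) + sum(contrary)
print(f"counted once:   P(Sylvania guilty) = {to_prob(lo):.2f}")      # 0.50

# The fallacy: after each contrary argument, re-rehearse all three
# supporting arguments, adding the same evidence to the total again.
lo_bad = to_logodds(prior)
for c in contrary:
    lo_bad += c
    lo_bad += sum(support)   # double-counting already-known support
print(f"double-counted: P(Sylvania guilty) = {to_prob(lo_bad):.2f}")  # 0.95
```

Counting honestly, the two contrary arguments roughly cancel the three supporting ones; rehearsing the support after each challenge instead drives your confidence up to 95%.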
Something to Protect
Followup to: Tsuyoku Naritai, Circular Altruism
In the gestalt of (ahem) Japanese fiction, one finds this oft-repeated motif: Power comes from having something to protect.
I'm not just talking about superheroes that power up when a friend is threatened, the way it works in Western fiction. In the Japanese version it runs deeper than that.
In the X saga it's explicitly stated that each of the good guys draws their power from having someone—one person—who they want to protect. Who? That question is part of X's plot—the "most precious person" isn't always who we think. But if that person is killed, or hurt in the wrong way, the protector loses their power—not so much from magical backlash, as from simple despair. This isn't something that happens once per week per good guy, the way it would work in a Western comic. It's equivalent to being Killed Off For Real—taken off the game board.
The way it works in Western superhero comics is that the good guy gets bitten by a radioactive spider; and then he needs something to do with his powers, to keep him busy, so he decides to fight crime. And then Western superheroes are always whining about how much time their superhero duties take up, and how they'd rather be ordinary mortals so they could go fishing or something.
Similarly, in Western real life, unhappy people are told that they need a "purpose in life", so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes. You should be careful not to pick something too expensive, though.
In Western comics, the magic comes first, then the purpose: Acquire amazing powers, decide to protect the innocent. In Japanese fiction, often, it works the other way around.
Of course I'm not saying all this to generalize from fictional evidence. But I want to convey a concept whose deceptively close Western analogue is not what I mean.
I have touched before on the idea that a rationalist must have something they value more than "rationality": The Art must have a purpose other than itself, or it collapses into infinite recursion. But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn't all that important by itself. No. I am asking: Where do rationalists come from? How do we acquire our powers?
Doing your good deed for the day
Interesting new study out on moral behavior. The one sentence summary of the most interesting part is that people who did one good deed were less likely to do another good deed in the near future. They had, quite literally, done their good deed for the day.
In the first part of the study, they showed that people exposed to environmentally friendly, "green" products were more likely to behave nicely. Subjects were asked to rate products in an online store; unbeknownst to them, half were in a condition where the products were environmentally friendly, and the other half in a condition where the products were not. Then they played a Dictator Game. Subjects who had seen environmentally friendly products shared more of their money.
In the second part, instead of just rating the products, they were told to select $25 worth of products to buy from the store. One in twenty-five subjects would actually receive the products they'd purchased. Then they, too, played the Dictator Game. Subjects who had bought environmentally friendly products shared less of their money.
In the third part, subjects bought products as before. Then, they participated in a "separate, completely unrelated" experiment "on perception" in which they earned money by identifying dot patterns. The experiment was designed such that participants could lie about their perceptions to earn more. People who had purchased the green products were more likely to lie.
This does not prove that environmentalists are actually bad people - remember that whether a subject purchased green products or normal products was completely randomized. It does suggest that people who have done one nice thing feel less of an obligation to do another.
This meshes nicely with a self-signalling conception of morality. If part of the point of behaving morally is to convince yourself that you're a good person, then once you're convinced, behaving morally loses a lot of its value.
Your Price for Joining
Previously in series: Why Our Kind Can't Cooperate
In the Ultimatum Game, the first player chooses how to split $10 between themselves and the second player, and the second player decides whether to accept the split or reject it—in the latter case, both parties get nothing. So far as conventional causal decision theory goes (two-box on Newcomb's Problem, defect in Prisoner's Dilemma), the second player should prefer any non-zero amount to nothing. But if the first player expects this behavior—accept any non-zero offer—then they have no motive to offer more than a penny. As I assume you all know by now, I am no fan of conventional causal decision theory. Those of us who remain interested in cooperating on the Prisoner's Dilemma, either because it's iterated, or because we have a term in our utility function for fairness, or because we use an unconventional decision theory, may also not accept an offer of one penny.
And in fact, most Ultimatum "deciders" offer an even split; and most Ultimatum "accepters" reject any offer less than 20%. A 100 USD game played in Indonesia (average per capita income at the time: 670 USD) showed offers of 30 USD being turned down, although this equates to two weeks' wages. We can probably also assume that the players in Indonesia were not thinking about the academic debate over Newcomblike problems—this is just the way people feel about Ultimatum Games, even ones played for real money.
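As a toy illustration of the proposer's dilemma, here is a minimal sketch. The payoff function just encodes the game's rules as stated above; the responder thresholds are assumptions for illustration, not data from the Indonesian study.

```python
def proposer_payoff(offer, threshold, pot=10.0):
    """Proposer keeps (pot - offer) if the offer meets the responder's
    minimum acceptable share; otherwise both players get nothing."""
    return pot - offer if offer >= threshold else 0.0

# Against a pure causal-decision-theory responder, who accepts any
# non-zero amount, the penny offer is 'optimal':
print(proposer_payoff(0.01, threshold=0.01))  # 9.99

# Against responders like those actually observed, who reject offers
# under ~20% of the pot, the penny offer earns nothing:
print(proposer_payoff(0.01, threshold=2.00))  # 0.0
print(proposer_payoff(5.00, threshold=2.00))  # 5.0
```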
There's an analogue of the Ultimatum Game in group coordination. (Has it been studied? I'd hope so...) Let's say there's a common project—in fact, let's say that it's an altruistic common project, aimed at helping mugging victims in Canada, or something. If you join this group project, you'll get more done than you could on your own, relative to your utility function. So, obviously, you should join.
But wait! The anti-mugging project keeps their funds invested in a money market fund! That's ridiculous; it won't earn even as much interest as US Treasuries, let alone a dividend-paying index fund.
Clearly, this project is run by morons, and you shouldn't join until they change their malinvesting ways.
Now you might realize—if you stopped to think about it—that all things considered, you would still do better by working with the common anti-mugging project, than striking out on your own to fight crime. But then—you might perhaps also realize—if you too easily assent to joining the group, why, what motive would they have to change their malinvesting ways?
Well... Okay, look. Possibly because we're out of the ancestral environment where everyone knows everyone else... and possibly because the nonconformist crowd tries to repudiate normal group-cohering forces like conformity and leader-worship...
...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high. Like a 50-way split Ultimatum game, where every one of 50 players demands at least 20% of the money.
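The arithmetic behind that analogy is worth making explicit (a one-line check; the 20% demand is the hypothetical above, not data):

```python
players, min_share = 50, 0.20   # each player's joining price, as a fraction

total_demanded = players * min_share
print(f"demands sum to {total_demanded:.0%} of the pot")  # 1000%

# A split exists only if the demands sum to at most 100% of the pot;
# at 1000%, every possible division is rejected by someone.
print("feasible" if total_demanded <= 1.0 else "no split satisfies everyone")
```

Fifty players demanding 20% each are demanding ten times the money that exists, so the deal falls through every time, and everyone gets nothing.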
Link: PRISMs, Gom Jabbars, and Consciousness (Peter Watts)
http://www.rifters.com/crawl/?p=791
Morsella has gone back to basics. Forget art, symphonies, science. Forget the step-by-step learning of complex tasks. Those may be some of the things we use consciousness for now but that doesn’t mean that’s what it evolved for, any more than the cones in our eyes evolved to give kaleidoscope makers something to do. What’s the primitive, bare-bones, nuts-and-bolts thing that consciousness does once we’ve stripped away all the self-aggrandizing bombast?
Morsella’s answer is delightfully mundane: it mediates conflicting motor commands to the skeletal muscles.