Comment author: Yvain2 09 February 2009 11:54:34PM 10 points

Uncle Tom's Cabin is not a valid argument that slavery is wrong. Moving from "my mirror neurons make me sympathize with a person whose suffering is caused by Policy X" to "Policy X is immoral and must be stopped" is not a valid pattern of inference.

Consider a book about the life of a young girl who works in a sweatshop. She's plucked out of a carefree childhood, tyrannized and abused by greedy bosses, and eventually dies of work-related injuries incurred because it wasn't cost-effective to prevent them. I'm sure this book exists, though I haven't personally come across it. And I'm sure this book would provide just as emotionally compelling an argument for banning sweatshops as Uncle Tom's Cabin did for banning slavery.

But the sweatshop issue is a whole lot more complex than that, right? And the arguments in favor of sweatshops are more difficult to put into novel form, or less popular among the people who write novels, or simply not mentioned in that particular book, or all three.

The problem with fiction as evidence is that it's like the guy who says "It was negative thirty degrees last night, worst snowstorm in fifty years, so how come them liberals are still talking about 'global warming'?" It cuts off a tiny slice of the universe and invites you to use it to judge the entire system.

But I agree that fiction is not solely a tool of the dark side. Eliezer's comment about it activating Near-mode thinking struck me as the most specifically useful sentence in the entire post, and I would like to see more on that. I would also add one other benefit: fiction drags you into the author's mindset for a while against your will. You cannot read the book about the poor girl in the sweatshops without - at least a little - cheering on the labor unions and hating the greedy bosses, and this is true no matter how good a capitalist you may be in real life. It confuses whatever part of you is usually building a protective shell of biases around your opinion, and gets you comfortable with living on the opposite side of the argument. If the other side of the argument is a more stable attractor, you might even stay there.

...that wasn't a very formal explanation, but it's the best way I can put it right now.

Comment author: Yvain2 03 February 2009 12:35:22PM 13 points

Assuming the Lord Pilot was correct in saying that, without the nova star, the Happy Fun People would never be able to reach the human starline network...

...and assuming it's literally impossible to travel FTL without a starline...

...and assuming the only starline to the nova star was the one they took...

...and assuming Huygens, described as a "colony world", is sparsely populated, and either can be evacuated or is considered "expendable" compared to the alternatives...

...then blow up Huygens' star. Without the Huygens-Nova starline, the Happy People won't be able to cross into human space, but the Happy-Nova-Babyeater starline will be unaffected. The Happy People can take care of the Babyeaters, and humankind will be safe. For a while.

Still not sure I'd actually take that solution. It depends on how populated Huygens is and how confident I am the Super Happy People can't come up with alternate transportation, and I'm also not *entirely* opposed to the Happy People's proposal. But:

If I had a comm link to the Happy People, I'd also want to hear their answer to the following line of reasoning: one ordinary nova in a single galaxy just attracted three separate civilizations. That means intelligent life is likely to be pretty common across the universe, and our three somewhat-united species are likely to encounter far more of it in the years to come. If the Happy People keep adjusting their (and our) utility functions each time we meet a new intelligent species, then by the millionth species there's not going to be a whole lot remaining of the original Super Happy way of thinking - or the human way of thinking, for that matter. If they're so smart, what's their plan for when that happens?

If they answer "We're fully prepared to compromise our and your utility functions limitlessly many times for the sake of achieving harmonious moralities among all forms of life in the Universe, and we predict each time will involve a change approximately as drastic as making you eat babies," then it will be a bad day to be a colonist on Huygens.

In response to Building Weirdtopia
Comment author: Yvain2 13 January 2009 09:01:52PM 42 points

Political Weirdtopia: Citizens decide it is unfair for a democracy to count only the raw number of people who support a position without considering the intensity with which they believe it. Of course, one can't simply ask people to self-report the intensity with which they believe a position on their ballot, so stronger measures are required. Voting machines are redesigned to force voters to pull down a lever for each issue/candidate. The lever delivers a small electric shock, increasing in intensity each second the voter holds it down. The number of votes a person gets for a particular issue or candidate is a function of how long they keep holding down the lever.

In (choose one: more/less) enlightened sects of this society, the electric shock is capped at a certain level to avoid potential fatalities among overzealous voters. But in the (choose one: more/less) enlightened sects, voters can keep pulling down on the lever as long as they can stand the pain and their heart keeps working. Citizens consider this a convenient and entirely voluntary way to purge fanaticism from the gene pool.

The society lasts for several centuries before being taken over by a tiny cabal of people with Congenital Insensitivity to Pain Disorder.

In response to Dunbar's Function
Comment author: Yvain2 31 December 2008 07:10:32AM 1 point

"Though it's a side issue, what's even more... interesting... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things."

Thank you. That's one of those insights that makes this blog worth reading.

In response to Thanksgiving Prayer
Comment author: Yvain2 28 November 2008 02:08:47PM 11 points

"O changeless and aeternal physical constants, we give thanks to thee for existing at values such that the Universe, upon being set in motion and allowed to run for thirteen billion years, give or take an eon, naturally tends toward a state in which we are seated here tonight with turkey, mashed potatoes, and cranberry sauce in front of us."

Or "O natural selection, thou hast adapted turkeys to a mostly predation-free environment, making them slow, weak, and full of meat. In contrast, thou hast adapted us humans to an environment full of dangers and a need for complex decisions, giving us cognitive abilities that we could eventually use to discover things like iron working. Therefore we thank thee, o natural selection, that we may slaughter and consume arbitrary numbers of turkeys at our pleasure without fear of harm or retribution. Furthermore, we thank thee for giving us an instinctual sense of morality strong enough that we feel compelled to ceremonially express our gratitude to all those who have helped us over the past year, yet not so strong that we dwell too much on what's happening to the turkey when we do so. Amen."

In response to Whither OB?
Comment author: Yvain2 18 November 2008 09:09:38PM 0 points

I don't know what's up with people who say they still haven't read the archives. When I discovered OB, I spent all my free time for two weeks reading the archives straight through :)

I support Roland's idea. A few Eliezer posts per week, plus an (official, well-publicized, Eliezer-and-Robin-supported) forum where the rest of us could discuss those posts and bring up issues of our own. Certain community leaders (hopefully Eliezer and Robin if they have time) picking out particularly interesting topics and comments on the board and telling the posters to write them up in more depth as blog posts. Even if people rejected community-based blog posting, just having a forum to keep the Overcoming Bias community together would be worthwhile.

I'm more comfortable with BBSs than complicated upvote systems like Digg or Reddit. The ones I've seen tend toward groupthink, fifty topics on the same issue, and inane "Upvote if you don't like President Bush" threads.

There are some interesting ideas floating around on preventing bulletin boards from degenerating: require everyone to use their real name, demand some initial investment of time or money to register an account, or institute a karma system.

Kind of off-topic, but in case this is one of my last chances, I want to thank Robin and Eliezer and all the other writers. I usually only comment when I disagree with something, so it's probably not obvious, but I am in awe of the intelligence and clear thinking you display. You have changed my outlook on life, logic, and the world.

In response to Selling Nonapples
Comment author: Yvain2 15 November 2008 02:48:20AM 13 points

I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space is very high in search ordering but very low in preference ordering (i.e. a very attractive-looking but in fact useless idea), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it gives.

A metaphor: religious beliefs constitute a very small and specific area of beliefspace, but that area originally looks very attractive. You could spend your whole life searching within that area and never get anywhere. Saying "be atheist!" provides a trivial amount of optimization power. But that doesn't mean it's of trivial importance in the search for correct beliefs. Another metaphor: if you're stuck in a ditch, the majority of the effort it takes to journey a mile will be the ten vertical meters it takes to climb to the top.

Saying "not X" doesn't make people go for all non-X equally. It makes them apply their intelligence to the problem again, ignoring the trap at X that they would otherwise fall into. If the problem is pretty easy once you stop trying to sell apples, then "sell non-apples" might provide most of the effective optimization power you need.

Comment author: Yvain2 13 November 2008 04:24:38PM 2 points

Robin Gane-McCalla is an Overcoming Bias reader? I knew him back in college, but haven't talked to him in years. It really is a small world.

Comment author: Yvain2 06 November 2008 03:46:00PM 0 points

"Why do people, including you apparently, always hide the price for this kind of thing? Market segmentation? Trying to get people to mentally commit before they find out how expensive it is? Maintaining a veneer of upper-class distaste for the crassness of money (or similarly, a "if you have to ask how much it is, you can't afford it" type thing)?"

I agree with that, and I have a policy of never buying from anyone who does this.

Often I don't know how much something would cost even to an order of magnitude; for example, I have no clue whether Eliezer charged Jane Street closer to $1,000 or $10,000 for his talk. This is probably because I'm not a finance company talk arranger, but I have the same problem with things that are targeted at normal people like me (vacation packages especially). I find (though I can't explain this) that I very rarely bother asking someone who provides no price information for a quote.

Even a "my base fee is $2,000, but varies based on this and this" or a "My fee is in the low four figures" would be better than "my fee is low".

Comment author: Yvain2 04 November 2008 12:08:00AM 2 points

Disappointing. I kept on waiting for Eliezer to say some sort of amazingly witty thing that would cause everything Jaron was saying to collapse like a house of cards, but either he was too polite to interrupt or the format wasn't his style.

At first I thought Jaron was talking nonsense, but after thinking it over for a while, I'm prepared to give him the benefit of the doubt. He said that whether a computer can be intelligent makes no difference and isn't worth talking about. That's obviously wrong if he's using a normal definition of intelligent, but if by intelligent he means "conscious", it makes a lot of sense and he's probably even right - there's not a lot of practical value in worrying about whether an intelligent computer would be conscious (as opposed to a zombie) at this point. He wouldn't be the first person to use those two words in weird ways.

I am also at least a little sympathetic to his "consciousness can't be reduced" argument. It made more sense once he said that consciousness wasn't a phenomenon. Still not perfect sense, but, trying to raise something stronger from its corpse, I would make a somewhat Kantian argument like the following:

Goldbach's conjecture, in its weak form, says that every odd number greater than five is the sum of three primes. It hasn't been proven, but there's a lot of inductive evidence for it. If I give you a difficult large odd number, like 20145, you may not be capable of figuring out the three primes, but you should still guess they exist. Even if you work quite hard to find them and can't, it's still more likely that it's a failure on your part than that the primes don't exist.
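(As an editorial aside, not part of the original comment: for a number this small, a brute-force search actually settles the question in a fraction of a second, which illustrates why "you couldn't find the primes" is weak evidence against their existence.)

```python
def is_prime(n):
    """Trial-division primality test; fast enough for five-digit numbers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def three_prime_sum(n):
    """Find primes p <= q <= r with p + q + r == n, or None.

    The weak Goldbach conjecture predicts a solution for every odd n > 5.
    """
    primes = [p for p in range(2, n - 3) if is_prime(p)]
    prime_set = set(primes)
    for p in primes:
        for q in primes:
            if q < p:
                continue
            r = n - p - q
            if r < q:
                break  # further q only shrink r below q
            if r in prime_set:
                return p, q, r
    return None

p, q, r = three_prime_sum(20145)
print(p, q, r, p + q + r == 20145)
```

The search finds a decomposition almost immediately, because small primes like 3 leave an even remainder with many Goldbach pairs to try.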

However, North Dakota is clearly not the sum of three primes. Even someone with no mathematical knowledge can figure this out. This statement is immune to all of the inductive evidence that Goldbach's conjecture is true, immune to the criticism that you simply aren't smart enough to find the primes, and doesn't require extensive knowledge of the history and geography of North Dakota to make. It's just a simple category error.

Likewise, we have good inductive evidence that all objects follow simple scientific/reductionist laws. A difficult-to-explain object, like ball lightning, probably still follows scientific/reductionist laws, even if we haven't figured out what they are yet. But consciousness is not an object; it's the subject, that by which objects are perceived. Trying to apply rules about objects to it is a category error, and his refusal to do so is immune to the normal scientific/reductionist criticisms you would level against someone who tried that on ball lightning or homeopathy or something.

I'm not sure if I agree with this argument, but I think it's coherent and doesn't violate any laws of rationality.

I agree with everyone who found his constant appeal to "I make algorithms, so you have to believe me!" and his weird nervous laughter irritating.
