"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."
Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I shoul...
"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."
That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred ye...
*facepalm* And I even read the Sundering series before I wrote that :(
Coming up with narratives that turn the Bad Guys into Good Guys could make good practice for rationalists, along the lines of Nick Bostrom's Apostasy post. Obviously I'm not very good at it.
GeorgeNYC, very good points.
Wealth redistribution in this game wouldn't have to be communist. Depending on how you set up the analogy, it could also be capitalist.
Call JW the capitalist and AA the worker. JW is the one producing wealth, but he needs AA's help to do it. Call the under-the-table wealth redistribution deals AA's "salary".
The worker can always cooperate, in which case he makes some money but the capitalist makes more.
Or he can threaten to defect unless the capitalist raises his salary - he's quitting his job or going on strike for higher pay.
(To perfect the ana...
Darnit TGGP, you're right. Right. From now on I use Lord of the Rings for all "sometimes things really are black and white" examples. Unless anyone has some clever reason why elves are worse than Sauron.
[sorry if this is a repost; my original attempt to post this was blocked as comment spam because it had too many links to other OB posts]
I've always hated that Dante quote. The hottest place in Hell is reserved for brutal dictators, mass murderers, torturers, and people who use flamethrowers on puppies - not for the Swiss.
I came to the exact opposite conclusion when pondering the Israel-Palestinian conflict. Most of the essays I've seen in newspapers and on bulletin boards are impassioned pleas to designate one side or the other as Evildoers and the other ...
"To be concerned about being grown up, to admire the grown up because it is grown up, to blush at the suspicion of being childish; these things are the marks of childhood and adolescence. And in childhood and adolescence they are, in moderation, healthy symptoms. Young things ought to want to grow. But to carry on into middle life or even into early manhood this concern about being adult is a mark of really arrested development. When I was ten, I read fairy tales in secret and would have been ashamed if I had been found doing so. Now that I am fifty I read them openly. When I became a man I put away childish things, including the fear of childishness and the desire to be very grown up." - C.S. Lewis
Bruce and Waldheri, you're being unfair.
You're interpreting this as "some scientists got together one day and asked Canadians about their grief just to see what would happen, then looked for things to correlate it with, and after a bunch of tries came across some numbers involving !Kung tribesmen reproductive potential that fit pretty closely, and then came up with a shaky story about why they might be linked and published it."
I interpret it as "some evolutionary psychologists were looking for a way to confirm evolutionary psychology, predic...
@Robin: Thank you. Somehow I missed that post, and it was exactly what I was looking for.
@Vladimir Nesov: I agree with everything you said except for your statement that fiction is a valid argument, and your supporting analogy to mathematical proof.
Maybe the problem is the two different meanings of "valid argument". First, the formal meaning, where a valid argument is one in which premises are arranged correctly to prove a conclusion, e.g. mathematical proofs and Aristotelian syllogisms. Well-crafted policy arguments, cost-benefit analyses, and stati...
Uncle Tom's Cabin is not a valid argument that slavery is wrong. "My mirror neurons make me sympathize with a person whose suffering is caused by Policy X" to "Policy X is immoral and must be stopped" is not a valid pattern of inference.
Consider a book about the life of a young girl who works in a sweatshop. She's plucked out of a carefree childhood, tyrannized and abused by greedy bosses, and eventually dies of work-related injuries incurred because it wasn't cost-effective to prevent them. I'm sure this book exists, though I haven't p...
Assuming the Lord Pilot was correct in saying that, without the nova star, the Happy Fun People would never be able to reach the human starline network
...and assuming it's literally impossible to travel FTL without a starline
...and assuming the only starline to the nova star was the one they took
...and assuming Huygens, described as a "colony world", is sparsely populated, and either can be evacuated or is considered "expendable" compared to the alternatives
...then blow up Huygens' star. Without the Huygens-Nova starline, the Happy P...
Political Weirdtopia: Citizens decide it is unfair for a democracy to count only the raw number of people who support a position without considering the intensity with which they believe it. Of course, one can't simply ask people to self-report the intensity with which they believe a position on their ballot, so stronger measures are required. Voting machines are redesigned to force voters to pull down a lever for each issue/candidate. The lever delivers a small electric shock, increasing in intensity each second the voter holds it down. The number of vote...
Though it's a side issue, what's even more... interesting... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things.
Thank you. That's one of those insights that makes this blog worth reading.
"O changeless and aeternal physical constants, we give thanks to thee for existing at values such that the Universe, upon being set in motion and allowed to run for thirteen billion years, give or take an eon, naturally tends toward a state in which we are seated here tonight with turkey, mashed potatoes, and cranberry sauce in front of us."
Or "O natural selection, thou hast adapted turkeys to a mostly predation-free environment, making them slow, weak, and full of meat. In contrast, thou hast adapted us humans to an environment full of dang...
I don't know what's up with people who say they still haven't read the archives. When I discovered OB, I spent all my free time for two weeks reading the archives straight through :)
I support Roland's idea. A few Eliezer posts per week, plus an (official, well-publicized, Eliezer-and-Robin-supported) forum where the rest of us could discuss those posts and bring up issues of our own. Certain community leaders (hopefully Eliezer and Robin if they have time) picking out particularly interesting topics and comments on the board and telling the posters to writ...
I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space is very high in search ordering but very low in preference ordering (i.e. a very attractive-looking but in fact useless idea), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it gives.
A metaphor: religious beliefs constitute a very small and specific area of beliefspace, but that area originally looks very attractive. You could spend your whole life searching within that area and never...
Robin Gane-McCalla is an Overcoming Bias reader? I knew him back in college, but haven't talked to him in years. It really is a small world.
"Why do people, including you apparently, always hide the price for this kind of thing? Market segmentation? Trying to get people to mentally commit before they find out how expensive it is? Maintaining a veneer of upper-class distaste for the crassness of money (or similarly, a "if you have to ask how much it is, you can't afford it" type thing)?"
I agree with that, and I have a policy of never buying from anyone who does this.
Often I don't know how much something would cost even to an order of magnitude; for example, I have no clue whe...
Disappointing. I kept on waiting for Eliezer to say some sort of amazingly witty thing that would cause everything Jaron was saying to collapse like a house of cards, but either he was too polite to interrupt or the format wasn't his style.
At first I thought Jaron was talking nonsense, but after thinking it over for a while, I'm prepared to give him the benefit of the doubt. He said that whether a computer can be intelligent makes no difference and isn't worth talking about. That's obviously wrong if he's using a normal definition of intelligent, but if by...
One more thing: Eliezer, I'm surprised to be on the opposite side as you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion (also, would the high probability be solely due to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)