"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."
Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I shoul...
"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."
That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred ye...
*facepalm* And I even read the Sundering series before I wrote that :(
Coming up with narratives that turn the Bad Guys into Good Guys could make good practice for rationalists, along the lines of Nick Bostrom's Apostasy post. Obviously I'm not very good at it.
GeorgeNYC, very good points.
Wealth redistribution in this game wouldn't have to be communist. Depending on how you set up the analogy, it could also be capitalist.
Call JW the capitalist and AA the worker. JW is the one producing wealth, but he needs AA's help to do it. Call the under-the-table wealth redistribution deals AA's "salary".
The worker can always cooperate, in which case he makes some money but the capitalist makes more.
Or he can threaten to defect unless the capitalist raises his salary - he's quitting his job or going on strike for higher pay.
(To perfect the ana...
Darnit TGGP, you're right. Right. From now on I'll use Lord of the Rings for all "sometimes things really are black and white" examples. Unless anyone has some clever reason why elves are worse than Sauron.
[sorry if this is a repost; my original attempt to post this was blocked as comment spam because it had too many links to other OB posts]
I've always hated that Dante quote. The hottest place in Hell is reserved for brutal dictators, mass murderers, torturers, and people who use flamethrowers on puppies - not for the Swiss.
I came to the exact opposite conclusion when pondering the Israel-Palestinian conflict. Most of the essays I've seen in newspapers and on bulletin boards are impassioned pleas to designate one side or the other as Evildoers and the other ...
"how do we minimize suffering in the Middle East?" may be an easier question than "who's to blame?"
The quickest way to minimize suffering is to nuke the Middle East into a sea of glass, with the nukes spaced such that every person is vaporized instantly without feeling a thing. Since they feel nothing from their instant vaporization, they are no longer suffering, and are no longer capable of suffering or causing suffering.
Somehow I don't see this as a viable solution.
"To be concerned about being grown up, to admire the grown up because it is grown up, to blush at the suspicion of being childish; these things are the marks of childhood and adolescence. And in childhood and adolescence they are, in moderation, healthy symptoms. Young things ought to want to grow. But to carry on into middle life or even into early manhood this concern about being adult is a mark of really arrested development. When I was ten, I read fairy tales in secret and would have been ashamed if I had been found doing so. Now that I am fifty I read them openly. When I became a man I put away childish things, including the fear of childishness and the desire to be very grown up." - C.S. Lewis
Bruce and Waldheri, you're being unfair.
You're interpreting this as "some scientists got together one day and asked Canadians about their grief just to see what would happen, then looked for things to correlate it with, and after a bunch of tries came across some numbers involving !Kung tribesmen reproductive potential that fit pretty closely, and then came up with a shaky story about why they might be linked and published it."
I interpret it as "some evolutionary psychologists were looking for a way to confirm evolutionary psychology, predic...
@Robin: Thank you. Somehow I missed that post, and it was exactly what I was looking for.
@Vladimir Nesov: I agree with everything you said except for your statement that fiction is a valid argument, and your supporting analogy to mathematical proof.
Maybe the problem is the two different meanings of "valid argument". First, the formal meaning, where a valid argument is one in which premises are arranged correctly to prove a conclusion, e.g. mathematical proofs and Aristotelian syllogisms. Well-crafted policy arguments, cost-benefit analyses, and stati...
Uncle Tom's Cabin is not a valid argument that slavery is wrong. "My mirror neurons make me sympathize with a person whose suffering is caused by Policy X" to "Policy X is immoral and must be stopped" is not a valid pattern of inference.
Consider a book about the life of a young girl who works in a sweatshop. She's plucked out of a carefree childhood, tyrannized and abused by greedy bosses, and eventually dies of work-related injuries incurred because it wasn't cost-effective to prevent them. I'm sure this book exists, though I haven't p...
Assuming the Lord Pilot was correct in saying that, without the nova star, the Happy Fun People would never be able to reach the human starline network...
...and assuming it's literally impossible to travel FTL without a starline...
...and assuming the only starline to the nova star was the one they took...
...and assuming Huygens, described as a "colony world", is sparsely populated, and either can be evacuated or is considered "expendable" compared to the alternatives...
...then blow up Huygens' star. Without the Huygens-Nova starline, the Happy P...
Political Weirdtopia: Citizens decide it is unfair for a democracy to count only the raw number of people who support a position without considering the intensity with which they believe it. Of course, one can't simply ask people to self-report the intensity with which they believe a position on their ballot, so stronger measures are required. Voting machines are redesigned to force voters to pull down a lever for each issue/candidate. The lever delivers a small electric shock, increasing in intensity each second the voter holds it down. The number of vote...
Though it's a side issue, what's even more... interesting... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things.
Thank you. That's one of those insights that makes this blog worth reading.
"O changeless and aeternal physical constants, we give thanks to thee for existing at values such that the Universe, upon being set in motion and allowed to run for thirteen billion years, give or take an eon, naturally tends toward a state in which we are seated here tonight with turkey, mashed potatoes, and cranberry sauce in front of us."
Or "O natural selection, thou hast adapted turkeys to a mostly predation-free environment, making them slow, weak, and full of meat. In contrast, thou hast adapted us humans to an environment full of dang...
I don't know what's up with people who say they still haven't read the archives. When I discovered OB, I spent all my free time for two weeks reading the archives straight through :)
I support Roland's idea. A few Eliezer posts per week, plus an (official, well-publicized, Eliezer-and-Robin-supported) forum where the rest of us could discuss those posts and bring up issues of our own. Certain community leaders (hopefully Eliezer and Robin if they have time) picking out particularly interesting topics and comments on the board and telling the posters to writ...
I don't know anything about the specific AI architectures in this post, but I'll defend non-apples. If one area of design-space is very high in search ordering but very low in preference ordering (i.e. a very attractive-looking but in fact useless idea), then telling people to avoid it is helpful beyond the seemingly low level of optimization power it gives.
A metaphor: religious beliefs constitute a very small and specific area of beliefspace, but that area originally looks very attractive. You could spend your whole life searching within that area and never...
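To make the non-apples point concrete, here's a toy model with numbers I made up (the salience/value framing is mine, not anything from the post): if the most attractive-looking region of a search space is worthless, a "don't look there" rule saves every searcher the early steps naive search would waste in it.

```python
import random

random.seed(0)
# Candidates get examined in order of how attractive they look ("search
# ordering"), which need not match how good they actually are
# ("preference ordering").
candidates = [{"salience": random.random(), "value": random.random()}
              for _ in range(1000)]

# Make one region pathological: the most attractive-looking tenth is worthless.
by_salience = sorted(candidates, key=lambda c: -c["salience"])
for c in by_salience[:100]:
    c["value"] = 0.0

def steps_to_good(cands, threshold=0.95):
    """Count how many candidates are examined before finding a good one."""
    for i, c in enumerate(cands):
        if c["value"] >= threshold:
            return i + 1
    return len(cands)

naive = steps_to_good(by_salience)         # searches the shiny region first
pruned = steps_to_good(by_salience[100:])  # told to skip the non-apples
print(naive, pruned)                       # pruning saves ~100 wasted steps
```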
Robin Gane-McCalla is an Overcoming Bias reader? I knew him back in college, but haven't talked to him in years. It really is a small world.
"Why do people, including you apparently, always hide the price for this kind of thing? Market segmentation? Trying to get people to mentally commit before they find out how expensive it is? Maintaining a veneer of upper-class distaste for the crassness of money (or similarly, a "if you have to ask how much it is, you can't afford it" type thing)?"
I agree with that, and I have a policy of never buying from anyone who does this.
Often I don't know how much something would cost even to an order of magnitude; for example, I have no clue whe...
Disappointing. I kept on waiting for Eliezer to say some sort of amazingly witty thing that would cause everything Jaron was saying to collapse like a house of cards, but either he was too polite to interrupt or the format wasn't his style.
At first I thought Jaron was talking nonsense, but after thinking it over for a while, I'm prepared to give him the benefit of the doubt. He said that whether a computer can be intelligent makes no difference and isn't worth talking about. That's obviously wrong if he's using a normal definition of intelligent, but if by...
This is a beautiful comment thread. Too rarely do I get to hear anything at all about people's inner lives, so too much of my theory of mind is generalization from one example.
For example, I would never have guessed any of this about reflectivity. Before reading this post, I didn't think there was such a thing as people who hadn't "crossed the Rubicon", except young children. I guess I was completely wrong.
Either I feel reflective but there's a higher level of reflectivity I haven't reached and can't even imagine (which I consider unlikely but am ...
I am glad Stanislav Petrov, contemplating his military oath to always obey his superiors and the appropriate guidelines, never read this post.
"Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents "in a good cause", those who managed to hurt themselves, mostly wouldn't make the history books. (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.) Of those cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder "for the greater good". But how many people cheated their way to actual huge altruistic benefits - cheated an...
"I need to beat my competitors" could be used as a bad excuse for taking unnecessary risks. But it is pretty important. Given that an AI you coded right now with your current incomplete knowledge of Friendliness theory is already more likely to be Friendly than that of some competitor who's never really considered the matter, you only have an incentive to keep researching Friendliness until the last possible moment when you're confident that you could still beat your competitors.
The question then becomes: what is the minimum necessary amount of F...
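For concreteness, here's the tradeoff as a toy expected-value calculation; both curves below are numbers I invented purely for illustration, not anything from the original discussion:

```python
# Made-up toy curves: waiting longer improves Friendliness research but
# erodes your lead over competitors.
def p_win(t):
    """Probability of still beating competitors after t more years."""
    return max(0.0, 1.0 - 0.1 * t)

def p_friendly(t):
    """Probability the AI is Friendly given t more years of research."""
    return min(1.0, 0.2 + 0.15 * t)

# The incentive described above: stop researching at the moment that
# maximizes the chance of a Friendly AI that also wins the race.
best_t = max(range(11), key=lambda t: p_win(t) * p_friendly(t))
print(best_t, round(p_win(best_t) * p_friendly(best_t), 3))  # 4 0.48
```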
Given that full-scale nuclear war would either destroy the world or vastly reduce the number of living people, Petrov, Arkhipov, and all the other "heroic officer makes unlikely decision to avert nuclear war" stories Recovering Irrationalist describes above make a more convincing test case for the anthropic principle than an LHC breakdown or two.
Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.
Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.
Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.
On December 31, 2008, Yva...
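My comment above got cut off, but the setup as stated can be counted out explicitly. A branch-counting sketch (the encoding is mine; only the ten-branch setup comes from the comment):

```python
# Ten representative Everett branches; only branch 10 suffers the attack,
# after which CERN fires the LHC as promised.
def surviving_branches(lhc_destroys_world: bool):
    branches = [{"attack": i == 10, "alive": True} for i in range(1, 11)]
    for b in branches:
        if b["attack"] and lhc_destroys_world:
            b["alive"] = False  # the LHC fires one week after the attack
    return [b for b in branches if b["alive"]]

for hypothesis in (False, True):
    alive = surviving_branches(hypothesis)
    print(hypothesis, len(alive), sum(b["attack"] for b in alive))
# Output: False 10 1, then True 9 0. Under the dangerous-LHC hypothesis no
# surviving observer ever sees an attack, so an observer in a terrorist-free
# branch sees exactly the same evidence under both hypotheses.
```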
"IMHO, the idea that wealth can't usefully be measured is one which is not sufficiently worthwhile to merit further discussion."
The "wealth" idea sounds vulnerable to hidden complexity of wishes. Measure it in dollars and you get hyperinflation. Measure it in resources, and the AI cuts down all the trees and converts them to lumber, then kills all the animals and converts them to oil, even if technology had advanced beyond the point of needing either. Find some clever way to specify the value of all resources, convert them to products and allocate ...
I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.
I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for...
To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.
But by Eliezer's standards, it's impossible for anyone to be a relativist about anything.
Consider what Einstein means when he says time and space are relative. He doesn't mean you can just say whatever you want about them, he means that they're relative to a certain reference frame. An observe...
Why "ought" vs. "p-ought" instead of "h-ought" vs. "p-ought"?
Sure, it might just be terminology. But change
"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right."
to
"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."
and the difference between "because it is the human one" and "because it is h-right" sounds a lot less convincing.
"But that's clearly not true, except in the sense that it's 'arbitrary' to prefer life over death. It's a pretty safe generalization that actions which are considered to be immoral are those which are considered to be likely to cause harm to others."
From a reproductive fitness point of view, or a what-humans-prefer point of view, there's nothing at all arbitrary about morality. Yes, it does mostly contain things that avoid harm. But from an objective point of view, "avoid harm" or "increase reproductive fitness" is as arbitrary...
Things I get from this:
1. Things decided by our moral system are not relative, arbitrary, or meaningless, any more than it's relative, arbitrary, or meaningless to say "X is a prime number" (see the sketch after this list).
2. Which moral system the human race uses is relative, arbitrary, and meaningless, just as there's no reason for the pebble sorters to like prime numbers instead of composite numbers, perfect numbers, or even numbers.
3. A smart AI could follow our moral system as well or better than we ourselves can, just as the Pebble-Sorters' AI can hopefully discover that they're
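Here's the sketch promised above; the primality parallel is mine and deliberately simple:

```python
def is_prime(n: int) -> bool:
    """'X is a prime number' has a determinate answer, whoever asks."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_even(n: int) -> bool:
    """An equally well-defined criterion a species could have cared about."""
    return n % 2 == 0

print(is_prime(7), is_prime(8))  # True False: not relative or arbitrary
print(is_even(8))                # True: but nothing forces preferring primes
```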
This is something that's bothered me a lot about the free market. Many people, often including myself, believe that a bunch of companies which are profit-maximizers (plus some simple laws against use of force) will cause "nice" results. These people believe the effect is so strong that no possible policy directly aimed at niceness will succeed as well as the profit-maximization strategy does. There seems to be a lot of evidence for this. But it also seems too easy, as if you could take ten paper-clip maximizers competing to convert things into di...
No, I still think there's a difference, although the omnipotence suggestion might have been an overly hasty way of explaining it. One side has moving parts, the other is just a big lump of magic.
When a statement is meaningful, we can think of an experiment that confirms it such that the experiment is also built out of meaningful statements. For example, my experiment to confirm the cake-in-the-sun is for a person on August 1 to go to the center of the sun, and see if it tastes delicious. So, IF Y is in the center of the sun, AND IF Y is there on August 1, ...
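Written out as a checkable conjunction (the field and function names below are hypothetical; only the three conjuncts come from my example above):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    at_center_of_sun: bool
    date: str
    cake_delicious: bool

def cake_in_the_sun_confirmed(obs: Observation) -> bool:
    # Each conjunct is itself a meaningful, independently checkable statement.
    return (obs.at_center_of_sun          # "IF Y is in the center of the sun"
            and obs.date == "August 1"    # "AND IF Y is there on August 1"
            and obs.cake_delicious)       # ...and it tastes delicious

print(cake_in_the_sun_confirmed(Observation(True, "August 1", True)))  # True
```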
There are different shades of positivism, and I think at least some positivists are willing to say that any statement for which a decision procedure is possible, even if only in principle for an omnipotent being, is meaningful.
Under this interpretation, as Doug S. says, the omnipotent being can travel back in time, withstand the heat of the sun, and check the status of the cake. The omnipotent being could also teleport to the spaceship past the cosmological horizon and see if it's still there or not.
However, an omnipotent being still wouldn't have a decision proced...
Wow. And this is the sort of thing you write when you're busy...
I've enjoyed these past few posts, but the parts I've found most interesting are the attempts at evolutionary psychology-based explanations for things, like teenage rebellion and now flowers. Are these your own ideas, or have you taken them from some other source where they're backed up by further research? If the latter, can you tell me what the source is? I would love to read more of them (I've already read "Moral Animal", but most of these are still new to me).
If one defines morality in a utilitarian way, in which a moral person is one who tries for the greatest possible utility of everyone in the world, that sidesteps McCarthy's complaint. In that case, the apex of moral progress is also, by definition, the world in which people are happiest on average.
It's easy to view moral progress up to this point as progress towards that ideal. Ending slavery increases ex-slaves' utility, hopefully more than it hurts ex-slaveowners. Ending cat-burning increases cats' utility, hopefully more than it hurts that of cat-burnin...
I second Vladimir's "Prince of Nothing" recommendation. It's a great read just as pure fantasy fiction, but it also helped me to understand some of the concepts on this blog. Reading the "chimpanzee - village idiot - Einstein" line of posts, I found myself interpreting them by sticking Anasurimbor Kelhus at the right end of the spectrum and going from there.
Subhan's explanation is coherent and believable, but he has to bite a pretty big bullet. I happen to like helping people, Hitler happens to like hurting people, and we can both condemn each other if we want but both of our likes are equally valid.
I think most people who think about morality have long realized Subhan's position is a very plausible one, but don't want to bite that bullet. Subhan's arguments confirm that the position is plausible, but they don't make the consequences any more tolerable. I realize that appeal to consequences is a fallacy and that reality doesn't necessarily have to be tolerable, but I don't feel the question has come anywhere near being "dissolved."
It depends.
My morality is my urge to care for other people, plus a systematization of exactly how to do that. You could easily disprove the systematization by telling me something like that giving charity to the poor increases their dependence on handouts and only leaves them worse off. I'd happily accept that correction.
I don't think you could disprove the urge to care for other people, because urges don't have truth-values.
The best you could do would be, as someone mentioned above, to prove that everyone else was an NPC without qualia. Prove that, and I'd probably just behave selfishly, except when it was too psychologically troubling to do so.
It depends on how you disproved my morality.
As far as I can tell, my morality consists of an urge to care about others channeled through a systematization of how to help people most effectively. Someone could easily disprove specifics of the systematization by proving something like that giving charity to the poor only encourages their dependence and increases poverty. If you disproved it that way, I would accept your correction and channel my urge to care differently.
But I don't think you could disprove the urge to care itself, since it's an urge and does...
I took a different route on the "homework".
My thought was that "can" is a way of stating your strength in a given field, relative to some standard. "I can speak Chinese like a native" is saying "My strength in Chinese is equal to the standard of a native level Chinese speaker." "Congress can declare war" means "Congress' strength in the system of American government is equal to the strength needed to declare war."
Algorithmically, it would involve calculating your own strength in a field, and then ...
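A minimal sketch of where that was going, with standards and numbers I made up; only the two quoted examples come from my comment above:

```python
# Made-up strength standards for the two examples in the comment above.
STANDARDS = {
    "speak Chinese like a native": 90,
    "declare war": 100,
}

def can(own_strength: int, task: str) -> bool:
    """'I can X' = my strength in the relevant field meets the standard for X."""
    return own_strength >= STANDARDS[task]

print(can(95, "speak Chinese like a native"))  # True
print(can(95, "declare war"))                  # False
```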
"But I'd agree that if a scientific understanding destroyed Keats's sense of wonder, then that was a bug in Keats"
If Keats could turn his wonder on and off like a light switch, then clearly he was being silly in withholding his wonder from science. Since science is clearly true, in order to maximize his wonder Keats should have pressed the "off" button for wonder based on ideas like rainbows being Bifrost, the magic bridge to Heaven, and the "on" button for wonder based on science.
But Keats, and the rest of us, can't turn wonde...
I had a professor, David Berman, who believed some people could image well and other people couldn't. He cited studies by Galton and James in which some people completely denied they had imaginative ability, and other people were near-perfect "eidetic" imagers. Then he suggested psychological theories denying imagination were mostly developed by those who could not themselves imagine. The only online work of his I can find on the subject is http://books.google.co.jp/books?id=fZXoM80K9qgC&pg=PA13&lpg=PA13&ots=Zs03EkNZ-B&sig=2eVzzMm...
No large-N experiments, but Feynman, in one of his autobiographies, tests this with a friend. One of them hears numbers; the other sees them. They are unable to multitask within the domain they use to process numbers. I for one hear numbers. I can count while performing visual tasks. My father sees them. He cannot. He can speak and count, which I find amusing.
One more thing: Eliezer, I'm surprised to be on the opposite side as you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion (also, would the high probability be solely due to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)