"Caledonian, I look forward to being able to downvote your comments instead of deleting them."
What, the software forces you to delete my comments? Someone's holding a gun to your head?
I look forward to your forming a completely closed memetic sphere around yourself, instead of this partially-closed system you've already established.
Because I'm curious:
How much evidence, and what kind, would be necessary before suspicions of contrarianism are rejected in favor of the conclusion that the belief was wrong?
Surely this is a relevant question for a Bayesian.
I would personally be more concerned about an AI trying to make me deliriously happy no matter what methods it used.
Happiness is part of our cybernetic feedback mechanism. It's designed to end once we're on a particular course of action, just as pain ends when we act to prevent damage to ourselves. It's not capable of being a permanent state, unless we drive our nervous system to such an extreme that we break its ability to adjust, and that would probably be lethal.
Any method of producing constant happiness ultimately turns out to be pretty much equivale...
Few people become bored with jumping in SMB because 1) becoming skilled at it is quite hard, 2) it's used to accomplish specific tasks and is quite useful in that context, 3) it's easier to become bored with the game as a whole than with that particular part of it.
Having to take action to avoid unpleasant surprises is usually pleasant, as long as your personal resources aren't stretched too much in the process.
If you eliminate the potential for unpleasant surprises, the game isn't much fun. (Imagine playing chess against an opponent that was so predictable as to never threaten to beat you. Why bother?)
Lots of people find planning their character design decisions, and exploring in detail the mechanical consequences of their designs, to be 'fun'.
Which is why there are so many sites that (for example) post in their entirety the skills for Diablo II and how each additional skillpoint affects the result - information that cannot be easily acquired from the game itself.
Although there are some basic principles behind 'fun', the specific things that make something 'fun' vary wildly from one person to another. If what the designers created wasn't to your taste, perhaps it's not that they failed, but that you're not a member of their target audience.
Gwern, why do you think we have those emotional responses to pain in the first place?
Yes, I'm aware of forms of brain damage that make people not care about negative stimuli. They're extraordinarily crippling.
Nancy Lebovitz, those are great. I may have to appropriate some of those.
I'd say the primary bad thing about pain is not that it hurts, but that it's pushy and won't tune out. You could learn to sleep in a ship's engine room, but a mere stubbed toe grabs and holds your attention. That, I think we could delete with impunity.
If we could learn to simply get along with any level of pain... how would it constitute an obstacle?
Real accomplishment requires real obstacles to avoid, remove, or transcend. Real obstacles require real consequences. And real consequences require pain.
I would suggest that this book, and the two books immediately preceding it, are an examination of the difference between what people believe they want the world to be and what they actually want and need it to be. When people gain enough power to create their vision of the perfect world, they do - and then find they've constructed an elaborate prison at best and a slow and terrible death at worst.
An actual "perfect world" can't be safe, controlled, or certain -- and the inevitable consequence of that is pain. But so is delight.
The opposite of a Great Truth is unpretentiousness.
Mr. Tyler:
I admire your persistence; however, you should be reminded that preaching to the deaf is not a particularly worthwhile activity.
My own complaints regarding the Brave New World consist mainly of noting that Huxley's dystopia specialized in making people fit the needs of society. And if that meant whittling down a square peg so it would fit into a round hole, so be it.
Embryos were intentionally damaged (primarily through exposure to alcohol) so that they would be unlikely to have capabilities beyond what society needed them to have.
This is completely incompatible with my beliefs about the necessity of self-regulating feedback loops, and developing order from the bottom upwards.
Or, to put it another way:
"Fixing" the future, in a way that renders human beings completely redundant and unnecessary even to themselves, isn't fixing anything. It's creating a problem of unlimited scope.
If that's the ultimate outcome of, say, producing superhuman minds - whether they're somehow enslaved to human preferences or not - then we're trying very hard to create a world in which the only rational treatment of humanity is extinction. Whether imposed from without or from within, voluntarily, is irrelevant.
Based on the comments here, it would seem that it's the people who reject ultimately-meaningless forms of play - that is, 'play' that doesn't develop skills useful to perpetuation - and concentrate on the "real world" who will end up existing.
And the Luddites will inherit the Earth...
The mere fact that he has put so much time and energy into working on this issue over many years is strong evidence that he sincerely believes that it is a real possibility
Only if there are no other consequences of his actions that he desires. People working to forward an ideology don't necessarily believe the ideology they're selling - they only need to value some of the consequences of spreading it.
I hope you are both willing at least to say that the other's contrary stance tells you that there is a good likelihood that you are wrong.
If Robin knows that Eliezer believes there is a good likelihood that Eliezer's position is wrong, why would Robin then conclude that his own position is likely to be wrong? And vice versa?
The fact that Eliezer and Robin disagree indicates one of two things: either one possesses crucial information that the other does not, or at least one of the two has made a fatal error.
The disagreement stems from the fact that each believes the other to have made the fatal error, and that their own position is fundamentally sound.
Eric, it's more amusing that both often cite a theorem that agreeing to disagree is impossible.
It's only impossible for rational Bayesians, which neither Hanson nor Yudkowsky is. Neither is any other human being, for that matter.
Don't you get the same effect from adding an orderly grid of dots?

In that particular example, yes. Because the image is static, as is the static.
If the static could change over time, you could get a better sense of where the image lies. It's cheaper and easier - and thus 'better' - to let natural randomness produce this static, especially since significant resources would have to be expended to eliminate the random noise.
What about from aligning the dots along the lines of the image?

If we knew where the image was, we wouldn't need the dots.
To be pre...
Caledonian: Yes, I did. So: can't you always do better in principle by increasing sensitivity?

That's a little bit like saying that you could in principle go faster than light if you ignore relativistic effects, or that you could in principle produce a demonstration within a logical system that it is consistent if you ignore Gödel's second incompleteness theorem.
There are lots of things we can do in principle if we ignore the fact that reality limits the principles that are valid.
As the saying goes: the difference between 'in principle' and 'in practice' is that in principle the...
Caledonian: couldn't you always do better in such a case, in principle (ignoring resource limits), by increasing resolution?
I double-checked the concept of 'optical resolution' on Wikipedia.

Resolution is (roughly speaking) the ability to distinguish two close-together dots as different: the closer the dots can be while still being distinguished, the higher the resolution, and the greater the detail that can be perceived.
I think perhaps you mean 'sensitivity'. It's the ability to detect weak signals close to perceptual threshold that noise improves, not the detail.
But it is an inherently odd proposition that you can get a better picture of the environment by adding noise to your sensory information - by deliberately throwing away your sensory acuity. This can only degrade the mutual information between yourself and the environment. It can only diminish what in principle can be extracted from the data.
It is certainly counterintuitive to think that, by adding noise, you can get more out of data. But it is nevertheless true.
Every detection system has a perceptual threshold, a level of stimulation needed for it to ...
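To make the threshold point concrete, here is a minimal simulation (my own sketch, not from the original thread; the threshold, signal, and noise values are made up for illustration): a constant signal just below the detector's threshold is never reported without noise, but once noise is added, the rate of threshold crossings tracks the signal.

```python
import random

# Stochastic resonance in miniature: a hard-threshold detector and a constant
# signal too weak to cross the threshold on its own. All values are illustrative.
THRESHOLD = 1.0
SIGNAL = 0.8      # sub-threshold signal strength
NOISE_SD = 0.5    # standard deviation of added Gaussian noise
TRIALS = 100_000

def firing_rate(signal: float, noise_sd: float) -> float:
    """Fraction of trials in which input plus noise crosses the threshold."""
    hits = sum(1 for _ in range(TRIALS)
               if signal + random.gauss(0.0, noise_sd) >= THRESHOLD)
    return hits / TRIALS

print(firing_rate(SIGNAL, 0.0))       # 0.0   -- without noise, the signal is invisible
print(firing_rate(SIGNAL, NOISE_SD))  # ~0.34 -- noise lets the signal show up
print(firing_rate(0.0, NOISE_SD))     # ~0.02 -- noise alone fires far less often
```

Comparing the last two rates over time is how the noisy detector reports a sub-threshold signal that the noiseless one, by construction, never can.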
Foraging animals make the same 'mistake': given two territories in which to forage, one of which has a much more plentiful resource and is far more likely to reward an investment of effort and time with a payoff, the obvious strategy is to forage only in the richer territory; however, animals instead split their time between the two territories in proportion to the relative probability of a successful return.
In other words, if one territory is twice as likely to produce food through foraging as the other, animals spend twice as much time there: 2/3rds of their time in the ...
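The arithmetic behind calling this a 'mistake' is simple enough to write out (a sketch with made-up payoff probabilities; only the 2:1 ratio from the example above matters):

```python
# Two territories; foraging in the rich one pays off twice as often as in the poor one.
# The probabilities are made up for illustration.
P_RICH, P_POOR = 0.6, 0.3

# Maximizing: spend all of your time in the richer territory.
maximizing = 1.0 * P_RICH                   # 0.60 expected payoffs per unit time

# Probability matching: split time 2:1, mirroring the payoff ratio.
matching = (2/3) * P_RICH + (1/3) * P_POOR  # 0.50 expected payoffs per unit time

print(maximizing, matching)  # 0.6 vs 0.5 -- the matcher leaves food on the table
```

In a static, competitor-free world the matcher does strictly worse, which is why the strategy looks like an error; presumably the scare quotes around 'mistake' are there because real territories deplete, change, and fill with rivals.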
I would suggest taking a hard look at the elements of your social support network, and trying to determine which would sever their links with you if they knew you were not a Christian.
I do not agree that you are compelled not to lie to people. Truth is a valuable thing, and shouldn't be wasted on those unworthy of it.
Consider that Carl Sagan's protagonist in "Contact", Ellie Arroway, claimed to be a Christian, despite being an atheist. Look carefully at the arguments she offered regarding that claim, and see if they can be adapted to your life....
It is impossible to determine whether something was well-designed without speculating as to its intended function. Bombs are machines, machines whose function is to fly apart; they generally do not last particularly long when they are used. Does that make them poorly-made?
If the purpose of a collection of gears was to fly apart and transmit force that way, sticking together would be a sign of bad design. Saying that the gears must have been well-designed because they stick together is speculating as to their intended function.
I do not see what is gained...
There is no way to tell that something is made by 'intelligence' merely by looking at it - it takes an extensive collection of knowledge about its environment to determine whether something is likely to have arisen through simple processes.
A pile of garbage seems obviously unnatural to us only because we know a lot about Earth nature. Even so, it's not a machine. Aliens concluding that it is a machine with an unknown purpose would be mistaken.
I see that the sentence noting how this line of argument comes dangerously close to the Watchmaker Argument for God has been edited out.
Why? If it's a bad point, it merely makes me look bad. If it's a good point, what's gained by removing it?
Z.M., I agree with your analysis up to the point where you suggest that rational agents act to preserve their current value system.
It may be useful to consider why we have value systems in the first place. When we know why we do a thing, we can evaluate how well we do it, but not until then.
I have no idea what the machine is doing. I don't even have a hypothesis as to what it's doing. Yet I have recognized the machine as the product of an alien intelligence.
Are beaches the product of an alien intelligence? Some of them are - the ones artificially constructed and maintained by humans. What about the 'naturally-occurring' ones, constructed and maintained by entropy? Are they evidence for intelligence? Those grains of sand don't wear down, and they're often close to spherical. Would a visiting UFO pause in awe to recognize beaches as machines with unknown purposes?
Z.M., I agree with your analysis up to the point where you suggest that rational agents act to preserve their current value system.
I suggest that it may be useful for you to consider what the purpose of a value system is. When trying to decide between two value systems, a rational agent must evaluate them in some way. Is there an impersonal and objective set of criteria for evaluation?
Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands. Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish? I have no idea what the machine is doing. I don't even have a hypothesis as to what it's doing. Yet I have recognized the machine as the product of an alien intelligence.
Carefully, Eliezer. You are very, very close to simply restating the Watchmaker Argument in favor of the exis...
You can't escape the temptation to lie to people just by having them not pay you in money. There are other forms of payment, of remuneration, besides money.
In fact, if you care about anything involving people or capable of being affected by them in some way, there can always arise situations in which you could maximize some of your goals or preferences by deceiving them.
There are only a few goals or preferences that change this -- chief among them, the desire to get what you want without deception. If you possess those goals or preferences in a dominant ...
Personally, I'm doing it mainly because everyone else is (stop laughing, it's an important heuristic that should only be overridden when you have a definite reason).

Most smart people I know think that "because everyone else does it" IS a definite reason.
Information and education should be free

Why? People don't value what they get for free. Education was once valued very highly by the common folk in America. That changed once education began to be provided as a right, and children were obliged to go to school instead of its being a sacrific...
But a vote for a losing candidate is not "thrown away"; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote.
Such actions send a lot of messages. I have no confidence in the ability of politicians to determine what I would be trying to convey or the effectiveness of my attempting to do so.
Besides, the point is trivial. A vote for a losing candidate isn't thrown away because the vote almost certainly couldn't have been used productively in the first place - yo...
He quickly appears to conclude that he cannot really discuss any issues with EY because they don't even share the same premises.

So they should establish what premises they DO share, and from that base, determine why they hold the different beliefs that they do.
I find it unlikely that they don't share any premises at all. Their ability to communicate anything, albeit strictly limited, indicates that there's common ground of a sort.
Was Carl Sagan hawking a religion?

Yes. He was trying to convince people that rationality could substitute for mysticism, which they highly value.

Pretty much everyone trying to "sell" an idea dips into religious memes, sooner or later -- as my now-deleted comment makes clear.
Vladimir, you are slightly incorrect. Eliezer doesn't preach rationality as a Way, he preaches a Way that he claims is rationality.
And like any other preacher, he doesn't take well to people questioning and challenging him. You're likely to see your comments censored.
There are those who will see it as almost a religious principle that no one can possibly know that a design will work, no matter how good the argument, until it is actually tested.
It's not necessarily a religious principle, although like anything else, it can be made one. It's a simple truth.
There is a non-trivial distinction between believing that a design will work, and believing that a design is highly likely to work. Maintaining the awareness that our maps do not constrain the territory is hard work, but necessary for a rationalist.
You buy the Boltzmann brain argument? How did you calculate the probabilities? Nobody knows the probability of what seems to be our universe forming

That's not the right probability to be concerned about.
and certainly nobody knows the probability of a Boltzmann brain forming in a universe of unknown size and age.

Emphasis added by me.
Again, you're asking about the wrong thing. Limiting the analysis to a single universe misses the point and -- as you rightfully object -- reduces the hypothesis to what is nearly a necessarily false statement. But there's n...
I have not yet read of any means of actually measuring g, has anyone here got any references?
There's no way to "actually measure g", because g has no operational definition beyond statistical analyses of IQ.
There have been some attempts to link calculated g with neural transmission speeds and how easily brains can cope with given problems, but there's been little success.
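For what it's worth, the statistical analysis in question can be sketched in a few lines: g is extracted as the dominant factor of the correlations among test scores, not read off any instrument. A sketch with simulated scores (all numbers made up for illustration):

```python
import numpy as np

# 'Measuring' g: it is extracted as the first principal component of the
# correlation matrix of test scores. The data here are simulated.
rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)  # the unobservable shared ability the tests are assumed to tap
tests = np.column_stack(
    [0.7 * latent + rng.normal(scale=0.7, size=n) for _ in range(5)]
)

corr = np.corrcoef(tests, rowvar=False)   # the 'positive manifold': all tests correlate
eigvals, eigvecs = np.linalg.eigh(corr)
loadings = eigvecs[:, -1]                 # first principal component = the 'g' factor
g_scores = tests @ loadings               # each person's estimated g

# The extracted factor tracks the latent one (an eigenvector's sign is arbitrary):
print(abs(np.corrcoef(latent, g_scores)[0, 1]))  # ~0.9
```

The analysis will always extract *a* first component from correlated scores, which is exactly the sense in which g has no operational definition independent of the statistics.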
I received an email from Eliezer stating:
You're welcome to repost if you criticize Coherent Extrapolated Volition specifically, rather than talking as if the document doesn't exist. And leave off the snark at the end, of course.
There is no 'snark'; what there IS, is a criticism. A very pointed one that Eliezer cannot counter.
There is no content to 'Coherent Extrapolated Volition'. It contains nothing but handwaving, smoke and mirrors. From the point of view of rational argument, it doesn't exist.
Gravity may not be a genius, but it's still an optimization problem, since the ball "wants" to minimize its potential energy.
Using the terms as Eliezer has, can you offer an example of a phenomenon that is NOT an optimization?
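The sense in which the ball 'optimizes' can be written down directly: its settling is, formally, gradient descent on potential energy, the same template as any numerical optimizer. A sketch (my own illustration; the landscape is made up):

```python
# The ball-rolling-downhill picture, written as what it formally is:
# gradient descent on a potential. The height function is made up.
def height(x: float) -> float:
    return (x - 2.0) ** 2          # a valley with its bottom at x = 2

def roll(x: float, step: float = 0.1, iters: int = 100) -> float:
    for _ in range(iters):
        slope = (height(x + 1e-6) - height(x - 1e-6)) / 2e-6  # numerical derivative
        x -= step * slope                                      # move downhill
    return x

print(roll(0.0))  # ~2.0 -- the 'ball' settles at the minimum of its potential
```

And once the template fits a ball in a dish, it fits water finding its level and gases filling a room; a definition of 'optimization' that admits all of these excludes nothing.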
"If we walk without rhythm, we won't attract the worm."
Set up a pattern-recognition system directed at your own actions, and when you fall into a predictable rut, do something differently.
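A toy version of that pattern-recognition system (entirely my own illustration) might simply watch for repeating cycles in your recent actions:

```python
# A toy 'rut detector': flag when your last few actions are the same short
# cycle repeated, i.e., when you've fallen into a rhythm.
def in_a_rut(history: list[str], cycle: int = 2, repeats: int = 3) -> bool:
    """True if the last cycle*repeats actions are one cycle repeated."""
    window = history[-cycle * repeats:]
    if len(window) < cycle * repeats:
        return False
    pattern = window[:cycle]
    return all(window[i:i + cycle] == pattern
               for i in range(0, len(window), cycle))

print(in_a_rut(list("ababab")))  # True  -- a predictable rhythm; change step
print(in_a_rut(list("abacab")))  # False -- walking without rhythm
```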
Rand is consumed by a need to provide a 'rationalized' explanation for the irrational behavior of her villains. In essence, she declares them to have a sort of Freudian death wish that causes them to sabotage and destroy everyone capable of living happily, ultimately ending with themselves dying last.
Peculiar, given how utterly incompatible her thinking is with the pseudo-scientific bent of Freud's... although the sort of cult that formed around them both is appropriately similar.
I'm pretty sure her thinking was wrong in that regard. Most people don't have secret, rational reasons they hide even from themselves for the irrational things they do. They're simply irrational. Rand, I think, could not accept that.
Heaven and Earth are not humane.
The Tao Te Ching repeatedly stresses the idea that the Great Way has as little to do with what human moralities label desirable as it does with what most human beings perceive as kindness, generosity, or mercy.
Humans consider it cruelty, indifference, and ruthlessness.