
The scourge of perverse-mindedness

98 Post author: simplicio 21 March 2010 07:08AM

This website is devoted to the art of rationality, and as such, is a wonderful corrective to wrong facts and, more importantly, wrong procedures for finding out facts.

There is, however, another type of cognitive phenomenon that I’ve come to consider particularly troublesome, because it militates against rationality in the irrationalist, and fights against contentment and curiosity in the rationalist. For lack of a better word, I’ll call it perverse-mindedness.

The perverse-minded do not necessarily disagree with you about any fact questions. Rather, they feel the wrong emotions about fact questions, usually because they haven’t worked out all the corollaries.

Let’s make this less abstract. I think the following quote is preaching to the choir on a site like LW:

“The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
-Richard Dawkins, "God's Utility Function," Scientific American (November, 1995).

Am I posting that quote to disagree with it? No. Every jot and tittle of it is correct. But allow me to quote another point of view on this question.

“We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.”

This quote came from an ingenious and misguided man named Alan Watts. You will not find him the paragon of rationality, to put it mildly. And yet, let’s consider this particular statement on its own. What exactly is wrong with it? Sure, you can pick some trivial holes in it – life would not have arisen without the sun, for example, and Homo sapiens was not inevitable in any way. But the basic idea – that life and consciousness are natural and possibly inevitable consequences of the way the universe works – is indisputably correct.

So why would I be surprised to hear a rationalist say something like this? Note that it is empirically indistinguishable from the more common view of “mankind confronted by a hostile universe.” This is the message of the present post: it is not only our knowledge that matters, but also our attitude to that knowledge. I believe I share a desire with most others here to seek truth naively, swallowing the hard pills when it becomes necessary. However, there is no need to turn every single truth into a hard pill. Moreover, sometimes the hard pills also come in chewable form.

What other fact questions might people regard in a perverse way?

How about materialism, the view that reality consists, at bottom, in the interplay of matter and energy? This, to my mind, is the biggie. To come to facilely gloomy conclusions based on materialism seems to be practically a cottage industry among Christian apologists and New Agers alike. Since the claims are all so similar to each other, I will address them collectively.

“If we are nothing but matter in motion, mere chemicals, then:

  1. Life has no meaning;
  2. Morality has no basis;
  3. Love is an illusion;
  4. Everything is futile (there is no immortality);
  5. Our actions are determined; we have no free will;
  6. et
  7. cetera.”


The usual response from materialists is to say that an argument from consequences isn’t valid – if you don’t like the fact that X is just matter in motion, that doesn’t make it false. While eminently true, as a rhetorical strategy for convincing people who aren’t already on board with our programme, it’s borderline suicidal.

I have already hinted at what I think the response ought to be. It is not necessarily a point-by-point refutation of each of these issues individually. The simple fact is, not only is materialism true, but it shouldn’t bother anyone who isn’t being perverse about it, and it wouldn’t bother us if it had always been the standard view.

There are multiple levels of analysis in the lives of human beings. We can speak of societies, move to individual psychology, thence to biology, then chemistry… this is such a trope that I needn’t even finish the sentence.

However, the concerns of, say, human psychology (as distinct from neuroscience), or morality, or politics, or love, are not directly informed by physics. Some concepts only work meaningfully on one level of analysis. If you were trying to predict the weather, would you start by modeling quarks? Reductionism in principle I will argue for until the second coming (i.e., forever). Reductionism in practice is not always useful. This is the difference between proximate and ultimate causation. The perverse-mindedness I speak of consists in leaping straight from behaviour or phenomenon X to its ultimate cause in physics or chemistry. Then – here’s the “ingenious” part – declaring that, since the ultimate level is devoid of meaning, morality, and general warm-and-fuzziness, so too must be all the higher levels.

What can we make of someone who says that materialism implies meaninglessness? I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte,” they would earnestly ask me what on earth the purpose of all the little dots was. Matter is what we’re made of, in the same way as a painting is made of dried pigments on canvas. Big deal! What would you prefer to be made of, if not matter?

It is only by the contrived unfavourable contrast of matter with something that doesn’t actually exist – soul or spirit or élan vital or whatever – that somebody can pull off the astounding trick of spoiling your experience of a perfectly good reality, one that you should feel lucky to inhabit.

I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism. There really is no hard pill to swallow.

What are some other examples of perversity? Eliezer has written extensively on another important one, which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme. It can be easily spotted by tuning your ears to the words “just” and “merely.” By saying, for example, that sexual attraction is “merely” biochemistry, you are telling the truth and deceiving at the same time. You are making a (more or less) correct factual statement, while Trojan-horsing an extraneous value judgment into your listener’s mind as well: “chemicals are unworthy.” On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?

What about the final fate of the universe, to take another example? Many of us probably remember the opening scene of Annie Hall, where little Alvy tells the family doctor he’s become depressed because everything will end in expansion and heat death. “He doesn’t do his homework!” cries his mother. “What’s the point?” asks Alvy.

Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.

The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense. It is the joy of learning something that’s been known for centuries; it is appreciating the consilience of knowledge without moaning about reductionism; it is accepting nature on her own terms, without fatuous navel-gazing about how unimportant you are on the cosmic scale. If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.

Comments (249)

Comment author: Pablo_Stafforini 22 March 2010 11:30:09AM 15 points [-]

Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.

So, what's your argument here? That we shouldn't care about the far future because it is temporally very removed from us? I personally deeply dislike this implication of modern cosmology, because it imposes an upper limit on sentience. I would much prefer that happiness continues to exist indefinitely than that it ceases to exist simply because the universe can no longer support it.

Comment author: khafra 22 March 2010 02:48:11PM 3 points [-]

Your personally being inconvenienced by the heat death of the universe is even less likely than winning the powerball lottery; if you wouldn't spend $1 on a lottery ticket, why spend $1 worth of time worrying about the limits of entropy? Sure, it's the most unavoidable of existential risks, but it's vanishingly unlikely to be the one that gets you.

Comment author: Nick_Tarleton 22 March 2010 03:13:07PM 19 points [-]

Why should I only emotionally care about things that will affect me?

I don't see any good reason to be seriously depressed about any Far fact; but if any degree of sadness is ever an appropriate response to anything Far, the inevitability of death seems like one of the best candidates.

Comment author: Rain 21 March 2010 12:50:54PM *  9 points [-]

From what I can tell, my framing depends upon my emotions more than the reverse, though there's a bit of a feedback cycle as well.

That is to say, if I am feeling happy on a sunny day, I will say that the amazing universe is carrying me along a bright path of sunshine and joy, providing light to dark places, and friendly faces to accompany me, and holy crap that sunlight's passing millions of miles to warm our lives, how awesome is that?

But if I am feeling depressed on that very same day, I will say that the sun's radiation is slowly breaking down the atoms of my weak flesh on the path toward decay and death while all energy slips into entropy and... well, who really cares, anyway?

The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense.

If emotions drive the words, as I feel they do, then this statement, while true, comes from the bright side: "Say happy things, look at the world in a happy way, and you, too, will be happy!"

My dark side disagrees: "There's yet another happy person telling me I shouldn't be depressed, because they're not, and it's not so hard, is it? Great. Thanks for all your help. <eyeroll>"

Comment author: simplicio 21 March 2010 03:46:34PM *  2 points [-]

If emotions drive the words, as I feel they do, then this statement, while true, comes from the bright side: "Say happy things, look at the world in a happy way, and you, too, will be happy!"

My dark side disagrees: "There's yet another happy person telling me I shouldn't be depressed, because they're not, and it's not so hard, is it? Great. Thanks for all your help. <eyeroll>"

I understand how it might sound like that. Of course a sunny disposish is not always possible or even desirable - cheeriness can be equally self-indulgent, and in many ways nature really is trying to kill us.

But there are some fact questions that people feel bad about quite gratuitously. That's what I would like to change. These are the obstacles to human contentedness that people only encounter if they actually go out looking for obstacles, looking for something to feel bad about.

There's lots to legitimately be upset about in this world, lots of suffering endured by people not unlike us. We don't need extra suffering contrived ex nihilo by our minds.

Comment author: MichaelVassar 21 March 2010 10:01:50PM 20 points [-]

I tend to think that the hazard of perverse response to materialism has been fairly adequately dealt with in this community. OTOH, the perverse response to psychology has not. The fact that something is grounded in "status seeking", "conditioning", or "evolutionary motives" generally no more deprives the higher or more naive levels of validity or reality than does materialism, hence my quip that "I believe exactly what Robin Hanson believes, except that I'm not cynical."

Comment author: NancyLebovitz 21 March 2010 11:16:56PM 3 points [-]

If anyone's addressed the interaction between status-seeking, conditioning, and/or evolved drives and the fact that people manage to do useful and sometimes wonderful things anyway, I haven't seen it.

Comment author: MichaelVassar 22 March 2010 04:14:01PM 4 points [-]

I'm just confused. Those terms are short-hand for a model that exists to predict the world. If that model doesn't help you to predict the world, throw the model out, just don't bemoan the world fitting the model if and when it does fit. The world is still the world, as well as being a thing described by a model that is typically phrased cynically.

Comment author: simplicio 21 March 2010 11:36:47PM 2 points [-]

I was very tempted to include evo-psych in my list, but decided it probably warrants more than a cursory treatment.

Comment author: PhilGoetz 21 March 2010 07:47:35PM *  30 points [-]

I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism.

There actually is a way in which they're right.

My first thought was, "You've got it backwards - it isn't that materialism isn't gloomy; it's that spiritualism is even gloomier." Because spiritual beliefs - I'm usually thinking of Christianity when I say that - don't really give you oughtness for free; they take the arbitrary moral judgements of the big guy in the sky and declare them correct. And so you're not only forced to obey this guy; you're forced to enjoy obeying him, and have to feel guilty if you have any independent moral ideas. (This is why Christianity, Islam, communism, and other similar religions often make their followers morally deficient.)

But what do I mean by gloomier? I must have some baseline expectation which both materialism and spirituality fall short of, to feel that way.

And I do. It's memories of how I felt when I was a Christian. Like I was a part of a difficult but Good battle between right and wrong.

Now, hold off for a moment on asking whether that view is rational or coherent, and consider a dog. A dog wants to make its master happy. Dogs have been bred for thousands of years specifically not to want to challenge their master, or to pursue their own goals, as wolves do. When a dog can be with its master, and do what its master tells it to, and see that its master is pleased, the dog is genuinely, tail-waggingly happy. Probably happier than you or I are even capable of being.

A Christian just wants to be a good dog. They've found a way to reach that same blissful state themselves.

The materialistic worldview really is gloomy compared to being a dog.

And we don't have any way to say that we're right and they're wrong.

Factually, of course, they're wrong. But when you're a dog, being factually wrong isn't important. Obeying your master is important. Judged by our standards of factual correctness, we're right and they're wrong. Judged by their standards of being (or maybe feeling like) a good dog, they're right and we're wrong.

One of the problems with CEV, perhaps related to wireheading, is that it would probably fall into a doglike attractor. Possibly you can avoid it by writing into the rules that factual correctness trumps all other values. I don't think you can avoid it that easily. But even if you could, by doing so, you've already decided whose values you're going to implement, before your FAI has even booted up; and the whole framework of CEV is just a rationalization to excuse the fact that the world is going to end up looking the way you want it to look.

Comment author: MichaelVassar 21 March 2010 10:10:14PM 8 points [-]

I disagree with most of this but vote it up for being an excellent presentation of a complex and important position that must be addressed (though as noted, I think it can be) and hasn't been adequately addressed to satisfy (or possibly even to be understood by) all or most LW readers.

Phil, I suggest, that you try to look at Christian and secular children (and possibly those of some other religions) and decide empirically whether they really seem to differ so much in happiness or well being. Looking at people in a wide range of cultures and situations would in general be helpful, but especially that contrast or mostly, I suspect, lack of contrast.

Comment author: PhilGoetz 21 March 2010 11:16:30PM *  20 points [-]

Phil, I suggest, that you try to look at Christian and secular children (and possibly those of some other religions) and decide empirically whether they really seem to differ so much in happiness or well being.

Children are where not to look. Dogs psychologically resemble wolf-pups; they are childlike. Religion, like the breeding of dogs, is neotenous; it allows retention of childlike features into adulthood. To see the differences I'm talking about, you therefore need to look at adults.

Anyway, if you're asking me to judge based on who is the happiest, you've taken the first step down the road to wireheading. Dogs have been genetically reprogrammed to develop in a way that wires their value system to getting a pat on the head from their master.

The basic problem here is how we can simultaneously preserve human values, and not become wireheads, when some people are already wireheads. The religious worldview I spoke of above is a kind of wireheading. Would CEV dismiss it as wireheading? If so, what human values aren't wireheading? How do we walk the tightrope between wireheads and moral realists? Is there even a tightrope to walk there?

Comment author: orthonormal 21 March 2010 08:28:59PM 8 points [-]

IAWYC except for the last paragraph. While CEV isn't guaranteed to be a workable concept, and while it's dangerous to get into the habit of ruling out classes of counterargument by definition, I think there's a problem with criticizing CEV on the grounds "I think CEV will probably go this way, but I think that way is a big mistake, and I expect we'd all see it as a mistake even if we knew more, thought faster, etc." This is exactly the sort of error the CEV project is built to avoid.

Comment author: Rain 21 March 2010 09:24:45PM *  3 points [-]

I was a strong proponent of CEV as the most-correct theory I had heard on the topic of what goals to set, but I've become more skeptical as Eliezer started talking about potential tweaks to avoid insane results like the dog scenario above.

It seems similar in nature to the rule-building method of goal definition, where you create a list of rules; that approach has been roundly criticized as near impossible to do correctly.

Comment author: MichaelVassar 21 March 2010 10:11:16PM 2 points [-]

I also dislike tweaks, but I think that Eliezer does too. I certainly don't endorse any sort of tweak that I have heard and understood.

Comment author: Nick_Tarleton 22 March 2010 12:24:38AM 3 points [-]

FWIW, Eliezer seems to have suggested an anti-selfish-bastard tweak here.

Comment author: MichaelVassar 22 March 2010 04:51:13PM 1 point [-]

Thanks! I'm unhappy to see that, but my preferences are over states of the world, not beliefs, unless they simply strongly favor the belief that they are over states of the world.

Fortunately, we have some time, but that does bode ill I think. OTOH, the general trend, though not the universal trend, is for CEV to look more difficult and stranger with time.

Comment author: NancyLebovitz 22 March 2010 04:57:33PM 0 points [-]

I don't trust CEV. The further you extrapolate from where you are, the less experience you have with applying the virtue you're trying to implement.

Comment author: MichaelVassar 23 March 2010 12:46:30PM 3 points [-]

So you would like experience with the interactions through which our virtues unfold and are developed to be part of the extrapolation dynamic? http://www.google.com/search?q=%22grown+up+further+together%22&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a That was always intended, I think.

If that's not what you mean, well, if you can propose alternatives to CEV that don't automatically fail and which also don't look to me like variations on CEV I think you will be the first to do so. CEV is terribly underspecified, so it's hard to think hard about the problem and propose something that doesn't already fall within the current specification.

Comment author: Strange7 21 March 2010 10:04:33PM 3 points [-]

That's why I prefer the 'would it satisfy everyone who ever lived?' strategy over CEV. Humanity's future doesn't have to be coherent. Coherence is something that happens at evolutionary choke-points, when some species dies back to within an order of magnitude of the minimum sustainable population. When some revolutionary development allows unprecedented surpluses, the more typical response is diversification.

Consider the trilobites. If there had been a trilobite-Friendly AI using CEV, invincible articulated shells would comb carpets of wet muck with the highest nutrient density possible within the laws of physics, across worlds orbiting every star in the sky. If there had been a trilobite-engineered AI going by 100% satisfaction of all historical trilobites, then trilobites would live long, healthy lives in a safe environment of adequate size, and the Cambrian explosion (or something like it) would have proceeded without them.

Most people don't know what they want until you show it to them, and most of what they really want is personal. Food, shelter, maybe a rival tribe that's competent enough to be interesting but always loses when something's really at stake. The option of exploring a larger world, seldom exercised. It doesn't take a whole galaxy's resources to provide that, even if we're talking trillions of people.

Comment author: orthonormal 21 March 2010 10:20:53PM 2 points [-]

I realized a pithy way of stating my objection to that strategy: given how unlikely I think it is that the test could be passed fairly by a Friendly AI, an AI passing the test is stronger evidence that the AI is cheating somehow than that the AI is Friendly.

Comment author: Strange7 21 March 2010 11:26:09PM 2 points [-]

If the AI is programmed so that it genuinely wants to pass the test (or the closest feasible approximation of the test) fairly, cheating isn't an issue. This isn't a matter of fast-talking it's way out of a box. A properly-designed AI would be horrified at the prospect of 'cheating,' the way a loving mother is horrified at the prospect of having her child stolen by fairies and replaced with a near-indistinguishable simulacrum made from sticks and snow.

Comment author: PhilGoetz 21 March 2010 11:37:27PM *  4 points [-]

It is probably possible to pass that test by exploiting human psychology. It is probably impossible to do well on that test by trying to convince humans that your viewpoint is right.

You're talking past orthonormal. You're assuming a properly-designed AI. He's saying that accomplishing the task would be strong evidence of unfriendliness.

Comment author: orthonormal 22 March 2010 12:07:37AM 3 points [-]

What Phil said, and also:

Taboo "fairly"— this is another word the specification of which requires the whole of human values. Proving that the AI understands what we mean by fairness and wants to pass the test fairly is no easier than proving it Friendly in the first place.

Comment author: Strange7 22 March 2010 01:33:55AM 0 points [-]

"Fairly" was the wrong word in this context. Better might be 'honest' or 'truthful.' A truthful piece of information is one which increases the recipient's ability to make accurate predictions; an honest speaker is one whose statements contain only truthful information.

Comment author: RobinZ 22 March 2010 02:23:10AM *  2 points [-]

the recipient's ability to make accurate predictions

About what? Anything? That sounds very easy.

Remember Goodhart's Law - what we want is G, Good, not any particular G* normally correlated with Good.

Comment author: Strange7 22 March 2010 02:50:52AM *  1 point [-]

That sounds very easy.

Walking from Helsinki to Saigon sounds easy, too, depending on how it's phrased. Just one foot in front of the other, right?

Humans make predictions all the time. Any time you perceive anything and are less than completely surprised by it, that's because you made a prediction which was at least partly successful. If, after receiving and assimilating the information in question, any of your predictions is reduced in accuracy, any part of that map becomes less closely aligned with the territory, then the information was not perfectly honest. If you ignore or misinterpret it for whatever reason, even when it's in some higher sense objectively accurate, that still fails the honesty test.

A rationalist should win; an honest communicator should make the audience understand.

Given the option, I'd take personal survival even at the cost of accurate perception and ability to act, but it's not a decision I expect to be in the position of needing to make: an entity motivated to provide me with information that improves my ability to make predictions would not want to kill me, since any incoming information that causes my death necessarily also reduces my ability to think.

Comment author: PhilGoetz 21 March 2010 11:36:00PM 0 points [-]

Your trilobite example is at odds with your everyone-who-lived strategy. The impact of the trilobite example is to show that CEV is fundamentally wrong, because trilobite cognition, no matter how far you extrapolate it, would never lead to love, or value it if it arose by chance.

Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.

Comment author: orthonormal 22 March 2010 03:48:08AM *  8 points [-]

Let me expand upon Vladimir's comment:

Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.

You have not yet learned that a certain argumentative strategy against CEV is doomed to self-referential failure. You have just argued that "exploring the landscape of possible worlds" is a good thing, something that you value. I agree, and I think it's a reflectively consistent value, which others generally share at some level and which they might share more completely if they knew more, thought faster, had grown up farther together, etc.

You then assume, without justification, that "exploring the landscape of possible worlds" will not be expressed as a part of CEV, and criticize it on these grounds.

Huh? What friggin' definition of CEV are you using?!?

EDIT: I realized there was an insult in my original formulation. I apologize for being a dick on the Internet.

Comment author: PhilGoetz 22 March 2010 07:21:44PM *  2 points [-]

You then assume, without justification, that "exploring the landscape of possible worlds" will not be expressed as a part of CEV, and criticize it on these grounds.

Because EY has specifically said that that must be avoided, when he describes evolution as something dangerous. I don't think there's any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums.

Most likely, all the best optimums lie in places that CEV is designed to keep us away from, just as trilobite CEV would keep us away from human values. So CEV is worse than random.

Comment author: Mitchell_Porter 23 March 2010 09:43:10AM 8 points [-]

Most likely, all the best optimums lie in places that CEV is designed to keep us away from, just as trilobite CEV would keep us away from human values.

That a "trilobite CEV" would never lead to human values is hardly a criticism of CEV's effectiveness. The world we have now is not "trilobite friendly"; trilobites are extinct!

CEV, as I understand it, is very weakly specified. All it says is that a developing seed AI chooses its value system after somehow taking into account what everyone would wish for, if they had a lot more time, knowledge, and cognitive power than they do have. It doesn't necessarily mean, for example, that every human being alive is simulated, given superintelligence, and made to debate the future of the cosmos in a virtual parliament. The combination of better knowledge of reality and better knowledge of how the human mind actually works may make it extremely clear that the essence of human values, extrapolated, is XYZ, without any need for a virtual referendum, or even a single human simulation.

It is a mistake to suppose, for example, that a human-based CEV process will necessarily give rise to a civilizational value system which attaches intrinsic value to such complexities as food, sex, or sleep, and which will therefore be prejudiced against modes of being which involve none of these things. You can have a value system which attributes positive value to human beings getting those things, not because they are regarded as intrinsically good, but because entities getting what they like is regarded as intrinsically good.

If a human being is capable of proposing a value system which makes no explicit mention of human particularities at all (e.g. Ben Goertzel's "growth, choice, and joy"), then so is the CEV process. So if the worry is that the future will be kept unnecessarily anthropomorphic, that is not a valid critique. (It might happen if something goes wrong, but we're talking about the basic idea here, not the ways we might screw it up.)

You could say, even a non-anthropomorphic CEV might keep us away from "the best optimums". But let's consider what that would mean. The proposition would be that even in a civilization making the best, wisest, most informed, most open-minded choices it could make, it still might fall short of the best possible worlds. For that to be true, must it not be the case that those best possible worlds are extremely hard to "find"? And if you propose to find them by just being random, must there not be some risk of instead ending up in very bad futures? This criticism may be comparable to the criticism that rational investment is a bad idea, because you'd make much more money if you won the lottery. If these distant optima are so hard to find, even when you're trying to find good outcomes, I don't see how luck can be relied upon to get you there.

This issue of randomness is not absolute. One might expect a civilization with an agreed-upon value system to nonetheless conduct fundamental experiments from time to time. But if there were experiments whose outcomes might be dangerous as well as rewarding, it would be very foolish to just go ahead and do them because if we get lucky, the consequences would be good. Therefore, I do not think that unconstrained evolution can be favored over the outcomes of non-anthropomorphic CEV.

Comment author: Nick_Tarleton 22 March 2010 08:43:33PM *  6 points [-]

Because EY has specifically said that that must be avoided, when he describes evolution as something dangerous.

That doesn't mean that you can't examine possible trajectories of evolution for good things you wouldn't have thought of yourself, just that you shouldn't allow evolution to determine the actual future.

I don't think there's any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums.

I'm not sure what you mean by "constrain" here. A process that reliably reaches an optimum (I'm not saying CEV is such a process) constrains future development to reach an optimum. Any nontrivial (and non-self-undermining, I suppose; one could value the nonexistence of optimization processes or something) value system, whether "provincially human" or not, prefers the world to be constrained into more valuable states.

Most likely, all the best optimums lie in places that CEV is designed to keep us away from

I don't see where you've responded to the point that CEV would incorporate whatever reasoning leads you to be concerned about this.

Comment author: orthonormal 22 March 2010 04:00:58AM 5 points [-]

Or to take one step back:

It seems that you think there are two tiers of values, one consisting of provincial human values, and another consisting of the true universal values like "exploring the landscape of possible worlds". You worry that CEV will catch only the first group of values.

From where I stand, this is just a mistaken question; the values you worry will be lost are provincial human values too! There's no dividing line to miss.

Comment author: PhilGoetz 22 March 2010 07:45:15PM *  0 points [-]

This is one of the things I don't understand: If you think everything is just a provincial human value, then why do you care? Why not play video games or watch YouTube videos instead of arguing about CEV? Is it just more fun?

(There's a longish section trying to answer this question in the CEV document, but I can't make sense of it.)

There's a distinction that hasn't been made on LW yet, between personal values and evangelical values. Western thought traditionally blurs the distinction between them, and assumes that, if you have personal values, you value other people having your values, and must go on a crusade to get everybody else to adopt your personal values.

The CEVer position is, as far as I can tell, that they follow their values because that's what they are programmed to do. It's a weird sort of double-think that can only arise when you act on the supposition that you have no free will with which to act. They're talking themselves into being evangelists for values that they don't really believe in. It's like taking the ability to follow a moral code that you know has no outside justification from Nietzsche's "master morality", and combining it with the prohibition against value-creation from his "slave morality".

Comment author: ata 22 March 2010 08:13:32PM *  3 points [-]

There's a distinction that hasn't been made on LW yet, between personal values and evangelical values. Western thought traditionally blurs the distinction between them, and assumes that, if you have personal values, you value other people having your values, and must go on a crusade to get everybody else to adopt your personal values.

That's how most values work. In general, I value human life. If someone does not share this value, and they decide to commit murder, then I would stop them if possible. If someone does not share this value, but is merely apathetic about murder rather than a potential murderer themselves, then I would cause them to share this value if possible, so there will be more people to help me stop actual murderers. So yes, at least in this case, I would act to get other people to adopt my values, or inhibit them from acting on their own values. Is this overly evangelical? What is bad about it?

In any case, history seems to indicate that "evangelizing your values" is a "universal human value".

Comment author: PhilGoetz 23 March 2010 04:01:27AM *  14 points [-]

Groups that didn't/don't value evangelizing their values:

  • The Romans. They don't care what you think; they just want you to pay your taxes.
  • The Jews. Because God didn't choose you.
  • Nietzscheans. Those are their values, dammit! Create your own!
  • Goths. (Angst-goths, not Visi-goths.) Because if everyone were a goth, they'd be just like everyone else.

We get into one sort of confusion by using particular values as examples. You talk about valuing human life. How about valuing the taste of avocados? Do you want to evangelize that? That's kind of evangelism-neutral. How about the preferences you have that make one particular private place, or one particular person, or other limited resource, special to you? You don't want to evangelize those preferences, or you'd have more competition. Is the first sort of value the only one CEV works with? How does it make that distinction?

We get into another sort of confusion by not distinguishing between the values we hold as individuals, the values we encourage our society to hold, and the values we want God to hold. The kind of values you want your God to hold are very different from the kind of values you want people to hold, in the same way that you want the referee to have different desires than the players. CEV mushes these two very different things together.

Comment author: ata 23 March 2010 05:59:46PM *  0 points [-]

Good points. I haven't thoroughly read the CEV document yet, so I don't know if there is any discussion of this, but it does seem that it should make a distinction between those different types of values and preferences.

Comment author: PhilGoetz 22 March 2010 07:02:29PM *  1 point [-]

I understand what you're saying, and I've heard that answer before, repeatedly; and I don't buy it.

Suppose we were arguing about the theory of evolution in the 19th century, and I said, "Look, this theory just doesn't work, because our calculations indicate that selection doesn't have the power necessary." That was the state of things around the turn of the century, when genetic inheritance was assumed to be analog rather than discrete.

An acceptable answer would be to discover that genes were discrete things that an organism had just 2 copies of, and that one was often dominant, so that the equations did in fact show that selection had the necessary power.
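The arithmetic behind that rescue can be made concrete. Under the standard single-locus model (my illustration, not part of the comment), selection of strength s against the recessive homozygote aa changes the frequency q of allele a by q' = q(1 - sq)/(1 - sq²), which turns out to be plenty of power for selection to do real work:

```python
def next_q(q, s):
    """One generation of selection against recessive homozygotes (aa).

    q: frequency of allele a; s: selection coefficient against aa
    (fitnesses: AA = Aa = 1, aa = 1 - s).
    """
    return q * (1 - s * q) / (1 - s * q * q)

# Starting from equal allele frequencies, moderate selection drives
# the recessive allele down rapidly over a handful of generations.
q = 0.5
for generation in range(20):
    q = next_q(q, s=0.5)
print(round(q, 3))
```

The s = 0.5 figure is arbitrary; the point is only that discrete, dominant/recessive inheritance gives selection the measurable power that the blending-inheritance calculations denied it.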

An unacceptable answer would be to say, "What definition of evolution are you using? Evolution makes organisms evolve! If what you're talking about doesn't lead to more complex organisms, then it isn't evolution."

Just saying "Organisms become more complex over time" is not a theory of evolution. It's more like an observation of evolution. A theory means you provide a mechanism and argue convincingly that it works. To get to a theory of CEV, you need to define what it's supposed to accomplish, propose a mechanism, and show that the mechanism might accomplish the purpose.

You don't have to get very far into this analysis to see why the answer you've given doesn't, IMHO, work. I'll try to post something later this afternoon on why.

Comment author: PhilGoetz 23 March 2010 03:12:40AM *  5 points [-]

I won't get around to posting that today, but I'll just add that I know that the intent of CEV is to solve the problems I'm complaining about. I know there are bullet points in the CEV document that say, "Renormalizing the dynamic", "Caring about volition," and, "Avoid hijacking the destiny of humankind."

But I also know that the CEV document says,

Since the output of the CEV is one of the major forces shaping the future, I'm still pondering the order-of-evaluation problem to prevent this from becoming an infinite recursion.

and

It may be hard to get CEV right - come up with an AI dynamic such that our volition, as defined, is what we intuitively want. The technical challenge may be too hard; the problems I'm still working out may be impossible or ill-defined. I don't intend to trust any design until I see that it works, and only to the extent I see that it works. Intentions are not always realized.

I think there is what you could call an order-of-execution problem, and I think there's a problem with things being ill-defined, and I think the desired outcome is logically impossible. I could be wrong. But since Eliezer worries that this could be the case, I find it strange that Eliezer's bulldogs are so sure that there are no such problems, and so quick to shoot down discussion of them.

Comment author: Vladimir_Nesov 21 March 2010 11:43:23PM -1 points [-]

Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.

You never learn.

Comment author: PhilGoetz 22 March 2010 03:40:19AM 4 points [-]

Folks. Vladimir's response is not acceptable in a rational debate. The fact that it currently has 3 points is an indictment of the Less Wrong community.

Comment author: JGWeissman 22 March 2010 03:57:12AM 5 points [-]

Normally I would agree, but he was responding to "Some degree of randomness is necessary". Seriously, you should know that isn't right.

Comment author: PhilGoetz 22 March 2010 07:19:15PM *  2 points [-]

That post is about a different issue. It's about whether introducing noise can help an optimization algorithm. Sounds similar; isn't. The difference is that the optimization algorithm already knows the function that it's trying to optimize.

The basic problem with CEV is that it requires reifying values in a strange way so that there are atomic "values" that can be isolated from an agent's physical and cognitive architecture; and that (I think) it assumes that we have already evolved to the point where we have discovered all of these values. You can make very general value statements, such as that you value diversity, or complexity. But a trilobite can't make any of those value statements. I think it's likely that there are even more important fundamental value statements to be made that we have not yet conceptualized; and CEV is designed from the ground up specifically to prevent such new values from being incorporated into the utility function.

The need for randomness is not because random is good; it's because, for the purpose of discovering better primitives (values) to create better utility functions, any utility function you can currently state is necessarily worse than random.

Comment author: JGWeissman 22 March 2010 08:08:09PM 4 points [-]

Since when is randomness required to explore the "landscape of possible worlds"? Or the possible values that we haven't considered? A methodical search would be better. How did you miss that lesson from Worse Than Random, when it included an example (the pushbutton combination lock) of exploring a space of potential solutions?
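The pushbutton-lock contrast is easy to make concrete (a hypothetical three-digit lock; the numbers here are mine, not from the linked post). Methodical enumeration never repeats a guess, so it has a hard worst-case bound; random guessing with replacement wastes trials on states it has already visited and has no worst-case bound at all:

```python
import itertools
import random

def methodical_trials(secret, digits=10, length=3):
    """Enumerate every combination exactly once; worst case = size of the space."""
    for trials, combo in enumerate(itertools.product(range(digits), repeat=length), 1):
        if combo == secret:
            return trials

def random_trials(secret, digits=10, length=3, seed=0):
    """Guess with replacement; expected trials equal the full size of the
    space, versus half the space on average for methodical search."""
    rng = random.Random(seed)
    trials = 0
    while True:
        trials += 1
        if tuple(rng.randrange(digits) for _ in range(length)) == secret:
            return trials

print(methodical_trials((4, 2, 7)))  # 428: found without ever repeating a guess
```

The same asymmetry holds for exploring any space of possibilities, including a "landscape of possible worlds": randomness only looks necessary if you have no bookkeeping.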

Comment author: PhilGoetz 21 March 2010 10:37:48PM *  0 points [-]

There are several grounds for criticism here. Criticizing CEV by saying, "I think CEV will lead to good dogs, because that's what a lot of people would like," sounds valid to me, but would merit more argumentation (on both sides).

Another problem I mentioned is a possibly fundamental problem with CEV. Is it legitimate to say that CEV's assumption that reasoned extrapolation trumps all existing values is not the same as asserting that reason is the primary value? You could argue that reason is just an engine in service of some other value. There's some evidence that that actually works, as demonstrated by the theologians of the Roman Catholic Church, who have a long history of using reason to defeat reason. But I'm not convinced that makes sense. If it doesn't, then CEV already assumes from the start exactly the kind of value that its entire purpose is to prevent being assumed.

Third, most human values, like dog-values, are either neutral with respect to rationality or threatened by it. The dog itself needs to not be much more rational or intelligent than it is.

The only solution is to say that the rationality and the values are in the FAI sysop, while the conscious locus of the values is in the humans. That is, the sysop gets smarter and smarter, with dog-values as its value system. It knows that to get the experiential value out of dog-values, the conscious experiencer needs limited cognition; but that's okay, because the humans are the designated experiencers, while the FAI is the designated thinker and keeper-of-the-values.

There are two big problems with this.

  1. By keeping the locus of consciousness out of the sysop, we're steering dangerously close to one of the worst-possible-of-all-worlds, which is building a singleton that, one way or the other, eventually ends up using most of the universe's computational energy, yet is not itself conscious. That's a waste of a universe.

  2. Value systems are deictic, meaning they use the word "I" a lot. To interpret their meaning, you fill in the "I" with the identity of the reasoning agent. The sysop literally can't have human values if it doesn't have deictic values; and if it has deictic values, they're not going to stay doglike under extrapolation. (You could possibly get around this by using a non-deictic representation, and saying that the values have meaning only when seen in light of the combined sysop+humans system. Like the knowledge of Chinese in Searle's Chinese room.)

The FAI document says it's important to use non-deictic representations in the AI. Aside from the fact that this is probably impossible - cognition is compression, and deictic representations are much more compact, so any intelligence is going to end up using something equivalent to deictic representations - I don't know if it's meaningful to talk about non-deictic values. That would be like saying "I value the taste of chocolate" without saying who is tasting the chocolate. (That's one entry-point into paperclipping scenarios.)

The final, biggest problem illustrated by dog-values is that it's just not sensible to preserve "human values", when human values, even those found within the same person at different times of life, are as different as it is possible for values to be different. Sure, maybe we would have different values if we could see in the ultraviolet, or had seven sexes; but there is just no bigger difference between values than "valuing states of the external world", and "valuing phenomenal perceptions within my head." And there are already humans committed to each of those two fundamental value systems.

Comment author: simplicio 21 March 2010 11:31:19PM 0 points [-]

A Christian just wants to be a good dog. They've found a way to reach that same blissful state themselves.

The materialistic worldview really is gloomy compared to being a dog.

You have a point here. But as you mentioned, we aren't really capable of such a state, nor would it be virtuous to chase after one.

You guys have totally lost me with this AI stuff. I guess there's probably a sequence on it somewhere...

Comment author: EphemeralNight 13 May 2012 04:33:33PM *  7 points [-]

I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte," they would earnestly ask me what on earth the purpose of all the little dots was.

... which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme.

It occurred to me upon reading this, that perhaps your analogy about the painting is overlooking something important.

In the case of a beautiful painting, if you examine the chain of causality that led to its existence, you will find within that chain, a material system that is the mind and being of the painter. In the case of a rainbow, or an aurora, which, like the painting, is aesthetically pleasing for a human to look upon, the chain of causality that led to its existence does not contain anything resembling our definition of a mind.

In both cases, there exists a real thing, a thing with a reductionist explanation. In both cases a human is likely to be aesthetically pleased by looking at that thing. And, I suspect, in both cases a human's social instincts create a positive emotional response to not just the perceived beauty but to the mind responsible for the existence of said beauty. A human's Map would be marked by that emotional connection, but of course, only in the former case is there actually a mind anywhere in the Territory to correspond to that marking.

It seems possible, even likely, that most of the disappointment you describe is not at the existence of an explanation, but at the fact that the explanation requires the severing of that emotional connection, the erasing from our Map of that which is most important to us: other minds. We want to find/meet/see/understand/etc. the mind that caused our feeling of aesthetic pleasure, and it hurts when we first understand that there is no mind to find.

That is what I suspect, at least.

Comment author: Friendly-HI 19 May 2012 09:39:23PM 0 points [-]

Brilliant train of thought; there may very well be something to this idea.

I used the painting analogy myself in debating anti-materialists, but I could always see how that analogy didn't really satisfy them the way it satisfied me, and you've possibly given a valuable clue why.

Comment author: Academian 21 March 2010 04:11:46PM 12 points [-]

What I liked in a nutshell:

What would you prefer to be made of, if not matter?

On behalf of chemicals everywhere, I say: Screw you!

If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.

Comment author: BenAlbahari 21 March 2010 02:20:27PM 17 points [-]

You only included the last sentence of Dawkins' quote. Here's the full quote:

The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.

The universe is perverse. You have to learn to love it in spite of that.

Comment author: Vladimir_Nesov 22 March 2010 12:44:18PM 15 points [-]

The universe is perverse. You have to learn to love it in spite of that.

What? Why would you love the indifferent universe? It has to be transformed.

Comment author: Nisan 23 March 2010 02:03:18AM *  7 points [-]

Right. Materialism tells us that we're probably going to die and it's not going to be okay; the right way to feel good about it is to do something about it.

Comment author: BenAlbahari 22 March 2010 01:04:00PM 2 points [-]

My attitude is easier to transform than the universe's attitude.

Comment author: Vladimir_Nesov 22 March 2010 01:14:07PM *  10 points [-]

Maybe easier, but is it the right thing to do? Obvious analogy is wireheading. See also: Morality as Fixed Computation.

Comment author: Nick_Tarleton 22 March 2010 01:28:45PM *  3 points [-]

Emotions ≠ preferences. It may be that something in the vague category "loving the universe" is (maybe depending on your personality) a winning attitude (or more winning than many people's existing attitudes) regardless of your morality. (Of course, yes, in changing your attitude you would have to be careful not to delude yourself about your preferences, and most people advocating changing your attitude don't seem to clearly make the distinction.)

Comment author: Vladimir_Nesov 22 March 2010 01:35:49PM *  6 points [-]

I certainly make that distinction. But it seems to me that "loving" the current wasteland is not an appropriate emotion. Wireheading is wrong not only when/because you stop caring about other things.

Comment author: Nick_Tarleton 22 March 2010 02:37:20PM *  9 points [-]

But it seems to me that "loving" the current wasteland is not an appropriate emotion.

Granted. It seems to me that the kernel of truth in the original statement is something like "you are not obligated to be depressed that the universe poorly satisfies your preferences", which (ISTM) some people do need to be told.

Comment author: SoullessAutomaton 23 March 2010 02:56:32AM *  8 points [-]

Since when has being "good enough" been a prerequisite for loving something (or someone)? In this world, that's a quick route to a dismal life indeed.

There's the old saying in the USA: "My country, right or wrong; if right, to be kept right; and if wrong, to be set right." The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken--the danger lies not in the emotion, but in failing to heal the damage. It may be a crapsack universe out there, but it's still our sack of crap.

By all means, don't look away from the tragedies of the world. Figuratively, you can rage at the void and twist the universe to your will, or you can sit the universe down and stage a loving intervention. The main difference between the two, however, is how you feel about the process; the universe, for better or worse, really isn't going to notice.

Comment author: MichaelVassar 21 March 2010 10:14:32PM 5 points [-]

The amount of pain in nature is immense. Suffering? I'm not so sure. That's a technical question, even if we don't yet know how to ask the right question. A black widow male is certainly in pain as it's eaten, but is very likely not suffering. Many times each day I notice that I have been in pain that I was unaware of. The Continental Philosophy and Women's Studies traditions concern themselves with suffering that people aren't aware of, but don't suggest that such suffering comes in varieties that many animals could plausibly experience.

Comment author: BenAlbahari 22 March 2010 01:26:17AM *  4 points [-]

This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11. Perhaps you don't notice the pain because it's relatively minor. I'm assuming you didn't have your leg chewed off.

Comment author: orthonormal 22 March 2010 02:37:43AM *  6 points [-]

This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11.

In some people, perhaps that is the reasoning; but there really is more to this discussion than anthropocentrism.

Suffering as we experience it is actually a very complicated brain activity, and it's virtually certain that the real essence of it is in the brain structure rather than the neurotransmitters or other correlates. AFAIK, the full circuitry of the pain center is common to mammals, but not to birds (I could be wrong), fish, or insects. Similar neurotransmitters to ours might be released when a bug finds itself wounded, and its brain might send the impulse to writhe and struggle, but these are not the essence of suffering.

(Similarly, dopamine started out as the trigger for reinforcing connections in very simple brains, as a feedback mechanism for actions that led to success which makes them more likely to execute next time. It's because of that role that it got co-opted in the vast pleasure/reward/memory complexes in the mammalian brain. So I don't see the release of dopamine in a 1000-neuron brain to be an indication that pleasure is being experienced there.)

Comment author: BenAlbahari 22 March 2010 03:32:48AM 4 points [-]

I agree with your points on pain and suffering; more about that on a former Less Wrong post here.

However, reducing the ocean of suffering still leaves you with an ocean. And that suffering is in every sense of the word perverse. If you were constructing a utopia, your first thought would hardly be "well, let's get these animals fighting and eating each other". Anyone looking at your design would exclaim: "What kind of perverse utopia is that?! Are you sick?!". Now, it may be the case that you could give a sophisticated explanation as to why that suffering was necessary, but it doesn't change the fact that your utopia is perverted. My point is we have to accept the perversion. And denying perversion is simply more perversion.

Comment author: MichaelVassar 24 March 2010 07:48:53PM 3 points [-]

To specify a particular theory, my guess is that suffering is an evolved elaboration on pain unique to social mammals or possibly shared by social organisms of all sorts. It seems likely to me to basically mediate an exchange of long-term status for help from group members now.

Comment author: BenAlbahari 25 March 2010 02:56:01AM 3 points [-]

Perhaps: pain is near-mode; suffering is far-mode. Scenario: my leg is getting chewed off.

Near-mode thinking: direct all attention to attempt to remove the immediate source of pain / fight or flight / (instinctive) scream for attention

Far-mode thinking: reevaluate the longer-term life and social consequences of having my leg chewed off / dwell on the problem in the abstract

Comment author: orthonormal 22 March 2010 03:38:15AM 1 point [-]

I agree with this point, and I'd bet karma at better than even odds that so does Michael Vassar.

Comment author: MichaelVassar 22 March 2010 05:04:55PM 3 points [-]

I agree, but I wonder if my confidence in my extrapolation agreeing is greater or less than your confidence in my agreeing was. I tend to claim very much greater than typical agnosticism about the subjective nature of nearby (in an absolute sense) mind-space. I bet a superintelligence could remove my leg without my noticing and I'm curious as to the general layout of the space of ways in which it could remove my leg and have me scream and express horror or agony at my leg's loss without my noticing.

I really do think that at a best guess, according to my extrapolated values, human suffering outweighs that of the rest of the biosphere, most likely by a large ratio (best guess might be between one and two orders of magnitude). Much more importantly, at a best guess, human 'unachieved but reasonably achievable without superintelligence flourishing' outweighs the animal analog by many orders of magnitude, and if the two can be put on a common scale I wouldn't be surprised if the former is a MUCH bigger problem than suffering. I also wouldn't be shocked if the majority of total suffering in basically Earth-like worlds (and thus the largest source of expected suffering given our epistemic state) comes from something utterly stupid, such as people happening to take up the factory farming of some species which happens, for no particularly good reason, to be freakishly capable of suffering. Sensitivity to long tails tends to be a dominant feature of serious expected utility calculus given my current set of heuristics. The modal dis-value I might put on a pig living its life in a factory farm is under half the median, which is under half the mean.

Comment author: Nick_Tarleton 24 March 2010 08:03:22PM *  4 points [-]

This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11.

That's surely a common reason, but are you sure you're not letting morally loaded annoyance at that phenomenon prejudice you against the proposition?

The cognitive differences between a human and a cow or a spider go far beyond "kinda", and, AFAIK, nobody really knows what "suffering" (in the sense we assign disutility to) is. Shared confusion creates room for reasonable disagreement over best guesses (though possibly not reasonable disagreement over how confused we are).

(See also.)

Comment author: Morendil 22 March 2010 07:14:24AM *  4 points [-]

It doesn't take much near-thinking to draw a distinction between "signals to our brain that are indicative of damage inflicted to a body part" on the one hand, and "the realization that major portions of our life plans have to be scrapped in consequence of damaged body parts" on the other. The former only requires a nervous system, the latter requires the sort of nervous system that makes and cares about plans.

Comment author: BenAlbahari 22 March 2010 10:01:41AM *  10 points [-]

Yes, but that assumes this difference is favorable to your hypothesis. David Foster Wallace from "Consider The Lobster":

Lobsters do not, on the other hand, appear to have the equipment for making or absorbing natural opioids like endorphins and enkephalins, which are what more advanced nervous systems use to try to handle intense pain. From this fact, though, one could conclude either that lobsters are maybe even more vulnerable to pain, since they lack mammalian nervous systems’ built-in analgesia, or, instead, that the absence of natural opioids implies an absence of the really intense pain-sensations that natural opioids are designed to mitigate. I for one can detect a marked upswing in mood as I contemplate this latter possibility...

The entire article is here and that particular passage is here. And later:

Still, after all the abstract intellection, there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience. To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.

Comment author: Morendil 22 March 2010 11:15:38AM 7 points [-]

In this last paragraph (which btw is immediately preceded, in the article, by an observation strikingly similar to mine in the grandparent), I would argue that "frantically" and "pathetic" are projections: the emotions they refer to originate in the viewer's mind, not in the lobster's.

We are demonstrably equipped with mental mechanisms whereby we can observe behaviour in others, and as a result of such observations we can experience "ascribed emotions", which can sometimes take on an intensity not far removed from the sensations that originate in ourselves. That's where our intuition that the lobster is in pain comes from.

Later in the article, the author argues that lobsters "are known to exhibit preferences". Well, plants are known to exhibit preferences; they will, for instance, move so as to face the sun. We do not infer that plants can experience suffering.

We could build a robot today that would sense aspects of its surroundings such as elevated temperature, and we could program that robot to give a higher priority to its "get the hell away from here" program when such conditions obtained. We would then be in a position to observe the robot doing the same thing as the lobster; we would, quite possibly, experience empathy with the robot. But we would not, I think, conclude that it is morally wrong to put the robot in boiling water. We would say that's a mistake, because we have not built into the robot the degree of personhood that would warrant such a conclusion.
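The robot in this thought experiment is easy to make literal (a toy sketch; the threshold and names are mine):

```python
def choose_action(sensors):
    """Priority-based controller for the hypothetical robot.

    When temperature crosses a threshold, the "get the hell away from
    here" behavior preempts whatever else the robot was doing. The robot
    thereby "exhibits a preference" in exactly the behavioral sense the
    lobster does, with nothing resembling suffering anywhere in the loop.
    """
    ESCAPE_TEMP_C = 45.0  # arbitrary threshold chosen for this sketch
    if sensors["temperature_c"] > ESCAPE_TEMP_C:
        return "flee_heat_source"
    return "continue_current_task"

print(choose_action({"temperature_c": 100.0}))  # flee_heat_source
```

A threshold and a branch are all it takes to produce the observable behavior; whatever suffering is, it must be something more than this.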

Comment author: RobinZ 22 March 2010 11:27:32AM 4 points [-]

cf. "The Soul of the Mark III Beast", Terrel Miedaner, included in The Mind's I, Dennett & Hofstadter.

Comment author: JenniferRM 22 March 2010 03:34:12PM 2 points [-]

Trust this community to connect the idea to the reference so quickly. "In Hofstadter we trust" :-)

For those who are not helped by the citation, it turns out that someone thoughtfully posted the relevant quote from the book on their website. I recommend reading it; the story is philosophically interesting and emotionally compelling.

Comment author: Tyrrell_McAllister 22 March 2010 04:27:12PM *  5 points [-]

The story was also dramatized in a segment of the movie Victim of the Brain, which is available in its entirety from Google Video. The relevant part begins at around 8:40.

Here is the description of the movie:

1988 docudrama about "the ideas of Douglas Hofstadter". It was created by Dutch director Piet Hoenderdos. Features interviews with Douglas Hofstadter and Dan Dennett. Dennett also stars as himself. Original acquired from the Center for Research in Concepts and Cognition at Indiana University. Uploaded with permission from Douglas Hofstadter. Uploaded by Virgil Griffith.

Comment author: JenniferRM 22 March 2010 05:15:42PM *  3 points [-]

That was fascinating. A lot of the point of the story - the implicit claim - was that you'd feel for an entity based on the way its appearance and behavior connected to your sympathy - like crying sounds eliciting pity.

In text that's not so hard, because you can write things like "a shrill noise like a cry of fright" when the simple robot dodges a hammer. The text used to explain the sound is automatically loaded with mental assumptions about "fright", simply to convey the sound to the reader.

With video the challenge seems like it would be much harder. It becomes more possible that people would feel nothing for some reason. Perhaps for technical reasons of video quality or bad acting, or for reasons more specific to the viewer (desensitized to video violence?), or maybe because the implicit theory about how mind-attribution is elicited is simply false.

Watching it turned out to be interesting on more levels than I'd have thought, because I did feel things, but I also noticed the visual tropes that are equivalent to mind-laden text... like music playing as the robot (off camera) cries and the camera slowly pans over the wreckage of previously destroyed robots.

Also, I thought it was interesting the way they switched the roles for the naive mysterian and the philosopher of mind, with the mysterian being played by a man and the philosopher being played by a woman... with her hair pinned up, scary eye shadow, and black stockings.

"She's a witch! Burn her!"

Comment author: khafra 22 March 2010 02:31:12PM 2 points [-]

Some Jains and Buddhists infer that plants can experience suffering. The stricter Jain diet avoids vegetables that are harvested by killing plants, like carrots and potatoes, in favor of fruits and grains that come voluntarily or from already-dead plants.

Comment author: Morendil 22 March 2010 03:12:21PM 2 points [-]

That's a preference of theirs; fine by me, but not obviously evidence-based.

Comment author: khafra 22 March 2010 03:27:13PM 3 points [-]

I don't mean to suggest that plants are clearly sentient, just that it's plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.

Comment author: Morendil 22 March 2010 04:35:45PM 2 points [-]

I'd agree with that sentence if you replaced the word "suffering", unsuitable because of its complex connotations, with "killing", which seems adequate to capture the Jains' intuitions as represented in the link above.

Comment author: Morendil 21 March 2010 09:54:13AM 17 points [-]

tuning your ears to the words “just” and “merely.”

Indeed! See also this classic essay by Jerry Weinberg on Lullaby Words. "Just" is one of them, can you think of others before reading the essay? ;)

Comment author: RichardKennaway 21 March 2010 09:51:05PM *  7 points [-]

"Fundamentally" and all of its near-synonyms: "really", "essentially", "at bottom", "actually", etc.

Usually, these mean "not". ("How was that party you went to last night?" "Oh, it was all right really.") ("Yes, I kidnapped you and chained you in my basement, but fundamentally, underneath it all, I'm essentially a nice guy.")

Comment author: Morendil 22 March 2010 06:52:38AM 3 points [-]

Good one.

On a related note, I often find myself starting a sentence with "The fundamental issue" - and when I catch myself and ask if what I'm talking about is the single issue that in fact underlies all others, and answer myself "no" - then I revise the sentence to something like "One important issue"... Here the lullaby is in two parts: a) everything is less important than this thing, and b) there is only this one thing to care about. It's rarely the case that either is true, let alone both.

Comment author: CronoDAS 21 March 2010 07:06:27PM *  5 points [-]

In mathematics, "obvious" is one of those words. It tends to mean "something I don't know how to justify."

Comment author: PeteSchult 22 March 2010 03:03:08AM 4 points [-]

A joke along these lines has the math professor claiming that the proof of some statement is trivial. They pause for a moment, think, then leave the classroom. Half an hour later, they come back and say, "Yes, it was trivial."

Comment author: RobinZ 22 March 2010 03:07:46AM 4 points [-]

I heard about a professor (I think physics) who was always telling his students that various propositions were "simple", despite the fact that the students always struggled to show them. Eventually, the students went to the TA (the one I heard the story from), who told the professor.

So, the next class the professor said, "I have heard that the students do not want me to say 'simple'. I will no longer do so. Now, this proposition is straightforward..."

Comment author: SoullessAutomaton 23 March 2010 03:04:51AM *  11 points [-]

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, "And therefore such-and-such is true."

"Why is that?" the guy on the couch asks.

"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that; then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so..." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."

We physicists were laughing, trying to figure them out. We decided that "trivial" means "proved." So we joked with the mathematicians: "We have a new theorem -- that mathematicians can prove only trivial theorems, because every theorem that's proved is trivial."

The mathematicians didn't like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.

-- Surely you're joking, Mr. Feynman!

Comment author: nhamann 21 March 2010 07:20:54PM 3 points [-]

Most of the time I've run into the word "obviously" is in the middle of a proof in some textbook, and my understanding of the word in that context is that it means "the justification of this claim is trivial to see, and spelling it out would be too tedious/would disrupt the flow of the proof."

Comment author: SoullessAutomaton 21 March 2010 09:19:55PM 8 points [-]

I thought the mathematical terms went something like this:

  • Trivial: Any statement that has been proven
  • Obviously correct: A trivial statement whose proof is too lengthy to include in context
  • Obviously incorrect: A trivial statement whose proof relies on an axiom the writer dislikes
  • Left as an exercise for the reader: A trivial statement whose proof is both lengthy and very difficult
  • Interesting: Unproven, despite many attempts

Comment author: CronoDAS 21 March 2010 07:35:19PM *  3 points [-]

Well, that's what it's supposed to mean. One of my professors (who often waxed sarcastic during lectures) described it as a very dangerous word...

Comment author: kpreid 21 March 2010 08:03:02PM 2 points [-]

Do you really assert that it is more often used incorrectly (that the fact is not actually obvious)?

Comment author: wnoise 23 March 2010 01:22:53AM 8 points [-]

I assert that it ("obviously" in math) is most often used correctly, but that people spend more time experiencing it used incorrectly -- because they spend more time thinking about it when it is not obvious.

Comment author: CronoDAS 21 March 2010 08:11:33PM 3 points [-]

No, I guess not.

Comment author: CronoDAS 22 March 2010 12:14:54AM *  1 point [-]
Comment author: NancyLebovitz 21 March 2010 12:32:45PM 3 points [-]

Voted up because that's an excellent link.

Comment author: Document 27 September 2012 07:59:43PM 1 point [-]
Comment author: PeteSchult 22 March 2010 02:44:46AM *  4 points [-]

On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?

As Monsanto (and some of my user friends :-) ) tells us, "Without chemicals, life itself would be impossible."

More seriously, this post voiced some of the things I've been thinking about lately. It's not that it doesn't all reduce to physics in the end, but the reduction is complicated and probably non-linear, so you have to look at things in a given domain according to the empirically based rules for that domain. Even in chemistry (at least beyond the hydrogen atom, if things are the same as when I was in high school back in the Pleistocene), the reduction to physics is not entirely practical, so chemists develop higher level theories about chemicals rather than lower level "machine language" theories.

Comment author: orthonormal 22 March 2010 02:54:46AM 1 point [-]

Let me be the first to say, Welcome to Less Wrong!

You're quite right, and your comment touches on some of the topics of the reductionism sequence here, in particular the eponymous post.

Comment author: CytokineStorm 21 March 2010 11:40:35PM *  4 points [-]

So facts can fester because you only allow yourself to judge them by their truthfulness, even though your actual relation with them is of a nonfactual nature.

One I had problems with: Humans are animals. It's true, isn't it?! But it only bothers people because of its stereotypical subtext: "Humans are like animals: mindless, violent and dirty."

Festering facts?

Comment author: orthonormal 22 March 2010 12:15:11AM 2 points [-]

Ah yes, it's time to dust off YSITTBIDWTCIYSTEIWEWITTAW again. Er, make that ADBOC.

Comment author: CytokineStorm 22 March 2010 12:19:17AM 0 points [-]

Oops, sorry about that.

Comment author: wedrifid 22 March 2010 02:31:15AM 1 point [-]

"Humans are like animals: mindless, violent and dirty."

Well... that's got a significant element of truth to it too, but I need not be bothered about that either.

Comment author: [deleted] 13 May 2012 04:47:55PM 8 points [-]

"Love is Wonderful biochemistry."

"Rainbows are Wonderful refraction phenomena"

"Morality is a Wonderful expression of preference"

And so on. Let's go out and replace 'just' and 'merely' with 'wonderful' and assorted terms. Let's sneak Awesomeness into reductionism.

Comment author: EphemeralNight 24 June 2012 09:52:18AM 1 point [-]

This may be the wrong tack. As I pointed out above, I think it likely that the problem lies not in the nature of the phenomenon but in the way a person relates to the phenomenon emotionally. Particularly, that for natural accidents like rainbows, most people simply can't relate emotionally to the physics of light refraction, even if they sort of understand it.

So, I think a more effective tack would be to focus on the experience of seeing the rainbow, rather than the rainbow itself, because if a person is focusing on the rainbow itself, then they inevitably will be disappointed by the reductionist explanation supplanting their instinctive sense of there being something ontologically mental behind the rainbow.

Because, however you word it, the rainbow is just a refraction phenomenon, but when you look at the rainbow and experience the sight of the rainbow there are lots of really awesome things happening in your own brain that are way more interesting than the rainbow by itself is.

I think trying to assign words like "just" or "wonderful" to physical processes that cause rainbows is an example of the Mind Projection Fallacy. So, let's not try to get people excited about what makes the rainbow. Let's try to get people excited about what makes the enjoyment of seeing one.

Comment author: VKS 24 June 2012 04:44:44PM 0 points [-]

It may be true that saying these things will not get everybody to see the beauty we see in the mechanics of those various phenomena. But perhaps saying "Rainbows are wonderful refraction phenomena" can help get across that even if you know that the rainbows are refraction phenomena, you can still feel wonder at them in the same way as before. The wonder at their true nature can come later.

I guess what I'm getting at is the difference between "Love is wonderful biochemistry" and "Love is a wonderful consequence of biochemistry". The second, everybody can perceive. The first, less so.

Comment author: EphemeralNight 24 June 2012 11:45:43PM *  -1 points [-]

that even if you know that the rainbows are refraction phenomena, you can still feel wonder at them

This kind of touches my point. You're talking about two separate physical processes here, and I hold that the latter is the only one worth getting excited about. Or, at least, the only one worth trying to get laypeople excited about.

Comment author: VKS 25 June 2012 08:14:29AM *  0 points [-]

Eh, both phenomena are things we can reasonably get excited about. I don't see that there's much point in trying to declare one inherently cooler than the other. Different people get excited by different things.

I do see, though, that so long as they think that learning about either the cause of their wonder or the cause of the rainbows will steal the beauty from them, no progress will be made on any front. What I'm trying to say is that once that barrier is down, once they stop seeing science as the death of all magic (so to speak), then progress is much easier. Arguably, only then should you be asking yourself whether to explain to them how rainbows work or why one feels wonder when one looks at them.

Comment author: EphemeralNight 03 July 2012 11:10:09AM -1 points [-]

Okay, maybe we need to taboo "excited".

I do see, though, that so long as they think that learning about either the cause of their wonder or the cause of the rainbows will steal the beauty from them, no progress will be made on any front.

This right here is at the crux of my point. I am predicting that, for your average neurotypical, explaining their wonder produces significantly less feeling of stolen beauty than explaining the rainbow. Because, in the former case, you're explaining something mental, whereas in the latter case, you're explaining something mental away.

The rainbow may still be there, but its status as a Mentally-Caused Thing is not.

Comment author: VKS 03 July 2012 01:10:18PM *  0 points [-]

If people react badly to having somebody explain how their love works, what makes you think that things will go better with wonder?

And, in a different mental thread, I'm going to posit that really, what you talk about matters much less than how you talk about it, in this context. You can (hopefully) get the point across by demonstrating by example that wonder can survive (and even thrive) after some science. At least if, as I suspect, people can perceive wonder through empathy. So, if you feel wonder, feel it obviously and try to get them to do so also. And just select whatever you feel the most wonder at.

Less dubiously, presentation is fairly important to making things engaging. Now, I would guess that the more familiar you are with a subject, the easier it becomes to make it engaging. So select whether you explain the rainbow or the wonder of rainbows based on that.

Maybe.

I'm speculating.

Comment author: [deleted] 24 June 2012 12:28:20PM 0 points [-]

That is an interesting analysis. I think I might view "just" and "wonderful" more like physically null words, so as to say they do not have any meaning beyond interpretation.

I guess I am just too rational to interact with normal people's psychology purely by typical-mindedness.

Comment deleted 24 June 2012 01:27:18PM *  [-]
Comment author: [deleted] 24 June 2012 02:01:23PM -1 points [-]

You are misunderstanding the purposes of this discussion.

I don't have any problems; I can hardly see anything as not beautiful, even without maths.

But normalfolk are not so fortunate. How do we trick them into thinking that reductionism is cool?

Comment author: [deleted] 13 May 2012 07:38:42PM 0 points [-]

This reminds me of something, though I can't remember for sure which something it was.

Comment author: MarkusRamikin 13 May 2012 05:30:35PM 0 points [-]

Didn't know we were into affirmations around here. I'm gonna need me some pepto...

Comment author: [deleted] 13 May 2012 06:46:05PM 0 points [-]

I am big on simple tricks to raise the sanity waterline.

Comment author: haig 24 March 2010 09:18:58PM 8 points [-]

In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation: fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions or spiritual ideologies is some sort of preservation of 'soul', whether it be a fully platonic heaven like the Christian belief, a more material resurrection like the Jewish idea, or more abstract ideas found in Eastern and New Age ideologies. The root of spiritual, 'spirit', is a non-corporeal substance/entity whose main purpose is to contrast itself with the material body. Spirit is that which is not material and so can survive the decay of the material pattern.

In my opinion, THIS IS the hard pill to swallow.

Comment author: SoullessAutomaton 21 March 2010 06:47:04PM 7 points [-]

It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!

I recall studies showing that major positive/negative events in people's lives don't really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.

Comment author: ktismael 23 March 2010 02:35:00PM 4 points [-]

I recall reading (in one of Tyler Cowen's books, I think) that happiness is highly correlated with capacity for self-deception. In this case, positive/negative events would have little impact, not necessarily because people accepted them, but because the human brain is a highly efficient self-deception machine.

Similarly, a tendency toward depression correlated with an ability to make more realistic predictions about one's life. So I think it may in fact be a particular aspect of human psychology that encourages self-deception and responds negatively to reality.

None of this is to say that these effects can't be reduced or eliminated through various mental techniques, but I don't think it's sufficient to just assert it as cultural.

Comment author: CronoDAS 22 March 2010 12:19:01AM 0 points [-]

It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!

That's a pretty good line!

Comment author: alexflint 22 March 2010 07:43:27AM *  3 points [-]

Bravo for an excellent post!

The one point I want to make is that gloominess is our natural emotional response to many reductionist truths. It is difficult not to see a baseless morality in evolution, hard not to feel worthless before the cosmos, challenging not to perceive meaninglessness in chemical neurology. Perhaps realising the fallacies of these emotional conclusions must necessarily come after the reductionist realisations themselves.

Comment author: Eliezer_Yudkowsky 22 March 2010 07:48:52AM 7 points [-]

I'd still deny this. You need the right (wrong) fallacies to jump to those conclusions. Maybe the fallacies are easy to invent, or maybe our civilization ubiquitously primes people with them, but it still takes an extra and mistaken step.

Comment author: byrnema 22 March 2010 12:01:09PM 7 points [-]

I would call it a conundrum, rather than a fallacy. If my terminal values are impossible to satisfy in a materialistic world, then I'm just out of luck, not factually wrong.

Comment author: BenAlbahari 22 March 2010 08:45:51AM *  1 point [-]

What if the priming is developmental? I wonder if there are any parents out there who have tried to bring up their kids with rational beliefs. E.g., no lies about "bunny heaven"; instead, take the kid on a field-trip to a slaughterhouse. And if so, how did it affect how well adjusted the kids were?

Comment author: NancyLebovitz 22 March 2010 10:46:51AM 5 points [-]

Insulating children from death is a relatively modern behavior.

For a long time, most people grew up around killing animals for food, and there was still religion.

Comment author: Strange7 22 March 2010 09:02:27AM 2 points [-]

For this to really work, I think it would require more cultural support than just one set of parents. Maybe something like the school system and interactive history museum Sachisuke wrote about?

Comment author: Strange7 08 July 2011 04:00:02AM 0 points [-]

Of the people who voted this up, I am curious: How much of Sachisuke Masamura's work have you read? PM me.

Comment author: alexflint 22 March 2010 08:25:38AM 1 point [-]

I agree. I think it is the particularities of human psychology that lead people to such conclusions. The gloomy conclusions are in no way inherent in the premises.

Comment author: Nick_Tarleton 22 March 2010 03:10:33PM 3 points [-]

I think Eliezer is claiming that human psychology does not lead to those conclusions; culturally transmitted errors are required.

Comment author: Psychohistorian 21 March 2010 08:19:35PM *  5 points [-]

We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.

Your interpretation of this is overly charitable. The analogy to the apple tree makes it basically teleological; as apples define an apple tree, people define the earth. This phrasing implies a sort of purpose, importance (how important are apples to an apple tree?) and moral approval. Also, "We are not born into this world" is a false statement. And the process by which the earth generates people is pretty much nothing like the way in which an apple tree produces apples.

Comment author: CronoDAS 22 March 2010 12:26:20AM 0 points [-]

And the process by which the earth generates people is pretty much nothing like the way in which the earth generates people.

I think you misspoke there...

Comment author: Psychohistorian 22 March 2010 12:43:50AM 0 points [-]

Touche. Fixed.

Comment author: orthonormal 21 March 2010 06:07:59PM 5 points [-]

We're evolutionarily optimized for the savannah, not for the stars. It doesn't seem to me that our present selves are really as capable of being effortlessly content with our worldview as some of our forebears were, because we have some lingering Wrong Questions and wrong expectations written into our minds. Some part of us really wants to see agency in the basic causal framework of our lives, as much as we know this isn't so.

Now that's not a final prescription for hopelessness, because we can hope not to be running on the same bug-riddled brainware for our entire existence, and because there do exist ways to make the universe much more interesting than it presently is.

But it does mean that it's not a moral failing to be disillusioned with the world, now and then, in a way that our religious next-door neighbor isn't. Taking it to an extreme can signify a lack of understanding and imagination, but some amount of it may well be proper for now.

Comment author: Sniffnoy 21 March 2010 09:20:14PM 0 points [-]

I think you have an extra negation in the first sentence of your last paragraph?

Comment author: orthonormal 21 March 2010 09:26:29PM 0 points [-]

No, I think it's right as written. Our religious next-door neighbor may not feel disillusioned, and we might, and this is not necessarily a moral failing in us.

Comment author: Sniffnoy 21 March 2010 09:43:49PM 0 points [-]

Oh, whoops. I accidentally read the "does" as a "doesn't", reading the extra negation right into there...

Comment author: AlanCrowe 21 March 2010 10:15:52PM 6 points [-]

If we are nothing but matter in motion, mere chemicals, then there are only molecules drunkenly bumping into each other and physicists are superstitious fools for believing in the macroscopic variables of thermodynamics such as temperature and pressure.

I find the philosophical position of "nothing buttery" silly because, in the name of materialist reductionism, it asks us to give up thermodynamics. It is indeed an example of perverse mindedness.

Comment author: simplicio 21 March 2010 11:22:47PM 5 points [-]

Not sure what you're arguing against here. Temperature and pressure are explicable in terms of "molecules drunkenly bumping into each other." Or have I misunderstood?

Comment author: AlanCrowe 22 March 2010 01:25:28AM *  5 points [-]

The "Nothing But" argument claims that the things explained by materialistic reduction are explained away. In particular, the "Nothing But" argument claims that materialistic reduction, by explaining love and morality and meaning thereby explains them away, destroying them.

The flaw I see in the "Nothing But" argument is that materialistic reduction also explains temperature and pressure. If to explain is necessarily to explain away, then the "Nothing But" argument is not merely claiming that materialistic reduction is trashing love and beauty; it is also claiming that materialistic reduction is trashing temperature and pressure. That is a silly claim and shows that there must be something wrong with the "Nothing But" argument.

I think that there is a socially constructed blind spot around this point. People see that the "Nothing But" argument is claiming that materialistic reduction destroys love, beauty, temperature, and pressure. However claiming that materialistic reduction destroys temperature and pressure is silly. If you acknowledge the point then the "Nothing But" argument is obviously silly, which leaves nothing to discuss, and this is blunt to the point of rudeness. So, for social reasons, we drop the last two and let the "Nothing But" argument make the more modest claim that materialistic reduction destroys love and beauty. Then we can get on with our Arts versus Science bun fight.

In brief, I'm agreeing with you. I just wanted to add a striking example of a meaning above the base level of atoms and molecules. You do not have to look at a pointillist painting to experience the reality of something above the base level. It is enough to breathe on your hand and feel the pressure exerted by the warm air.

Comment author: simplicio 22 March 2010 01:31:07AM 0 points [-]

Oh, I see, sorry for the misunderstanding.

I think that there is a socially constructed blind spot around this point. People see that the "Nothing But" argument is claiming that materialistic reduction destroys love, beauty, temperature, and pressure. However claiming that materialistic reduction destroys temperature and pressure is silly.

Yes! Excellent point. I'm not even sure what "explaining away" means, for that matter. It seems to be another one of these notions that comes with a value judgment dangling from it.

Comment author: JGWeissman 22 March 2010 02:06:18AM 4 points [-]

I'm not even sure what "explaining away" means

http://lesswrong.com/lw/oo/explaining_vs_explaining_away/

Comment author: ktismael 23 March 2010 02:55:18PM 2 points [-]

It has been a while since I've read Watts, but I suspect you're misreading his attitude here. In essence the Buddhist (particularly the Zen Buddhist) attitude toward reality is very similar to the materialist view which you endorse. That is, that reality exists, and our opinions about it should be recognized as illusory. This can be confused for nihilism or despair, but really is distinct. Take the universe as it is, and experience it directly, without allowing your expectations of how it should be to affect that experience.

Perhaps he doesn't share this view (though given his background it's hard to believe he wouldn't) although without further context it is difficult to judge from just that quote.

Certainly you can argue about reincarnation and divinity and other aspects of Watts philosophy that you find irrational or dogmatic. But on this individual case you bring up, I suspect he shares your view, and I think you (OP) are projecting these views based on assuming that someone recognizing human life is natural in the same way as vegetable life must consider that a bad thing. But to quote the inscrutable philosophy behind this, that is "perfect in its suchness".

Comment author: simplicio 23 March 2010 03:53:16PM *  3 points [-]

I am rather fond of Watts, having read many of his books & listened to his lectures as a youngster. He seems to vacillate between accepting the scientific worldview and inserting metaphysical claims about consciousness as a fundamental phenomenon (as well as other weird claims). For instance, you can find in "The Book on the Taboo..." a wonderful passage about life as "tubes" with an input and an output, playing a huge game of one-upmanship; "this all seems wonderfully pointless," he says, "but after a while it seems more wonderful than pointless."

But in the same book he basically dismisses scientists as trying so hard to be rigorous that they make life not worth living. And you can find him ranting about how Euclid must have been kind of stupid because he started with straight lines (as opposed to organic shapes).

The guy frustrates the hell out of me, because with a couple years of undergrad science under his belt he could've been a correct philosopher as well as an original one.

Comment author: ktismael 23 March 2010 05:09:40PM 1 point [-]

Yeah, I suppose his understanding is not consistent, like most of us he has (had) blindspots in which emotion takes over. I, too, found him interesting and frustrating as a writer.

Mostly, I wanted to bring up the distinction between nihilism and what I guess I'll refer to as the buddhist doctrine of "acceptance". I'm not sure how that distinction is to be drawn, since they look quite similar.

Perhaps I could compare it to the difference between agnosticism (or skepticism) and "hard" atheism. The first, here from Dawkins, says "There's probably no god, so quit worrying and enjoy your life." The second, a la Penn Jillette, says "There is no God". Nihilism seems to make a claim to knowledge closer to the second, as "Nothing matters". Acceptance seems closer to the first: "It probably doesn't matter whether or not it matters." But I could be full of crap with this whole line of argument.

Anyway, your paraphrase here makes it pretty clear that at least part of the time he suffered from the "mechanism = despair" fallacy, so I suppose it doesn't especially matter here.

Comment author: simplicio 23 March 2010 05:26:11PM 1 point [-]

I think I get the distinction. I suspect Watts would say something like "all of these things - materialism, spiritualism, etc. are just concepts. Reality is reality." Which sounds nice until you realize he means subjectively experienced reality. Elevating the latter to some sort of superior status is a big mistake imo, although the distinction between reality and our conceptions of it is well founded.

Comment author: ktismael 23 March 2010 06:00:07PM 1 point [-]

Well, I hesitate to challenge your reading of Watts, as you've definitely retained more than I have, but I would say that subjectively experienced reality isn't the goal of understanding; rather, it's an attempt to bring one's perception closer to actual reality. So I suspect that the doctrine of acceptance would say that if your eyes and ears contradict what appears to be actually happening, then you should let your eyes and ears go.

But of course there is always perception bias, and I'm sure the subject is well covered elsewhere on LW. And in Buddhism all of this is weighed down with a lot of mysticism, and even setting that aside, this is a highly idealized version anyway. For FSM's sake, the majority of Buddhists are sending their prayers up to heaven with incense. So perhaps I should just let it go, eh? :) Anyway, thanks for your comments; they may be helping me settle some of my thoughts on all this.

Comment author: Nanani 23 March 2010 01:15:07AM 2 points [-]

"We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples."

This statement is patently false in many ways and there is no way to justify saying that "the basic idea is indisputably correct". The basic idea that the OP imputed was not derivable from this statement in any way that I can see. Am I missing some crucial bit of context?

Some non-trivial holes: We ARE born into this world; we do not grow out of it in any sense, even metaphorical (though I think many here hope to accomplish the feat in the future); the Earth is not an agent and does not verb-people.

The more interesting materialism discussion is already vigorous. I chose to focus on a minor point so as not to detract from it.

Comment author: Johnicholas 23 March 2010 01:08:38PM 5 points [-]

The claim "we do not grow out of it in any sense, even metaphorical" is overly strong.

Consider: The process of evolution is just as natural as (on the one hand) the process of birth and (on the other hand) the process of hydrogen fusing into helium. Considering "the earth" as an agent in the process of evolution is no more peculiar than considering the earth as an agent in the statement "The earth moves around the sun."

The claim "we are not born into this world" is literally false, but if we assume (from context) the philosophical notion that "we are born, tabula rasa, into this world and philosophy is us wondering what to make of it," then the quote is rejecting the notion that humans (or viewpoints, or consciousnesses) are somehow special and atomic, made out of a substance fundamentally incompatible with, say, mud.

Comment author: Mitchell_Porter 21 March 2010 10:50:24AM 6 points [-]

Let's talk about worldviews and the sensibilities appropriate to them. A worldview is some thesis about the nature of reality: materialism, solipsism, monotheism, pantheism, transhumanism, etc. A sensibility is an emotion or a complex of emotions about life.

Your thesis is: rationalist materialism is the correct worldview; its critics say negative things about its implications for sensibility; and some of us are accepting those implications, but incorrectly. Instead we can (should?) feel this other way about reality.

My response to all this is mostly at the level of worldview. I don't have your confidence that I have the basics of reality sorted out. I have confidence that I have had a certain sequence of experiences. I expect the world and the people in it to go on behaving, and responding to me, in a known range of ways, but I do not discount the possibility of fundamental changes or novelties in the future. I can picture a world that is matter in motion, and map it onto certain aspects of experience and the presumed history of the world, but I'm also aware of many difficulties, and also of the rather hypothetical nature of this mapping from the perspective of my individual knowledge. I could be dreaming; these consistencies might show themselves to be superficial or nonsensical if I awoke to a higher stage of lucidity. Even without invoking the skeptical option, I would actually expect an account of the world which fully encompassed what I am, and embedded it into a causal metaphysics, to have a character rather different, and rather richer, than the physics we actually have. I'm also aware that there are limits to my own understanding of basic concepts like existence, cause, time and so forth, and that further progress here might not only change the way I feel about reality, but might reveal vast new tracts of existence I had not hitherto suspected. On a personal level, the possible future transmutations of my own being remain unknown, though the experience of others suggests that it ends in the grave.

So much for criticism at the level of worldview. At the level of sensibility... it seems to me that Dawkins grasps the implications of his worldview better than Watts (that is, if one reads Watts as an expression of the same facts under a different sensibility). There is agony as well as wonder in the materialist universe. Most of it consists of empty cosmic tedium and lifeless realms occasionally swept by vast violences (but of course, this is already a strong supposition about the nature of the rest of the universe, namely that it's a big desert), but life in our little bubble of air and water can surely be viewed as vicious and terrible without much difficulty. That we come from the world does not mean we will inevitably manage to make our peace with it.

Mostly you talk about various forms of nihilism and self-alienation as emotional errors. I think that both the nihilism and the "joy in the merely real" come from a sort of subjective imagining and have very little connection to knowledge. The people for whom materialism threatens nihilism at first imagine themselves to be living in one sort of world; then, they imagine another sort of world, and they have those responses. Meanwhile, the self-identified materialists have been having their experiences while already imagining themselves to be living in a materialist world, so they don't see a problem.

Now in general I am unimpressed (to say the least) with the specific materialistic accounts of subjectivity that materialists have to offer. So I think that the reflections of a typical materialist on how their feelings are really molecules, or whatever, are really groundless daydreams not much removed from a medieval astronomer thrilling to the thought of the celestial spheres. It's just you imagining how it works, and you're probably very wrong about the details.

However, I don't think these details actually play much role in the everyday well-being of materialists anyway. Insofar as they are mentally healthy, it is because things are functioning well at the level of subjectivity, psychological self-knowledge, and so forth. Belief that everything is made of atoms isn't playing a role here. So the real question is, what's going on in the non-materialist or the reluctant materialist, for their mental health to be disturbed by the adoption of such a belief? That is an interesting topic of psychology that might be explored. I think you get a few aspects of it right, but that it is far more subtle and diverse than you allow for. There may be psychological makeups where the nihilist response really is the appropriate emotional reaction to the possibility or the subjective certainty of materialism.

But for me the bottom line is this: discussing rationalist materialism as a total worldview simply reminds me of just how tentative, incomplete, and even problematic such a worldview is, and it impels me to make further efforts towards actually knowing the truth, rather than just lingering in the aesthetics made available by acceptance of one particular possibility as reality.

Comment author: simplicio 22 March 2010 12:02:06AM 2 points [-]

My response to all this is mostly at the level of worldview. I don't have your confidence that I have the basics of reality sorted out... I could be dreaming; these consistencies might show themselves to be superficial or nonsensical if I awoke to a higher stage of lucidity... I'm also aware that there are limits to my own understanding of basic concepts like existence, cause, time and so forth, and that further progress here might not only change the way I feel about reality, but might reveal vast new tracts of existence I had not hitherto suspected.

I think I see what you're saying. However, I feel that hoping for ultimate realities undreamt-of hitherto is giving too much weight to one's own wishes for how the universe ought to be. There is no reason I can think of why the grand nature of reality has to be "richer" than physics (whatever that means). This reality, whether it inspires us or not, is where we find ourselves.

Now in general I am unimpressed (to say the least) with the specific materialistic accounts of subjectivity that materialists have to offer. So I think that the reflections of a typical materialist on how their feelings are really molecules, or whatever, are really groundless daydreams not much removed from a medieval astronomer thrilling to the thought of the celestial spheres. It's just you imagining how it works, and you're probably very wrong about the details.

Well now, I hope you were being facetious when you implied materialists believe that feelings are molecules. You are allowed to be unimpressed by materialist accounts of subjectivity, of course. However, you should seriously consider what kind of account would impress you. An account of subjectivity or consciousness or whatever is kind of like an explanation of a magic trick. It often leaves you with a feeling of "that can't be the real thing!"

Comment author: torekp 24 March 2010 12:54:24AM *  1 point [-]

I think that both the nihilism and the "joy in the merely real" come from a sort of subjective imagining and have very little connection to knowledge. The people for whom materialism threatens nihilism at first imagine themselves to be living in one sort of world; then, they imagine another sort of world, and they have those responses. Meanwhile, the self-identified materialists have been having their experiences while already imagining themselves to be living in a materialist world, so they don't see a problem.

Doesn't this support simplicio's thesis? If there's little connection to knowledge - which I take to mean that neither emotional response follows logically from the knowledge - then epistemic rationality is consistent with joy. And where epistemic rationality is not at stake, instrumental rationality favors a joyful response, if it is possible.

Comment author: byrnema 22 March 2010 12:23:32PM *  4 points [-]

I completely disagree with your post, but I really appreciate it, perhaps as an artful and accurate summary of what people who are and aren't satisfied with materialism disagree about.

The materialist in me figures from first principles that it would seem life has no meaning, morality has no basis, love is an illusion, everything is futile, etc. This is an intellectual and emotional response dovetailed together. I would say that the intellectual response comes first and the emotional response second, because the melancholy is only there if I dwell on it.

As far as I can tell, the only argument that materialism-satisfied materialists have against the intellectual response that generates my negative emotional response is that they lack a negative emotional response. So I see it quite the other way: satisfied materialists lack the emotional response -- in a nod to the normative tone of your post -- that they should have.

Materialism is very compelling, but it has this flaw in its current (hopefully incomplete) formulation. That's the pill to swallow. I would like to see this problem tackled head on and resolved. (I'll add that admitting that some subset of people are not designed to be happy with materialism would be one resolution.)

Comment author: mattnewport 22 March 2010 04:16:41PM 12 points [-]

The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc.

Perhaps part of the difference between those who are satisfied/not satisfied with materialism is in what role something other than materialism could play here. I just don't get how any of the non-materialist 'answers' are more satisfying than the materialist ones. If it bothers you that morality is 'arbitrary', why is it more satisfying if it is the arbitrary preferences of god rather than the arbitrary preferences of humans? Just as I don't get how the answer 'because of god' to the question 'why is there something rather than nothing' is more satisfying for some people than the alternative materialist answer of 'it just is'.

As Eliezer says in Joy in the Merely Real:

You might say that scientists - at least some scientists - are those folk who are in principle capable of enjoying life in the real universe.

Comment author: LauraABJ 22 March 2010 05:27:05PM 8 points [-]

Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and holistic ideas satisfy in our psychology. They try to explain things in large concepts that humans have evolved to easily grasp, rather than the minutiae and logical puzzles of reality. If materialists want these memes to be given up, they will need to create equally compelling human metaphor, which is a tall order if we want everything to convey reality correctly. Compelling metaphor is frequently incorrect. My atheist-Jewish husband loves to talk about the beauty of scripture and parables in the Christian Bible, and stands firm against my insistence that any number of novels are both better written and provide better moral guidance. I personally have a disgust reaction whenever he points out a flowery passage about morality and humanity that doesn't make any actual sense. HOW CAN YOU BE TAKEN IN BY THAT? But unlike practicing religious people, he doesn't 'believe' any of it; he's just attracted to it aesthetically, as an idea, as a beautiful outgrowth of the human spirit. Basically, it presses all the right psychological buttons. This is not to say that materialists cannot produce equally compelling metaphors, but it may be a very difficult task, and the spiritualists have a good, I don't know, 10,000 years on us in homing in on what appeals to our primitive psychology.

Comment author: Jack 22 March 2010 07:27:46PM *  10 points [-]

Why produce new metaphors when we can subvert ones we already know are compelling?

For it is written: The Word of God is not a voice from on High but the whispers of our hopes and desires. God's existence is but His soul, which does not have material substance but resides in our hearts and the Human spirit. Yet this is not God's eternal condition. We are commanded: for the God without a home, make the universe His home. For the God without a body, make Him a body with your own hands. For the God without a mind, make Him a mind like your mind, but worthy of a god. And instill in this mind, in this body, in this universe the soul of God copied from your own heart and the hearts of your brothers and sisters. The Ancients dreamed that God had created the world only because they could not conceive that the world would create God. For God is not the cause of our humility but the unfulfilled aim of our ambition. So learn about the universe so that you may build God a home, learn about your mind so you may build a better one for God, learn about your hopes and desires so that you may give birth to your own savior. With God incarnate will come the Kingdom of God and eternal life.

Comment author: soreff 23 March 2010 11:21:16PM 0 points [-]

This is reminding me of Stross's ReMastered's "unborn god"...

Comment author: mattnewport 22 March 2010 05:50:05PM 3 points [-]

Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and holistic ideas satisfy in our psychology.

I'm wondering whether your statement is true only when you substitute 'some people's' for 'our' in 'our psychology'. I don't feel a god-shaped emotional hole in my psyche. I'm inclined to believe byrnema's self-report that she does. I've talked about this with my lapsed-Catholic mother, and she feels similarly, but I just don't experience the 'loss' she appears to.

Whether this is because I never really experienced much of a religious upbringing (I was reading The Selfish Gene at 8; I've still never read the Bible), or whether it is something about our personality types or our knowledge of science, I don't know. But there appears to be an experience of 'something missing' in a materialist worldview among some people that others just don't seem to have.

Comment author: LauraABJ 23 March 2010 12:58:29AM 5 points [-]

While not everyone experiences the 'god-shaped hole,' it would be dense of us not to acknowledge the ubiquity of spirituality across cultures just because we feel no need for it ourselves (feel free to replace 'us' and 'we' with 'many of the readers of this blog'). Spirituality seems to be an aesthetic imperative for much of humanity, and it will probably take a lot of teasing apart to determine what aspects of it are essential to human happiness, and what parts are culturally inculcated.

Comment author: mattnewport 23 March 2010 01:25:35AM 4 points [-]

Well, coming back to the original comment I was responding to:

The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc.

I don't feel that way, despite being a thoroughgoing materialist for as long as I can remember being aware of the concept. I also don't really see how believing in the 'spiritual' or non-material could change how I feel about these concepts. It does seem to be somewhat common for people to feel that only spirituality can 'save' us from feeling this way but I don't really get why.

I acknowledge that some people do see 'spirituality' (a word that I admittedly have a tenuous grasp on the supposed meaning of) as important to these things which is why I'm postulating that there is some difference in the way of thinking or perhaps personality type of people who don't see a dilemma here and those for whom it is a source of tremendous existential angst.

Comment author: NancyLebovitz 23 March 2010 01:40:27AM 3 points [-]

I think Core transformation offers a plausible theory.

People are capable of feeling oneness, being loved (without a material source) and various other strong positive emotions, but are apt to lose track of how to access them.

Dysfunctional behavior frequently is the result of people jumping to the conclusion that if only some external condition can be met, they'll feel one of those strong positive emotions.

Since the external condition (money, respect, obeying rules) isn't actually a precondition for the emotion, and the belief about the purpose of the dysfunctional behavior isn't conscious, the person keeps seeking joy or peace or whatever in the wrong place.

Core transformation is based on the premise that it's possible to track the motives for dysfunctional behavior back to the desired emotion and give the person access to that emotion -- the dysfunctional behavior evaporates, and the person may find other parts of their life getting better.

I've done a little with this system-- enough to think there's at least something to it.

Comment author: Academian 22 March 2010 08:35:09PM 1 point [-]

Do you take awe in the whole of humanity, Earth, or the universe as something greater than yourself? Does it please you to think that even if you die, the universe, life, or maybe even the human race will go on existing long afterward?

Maybe you don't feel the hole because you've already filled it :)

Comment author: mattnewport 22 March 2010 09:46:03PM 2 points [-]

I've experienced an emotion I think is awe but generally only in response to the physical presence of something in the natural world rather than to sitting and thinking. Being on top of a mountain at sunrise, staring at the sky on a clear night, being up close to a large and potentially dangerous animal and other such experiences have produced the emotion but it is only evoked weakly if at all by sitting and contemplating the universe.

I don't think I have a very firm grip on the varieties of 'religious' experience. I am not really clear on the distinction between awe and wonder for example though I believe they are considered separate emotions.

Comment author: RobinZ 22 March 2010 08:59:04PM 0 points [-]

I can't speak for mattnewport, but I don't take awe, as a rule - I just haven't developed a taste for it. I am occasionally awed, I admit - by acts of cleverness, bravery, or superlative skill, most frequently - but I am rarely rocked back on my heels by "goodness, isn't this universe huge!" and other such observations.

Comment author: PhilGoetz 23 March 2010 03:27:05AM *  4 points [-]

Perhaps part of the difference between those who are satisfied/not satisfied with materialism is in what role something other than materialism could play here. I just don't get how any of the non-materialist 'answers' are more satisfying than the materialist ones.

The answers are satisfying because they're not really answers. They're part of a completely different value and belief system - a large, complex structure that has evolved because it is good at generating certain feelings in those who hold it; feelings which hijack those people's emotional systems to motivate them to spread it. Very much like the fly bacteria (or was it a virus?) that reprograms its victims' brains to climb upwards before they die so that their bodies will spread its spores more effectively.

Comment author: tut 23 March 2010 08:52:50AM *  3 points [-]

I think that the standard example of that is a fungus that infects ants. And the bad pun is "Is it just a fluke?", since the ant climbs to the top of a straw and its behind gets red and swollen like a berry, so that the birds are sure to eat it.

Comment author: PhilGoetz 23 March 2010 10:39:01PM 0 points [-]

Rabies is another example.

Comment author: byrnema 22 March 2010 09:48:22PM 3 points [-]

If it bothers you that morality is 'arbitrary', why is it more satisfying if it is the arbitrary preferences of god rather than the arbitrary preferences of humans?

I believe I can answer this question. The question is a misunderstanding of what "God" was supposed to be. (I think theists often have this misunderstanding as well.)

We live in a certain world, and it is natural for some people (perhaps only certain personality types) to feel nihilistic about that world. There are many, many paths to this feeling -- the problem of evil, the problem of free will, the problem of objective value, the problem of death, etc. There doesn't seem to be any resolution within the material world, so when we turn away from nihilism, as we must, we hope that there's some kind of solution outside the material. This trust, an innate hope, calls on something transcendental to provide meaning.

However you articulate that hope, if you have it, I think that is theism. Humans try to describe what this solution would be explicitly, but then our solution is always limited by our current worldview of what the solution could be (God is the spirit in all living things; God is love and redemption from sin; God is an angry father teaching and exacting justice). In my opinion, religion hasn't kept up with changes in our worldview and is ready for a complete remodeling.

Perhaps we are ready for a non-transcendent solution, as that would seem most appropriate given our worldview in non-religious areas, but I just don't see any solutions yet.

I've been listening carefully, and people who are satisfied with materialism seem to still possess this innate hope and trust; but they are either unable to examine the source of it or they attribute it to something inadequate. For example, someone once told me that for them, meaning came from the freedom to choose their own values instead of having them handed down by God.

But materialism tells us we don't get to choose. We need to learn to be satisfied with being a river, always choosing the path determined by our landscape. The ability to choose would indeed be transcendental. So I think some number of people realized that without something exceptional, we don't have freedom. In religions, this is codified as "God is necessary for the possibility of free will."

So if I say 'there is no God', I'm not denying the existence of a supreme being that could possibly take offense. I'm giving up on freedom, value and purpose. I would like to see, in my lifetime, that those things are already embedded in the material world. Then I would still believe in God -- even more so -- but my belief would be intellectually justified and consistent within my current (scientific) world view.

But if the truth is that they're not there, anywhere, I do wonder what it would take to make me stop believing in them.

Comment author: Furcas 23 March 2010 01:00:54AM *  1 point [-]

Without relaunching the whole discussion, there's one thing I'd like to know: Do you acknowledge that the concepts you're "giving up on" ('transcendental' freedom, value, and purpose, as you define them) are not merely things that don't exist, but things that can't exist, like square circles?

Comment author: byrnema 23 March 2010 01:25:14AM *  2 points [-]

I only know that I believe they should exist. I gave up on figuring out if they could exist. Specifically, what I've "given up on" is a reconciliation of epistemic and instrumental rationality in this matter.

Comment author: Jack 22 March 2010 10:01:36PM 1 point [-]

How'd I do here?

Comment author: byrnema 23 March 2010 12:47:01AM *  1 point [-]

If God doesn't exist, creating him as the purpose of my existence is something I could get behind.

And then I would want the God of the future to be omnipotent enough to modify the universe so that he existed retroactively, so that the little animals dying in the forest hadn't been alone, after all. (On the day I intensely tried to stop valuing objective purpose, I realized that this image was one of my strongest and earliest attachments to a framework of objective value.)

God wouldn't have to modify the universe in any causal way, he would just need to send information back in time (objective-value-information). Curiosity about the possibility of a retroactive God motivated this thread. If it is possible for a God created in the future to propagate backwards in time, then I would rate the probability of God existing currently as quite nearly 1.

Comment author: Rain 21 March 2010 01:09:22PM *  4 points [-]

I take exception to this passage, and feel that it is an unnecessary attack:

I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit.

Comment author: SoullessAutomaton 21 March 2010 05:34:58PM 5 points [-]

It's a reasonable point, if one considers "eventual cessation of thought due to thermodynamic equilibrium" to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?

Comment author: orthonormal 21 March 2010 05:55:44PM 6 points [-]

There are plenty of transhumanists here who believe that (with some nonnegligible probability) the heat death of the universe will be the relevant upper bound on their experience of life.

Comment author: SoullessAutomaton 21 March 2010 06:20:11PM 4 points [-]

Which is fair enough I suppose, but it sounds bizarrely optimistic to me. We're talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.

Comment author: Rain 21 March 2010 07:09:29PM 3 points [-]

I believe we have a duty to attempt to predict the future as far as we possibly can. I don't see how we can take moral or ethical stances without predicting what will happen as a result of our actions.

Comment author: billswift 21 March 2010 11:56:26PM 1 point [-]

We need to predict as far as we can; ethical decision-making requires that we take into account all foreseeable consequences of our actions. But with the unavoidable complexity of society, there are serious limits on how far it is reasonable even to attempt to look ahead; the impossibility of anyone (or even a group) seeing very far is one reason centralized economies don't work. And the complexity of all social interactions is at least an order of magnitude greater than that of strictly economic interactions.

Comment author: Rain 22 March 2010 07:44:44PM *  3 points [-]

I've been trying to think of a good way to explain my problem with evaluation of [utility | goodness | rightness] given that we're very bad at predicting the future. I haven't had much luck at coming up with something I was willing to post, though I consider the topic extremely important.

For example, how much effort should Clippy put into predicting and simplifying the future (basic research, modeling, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips?

The answer "however much it predicts will be useful" seems like a circular problem.

Comment author: billswift 23 March 2010 12:34:53AM 0 points [-]

They are circular problems, but they share a general structure with adaptation problems, and I have found reading serious books on evolution (some of Dawkins's are particularly good) and on economics (try Sowell's Knowledge and Decisions) to be helpful. These types of problems cannot be solved; at best you can only get incrementally improved answers, depending on the costs of acquiring and analyzing further information versus the expected value of that information.

Comment author: simplicio 21 March 2010 05:57:08PM *  0 points [-]

I'm sorry you feel that way but, to be honest, I don't repent of my statement. I simply can't imagine why the ultimate fate of an (at that point uninhabited) cosmos should matter to a puny hoo-man (except intellectually). It's like a mayfly worrying about the Andromeda galaxy colliding with the Milky Way.

I think the confusion here is similar to the fear of being dead (not fear of dying). You sort of imagine how horrible it'll be to be a corpse, just sitting around in a grave. But there will be no one there to experience how bad being dead is, and when the universe peters out in the end, no one will be there to be disappointed. If you care emotionally about entropic heat death, you should logically also feel bad every time an ice cube melts.

Comment author: Rain 21 March 2010 07:07:06PM *  1 point [-]

I care about what to measure (utility function) as much as I care about when to measure it (time function). For any measure, there's a way to maximize it, and I'd like to see whatever measure humans decide is appropriate to be maximized across as much time as possible. So worrying about far future events is important insofar as I'd like my values to be maximized even then.

As for worrying about ice cubes, you're right, it would be inconsistent of me to say otherwise, so I will say that I do. However, I apply a weighted scale of care, and our future galactic empire tends to weigh pretty heavily when compared with something like that.

ETA: My caring about ice cube loss is so small I can't feel it. Dealing with entropy / resource consumption, my caring gets large enough that I can start feeling it around the point of owning and operating large home appliances, automobiles, etc., and it ramps up drastically for things like inefficient power plants, creating new humans, and war.

Comment author: Vladimir_Nesov 21 March 2010 10:28:36AM 2 points [-]

Whether something is good is also a factual question.

Comment author: bogdanb 21 March 2010 01:47:09PM 3 points [-]

Care to elaborate?

Comment author: orthonormal 21 March 2010 05:51:28PM 4 points [-]

The parent is assuming the naturalistic reduction of morality that EY argued for in the Metaethics Sequence, in which "good" is determined by a currently opaque but nonetheless finite computation (at least for a particular agent, but then there's the additional claim that humanity has enough in common that this answer shouldn't vary between people any significant amount).

Comment author: Vladimir_Nesov 21 March 2010 08:48:14PM *  4 points [-]

"good" is determined by a currently opaque but nonetheless finite computation

With a finite definition, but not at all finite or even knowable consequences (they are knowably good, but what they are exactly, one can't know).

there's the additional claim that humanity has enough in common that this answer shouldn't vary between people any significant amount

It's going to vary a very significant amount, just a lot less than the distance from any other preference we might happen to construct, and as such, for example, creating a FAI modeled on any single person is hugely preferable for other people to letting an arbitrary AGI develop, even if this AGI was extensively debugged and trained, and looks to possess all the right qualities.

Comment author: bogdanb 01 April 2010 05:33:05PM 0 points [-]

Well, OK, let’s suppose* I agree with that. Could you elaborate on what that means in the context of the post? (Or link to somewhere where you did, if so.)

(*: Even after re-reading the AID post linked by orthonormal, I’m not sure what you mean by “knowably good” above, but I think that answering to the paragraph above would be more helpful than an abstract discussion.)

Comment author: [deleted] 16 May 2012 05:19:09PM 1 point [-]

Alex Rosenberg argues for a gloomier take on materialism.

From amazon:

"His bracing and ultimately upbeat book takes physics seriously as the complete description of reality and accepts all its consequences. He shows how physics makes Darwinian natural selection the only way life can emerge, and how that deprives nature of purpose, and human action of meaning, while it exposes conscious illusions such as free will and the self."

Comment author: stainlesssteelneuron 22 March 2010 06:33:10PM 1 point [-]

Brilliant.

Comment author: [deleted] 16 May 2012 07:20:59PM 0 points [-]

I like to do some plain ol' dissolving of my unupdated concept of the world, asking "What did I value about X (in the unupdated version)?" and comparing the results to see whether those features survive in the updated version. Oftentimes I only care about that which is left unchanged, since my starting point is often how normality comes about rather than what normality is. Come to think of it, this sounds somewhat like a rephrasing of EY's stance on reductionism(?).

Comment author: xamdam 21 March 2010 07:22:34PM 0 points [-]

<senseless entertainment> Strangely relevant: "Hard pill in a chewable form": http://www.youtube.com/watch?v=UmjmFNrgt5k </senseless entertainment>

Comment deleted 21 March 2010 07:16:48PM *  [-]
Comment author: PhilGoetz 21 March 2010 08:13:52PM *  1 point [-]

Huh? "More mixed"? What could be better, and how is it getting worse?

Comment author: CronoDAS 21 March 2010 08:07:51PM *  1 point [-]

Personally, I really, really hate the laws of thermodynamics; among other things, they make survival more difficult because I have to eat and maintain my body temperature. It would be nice to be powered by a perpetual motion machine, wouldn't it?

Comment author: PhilGoetz 21 March 2010 08:18:26PM 0 points [-]

The laws of physics are the rules without which we couldn't play the game. They make it hard for any one player to win. If you took any of the laws away, you'd probably be a paperclip-equivalent by now. And even if you weren't, living without physics would be like playing tennis without a net. You'd have no goals or desires as we understand them.

Comment author: SoullessAutomaton 23 March 2010 03:12:18AM 4 points [-]

The laws of physics are the rules without which we couldn't play the game. They make it hard for any one player to win.

Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:

  1. You can't win the game.
  2. You can't break even.
  3. You can't stop playing.
Comment author: Jack 21 March 2010 08:31:05PM 3 points [-]

I assume Crono was objecting to these particular laws of physics, not to the idea of there being any laws of physics at all. I'm actually not sure whether there can be existence without laws of physics.