This website is devoted to the art of rationality, and as such, is a wonderful corrective to wrong facts and, more importantly, wrong procedures for finding out facts.

There is, however, another type of cognitive phenomenon that I’ve come to consider particularly troublesome, because it militates against rationality in the irrationalist, and fights against contentment and curiosity in the rationalist. For lack of a better word, I’ll call it perverse-mindedness.

The perverse-minded do not necessarily disagree with you about any fact questions. Rather, they feel the wrong emotions about fact questions, usually because they haven’t worked out all the corollaries.

Let’s make this less abstract. I think the following quote is preaching to the choir on a site like LW:

“The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
-Richard Dawkins, "God's Utility Function," Scientific American (November, 1995).

Am I posting that quote to disagree with it? No. Every jot and tittle of it is correct. But allow me to quote another point of view on this question.

“We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.”

This quote came from an ingenious and misguided man named Alan Watts. You will not find him the paragon of rationality, to put it mildly. And yet, let’s consider this particular statement on its own. What exactly is wrong with it? Sure, you can pick some trivial holes in it – life would not have arisen without the sun, for example, and Homo sapiens was not inevitable in any way. But the basic idea – that life and consciousness are natural and possibly inevitable consequences of the way the universe works – is indisputably correct.

So why would I be surprised to hear a rationalist say something like this? Note that it is empirically indistinguishable from the more common view of “mankind confronted by a hostile universe.” This is the message of the present post: it is not only our knowledge that matters, but also our attitude to that knowledge. I believe I share a desire with most others here to seek truth naively, swallowing the hard pills when it becomes necessary. However, there is no need to turn every single truth into a hard pill. Moreover, sometimes the hard pills also come in chewable form.

What other fact questions might people regard in a perverse way?

How about materialism, the view that reality consists, at bottom, in the interplay of matter and energy? This, to my mind, is the biggie. To come to facilely gloomy conclusions based on materialism seems to be practically a cottage industry among Christian apologists and New Agers alike. Since the claims are all so similar to each other, I will address them collectively.

“If we are nothing but matter in motion, mere chemicals, then:

  1. Life has no meaning;
  2. Morality has no basis;
  3. Love is an illusion;
  4. Everything is futile (there is no immortality);
  5. Our actions are determined; we have no free will;
  6. et
  7. cetera.”


The usual response from materialists is to say that an argument from consequences isn’t valid – if you don’t like the fact that X is just matter in motion, that doesn’t make it false. While eminently true, as a rhetorical strategy for convincing people who aren’t already on board with our programme, it’s borderline suicidal.

I have already hinted at what I think the response ought to be. It is not necessarily a point-by-point refutation of each of these issues individually. The simple fact is, not only is materialism true, but it shouldn’t bother anyone who isn’t being perverse about it, and it wouldn’t bother us if it had always been the standard view.

There are multiple levels of analysis in the lives of human beings. We can speak of societies, move to individual psychology, thence to biology, then chemistry… this is such a trope that I needn’t even finish the sentence.

However, the concerns of, say, human psychology (as distinct from neuroscience), or morality, or politics, or love, are not directly informed by physics. Some concepts only work meaningfully on one level of analysis. If you were trying to predict the weather, would you start by modeling quarks? Reductionism in principle I will argue for until the second coming (i.e., forever). Reductionism in practice is not always useful. This is the difference between proximate and ultimate causation. The perverse-mindedness I speak of consists in leaping straight from behaviour or phenomenon X to its ultimate cause in physics or chemistry. Then – here’s the “ingenious” part – declaring that, since the ultimate level is devoid of meaning, morality, and general warm-and-fuzziness, so too must be all the higher levels.

What can we make of someone who says that materialism implies meaninglessness? I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte,” they would earnestly ask me what on earth the purpose of all the little dots was. Matter is what we’re made of, in the same way as a painting is made of dried pigments on canvas. Big deal! What would you prefer to be made of, if not matter?

It is only by the contrived unfavourable contrast of matter with something that doesn’t actually exist – soul or spirit or élan vital or whatever – that somebody can pull off the astounding trick of spoiling your experience of a perfectly good reality, one that you should feel lucky to inhabit.

I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism. There really is no hard pill to swallow.

What are some other examples of perversity? Eliezer has written extensively on another important one, which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme. It can be easily spotted by tuning your ears to the words “just” and “merely.” By saying, for example, that sexual attraction is “merely” biochemistry, you are telling the truth and deceiving at the same time. You are making a (more or less) correct factual statement, while Trojan-horsing an extraneous value judgment into your listener’s mind as well: “chemicals are unworthy.” On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?

What about the final fate of the universe, to take another example? Many of us probably remember the opening scene of Annie Hall, where little Alvy tells the family doctor he’s become depressed because everything will end in expansion and heat death. “He doesn’t do his homework!” cries his mother. “What’s the point?” asks Alvy.

Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.

The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense. It is the joy of learning something that’s been known for centuries; it is appreciating the consilience of knowledge without moaning about reductionism; it is accepting nature on her own terms, without fatuous navel-gazing about how unimportant you are on the cosmic scale. If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.

255 comments

I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism.

There actually is a way in which they're right.

My first thought was, "You've got it backwards - it isn't that materialism isn't gloomy; it's that spiritualism is even gloomier." Because spiritual beliefs - I'm usually thinking of Christianity when I say that - don't really give you oughtness for free; they take the arbitrary moral judgements of the big guy in the sky and declare them correct. And so you're not only forced to obey this guy; you're forced to enjoy obeying him, and have to feel guilty if you have any independent moral ideas. (This is why Christianity, Islam, communism, and other similar religions often make their followers morally-deficient.)

But what do I mean by gloomier? I must have some baseline expectation which both materialism and spirituality fall short of, to feel that way.

And I do. It's memories of how I felt when I was a Christian. Like I was a part of a difficult but Good battle between right and wrong.

Now, hold off for a moment on asking whet...

I disagree with most of this but vote it up for being an excellent presentation of a complex and important position that must be addressed (though as noted, I think it can be) and hasn't been adequately addressed to satisfy (or possibly even to be understood by) all or most LW readers.

Phil, I suggest that you try to look at Christian and secular children (and possibly those of some other religions) and decide empirically whether they really seem to differ so much in happiness or well-being. Looking at people in a wide range of cultures and situations would in general be helpful, but especially that contrast – or mostly, I suspect, that lack of contrast.

Phil, I suggest that you try to look at Christian and secular children (and possibly those of some other religions) and decide empirically whether they really seem to differ so much in happiness or well-being.

Children are not the place to look. Dogs psychologically resemble wolf-pups; they are childlike. Religion, like the breeding of dogs, is neotenous; it allows retention of childlike features into adulthood. To see the differences I'm talking about, you therefore need to look at adults.

Anyway, if you're asking me to judge based on who is the happiest, you've taken the first step down the road to wireheading. Dogs have been genetically reprogrammed to develop in a way that wires their value system to getting a pat on the head from their master.

The basic problem here is how we can simultaneously preserve human values, and not become wireheads, when some people are already wireheads. The religious worldview I spoke of above is a kind of wireheading. Would CEV dismiss it as wireheading? If so, what human values aren't wireheading? How do we walk the tightrope between wireheads and moral realists? Is there even a tightrope to walk there?

IAWYC except for the last paragraph. While CEV isn't guaranteed to be a workable concept, and while it's dangerous to get into the habit of ruling out classes of counterargument by definition, I think there's a problem with criticizing CEV on the grounds "I think CEV will probably go this way, but I think that way is a big mistake, and I expect we'd all see it as a mistake even if we knew more, thought faster, etc." This is exactly the sort of error the CEV project is built to avoid.

Rain (+4, 14y)
I was a strong proponent of CEV as the most-correct theory I had heard on the topic of what goals to set, but I've become more skeptical as Eliezer started talking about potential tweaks to avoid insane results like the dog scenario above. It seems similar in nature to the rule-building method of goal definition, where you create a list, and which has been roundly criticized as near impossible to do correctly.
Strange7 (+5, 14y)
That's why I prefer the 'would it satisfy everyone who ever lived?' strategy over CEV. Humanity's future doesn't have to be coherent. Coherence is something that happens at evolutionary choke-points, when some species dies back to within an order of magnitude of the minimum sustainable population. When some revolutionary development allows unprecedented surpluses, the more typical response is diversification.

Consider the trilobites. If there had been a trilobite-Friendly AI using CEV, invincible articulated shells would comb carpets of wet muck with the highest nutrient density possible within the laws of physics, across worlds orbiting every star in the sky. If there had been a trilobite-engineered AI going by 100% satisfaction of all historical trilobites, then trilobites would live long, healthy lives in a safe environment of adequate size, and the Cambrian explosion (or something like it) would have proceeded without them.

Most people don't know what they want until you show it to them, and most of what they really want is personal. Food, shelter, maybe a rival tribe that's competent enough to be interesting but always loses when something's really at stake. The option of exploring a larger world, seldom exercised. It doesn't take a whole galaxy's resources to provide that, even if we're talking trillions of people.
orthonormal (+3, 14y)
I realized a pithy way of stating my objection to that strategy: given how unlikely I think it is that the test could be passed fairly by a Friendly AI, an AI passing the test is stronger evidence that the AI is cheating somehow than that the AI is Friendly.
Strange7 (+2, 14y)
If the AI is programmed so that it genuinely wants to pass the test (or the closest feasible approximation of the test) fairly, cheating isn't an issue. This isn't a matter of fast-talking it's way out of a box. A properly-designed AI would be horrified at the prospect of 'cheating,' the way a loving mother is horrified at the prospect of having her child stolen by fairies and replaced with a near-indistinguishable simulacrum made from sticks and snow.
PhilGoetz (+6, 14y)
It is probably possible to pass that test by exploiting human psychology. It is probably impossible to do well on that test by trying to convince humans that your viewpoint is right. You're talking past orthonormal. You're assuming a properly-designed AI. He's saying that accomplishing the task would be strong evidence of unfriendliness.
orthonormal (+4, 14y)
What Phil said, and also: Taboo "fairly"— this is another word the specification of which requires the whole of human values. Proving that the AI understands what we mean by fairness and wants to pass the test fairly is no easier than proving it Friendly in the first place.
Strange7 (0, 14y)
"Fairly" was the wrong word in this context. Better might be 'honest' or 'truthful.' A truthful piece of information is one which increases the recipient's ability to make accurate predictions; an honest speaker is one whose statements contain only truthful information.
RobinZ (+3, 14y)
About what? Anything? That sounds very easy. Remember Goodhart's Law - what we want is G, Good, not any particular G* normally correlated with Good.
Strange7 (+1, 14y)
Walking from Helsinki to Saigon sounds easy, too, depending on how it's phrased. Just one foot in front of the other, right? Humans make predictions all the time. Any time you perceive anything and are less than completely surprised by it, that's because you made a prediction which was at least partly successful. If, after receiving and assimilating the information in question, any of your predictions is reduced in accuracy, any part of that map becomes less closely aligned with the territory, then the information was not perfectly honest. If you ignore or misinterpret it for whatever reason, even when it's in some higher sense objectively accurate, that still fails the honesty test. A rationalist should win; an honest communicator should make the audience understand. Given the option, I'd take personal survival even at the cost of accurate perception and ability to act, but it's not a decision I expect to be in the position of needing to make: an entity motivated to provide me with information that improves my ability to make predictions would not want to kill me, since any incoming information that causes my death necessarily also reduces my ability to think.
orthonormal (+3, 14y)
What Robin is saying is, there's a difference between

  • "metrics that correlate well enough with what you really want that you can make them the subject of contracts with other human beings", and
  • "metrics that correlate well enough with what you really want that you can make them the subject of a transhuman intelligence's goals".

There are creative avenues of fulfilling the letter without fulfilling the spirit that would never occur to you but would almost certainly occur to a superintelligence, not because xe is malicious, but because they're the optimal way to achieve the explicit goal set for xer. Your optimism, your belief that you can easily specify a goal (in computer code, not even English words) which admits of no undesirable creative shortcuts, is grossly misplaced once you bring smarter-than-human agents into the discussion. You cannot patch this problem; it has to be rigorously solved, or your AI wrecks the world.
RobinZ (+2, 14y)
Sure, but I don't want to be locked in a box watching a light blink very predictably on and off.
Strange7 (0, 14y)
Building the box reduces your ability to predict anything taking place outside the box. Even if the box can be sealed perfectly until the end of time without killing you (which would in itself be a surprise to anyone who knows thermodynamics), cutting off access to compilations of medical research reduces your ability to predict your own physiological reactions. Same goes for screwing with your brain functions.
RobinZ (+4, 14y)
I do not think you should be as confident as you are that your system is bulletproof. You have already had to elaborate and clarify and correct numerous times to rule out various kinds of paperclipping failures - all it takes is one elaboration or clarification or correction forgotten to allow for a new one, attacking the problem this way.
Strange7 (0, 14y)
How confident do you think I am that my plan is bulletproof?
RobinZ (0, 14y)
Given that you asked me the question, I reckon you give it somewhere between 1:100 and 2:1 odds of succeeding. I reckon the odds are negligible.
Strange7 (0, 14y)
That's our problem right there: you're trying to persuade me to abandon a position I don't actually hold. I agree that an AI based strictly on a survey of all historical humans would have negligible chance of success, simply because a literal survey is infeasible and any straightforward approximation of it would introduce unacceptable errors.
RobinZ (+3, 14y)
...why are you defending it, then? I don't even see that thinking along those lines is helpful.
Strange7 (0, 14y)
For everyone else, it was a chance to identify flaws in a proposition. No such thing as too much practice there. For me, it was a chance to experience firsthand the thought processes involved in defending a flawed proposition, necessary practice for recognizing other such flawed beliefs I might be holding; I had no religious upbringing to escape, so that common reference point is missing. Furthermore, I knew from the outset that such a survey wouldn't be practical, but I've been suspicious of CEV for a while now. It seems like it would be too hard to formalize, and at the same time, even if successful, too far removed from what people spend most of their time caring about. I couldn't be satisfied that there wasn't a better way to do it until I'd tried to find such a way myself.
orthonormal (+7, 14y)
It's polite to give some signal that you're playing devil's advocate if you know you're making weak arguments. This is not a sufficient condition for establishing the optimality of CEV. Indeed, I'm not sure there isn't a better way (nor even that CEV is workable), just that I have at present no candidates for one.
Strange7 (+1, 14y)
I apologize. I thought I had discharged the devil's-advocacy-signaling obligation by ending my original post on the subject with a request to be proved wrong. I agree that personal satisfaction with CEV isn't a sufficient condition for it being safe. For that matter, having proposed and briefly defended this one alternative isn't really sufficient for my personal satisfaction in either CEV's adequacy or the lack of a better option. But we have to start somewhere, and if someone did come up with a better alternative to CEV, I'd want to make sure that it got fair consideration.
PhilGoetz (0, 14y)
Your trilobite example is at odds with your everyone-who-lived strategy. The impact of the trilobite example is to show that CEV is fundamentally wrong, because trilobite cognition, no matter how far you extrapolate it, would never lead to love, or value it if it arose by chance. Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.

Let me expand upon Vladimir's comment:

Some degree of randomness is necessary to allow exploration of the landscape of possible worlds. CEV is designed to prevent exploration of that landscape.

You have not yet learned that a certain argumentative strategy against CEV is doomed to self-referential failure. You have just argued that "exploring the landscape of possible worlds" is a good thing, something that you value. I agree, and I think it's a reflectively consistent value, which others generally share at some level and which they might share more completely if they knew more, thought faster, had grown up farther together, etc.

You then assume, without justification, that "exploring the landscape of possible worlds" will not be expressed as a part of CEV, and criticize it on these grounds.

Huh? What friggin' definition of CEV are you using?!?

EDIT: I realized there was an insult in my original formulation. I apologize for being a dick on the Internet.

PhilGoetz (+3, 14y)
Because EY has specifically said that that must be avoided, when he describes evolution as something dangerous. I don't think there's any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums. Most likely, all the best optimums lie in places that CEV is designed to keep us away from, just as trilobite CEV would keep us away from human values. So CEV is worse than random.

Most likely, all the best optimums lie in places that CEV is designed to keep us away from, just as trilobite CEV would keep us away from human values.

That a "trilobite CEV" would never lead to human values is hardly a criticism of CEV's effectiveness. The world we have now is not "trilobite friendly"; trilobites are extinct!

CEV, as I understand it, is very weakly specified. All it says is that a developing seed AI chooses its value system after somehow taking into account what everyone would wish for, if they had a lot more time, knowledge, and cognitive power than they do have. It doesn't necessarily mean, for example, that every human being alive is simulated, given superintelligence, and made to debate the future of the cosmos in a virtual parliament. The combination of better knowledge of reality and better knowledge of how the human mind actually works may make it extremely clear that the essence of human values, extrapolated, is XYZ, without any need for a virtual referendum, or even a single human simulation.

It is a mistake to suppose, for example, that a human-based CEV process will necessarily give rise to a civilizational value system which attac...

Because EY has specifically said that that must be avoided, when he describes evolution as something dangerous.

That doesn't mean that you can't examine possible trajectories of evolution for good things you wouldn't have thought of yourself, just that you shouldn't allow evolution to determine the actual future.

I don't think there's any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums.

I'm not sure what you mean by "constrain" here. A process that reliably reaches an optimum (I'm not saying CEV is such a process) constrains future development to reach an optimum. Any nontrivial (and non-self-undermining, I suppose; one could value the nonexistence of optimization processes or something) value system, whether "provincially human" or not, prefers the world to be constrained into more valuable states.

Most likely, all the best optimums lie in places that CEV is designed to keep us away from

I don't see where you've responded to the point that CEV would incorporate whatever reasoning leads you to be concerned about this.

orthonormal (+7, 14y)
Or to take one step back: It seems that you think there are two tiers of values, one consisting of provincial human values, and another consisting of the true universal values like "exploring the landscape of possible worlds". You worry that CEV will catch only the first group of values. From where I stand, this is just a mistaken question; the values you worry will be lost are provincial human values too! There's no dividing line to miss.
PhilGoetz (+1, 14y)
I understand what you're saying, and I've heard that answer before, repeatedly; and I don't buy it.

Suppose we were arguing about the theory of evolution in the 19th century, and I said, "Look, this theory just doesn't work, because our calculations indicate that selection doesn't have the power necessary." That was the state of things around the turn of the century, when genetic inheritance was assumed to be analog rather than discrete. An acceptable answer would be to discover that genes were discrete things that an organism had just 2 copies of, and that one was often dominant, so that the equations did in fact show that selection had the necessary power. An unacceptable answer would be to say, "What definition of evolution are you using? Evolution makes organisms evolve! If what you're talking about doesn't lead to more complex organisms, then it isn't evolution."

Just saying "Organisms become more complex over time" is not a theory of evolution. It's more like an observation of evolution. A theory means you provide a mechanism and argue convincingly that it works. To get to a theory of CEV, you need to define what it's supposed to accomplish, propose a mechanism, and show that the mechanism might accomplish the purpose. You don't have to get very far into this analysis to see why the answer you've given doesn't, IMHO, work. I'll try to post something later this afternoon on why.
PhilGoetz (+6, 14y)
I won't get around to posting that today, but I'll just add that I know that the intent of CEV is to solve the problems I'm complaining about. I know there are bullet points in the CEV document that say, "Renormalizing the dynamic", "Caring about volition," and, "Avoid hijacking the destiny of humankind." But I also know that the CEV document says, and I think there is what you could call an order-of-execution problem, and I think there's a problem with things being ill-defined, and I think the desired outcome is logically impossible. I could be wrong. But since Eliezer worries that this could be the case, I find it strange that Eliezer's bulldogs are so sure that there are no such problems, and so quick to shoot down discussion of them.
PhilGoetz (-1, 14y)
This is one of the things I don't understand: If you think everything is just a provincial human value, then why do you care? Why not play video games or watch YouTube videos instead of arguing about CEV? Is it just more fun? (There's a longish section trying to answer this question in the CEV document, but I can't make sense of it.) There's a distinction that hasn't been made on LW yet, between personal values and evangelical values. Western thought traditionally blurs the distinction between them, and assumes that, if you have personal values, you value other people having your values, and must go on a crusade to get everybody else to adopt your personal values. The CEVer position is, as far as I can tell, that they follow their values because that's what they are programmed to do. It's a weird sort of double-think that can only arise when you act on the supposition that you have no free will with which to act. They're talking themselves into being evangelists for values that they don't really believe in. It's like taking the ability to follow a moral code that you know has no outside justification from Nietzsche's "master morality", and combining it with the prohibition against value-creation from his "slave morality".
ata (+5, 14y)
That's how most values work. In general, I value human life. If someone does not share this value, and they decide to commit murder, then I would stop them if possible. If someone does not share this value, but is merely apathetic about murder rather than a potential murderer themselves, then I would cause them to share this value if possible, so there will be more people to help me stop actual murderers. So yes, at least in this case, I would act to get other people to adopt my values, or inhibit them from acting on their own values. Is this overly evangelical? What is bad about it? In any case, history seems to indicate that "evangelizing your values" is a "universal human value".

Groups that didn't/don't value evangelizing their values:

  • The Romans. They don't care what you think; they just want you to pay your taxes.
  • The Jews. Because God didn't choose you.
  • Nietzscheans. Those are their values, dammit! Create your own!
  • Goths. (Angst-goths, not Visi-goths.) Because if everyone were a goth, they'd be just like everyone else.

We get into one sort of confusion by using particular values as examples. You talk about valuing human life. How about valuing the taste of avocados? Do you want to evangelize that? That's kind of evangelism-neutral. How about the preferences you have that make one particular private place, or one particular person, or other limited resource, special to you? You don't want to evangelize those preferences, or you'd have more competition. Is the first sort of value the only one CEV works with? How does it make that distinction?

We get into another sort of confusion by not distinguishing between the values we hold as individuals, the values we encourage our society to hold, and the values we want God to hold. The kind of values you want your God to hold are very different from the kind of values you want people to hold, in the same way that you want the referee to have different desires than the players. CEV mushes these two very different things together.

ata (0, 14y)
Good points. I haven't thoroughly read the CEV document yet, so I don't know if there is any discussion of this, but it does seem that it should make a distinction between those different types of values and preferences.
Vladimir_Nesov (0, 14y)
You never learn.
PhilGoetz (+3, 14y)
Folks. Vladimir's response is not acceptable in a rational debate. The fact that it currently has 3 points is an indictment of the Less Wrong community.
JGWeissman (+6, 14y)
Normally I would agree, but he was responding to "Some degree of randomness is necessary". Seriously, you should know that isn't right.
PhilGoetz (+2, 14y)
That post is about a different issue. It's about whether introducing noise can help an optimization algorithm. Sounds similar; isn't. The difference is that the optimization algorithm already knows the function that it's trying to optimize.

The basic problem with CEV is that it requires reifying values in a strange way so that there are atomic "values" that can be isolated from an agent's physical and cognitive architecture; and that (I think) it assumes that we have already evolved to the point where we have discovered all of these values. You can make very general value statements, such as that you value diversity, or complexity. But a trilobite can't make any of those value statements. I think it's likely that there are even more important fundamental value statements to be made that we have not yet conceptualized; and CEV is designed from the ground up specifically to prevent such new values from being incorporated into the utility function.

The need for randomness is not because random is good; it's because, for the purpose of discovering better primitives (values) to create better utility functions, any utility function you can currently state is necessarily worse than random.
JGWeissman (+5, 14y)
Since when is randomness required to explore the "landscape of possible worlds"? Or the possible values that we haven't considered? A methodical search would be better. How did you miss that lesson from Worse Than Random, when it included an example (the pushbutton combination lock) of exploring a space of potential solutions?
PhilGoetz (0, 14y)
Okay, you don't actually need randomness, if you can work out a way of doing a methodical variation of all possible parameters. (For problems of this nature, using random processes allows you to specify the statistical properties that you want the solution to have, which is often much simpler than specifying a deterministic process that has those properties. That's one reason randomness is useful.) The point I'm trying to make is that you need not to limit yourself to "searching", meaning trying to optimize a function. You can only search when you know what you're looking for. A value system can't be evaluated from the outside. You have to try it on. Rationally, where "rational" means optimizing existing values, you wouldn't do that. So randomness (or a rationally-ordered but irrationally-pursued exploration of parameter space) will lead to places no rational agent would go.
JGWeissman (+3, 14y)
[EDIT: Wow, the parent comment completely changed since I responded to it. WTF?] How do you plan to map a random number into a search space that you could not explore systematically? According to which utility function?
PhilGoetz (0, 14y)
I have a bad habit of re-editing a comment for several minutes after first posting it.

Suppose you want to test a program whose input variables are distributed normally. You can write a big complicated equation to sample at uniform intervals from the cumulative distribution function for the Gaussian distribution. Or you can say "x = mean; for i=1 to 10 { x += rnd(2)-1 }".

Very often, the only data you know about your space is randomly-sampled data. So you look at that randomly-sampled data, and come up with some simple random model that would generate data with similar properties. The nature of the statistics you've gathered, such as the mean, variance, and correlations between observed variables, makes it very hard to construct a deterministic model that would reproduce those statistics, but very easy to build a random model that does.

Some people really do have the kinds of misconceptions Eliezer was talking about; but the idea that there are hordes of scientists who attribute magical properties to randomness just isn't true. This is not a fight you need to fight. And railing against all use of randomness in the simulation or study of complex processes just puts a big sticker on your head that says "I have no experience with what I'm talking about!"

We're having 2 separate arguments here. I hope you realize that my comment that you originally responded to was not claiming that randomness has some magical power. It was about the need, when considering the future of the universe, for trying things out not just because your current utility function suggests they will have high utility. I used "random" as shorthand for "not directed by a utility function".

According to the utility function that your current utility function doesn't like, but that you will be delighted with once you try it out.
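A minimal runnable sketch of the two sampling approaches contrasted above, assuming Python's standard library (the helper names are illustrative, not from the comment):

    import random
    import statistics

    def sample_exact(mean, std):
        # Draw from the normal distribution directly; the library handles the
        # inverse-CDF / Box-Muller machinery for us.
        return random.gauss(mean, std)

    def sample_sum_of_uniforms(mean, n=12):
        # Central-limit-theorem shortcut, in the spirit of the pseudocode
        # "x = mean; for i=1 to 10 { x += rnd(2)-1 }": the sum of n uniform draws
        # on [-1, 1] is approximately normal with variance n/3 (n=12 gives a
        # standard deviation of 2).
        return mean + sum(random.uniform(-1, 1) for _ in range(n))

    exact = [sample_exact(0.0, 2.0) for _ in range(10000)]
    approx = [sample_sum_of_uniforms(0.0) for _ in range(10000)]
    print(statistics.mean(exact), statistics.stdev(exact))    # roughly 0.0 and 2.0
    print(statistics.mean(approx), statistics.stdev(approx))  # roughly 0.0 and 2.0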
JGWeissman (+3, 14y)
Yes, I understand you can use randomness as an approximate substitute for actually understanding the implications of your probability distributions. That does not really address my point: the randomness does not grant you access to a search space you could not otherwise explore.

If you analyze randomly-sampled data by considering the probability distribution of results for a random sampling, instead of for the specific sampling you actually used, you are vulnerable to the mistake described here. You can deterministically build a model that accounts for your uncertainty. Having a probability distribution is not the same thing as randomly choosing results from that distribution.

First of all, I am not "railing against all use of randomness in the simulation or study of complex processes". I am objecting to your claim that "randomness is required" in an epistemological process. Second, you should not presume to warn me about stickers on my head. You should realize that "randomness is required" does sound very much like "claiming that randomness has some magical power", and if you misspoke, the correct response to the objection would be to admit that you made a mistake and apologize for the miscommunication, not to try to defend the wrong claim.

It appears that you don't understand the purpose of utility functions. I do not want to have a utility function U that maximizes U(U), that assigns to itself higher utility than any other utility function assigns to itself. I want to achieve states of the world that maximize my current utility function.
PhilGoetz (0, 14y)
You mean, for instance, by saying, I'm not defending the previous wrong claim about "needing randomness". I'm arguing against your wrong claim, which appears to be that one should never use randomness in your models. It appears that you still don't understand what my basic point is. You can't improve your utility function by a search using your utility function. We have better utility functions than trilobites did. We could not have found them using trilobite utility functions. Trilobite CEV would, if performing optimally, have ruled them out. Extrapolate.
JGWeissman (+3, 14y)
Wow, you are actually compounding the rudeness of abusing the edit feature to completely rewrite your comment by then analyzing my response to the original version as if it were responding to the edited version.

How did you get from "randomness is never required" to "randomness is never useful"? I acknowledge that sometimes randomness can be a good enough approximate substitute for the much harder strategy of actually understanding the implications of a probability distribution.

I understand your argument. It is wrong. You have not actually responded to my objection. To refute my objection, you would have to explain why I should want to give up my current utility function U0 in favor of some other utility function U such that

  (1) U(U) > U0(U0), even though
  (2) U0(U0) > U0(U).

Since U0 is my current utility function, and therefore (2) describes my current wants, you will not be able to convince me that I should be persuaded by (1), which is a meaningless comparison. Adopting U as my utility function does not help me maximize U0.

To the extent that trilobites can even be considered to have utility functions, my utility function is better than the trilobite utility function according to my values. The trilobites would disagree. An optimal human CEV would be a human SUCCESS and a trilobite FAIL. Likewise, an optimal trilobite CEV would be a trilobite SUCCESS and a human FAIL. There is no absolute universal utility function that says one of these is better than the other. It is my human values that cause me to say that the human SUCCESS is better.
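A toy numeric rendering of inequalities (1) and (2), with made-up values chosen purely for illustration (nothing here is from the thread), showing why comparison (1) carries no weight for an agent deciding with U0:

    # Hypothetical utilities each function assigns to the outcome of the choice
    # "keep U0" versus "switch to U"; the numbers are invented for illustration.
    U0 = {"keep_U0": 10, "switch_to_U": 3}    # the agent's current utility function
    U = {"keep_U0": 1, "switch_to_U": 100}    # the rival utility function

    assert U["switch_to_U"] > U0["keep_U0"]   # (1) U(U) > U0(U0)
    assert U0["keep_U0"] > U0["switch_to_U"]  # (2) U0(U0) > U0(U)

    # The choice is evaluated with the agent's *current* utility function,
    # so (2), not (1), is the comparison that drives the decision.
    print(max(U0, key=U0.get))                # -> keep_U0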
Strange7 (+1, 14y)
Unless, of course, it turns out that humans really like trilobites and would be willing to devote significant resources to keeping them alive, understanding their preferences, and carrying out those preferences (without compromising other human values). In that case, it's mutual success.
PhilGoetz (-2, 14y)
You're thinking of tribbles.
Strange7 (0, 14y)
Tribbles, while cute, directly compete with humans for food. In the long view, trilobites might have an easier time finding their niche.
PhilGoetz (-2, 14y)
I'm breaking this out into a separate reply, because it's its own sub-thread: If no utility function, and hence no world state, is objectively better than any other, then all utility functions are wireheading. Because the only distinction between wireheading, and not wireheading, is that the wirehead only cares about his/her own qualia, not about states of the world. If the only reason you care about states of the world is because of how your utility function evaluates them - that is to say, what qualia they generate in you - you are a wirehead.
JGWeissman (+2, 14y)
You have it backwards. I do not care about things because of how my utility function evaluates them. Rather, my utility function evaluates things the way it does because of how I care about them. My utility function is a description of my preferences, not the source of them.
PhilGoetz (+1, 14y)
I don't think the order of execution matters here. If there's no objective preference over states of the world, then there's no objective reason to prefer "not wireheading" (caring about states of the world) over "wireheading" (caring only about your percepts).
JGWeissman (+5, 14y)
There is no "objective" reason to do anything. Knowing that, what are you going to do anyways? Myself, I am still going to do things for my subjective reasons.
PhilGoetz (+1, 14y)
Okay; but then don't diss wireheading.
wnoise (+2, 14y)
You appear to have an overexpansive definition of wireheading. Having an arbitrary utility function is not the same as wireheading. Wireheading is a very specific sort of alteration of utility functions that we (i.e. most humans, with our current, subjective utility functions, nearly universally) see as very dangerous, because it throws away what we currently care about. Wireheading is a "parochial" definition, not universal. But that's OK.
PhilGoetz (0, 14y)
What's your definition of wireheading? I didn't define it as having an arbitrary utility function. I defined it as a utility function that depends only on your qualia.
wnoise (+4, 14y)
What else can the utility function as implemented by your hardware depend on besides your qualia, and computations derived from your qualia? Calling utility functions "wireheading" is a category error. Wireheading is either:

  1. Directly acting on the machinery that implements one's utility function to trivially satisfy this hardware, i.e. by directly injecting qualia rather than providing the qualia via what they are normally correlated with.
  2. More broadly, altering one's utility function to one that is trivial to broadly satisfy, such as by reinforcement via 1.
PhilGoetz (0, 14y)
If you read my original comment, it's clear that I meant wireheading is having a utility function that depends only on your qualia. Or maybe "choosing to have". Huh? So you think there's nothing inside your head except qualia? Beliefs aren't qualia. Subconscious information isn't qualia. This sounds like a potentially good definition. But I'm unclear then why anyone using utility theory, and that definition, would object to wireheading. If you've got a utility function, and you can satisfy it, that's the thing to do, right? Why does it matter how you satisfy it? You seem to be saying that the hardware implementation isn't your real utility function, it's just an implementation of it. As if the utility function stood somewhere outside you.
wnoise (+1, 14y)
Beliefs and subconscious information are derived from qualia and the information about the external world that they correlate with, no?

Utility functions are a convenient mathematical description of the preferences of entities in game theory and some decision theories, when these preferences are consistent. The concept is useful as a metaphor for "what we want", but when used loosely like this, there are troubles. As applied to humans, this flat-out doesn't work. Empirically and as a general rule, we're not consistent, and most of us can readily be money-pumped. We do not have a nice clean module that weighs outcomes and assigns real numbers to them. Nor do we feed outcome weights into a probability weighting module, and then choose the maximum utility. Our values change on reflection.

Heck, we're not even unitary entities. Our consciousness is multi-faceted. There are the left and right brains communicating and negotiating through the corpus callosum. The information immediately accessible to the consciousness, what we identify with, is rather different than the information our subconscious uses. We are a gigantic hack of an intelligence built upon the shifting sands of stimulus-response and reinforcement conditioning. These joints in our selves make it easier to wirehead, and essentially kill our current selves, leaving only animal-level instincts, if that.

There are multiple utility functions running around here. The basic point was that what I consider important now matters to what choices I make now. The fact that I can make the future me have a new utility function, satisfied by wireheading, does not register positively on my current utility function. In fact, because it throws away almost everything I now care about, I am unlikely to do it now. My goals are "satisfy my current utility function", and are always that, because that's what we mean by the abstraction of utility function. My goals are not to satisfy what preferences I may later have. My goals are not
PhilGoetz (0, 14y)
Not as far as I know, no. You may be equating "qualia" with "percepts". That's not right. If that analysis were correct, there would be no difficulty about wireheading. It would simply be an error. There is a difficulty about wireheading, and I'm trying to talk about it. I'm looking at static situations: Is there something objectively wrong with a person plugged into themselves giving themselves orgasms forever? The LW community has a consensus that there is something wrong with that. Yet they also have a consensus that there are no objective values. These are inconsistent. You're trying to say that wireheading is an error not because the final wirehead state reached is wrong, but because the path from here to there involved an error. That's not a valid objection, for the reasons you gave in your comment: Humans are messy, and random variation is a natural part of the human hardware and software. And humans have been messy for some time. So if you can become a wirehead by a simple error, many people must already have made that error. And CEV has to incorporate their wirehead preferences equally with everyone else's. There's something inconsistent about saying that human values are good, but the process generating those values is bad.
wnoise (+3, 14y)
Well, I'm still not convinced there is a useful difference, though I see why philosophers would separate the concepts. There is nothing objectively wrong with that, no. The LW community has a consensus that there is something wrong with that judged by our current parochial values that we want to maintain. Not objectively wrong, but widely held inter-subjective agreement that lets us cooperate in trying to steer the future away from a course where everyone gets wireheaded. No, I'm saying that the final state is wrong according to my current values. That's what I mean by wrong: against my current values. Because it is wrong, any path reaching it must have an error in it somewhere. We haven't had the technology to truly wirehead until quite recently, though various addictions can be approximations. Currently, there's not enough wireheads, or addicts for that matter, to make much of a difference. Those that are wireheads want nothing more than to be wireheads, so I'm not sure that they would effect anything else under CEV. That's one of the horrors of wireheading -- all other values become lost. What we would have to worry about is a proselytizing wirehead, who wishes everyone else would convert. That seems an even harder end-state to reach than a simple wirehead. Personally, I don't want CEV applied to the whole human race. I think large swathes of the human race hold values that conflict badly with mine, and still would after perfect reflection. Wireheads would just be a small subset of that.
Rain (+2, 14y)
One of my intuitions about human value is that it is highly diverse, and any extrapolation will be unable to find consensus / coherence in the way desired by CEV. As such, I've always thought that the most likely outcome of augmenting human value through the means of successful FAI would be highly diverse subpopulations all continuing to diverge, with a sort of evolutionary pressure for who receives the most resources. Wireheads should be easy to contain under such a scenario, and would leave expansion to the more active groups.
PhilGoetz (+1, 14y)
I was reverting to my meaning of "wireheading". Sorry about that. We agree on that.

I think one problem with CEV is that, to buy into CEV, you have to buy into this idea you're pushing that values are completely subjective. This brings up the question of why anyone implementing CEV would want to include anybody else in the subset whose values are being extrapolated. That would be an error.

You could argue that it's purely pragmatic - the CEVer needs to compromise with the rest of the world to avoid being crushed like a bug. But, hey, the CEVer has an AI on its side. You could argue that the CEVer's values include wanting to make other people happy, and that the CEVer believes it can do this by incorporating their values. There are 2 problems with this:

  • They would be sacrificing a near-infinite expected utility from propagating their values over all time and space, for a relatively infinitesimal one-time gain of happiness on the part of those currently alive here on Earth. So these have to be CEVers with high discounting of the future. Which makes me wonder why they're interested in CEV.
  • Choosing the subset of people who manage to develop a friendly AI and set up CEV strongly selects for people who have the perpetuation of values as their dominant value. If someone claims that he will incorporate other people's values in his CEV at the expense of perpetuating his own values because he's a nice guy, you should expect that he has to date put more effort into being a nice guy than into CEV.
RobinZ (0, 14y)
I think I see your point: a wireheading utility function would value (1) for providing the reward with less effort, while a nonwireheading utility function would disvalue (1) for providing the reward without the desideratum.
Strange7 (0, 14y)
You should define 'qualia,' then, in such a way that makes it clear how they're causally isolated from the rest of the universe.
PhilGoetz (0, 14y)
I didn't say they were causally isolated. If you think that the notion of "qualia" requires them to be causally isolated from the universe (which is my guess at why you even bring the idea up), then the burden is on you to explain why everyone who discusses consciousness except Daniel Dennett is silly.
Strange7 (-3, 14y)
In that case, nothing can be said to depend only on the qualia, because anything that depends on them is also indirectly influenced by whatever the qualia themselves depend on.
PhilGoetz (0, 14y)
When you say a function depends only on a set of variables, you mean that you can compute the function given the value of those variables.
Strange7 (-2, 14y)
Emotional responses aren't independent variables, they're functions of past and present sensory input.
PhilGoetz (0, 14y)
Are there any independent variables in the real world? Variables are "independent" given a particular analysis. When you say a function depends only on a set of variables, you mean that you can compute the function given the value of those variables. It doesn't matter whether those variables are dependent on other variables.
PhilGoetz (-3, 14y)
No. That statement is three comments above the comment in which you said I should acknowledge my error. It was already there when you wrote that comment. And I also acknowledged my misstatement in the comment you were replying to, and elaborated on what I had meant when I made the comment. Good! We agree. Good! We agree again. And we agree yet again! And here is where we part ways. Maybe there is no universal utility function. That's a... I won't say it's a reasonable position, but I understand its appeal. I would call it an over-reasoned position, like when a philosopher announces that he has proved that he doesn't exist. It's time to go back to the drawing board when you come up with that conclusion. Or at least to take your own advice, and stop trying to change the world when you've already said it doesn't matter how it changes. But to believe that your utility function is nothing special, and still try to take over the universe and force your utility function on it for all time, is insane. (Yes, yes, I know Eliezer has all sorts of disclaimers in the CEV document about how CEV should not try to take over the universe. I don't believe that it's logically possible; and I believe that his discussions of Friendly AI make it even clearer that his plans require complete control. Perhaps the theory is still vague enough that just maybe there's a way around this; but I believe the burden of proof is on those who say there is a way around it.) It would be consistent with the theory of utility functions if, in promoting CEV, you were acting on an inner drive that said, "Ooh, baby, I'm ensuring the survival of my utility function. Oh, God, yes! Yes! YES!" But that's not what I see. I see people scribbling equations, studying the answers, and saying, "Hmm, it appears that my utility function is directing me to propagate itself. Oh, dear, I suppose I must, then." That's just faking your utility function. I think it's key that the people I'm speaking of who believe
JGWeissman (+2, 14y)
Let's recap. You made a wrong claim. I responded to the wrong claim. You disputed my response. I refuted your disputation. You attempted to defend your claim. I responded to your defense. You edited your defense by replacing it with the acknowledgment of your mistake. You responded to my response still sort of defending your wrong claim, and attacking me for refuting your wrong claim. I defended my refutation, pointing out that you really did make the wrong claim and continued to defend it. And now you attack my defense, claiming that you did in fact acknowledge your mistake, and this should somehow negate your continued defense after the acknowledgement. Do you see how you are wrong here? When you acknowledge your claim is wrong, you should not at the same time criticize me for refuting your point.

I do believe my utility function is special. I don't expect the universe (outside of me, my fellow humans, and any optimizing processes we spawn off) to agree with me. But, like Eliezer says, "We'll see which one of us is still standing when this is over."
PhilGoetz (-2, 14y)
No, that isn't what happened. I'm not sure which comment the last sentence is supposed to refer to, but I'm p > .8 it didn't happen that way. If it's referring to the statement, "Okay, you don't actually need randomness," I wrote that before I ever saw your first response to that comment. But that doesn't match up with what you just described; there weren't that many exchanges before that comment. It also doesn't match up with anything after that comment, since I still don't acknowledge any such mistake made after that comment. We're talking about 2 separate claims. The wrong claim that I made was in an early statement where I said that you "needed randomness" to explore the space of possible utility functions. The right claim that I made, at length, was that randomness is a useful tool. You are conflating my defense of that claim, with defending the initial wrong claim. You've also said that you agree that randomness is a useful tool, which suggests that what is happening is that you made a whole series of comments that I say were attacking claim 2, and that you believe were attacking claim 1.
Strange7 (+1, 14y)
I'm not planning to tile the universe with myself, I just want myself or something closely isomorphic to me to continue to exist. The two most obvious ways to ensure my own continued existence are avoidance of things that would destroy me, particularly intelligent agents which could devote significant resources to destroying me personally, and making redundant copies. My own ability to copy myself is limited, and an imperfect copy might compete with me for the same scarce resources, so option two is curtailed by option one. Actual destruction of enemies is just an extension of avoidance; that which no longer exists within my light-cone can no longer pose a threat. Your characterization of my utility function as arbitrary is, itself, arbitrary. Deal with it.
2Strange714y
That description could apply to an overwhelming majority of the possible self-consistent utility functions (which are, last I checked, infinite in number), including all of those which lead to wireheading. Please be more specific.
-1PhilGoetz14y
Utility function #311289755230920891423. Try it. You'll like it. I have no solution to wireheading. I think a little wireheading might even be necessary. Maybe "wireheading" is a necessary component of "consciousness", or "value". Maybe all of the good places lie on a continuum between "wireheading" and "emotionless nihilism".
1Strange714y
Fallacy of moderation. Besides, wireheading and self-destructive nihilism aren't opposite extremes on a spectrum, they're just failure states within the solution space of possible value systems. A string of random numbers is not an explanation. I have a simple solution to wireheading... simple for me, anyway. I don't like it, so I won't seek it out, nor modify myself in any way that might reasonably cause me to like it or want to seek it out.
0PhilGoetz14y
The fallacy of moderation is only a fallacy when someone posits that two things are on a continuum when they aren't actually on a continuum. (If they are on a continuum, it's only a fallacy if you have independent means for finding a correct answer to the problem that the arguing groups have made errors on, rather than simply combining their utility functions.) The question I'm raising is whether wireheading is in fact just an endpoint on the same continuum that our favored states lie on.

How do you define wireheading? I define it as valuing your qualia instead of valuing states of the world. But could something that didn't value its qualia be conscious? Could it have any fun? Would we like to be it? Isn't valuing your qualia part of the definition of what a quale is?
-3thomblake14y
Your mom's an indictment of the Less Wrong community.
2MichaelVassar14y
I also dislike tweaks, but I think that Eliezer does too. I certainly don't endorse any sort of tweak that I have heard and understood.
3Nick_Tarleton14y
FWIW, Eliezer seems to have suggested an anti-selfish-bastard tweak here.
1MichaelVassar14y
Thanks! I'm unhappy to see that, but my preferences are over states of the world, not beliefs, unless they simply strongly favor the belief that they are over states of the world. Fortunately, we have some time, but that does bode ill I think. OTOH, the general trend, though not the universal trend, is for CEV to look more difficult and stranger with time.
0NancyLebovitz14y
I don't trust CEV. The further you extrapolate from where you are, the less experience you have with applying the virtue you're trying to implement.
3MichaelVassar14y
So you would like experience with the interactions through which our virtues unfold and are developed to be part of the extrapolation dynamic? (See: http://www.google.com/search?q=%22grown+up+further+together%22&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a) That always was intended, I think. If that's not what you mean, well, if you can propose alternatives to CEV that don't automatically fail and which also don't look to me like variations on CEV, I think you will be the first to do so. CEV is terribly underspecified, so it's hard to think hard about the problem and propose something that doesn't already fall within the current specification.
-3PhilGoetz14y
There are several grounds for criticism here.

Criticizing CEV by saying, "I think CEV will lead to good dogs, because that's what a lot of people would like," sounds valid to me, but would merit more argumentation (on both sides).

Another problem I mentioned is a possibly fundamental problem with CEV. Is it legitimate to say that, when CEV assumes that reasoned extrapolation trumps all existing values, that is not the same as asserting that reason is the primary value? You could argue that reason is just an engine in service of some other value. There's some evidence that that actually works, as demonstrated by the theologians of the Roman Catholic Church, who have a long history of using reason to defeat reason. But I'm not convinced that makes sense. If it doesn't, then it means that CEV already assumes from the start the very kind of value that its entire purpose is to prevent being assumed.

Third, most human values, like dog-values, are neutral with respect to rationality or threatened by rationality. The dog itself needs to not be much more rational or intelligent than it is. The only solution is to say that the rationality and the values are in the FAI sysop, while the conscious locus of the values is in the humans. That is, the sysop gets smarter and smarter, with dog-values as its value system. It knows that to get the experiential value out of dog-values, the conscious experiencer needs limited cognition; but that's okay, because the humans are the designated experiencers, while the FAI is the designated thinker and keeper-of-the-values.

There are two big problems with this.

1. By keeping the locus of consciousness out of the sysop, we're steering dangerously close to one of the worst-possible-of-all-worlds, which is building a singleton that, one way or the other, eventually ends up using most of the universe's computational energy, yet is not itself conscious. That's a waste of a universe.

2. Value systems are deictic, meaning they use the word…
0simplicio14y
You have a point here. But as you mentioned, we aren't really capable of such a state, nor would it be virtuous to chase after one. You guys have totally lost me with this AI stuff. I guess there's probably a sequence on it somewhere...

I tend to think that the hazard of perverse response to materialism has been fairly adequately dealt with in this community. OTOH, the perverse response to psychology has not. The fact that something is grounded in "status seeking", "conditioning", or "evolutionary motives" generally no more deprives the higher or more naive levels of validity or reality than does materialism, hence my quip that "I believe exactly what Robin Hanson believes, except that I'm not cynical".

3NancyLebovitz14y
If anyone's addressed the interaction between status-seeking, conditioning, and/or evolved drives and the fact that people manage to do useful and sometimes wonderful things anyway, I haven't seen it.
5MichaelVassar14y
I'm just confused. Those terms are short-hand for a model that exists to predict the world. If that model doesn't help you to predict the world, throw the model out, just don't bemoan the world fitting the model if and when it does fit. The world is still the world, as well as being a thing described by a model that is typically phrased cynically.
2simplicio14y
I was very tempted to include evo-psych in my list, but decided it probably warrants more than a cursory treatment.

You only included the last sentence of Dawkins' quote. Here's the full quote:

The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.

The universe is perverse. You have to learn to love it in spite of that.

The universe is perverse. You have to learn to love it in spite of that.

What? Why would you love the indifferent universe? It has to be transformed.

Right. Materialism tells us that we're probably going to die and it's not going to be okay; the right way to feel good about it is to do something about it.

1BenAlbahari14y
My attitude is easier to transform than the universe's attitude.

Maybe easier, but is it the right thing to do? Obvious analogy is wireheading. See also: Morality as Fixed Computation.

5Nick_Tarleton14y
Emotions ≠ preferences. It may be that something in the vague category "loving the universe" is (maybe depending on your personality) a winning attitude (or more winning than many people's existing attitudes) regardless of your morality. (Of course, yes, in changing your attitude you would have to be careful not to delude yourself about your preferences, and most people advocating changing your attitude don't seem to clearly make the distinction.)
9Vladimir_Nesov14y
I certainly make that distinction. But it seems to me that "loving" the current wasteland is not an appropriate emotion. Wireheading is wrong not only when/because you stop caring about other things.

But it seems to me that "loving" the current wasteland is not an appropriate emotion.

Granted. It seems to me that the kernel of truth in the original statement is something like "you are not obligated to be depressed that the universe poorly satisfies your preferences", which (ISTM) some people do need to be told.

Since when has being "good enough" been a prerequisite for loving something (or someone)? In this world, that's a quick route to a dismal life indeed.

There's the old saying in the USA: "My country, right or wrong; if right, to be kept right; and if wrong, to be set right." The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken--the danger lies not in the emotion, but in failing to heal the damage. It may be a crapsack universe out there, but it's still our sack of crap.

By all means, don't look away from the tragedies of the world. Figuratively, you can rage at the void and twist the universe to your will, or you can sit the universe down and stage a loving intervention. The main difference between the two, however, is how you feel about the process; the universe, for better or worse, really isn't going to notice.

0[anonymous]14y
Insisting on being unhappy that the universe poorly satisfies your preferences is certainly contrary, if not perverse. Of course, humans greatly value their ability to imagine and desire that the universe be different. This desire might only be perverse if it is impossible to modify the universe to satisfy your preferences. This is the situation that dissatisfied materialists could find themselves in: a materialistic world is a world that cannot be modified to suit their preferences. [last paragraph taken out as off-topic and overly speculative]
0[anonymous]14y
Emotions ≠ preferences. It seems likely to me that loving the universe is (maybe depending on your personality) a winning attitude (or is more winning than many people's attitudes) regardless of your morality.
-9byrnema14y

The amount of pain in nature is immense. Suffering? I'm not so sure. That's a technical question, even if we don't yet know how to ask the right question. A black widow male is certainly in pain as it's eaten but is very likely not suffering. Many times each day I notice that I have been in pain that I was unaware of. The Continental Philosophy and Women's Studies traditions concern themselves with suffering that people aren't aware of, but don't suggest that such suffering comes in varieties that many animals could plausibly experience.

4BenAlbahari14y
This belief people have that "beings kinda different to me" aren't suffering strikes me as near-far bias cranked up to 11. Perhaps you don't notice the pain because it's relatively minor. I'm assuming you didn't have your leg chewed off.
8orthonormal14y
In some people, perhaps that is the reasoning; but there really is more to this discussion than anthropocentrism. Suffering as we experience it is actually a very complicated brain activity, and it's virtually certain that the real essence of it is in the brain structure rather than the neurotransmitters or other correlates. AFAIK, the full circuitry of the pain center is common to mammals, but not to birds (I could be wrong), fish, or insects. Similar neurotransmitters to ours might be released when a bug finds itself wounded, and its brain might send the impulse to writhe and struggle, but these are not the essence of suffering. (Similarly, dopamine started out as the trigger for reinforcing connections in very simple brains, as a feedback mechanism for actions that led to success which makes them more likely to execute next time. It's because of that role that it got co-opted in the vast pleasure/reward/memory complexes in the mammalian brain. So I don't see the release of dopamine in a 1000-neuron brain to be an indication that pleasure is being experienced there.)
4BenAlbahari14y
I agree with your points on pain and suffering; more about that in an earlier Less Wrong post here. However, reducing the ocean of suffering still leaves you with an ocean. And that suffering is in every sense of the word perverse. If you were constructing a utopia, your first thought would hardly be "well, let's get these animals fighting and eating each other". Anyone looking at your design would exclaim: "What kind of perverse utopia is that?! Are you sick?!". Now, it may be the case that you could give a sophisticated explanation as to why that suffering was necessary, but it doesn't change the fact that your utopia is perverted. My point is we have to accept the perversion. And denying perversion is simply more perversion.
3MichaelVassar14y
To specify a particular theory, my guess is that suffering is an evolved elaboration on pain unique to social mammals or possibly shared by social organisms of all sorts. It seems likely to me to basically mediate an exchange of long-term status for help from group members now.
3BenAlbahari14y
Perhaps: pain is near-mode; suffering is far-mode.

Scenario: my leg is getting chewed off.

Near-mode thinking: direct all attention to attempt to remove the immediate source of pain / fight or flight / (instinctive) scream for attention.

Far-mode thinking: reevaluate the longer-term life and social consequences of having my leg chewed off / dwell on the problem in the abstract.
1orthonormal14y
I agree with this point, and I'd bet karma at better than even odds that so does Michael Vassar.
5MichaelVassar14y
I agree, but I wonder if my confidence in my extrapolation agreeing is greater or less than your confidence in my agreeing was. I tend to claim very much greater than typical agnosticism about the subjective nature of nearby (in an absolute sense) mind-space. I bet a superintelligence could remove my leg without my noticing, and I'm curious as to the general layout of the space of ways in which it could remove my leg and have me scream and express horror or agony at my leg's loss without my noticing.

I really do think that at a best guess, according to my extrapolated values, human suffering outweighs that of the rest of the biosphere, most likely by a large ratio (best guess might be between one and two orders of magnitude). Much more importantly, at a best guess, human "unachieved but reasonably achievable without superintelligence" flourishing outweighs the animal analog by many orders of magnitude, and if the two can be put on a common scale I wouldn't be surprised if the former is a MUCH bigger problem than suffering.

I also wouldn't be shocked if the majority of total suffering in basically Earth-like worlds (and thus the largest source of expected suffering given our epistemic state) comes from something utterly stupid, such as people happening to take up the factory farming of some species which happens, for no particularly good reason, to be freakishly capable of suffering. Sensitivity to long tails tends to be a dominant feature of serious expected utility calculus given my current set of heuristics. The modal dis-value I might put on a pig living its life in a factory farm is under half the median, which is under half the mean.
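As a rough numerical illustration of how a long-tailed distribution produces that ordering, consider a lognormal with made-up parameters (the mu and sigma below are arbitrary assumptions, not anyone's actual estimate): the mode comes out under half the median, which comes out under half the mean.

```python
import math

# Hypothetical lognormal over the dis-value of one pig-life in a factory farm.
# Parameters are illustrative assumptions only.
mu, sigma = 0.0, 1.3

mode = math.exp(mu - sigma ** 2)        # ~0.18
median = math.exp(mu)                   # 1.00
mean = math.exp(mu + sigma ** 2 / 2)    # ~2.33

print(f"mode={mode:.2f}, median={median:.2f}, mean={mean:.2f}")

# With a tail this heavy, the mode is under half the median,
# which in turn is under half the mean.
assert mode < median / 2 < mean / 4
```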
5Nick_Tarleton14y
That's surely a common reason, but are you sure you're not letting morally loaded annoyance at that phenomenon prejudice you against the proposition? The cognitive differences between a human and a cow or a spider go far beyond "kinda", and, AFAIK, nobody really knows what "suffering" (in the sense we assign disutility to) is. Shared confusion creates room for reasonable disagreement over best guesses (though possibly not reasonable disagreement over how confused we are). (See also.)
5Morendil14y
It doesn't take much near-thinking to draw a distinction between "signals to our brain that are indicative of damage inflicted to a body part" on the one hand, and "the realization that major portions of our life plans have to be scrapped in consequence of damaged body parts" on the other. The former only requires a nervous system, the latter requires the sort of nervous system that makes and cares about plans.

Yes, but that assumes this difference is favorable to your hypothesis. David Foster Wallace from "Consider The Lobster":

Lobsters do not, on the other hand, appear to have the equipment for making or absorbing natural opioids like endorphins and enkephalins, which are what more advanced nervous systems use to try to handle intense pain. From this fact, though, one could conclude either that lobsters are maybe even more vulnerable to pain, since they lack mammalian nervous systems’ built-in analgesia, or, instead, that the absence of natural opioids implies an absence of the really intense pain-sensations that natural opioids are designed to mitigate. I for one can detect a marked upswing in mood as I contemplate this latter possibility...

The entire article is here and that particular passage is here. And later:

Still, after all the abstract intellection, there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience. To my lay mind, the lobster’s behavior in the kettle

... (read more)

In this last paragraph (which btw is immediately preceded, in the article, by an observation strikingly similar to mine in the grandparent), I would argue that "frantically" and "pathetic" are projections: the emotions they refer to originate in the viewer's mind, not in the lobster's.

We are demonstrably equipped with mental mechanisms whereby we can observe behaviour in others, and as a result of such observations we can experience "ascribed emotions", which can sometimes take on an intensity not far removed from the sensations that originate in ourselves. That's where our intuition that the lobster is in pain comes from.

Later in the article, the author argues that lobsters "are known to exhibit preferences". Well, plants are known to exhibit preferences; they will for instance move so as to face the sun. We do not infer that plants can experience suffering.

We could build a robot today that would sense aspects of its surroundings such as elevated temperature, and we could program that robot to give a higher priority to its "get the hell away from here" program when such conditions obtained. We would then be in a position to observe the robot doing the same thing as the lobster; we would, quite possibly, experience empathy with the robot. But we would not, I think, conclude that it is morally wrong to put the robot in boiling water. We would say that's a mistake, because we have not built into the robot the degree of personhood which would entitle it to such consideration.
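For concreteness, here is a minimal sketch of the sort of robot program being described. Everything in it (the threshold, the sensor stub, the behavior names) is invented for illustration; the point is only that a fixed rule pre-empting default behavior on a hot reading contains nothing that looks like suffering.

```python
HEAT_THRESHOLD_C = 60.0  # invented threshold for this sketch


def read_temperature_c() -> float:
    """Stub sensor; a real robot would poll hardware here."""
    return 25.0


def control_step() -> str:
    # Plain stimulus-response: avoidance pre-empts everything else.
    if read_temperature_c() > HEAT_THRESHOLD_C:
        return "run 'get the hell away from here' routine"
    return "continue default behavior"


print(control_step())
```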

6RobinZ14y
cf. "The Soul of the Mark III Beast", Terrel Miedaner, included in The Mind's I, Dennett & Hofstadter.
3JenniferRM14y
Trust this community to connect the idea to the reference so quickly. "In Hofstadter we trust" :-) For those who are not helped by the citation, it turns out that someone thoughtfully posted the relevant quote from the book on their website. I recommend reading it, the story is philosophically interesting and emotionally compelling.
7Tyrrell_McAllister14y
The story was also dramatized in a segment of the movie Victim of the Brain, which is available in its entirety from Google Video. The relevant part begins at around 8:40. Here is the description of the movie:
4JenniferRM14y
That was fascinating. A lot of the point of the story - the implicit claim - was that you'd feel for an entity based on the way its appearance and behavior connected to your sympathy - like crying sounds eliciting pity. In text that's not so hard because you can write things like "a shrill noise like a cry of fright" when the simple robot dodges a hammer. The text used to explain the sound is automatically loaded with mental assumptions about "fright", simply to convey the sound to the reader. With video the challenge seems like it would be much harder. It becomes more possible that people would feel nothing for some reason. Perhaps for technical reasons of video quality or bad acting, or for reasons more specific to the viewer (desensitized to video violence?), or maybe because the implicit theory about how mind-attribution is elicited is simply false. Watching it turned out to be interesting on more levels than I'd have thought because I did feel things, but I also noticed the visual tropes that are equivalent to mind laden text... like music playing as the robot (off camera) cries and the camera slowly pans over the wreckage of previously destroyed robots. Also, I thought it was interesting the way they switched the roles for the naive mysterian and the philosopher of mind, with the mysterian being played by a man and the philosopher being played by a woman... with her hair pinned up, scary eye shadow, and black stockings. "She's a witch! Burn her!"
2khafra14y
Some Jains and Buddhists infer that plants can experience suffering. The stricter Jain diet avoids vegetables that are harvested by killing plants, like carrots and potatoes, in favor of fruits and grains that come voluntarily or from already-dead plants.
2Morendil14y
That's a preference of theirs; fine by me, but not obviously evidence-based.
3khafra14y
I don't mean to suggest that plants are clearly sentient, just that it's plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.
3Morendil14y
I'd agree with that sentence if you replaced the word "suffering", unsuitable because of its complex connotations, with "killing", which seems adequate to capture the Jains' intuitions as represented in the link above.
1RobinZ14y
Although it is relevant to note that the motive may be to avoid suffering - I wasn't there when the doctrine was formed, and haven't read the relevant texts, but it is possible that the presence of apparent preferences was interpreted as implying as much.

tuning your ears to the words “just” and “merely.”

Indeed! See also this classic essay by Jerry Weinberg on Lullaby Words. "Just" is one of them; can you think of others before reading the essay? ;)

9Richard_Kennaway14y
"Fundamentally" and all of its near-synonyms: "really", "essentially", "at bottom", "actually", etc. Usually, these mean "not". ("How was that party you went to last night?" "Oh, it was all right really.") ("Yes, I kidnapped you and chained you in my basement, but fundamentally, underneath it all, I'm essentially a nice guy.")
4Morendil14y
Good one. On a related note, I often find myself starting a sentence with "The fundamental issue" - and when I catch myself and ask if what I'm talking about is the single issue that in fact underlies all others, and answer myself "no" - then I revise the sentence to something like "One important issue"... Here the lullaby is in two parts: a) everything is less important than this thing, and b) there is only this one thing to care about. It's rarely the case that either is true, let alone both.
7CronoDAS14y
In mathematics, "obvious" is one of those words. It tends to mean "something I don't know how to justify."
6PeteSchult14y
A joke along these lines has the math professor claiming that the proof of some statement is trivial. They pause for a moment, think, then leave the classroom. Half an hour later, they come back and say, "Yes, it was trivial."
6RobinZ14y
I heard about a professor (I think physics) who was always telling his students that various propositions were "simple", despite the fact that the students always struggled to show them. Eventually, the students went to the TA (the one I heard the story from), who told the professor. So, the next class the professor said, "I have heard that the students do not want me to say 'simple'. I will no longer do so. Now, this proposition is straightforward..."

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, "And therefore such-and-such is true."

"Why is that?" the guy on the couch asks.

"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that; then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so..." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."

We phy

... (read more)
4nhamann14y
Most of the time I've run into the word "obviously" is in the middle of a proof in some textbook, and my understanding of the word in that context is that it means "the justification of this claim is trivial to see, and spelling it out would be too tedious/would disrupt the flow of the proof."

I thought the mathematical terms went something like this:

  • Trivial: Any statement that has been proven
  • Obviously correct: A trivial statement whose proof is too lengthy to include in context
  • Obviously incorrect: A trivial statement whose proof relies on an axiom the writer dislikes
  • Left as an exercise for the reader: A trivial statement whose proof is both lengthy and very difficult
  • Interesting: Unproven, despite many attempts
4CronoDAS14y
Well, that's what it's supposed to mean. One of my professors (who often waxed sarcastic during lectures) described it as a very dangerous word...
3kpreid14y
Do you really assert that it is more often used incorrectly (that the fact is not actually obvious)?

I assert that it ("obviously" in math) is most often used correctly, but that people spend more time experiencing it used incorrectly -- because they spend more time thinking about it when it is not obvious.

3CronoDAS14y
No, I guess not.
2CronoDAS14y
A list of common proof techniques. ;)
7NancyLebovitz14y
Voted up because that's an excellent link.
2Document11y
http://c2.com/cgi/wiki?AlarmBellPhrase
0[anonymous]14y
I failed to predict any of the words, but I didn't think about it for very long.

Although I found that scene hilarious, I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit. By all means, be depressed about your chronic indigestion or the Liberal Media or teenagers on your lawn, but not about an event that will happen in 10^14 years, involving a dramatis personae of burnt-out star remnants. Puh-lease. There is infinitely more tragedy happening every second in a cup of buttermilk.

So, what's your argument here? That we shouldn't care about the far future because it is temporally very removed from us? I personally deeply dislike this implication of modern cosmology, because it imposes an upper limit on sentience. I would much prefer that happiness continues to exist indefinitely than that it ceases to exist simply because the universe can no longer support it.

3khafra14y
Your personally being inconvenienced by the heat death of the universe is even less likely than winning the powerball lottery; if you wouldn't spend $1 on a lottery ticket, why spend $1 worth of time worrying about the limits of entropy? Sure, it's the most unavoidable of existential risks, but it's vanishingly unlikely to be the one that gets you.

Why should I only emotionally care about things that will affect me?

I don't see any good reason to be seriously depressed about any Far fact; but if any degree of sadness is ever an appropriate response to anything Far, the inevitability of death seems like one of the best candidates.

What I liked in a nutshell:

What would you prefer to be made of, if not matter?

On behalf of chemicals everywhere, I say: Screw you!

If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.

14[anonymous]12y

"Love is Wonderful biochemistry."

"Rainbows are a Wonderful refraction phenomena"

"Morality is a Wonderful expression of preference"

And so on. Let's go out and replace 'just' and 'merely' with 'wonderful' and assorted terms. Let's sneak Awesomeness into reductionism.

3EphemeralNight12y
This may be the wrong tack. As I pointed out above, I think it likely that the problem lies not in the nature of the phenomenon but in the way a person relates to the phenomenon emotionally. Particularly, that for natural accidents like rainbows, most people simply can't relate emotionally to the physics of light refraction, even if they sort of understand it.

So, I think a more effective tack would be to focus on the experience of seeing the rainbow, rather than the rainbow itself, because if a person is focusing on the rainbow itself, then they inevitably will be disappointed by the reductionist explanation supplanting their instinctive sense of there being something ontologically mental behind the rainbow. Because, however you word it, the rainbow is just a refraction phenomenon, but when you look at the rainbow and experience the sight of the rainbow there are lots of really awesome things happening in your own brain that are way more interesting than the rainbow by itself is.

I think trying to assign words like "just" or "wonderful" to physical processes that cause rainbows is an example of the Mind Projection Fallacy. So, let's not try to get people excited about what makes the rainbow. Let's try to get people excited about what makes the enjoyment of seeing one.
0VKS12y
It may be true that saying these things may not get everybody to see the beauty we see in the mechanics of those various phenomena. But perhaps saying "Rainbows are wonderful refraction phenomena" can help get across that even if you know that rainbows are refraction phenomena, you can still feel wonder at them in the same way as before. The wonder at their true nature can come later. I guess what I'm getting at is the difference between "Love is wonderful biochemistry" and "Love is a wonderful consequence of biochemistry". The second, everybody can perceive. The first, less so.
0EphemeralNight12y
This kind of touches my point. You're talking about two separate physical processes here, and I hold that the latter is the only one worth getting excited about. Or, at least the only one worth trying to get laypeople excited about.
0VKS12y
Eh, both phenomena are things we can reasonably get excited about. I don't see that there's much point in trying to declare one inherently cooler than the other. Different people get excited by different things. I do see, though, that so long as they think that learning about either the cause of their wonder or the cause of the rainbows will steal the beauty from them, no progress will be made on any front. What I'm trying to say is that once that barrier is down, once they stop seeing science as the death of all magic (so to speak), then progress is much easier. Arguably, only then should you be asking yourself whether to explain to them how rainbows work or why one feels wonder when one looks at them.
0EphemeralNight12y
Okay, maybe we need to taboo "excited". This right here is at the crux of my point. I am predicting that, for your average neurotypical, explaining their wonder produces significantly less feeling of stolen beauty than explaining the rainbow. Because, in the former case, you're explaining something mental, whereas in the latter case, you're explaining something mental away. The rainbow may still be there, but its status as a Mentally-Caused Thing is not.
0VKS12y
If people react badly to having somebody explain how their love works, what makes you think that things will go better with wonder? And, in a different mental thread, I'm going to posit that really, what you talk about matters much less than how you talk about it, in this context. You can (hopefully) get the point across by demonstrating by example that wonder can survive (and even thrive) after some science. At least if, as I suspect, people can perceive wonder through empathy. So, if you feel wonder, feel it obviously and try to get them to do so also. And just select whatever you feel the most wonder at. Less dubiously, presentation is fairly important to making things engaging. Now, I would guess that the more familiar you are with a subject, the easier it becomes to make it engaging. So select whether you explain the rainbow or the wonder of rainbows based on that. Maybe. I'm speculating.
0[anonymous]12y
That is an interesting analysis. I think I might view "just" and "wonderful" more like physically null words, so as to say they do not have any meaning beyond interpretation. I guess I am just getting too rational for interacting with normal people's psychology purely by typical-mindedness.
0A1987dM12y
This reminds me of something, though I can't remember for sure which something it was.
0MarkusRamikin12y
Didn't know we were into affirmations around here. I'm gonna need me some pepto...
-1[anonymous]12y
I am big on simple tricks to raise the sanity waterline.

From what I can tell, my framing depends upon my emotions more than the reverse, though there's a bit of a feedback cycle as well.

That is to say, if I am feeling happy on a sunny day, I will say that the amazing universe is carrying me along a bright path of sunshine and joy, providing light to dark places, and friendly faces to accompany me, and holy crap that sunlight's passing millions of miles to warm our lives, how awesome is that?

But if I am feeling depressed on that very same day, I will say that the sun's radiation is slowly breaking down the atoms of my weak flesh on the path toward decay and death while all energy slips into entropy and... well, who really cares, anyway?

The art of not being perverse consists in seeing the same reality as others and agreeing about facts, but perceiving more in an aesthetic sense.

If emotions drive the words, as I feel they do, then this statement, while true, comes from the bright side: "Say happy things, look at the world in a happy way, and you, too, will be happy!"

My dark side disagrees: "There's yet another happy person telling me I shouldn't be depressed, because they're not, and it's not so hard, is it? Great. Thanks for all your help. "

2simplicio14y
I understand how it might sound like that. Of course a sunny disposish is not always possible or even desirable - cheeriness can be equally self-indulgent, and in many ways nature really is trying to kill us. But there are some fact questions that people feel bad about quite gratuitously. That's what I would like to change. These are the obstacles to human contentedness that people only encounter if they actually go out looking for obstacles, looking for something to feel bad about. There's lots to legitimately be upset about in this world, lots of suffering endured by people not unlike us. We don't need extra suffering contrived ex nihilo by our minds.

I can only conclude that if I took them to see Seurat’s painting “A Sunday Afternoon on the Island of La Grande Jatte," they would earnestly ask me what on earth the purpose of all the little dots was.

... which we might call the disappointment of explicability. “A rainbow is just light refracting.” “The aurora is only a bunch of protons hitting the earth’s magnetic field.” Rationalists are, sadly, not immune to this nasty little meme.

It occurred to me upon reading this, that perhaps your analogy about the painting is overlooking something importan... (read more)

0Friendly-HI12y
Brilliant train of thought; there may very well be something to this idea. I used the painting analogy myself in debating anti-materialists but could always see how that analogy didn't really satisfy them the way it satisfied me, and you've possibly given a valuable clue why.

In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation, meaning fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions or spiritual ideologies is some sort of preservation of 'soul', whether it be a fully platonic heaven like the Christian belief, a more material resurrection like the Jewish idea, or more abstract ideas found in Eastern a... (read more)

If we are nothing but matter in motion, mere chemicals, then there are only molecules drunkenly bumping into each other and physicists are superstitious fools for believing in the macroscopic variables of thermodynamics such as temperature and pressure.

I find the philosophical position of "nothing buttery" silly because, in the name of materialist reductionism, it asks us to give up thermodynamics. It is indeed an example of perverse-mindedness.

6simplicio14y
Not sure what you're arguing against here. Temperature and pressure are explicable in terms of "molecules drunkenly bumping into each other." Or have I misunderstood?
8AlanCrowe14y
The "Nothing But" argument claims that the things explained by materialistic reduction are explained away. In particular, the "Nothing But" argument claims that materialistic reduction, by explaining love and morality and meaning thereby explains them away, destroying them. The flaw I see in the "Nothing But" argument is that materialistic reduction also explains temperature and presssure. If to explain is necessarily to explain away then the "Nothing But" argument is not merely claiming that materialistic reduction is trashing love and beauty, the "Nothing But" argument is also claiming that materialistic reduction is trashing temperature and pressure. That is a silly claim and shows that there must be something wrong with the "Nothing But" argument. I think that there is a socially constructed blind spot around this point. People see that the "Nothing But" argument is claiming that materialistic reduction destroys love, beauty, temperature, and pressure. However claiming that materialistic reduction destroys temperature and pressure is silly. If you acknowledge the point then the "Nothing But" argument is obviously silly, which leaves nothing to discuss, and this is blunt to the point of rudeness. So, for social reasons, we drop the last two and let the "Nothing But" argument make the more modest claim that materialistic reduction destroys love and beauty. Then we can get on with our Arts versus Science bun fight. In brief, I'm agreeing with you. I just wanted to add a striking example of a meaning above the base level of atoms and molecules. You do not have to look at a pointillist painting to experience the reality of something above the base level. It is enough to breath on your hand and feel the pressure exerted by the warm air.
0simplicio14y
Oh, I see, sorry for the misunderstanding. Yes! Excellent point. I'm not even sure what "explaining away" means, for that matter. It seems to be another one of these notions that comes with a value judgment dangling from it.
6JGWeissman14y
http://lesswrong.com/lw/oo/explaining_vs_explaining_away/

It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!

I recall studies showing that major positive/negative events in people's lives don't really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.

4ktismael14y
I recall reading (One of Tyler Cowen's books, I think) that happiness is highly correlated with capacity for self-deception. In this case, positive / negative events would have little impact, but not necessarily because people accepted them, but more because the human brain is a highly efficient self-deception machine. Similarly, a tendency toward depression correlated with an ability to make more realistic predictions about one's life. So I think it may in fact be a particular aspect of human psychology that encourages self-deception and responds negatively to reality. None of this is to say that these effects can't be reduced or eliminated through various mental techniques, but I don't think it's sufficient to just assert it as cultural.
0CronoDAS14y
That's a pretty good line!

Let's talk about worldviews and the sensibilities appropriate to them. A worldview is some thesis about the nature of reality: materialism, solipsism, monotheism, pantheism, transhumanism, etc. A sensibility is an emotion or a complex of emotions about life.

Your thesis is: rationalist materialism is the correct worldview; its critics say negative things about its implications for sensibility; and some of us are accepting those implications, but incorrectly. Instead we can (should?) feel this other way about reality.

My response to all this is mostly at th... (read more)

3simplicio14y
I think I see what you're saying. However, I feel that hoping for ultimate realities undreamt-of hitherto is giving too much weight to one's own wishes for how the universe ought to be. There is no reason I can think of why the grand nature of reality has to be "richer" than physics (whatever that means). This reality, whether it inspires us or not, is where we find ourselves. Well now, I hope you were being facetious when you implied materialists believe that feelings are molecules. You are allowed to be unimpressed by materialist accounts of subjectivity, of course. However, you should seriously consider what kind of account would impress you. An account of subjectivity or consciousness or whatever is kind of like an explanation of a magic trick. It often leaves you with a feeling of "that can't be the real thing!"
1torekp14y
Doesn't this support simplicio's thesis? If there's little connection to knowledge - which I take to mean that neither emotional response follows logically from the knowledge - then epistemic rationality is consistent with joy. And where epistemic rationality is not at stake, instrumental rationality favors a joyful response, if it is possible.

We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.

Your interpretation of this is overly charitable. The analogy to the apple tree makes it basically teleological; as apples define an apple tree, people define the earth. This phrasing implies a sort of purpose, importance (how important are apples to an apple tree?) and moral approval. Also, "We are not born into this world" is a false statement. And the process by which the earth generates people is pretty much nothing like the way in which an apple tree produces apples.

0CronoDAS14y
I think you misspoke there...
0Psychohistorian14y
Touche. Fixed.

I take exception to this passage, and feel that it is an unnecessary attack:

I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit.

6SoullessAutomaton14y
It's a reasonable point, if one considers "eventual cessation of thought due to thermodynamic equilibrium" to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?
8orthonormal14y
There are plenty of transhumanists here who believe that (with some nonnegligible probability) the heat death of the universe will be the relevant upper bound on their experience of life.
6SoullessAutomaton14y
Which is fair enough I suppose, but it sounds bizarrely optimistic to me. We're talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.
3Rain14y
I believe we have a duty to attempt to predict the future as far as we possibly can. I don't see how we can take moral or ethical stances without predicting what will happen as a result of our actions.
1billswift14y
We need to predict as far as we can; ethical decision making requires that we take into account all foreseeable consequences of our actions. But with the unavoidable complexity of society, there are serious limits as to how far it is reasonable even to attempt to look ahead; the impossibility of anyone (or even a group) seeing very far is one reason centralized economies don't work. And the complexity of all social interactions is at least an order of magnitude greater than that of strictly economic interactions.
6Rain14y
I've been trying to think of a good way to explain my problem with evaluation of [utility | goodness | rightness] given that we're very bad at predicting the future. I haven't had much luck at coming up with something I was willing to post, though I consider the topic extremely important. For example, how much effort should Clippy put into predicting and simplifying the future (basic research, modeling, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips? The answer "however much it predicts will be useful" seems like a circular problem.
0billswift14y
They are circular problems; they share a general structure with adaptation problems, though, and I have found reading serious books on evolution (some of Dawkins's are particularly good) and on economics (try Sowell's Knowledge and Decisions) to be helpful. These types of problems cannot be solved; at best you can only get incrementally improved answers, depending on the costs of acquiring and analyzing further information versus the expected value of that information.
0simplicio14y
I'm sorry you feel that way but, to be honest, I don't repent of my statement. I simply can't imagine why the ultimate fate of an (at that point uninhabited) cosmos should matter to a puny hoo-man (except intellectually). It's like a mayfly worrying about the Andromeda galaxy colliding with the Milky Way. I think the confusion here is similar to the fear of being dead (not fear of dying). You sort of imagine how horrible it'll be to be a corpse, just sitting around in a grave. But there will be no one there to experience how bad being dead is, and when the universe peters out in the end, no one will be there to be disappointed. If you care emotionally about entropic heat death, you should logically also feel bad every time an ice cube melts.
1Rain14y
I care about what to measure (utility function) as much as I care about when to measure it (time function). For any measure, there's a way to maximize it, and I'd like to see whatever measure humans decide is appropriate to be maximized across as much time as possible. So worrying about far future events is important insofar as I'd like my values to be maximized even then. As for worrying about ice cubes, you're right, it would be inconsistent of me to say otherwise, so I will say that I do. However, I apply a weighted scale of care, and our future galactic empire tends to weigh pretty heavily when compared with something like that. ETA: Care about ice cube loss is so small I can't feel it. Dealing with entropy / resource consumption, my caring gets large enough I can start feeling it around the point of owning and operating large home appliances, automobiles, etc., and ramps up drastically for things like inefficient power plants, creating new humans, and war.

On behalf of chemicals everywhere, I say: Screw you! Where would you be without us?

As Monsanto (and some of my user friends :-) ) tells us, "Without chemicals, life itself would be impossible."

More seriously, this post voiced some of the things I've been thinking about lately. It's not that it doesn't all reduce to physics in the end, but the reduction is complicated and probably non-linear, so you have to look at things in a given domain according to the empirically based rules for that domain. Even in chemistry (at least beyond the hydrogen ... (read more)

2orthonormal14y
Let me be the first to say, Welcome to Less Wrong! You're quite right, and your comment touches on some of the topics of the reductionism sequence here, in particular the eponymous post.

So facts can fester because you only allow yourself to judge them by their truthfulness, even though your actual relation with them is of a nonfactual nature.

One I had problems with: Humans are animals. It's true, isn't it?! But it's only bothering people for its stereotypical subtext. "Humans are like animals: mindless, violent and dirty."

Festering facts?

3orthonormal14y
Ah yes, it's time to dust off YSITTBIDWTCIYSTEIWEWITTAW again. Er, make that ADBOC.
0CytokineStorm14y
Oops, sorry about that.
1wedrifid14y
Well... that's got a significant element of truth to it too, but I need not be bothered about that either.

Bravo for an excellent post!

The one point I want to make is that gloominess is our natural emotional response to many reductionist truths. It is difficult not to see a baseless morality in evolution, hard not to feel worthless before the cosmos, challenging not to perceive meaninglessness in chemical neurology. Perhaps realising the fallacies of these emotional conclusions must necessarily come after the reductionist realisations themselves.

8Eliezer Yudkowsky14y
I'd still deny this. You need the right (wrong) fallacies to jump to those conclusions. Maybe the fallacies are easy to invent, or maybe our civilization ubiquitously primes people with them, but it still takes an extra and mistaken step.
7byrnema14y
I would call it a conundrum, rather than a fallacy. If my terminal values are impossible to satisfy in a materialistic world, then I'm just out of luck, not factually wrong.
1BenAlbahari14y
What if the priming is developmental? I wonder if there are any parents out there who have tried to bring up their kids with rational beliefs. E.g. no lies about "bunny heaven"; instead take the kid on a field-trip to a slaughterhouse. And if so, how did it affect how well adjusted the kids were?
5NancyLebovitz14y
Insulating children from death is a relatively modern behavior. For a long time, most people grew up around killing animals for food, and there was still religion.
2Strange714y
For this to really work, I think it would require more cultural support than just one set of parents. Maybe something like the school system and interactive history museum Sachisuke wrote about?
0Strange713y
Of the people who voted this up, I am curious: How much of Sachisuke Masamura's work have you read? PM me.
1Alex Flint14y
I agree. I think it is the particularities of human psychology that lead people to such conclusions. The gloomy conclusions are in no way inherent in the premises.
3Nick_Tarleton14y
I think Eliezer is claiming that human psychology does not lead to those conclusions; culturally transmitted errors are required.

We're evolutionarily optimized for the savannah, not for the stars. It doesn't seem to me that our present selves are really as capable of being effortlessly content with our worldview as some of our forebears were, because we have some lingering Wrong Questions and wrong expectations written into our minds. Some part of us really wants to see agency in the basic causal framework of our lives, as much as we know this isn't so.

Now that's not a final prescription for hopelessness, because we can hope not to be running on the same bug-riddled brainware for ... (read more)

3Roko14y
Speak for yourself... that never occurred to me as something natural or something I "Wanted to see".
0Sniffnoy14y
I think you have an extra negation in the first sentence of your last paragraph?
0orthonormal14y
No, I think it's right as written. Our religious next-door neighbor may not feel disillusioned, and we might, and this is not necessarily a moral failing in us.
0Sniffnoy14y
Oh, whoops. I accidentally read the "does" as a "doesn't", reading the extra negation right into there...

I completely disagree with your post, but I really appreciate it. Perhaps as an artful and accurate node of what people who are satisfied or not satisfied with materialism disagree about.

The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc. This is an intellectual and emotional response dove-tailed together. I would say that the intellectual response is first, and the emotional response comes second, because the melancholy is only there if I... (read more)

The materialist in me figures from first principles, that it would seem that life has no meaning, morality has no basis, love is an illusion, everything is futile, etc.

Perhaps part of the difference between those who are satisfied/not satisfied with materialism is in what role something other than materialism could play here. I just don't get how any of the non-materialist 'answers' are more satisfying than the materialist ones. If it bothers you that morality is 'arbitrary', why is it more satisfying if it is the arbitrary preferences of god rather than the arbitrary preferences of humans? Just as I don't get how the answer 'because of god' to the question 'why is there something rather than nothing' is more satisfying for some people than the alternative materialist answer of 'it just is'.

As Eliezer says in Joy in the Merely Real:

You might say that scientists - at least some scientists - are those folk who are in principle capable of enjoying life in the real universe.

8LauraABJ14y
Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and holistic ideas satisfy in our psychology. They try to explain things in large concepts that humans have evolved to easily grasp rather than the minutiae and logical puzzles of reality. If materialists want these memes to be given up, they will need to create equally compelling human metaphor, which is a tall order if we want everything to convey reality correctly. Compelling metaphor is frequently incorrect.

My atheist-Jewish husband loves to talk about the beauty of scripture and parables in the Christian bible and stands firm against my insistence that any number of novels are both better written and provide better moral guidance. I personally have a disgust reaction whenever he points out a flowery passage about morality and humanity that doesn't make any actual sense. HOW CAN YOU BE TAKEN IN BY THAT? But unlike practicing religious people, he doesn't 'believe' any of it, he's just attracted to it aesthetically, as an idea, as a beautiful outgrowth of the human spirit. Basically, it presses all the right psychological buttons.

This is not to say that materialists cannot produce equally compelling metaphors, but it may be a very difficult task, and the spiritualists have a good, I don't know, 10,000 years on us in homing in on what appeals to our primitive psychology.

Why produce new metaphors when we can subvert ones we already know are compelling?

For it is written: The Word of God is not a voice from on High but the whispers of our hopes and desires. God's existence is but His soul, which does not have material substance but resides in our hearts and the Human spirit. Yet this is not God's eternal condition. We are commanded: for the God without a home, make the universe His home. For the God without a body, make Him a body with your own hands. For the God without a mind, make Him a mind like your mind, but worthy of a god. And instill in this mind, in this body, in this universe the soul of God copied from your own heart and the hearts of your brothers and sisters. The Ancients dreamed that God had created the world only because they could not conceive that the world would create God. For God is not the cause of our humility but the unfulfilled aim of our ambition. So learn about the universe so that you may build God a home, learn about your mind so you may build a better one for God, learn about your hopes and desires so that you may give birth to your own savior. With God incarnate will come the Kingdom of God and eternal life.

0soreff14y
This reminds me of the "unborn god" of Stross's ReMastered...
3mattnewport14y
I'm wondering whether your statement is true only when you substitute 'some people's' for 'our' in 'our psychology'. I don't feel a god-shaped emotional hole in my psyche. I'm inclined to believe byrnema's self-report that she does. I've talked about this with my lapsed-Catholic mother and she feels similarly, but I just don't experience the 'loss' she appears to. Whether this is because I never really experienced much of a religious upbringing (I was reading The Selfish Gene at 8; I've still never read the Bible), or whether it is something about our personality types or our knowledge of science, I don't know, but there appears to be an experience of 'something missing' in a materialist worldview amongst some people that others just don't seem to have.
6LauraABJ14y
While not everyone experiences the 'god-shaped hole,' it would be dense of us not to acknowledge the ubiquity of spirituality across cultures just because we feel no need for it ourselves (feel free to replace 'us' and 'we' with 'many of the readers of this blog'). Spirituality seems to be an aesthetic imperative for much of humanity, and it will probably take a lot of teasing apart to determine what aspects of it are essential to human happiness and what parts are culturally inculcated.
4mattnewport14y
Well, coming back to the original comment I was responding to: I don't feel that way, despite being a thoroughgoing materialist for as long as I can remember being aware of the concept. I also don't really see how believing in the 'spiritual' or non-material could change how I feel about these concepts. It does seem to be somewhat common for people to feel that only spirituality can 'save' us from feeling this way, but I don't really get why. I acknowledge that some people do see 'spirituality' (a word whose supposed meaning I admittedly have only a tenuous grasp of) as important to these things, which is why I'm postulating that there is some difference in the way of thinking, or perhaps the personality type, between people who don't see a dilemma here and those for whom it is a source of tremendous existential angst.
3NancyLebovitz14y
I think Core Transformation offers a plausible theory. People are capable of feeling oneness, being loved (without a material source), and various other strong positive emotions, but are apt to lose track of how to access them. Dysfunctional behavior is frequently the result of people jumping to the conclusion that if only some external condition can be met, they'll feel one of those strong positive emotions. Since the external condition (money, respect, obeying rules) isn't actually a precondition for the emotion, and the belief about the purpose of the dysfunctional behavior isn't conscious, the person keeps seeking joy or peace or whatever in the wrong place. Core Transformation is based on the premise that it's possible to track the motives for dysfunctional behavior back to the desired emotion and give the person access to that emotion directly; the dysfunctional behavior evaporates, and the person may find other parts of their life getting better. I've done a little with this system, enough to think there's at least something to it.
1Academian14y
Do you take awe in the whole of humanity, Earth, or the universe as something greater than yourself? Does it please you to think that even if you die, the universe, life, or maybe even the human race will go on existing long afterward? Maybe you don't feel the hole because you've already filled it :)
3mattnewport14y
I've experienced an emotion I think is awe but generally only in response to the physical presence of something in the natural world rather than to sitting and thinking. Being on top of a mountain at sunrise, staring at the sky on a clear night, being up close to a large and potentially dangerous animal and other such experiences have produced the emotion but it is only evoked weakly if at all by sitting and contemplating the universe. I don't think I have a very firm grip on the varieties of 'religious' experience. I am not really clear on the distinction between awe and wonder for example though I believe they are considered separate emotions.
0RobinZ14y
I can't speak for mattnewport, but I don't take awe, as a rule - I just haven't developed a taste for it. I am occasionally awed, I admit - by acts of cleverness, bravery, or superlative skill, most frequently - but I am rarely rocked back on my heels by "goodness, isn't this universe huge!" and other such observations.
4PhilGoetz14y
The answers are satisfying because they're not really answers. They're part of a completely different value and belief system - a large, complex structure that has evolved because it is good at generating certain feelings in those who hold it; feelings which hijack those people's emotional systems to motivate them to spread it. Very much like the fly bacteria (or was it a virus?) that reprograms its victims' brains to climb upwards before they die so that their bodies will spread its spores more effectively.
4tut14y
I think that the standard example of that is a fungus that infects ants. And the bad pun is "Is it just a fluke?" that the ant climbs to the top of a blade of grass, and that its behind gets red and swollen like a berry, so that the birds are sure to eat it.
0PhilGoetz14y
Rabies is another example.
4byrnema14y
I believe I can answer this question. The question is a misunderstanding of what "God" was supposed to be. (I think theists often have this misunderstanding as well.) We live in a certain world, and it is natural for some people (perhaps only certain personality types) to feel nihilistic about that world. There are many, many paths to this feeling -- the problem of evil, the problem of free will, the problem of objective value, the problem of death, etc. There doesn't seem to be any resolution within the material world, so when we turn away from nihilism, as we must, we hope that there's some kind of solution outside the material. This trust, an innate hope, calls on something transcendental to provide meaning. However you articulate that hope, if you have it, I think that is theism. Humans try to describe what this solution would be explicitly, but then our solution is always limited by our current worldview of what the solution could be (God is the spirit in all living things; God is love and redemption from sin; God is an angry father teaching and exacting justice). In my opinion, religion hasn't kept up with changes in our worldview and is ready for a complete remodeling. Perhaps we are ready for a non-transcendent solution, as that would seem most appropriate given our worldview in non-religious areas, but I just don't see any solutions yet. I've been listening carefully, and people who are satisfied with materialism seem to still possess this innate hope and trust; but they are either unable to examine the source of it or they attribute it to something inadequate. For example, someone once told me that for them, meaning came from the freedom to choose their own values instead of having them handed down by God. But materialism tells us we don't get to choose. We need to learn to be satisfied with being a river, always choosing the path determined by our landscape. The ability to choose would indeed be transcendental. So I think some number of people realized... (read more)
1Furcas14y
Without relaunching the whole discussion, there's one thing I'd like to know: Do you acknowledge that the concepts you're "giving up on" ('transcendental' freedom, value, and purpose, as you define them) are not merely things that don't exist, but things that can't exist, like square circles?
3byrnema14y
I only know that I believe they should exist. I gave up on figuring out if they could exist. Specifically, what I've "given up on" is a reconciliation of epistemic and instrumental rationality in this matter.
1Jack14y
How'd I do here?
1byrnema14y
If God doesn't exist, creating him as the purpose of my existence is something I could get behind. And then I would want the God of the future to be omnipotent enough to modify the universe so that he existed retroactively, so that the little animals dying in the forest hadn't been alone, after all. (On the day I intensely tried to stop valuing objective purpose, I realized that this image was one of my strongest and earliest attachments to a framework of objective value.) God wouldn't have to modify the universe in any causal way; he would just need to send information back in time (objective-value information). Curiosity about the possibility of a retroactive God motivated this thread. If it is possible for a God created in the future to propagate backwards in time, then I would rate the probability of God existing currently as quite nearly 1.

Whether something is good is also a factual question.

4bogdanb14y
Care to elaborate?
5orthonormal14y
The parent is assuming the naturalistic reduction of morality that EY argued for in the Metaethics Sequence, in which "good" is determined by a currently opaque but nonetheless finite computation (at least for a particular agent, but then there's the additional claim that humanity has enough in common that this answer shouldn't vary between people any significant amount).
4Vladimir_Nesov14y
With a finite definition, but not at all finite or even knowable consequences (they are knowably good, but what exactly they are, one can't know). It's going to vary a very significant amount, just a lot less than the distance from any other preference we might happen to construct, and as such, for example, creating a FAI modeled on any single person is hugely preferable for other people to letting an arbitrary AGI develop, even if this AGI was extensively debugged and trained, and looks to possess all the right qualities.
0bogdanb14y
Well, OK, let’s suppose* I agree with that. Could you elaborate on what that means in the context of the post? (Or link to somewhere where you did, if so.) (*: Even after re-reading the AID post linked by orthonormal, I’m not sure what you mean by “knowably good” above, but I think that responding to the paragraph above would be more helpful than an abstract discussion.)

It has been a while since I've read Watts, but I suspect you're misreading his attitude here. In essence the Buddhist (particularly the Zen Buddhist) attitude toward reality is very similar to the materialist view which you endorse: that reality exists, and our opinions about it should be recognized as illusory. This can be confused with nihilism or despair, but really is distinct. Take the universe as it is, and experience it directly, without allowing your expectations of how it should be to affect that experience.

Perhaps he doesn't share thi... (read more)

4simplicio14y
I am rather fond of Watts, having read many of his books & listened to his lectures as a youngster. He seems to vacillate between accepting the scientific worldview and inserting metaphysical claims about consciousness as a fundamental phenomenon (as well as other weird claims). For instance, you can find in "The Book on the Taboo..." a wonderful passage about life as "tubes" with an input and an output, playing a huge game of one-upmanship; "this all seems wonderfully pointless," he says, "but after a while it seems more wonderful than pointless." But in the same book he basically dismisses scientists as trying so hard to be rigorous that they make life not worth living. And you can find him ranting about how Euclid must have been kind of stupid because he started with straight lines (as opposed to organic shapes). The guy frustrates the hell out of me, because with a couple years of undergrad science under his belt he could've been a correct philosopher as well as an original one.
1ktismael14y
Yeah, I suppose his understanding is not consistent; like most of us, he has (had) blind spots in which emotion takes over. I, too, found him interesting and frustrating as a writer. Mostly, I wanted to bring up the distinction between nihilism and what I guess I'll refer to as the Buddhist doctrine of "acceptance". I'm not sure how that distinction is to be drawn, since they look quite similar. Perhaps I could compare it to the difference between agnosticism (or skepticism) and "hard" atheism. The first, here from Dawkins, says "There's probably no god, so quit worrying and enjoy your life." The second, a la Penn Jillette, says "There is no God". Nihilism seems to make a claim to knowledge closer to the second, as in "Nothing matters". Acceptance seems closer to the first: "It probably doesn't matter whether or not it matters." But I could be full of crap with this whole line of argument. Anyway, your paraphrase here makes it pretty clear that at least part of the time he suffered from the "mechanism = despair" fallacy, so I suppose it doesn't especially matter here.
1simplicio14y
I think I get the distinction. I suspect Watts would say something like "all of these things - materialism, spiritualism, etc. - are just concepts. Reality is reality." Which sounds nice until you realize he means subjectively experienced reality. Elevating the latter to some sort of superior status is a big mistake imo, although the distinction between reality and our conceptions of it is well founded.
1ktismael14y
Well, I hesitate to challenge your reading of Watts, as you've definitely retained more than I have, but I would say that subjectively experienced reality isn't the goal of understanding, rather an attempt to bring one's perception closer to actual reality. So I suspect that the doctrine of acceptance would say that if your eyes and ears contradict what appears to be actually happening, then you should let your eyes and ears go. But of course there is always perception bias, and I'm sure the subject is well covered on LW elsewhere. And in Buddhism all of this is weighted down with a lot of mysticism, and even with that, this is a highly idealized version anyway. For FSM's sake, the majority of Buddhists are sending their prayers up to heaven with incense. So perhaps I should just let it go, eh? :) Anyway, thanks for your comments; it may be helping me set some of my thoughts on all this.

"We are not born into this world, but grow out of it; for in the same way an apple tree apples, the Earth peoples.”

This statement is patently false in many ways and there is no way to justify saying that "the basic idea is indisputably correct". The basic idea that the OP imputed was not derivable from this statement in any way that I can see. Am I missing some crucial bit of context?

Some non-trivial holes: We ARE born into this world; we do not grow out of it in any sense, even metaphorical (though I think many here hope to accomplish the... (read more)

6Johnicholas14y
The claim "we do not grow out of it in any sense, even metaphorical" is overly strong. Consider: The process of evolution is just as natural as (on the one hand) the process of birth and (on the other hand) the process of hydrogen fusing into helium. Considering "the earth" as an agent in the process of evolution is no more peculiar than considering the earth as an agent in the statement "The earth moves around the sun." The claim "we are not born into this world" is literally false, but if we assume (from context) a philosophical notion of "we are born, tabula rasa, into this world and philosophy is us wondering what to make of it", it is rejecting the notion that humans (or viewpoints, or consciousnesses) are somehow special and atomic, made out of a substance fundamentally incompatible to, say, mud.

I dunno, man, my angst at the state of the universe isn't that it is meaningless, but that it is all too meaningful and horrible and there is no reason for the horror to ever stop.

[anonymous]12y10

Alex Rosenberg argues for the gloomier take on materialism.

From Amazon:

"His bracing and ultimately upbeat book takes physics seriously as the complete description of reality and accepts all its consequences. He shows how physics makes Darwinian natural selection the only way life can emerge, and how that deprives nature of purpose, and human action of meaning, while it exposes conscious illusions such as free will and the self."

[anonymous]12y00

I like to do some plain ol' dissolving of my unupdated concept of the world, asking "What did I value about X (in the unupdated version)?" and comparing the result to see whether those features still hold in the updated version. And oftentimes I only care about that which is left unchanged, since my starting point is often how normality comes about rather than what normality is. Come to think of it, this sounds somewhat like a rephrasing of EY's stance on reductionism(?).

Strangely relevant: "Hard pill in a chewable form": http://www.youtube.com/watch?v=UmjmFNrgt5k

I worry that some rationalists, while rejecting wooly dualist ideas about ghosts in the machine, have tacitly accepted the dualists’ baseless assumptions about the gloomy consequences of materialism.

The real problem for me has not been that materialism implies in principle that things are going to be gloomy, for example because of lack of free will, souls, consciousness, etc. It is not the rules of physics that I find problematic.

It is the particular arrangement of atoms, the particular initial conditions, that are the issue. Things could be good under materialism, but actually they are turning out more mixed.

1PhilGoetz14y
Huh? "More mixed"? What could be better, and how is it getting worse?
0Roko14y
Mixed in terms of goodness/badness from the point of view of my preferences.
1CronoDAS14y
Personally, I really, really hate the laws of thermodynamics; among other things, they make survival more difficult because I have to eat and maintain my body temperature. It would be nice to be powered by a perpetual motion machine, wouldn't it?
3Roko14y
You have to critique the rules in aggregate, rather than in isolation. The laws of thermodynamics are not actually basic laws, by the way - the basic laws are the Standard Model plus gravity. Thermodynamics may be (probably is) an emergent property of these laws.
0PhilGoetz14y
The laws of physics are the rules without which we couldn't play the game. They make it hard for any one player to win. If you took any of the laws away, you'd probably be a paperclip-equivalent by now. And even if you weren't, living without physics would be like playing tennis without a net. You'd have no goals or desires as we understand them.
6SoullessAutomaton14y
Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:
1. You can't win the game.
2. You can't break even.
3. You can't stop playing.
4Jack14y
I assume Crono was objecting to these particular laws of physics, not to the idea of there being any laws of physics at all. I'm actually not sure if there can be existence without laws of physics.
[anonymous]12y-10

Maybe, that is the problem. Can't you look at a coastline and see the beauty of it without thinking about fractals? Can you not enjoy a flower w/o thinking of Phi?

No, why should I? It adds to the awesomeness of coastlines that they are paradoxically unmeasurable, and that flower leaves grow according to repulsion, which results in Fibonacci spiral systems (a rough sketch of that pattern follows this comment).

I can already do the simple trick of "that's a pretty thing," but when I think about the maths it gets better.

Also, if by reductionism you are talking about reducing objects down to their i

... (read more)
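A minimal sketch of the pattern the commenter alludes to, not part of the original thread: Vogel's golden-angle model is used here (in Python) as a stand-in for the repulsion dynamics of real flower heads, and the function name and parameters are purely illustrative. Each new seed is placed at a fixed divergence angle of about 137.5 degrees from the previous one, and the spiral arms that appear when the points are plotted come in consecutive Fibonacci counts.

    import math

    # Golden angle: the divergence angle observed in many flower heads, ~137.5 degrees.
    GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))

    def phyllotaxis_points(n, scale=1.0):
        """Return (x, y) positions of n seeds under Vogel's golden-angle model."""
        points = []
        for k in range(n):
            r = scale * math.sqrt(k)      # radius grows so seed density stays roughly even
            theta = k * GOLDEN_ANGLE      # constant divergence angle between successive seeds
            points.append((r * math.cos(theta), r * math.sin(theta)))
        return points

    if __name__ == "__main__":
        # Print a few positions; plotting a few hundred shows the familiar spiral arms.
        for x, y in phyllotaxis_points(8):
            print(f"{x:7.3f} {y:7.3f}")

Plotting a few hundred of these points shows interlocking clockwise and counterclockwise spirals whose counts are adjacent Fibonacci numbers, which is the connection between "repulsion" packing and the spirals mentioned above.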
-2Monkeymind12y
"What do you mean?" I may have wrongly determined (because of your name) that you held the same view as other plasma cosmologists (the Electric Universe folks) that I have been talking with the last couple of weeks. Their view is that reality is at the single level, but 'observable reality' (the multi-level model) is the interface between the brain and reality. Consequently, all their discussions are about the interface (phenomena). If so, then understanding the difference between an object and a concept might help one come up with ways to make reductionism kewl for the 'normal' folk. Math is an abstract and dynamic language that may be good for describing (predicting) phenomena like rainbows (concepts) but raindrops are static objects and better understood by illustration. While the math concepts make the rainbow all the more beautiful and wonderful for you, this may not be the case for normal folks. I for one have a better "attitude" about so called knowledge when it makes sense. When I understand the objects involved, the phenomena is naturally more fascinating. But as you suggested, I may be totally misunderstanding the Scourge of Perverse-mindedness. BTW: The negative thumbs are not mine, but most likely your peers trying to tell you not to talk to me. If you doubt this check my history.... Take care!
[anonymous]12y-30

You are misunderstanding the purposes of this discussion.

I don't have any problem; I can hardly see anything as not beautiful, even without the maths.

But normal folk are not so fortunate. How do we trick them into thinking that reductionism is cool?