All of Ziphead's Comments + Replies

Ziphead

Perhaps this is a known issue, but since I haven't seen it discussed, I thought I'd mention that images don't seem to work in some (all?) of the old posts imported from Overcoming Bias. See for example:

Timeless Physics

The first few pictures in that particular post are available from an external server if you click them, but I don't see them inline. The last picture seems to have been hosted at Overcoming Bias and is no longer accessible.

Eliezer Yudkowsky
Work is underway, apparently.
Ziphead

My point is that people striving to be rational should bite this bullet. As you point out, this might cause some problems - and dealing with those problems is the challenge I propose that rationalists take on.

You may wish to think of your actions as non-arbitrary (that is, justified in some special way, cf. the link Nick Tarleton provided), and you may wish to (non-arbitrarily) criticize the actions of others, etc. But wishing doesn't make it so. You may find it disturbing that you can't "non-arbitrarily" say that "striving for truth is better than killing babies"...

Jess_Riedel
I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn't healthy).

I don't mean to say that rationalists should give up, but we have to choose how to act in the meantime. Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don't believe this makes me irrational. In fact, given our current understanding of the problem, I don't know of any other reasonable approaches.

Incidentally, this position is reminiscent both of Pascal's wager and of an attitude towards morality and AI which Eliezer claimed to previously hold but now rejects as flawed.
Ziphead

It seems to me that your position can be interpreted in at least two ways.

Firstly, you might mean that it is useful to have common standards for behavior to make society run more smoothly and peacefully. I think almost everyone would agree with this, but these common standards might be non-moral. People might consider them simple social conventions that they adopt for reasons of self-interest (to make their interactions with society flow more smoothly), but that have no special metaphysical status and do not supersede their personal values if a conflict arises...

Technologos
I don't think they do have any "special metaphysical status," and indeed I agree that they are "simple social conventions." Do I make statements about moral rights and wrongs? Only by reference to a framework that I believe the audience accepts. In LW's case, this seems broadly to be utilitarian or some variant.

That's precisely my point--morality doesn't have to have any metaphysical status. Perhaps the problem is simply that we haven't defined the term well enough. Regardless, I suspect that more than a few LWers are moral skeptics, in that they don't hold any particular philosophy to be universally, metaphysically right, but they personally value social well-being in some form, and so we can usually assume that helping humanity would be considered positively by a LW audience.

As long as everyone's "personal values" are roughly compatible with the maintenance of society, then yes, losing the sense of morality that excludes such values may not be a problem. I was simply including, as morality, the belief that personal values should not produce antisocial utility functions (that is, utility functions that have a positive term for another person's suffering).

Do I think that these things are metaphysically supported? No. But do I think that with fewer prosocial utility functions, we would likely see much lower utilities for most people? Yes. Of course, whether you care about that depends on how much of a utilitarian you are.
Ziphead

I'm continually surprised that so many people here take various ideas about morality seriously. For me, rationality is very closely associated with moral skepticism, and this view seems to be shared by almost all the rationalist-type people I meet IRL here in northern Europe. Perhaps it has something to do with secularization having progressed further in Europe than in the US?

The rise of rationality in history has undermined not only religion but, at the same time and for the same reasons, all forms of morality. As I see it, one of the main challenges for people...

PhilGoetz
I think you need to define what you mean by "morality" a lot more carefully. It's hard to attribute meaning to the statement "People should act without morals." Even if you mean "Everyone should act strictly within their own self-interest", evolutionary psychology would demand that you define the unit of identity (the body? the gene?), and would smuggle most of what we think of as "morality" back into "self-interest".
byrnema
Certainly, the main ideological tenet of Less Wrong is rationality. However, other tenets slip in, some more justified than others. One of the tenets that I think needs a much more critical view is something I call "reductionism" (perhaps closer to Daniel Dennett's "greedy reductionism" than what you think of). The denial of morality is perhaps one of the best examples of the fallacy of reductionism. Human morality exists and is not arbitrary. Intuitively, the denial of morality is absurd, and ideologies that result in intuitively absurd conclusions should require extraordinary justification to keep believing in them. In other words, you must be a rationalist first and a reductionist second.

First, science is not reductionist. Science doesn't claim that everything can be understood by what we already understand. Science makes hypotheses, and if the hypotheses don't explain everything, it looks for other hypotheses. So far, we don't understand how morality works. It is a conclusion of reductionism – not science – that morality is meaningless or doesn't exist. Science is silent on the nature of morality: science is busy observing, not making pronouncements by fiat (or faith).

We believe that everything in the world makes sense. That everything is explainable, if only we knew enough and understood enough, by the laws of the universe, whatever they are. (As rationalists, this is the only single, fundamental axiom we should take on faith.) Everything we observe is structured by these laws, resulting in the order of the universe. Thus a falsehood may be arbitrary, but a truth, or any observation, can never be arbitrary, because it must follow these laws.

In particular, we observe that there are laws, and order, over all spatial and temporal scales. At the atomic/molecular scale, we have the laws of physics (classical mechanics). At the organism level, we have certain laws of biology (mutation, natural selection, evolution, etc). At the meta-cognitive level, ...
Jess_Riedel
Moral skepticism is not particularly impressive, as it's the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe. The problem is that we still must choose how to act.

Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And...that's it. We can make no criticism whatsoever about the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red. At best we can make empirical statements of the form "A person should act in such-and-such manner in order to achieve some outcome".

Some people are prepared to bite this bullet. Yet most who say they do continue to behave as if they believed their actions were more than arbitrary preferences.
Technologos
My position may be one of those you criticize. I believe something that approximates "morality" is both worth adhering to and important. I think a particular kind of morality helps human societies win.

Morality, as I understand it, consists of a set of constraints on acceptable utility functions combined with observable signals of those constraints. Do I believe that this type of morality is in any sense ultimately correct? No. In a technical sense, I am a complete and total moral skeptic. However, I do think publicly observable moral behavior is useful for coordination and cooperation, among other things. To the extent that this makes us better off--to the extent it makes me better off--I would certainly think that even a moral skeptic might find it interesting.

Perhaps LWers are "too uncritical toward their moral prejudices." But it's at least worth examining which of those "moral prejudices" are useful, where this doesn't conflict with other, more deeply held values.

Finally, morality, broadly enough construed, is a condition of rationality: if morality is taken to simply be your set of values and preferences, then it is literally necessary to a well-defined utility function, which is itself (arguably) a necessary component of rationality.
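For concreteness, here is a minimal sketch of the "constraints on acceptable utility functions" idea. Everything in it (the payoff representation, the names, the single probe the check performs) is an illustrative assumption, not something specified in the comment above:

```python
# Toy model of "morality as constraints on acceptable utility functions".
# All names and payoffs here are illustrative assumptions.

from typing import Callable, Dict

Outcome = Dict[str, float]            # e.g. {"my_payoff": 1.0, "their_suffering": 0.0}
UtilityFn = Callable[[Outcome], float]

def selfish(o: Outcome) -> float:
    """Cares only about its own payoff; indifferent to others."""
    return o["my_payoff"]

def sadistic(o: Outcome) -> float:
    """Has a positive term for another person's suffering."""
    return o["my_payoff"] + o["their_suffering"]

def satisfies_constraint(u: UtilityFn) -> bool:
    """The moral constraint: holding one's own payoff fixed, increasing
    another person's suffering must never increase utility."""
    baseline = {"my_payoff": 1.0, "their_suffering": 0.0}
    crueler = {"my_payoff": 1.0, "their_suffering": 1.0}
    return u(crueler) <= u(baseline)

print(satisfies_constraint(selfish))   # True  -- acceptable
print(satisfies_constraint(sadistic))  # False -- screened out as antisocial
```

The point of the sketch is only that the constraint is a predicate over utility functions rather than a utility function itself: "morality" in this sense screens candidate preferences, it does not rank outcomes.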
Ziphead

This is really from the time before OB, and it might be all too obvious, but the most important thing I've learned from your writings (so far) is Bayesian probability. I had come across the concept previously, but I didn't understand it fully, or understand why it was very important, until I read your early explanatory essays on the topic. When you write your book, I'm sure you will not neglect to include really good explanations of these things, suitable for people who have never heard of them before, but since no one else has mentioned it in this thread so far, I thought I might.

Ziphead

Expecting Short Inferential Distances

One of many posts that gave me a distinct concept for something I previously had been only vaguely aware of, and this one kept coming back to me all the time. By now, I don’t think it’s an extreme exaggeration to say that I make use of this insight every time I communicate with someone, and of all the insights I picked up from OB, this might be the one I most frequently try to explain to others. It doesn’t seem like the most important thing, but for some reason, it immediately struck me as the most frequently useful one.

NQbass7
For me it's between inferential distance and cached thoughts, at least for ones I explain to other people. For ones I use myself, Line of Retreat is probably the one I actively pursue most frequently. Though I end up using Absence of Evidence pretty often as well.
CarlShulman
Inferential distance is the most frequently useful thing I learned at OB, followed by leaving a line of retreat. However, I use other insights I had previously encountered elsewhere more frequently.
MichaelVassar
I think I'll second that, though the attitude that there's an exact right amount to update on every piece of information is really important too.
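For concreteness, a worked example of that "exact right amount": given a prior and the likelihoods, Bayes' theorem determines a unique posterior. The numbers below are illustrative, not from the thread:

```python
# Worked example (illustrative numbers): Bayes' theorem fixes the one
# correct posterior, so there is an exact right amount to update --
# shifting belief any more or less than this is miscalibration.

def bayes_update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1.0 - prior)
    return likelihood_h * prior / evidence

# A 1% prior hypothesis, and evidence that is 80% likely under H
# but only 10% likely otherwise:
print(bayes_update(0.01, 0.80, 0.10))  # ~0.0748, up from 0.01 -- no more, no less
```

Updating to anything other than roughly 0.075 here, whether higher or lower, means overweighting or underweighting the evidence.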
Ziphead

I can't remember a time when I was not very much concerned with rationality. I think my father (a neuroscientist) encouraged those kinds of ideas from the time I was learning to speak my first few words, always reasoning with me, nudging me to think straight. I developed a deep interest in science from about the age of five, and there was never any competition from other ways of viewing the world. Things like game theory and heuristics and biases came to me much later (when studying economics), and although I was excited about it, it didn't really rock my world...