
Comment author: Psychohistorian2 05 October 2007 06:09:22AM 4 points [-]

Tiiba: Because it is very hard to read ambiguity into moral acts. One can say that six days is not meant literally (even if the original language says that - though I'm not saying it does; I don't know). One cannot say that the firstborn of Egypt were all just sleeping.

Furthermore, one can always fall back on deception. Maybe God actually made the Universe in six days but wants us to think it was longer to test our faith. Yes, that's a lousy argument, but one might conceive of it being true. As for other offenses, God makes the laws of physics, so he obeys them at his whim.

By contrast, an action making God appear evil necessarily makes him incomprehensible for many religions. If you say that God is good, and that he slaughtered innocent children, and you believe that such a slaughter is wrong, then any defense of God must change the meaning of "God is good" to something completely unrecognizable. Either good is true of God by definition (What He does is good) or it is the "big plan" strategy, in which case it is actually good but you are too stupid to understand why, meaning he is good in a way that we necessarily cannot understand.

So, to end this rambling, people pick moral attacks because they don't allow the "Well, it's obviously false, therefore, it isn't meant literally!" defense. It also attacks concepts of God on a somewhat different level.

Comment author: Vamair0 11 January 2017 06:51:37AM *  0 points [-]

If there is a heaven and the killed firstborn went there, then killing them (or anyone else, for that matter) is quite harmless. And killing is wrong for people not because it causes harm, but because God forbids it. It's a strange view, but not an obviously inconsistent one. On the other hand, I've always shied away from moral attacks just because the counterargument of "So, God's not benevolent, now what? You still have to worship it for a few decades or you're going to literally burn for eternity" seemed so obvious. Like it seems pointless to argue that Dumbledore is evil when you're trying to prove he never existed.

Comment author: ImNotAsSmartAsIThinK 28 May 2016 10:36:13PM *  0 points [-]

At least this tells me I didn't make a silly mistake in my post. Thank you for the feedback.

As for your objections,

All models are wrong, some models are useful.

exactly captures my conceit. Reductionism is correct in the sense that it is, in some sense, closer to reality than anti- or contra-reductionism. Likely in a similar sense that machine code is closer to the reality of a physical computation than a .cpp file, though the analogy isn't exact, for reasons that should become clear.

I'm typing this on a laptop, which is an intricate amalgam of various kinds of atoms. Hypothetically, you could explain the positioning of the atoms in terms of dense quantum mechanical computations (or a more accurate physical theory, which would exist ex hypothesi), and/or we could explain it in terms of economics, computer science and the vagaries of my life. The former strictly contains more information than the latter, and subsumes the latter to the extent that it represents reality and contradicts it to the extent it's misleading.

At an objective level, then, the strictly reductionist theory wins on merit.

Reductionism functions neatly to explain reality-in-general, and even to explain certain orderly systems that submit to a reductionist analysis. If you want completeness, reductionism will give you completeness, at the limit. But sometimes, a simple explanation is nice. It'd be convenient to compress, to explain evolution in abstract terms.

The compression will be lossy, because we don't actually have access to reality's dataset. But lossy data is okay, and the more casual the ends, the more okay it is. Pop science books are very lossy, and are sufficient for delivering a certain type of entertainment. A full reprinting of a paper's collected data is about as lossless as we tend to get.

A lossless explanation is reductionist, and ceteris paribus, we ought to go with the reductionist explanation. Given a choice between a less lossy, very complex explanation and a lossy but simple explanation, you should probably go gather more data. But failing that, you should go with the one that suits your purposes. A job where every significant digit of accuracy matters chooses the first, as an example.
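To make the analogy concrete, here is a minimal sketch of the tradeoff; the measurements and the particular summary chosen are invented for illustration, not anyone's actual data.

```python
import statistics

# The "reality": every individual measurement (a stand-in for the full
# low-level description).
measurements = [2.71, 2.69, 2.74, 2.70, 2.72, 2.68, 2.73]

# Lossless "explanation": keep everything. Complete, but bulky.
lossless = list(measurements)

# Lossy "explanation": compress to a couple of summary statistics
# (the high-level story). Simple, but the original data can no longer
# be reconstructed from it.
lossy = {
    "mean": round(statistics.mean(measurements), 3),
    "stdev": round(statistics.stdev(measurements), 3),
}

print(len(lossless), "values kept losslessly")
print("lossy summary:", lossy)
# If every significant digit matters, use the lossless version; for
# pop-science purposes, the lossy summary is usually enough.
```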

Comment author: Vamair0 29 May 2016 03:33:03PM 0 points [-]

A lossless explanation is reductionist

Isn't that what people mean when they say reductionism is right?

Comment author: Vamair0 26 May 2016 10:49:21AM 1 point [-]

I think it's not so much a sum of properties as a union of property sets. If a system has a property that's not a part of that union, then it's "more than the sum of its components". On the other hand, I find the notion of something being "more than the sum of its parts" about as annoying as the "1 + 1 = 3: buy two and get one for free!" equation in frequent ads. That is, very annoying.
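A minimal sketch of that definition (the components and their property sets are invented for illustration):

```python
# Property sets of the components.
wheel = {"round", "rolls"}
frame = {"rigid", "holds parts together"}

# The "sum of the parts" on this reading: the union of the property sets.
union_of_parts = wheel | frame

# The assembled system has a property found in neither component...
bicycle = union_of_parts | {"rideable"}

# ...so by this definition it is "more than the sum of its components".
print(bicycle - union_of_parts)  # {'rideable'}
```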

Comment author: gjm 04 January 2016 04:48:16PM 4 points [-]

I know of an old prime number that happens to end with a 2.

Comment author: Vamair0 06 May 2016 07:41:45AM 1 point [-]

How old is it, exactly?

Comment author: Vamair0 06 May 2016 07:12:33AM *  0 points [-]

It seems interesting that a lot of spiritual experiences are something that happens in non-normal situations. To get them, people may try denying themselves food or sleep, staying in the same place for a long time without motion, working themselves to exhaustion, eating poisons, going to a place of different atmospheric pressure, or doing something else they don't normally do. The whole process is suspiciously similar to program testing, where you try the program in situations its creator (evolution, in the case of humans) hasn't "thought" much about. And then sometimes there are bugs. And if you don't follow the protocols for already discovered bugs, you either risk crashing something really important or getting nothing at all. Bugs are real and may give valuable information on the program's inner workings, but they're not "the final truth about the underlying reality".

The belief in the revelatory nature of spiritual experiences may be a result of a "just world" bias. When you get the reward you've been working toward for years, it's easier to believe you understood something profound about reality than that you've discovered an error in your brain. If that's the case, then "if you spin a lot, you'll get vertigo", "if you sit on your hand long enough, there will be a strange feeling there" or "look through the autostereogram picture to see it in 3D" could be thought of as spiritual experiences, but they're too easy and mundane for that.

Comment author: Gram_Stone 05 May 2016 04:37:56PM -1 points [-]

I've had similar thoughts in the past few days. It does seem that utilitarianism merely prescribes the moral action, without saying anything about the goodness or badness of people. Of course, I've seen self-identifying utilitarians talk about culpability, but they seem to be quickly tacking this on without thinking about it.

Comment author: Vamair0 05 May 2016 07:16:19PM 2 points [-]

It is possible to talk about utilitarian culpability, but it's a question of "would blaming/punishing this (kind of) person lead to good results". Like, you usually shouldn't blame those who can't change their behavior in response to blame, unless they self-modified themselves to be that way or if treating them as blameless would motivate others who can... That reminds me of the Eight Short Studies On Excuses, where Yvain demonstrated an example of such an approach.

Comment author: Vamair0 05 May 2016 11:50:55AM 0 points [-]

Isn't the question of someone being a good or a bad person a part of virtue ethics in the first place? That is, for a utilitarian the results of the bystander's and the murderer's actions were the same, and therefore the actions were equally bad, but that doesn't mean the bystander is as bad as the murderer, because that's not part of the utilitarian framework at all. Should we implement the policy of blaming or punishing them the same way? That's a question for utilitarianism. And the answer is probably "no".

Comment author: bogus 23 April 2016 01:33:38AM 0 points [-]

I'm asking if there is a rational non-meta reason to believe they do "stop at the neck" even if we throw away all the IQ/nations data.

Of course there are. The standard argument is that the history of human evolution suggests that increased intelligence and favorable personality traits were strongly selected for, and traits which are strongly selected tend to reach fixation rather quickly.

Comment author: Vamair0 23 April 2016 08:52:06AM *  0 points [-]

But then the difference in intelligence would be almost completely shared + nonshared environment. And twin studies suggest it's highly heritable. It also seems to be a polygenic trait, so there can be quite a lot of new mutations that haven't yet reached fixation even if it's strongly selected for.
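As a rough sketch of where such heritability estimates come from (assuming the classical twin design is what's being referred to): Falconer's formula puts heritability at roughly H^2 = 2(r_MZ - r_DZ), twice the gap between the identical-twin and fraternal-twin correlations, so correlations of, say, 0.85 and 0.60 would give H^2 of about 0.5.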

Comment author: ChristianKl 20 April 2016 07:22:26PM 2 points [-]

People in our society differ in how they think about genetic differences. There are people who think that race matters a great deal and others who think it doesn't matter. It's useful to have a metric that distinguishes those people.

If you have that metric you can ask interesting questions such as whether people who are well calibrated are more likely to score high on that metric. It's interesting whether the metric changes from year to year.

That means the question tries to point at a property that people disagree about. In this case it's whether genetic differences are important. The question doesn't define "important", but there are various right-wing people such as neoreactionaries and red-pill folks who identify with the term "human biodiversity". The question doesn't try to ask for a specific well-defined belief but points to that cluster of beliefs. It's the same way that the feminism question doesn't point to a well-defined belief. You don't need a well-defined belief to get valuable information from a poll.

The question made it into the survey because I complained about the usage of tribal labels such as liberal/conservative, where people have to pick one choice, as a way to measure political beliefs. I argued that focusing on agreement on issues is more meaningful and provides better data.

Comment author: Vamair0 20 April 2016 07:51:59PM *  0 points [-]

Thank you for the explanation.

Sorry, I'm still not getting it. Doesn't matter.

Comment author: ChristianKl 20 April 2016 06:12:12PM 1 point [-]

I don't think "ignoring the context" is well described as going deep. Part of critical reading is to think about why someone writes what they write instead of just trying to focus on the literal meaning of words. It's rather only engaging with the surface.

Comment author: Vamair0 20 April 2016 06:51:57PM 0 points [-]

It's ignoring the context that can be described as not going deep enough. My other usual algorithm, "if the question seems easy, look for a deeper meaning", is not without its faults either. Btw, what actually is the context of a single question that asks me to describe my opinion of something as I understand the term?

Alright, I got it, I fail critical reading forever. Yet. Growth mindset. What was the real meaning?
