
Comment author: JenniferRM 30 January 2017 05:21:28PM 3 points [-]

I appreciate the poll, but a large part of my goal was to just get a lot of comments, hopefully at the "Ping" level, because I want to see how many people are here with at least that amount of "social oomph" when the topic is themselves.

For people responding to this poll, please also leave a very small comment saying that you used the poll.

Comment author: Vamair0 31 January 2017 06:17:40PM 1 point [-]


Comment author: Lumifer 19 January 2017 08:38:13PM *  0 points [-]

While rationality is nominally that which wins

I don't think it is.

Rationality is a combination of keeping your map of the world as correct as you can ("epistemic rationality", also known as "science" outside of LW) and doing things which are optimal in reaching your goals ("instrumental rationality", also known as "pragmatism" outside of LW).

The "rationalists must win" point was made by EY to, basically, tie rationality to the real world and real success as opposed to declaring oneself extra rational via navel-gazing. It is basically "don't tell me you're better, show me you're better".

For a trivial example, consider buying for $1 a lottery ticket which has a 1% chance of paying out $1000. It is rational to buy the ticket, but the most likely outcome (the mode, in statistics-speak) is that you will lose.
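A quick sketch of the arithmetic in this example (the $1 price, $1000 payout, and 1% odds are from the comment; the simulation itself is just illustrative):

```python
from collections import Counter
import random

random.seed(0)

TICKET_PRICE = 1
PAYOUT = 1000
WIN_PROB = 0.01

# Expected profit per ticket is positive (~$9), so buying is rational.
expected_value = WIN_PROB * PAYOUT - TICKET_PRICE
print(expected_value)

# Yet the modal (most common) outcome across many trials is losing the $1.
outcomes = Counter(
    PAYOUT - TICKET_PRICE if random.random() < WIN_PROB else -TICKET_PRICE
    for _ in range(100_000)
)
most_common_outcome, _ = outcomes.most_common(1)[0]
print(most_common_outcome)  # -1
```

This is exactly the gap between expectation and mode the comment points at: positive expected value, negative typical outcome.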

I see post-rationality as being the continued exploration of the former project (to win, crudely, though it includes even figuring out what winning means) without constraining oneself to the boundaries of the latter.

So, um, how to win by any means necessary..? I am not sure where you want to go outside of the "boundaries of the latter".

Comment author: Vamair0 20 January 2017 11:11:52AM 1 point [-]

Rationality is a combination of keeping your map of the world as correct as you can ("epistemic rationality", also known as "science" outside of LW)

I'm not sure that's what people usually mean by science. And most of the questions we're concerned about in our lives ("am I going to be able to pay off the loan on time?") are not usually considered scientific ones.

Other than that minor nitpick, I agree.

Comment author: Psychohistorian2 05 October 2007 06:09:22AM 4 points [-]

Tiiba: Because it is very hard to read ambiguity into moral acts. One can say that six days is not meant literally (even if the original language says that - though I'm not saying it does; I don't know). One cannot say that the firstborn of Egypt were all just sleeping.

Furthermore, one cannot explain away deception. Maybe God actually made the Universe in six days but wants us to think it was longer to test our faith. Yes, that's a lousy argument, but one might conceive of it being true. As for other offenses, God makes the laws of physics, so he obeys them at his whim.

By contrast, an action making God appear evil necessarily makes him incomprehensible for many religions. If you say that God is good, and that he slaughtered innocent children, and you believe that such a slaughter is wrong, then any defense of God must change the meaning of "God is good" to something completely unrecognizable. Either good is true of God by definition (What He does is good) or it is the "big plan" strategy, in which case it is actually good but you are too stupid to understand why, meaning he is good in a way that we necessarily cannot understand.

So, to end this rambling, people pick moral attacks because they don't allow the "Well, it's obviously false, therefore, it isn't meant literally!" defense. It also attacks concepts of God on a somewhat different level.

Comment author: Vamair0 11 January 2017 06:51:37AM *  1 point [-]

If there is a heaven and the killed firstborn went there, then killing them (or anyone else, for that matter) is quite harmless. And killing is wrong for people not because it causes harm, but because God forbids it. It's a strange view, but not an obviously inconsistent one. On the other hand I've always shied away from moral attacks just because the counterargument of "So, God's not benevolent, now what? You still had to worship it for a few decades or you are going to literally burn for eternity" seemed so obvious. Like it seems pointless to argue that Dumbledore is evil when you're trying to prove he never existed.

Comment author: ImNotAsSmartAsIThinK 28 May 2016 10:36:13PM *  0 points [-]

At least this tells me I didn't make a silly mistake in my post. Thank you for the feedback.

As for your objections,

All models are wrong, some models are useful.

exactly captures my conceit. Reductionism is correct in the sense that it is, in some sense, closer to reality than anti- or contra-reductionism. Likely in a similar sense that machine code is closer to the reality of a physical computation than a .cpp file, though the analogy isn't exact, for reasons that should become clear.

I'm typing this on a laptop, which is an intricate amalgam of various kinds of atoms. Hypothetically, you could explain the positioning of the atoms in terms of dense quantum mechanical computations (or a more accurate physical theory, which would exist ex hypothesi), and/or we could explain it in terms of economics, computer science and the vagaries of my life. The former strictly contains more information than the latter, and subsumes the latter to the extent that it represents reality and contradicts it to the extent it's misleading.

At an objective level, then, the strictly reductionist theory wins on merit.

Reductionism functions neatly to explain reality-in-general, and even to explain certain orderly systems that submit to a reductionist analysis. If you want completeness, reductionism will give you completeness, at the limit. But sometimes, a simple explanation is nice. It'd be convenient to compress, to explain evolution in abstract terms.

The compression will be lossy, because we don't actually have access to reality's dataset. But lossy data is okay, and more okay the more casual the ends. Pop science books are very lossy, and are sufficient for delivering a certain type of entertainment. A full reprinting of a paper's collected data is about as lossless as we tend to get.

A lossless explanation is reductionist, and, ceteris paribus, we ought to go with the reductionist explanation. Given a choice between a less lossy, very complex explanation and a lossy but simple one, you should probably go gather more data. But failing that, you should go with the one that suits your purposes. A job where every significant digit of accuracy matters calls for the first, as an example.

Comment author: Vamair0 29 May 2016 03:33:03PM 0 points [-]

A lossless explanation is reductionist

Isn't that what people mean when they say reductionism is right?

Comment author: Vamair0 26 May 2016 10:49:21AM 1 point [-]

I think it's not so much a sum of properties as a union of property sets. If a system has a property that's not in that union, then it's "more than the sum of its components". On the other hand, I find the notion of something being "more than the sum of its parts" about as annoying as the frequent ads with the "1 + 1 = 3 Buy two and get one for free!" equation. That is, very annoying.
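The "union of property sets" reading can be made concrete with a toy sketch (the components and their properties here are invented purely for illustration):

```python
# Toy model: each component contributes a set of properties.
wheel_properties = {"rolls", "round"}
frame_properties = {"rigid", "holds parts"}

# On this view, the "sum of the parts" is the union of their property sets.
component_union = wheel_properties | frame_properties

# The assembled system may exhibit a property absent from that union.
bicycle_properties = {"rolls", "round", "rigid", "holds parts", "rideable"}

# Whatever is left over is what "more than the sum of its parts" refers to.
emergent = bicycle_properties - component_union
print(emergent)  # {'rideable'}
```

So "more than the sum of its parts" just means the set difference is non-empty; nothing mystical is needed.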

Comment author: gjm 04 January 2016 04:48:16PM 4 points [-]

I know of an old prime number that happens to end with a 2.

Comment author: Vamair0 06 May 2016 07:41:45AM 1 point [-]

How old is it, exactly?

Comment author: Vamair0 06 May 2016 07:12:33AM *  0 points [-]

It seems interesting that a lot of spiritual experiences happen in non-normal situations. To get them, people may deny themselves food or sleep, stay in the same place for a long time without moving, work themselves to exhaustion, eat poisons, go to a place of different atmospheric pressure, or do something else they don't normally do. The whole process is suspiciously similar to program testing, where you try the program in situations its creator (evolution, in the case of humans) hasn't "thought" much about. And then sometimes there are bugs. And if you don't follow the protocols for already-discovered bugs, you either risk crashing something really important or get nothing at all. Bugs are real and may give valuable information on the program's inner workings, but they're not "the final truth about the underlying reality".

The belief in the revelatory nature of spiritual experiences may be a result of a "just world" bias. When you get the reward you've been working toward for years, it's easier to believe you've understood something profound about reality than that you've discovered an error in your brain. If that's the case, then "if you spin a lot, you'll get vertigo", "if you sit on your hand long enough, there will be a strange feeling there", or "look through an autostereogram picture to see it in 3D" could all count as spiritual experiences, but they're too easy and mundane for that.

Comment author: Gram_Stone 05 May 2016 04:37:56PM -1 points [-]

I've had similar thoughts in the past few days. It does seem that utilitarianism merely prescribes the moral action, without saying anything about the goodness or badness of people. Of course, I've seen self-identifying utilitarians talk about culpability, but they seem to be quickly tacking this on without thinking about it.

Comment author: Vamair0 05 May 2016 07:16:19PM 2 points [-]

It is possible to talk about utilitarian culpability, but it becomes a question of "would blaming/punishing this (kind of) person lead to good results?". For example, you usually shouldn't blame those who can't change their behavior in response to blame, unless they self-modified to be that way, or unless their being blameless would motivate others who can... That reminds me of the Eight Short Studies On Excuses, where Yvain demonstrated an example of such an approach.

Comment author: Vamair0 05 May 2016 11:50:55AM 0 points [-]

Isn't the question of whether someone is a good or a bad person a part of virtue ethics? That is, for a utilitarian the results of the bystander's and the murderer's actions were the same, and therefore the actions were equally bad; but that doesn't mean the bystander is as bad as the murderer, because that judgment isn't part of the utilitarian framework at all. Should we implement a policy of blaming or punishing them the same way? That's a question for utilitarianism. And the answer is probably "no".

Comment author: bogus 23 April 2016 01:33:38AM 0 points [-]

I'm asking if there is a rational non-meta reason to believe they do "stop at the neck" even if we throw away all the IQ/nations data.

Of course there are. The standard argument is that the history of human evolution suggests that increased intelligence and favorable personality traits were strongly selected for, and traits which are strongly selected tend to reach fixation rather quickly.

Comment author: Vamair0 23 April 2016 08:52:06AM *  0 points [-]

But then the difference in intelligence would be almost completely shared plus non-shared environment. And twin studies suggest it's highly heritable. It also seems to be a polygenic trait, so there can be quite a lot of new mutations that haven't yet reached fixation even if it's strongly selected for.
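The twin-study heritability estimates alluded to here are often derived with Falconer's formula, h² ≈ 2(r_MZ − r_DZ). A minimal sketch, using hypothetical correlations rather than real study data:

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: estimate heritability from twin correlations,
    h^2 ~= 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the trait
    correlations for identical and fraternal twin pairs."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations for illustration only (not real study data):
h2 = falconer_heritability(r_mz=0.85, r_dz=0.60)
print(h2)  # roughly 0.5
```

The point of the comment is that if "stops at the neck" were true, r_MZ and r_DZ would be nearly equal and this estimate would come out near zero, contrary to what twin studies report.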
