
This reminds me of the bit in Steven Landsburg's (excellent) book "The Armchair Economist" in which he makes the point that data on what happens on third down in football games is a very poor guide to what would happen on third down if you eliminated fourth down.

It seems to me that what's internal about morality is the buy-in, the acceptance that I ought to care about the other fella at all. But much of what remains is external in the sense that the specific rules of morality to which I subject myself are (at least to a large extent) the product of objective reasoning.

I like the idea of a fictional sequence involving a rationality master and students. But I can't stand the Jeffreyssai character. He's just so intolerably smug and self-satisfied, very much in the mold of some of the martial arts instructors I had when I was young. More recently I took boxing classes, and the teacher was like Mickey from the Rocky movies. Much better persona; Jeffreyssai should take note.

After I explained "percentile", he said "One in three hundred", so I laughed briefly and said "Yes."

The "Yes" part is fine. The "I laughed briefly" part would be better done away with.

My sister used to be a teacher in a special education school. She would sometimes let some kids do things that other kids weren't allowed to do; a kid particularly prone to some kind of negative reaction to an otherwise mandatory activity might be allowed not to participate (I don't recall exactly). When the other kids protested that it wasn't fair, she would reply: "Fair is when everyone gets what they need, not when everyone gets the same." Not totally satisfactory, but in my mind not totally bogus either. How hungry each person is does have some bearing on what's a fair division of the pie.

It seems to me like the word "axioms" belongs in here somewhere.

Of course the feeling of love had to evolve, and of course it had to evolve from something that was not love. And of course the value of the love that we feel is not woven into the fabric of the universe; it's only valuable to us. But it's still a very happy thing that love exists, and it's also sort of a lucky thing; it is not woven into the fabric of the universe that intelligent beings (or any beings for that matter) have to have anything that feels as good as love does to us. This luck may or may not be "surprising" in the sense that it may or may not be the case that the evolution of love (or something else that feels as good to the one who feels it) is highly likely conditional on evolving intelligence. I don't know the actual answer to this, but the point is that I can at least conceive of a sense in which the existence of love might be regarded as surprising.

BTW, the same point can be made about the religious (specifically Protestant) origins of the Enlightenment. The Enlightenment wasn't always there, and it didn't fall from the sky, so it had to have its origins in something that wasn't the Enlightenment. To the extent that Protestantism had some attributes that made it fertile soil for the Enlightenment to grow from, great. But that doesn't make old-timey Protestantism liberal or good, and it certainly doesn't entitle contemporary* Protestantism to a share of the credit for the Enlightenment's achievements.

It is for this reason that Robert Aumann, super-smart as he is, should be entirely ignored when he opines about the Palestinian-Israeli conflict.

http://www.overcomingbias.com/2007/07/theyre-telling-.html

It's clear that there are some questions to which there are no (and likely never will be) fully satisfactory answers, and it is also clear that there is nothing to be done about it but to soldier on and do the best you can (see http://www.overcomingbias.com/2007/05/doubting_thomas.html). However, there really are at least a few things that pretty much have to be assumed without further examination. I have in mind the basic moral axioms, like the principle that the other guy's welfare is something that you should be at all concerned about in the first place.

I seem to recall that there is a strand of philosophy that tries to figure out what unproven axioms would be the minimum necessary foundation on which to build up something like "conventional" morality. Its practitioners felt the need to do this precisely because of the multi-century failure of philosophers to come up with a basis for morality that was unarguable "all the way down" to absolute first principles. I don't know anything about AI, but it sounds like what Eliezer is talking about here has something of the same flavor.
