Don't think "silver spoons", think "clean drinking water".
I like "we are the cards we are dealt", which expresses nicely a problem with common ideas of blame and credit. I disagree that intelligence is the unfairest card of all - I think that a relatively dim person born into affluence in the USA has a much better time of it than a smart person born into poverty in the Congo.
Interesting. There's a paradox involving a game in which players successively take a single coin from a large pile of coins. At any time a player may choose instead to take two coins, at which point the game ends and all further coins are lost. You can prove by induction that if both players are perfectly selfish, they will take two coins on their first move, no matter how large the pile is. People find this paradox impossible to swallow because they model perfect selfishness on the most selfish person they can imagine, not on a mathematically perfect selfishness machine. It's nice to have an "intuition pump" that illustrates what *genuine* selfishness looks like.
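The induction in that paradox can be run mechanically. Here is a minimal sketch (the function name and game encoding are my own, not from any standard source): compute, for each pile size, the payoffs under perfectly selfish play, working backward from a pile of one coin.

```python
def centipede(pile):
    """Backward induction on the take-1-or-take-2 coin game.

    Returns (payoff to the player about to move, payoff to the other)
    for a pile of `pile` coins, assuming both players are perfectly
    selfish and taking two coins ends the game."""
    # V[r] = (mover's payoff, other's payoff) with r coins left
    V = [(0, 0), (1, 0)]  # piles of 0 and 1: nothing to decide
    for r in range(2, pile + 1):
        a, b = V[r - 1]      # continuation, from the opponent's view
        take_one = 1 + b     # mover's total if the game continues
        if 2 >= take_one:    # ties broken toward ending the game
            V.append((2, 0))
        else:
            V.append((take_one, a))
    return V[pile]

print(centipede(1000))  # → (2, 0): the first mover grabs two coins at once
```

However large the pile, `take_one` never exceeds 2, so the first player defects immediately — exactly the conclusion human intuitions about "selfish people" refuse to deliver.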
Are you arguing that a few simple rules describe what we're all trying to get at with our morality? That everyone's moral preference function is the same deep down? That anything that appears to be a disagreement about what is desirable is actually just a disagreement about the consequences of these shared rules, and could therefore always be resolved in principle by a discussion between any two sufficiently wise, sufficiently patient debaters? And that moral progress consists of the moral zeitgeist moving closer to what those rules capture?
That certainly would be convenient for the enterprise of building FAI.
Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself - not just a being who better adheres to your current ideals, but a being with better ideals than you?
Yes I can.
If you take the view that ethics and aesthetics are one and the same, then in general it's hard to imagine how any ideals other than my own could be better than my own, for the obvious reason that I can only measure them against my own.
What interests me about the rule I propose (circular preferences are bad!) is that it is exclusively a meta-rule - it cannot measure my behaviour, only my ideals. It provides a meta-ethic that can show flaws in my current ethical thinking, but not how to correct them - it provides no guidance on which arrow in the circle needs to be reversed. And I think it covers the way in which I've been persuaded of moral positions in the past (very hard to account for otherwise), and better yet allows me to imagine that I might be persuaded of moral points in the future, though obviously I can't anticipate which ones.
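The meta-rule is mechanical enough to sketch in code. Treating pairwise preferences as edges in a directed graph, a circular preference is just a cycle, which a depth-first search can detect (the function name and encoding here are my own illustration, not a standard):

```python
def has_circular_preference(prefers):
    """Return True if the pairwise preferences contain a cycle.

    `prefers` is an iterable of (a, b) pairs meaning "a is preferred
    to b". A cycle (A > B > C > A) is exactly the flaw the meta-rule
    forbids -- though, true to the rule, finding one tells you nothing
    about which arrow to reverse."""
    graph = {}
    for a, b in prefers:
        graph.setdefault(a, set()).add(b)

    def reachable(start, target):
        # Iterative depth-first search: is `target` reachable from `start`?
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, ()))
        return False

    # A cycle exists iff some preference a > b can be answered back:
    # b is also (transitively) preferred to a.
    return any(reachable(b, a) for a, b in prefers)

print(has_circular_preference([("A", "B"), ("B", "C"), ("C", "A")]))  # True
print(has_circular_preference([("A", "B"), ("B", "C")]))              # False
```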
If I can imagine that through this rule I could be persuaded to take a different moral stance in the future, and see that as good, then I'm definitely elevating a different set of ideals - my imagined future ideals - over my current ideals.
I'm by no means sure that the idea of moral progress can be salvaged. But it might be interesting to try and make a case that we have fewer circular preferences now than we used to.
It's not known whether the Universe is finite or infinite; this article gives more details:
http://en.wikipedia.org/wiki/Shape_of_the_Universe
If the Universe is infinite, then it has always been so even from the moment after the Big Bang; an infinite space can still expand.
It hadn't quite sunk in until this article that, looked at from a sum-over-histories point of view, only identical configurations interfere; that makes decoherence much easier to understand.
Would this get easier or harder if you started with, say, gliders in Conway's Life?
- There's a huge conspiracy covering it up
- Well, that's just what one of the Bad Guys would say, isn't it?
- Why should I have to justify myself to you?
- Oh, you with your book-learning, you think you're smarter than me?
- They said that to Einstein and Galileo!
- That's a very interesting question, let me show you the entire library that's been written about it (where, if there were a satisfactory answer, it would be short)
- How can you be so sure?