Comment author: jsalvati 08 December 2008 08:50:30PM 0 points [-]

OK, Tim Tyler's link is interesting. I don't know very much about evolution (basically what I've read here plus a little bit); can someone who knows more say whether this is an idea worth paying attention to? And if it's not, why is it confused?

In response to Thanksgiving Prayer
Comment author: jsalvati 28 November 2008 06:24:17AM 0 points [-]

haha, that's great!

Comment author: jsalvati 24 October 2008 11:03:38PM 0 points [-]

Didn't we already have this exact post?

Comment author: jsalvati 24 September 2008 04:54:01AM 1 point [-]

Maybe people have an instinct to preserve their former strategies, because doing so often works. If you find out a new fact, you don't usually have to abandon your whole set of beliefs. Are view-shattering facts and arguments more common for abstract issues?

Comment author: jsalvati 19 September 2008 04:39:47AM -2 points [-]

"you cannot come up with clever reasons why the gaps in your model don't matter." Sure, sometimes you can't, but sometimes you can; sometimes there are things which seem relevant but which are genuinely irrelevant, and you can proceed without understanding them. I don't think it's always obvious which is which, but of course it's a good idea to worry about falsely putting a non-ignorable concept into the "ignorable" box.

Comment author: jsalvati 15 September 2008 04:50:14AM 2 points [-]

Excellent analogy, TGGP (and I say that as a meat eater).

Comment author: jsalvati 19 August 2008 10:37:29PM 0 points [-]

IL: My understanding was that Terminal Values are not something you ever observe directly (nobody can simply list their Terminal Values). Moral arguments change what we use as our approximation to the Moral Calculation. However, if moral arguments did make our actual moral calculations diverge (that is, if our actual moral calculation is not a state function with respect to moral arguments), then that would disprove Eliezer's meta-ethics (along with any hope for a useful notion of morality, it seems to me).

In response to Dumb Deplaning
Comment author: jsalvati 19 August 2008 01:27:38AM 0 points [-]

I haven't thought about this in-depth, but I almost always wait a while before I try to get off the plane.

Comment author: jsalvati 16 August 2008 05:52:53AM 1 point [-]

Moreover, even if they did have moralities, they would probably be very, very different moralities, which means that the act of doing opposing things does not mean they are disagreeing; they are just maximizing for different criteria. The only reason it's useful to talk about humans disagreeing is that it is very likely that we are optimizing for the same criteria if you look deep enough.