habanero
habanero has not written any posts yet.

Ok, let's do some basic friendly AI theory: would a friendly AI lexically discount the welfare of "weaker" beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our own game here, since our decisions can likewise result in bad things for them.
My bad about the ritual. Thanks. Out of interest regarding your preferences: imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let's assume here that both individuals contain the same number of atoms) so that the grandmother more and...
That seems a little ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don't. People often come up with lexical constructs when they feel uncomfortable with the anticipation of having to change their behaviour. As a consequentialist, I figured out that I care a bit about dog welfare, and being aware of my scope insensitivity, I can see why some people dislike biting the bullet that results from simple additive reasoning. An option would be, though, to say that one's brain (and therefore one's moral framework) is only capable of a certain amount...
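For what it's worth, here is a purely illustrative sketch of the additive point, with made-up weights that are not a claim about anyone's actual values: any non-zero per-dog weight, multiplied by enough dogs, eventually exceeds any fixed weight on the grandmother, whereas a lexical rule never flips.

```python
# Toy comparison of additive vs. lexical aggregation of suffering.
# All weights are invented for illustration.

GRANDMOTHER_WEIGHT = 1_000.0   # how much I care about the grandmother's torture
DOG_WEIGHT = 0.001             # tiny but non-zero care per tortured dog

def additive_disutility(n_dogs: int) -> float:
    # Simple additive aggregation over n tortured dogs.
    return n_dogs * DOG_WEIGHT

def lexical_outweighs(n_dogs: int) -> bool:
    # Under a lexical rule, no number of dogs ever outweighs the grandmother.
    return False

n = int(GRANDMOTHER_WEIGHT / DOG_WEIGHT) + 1
print(additive_disutility(n) > GRANDMOTHER_WEIGHT)  # True: 1,000,001 dogs suffice
print(lexical_outweighs(n))                         # False, no matter how large n is
```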
However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.
This sounds a bit like the dust speck vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think we are dealing with scope insensitivity there. On utilitarian aggregation, I recommend section V of the following paper. It shows why the alternatives are absurd. http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf
Hello everyone!
I'm 21 years old and study medicine plus Bayesian statistics and economics. I've been lurking LW for about half a year and I now feel sufficiently updated to participate actively. I highly appreciate this high-quality gathering of clear thinkers working towards a sane world. Therefore I often pass LW posts on to people with promising predictors in order to shorten their inferential distance. I'm interested in fixing science, Bayesian reasoning, future scenarios (how likely is dystopia, i.e. astronomical amounts of suffering?), machine intelligence, game theory, decision theory, reductionism (e.g. of personal identity), population ethics and cognitive psychology. Thanks for all the lottery winnings so far!
It seems to me that we often treat EDT decisions with some sort of hindsight bias. For instance, given that we know that the action A (turning on sprinklers) doesn't increase the probability of the outcome O (rain), it looks very foolish to do A. Likewise, a DT that suggests doing A may look foolish. But isn't the point here that the deciding agent doesn't know that? All he knows is that P(E|A) > P(E) and P(O|E) > P(O), where E is the intermediate evidence (here, wet grass). Of course A still might have no or even a negative causal effect on O, but we nevertheless have more reason to believe otherwise. To illustrate that, consider the following scenario:
Imagine you find yourself in a white...
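As a minimal sketch (not anything from the original comment) of how an agent with only this correlational information might score the two actions, here is a toy EDT-style calculation; all probabilities and utilities are invented for illustration, and the chaining step encodes the very assumption the agent cannot verify:

```python
# Toy EDT-style scoring of "turn on sprinklers" (A) from correlational
# information only: P(E|A) > P(E) and P(O|E) > P(O), with E = wet grass
# and O = rain. All numbers are invented for illustration.

p_E_given_A = 0.95      # sprinklers almost always wet the grass
p_E_given_notA = 0.30   # base rate of wet grass otherwise
p_O_given_E = 0.60      # rain is more likely given wet grass
p_O_given_notE = 0.10

UTILITY_RAIN = 10.0     # the agent wants rain
COST_SPRINKLERS = 1.0   # small cost of running the sprinklers

def p_outcome(action: bool) -> float:
    # Chain the correlations, treating O as depending on A only through E.
    # This screening-off assumption is exactly what the agent cannot check.
    p_e = p_E_given_A if action else p_E_given_notA
    return p_e * p_O_given_E + (1 - p_e) * p_O_given_notE

def expected_utility(action: bool) -> float:
    return p_outcome(action) * UTILITY_RAIN - (COST_SPRINKLERS if action else 0.0)

for action in (True, False):
    print(f"A={action}: E[U] = {expected_utility(action):.2f}")
# With these numbers the agent prefers A, even though causally the
# sprinklers do nothing for rain; it simply has no way of knowing that yet.
```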