habanero

Comments

It seems to me that we often judge EDT decisions with some sort of hindsight bias. For instance, given that we know that the action A (turning on the sprinklers) doesn't increase the probability of the outcome O (rain), it looks very foolish to do A. Likewise, a decision theory that suggests doing A may look foolish. But isn't the point here that the deciding agent doesn't know that? All he knows is that P(E|A) > P(E) and P(O|E) > P(O), where E is some intermediate evidence (wet grass, say). Of course A might still have no causal effect on O, or even a negative one, but given only that knowledge we have more reason to believe otherwise. To illustrate this, consider the following scenario:

Imagine you find yourself in a white room with one red button. You have no idea why you're there or what this is all about. During the first half hour you are undecided about whether to press the button. Finally your curiosity dominates other considerations and you press it. Immediately you feel a blissful release of happiness hormones. If you use induction, it seems plausible to infer that, considering certain time intervals (e.g. of one minute), P(bliss|button) > P(bliss). Now the effect has worn off and you wish to be shot up again. Is it rational to press the button a second time? I would say yes, and I don't think that this is controversial. And since we can repeat the pattern with further variables, it should also work with the example above.
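To make the induction step concrete, here is a minimal sketch of the interval-based estimate, with made-up counts (30 button-free minutes without bliss, one pressed minute with bliss) and Laplace smoothing so the tiny sample still yields proper probabilities:

```python
# Hypothetical counts for the white-room example: 30 one-minute intervals
# with no button press and no bliss, then 1 interval with a press and bliss.
no_press_intervals = 30
no_press_bliss = 0
press_intervals = 1
press_bliss = 1

# Laplace smoothing (add-one) keeps the estimates away from 0 and 1.
p_bliss = (no_press_bliss + press_bliss + 1) / (no_press_intervals + press_intervals + 2)
p_bliss_given_press = (press_bliss + 1) / (press_intervals + 2)

print(f"P(bliss)        = {p_bliss:.3f}")          # about 0.06
print(f"P(bliss|button) = {p_bliss_given_press:.3f}")  # about 0.67
# With these numbers P(bliss|button) clearly exceeds P(bliss), which is all
# the agent has to go on when deciding whether to press the button again.
```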

From that point of view it doesn't seem foolish at all, to return to the sprinkler, to have a non-zero credence in A (turning the sprinkler on) increasing the probability of O (rain). In a situation with that little knowledge and no further counter-evidence (which might, for instance, suggest that A has no influence on O, or a negative one), this should lead an agent to do A.

Considering the doctor again, I think we have to stay clear about what the doctor actually knows. Let's imagine a doctor who has lost all his knowledge about medicine. Now he reads one study which shows that P(Y|A) > P(Y). It seems to me that, given that piece of information (and only that!), a rational doctor shouldn't do A. However, once he reads the next study he can figure out that C (the trait "cancer") is confounding the previous assessment, because most patients who are treated with A have C as well, whereas most untreated patients don't. This update (depending on the respective probability values) will then lead to a shift favoring the action A again.
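A small numerical sketch of that confounding (the counts are invented, not from any real study): overall, patients treated with A look worse off, yet within each cancer stratum the treatment helps.

```python
# Hypothetical counts: Y is the bad outcome, A the treatment, C = cancer.
data = {
    # (has_cancer, treated): (patients, bad_outcomes)
    (True,  True):  (90, 45),   # cancer, treated with A
    (True,  False): (10,  6),   # cancer, untreated
    (False, True):  (10,  1),   # no cancer, treated with A
    (False, False): (90, 18),   # no cancer, untreated
}

def rate(rows):
    """Fraction of bad outcomes among the given (C, A) cells."""
    n = sum(data[r][0] for r in rows)
    y = sum(data[r][1] for r in rows)
    return y / n

p_y = rate(data)                             # P(Y)  = 0.35
p_y_a = rate([r for r in data if r[1]])      # P(Y|A) = 0.46
print(f"P(Y) = {p_y:.2f}, P(Y|A) = {p_y_a:.2f}  <- A looks harmful overall")

for c in (True, False):
    treated = rate([(c, True)])
    untreated = rate([(c, False)])
    print(f"cancer={c}: P(Y|A,C)={treated:.2f}, P(Y|~A,C)={untreated:.2f}"
          "  <- A helps within each group")
```

Once the doctor learns the stratified numbers, conditioning on C reverses the naive comparison, which is the shift back towards doing A described above.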

To summarize: I think many objections against EDT fail once we really clarify what the agent knows in each case. In scenarios with little knowledge, EDT seems to give the right answers. Once we add further knowledge, an EDT agent updates his beliefs and won't turn on the sprinkler in order to increase the probability of rain. As we know from the hindsight bias, it can be difficult to really imagine what would be different if we didn't know what we know now.

Maybe that's all riddled with flaws, so if you find some, please hand me the lottery tickets ;)

Ok, let's do some basic friendly AI theory: would a friendly AI lexically discount the welfare of "weaker" beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, since our decisions can correspondingly result in bad things for them.

My bad about the ritual. Thanks. Out of interest about your preferences: imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let's assume here that both individuals contain the same number of atoms) so that the grandmother transforms more and more into the dog (of course there will be several weird intermediary stages). Because the scientist knows his experiment very well, neither of them will die; in the end it will look as if the two have changed places. At which point does the grandmother stop counting lexically more than the dog? Sometimes continuity arguments can be defeated by saying: "No, I don't draw an arbitrary line; I adjust gradually, so that in the beginning I care a lot about the grandmother and in the end just very little about the remaining dog." But I think that this argument doesn't work here, because we are dealing with a lexical prioritization. How would you act in such a scenario?

That seems a little bit ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don't. People often come up with lexical constructs when they feel uncomfortable with the anticipation of having to change their behaviour. As a consequentialist, I figured out that I care a bit about dog welfare, and being aware of my scope insensitivity, I can see why some people dislike biting the bullet which follows from simple additive reasoning. An option would be, though, to say that one's brain (and therefore one's moral framework) is only capable of a certain amount of caring for dogs, and that this amount is independent of the number of dogs. For me that wouldn't work, though, since I care about the content of sentient experience in an additive way. But for the sake of the argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose it should figure out which mechanisms in the universe deserve moral concern? And what would you think of the agent's morality if it discounted your welfare lexically?

However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate.

This sounds a bit like the dust specks vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think that there we are dealing with scope insensitivity. On utilitarian aggregation I recommend section V of the following paper, which shows why the alternatives are absurd: http://spot.colorado.edu/~norcross/2Dogmasdeontology.pdf
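For what it's worth, the additive arithmetic itself is trivial; here is a toy sketch with entirely made-up disutility numbers, just to show that however small the per-speck harm, some finite number of specks crosses the threshold:

```python
# All numbers hypothetical, purely to illustrate additive aggregation.
speck_disutility = 1e-9      # assumed harm of one dust speck
torture_disutility = 1e7     # assumed harm of fifty years of torture

breakeven = torture_disutility / speck_disutility
print(f"{breakeven:.0e} specks already outweigh the torture")  # 1e+16
# The lexical view has to deny this for every finite number of specks.
```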

Hello everyone!

I'm 21 years old and study medicine plus Bayesian statistics and economics. I've been lurking on LW for about half a year, and I now feel sufficiently updated to participate actively. I highly appreciate this high-quality gathering of clear thinkers working towards a sane world. Therefore I often pass LW posts on to people with promising predictors in order to shorten their inferential distance. I'm interested in fixing science, Bayesian reasoning, future scenarios (how likely is dystopia, i.e. astronomical amounts of suffering?), machine intelligence, game theory, decision theory, reductionism (e.g. of personal identity), population ethics and cognitive psychology. Thanks for all the lottery winnings so far!