All of D227's Comments + Replies

D227-10

From a utilitarian perspective it doesn't matter how many people we divide up N * K among, be it ten or some Knuth up-arrow abomination, as long as the resulting suffering can register as suffering.

I agree with this statement 100%. That was the point of my TvCR thought experiment: people who obviously picked T should again pick T. No one except one commenter actually conceded this point.

The fewer slices we use, the more our flawed moral intuitions take notice of them and the more commensurate they look; actually, for small numbers of subjects it

... (read more)
1[anonymous]
The thing is, thought experiments are supposed to illustrate something. Right now, your proposed thought experiment is illustrating "we have trouble articulating our thoughts about rape" which is (1) obvious and (2) does not need most of the machinery in the thought experiment.
D227-10

Grognor,

Thanks for your reply. You are right that you are consistent, as you did admit in your second scenario that you would let the sickos have their fun.

I would like to continue the discussion of why my problem is wrong in a friendly and respectful way, but the negative scores really are threatening my ability to post, which is quite unfortunate.

D22700

Torture vs. Dust Specks attempts to illustrate scope insensitivity in ethical thought by contrasting a large unitary disutility against a fantastically huge number of small disutilities.

Your Ten Very Committed Rapists example (still not happy about that choice of subject, by the way) throws out scope issues almost entirely. Ten subjects vs. one subject is an almost infinitely more tractable ratio than 3^^^3 vs. one, and that allows us to argue for one option or another by discounting one of the options for any number of reasons.

I do sincerely apologize... (read more)

1Nornagest
I just went over how the scenarios differ from each other in considerable detail. I could repeat myself in grotesque detail, but I'm starting to think it wouldn't buy very much for me, for you, or for anyone who might be reading this exchange. So let's try another angle.

It sounds to me like you're trying to draw an ethical equivalence between dust-subjects in TvDS and rapists in TvCR: more than questionable in real life, but I'll grant that level of suffering to the latter for the sake of argument. Also misses the point of drawing attention to scope insensitivity, but that's only obvious if you're running a utilitarian framework already, so let's go ahead and drop it for now.

That leaves us with the mathematics of the scenarios, which do have something close to the same form. Specifically: in both cases we're depriving some single unlucky subject of N utility in exchange for not withholding N * K utility divided up among several subjects for some K > 1. At this level we can establish a mapping between both thought experiments, although the exact K, the number of subjects, and the normative overtones are vastly, sillily different between the two.

Fine so far, but you seem to be treating this as an open-and-shut argument on its own: "you surely would not let the victim [suffer]". Well, that's begging the question, isn't it? From a utilitarian perspective it doesn't matter how many people we divide up N * K among, be it ten or some Knuth up-arrow abomination, as long as the resulting suffering can register as suffering. The fewer slices we use, the more our flawed moral intuitions take notice of them and the more commensurate they look; actually, for small numbers of subjects it starts to look like a choice between letting one person suffer horribly and doing the same to multiple people, at which point the right answer is either trivially obvious or cognate to the trolley problem depending on how we cast it. About the only way I can make sense of what you're say…
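To make the arithmetic here concrete: a minimal sketch, assuming simple additive utilities (the values of N, K, and the slice counts are invented for illustration), showing that the aggregate disutility of the spread option depends only on N * K, not on how many subjects it is divided among:

```python
from fractions import Fraction

def total_disutility(per_subject_loss, num_subjects):
    """Aggregate suffering, assuming losses simply add across subjects."""
    return per_subject_loss * num_subjects

N = Fraction(100)   # utility withheld from the single unlucky subject
K = Fraction(2)     # any K > 1

one_victim = total_disutility(N, 1)
for slices in (10, 10**6, 10**100):   # ten subjects or an up-arrow abomination
    spread = total_disutility(N * K / slices, slices)
    assert spread == N * K and spread > one_victim   # total is invariant to slicing

print(one_victim, N * K)   # 100 200: the totals decide it, not the slicing
```

The caveat in the comment above is the clause "as long as the resulting suffering can register as suffering": once a slice is too small to register, the additivity assumption in `total_disutility` breaks down.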
D227-10

Richard

I sincerely appreciate your reply. Why do we accept Omega in Eliezer's thought experiment and not mine? In the original, some people claim to obviously pick torture, yet are unwilling to pick rape. Why? Well, like you said, you refuse to believe that the rapists suffer. That is fair. But if that is fair, then Bob might refuse to believe that people with specks in their eyes suffer as well...

You cannot assign rules for one and not the other.

All you're saying is "suppose rape were actually good"? Well, suppose away. So what?

Not true. ... (read more)

0Richard_Kennaway
I also "refuse" to believe that the Earth is flat -- or to put it more accurately, I assert that it is false. The difference is that Bob would be wrong. Making random shit up and saying "what if this?", "what if that?" doesn't make for a useful discussion. Then again, I am not a utilitarian, so I have no problem with saying that the more someone wants to do an evil thing, the more they should be prevented from doing it.
D227-20

Then you are not consistent. For example, you are willing to allow suffering because 50 years of torture is less than the 3^^^3 dust holocaust. You claim that suffering is suffering, yet only 10 deprived rapists already have you changing your thoughts.

I do not have an answer. If anything I would consider myself a weak dust specker. The only thing I claim is that I am not arrogant and that I am consistent in my stance. I do not know the answer but am willing to explore the dilemma of torture vs. specks, and rape vs. deprived rapists. Torture is rape is i... (read more)

0Grognor
You must have missed the part of my response where I say that given your premises, yes, I choose to let the fucking rapists commit the crime. The rest of my post just details how your premises are wrong. I am internally consistent.

Your comment was saying that "if you change your answer here, it shows that you are not consistent." I replied with reasons that this is not true, and you replied by continuing on the premise that it is true. No! You do not get to decide whether I'm consistent! See also this comment, which deserves a medal.

Your problem is wrong, which is why you're coming to this incorrect conclusion that I am inconsistent.
D22700

Unfortunately it looks like the lines between them have gotten a little blurry.

I will consider this claim if you can show me how it is really different.

I have taken considerable care to construct a problem in which we are indeed dealing with trading suffering for potentially more suffering. It does not affect me one bit that the topic has switched from specks to rape. In fact, if "detraction" happens, shouldn't it be the burden of the person who feels detracted to explain it? I merely ask for consistency.

In my mind I choose t... (read more)

2Nornagest
Torture vs. Dust Specks attempts to illustrate scope insensitivity in ethical thought by contrasting a large unitary disutility against a fantastically huge number of small disutilities. And I really do mean fantastically huge: if the experiences are ethically commensurate at all (as is implied by most utilitarian systems of ethics), it's large enough to swamp any reasonable discounting you might choose to perform for any reason. It also has the advantage of being relatively independent of questions of "right" or "deserving": aside from the bare fact of their suffering, there's nothing about either the dust-subjects or the torture-subject that might skew us one way or another. Most well-reasoned objections to TvDS boil down to finding ways to make the two options incommensurate.

Your Ten Very Committed Rapists example (still not happy about that choice of subject, by the way) throws out scope issues almost entirely. Ten subjects vs. one subject is an almost infinitely more tractable ratio than 3^^^3 vs. one, and that allows us to argue for one option or another by discounting one of the options for any number of reasons. On top of that, there's a strong normative component: we're naturally much less inclined to favor people who get their jollies from socially condemned action, even if we've got a quasi-omniscient being standing in front of us and saying that their suffering is large and genuine.

Long story short, about all these scenarios have in common is the idea of weighing suffering against a somehow greater suffering. Torture vs. Dust Specks was trying to throw light on a fairly specific subset of scenarios like that, of which your example isn't a member. Nozick's utility monster, by contrast, is doing something quite a lot like you are, i.e. leveraging an intuition pump based on a viscerally horrible utilitarian positive. I don't see the positive vs. negative utility distinction as terribly important in this context, but if it bothers you you could easily…
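For a sense of why the 3^^^3 ratio "swamps any reasonable discounting" while ten vs. one does not, here is a small sketch of Knuth's up-arrow notation (the recursion is the standard definition; only the smallest values are actually computable):

```python
def up_arrow(a, n, b):
    """Knuth up-arrow a (n arrows) b: n=1 is plain exponentiation,
    and each additional arrow iterates the level below it."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s roughly 7.6 trillion levels tall,
# so up_arrow(3, 3, 3) is uncomputable in practice. No discount factor anyone
# could defend bridges a gap that size, whereas 10 vs. 1 is bridged trivially.
```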
D227-30

If you really understood how much torture 3^^^3 dust specks produces...

You make a valid point. I will not deny that you have a strong point. All I ask is that you remain consistent in your reasoning. I have reposted a thought experiment; please tell me what your answer is:

Omega has given you the choice to allow or disallow 10 rapists to rape someone. Why 10 rapists? Omega knows the absolute utility across all humans, and unfortunately, as terrible as it sounds, the suffering/torturing of 10 rapists not being able to rape is mor... (read more)

0wedrifid
Errr... I don't care? I'm not a utilitarian. Utilitarian morals are more or less abhorrent to me. (So in conclusion I'd tell the rapists that they can go fuck themselves. But under no circumstances can they do the same to anyone else without consent.)
-2DanielLC
Please don't use loaded words like that. It's not worthwhile to let ten people rape someone. By using that word, you're bringing in connotation that doesn't apply.

Bad things happening can't result in less suffering than no bad things happening, unless you allow negative suffering. In the example you gave, we can either choose for one person to suffer, or for ten. We must allow bad things to happen because they were the only options. There is no moral pattern or decision theory that can change that.

I'd go with allowing the "rape". This situation is no different than if there were one rapist, ten victims, and the happiness from the rapist was less than the sadness from the victims. I'd hurt the fewer to help the many.
1ArisKatsaris
So is there an actual reason that you chose a topic as emotionally fraught (and thus mind-killing) as rape, and at the same time created a made-up scenario where we're asked to ignore anything we know about "rape" by being forced to use not our own judgment but Omega's on what constitutes utility?

And anyway, I think people misunderstand the purpose of utility. Daniel acts according to his own utility function. That function isn't obliged to have a positive factor on the rapists' utility; it may very well be negative. If said factor is negative, then the more utility the rapists get out of their rape, the less he's inclined to let them commit it.
D227-30

That is the crux of the problem. Bob understands what 3^^^3 is just as much as you claim you do. Yet he chooses the "Dust Holocaust".

First, let me assume that you, peter_hurford, are a "Torturer", or rather, that you are from the camp that obviously chooses the 50 years. I have no doubt in my mind that you bring extremely rational and valid points to this discussion. You are poking holes in Bob's reasoning at its weakest points. This is a good thing.

I whole-heartedly concede that you have compelling points, by poking holes into... (read more)

2Peter Wildeford
My very first reaction would be to say that you've stated a counterfactual... rape will never directly produce more utility than disutility. So the only way it could be moral is if, somehow, unbeknown to us, this rape will prevent the next Hitler from rising to power in some butterfly-effect-y way that Omega knows of.

I have to trust Omega if he's by definition infallible. If he says the utility is higher, then we still maximize it. It's like you're asking "Do you do the best possible action, even if the best possible action sounds intuitively wrong?"
4Nornagest
You've essentially just constructed a Utility Monster. That's a rather different challenge to utilitarian ethics than Torture vs. Dust Specks, though; the latter is meant to be a straightforward scope insensitivity problem, while the former strikes at total-utility maximization by constructing an intuitively repugnant situation where the utility calculations come out positive. Unfortunately it looks like the lines between them have gotten a little blurry.

I'm really starting to hate thought experiments involving rape and torture, incidentally; the social need to signal "rape bad" and "torture bad" is so strong that it often overwhelms any insight they offer. Granted, there are perfectly good reasons to test theories on emotionally loaded subjects, but when that degenerates into judging ethical philosophy mostly by how intuitively benign it appears when applied to hideously deformed edge cases, it seems like something's gone wrong.
0Grognor
There are two major problems with your proposition. One is that Omega appears to be lying in this problem, very simply. In the universe where he isn't lying, though...

I'm partly what you'd call a "negative utilitarian". That's minimize suffering first, then maximize joy. It does not appear to me that the suffering, for a small number of hedonists (like, say, the number of rapists on the planet), of not being able to rape people is greater than the suffering that would be inflicted if they had their way. If you accept those premises I just put forward, then you understand that my choice is to stop the rapists for utilitarian reasons, and also because I don't want them to do this again.

So okay, least-convenient-possible-world time. Given that they won't cause any additional suffering after this incident, and given that their suffering from not being able to commit rape is greater than the victim's (why this would be true I have no idea), then sure, whatever, let them have their fun shortly before their logically ridiculous universe is destroyed, because the consequences of this incident as interpreted by our universe would not occur.

I hope this justifies my position from a utilitarian standpoint, though I do have deontological concerns about rape. It's one of those things that seems to Actually Be Unacceptable, but I hope I've put this intuition sufficiently aside to address your concerns.

One more thing... It kind of pisses me off that people still bring up the torture vs. dust specks thing. From where I stand, the debate is indisputably settled. But, ah, I guess you might call that "arrogance". But whatever.
2Richard_Kennaway
The problem with your problem is that it is wrong. You have Omega asserting something we have good reason to disbelieve. You might as well have Omega come in and announce that there is an entity somewhere who will suffer dreadfully if we don't start eating babies.

All you're saying is "suppose rape were actually good"? Well, suppose away. So what?

Do you see the difference between your Omega and the one who poses Newcomb's problem?
D22710

Are you familiar with prospect theory?

No, but I will surely read up on that now.

You seem to be describing what you (an imperfectly rational agent) would choose, simply using "PVG" to label the stuff that makes you choose what you actually choose, and you end up taking probability into consideration in a way similar to prospect theory.

Absolutely. In fact, I can see how a theist will simply say, "it is my PVG to believe in God, therefore it is rational for me to do so."

I do not have a response to that. I will need to learn more before I can work this out in my head. Thank you for the insightful comments.

D22700

Having read Influence, The Prince, and The 48 Laws of Power, I found Cialdini's book the most satisfying to read because it was filled with empirical research. The latter two books were no doubt excellent reads, however anecdotal. Also, Influence is presented in the least "dark arts" way of the three. The book is about learning to stay ahead of influence just as much as it is about influencing.

D22700

Thank you for your response. I believe I understand you correctly; I made a response to Manfred's comment in which I reference your response as well. Do you believe I interpreted you correctly?

An agent that has an empathetic utility function will edit its own code if and only if doing so maximizes the expected utility of that same empathetic utility function. Do I get your drift?

2Matt_Simpson
I think that's right, though just to be clear an empathetic utility function isn't required for this behavior. Just a utility function and a high enough degree of rationality (and the ability to edit its own source code).
2MileyCyrus
Put another way: Suppose an agent has a utility function X. It can modify its utility function to become Y. It will only make the switch from X to Y if it believes that switching will ultimately maximize X. It will not switch to Y simply because it believes it can get a higher amount of Y than of X.
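A toy sketch of that evaluation (the paperclip/staple utilities and the predicted worlds are invented for illustration): the agent scores both futures with its current function X, never with the candidate Y:

```python
# Predicted futures, depending on which utility function the agent runs on.
world_if_keep_x  = {"paperclips": 10, "staples": 0}
world_if_adopt_y = {"paperclips": 2,  "staples": 50}

def utility_x(world):   # current function: cares only about paperclips
    return world["paperclips"]

def utility_y(world):   # candidate function: cares only about staples
    return world["staples"]

def accepts_switch(current_utility, world_if_stay, world_if_switch):
    """Score BOTH futures under the CURRENT function; the candidate
    function gets no vote on whether it is adopted."""
    return current_utility(world_if_switch) > current_utility(world_if_stay)

print(accepts_switch(utility_x, world_if_keep_x, world_if_adopt_y))
# False: switching buys a lot of Y-value (staples) but loses X-value
# (paperclips), so an X-maximizer refuses the modification.
```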
D22720

If Bob cares about cute puppies, then Bob will use his monstrous intelligence to bend the energy of the universe towards cute puppies. And love and flowers and sunrises and babies and cake.

I follow you. It resolves my question of whether or not rationality + power necessarily involves terrible outcomes. I had asked whether a perfect rationalist, given enough time and resources, would become perfectly selfish. I believe I understand the answer to be no.

Matt_Simpson gave a similar answer:

Suppose a rational agent has the ab

... (read more)
1wedrifid
Indeed. The equation for terrible outcomes is "rationality + power + asshole" (where 'asshole' is defined as the vast majority of utility functions, which will value terrible things). The 'rationality' part is optional to the extent that you can substitute it with more power. :)
D22700

Where would one go to read more about modafinil?

I have read Wikipedia and Erowid.

If you were to assign a percentage to how much all-around "better" you feel when you are on it, what would it be? For example, 10% better than off? 20%, 30%?

3Logos01
I am very frequently uncomfortable assigning percentages to non-inherently-numerical observations, as humans are notoriously poor judges of probability. That being said, the citation list and "external links" entry for Modafinil on Wikipedia is very extensive. It might also help to follow through with the same on Adrafinil, as the latter is less politicized at this point.

tl;dr version of the below: It's not about feeling "all around better"; it's about having control over my productivity cycles, and being able to adapt to alternative cycles of alertness.

The thing about modafinil is that it does not produce euphoric sensation. It's not that you "feel" anything in particular -- if anything, the frequency of headaches (a common side effect) is greater, so there's a real argument that it makes you "feel" worse. In contrast, however, it also prevents the onset of mental and physical fatigue. Given the 12-hour metabolic half-life, this has a more prolonged noticeable impact than caffeine does (at least for me) in terms of whatever "pool of reserves" cognitive load drains; that is, it takes less effort to stay focused, and one experiences far less "grogginess".

So in terms of allowing me the ability to retain alertness over prolonged periods without experiencing fatigue, it does very well. I have been known to go as long as five days without sleep (longest instance to date; there were external extenuating circumstances requiring this) without significant deleterious effects. Prolonged periods do require either escalating dosage or accepting decline in cognitive function (similar to being drunk; I've noticed a high correlation between how I behave after a 48-hour period and how those with 'a light buzz' behave in terms of inhibition control and reflex response, aside from the window of peak onset from dosage).

Under my regular dosage regimen I frequently sleep roughly three hours per day on-dose and then for twelve hours the day after the dosage window, followed by "norma…
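As a rough illustration of what the 12-hour half-life means in practice, a sketch assuming simple first-order elimination (the ~5-hour figure for caffeine is a commonly cited approximation, used here only for contrast):

```python
def remaining_fraction(hours, half_life):
    """Fraction of a dose still circulating after `hours`,
    assuming simple exponential (first-order) elimination."""
    return 0.5 ** (hours / half_life)

for t in (6, 12, 24):
    print(t, "h:",
          round(remaining_fraction(t, 12), 2),   # modafinil: 0.71, 0.5, 0.25
          round(remaining_fraction(t, 5), 2))    # caffeine:  0.44, 0.19, 0.04
```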
D22780

I'm a 28-year-old male in the SF area, previously from NYC.

This site is intimidating, and I think there are many more just like me who are intimidated to introduce themselves because they might not feel they are as articulate or smart as some of the people on this forum. There are some posts so well written that I couldn't write them in 100 years. There is so much information that it seems overwhelming. I want to stop lurking and to invite others to join too. I'm not a scientist and I didn't study AI in college; I just want to meet good people and so do yo... (read more)