All of PlaidX's Comments + Replies

PlaidX60

I don't like contentless discussions of art either, but spewing paragraph after paragraph of awkward, stilted jargon about your hypothetical personal feelings isn't content, especially when they relate to a movie you haven't even seen!

If my friend says "That movie sucked", and I disagree, I ask "why".

If my friend says "I liked the animation, but the timing is terrible. Everyone telegraphs their reactions", that's a discussion of the film that's actually going somewhere.

If my friend says "Like everyone, I enjoy the physica... (read more)

6[anonymous]
.
0[anonymous]
.
PlaidX170

The latter. Actually, I guess I still consume a lot of unknown things, but now almost exclusively online, where when the thing sucks, you can instantly move on to something else.

Much better to download a movie and watch five minutes of it and delete it than to coordinate going to the theater with someone, buy overpriced popcorn, watch a bunch of ads, then sit through an hour and a half of something you don't really like.

I can't really tell whether this is me failing to appreciate some aspect of human experience, or just that the way people tend to do things is stupid.

8Dorikka
Or you just have different preferences from some other people.
PlaidX20

Yeah, really what I find to be the ugliest thing about lesswrong by far is the sense of self-importance, which contributed to the post deletion quite a bit as well.

Maybe it's the combination of these factors that's the problem. When I read mainstream philosophical discourse about pushing a fat man in front of a trolley, it just seems like a goofy hypothetical example.

But lesswrong seems to believe that it carries the world on its shoulders, and that when they talk about deciding between torture and dust specks, or torture and alien invasion, or torture and... (read more)

1Zetetic
Do you think that maybe it could also be tied up with this sort of thing? Most of the ethical content of this site seems to be heavily related to the sort of approach Eliezer takes to FAI. This isn't surprising. Part of the mission of this site is to proselytize the idea that FAI is a dire issue that isn't getting anywhere near enough attention. I tend to agree with that idea. Existential risk aversion is really the backbone of this site. The flow of conversation is driven by it, and you see its influence everywhere. The point of being rational in the Lesswrongian sense is to avoid rationalizing away the problems we face each and every day, to escape the human tendency to avoid difficult problems until we are forced to face them. In any event, my main interest in this site is inexorably tied in with existential risk aversion. I want to work on AGI, but I'm now convinced that FAI is a necessity. Even if you disagree with that, it is still the case that there are going to be many ethical dilemmas coming down the pipe as we gain more and more power to change our environment and ourselves through technology. There are many more ways to screw up than there are to get it right. This is all there is to it; someone is going to be making some very hard decisions in the relatively near future, and there are going to be some serious roadblocks to progress if we do not equip people with the tools they need to sort out new, bizarre and disorienting ethical dilemmas. This I believe to likely be the case. We have extreme anti-aging, nanotech and AGI to look forward to, to name only a few. The ethical issues that come hand in hand with these sorts of technologies are immense and difficult to sort out. Very few people take these issues seriously; even fewer are trying to actually tackle them, and those who are don't seem to be doing a good enough job. It is my understanding that changing this state of affairs is a big motive behind lesswrong. Maybe lesswrong isn't all that it sh
0wedrifid
I resent the suggestion that I instinctively think of 3^^^3 dust specks! I have to twist my cortex in all sorts of heritage violating imaginative ways to come up with the horrible things I like to propose in goofy hypotheticals! I further assert that executing the kind of playfully ridiculous-but-literal conversation patterns that involve bizarre horrible things did not help my ancestors get laid.
2[anonymous]
.
1[anonymous]
.
PlaidX20

Considering this style of thinking has led lesswrong to redact whole sets of posts out of (arguably quite delusional) cosmic horror, I think there's plenty of neurosis to go around, and that it runs all the way to the top.

I can certainly believe not everybody here is part of it, but even then, it seems in poor taste. The moral problems you link to don't strike me as philosophically illuminating, they just seem like something to talk about at a bad party.

0Zetetic
I catch your drift about the post deletion, and I think that there is a bit of neurosis in the way of secrecy and sometimes keeping order in questionable ways, but that wasn't what you brought up initially; you brought up the tendency to reason about moral dilemmas that are generally quite dark. I was merely pointing out that this seems like the norm in moral thought experiments, not just the norm on lesswrong. I might concede your point if you provide at least a few convincing counterexamples, I just haven't really seen any. If anything, I worry more about the tendency to call deviations from lesswrong standards insane, as it seems to be more of an in-group/out-group bias than is usually admitted, though it might be improving.
PlaidX210

I've found that I have the opposite problem. When given the opportunity to try something new, I take it, thinking "maybe this time", and invariably regret doing so.

Now I order the same food every time in restaurants, never go to shows, and am a happier person for it.

2Academian
Do you think you had an aversion to repetition? Or a propensity for variety?
PlaidX50

Someone even more cynical might say that lesswrong only departs from mainstream skeptical scientific consensus in ways that coincidentally line up exactly with the views of eliezer yudkowsky, and that it's basically an echo chamber.

That said, rational thinking is a great ideal, and I think it's awesome that lesswrong even TRIES to live up to it.

PlaidX60

I haven't read TOO much mainstream philosophy, but in what I have, I don't recall even a single instance of torture being used to illustrate a point.

Maybe that's what's holding them back from being truly rational?

PlaidX170

I've heard that before, and I grant that there's some validity to it, but that's not all that's going on here. 90% of the time, torture isn't even relevant to the question the what-if is designed to answer.

The use of torture in these hypotheticals generally seems to have less to do with ANALYZING cognitive algorithms, and more to do with "getting tough" on cognitive algorithms. Grinding an axe or just wallowing in self-destructive paranoia.

If the point you're making really only applies to torture, fine. But otherwise, it tends to read like "... (read more)

0Bongo
Not necessarily even wrong. The higher the stakes, the more people will care about getting a winning outcome instead of being reasonable. It's a legit way to cut through the crap to real instrumental rationality. Eliezer uses it in his TDT paper (page 51):
6TimFreeman
I agree. I wrote the article you're citing. I was hoping that by mocking it properly it would go away.
5Dreaded_Anomaly
One of the major goals of Less Wrong is to analyze our cognitive algorithms. When analyzing algorithms, it's very important to consider corner cases. Torture is an example of extreme disutility, so it naturally comes up as a test case for moral algorithms.
cousin_it200

If you try to do moral philosophy, you inevitably end up thinking a lot about people getting run over by trolleys and such. Also if you want to design good chairs, you need to understand people's butts really well. Though of course you're allowed to say it's a creepy job but still enjoy the results of that job :-)

PlaidX10

Ok, I'm back online. I basically flaked out partway through day two, I think I overextended myself.

However, the twitching or convulsing is still here, whenever I meditate, and after conferring with a medical professional, I'm pretty sure it's a meditation related thing, and not due to hyperventilation or somesuch. In fact, he explicitly said "yeah, that's from meditation. don't even try looking for a medical explanation."

SO, not exactly PLEASANT or ILLUMINATING results, but results nonetheless. I'm going to try going back to an hour or so of daily meditation and see how things develop for a while.

0DavidM
The twitching is typical, like I said. Not in the sense that every time you meditate, from now till forever, you're going to have it. But it's common enough in stage 1. There are also related things that can happen in stage 2, but they're not quite the same. So I'd say that they might be gone by stage 2 and probably will be by stage 3. Your body will get over it eventually. Think of it as your body trying to adapt to doing this new thing; it takes some time to iron the kinks out. Good luck with your practice! Let us know if anything interesting happens.
PlaidX00

It seems like an awful LOT of twitching, though. Like, so much so that I ended up hyperventilating to compensate for it. Is this really typical?

I should note that my concentration still isn't that great, and I haven't really experienced anything unusual on a mental level.

0DavidM
Please let us know how meditation is going for you once your retreat is over.
0AdeleneDawner
I get mindstate-related twitching sometimes, though not to the degree you're describing, and I'm not entirely confident that it's the same phenomenon. In my case, the mindstate-part that correlates with twitching is very subtle. I wouldn't expect someone who's unfamiliar with closely observing their own mind to be able to notice it at all. It does seem to correlate with stress, for me, so if you've been pushing yourself a lot in general recently you may want to back off on that for a couple days and try again. You may also want to try emergen-c vitamin supplement or a generic version thereof; a friend of mine suggested that to me when I was dealing with a particularly bad round of stress-related twitchiness, and it helped rather a lot, though that could obviously be psychosomatic.
-1DavidM
In my post I described mode one perception as having "various cognitive and emotional content but nothing very extreme aside from physical unpleasantness." Why do you expect some kind of overt mental alteration? I already said that twitching is typical. Edit: Lots of respect for doing a weeklong retreat.
PlaidX00

No, go ahead and say what you think, I'm a bit flummoxed at this point. Too much twitching.

-1DavidM
Some very general comments. Yvain may or may not be right about the etiology of your buzzing sensations (people get these sensations from many causes), but clearly what you're doing is affecting your breathing, which is the interesting part (you mention having meditated before but never had this experience until using my technique), and typical. Twitching, inability to hold a posture, feeling like your face or body is contorting is also typical. It occurs to you that twitching is related to the specific process of noting your breath, which is good. Also typical. Keep observing that. (Cf. my piece of advice in this post about paying attention to new things that seem strange or interesting.) I'd say you're in middle or late stage one. Keep noticing your breath, the interaction between your noting and your weird experiences, and your weird body sensations. Your experience will eventually change as you continue to meditate. Also, go back to being on retreat, away from the internet.
PlaidX00

Aha, I had a nagging feeling there might be something like that going on.

Any idea what the involuntary spasms are about? I did another hour of sitting, and while I didn't have the tingling and such this time, the spasm came back as strong as ever. In fact, I'm inclined to discontinue things until I can figure out what the deal is with them.

Even laying down, breathing calmly, I'm just twitchy as hell. It stops as soon as I stop meditating.

EDIT: Here's something from wikipedia.

Cortical reflex myoclonus is thought to be a type of epilepsy that originates in t... (read more)

3Scott Alexander
You're unlikely to have epilepsy. That's serious stuff. Meditators commonly report twitches (here is an annoying New Age page about them, because it was the first one I could find). I don't have any hard knowledge about them but my wild guess is that they're similar to hypnic jerks, basically your brain noticing it hasn't heard from your body lately and pinging it to make sure it's still there. The more serious twitches that get linked to kriyas are probably something more exotic, but what you're talking about doesn't sound like that. If you're tired, sleep better and they might go away. If not, see if you can make meditation less of a relaxing brink-of-sleep-inducing experience by some of the tips David mentioned above. The exceptionally large amount of twitching you're having now could also be linked to the previous hyperventilation. Note the part of the Wikipedia page that says alkalosis can cause "tetany" - that's involuntary muscle contraction. See if it goes away after a while breathing normally. Note that breathing normally during meditation is hard, at least for me.
PlaidX50

WOAH, holy crap. Ok, I'm doing a retreat (in my own house, by myself) and i'm only four and a half hours in, but i'm breaking retreat protocol and going on the computer because I have to tell you guys how unexpected what's happened so far is. Woo, ok, sensations subsiding, getting feeling back in my fingers.

I've been meditating for about six months now, starting at 20 minutes a day and gradually moving up to an hour and a half, with no discernible effect other than my butt getting sore. When daniel posted these articles, I was getting so demoralized with m... (read more)

0DavidM
Thanks for giving the experiment a try and reporting about your results so far. Please keep us updated. I have some comments about your reported experience, but since you do seem to be intending this as an experiment, I would rather not say much and let you see for yourself how things turn out. Despite that, if you feel the pressing need for some kind of feedback, feel free to send me a private message. By the way, my name is David, not Daniel!

I believe you are describing paraesthesiae from hyperventilation-induced respiratory alkalosis - ie you're breathing in too much oxygen too quickly and breathing out too much CO2 too quickly, it's turning your blood alkaline, and that's screwing with your nervous system.

It's not uncommon to mistake this for a spiritual result of breathing-related practices - I used to do so myself - but it isn't, it's not healthy, and you should try to avoid it by breathing at a more measured rate.

PlaidX70

Sorry, I read the first sentence first, and so experienced a minor full-body orgasm.

You're either the greatest imagineer I've ever met, or a big fat liar.

2Manfred
Well I mean it wasn't that great. But it's not that hard to get an intense feeling and involuntary muscle contractions, what's difficult is making it feel good, as opposed to like something you'd describe as "an intense feeling and involuntary muscle contractions." EDIT: Googling this topic is tricky.
PlaidX90

I know where you're coming from, but "they" is already the world's gender-neutral third person pronoun of choice, so why pick a different one? Even if it wasn't, you've got to pick your battles.

PlaidX130

the fact that God cannot do something that cannot be done does not limit His omnipotence.

The point is that "omnipotent" is itself a "hollow adjective", as you put it. Omnipotent doesn't mean "you can do anything that can be done", it means you can do anything, full stop.

1Sniffnoy
I always understood "omnipotent" as "can set the state of the universe to anything" (like someone pausing a simulation to make some changes).
6Clippy
Yes, it seems these critiques are more about the validity of the concept of literal omnipotence than about beings that purport to meet that standard. The problem is that literal omnipotence is impossible, and so humans that care about related problems should probably delineate what specific powers a being labeled as "omnipotent" has, rather than remain stuck on the definitional debate.
6prase
Partly because "can" is a hollow verb.
wilkox100

This bothered me too. If 'omnipotent' is defined as 'able to do things which can be done', we're all gods.

PlaidX60

Wow, I often curse the world for not dropping the information I need into my lap, but here it seems to be on a silver platter. When I got around to reading this post, I had literally 23 tabs open, all of them about research into meditation.

I've been meditating for about six months and in the last week or so, getting disenchanted with the mainstream (within theravada buddhism) model of the path, and looking into alternate sources of information.

It's excellent to see that there's people already succeeding in the independent investigation I was wearily beginning to attempt!

8Kaj_Sotala
http://www.interactivebuddha.com/Mastering%20Adobe%20Version.pdf has seemed like a good guide (esp. part III). Though I can't vouch for its accuracy yet, since I haven't even properly reached the access concentration stage in my own meditation practice yet.
PlaidX480

I like it, but stop using "ey". For god's sake, just use "they".

2Scott Alexander
I agree that "ey" is annoying and distracting, but I feel like someone's got to be an early adopter or else it will never stop being annoying and distracting.
3Emile
Note that these first show up in the section on signaling. Later on, there's a criticism of Deontology (using Rules as the final arbiter of what's right), by appealing to Rules: Still later on: Hmm.
6ArisKatsaris
Seconded. I strongly dislike Spivak pronouns. Use "they".

I reluctantly agree. I like Spivak pronouns, but since most people haven't even heard of them, using them probably makes your FAQ less effective for most people.

PlaidX120

I still have a hard time seeing how any of this is going to go somewhere useful.

4lukeprog
Luckily, for the moment, some people are already finding it useful.
2thomblake
Here is my understanding: Ethics is the study of what one has most reason to do or want. On that definition, it is directly relevant to instrumental rationality. And if we want to discover the facts about ethics, we should determine what sort of things those facts would be, so that we might recognize them when we've found them - this, on one view, is the role of metaethics. This post is an intro to current thought on metaethics, which should at least make more clear the scope of the problem to any who would like to pursue it.
PlaidX140

[skims article]

Skim more. Got it.

3Vaniver
:D
PlaidX50

The role of fun in maintaining mental health should also be noted. All work and no play makes Jack a dull boy.

0CronoDAS
PlaidX140

For me, the important distinction between the salmon thing and the Mohammad thing is that getting zapped when you see a picture of a salmon is a reaction that doesn't go away through exposure. It can't be desensitized. Drawing Mohammad, or really any form of trolling, eventually gets savvy people to change the way they react.

That's not to say that trolling is necessarily good, but it is functionally different than what's happening with the salmon. See this article by Clay Shirky.

2torekp
Actually, Yvain didn't say. Maybe Brits can overcome their salmon reactions through a course of cognitive therapy, or exposure therapy. The scenario is under-described. That's why Yvain's first bullet point is wrong about "choice": it uses a backward-looking notion, where a forward-looking one is wanted.
PlaidX160

If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

But in the simulation, you WOULD have an objection to purple, and you would call purple "hot", right? Or is this some ... (read more)

-3dfranke
Yes. A simulation in which people experienced one sort of qualia but behaved as though they were experiencing another would go completely haywire.
PlaidX00

I will go to this as long as that libertarian guy won't be there.

PlaidX20

If the memetic hazard you're referring to is the same one as mine, I recommend benzodiazepines in the short term and vipassana meditation in the long term. And just thinking about it, though clearly you're already doing that.

I think to a large extent, the perceived threat of the thing is due to a generally neurotic perspective, common to many people here, which can twist abstract thinking into knots when given a sufficiently long and nonintuitive chain of reasoning. The trauma illuminates a serious problem with the mind rather than a serious threat from the idea.

0Armok_GoB
Sounds like it almost certainly is not the same one.
PlaidX20

Sorry, I couldn't see your reply when I did my edit. I should've reloaded.

PlaidX20

The Alternative Vote, also known as instant runoff voting, produces results very similar to first past the post while introducing massive headaches. You want range voting or approval voting instead.

IRV leads to 2-party domination

There are three IRV countries: Ireland (mandated in their 1937 constitution), Australia (adopted STV in the early 1900s, but in 1949 added "reweighting" to STV in their multi-winner elections, a change which does not matter for us since we are only considering single-winner elections - Australia and Ireland have both kin

... (read more)
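The "results very similar to first past the post" claim can be seen in a toy simulation. Everything below is a hypothetical illustration, not taken from the linked articles: the ballot counts are invented, and it assumes approval voters approve their top two choices, so treat it as a sketch of the mechanics rather than evidence about real elections.

```python
from collections import Counter

# Invented three-candidate ballot profile: a classic "center squeeze"
# where B is everyone's second choice but has the fewest first prefs.
ballots = ([("A", "B", "C")] * 35 +
           [("C", "B", "A")] * 33 +
           [("B", "A", "C")] * 32)

def plurality(ballots):
    """First past the post: most first-preference votes wins."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def irv(ballots):
    """Instant runoff: eliminate the weakest candidate each round,
    transferring each ballot to its highest remaining preference."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining)
                        for b in ballots)
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):  # majority reached
            return winner
        remaining.discard(min(tally, key=tally.get))

def approval(ballots, k=2):
    """Approval voting, assuming each voter approves their top k."""
    return Counter(c for b in ballots for c in b[:k]).most_common(1)[0][0]

print(plurality(ballots))  # A
print(irv(ballots))        # A -- B is eliminated first; same winner as FPTP
print(approval(ballots))   # B -- the broadly acceptable compromise wins
```

In this profile IRV and plurality agree (B, the compromise candidate, is eliminated in the first round), while approval voting elects B; whether that pattern generalizes is exactly what the linked pages argue about.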
3[anonymous]
Also, it is extraordinarily impolite to post a comment containing a link, then, after a reply has been posted, edit the link to point somewhere different. I don't currently have enough karma to downvote, but if I did I would definitely downvote such a flagrant breach of basic etiquette. For the record, PlaidX's original link was to http://www.rangevoting.org/IRVcs.html . The link now points to http://www.rangevoting.org/TarrIrv.html . And the argument there is a specious one, in that the situation described only works if the "best" voters know how everyone else is going to vote. Which they don't.
1[anonymous]
Yes, so in a highly-artificial scenario, AV can produce a result no worse than the same result under FPTP. Both range voting and approval voting can lead to the single least-popular candidate getting in. See http://archive.fairvote.org/?page=1920 And more to the point, neither of those are on the ballot paper. AV is.
PlaidX10

Hmm. I wonder what situationism says about living alone and not interacting with anyone. Does it mean no influence, or feedback from your own traits, or what?

3TheOtherDave
There's very little data about such people. (I assume you're thinking in terms of hermits, not just people who are embedded in a social context but happen to live alone and not have many friends.)
PlaidX00

I came to the comments section to make this exact post.

PlaidX40

Fair enough. How many IQ points would you say make a fair exchange for lesswrong's teachings?

0jsalvatier
That I don't have a good handle on. I have gone back and forth on how practical they are.
PlaidX00

Oops, edits crossed in midstream. This reply made a lot more sense in conjunction with the original post as it was originally written.

Edit: Haha, yes.

0atucker
Yup, so much more sense. :P Actually, it seems like almost none of this relates any more. I guess I'll leave it up for posterity's sake, but the current topic is harder. I guess I should wait longer before responding to things.
PlaidX30

Hmm... I'm certainly not SURPRISED by it, but I don't share it, no. I see it as being a crooked sort of kludge necessitated by the idea that people are equally valuable. "people" is a very big and complicated category, and treating it as a single moral point leads to weirdness.

Practically speaking, a person gets created over an extremely protracted period. It's not when they're conceived, it's not when they're born, it's not when they learn to speak or use the internet, it's the entire process. In contrast, people die close to instantaneously. "... (read more)

PlaidX120

For the op and others here who consider preservation of human life a terminal goal: do you also consider the creation of human life a terminal goal of the same magnitude? If not, why not?

I find it very unintuitive that something's creation could be unwarranted but its preservation vital, terminally, independent of any other considerations.

-1Pavitra
I'm not sure if I'm in the category you call for, but I feel that this comment ought to go somewhere under this post, and this seems like the best fit. I consider killing an arbitrary person much, much worse than killing a random person, because the latter cannot be used as a weapon to silence people you don't like. Abortion generally doesn't discriminate against the baby (although there are cultures where sex-selective abortion is popular, which makes me uncomfortable, and which will be more of a problem as they get more than one bit to discriminate on); a woman aborts not because she dislikes the particular child, but because she doesn't want to have a child at all. I don't like the idea of killing babies, but as long as it's done indiscriminately with regards to the individual baby, the only important moral difference from abstinence-as-birth-control is that it's painful to a creature with about the consciousness of a small animal. Ultimately, I think that monoculture is more dangerous than infanticide. (As a corollary, I have to support the "right" of religious parents to let their toddlers die of easily-curable diseases because they don't believe in evidence-based medicine. Though once children are able to ask to get out of their parents' households, they should be offered political asylum if necessary.) For mostly unrelated reasons, I think abortion should be mandatory if the baby is the product of rape. We can't afford, as a society, to let rape be a viable reproductive strategy. Yes, it's horrible and cruel to someone who's already a victim of a horrible, cruel travesty, but it's still better than letting rape continue to exist for the entire future.
4Oligopsony
When you state it like that it seems unintuitive, but the idea that creating new people is somewhere around morally neutral and that destroying existing people is morally bad is a very strong and common intuition. Do you really not share it? (Of course, "terminal" is the load-bearing word here. A knife that slices someone's neck has very different secondary consequences than a knife that rearranges reality such that it was as if the victim was never born, but neither biology, cultural knowledge, nor personal experience has given us any reason to form intuitions about the latter case.)
PlaidX10

Unfortunately, none of my interests seem to involve group activities.

I have difficulty meeting people I like even on the internet, where there's zillions of them and they can be easily sorted through.

PlaidX40

Sure. Learning things I couldn't learn on wikipedia, finding something good to eat, making a meaningful connection with people, enjoying myself, etc.

It's not some existential angst, I'm just hard to please.

PlaidX10

Meaning what?

1atucker
Going to or doing things with a bunch of people (hopefully related to your interests) to improve the chances of meeting someone you'd like.
PlaidX30

Classes, lectures, trying new food, going on dates... it's not that these things are ever huge letdowns, I'm just not glad I did them.

One of my biggest problems is making new friends. I try sometimes, despite my better judgment, but the amount of time and effort necessary to forge a friendship worth having, or perhaps to reformat the person in question into someone worth having for a friend, seems astronomical. It feels like I only managed to make the friends I have because when I started I had no friends and it was the only option, the way kids are forced by the world to learn a language.

0atucker
Changing other people is incredibly difficult to do. Have you tried just casting a wider net, so to speak?
2JGWeissman
When you try any of these things, is there something that you hope to accomplish, but don't? Is there some outcome you can imagine that would make you glad you did it?
PlaidX20

I would like this to be true, but in my personal experience, it is not. Whenever I go try things, the result is the same. Waste of time, waste of time, waste of time and bus fare.

0AnnaSalamon
Could you give examples, PlaidX?
PlaidX-10

It wasn't just one person, it was three or four. And it wasn't just that they INVOKED torture, it was that they clung to it like a life preserver, like it was the magic ingredient for winning the argument.

This is so far outside the bounds of civil discourse, and yet it's routine in this community. I don't think it's unwarranted to be generally concerned.

2CuSithBell
Also note that, besides thought experiments, "extreme negative utility" is also observed in religious discourse. I'd say Hell is probably the archetypal example of [someone proposing] infini-torture [to win an argument].
2Dorikka
Off the top of my head, torture and similar very unpleasant things are useful for at least two purposes. As in this post, you could attempt to quantify how much you value something (in this case, effective immortality) by how long you would be willing to exist in an extremely uncomfortable state (such as being tortured). Similarly, if someone is attempting to make certain absolute statements (such as "I would never kill another human being.") regardless of circumstance, such conjecture can be used to quantify how much negative utility they attribute to committing such an act. If you feel severe discomfort in being in a conversation where someone is using torture as a hypothetical, I suppose that you could either leave the conversation or ask them to use a different hypothetical, but the whole point of using torture as a hypothetical in such a case is because it is extremely unpleasant, so their alternative, if chosen well, may be equally discomforting to you.
1TheOtherDave
I agree that if clinging desperately to magic assertions for winning arguments were routine in this community, that would warrant concern. I don't agree that it is, in fact, routine here.
PlaidX00

In the last meetup I went to, there was an obnoxious guy who was dominating the conversation, and somehow got into a relativism-based defense of something, I think just to be contrary.

Several other people jumped on him at this point, and soon the argument swung around to "what about torture? what if you were being tortured?" and he came up with rationalizations about how what doesn't kill you makes you stronger, it'd be a great story, etc. etc., and so they kept piling on qualifications, saying "they torture you for 50 years and then execute... (read more)

-2TheOtherDave
You are entirely justified in not swallowing your alienation to ugly, pointless, embarrassing, aggravating behavior like what you describe those folks engaging in. Rejecting that doesn't make you screwed up. But the conversation you describe wouldn't suddenly become less ugly, pointless, embarrassing, or aggravating if these people had instead been arguing the same way about, say, hypothetically losing billions of dollars, or hypothetically moving from Beverly Hills to Bangladesh, or hypothetically swimming the English Channel. That is, I don't think the event you're describing justifies the conclusion you're trying to use it to justify. That said, I also don't think you actually care about that.
2wedrifid
Someone was an ass in a conversation you were in. Evidently it affected you personally. But you have generalised that into a blanket assumption of neurosis about a broad class of people who happen to discuss abstract decision problems with 'torture' plugged in as the extreme case. More to the point, you actively declare your smug judgement on the broad class 'you people'. Apart from indicating a clearly defective model of human psychology, that is unnecessarily anti-social behaviour. The appropriate response to having an unpleasant conversation with an ass is not to become obnoxious yourself.
PlaidX-10

I see your point, but I guess my problem is that I don't see why constructing these tradeoffs is productive in the first place. It just seems like a party game where people ask what you'd do for a million dollars.

Like, in the situation here, with uploading, why does immortality even need to be part of the equation? All he's really saying is "intuitively, it doesn't seem like an upload would 'really' be me". What happens to the upload, and what happens to the original, is just a carnival of distractions anyway. We can easily swap them around and see that they have no bearing on the issue.

0TheOtherDave
Yeah, as I said earlier, if you can't think of a better way to have the conversations but don't think those conversations are worth having at all, I have nothing to say to that. Like any conversation, they're interesting to the people they interest, and not to the people they don't... I don't really understand why people talk so much about football, for example.
PlaidX-10

I think part of what bothers me about these things is I get the impression the readers of lesswrong are PICKING UP these neuroses from each other, learning by example that this is how you go about things.

Need to clarify an ethical question, or get an intuitive read on some esoteric decision theory thing, or just make a point? Add torture! If yudkowsky does it, it must be a rational and healthy way to think, right?

1TheOtherDave
Interesting. Yeah, I can see where that impression comes from, though I'm not sure it's accurate. If you notice me using hypothetical suffering in examples where you can come up with an alternate example that expresses the same things except for the suffering, feel free to call me on it, either publicly or privately.
5TheOtherDave
I suggest you think more carefully about whether you really want to endorse the standard of judging (and potentially dismissing) what people say based on hastily constructed theories about their personality flaws. Anyway. Suppose you wanted to construct a hypothetical example that trades off, on the one hand, an immortal and basically positive lifespan, and on the other hand, X. What X would you, thankfully neurosis-free and admirably aware of the importance of choosing good hypotheticals, choose that could plausibly be traded off for that? I'm reminded of the old joke about a ham sandwich being preferable to eternal happiness.
-1wedrifid
I fundamentally disagree with your position. I had previously thought your question was one of ironic jest, but now it seems like you have a genuine weakness when it comes to abstract thought. "Outward manifestation of some neurosis" - now that challenges my tolerance for tasteless hyperbole. Personally insulting the entire community you are participating in, without provocation? That is a genuine indication of an unhealthy psychological trait. Most stable humans have a strong instinctive aversion to alienating themselves from communities in which they are an ongoing participant.
PlaidX00

Why do people feel the need to discuss "huge relative disutilities"? What's the difference between that and being obnoxiously hyperbolic?

In the current example, I'm not even sure what kind of point he's trying to make. It sounds like he's saying "Some people like bagels. But what if someone poisoned your bagel with a poison that made your blood turn into fire ants?"

Is this an autism thing? There were people doing this at the meetup I went to as well.

1Zetetic
It seems like moral problems get a negative phrasing more often than not in general, not just when Yudkowsky is writing them. I mean, you have the Trolley problem, the violinist, pretty much all of these, the list goes on. Have you ever looked at the morality subsections of any philosophy forums? Everything is about rape, torture, murder etc. I just assumed that fear is a bigger motivator than potential pleasantness and is a common aspect of rhetoric in general. I think that at least on some level it's just the name of the game, moral dilemma -> reasoning over hard decisions during very negative situations, not because ethicists are autistic, but because that is the hard part of morality for most humans. When I overhear people arguing over moral issues, I hear them talking about whether torture is ever justified or if murder is ever o.k. Arguing about whether killing one fat man to save five people is justified is more meaningful to us as humans than debating whether, say, we should give children bigger lollipops if it means there can't be as much raw material for puppy chow (ergo, we will end up with fewer puppies since we are all responsible and need to feed our puppies plenty, but we want as many puppies as possible because puppies are cute, but so are happy children). This isn't to say that simply because this is how it's done currently, it is the most rational way to carry on a moral dialogue, only that you seem to be committing a fundamental attribution error due to a lack of general exposure to moral dilemmas and the people arguing them. Besides, it's not like I'm thinking about torture all the time just because I'm considering moral dilemmas in the abstract. I think that most people can differentiate between an illustration meant to show a certain sort of puzzle and reality. I don't get depressed or anxious after reading Lesswrong; if anything, I'm happier and more excited and revitalized. So I'm just not picking up on the neurosi
1TheOtherDave
FWIW, I'm neurotypical and not exceptionally obnoxious. Can't speak for "people." I can speak a little bit to why I do it, when I do. One difficulty with comparing consequentialist with deontological ethical frameworks is the fact that in many plausible scenarios, they make the same predictions. I can talk about why it's a bad idea to rob a bank in terms of its consequences, but a deontologist will just shrug and say "Or, you can just acknowledge that it's wrong to rob banks, which is simpler," and it's not clear we've accomplished anything. So to disambiguate them, it's helpful to introduce cases where optimizing consequences requires violating deontological rules. And to turn up the contrast, it's often helpful to (a) choose really significant deontological rules, rather than peripheral ones, and (b) introduce very large differences between the value of the +rules and -rules conditions. Which leads to large relative disutilities. Now, one can certainly say "But why is comparing consequentialist with deontological ethical frameworks so important that you're willing to think about such awful things in order to do it? Can't you come up with nicer examples? Or, if not, think about something else altogether?" To which I won't have a response. As for the current example, I'm not exactly sure what point he's making either, but see my comment on the post for my best guess as to what point he's making, and my reaction to that point.
4wedrifid
I don't know if it's an autism thing... but I'm definitely going to have to include that in a hypothetical one of these days. :)
3NihilCredo
Well, what other kind of disutility would you suggest that could conceivably counterbalance the attractiveness of immortality?
PlaidX180

Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?

Edit: I realize I've been unduly inflammatory about this. I'll restrict myself in the future to offering non-torture alternative formulations of scenarios when appropriate.

wedrifid100

Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?

We understand why edge cases and extremes are critical when testing a system - be that a program, a philosophy, a decision theory or even just a line of logic.

9TheOtherDave
I've often wondered that. In some sense, it's not actually true... lots of hypotheticals on this site involve entirely mundane situations. But it's true that when we start creating very large stakes hypotheticals, the torture implements come out. I suspect it's because we don't know how to talk about the opposite direction, so the only way we know to discuss a huge relative disutility is to talk about pain. I mean, the thing that is to how-I-am-now as how-I-am-now is to a thousand years of pain is... well, what, exactly?
1ewang
I cringed when I read about that "1000 years of terrible agony". Just thinking about that is bad enough.
PlaidX30

People on average increase in societal value from conception to childhood, and then it gets more complicated from there depending on how they turn out. And yes, typically their value declines as they become elderly.

But, as in your example with your adopted friend, even a baby that starts out unwanted, if society invests a bit in its welfare, will soon become part of the social fabric and so on and thereby become valued.

Certainly there are some people who literally nobody likes, but even then, there's still reason B.

As it happens, my best friend was adopted as well. But I hardly think the limiting factor in the number or quality of my friends is society's production of babies.

PlaidX10

Yeah, this is more or less what I meant by B, with the caveat that Alice and Bob may fundamentally disagree on who's better off dead.

PlaidX190

Let me turn your question around. If your utility function puts value in the mere existence of people, regardless of how they interact with the larger world, doesn't that mean having babies is as wonderful as killing people is terrible? Is somebody with 12 kids a hero?

Pfft160

Is somebody with 12 kids a hero?

Or a serial killer with a large family? "Sure he might have killed 3 people -- but he's a father of 5!"

0[anonymous]
I'm actually pretty sure some people who have had 12 kids are heroes, or at least very altruistic when objectively analysed. Many, many people who made great contributions have come from large families of overachievers. Genetics and upbringing matter a lot. And productivity gains made by, let's say, 6 of the kids can easily overshadow anything one individual could have done (even when adjusted for the fact that the kids start contributing later). However, overall, if we look at the world today, the vast majority of people having 12 kids aren't heroes.