All of Raoul589's Comments + Replies

What about if she just said: 'duty'?

0Lumifer
That's not quite sufficient as it's the word "sacred" which does the heavy lifting. Saying it's her duty isn't particularly meaningful for a nurse -- it's her job, that's what she is paid to do. She is not doing you a favour, cleaning up shit is right there in her job description.

Sorry, I should clarify. I was saying that:

"Taking care of you is my sacred duty. I care about you. It is important that you tell me if there is something wrong."

Is precisely something that Swimmer963 could say even though she's annoyed. She doesn't have to deny that she's annoyed, or even imply it. In fact it's probably futile to try... of course she's annoyed, and the patient suspects that. That is exactly the motivation for her lie in the first place.

The statement above nevertheless conveys her overall commitment to the patient's wellbeing...

0Lumifer
If a nurse started talking to me about her "sacred duty", I certainly would not believe her.

I don't think that the nurse is implying that he is not annoyed. Both the patient and the nurse recognise that the 'crapping the bed' situation is an annoying one, and the nurse is not denying that. The nurse is simply making it clear that his annoyance is a secondary concern, and that instead the welfare of the patient is the primary concern. The nurse genuinely believes that his own annoyance is relatively less important, and he is conveying that literally to the patient. This is actually the true situation, so I am confused about how you think he is lying, even implicitly.

0Lumifer
If you go sufficiently upthread, you'll find that it started with a post by Swimmer963 who is a nurse and is relating her own experience. In particular, she says:

"Taking care of you is my sacred duty. I care about you. It is important that you tell me if there is something wrong."

This is true literally and in spirit.

0[anonymous]
To invoke a cheesy meme, I wish I could upvote twice, once for phrasing something that doesn't involve telling a white lie, and the second time for consciously reinforcing that patient care is a sacred duty.

Do you find any slapstick or dark comedy funny? I'm curious.

If a rival in some competitive domain (think work or romance) is falling behind me, instead of feeling happy about this (schadenfreude), I feel sad, and I tend to dissipate my own relative advantage by trying to bring my rival up to my level.

I also have limited emotional motivation to take revenge or even strategic retribution (because I don't enjoy the suffering of those who wrong me). I get angry or morally outraged, but anger can only take you so far - you need to be able to follow through with the punishment. So when I play real-life zero-sum prisoner's...

Removing the schadenfreude response from humanity as a whole would - I think - be a beautiful thing, but lacking this emotion has certainly been damaging to my own personal fitness.

1polymathwannabe
How?

I don't think I've ever experienced schadenfreude. As in, I'm not even sure what that emotion is supposed to feel like, from the inside. I get the impression that the few people I've said this to think that I'm lying about it for signalling purposes.

Is it common just not to feel schadenfreude, like not ever, for any reason? Lately I've started to wonder if I've been committing the typical mind fallacy on this.

3bramflakes
I feel it, but it's a weak emotion. I could easily imagine going without it.
1WalterL
I don't think so; I've never read of a case of it. I think most folks feel schadenfreude.
1polymathwannabe
That's an emotion humankind can do without, but that idea makes me wonder about the ethicality of genetically removing the potential for specific emotions.

Are there any Australians here who have done this? Recently? Is the situation different for residents rather than workers/tourists?

60% Introvert. At least, I used to think of myself as an introvert, but recently I've come to wonder if that really is what I am. My hometown is Adelaide, Australia, but I'm currently in Hangzhou, China. I'm 24.

For the first 23 years of my life I lived with my family. I used to think that I loved being by myself, because I never really felt the need to make any special effort to see friends. Also, I loved the times I was 'home alone'. However, I think that I may actually have been mistaken - I think I just took the company of my parents for granted, and fo...

3CAE_Jones
It all sounds pretty similar to my experience. Living with my family (parents, siblings, cousins) has grown increasingly stressful over the past decade or so, though, so I find that things are usually (not always; sometimes we get along just fine and it's fun times) worse when I'm there. I recently did a quick-and-dirty quantifying of different aspects of my life during different time periods, and found exactly what you said about "given" social interaction to be true. My first two years of college were horribly unpleasant and unproductive, and were also the two years that I was most alone (I didn't recognize this and was stubbornly clinging to individualism at the time); the same is true of the two years I spent at home after college (except by then I'd realized my folly; it was just absurdly difficult to do anything about it by then). I also find myself with an irrationally negative emotional reaction whenever I so much as think the word "lonely". "Lonesome" is slightly better, and "alone" is significantly better, but I still feel strong resistance to breaking the taboo on talking about it. (I was actually considering posting to see if there was any interest in an LW meetup anywhere I can reach. I'll probably just wind up trying to make the St Louis meetups if I can level up my ability to travel independently.)

Not making a special effort to move out of home when I started university.

Allowing akrasia to prevent me from applying for even a single graduate position at any of the many companies that were hiring Computer Science graduates in my final year of study.

Allowing akrasia to prevent me from joining any clubs or associations at university.

Not getting a minimum-wage job for work experience when I was still young enough that the minimum wage for me was lower, which would have given me a competitive advantage.

Every time I lie, I regret it a little bit, as I wonder whether the long-term trajectory of my life would have been different had I been totally truthful instead of 'polishing' the truth.

It's kind of like mini-cryonics!

2jefftk
And you don't have to take its efficacy on faith!

Last year, I had to choose what I would research in my honours year of my Computer Science degree. I actually remember thinking to myself, 'I'm going to use all of the techniques I have learned from LW'. I sat down for several hours, carefully analysing my situation, and came to the conclusion: I should research A. It is the superior option on every non-trivial metric I can think of. This is the rational decision.

But then, I chose to research B, because I would have been embarrassed to have to explain my choice of A to my family. And that was it.

Dammit, I wanted to hear the anecdote.

In case it's not clear: I'm not trying to contradict you; I am trying to get advice from you.

Suppose that you got a mysterious note from the future telling you that the demand for home robotics will increase tenfold in the next decade, and you know this note to be totally reliable. You know nothing else that is not publicly known. What would you do next?

1[anonymous]
I'd advise finding a market bottleneck, like ColTan mining. You'll see any technology that can replace tantalum capacitors from further away than you'll manage to see software or design shifts.
5gwern
Do more research. Is this even nonpublic knowledge at all? The world economy grows at something like 2% a year, labor costs generally seem to go up, prices of computers and robotics usually fall... Do industry projections expect sales to grow by <25% a year? If so, I might spend some of my hypothetical money on the best approximation to a robotics index fund I can find, as the best of a bunch of bad choices. (Checking a few random entries in Wikipedia, maybe a fifth of the companies are publicly traded, so... that will be a pretty small index.) But I wouldn't be really surprised if in 10 years, I had not outperformed the general market.
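(A quick arithmetic aside, not from the original comment: tenfold growth over a decade compounds to roughly 26% a year, which is the rough threshold gwern's "<25% a year" question is probing.)

```python
# Tenfold demand growth over ten years, as a compound annual growth rate.
annual_rate = 10 ** (1 / 10) - 1
print(f"Implied compound annual growth: {annual_rate:.1%}")  # ~25.9%
```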
-2MugaSofer
By "you know this note to be totally reliable" I assume you mean you have a fair idea how it got there (eg you just built a time portal. with the intention of sending through financial advice, and a hand, bearing the same tattoo you have, pushed through with the note) and not that you're psychic and literally know things with 100% certainty? IOW you have a high probability estimate that it's genuine, but not an infinitely high one (seems more realistic and applicable if nothing else.)

Suppose that you are literally certain (you're not just 100% confident, you actually have special perfect information) about the future tenfold growth in demand for home robotics. Are you claiming that there is literally no way of using this information to reliably extract money from the stock market? This surprises me.

Would you expect Vaniver's indexing to at least reliably turn a profit? Would you expect it to turn a large profit?

6gwern
I'll reuse my example: if you knew for certain that Facebook would be as huge as it was, what stocks, exactly, would you have invested in, pre-IPO, to capture gains from its growth? Remember, you don't know anything else, like that Google will go up from its IPO; you don't know anything about Apple being a huge success - all you know is that some social network will some day exist and will grow hugely. The best I can think of would be to sell any Murdoch stock you owned when you heard they were buying MySpace, but offhand I'm not sure that Murdoch didn't just stagnate rather than drop as MySpace increasingly turned out to be a writeoff. In the hypothetical that you didn't know the name of the company, you might've bought up a bunch of Google stock hoping that Orkut would be the winner, but while that would've been a decent investment (yay!) it would have had nothing to do with Orkut (awww!), illustrating the problem with highly illiquid markets in some areas... Depends on the specifics. Suppose the home robotics growth were concentrated in a single private company which exploded into the billions of annual revenue and took away the market share of all the others, forcing them to go bankrupt or merge or shrink. Home robotics will have increased - keikaku doori! - yet Vaniver's fund would have suffered huge losses or gone bankrupt (reindex when one of the robotics companies suffers a share price collapse? Reindex into what, exactly? Another one of the doomed firms?). Then after the time period elapses and your special knowledge has become public knowledge, the robotics company goes public, and by EMH shares become a normal gamble where you could lose money as easily as make it. (Is this an impossibly rare scenario? Well, it sounds a lot like Facebook, actually! They grew fast, roflstomped a bunch of other social networks, there was no way to invest in them or related businesses before the IPO, and post-IPO, I believe investors have done the opposite of profit.)

If I were keeping my portfolio indexed to the market, wouldn't I be selling Blockbuster shares each month as Blockbuster lost market share? Why would I end up holding lots of Blockbuster?

1Vaniver
I apologize, I was unclear; I'm recommending 'buy and hold indexing' where you correct imbalances by buying the stocks you have less of with new investment income, rather than correcting imbalances by selling stocks you have too much of to buy stocks you have too little of. This is a good way to invest for individual investors who have a constant influx of investment funds and who pay trading fees that are a large percentage of their order sizes. If you have a large pool of capital that you begin with, or you want to actively manage money you've already invested, then you may want to actively correct imbalances. It's helpful to work out the expected value of a rebalancing trade, and make sure that's larger than the fees you pay (and you may decide to only rebalance once it gets above some larger threshold). Here, you do end up with mostly Netflix - but you bought a lot of Blockbuster when it was expensive, and sold it when it was cheap, whereas the projection investor who knew that Netflix was going to be worth 30 times what Blockbuster would be would have put 3% of their money into Blockbuster and 97% into Netflix, and so the majority of their current shares would come from when they put a lot of money into cheap Netflix stock. I haven't heard about that sort of projection investing playing well with rebalancing - and if I remember correctly, it was designed for allocating a large pool which you have complete access to, rather than doing dollar cost averaging with a constant income stream.
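(A minimal sketch of the two mechanics described above - directing new monthly income toward underweighted holdings, and only placing a sell-side rebalancing trade when its expected value clears the fee. All tickers, dollar amounts, and thresholds are invented for illustration.)

```python
# Illustrative sketch only: hypothetical holdings, weights, and fees.
holdings = {"BBI": 100.0, "NFLX": 40.0}      # dollars currently held per stock
target_weights = {"BBI": 0.5, "NFLX": 0.5}   # index-style target allocation

def buy_only_rebalance(holdings, target_weights, new_cash):
    """'Buy and hold indexing': never sell, just direct new investment
    income toward whatever is most underweight relative to its target."""
    total = sum(holdings.values()) + new_cash
    shortfalls = {k: max(0.0, target_weights[k] * total - holdings[k])
                  for k in holdings}
    total_shortfall = sum(shortfalls.values()) or 1.0
    for k in holdings:
        holdings[k] += new_cash * shortfalls[k] / total_shortfall
    return holdings

def worth_selling_to_rebalance(expected_gain, trading_fee, margin=2.0):
    """Only sell to rebalance if the trade's expected value beats the fee
    by some margin (the margin here is an arbitrary placeholder)."""
    return expected_gain > margin * trading_fee

print(buy_only_rebalance(holdings, target_weights, new_cash=50.0))
# {'BBI': 100.0, 'NFLX': 90.0} - all new cash goes to the underweight stock
print(worth_selling_to_rebalance(expected_gain=3.0, trading_fee=10.0))  # False
```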

Right. Is there no more sophisticated strategy though?

-2MugaSofer
Buy Google - if home robotics turns into a thing they'll probably be running it, whether because they set a bunch of geniuses on the problem or because they bought out the company that first started making these robots. More seriously, I suppose you might be able to extrapolate some other information from that - for example, human servants would be even less useful, and materials/services used to produce robots might become more valuable.
0CCC
Perhaps buying stock in companies that make microchips? Those home robotics companies are going to be spending a fair amount on microchips to fuel their growth...

I have a related question about buying stocks. Suppose (for example) that I knew with 100% certainty that the global demand for home robotics would grow tenfold in the next decade.

If this were the only information I had that wasn't generally known, is there any action I could take based on it to reliably make money from the stock market (at least over the next ten years)?

-4Richard_Kennaway
Start a company developing domestic robots and make a success of it. Then (optionally) take it public.
2Shmi
If you have 100% confidence in something, you then logically should go for maximum leverage, regardless of the risk, and so stock up on derivatives, like options and futures, rather than buy and hold stocks or indices. But of course people are generally poorly calibrated, so someone who thinks they are 100% right will probably be wrong half the time.
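(The point above - certainty argues for maximum leverage, while miscalibration makes that ruinous - is the intuition the Kelly criterion formalizes. Kelly isn't mentioned in the comment; the sketch below is just one illustrative framing.)

```python
def kelly_fraction(p, b=1.0):
    """Fraction of bankroll to stake on a bet won with probability p at
    net odds b (win b per unit staked, lose the stake otherwise).
    At p = 1 it says 'bet everything'; with realistic calibration it
    recommends something far more modest."""
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)

print(kelly_fraction(1.0))   # 1.0  -> all-in, i.e. maximum leverage
print(kelly_fraction(0.55))  # ~0.1 -> a well-calibrated small edge
print(kelly_fraction(0.5))   # 0.0  -> "wrong half the time": don't bet at all
```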
2Vaniver
So, from a time savings perspective you would want a fund that specializes in home robotics. If one of those exists, though, that suggests that your knowledge isn't as unique as you'd like. What I would probably do is find a news website for home robotics producers - a trade magazine is what used to fill this niche, and might still do so - to have a good idea of how the relevant companies are doing. This looks like a promising place to start, but that gets you as informed as similar investors, and you'd like to be more informed. Then, try to keep a portfolio that's fairly balanced in all noteworthy home robotics companies. I'd probably go the 'buy and hold' route - try and keep your portfolio roughly apportioned relative to market share by buying up shares of companies underrepresented in your portfolio every month. This is the 'indexing' approach - basically, you trust that the home robotics market as a whole will go up, and that the market is better at predicting who will go up than you will. If you're more confident in your ability to predict trends, you want to hold companies relative to their expected market share at the end of your trading period - to use an old example, the first strategy would have you holding lots of Blockbuster and some Netflix and the second strategy would have you holding lots of Netflix and some Blockbuster. There is a giant obstacle here, though, which is that a large part of the stock price is determined by the financials of the company, which take a relatively large investment of time and energy to understand. If you're indexing, you basically offload this work to other investors; if you do it yourself, you can have a decent idea of what the companies are worth on the books, and then adjust by your estimate of how well they'll do in the near future.
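(A small sketch contrasting the two weighting rules in the comment above - apportioning by current market share, the 'indexing' approach, versus by your projected end-of-period market share. The numbers are made up purely to echo the Blockbuster/Netflix illustration.)

```python
# Hypothetical figures, invented to mirror the Blockbuster/Netflix example.
current_share = {"Blockbuster": 0.90, "Netflix": 0.10}    # market share today
projected_share = {"Blockbuster": 0.03, "Netflix": 0.97}  # your own projection

def allocate(capital, weights):
    """Split capital across companies in proportion to the given weights."""
    total = sum(weights.values())
    return {name: round(capital * w / total, 2) for name, w in weights.items()}

# Indexing: weight by current market share -> mostly Blockbuster.
print(allocate(10_000, current_share))    # {'Blockbuster': 9000.0, 'Netflix': 1000.0}
# Projection: weight by expected future share -> mostly Netflix.
print(allocate(10_000, projected_share))  # {'Blockbuster': 300.0, 'Netflix': 9700.0}
```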
-1MugaSofer
At a guess, I'd say you should buy stock in companies working on home robotics.

In this way, defection seems to have two social meanings:

Defecting proactively is betrayal. Defecting reactively is punishment.

We seem to have strong negative opinions of the former and somewhat positive opinions of the latter. I think in your salesman example you're talking about punishment being crucial. In fact, the defection of the customer is only necessary as a response to the salesman's original defection.

I am curious as to whether you have a similar real-life example of where proactive defection (i.e. betrayal) is crucial (for some societal or group benefit)?

4wedrifid
And for this reason we tend to be predisposed to interpreting the behavior of enemies as 'proactive/betrayal' and our own as 'reactive/punishment' (where we acknowledge that we have defected at all).

Does it follow from that that you could consider taking the perspective of your post-wirehead self?

0Kawoomba
Consider in the sense of "what would my wireheaded self do", yes. Similar to Anja's recent post. However, I'll never (can't imagine the circumstances) be in a state of mind where doing so would seem natural to me.

You will only wirehead if that will prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave's speculative scenario should be sufficient to have you support wireheading, if your argument against it is social good - since in his scenario it is clearly net better to wirehead than not to.

0lavalamp
All of the things he lists are not true for me personally and I had trouble imagining worlds in which they were true of me or anyone else. (Exception being the resource argument-- I imagine e.g. welfare recipients would consume fewer resources but anyone gainfully employed AFAIK generally adds more value to the economy than they remove.)

It seems, then, that anti-wireheading boils down to the claim that 'wireheading, boo!'.

This is not a convincing argument to people whose brains don't say to them 'wireheading, boo!'. My impression was that denisbider's top level post was a call for an anti-wireheading argument more convincing than this.

1lavalamp
I use my current value system to evaluate possible futures. The current me really doesn't like the possible future me sitting stationary in the corner of a room doing nothing, even though that version of me is experiencing lots of happiness. I guess I view wireheading as equivalent to suicide; you're entering a state in which you'll no longer affect the rest of the world, and from which you'll never emerge. No arguments will work on someone who's already wireheaded, but for someone who is considering it, hopefully they'll consider the negative effects on the rest of society. Your friends will miss you, you'll be a resource drain, etc. We already have an imperfect wireheading option; we call it drug addiction. If none of that moves you, then perhaps you should wirehead.

As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is more generally what underlies the wirehead position, and I never see this addressed.

I simply don't believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you 'yay, understanding and exploration!'. What's more, the only way you even know this much is from how you feel about exploration - on the inside - when you are considering it or engaging in it. That is, how much 'pleasure' or wirehe...

2Kindly
So what would "really valuing" understanding and exploration entail, exactly?
5TheOtherDave
If I were about to fall off a cliff, I would prefer that you satisfy your brain's desire to pull me back by actually pulling me back, not by hacking your brain to believe you had pulled me back while I in fact plunge to my death. And if my body needs nutrients, I would rather satisfy my hunger by actually consuming nutrients, not by hacking my brain to believe I had consumed nutrients while my cells starve and die. I suspect most people share those preferences. That pretty much summarizes my objection to wireheading in the real world. That said, if we posit a hypothetical world where my wireheading doesn't have any opportunity costs (that is, everything worth doing is going to be done as well as I can do it or better, whether I do it or not), I'm OK with wireheading. To be more precise, I share the sentiment that others have expressed that my brain says "Boo!" to wireheading even in that world. But in that world, my brain also says "Boo!" to not wireheading for most of the same reasons, so that doesn't weigh into my decision-making much, and is outweighed by my brain's "Yay!" to enjoyable experiences. Said more simply: if nothing I do can matter, then I might as well wirehead.
2ArisKatsaris
Because my brain does indeed say "yay!" about stuff, but hacking my brain to constantly say "yay!" isn't one of the things that my brain says "yay!" about.
4lavalamp
Because my brain says 'boo' about the thought of that.

I think that you are right that we don't disagree on the 'basis of morality' issue. My claim is only that which you said above: there is no objective bedrock for morality, and there's no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.

We disagree if you intended to make the claim that 'our goals' are the bedrock on which we should base the notion of 'ought', since we can take the moral skepticism a step further, and ask: what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

A further point of clarification: It doesn't follow - by definition, as you say - that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right:...

4randallsquared
I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say: I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to closer. So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on. To the paperclip maximizer, they would certainly be valuable -- ultimately so. If you have some other standard, some objective measurement, of value, please show me it. :) By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.
3nshepperd
What is valuable is what we value, because if we didn't value it, we wouldn't have invented the word "valuable" to describe it. By analogy, suppose my favourite colour is red, but I speak a language with no term for "red". So I invent "xylbiz" to refer to red things; in our language, it is pretty much a synonym for "red". All objects that are xylbiz are my favourite colour. "By definition" to some degree, since my liking red is the origin of the definition "xylbiz = red". But note that: things are not xylbiz because xylbiz is my favourite colour; they are xylbiz because of their physical characteristics. Nor is xylbiz my favourite colour because things are xylbiz; rather xylbiz is my favourite colour because that's how my mind is built. It would, however, be fairly accurate to say that if an object is xylbiz, it is my favourite colour, and it is my favourite colour because it is xylbiz (and because of how my mind is built). It would also be accurate to say that "xylbiz" refers to red things because red is my favourite colour, but this is a statement about words, not about redness or xylbizness. Note that if my favourite colour changed somehow, so now I like purple and invent the word "blagg" for it, things that were previously xylbiz would not become blagg, however you would notice I stop talking about "xylbiz" (actually, being human, would probably just redefine "xylbiz" to mean purple rather than define a new word). By the way, the philosopher would probably ask "what evidence is there that we should value what mental states feel like from the inside?"

What evidence is there that we should value anything more than what mental states feel like from the inside? That's what the wirehead would ask. He doesn't care about goals. Let's see some evidence that our goals matter.

1jooyous
What would evidence that our goals matter look like?
0randallsquared
Just to be clear, I don't think you're disagreeing with me.

'I don't want that' doesn't imply 'we don't want that'. In fact, if the 'we' refers to humanity as a whole, then denisbider's position refutes the claim by definition.

Even if I could have selected the links, I wouldn't have tried it, because you just know that clicking on something like that will open a new page and delete all of your entered data.


I just took the survey, making this my first post that someone will read!

For what it's worth, I'm probably going to be in Auckland early next year, and I would come to the meetup.