All of Philip_W's Comments + Replies

By changing x and y, we represent your altruism to the other parties in the situation; if x is greater than 1, then you would rather give the commune money than have it yourself,

Small correction: you want to buy the widget as long as x > 7/8.

You should also almost never expect x>1, because that means you should immediately spend your money on that cause until x becomes 1 or you run out of credit. x=1 means that something is the best marginal way to allocate money that you know of right now.
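As a sketch of where such a threshold comes from (the specific cost and benefit figures are my guess at the original setup, not numbers taken from the post): if buying the widget costs you c units of value and delivers b units to the commune, you buy whenever the x-weighted sum is positive:

\[
U_{\text{buy}} = -c + x \cdot b > 0
\quad\Longleftrightarrow\quad
x > \frac{c}{b} = \frac{7}{8}
\]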

2Vaniver
I think we're using margins differently. Yes, you shouldn't expect situations with x>1 to be durable, but you should expect x>1 before every charitable donation that you make. Otherwise you wouldn't make the donation! And so x=1 is the 'money in the bank' valuation, instead of the upper bound.

It's probably too small scale to be statistically significant. The God acts on large sample sizes and problems with many different bottlenecks. I would guess that most of the cost was tied up in a single technique.

Status works like OP describes, when going from "dregs" to "valued community member". Social safety is a very basic need, and EA membership undermines that for many people by getting them to compare themselves to famous EAs, rather than to a more realistic peer group. This is especially true in regions with a lower density of EAs, or where all the 'real' EAs pack up and move to higher density regions.

I think the OP meant "high" as a relative term, compared to many people who feel like dregs.

People don't have that amount of fine control over their own psychology. Depression isn't something people 'do to themselves' either, at least not with the common implications of that phrase.

Also, this was a minimal definition based on a quick search of relevant literature for demonstrated effects, as I intended to indicate with "at least". Effects of objectification in the perpetrator are harder to disentangle.

Sociology and psychology. Determine patterns in human desires and behaviour, and derive universal rules. Either that, or scale up your resources and get yourself an FAI.

0CCC
This is a difficult problem, which very few people (if any) have ever solved properly. It's (probably) not insoluble, but it's also not easy... Good luck.

'Happiness' is a vague term which refers to various prominent sensations and to a more general state, as vague and abstract as CEV (e.g. "Life, Liberty, and the pursuit of Happiness"). 'Headache', on the other hand, primarily refers to the sensation.

If you take an aspirin for a headache, your head muscles don't stop clenching (or whatever else the cause is); it just feels like it for a while. A better pill would stop the clenching, and a better treatment still would make you aware of the physiological cause of the clenching and allow you to change it to your liking.

Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value the observed fact that these people care about self-determination in the territory (so no deceiving them into thinking they're self-determining), and act accordingly.

If people declare that analysing people well enough to know their moral values is itself being a busybody, it ... (read more)

0CCC
For a single person, yes, but it takes a significant investment of time to build an accurate, factual model of a single person. It becomes impractical to do so when making decisions that affect even a mere hundred people. How would you recommend scaling this up for large groups?

In the analogy, water represents the point of the quote (possibly as applied to CEV). You're saying there is no point. I don't understand what you're trying to say in a way that is meaningful, but I won't bother asking because 'you can't do my thinking for me'.

Edit: fiiiine, what do you mean?

Be careful when defining the winner as someone other than the one currently sitting on a mound of utility.

Most LessWrong users at least profess to want to be above social status games, so calling people out on it increases expected comment quality and personal social status/karma, at least a little.

4dxu
Unfortunately, professing something does not make it true any more than putting a sign saying "Cold" on a refrigerator that isn't plugged in will make it cold.

You may not be able to make a horse drink, but you can still lead it to water rather than merely point out it's thirsty. Teaching is a thing that people do with demonstrated beneficial results across a wide range of topics. Why would this be an exception?

-1Lumifer
I'm not pointing out it's thirsty, I'm pointing out there is no water where it thinks to drink.
4dxu
I think you overestimate the extent to which many LW users comment to help others understand things, as opposed to (say) gain social status at their expense.

I don't think that helps AndHisHorse figure out the point.

-6Lumifer

Congratulations!

I might just have to go try it now.

'he' in that sentence ('that isn't the procedure he chose') still referred to Joe. Zubon's description doesn't justify the claim; it's a description of the consequence of the claim.

My original objection was that 'they' ("I think they would have given up on this branch already.") have a different procedure than Joe has ("all you have to do is do a brute force search of the space of all possible actions, and then pick the one with the consequences that you like the most."). Whomever 'they' refers to, you're expecting them to care about hu... (read more)

What do you mean by "never-entered" (or "entered") states? Ones Joe doesn't (does) declare real to live out? If so, the two probably correlate, but Joe may be mistaken. A full simulation of our universe running on sufficient hardware would contain qualia, so the infinitely powerful process which gives Joe the knowledge he uses to decide which universe is best may contain qualia as well, especially if the process is optimised for ability-to-make-Joe-certain-of-his-decision rather than for Joe's utility function.

0Luke_A_Somers
I meant, Zubon's description did not justify your claim that 'that isn't the procedure he chose'.

How about now?

Alicorn100

We got married almost a year ago :D. I can't keep track of who-all spouse is dating (it fluctuates a lot) but I have three other nodes on the Big Unruly Chart Thing, one of whom is also dating spouse. Going very smoothly :)

While Joe could follow each universe and cut it off when it starts showing disutility, that isn't the procedure he chose. He opted to create universes and then "undo" them.

I'm not sure whether "undoing" a universe would make the qualia in it not exist. Even if it is removed from time, it isn't removed from causal history, because the decision to "undo" it depends on the history of the universe.

-1Luke_A_Somers
Regardless of whether undoing would work, I presume that never-entered states would not have qualia associated with them.

Read it more carefully. One or several paragraphs before the designated-human aliens, it is mentioned that CelestAI found many sources of complex radio waves which weren't deemed "human".

From your username it looks like you're Dutch (it is literally "the flying Dutchman" in Dutch), so I'm surprised you've never heard of the Dutch bible belt and their favourite political party, the SGP. They get about 1.5% of the vote in the national elections and seem pretty legit. And those are just the Christians fervent enough to oppose women's suffrage. The other two Christian parties have around 15% of the vote, and may contain proper believers as well.

I think he means "I cooperate with the Paperclipper IFF it would one-box on Newcomb's problem with myself (with my present knowledge) playing the role of Omega, where I get sent to rationality hell if I guess wrong". In other words: if Eliezer believes that Clippy would one-box in the situation where Eliezer would prepare for one-boxing if he expected Clippy to one-box, and for two-boxing if he expected Clippy to two-box, then Eliezer will cooperate with Clippy. Or in other words still: if Eliezer believes Clippy to be ignorant... (read more)
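A toy sketch of that conditional in code (the function names and the dict interface are invented purely for illustration; nothing here comes from the original thread):

```python
def eliezer_predicts_one_boxing(model_of_clippy: dict) -> bool:
    # Stand-in for "Eliezer simulates Clippy while playing a fallible Omega".
    # The dict field is a hypothetical modelling convenience.
    return model_of_clippy.get("would_one_box", False)

def eliezer_cooperates(model_of_clippy: dict) -> bool:
    # The conditional described above: cooperate with the Paperclipper
    # IFF Eliezer predicts it would one-box with Eliezer in the role of Omega.
    return eliezer_predicts_one_boxing(model_of_clippy)

# Eliezer cooperates exactly when his model says Clippy one-boxes:
print(eliezer_cooperates({"would_one_box": True}))   # True  -> cooperate
print(eliezer_cooperates({"would_one_box": False}))  # False -> defect
```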

In a sense they did eat gold, like we eat stacks of printed paper, or perhaps nowadays little numbers on computer screens.

That doesn't seem true. How can the victim know for sure that the blackmailer is simulating them accurately or being rational?

Suppose you get mugged in an alley by random thugs. Which of these outcomes seems most likely:

  1. You give them the money, they leave.

  2. You lecture them about counterfactual reasoning, they leave.

  3. You lecture them about counterfactual reasoning, they stab you.

Any agent capable of appearing irrational to a rational agent can blackmail that rational agent. This decreases the probability that an agent which appears irrational actually is irrational, but not necessarily to the point that you can dismiss them.

Philip_W150

I think I might have been a datapoint in your assessment here, so I feel the need to share my thoughts on this. I would consider myself socially progressive and liberal, and I would hate not being included in your target audience, but for me your wearing cat ears to the CFAR workshop cost you weirdness points that you later earned back by appearing smart and sane in conversations, by acceptance by the peer group, acclimatisation, etc.

I responded positively because it fell within the 'quirky and interesting' range, but I don't think I would have taken you a... (read more)

1Kaj_Sotala
Thank you! I appreciate the datapoint.

Ah, "actual" threw me off. So you mean something close to "The lifetime projected probability of being born(/dying) for people who came into existence during the last year".

Thanks, edited.

Karma sink.

[This comment is no longer endorsed by its author]

If you're on the autism spectrum and think Tell culture is a bad idea, upvote this comment.

[This comment is no longer endorsed by its author]

If you're on the autism spectrum and think Tell culture is a good idea, upvote this comment.

[This comment is no longer endorsed by its author]

I'm on the autism spectrum (PDD-NOS), and Tell culture sounds like a good idea to me.

[pollid:807]

0Vaniver
If you hit the "show help" button at the bottom right, there's a link to polls help.
0Philip_W
Karma sink.
0Philip_W
If you're on the autism spectrum and think Tell culture is a bad idea, upvote this comment.
0Philip_W
If you're on the autism spectrum and think Tell culture is a good idea, upvote this comment.

birth rate

I wouldn't consider abortion a "birth", per se.

0ike
Exactly, so only people who aren't aborted count as born, in which case the birth rate is 80%.

That's just not true. Death rate, as the name implies, is a rate - the number of people who died during the year divided by the average total population. If "death rate" is 100%, then "birth rate" is 100% by the same reasoning, because 100% of people were born.
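For concreteness, the two readings being conflated (standard demographic definitions; the formalisation is mine):

\[
\text{crude death rate} = \frac{\text{deaths during the year}}{\text{average population that year}},
\qquad
\Pr(\text{eventual death} \mid \text{born}) = 1
\]

Only the second, lifetime reading is trivially 100%, and the parallel "birth rate is 100%" argument uses that same reading.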

0ike
That depends on whether fetuses are people ... If yes, the actual birth rate is around 80%. http://www.cdc.gov/reproductivehealth/Data_Stats/Abortion.htm

You seem to be talking about what I would call sympathy, rather than empathy. As I would use it, sympathy is caring about how others feel, and empathy is the ability to (emotionally) sense how others feel. The former is in fine enough state - I am an EA, after all - it's the latter that needs work. Your step (1) could be done via empathy or pattern recognition or plain listening and remembering as you say. So I'm sorry, but this doesn't really help.

I'll admit I don't really have data for this. But my intuitive guess is that ...

Have you made efforts to research it? Either by trawling papers or by doing experiments yourself?

students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them.

Your objection had already been accounted for: $500 to SCI means around 150 extra people attend school for a year. I estimated the proportion of students who will have a relationship with their teacher as good as the average you provide at around 1 in 150.
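Spelled out, using only the numbers already in this thread:

\[
\$500 \;\longrightarrow\; \sim\!150 \text{ extra student-years} \times \frac{1}{150} \approx 1 \text{ relationship of that quality}
\]

so the donation is expected to buy about one inspiring teacher-student relationship per $500, on top of the attendance itself.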

But

... (read more)
0tjohnson314
This is based on my own experience, and on watching my friends progress through school. I believe that the majority of successful people find their life path because someone inspired them. I don't know where I could even look to find hard numbers on whether that's true or not, but I'd like to be that person for as many people as I can. My emotional brain is still struggling to accept that, and I don't know why. I'll see if I can coax a coherent reason from it later. But my rational brain says that you're right and I was wrong. Thanks.

Did MIRI answer you? I would expect them to have answered by now, and I'm curious about the answer.

you can do things to change yourself so that you do care.

Would you care to give examples or explain what to look for?

7Capla
The biggest thing is just to act like you are already the sort of person who does care. Go do the good work. Find people who are better than you. Hang out with them. "You become like the 6 people you spend the most time with" and all that.

(I remember reading the chapter on penetrating Azkaban in HP:MoR, and feeling how much I didn't care. I knew that there are places in the world where the suffering is as great as in that fictional place, but it didn't bother me; I would just go about my day and go to sleep, whereas the fictional Harry is deeply shaken by his experience. I felt, "I'm not Good [in the moral sense] enough", and then thought that if I'm not good enough, I need to find people who are, who will help me be better. I need to find my Hermiones.)

I'm trying to find the most Good people of my generation, but I realized long ago that I shouldn't be looking for Good people so much as I should be looking for people who are actively seeking to be better than they are. (If you want to be as Good as you can be, please message me. Maybe we can help each other.)

My feelings of moral inadequacy compared to Harry's feelings towards Azkaban (fictional) aren't really fair. My brain isn't designed to be moved by abstract concepts. Harry (fictional) saw that suffering first hand and was changed by it; I only mentally multiply. I'm thinking that I need to put myself in situations where I can experience the awfulness of the world viscerally.

People make fun of teenagers going to "help" build houses in the third world: it's pretty massively inefficient to ship untrained teenagers to Mexico to do manual labor (or only sort of do it), when their hourly output would be much higher if they just got a college degree and donated. Yet I know at least one person (someone who I respect, one of my "Hermiones") who went to build houses in Mexico for a month and was heavily impacted by it, and it spurred her to be of service more generally. (She told me that on the flight back to the st

(separated from the other comment, because they're basically independent threads).

I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.

This sounds unlikely. You say you're improving the education and mental health of on-the-order-of 100 students. Deworm the World and SCI improve school attendance by 25%, meaning you would have the same effect, as a first guess and to first order at least, by donating on-the-order-of $500/yr. And that's just one of the si... (read more)

0tjohnson314
(Sorry, I didn't see this until now.) I'll admit I don't really have data for this. But my intuitive guess is that students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them. At least for me, that's a large part of why I'm in the field that I chose. It's possible that I'm being misled by the warm fuzzy feelings I get from helping someone face-to-face, which I don't get from sending money halfway across the world. But it seems like there are many things that matter in life that don't have a price tag.

Empathy is a useful habit that can be trained, just as much as rationality can be.

Could you explain how? My empathy is pretty weak and could use some boosting.

0tjohnson314
For me it works in two steps: 1) Notice something that someone would appreciate. 2) Do it for them. As seems to often be the case with rationality techniques, the hard part is noticing.

I'm a Christian, so I try to spend a few minutes praying for my friends each day. Besides the religious reasons, which may or may not matter to you, I believe it puts me in the right frame of mind to want to help others. A non-religious time of focused meditation might serve a similar purpose.

I've also worked on developing my listening skills. Friends frequently mention things that they like or dislike, and I make a special effort to remember them. I also occasionally write them down, although I try not to mention that too often. For most people, there's a stronger signaling effect if they think you just happened to remember what they liked.

Assuming his case is similar to mine: the altruism-sense favours wireheading - it just wants to be satisfied - while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams "THIS IS FAKE, YOU GOTTA WAKE UP, NEO". And that part wouldn't shut up unless I actually believed I was out (or it's shut off, naturally).

When modeling myself as sub-agents, then in my case at least the anti-wireheading and ... (read more)

Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:

There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.

With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.

and

... (read more)
0wedrifid
I agree. I downvoted RobinZ's comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct, and had incidentally already been argued for far more eloquently elsewhere in the thread. In contrast, I fundamentally agree with most of what you have said on this thread, so the disagreement on one conclusion regarding a principle of rationality and psychology is more potentially interesting.

I agree with your rejection of the whole paragraph. My objection seems to be directed at the confusion about heroic (and arguably mundane) responsibility rather than the serenity wisdom heuristic. I can empathize with being uncomfortable with colloquial expressions which deviate from literal meaning. I can also see some value in making a stand against that kind of misuse, due to the way such framing can influence our thinking. Overconfident or premature ruling out of possibilities is something humans tend to be biased towards. Whatever you call it, it sounds like you have the necessary heuristics in place to avoid the failure modes the wisdom quote is used to prevent. (Avoiding over-responsibility and avoiding pointless worry loops.)

The phrasing "The X to" intuitively brings to my mind a relative state rather than an absolute one. That is, getting to some Zen endpoint state of inner peace or tranquillity is not needed, but there are often times when moving towards that state to a sufficient degree will allow much more effective action. i.e. it translates to "whatever minimum amount of acceptance of reality and calmness is needed to allow me to correctly account for opportunity costs and decide according to the bigger picture".

That can work. If used too much it sometimes seems to correlate with developing pesky emotional associations (like 'Ugh fields') with related stimuli, but that obviously depends on which emotional cognitive processes result in the 'numbness' and so forth.

In ethics, the question would be answered by "yes, this ethical system is the only acceptable way to make decisions" by definition. In practice, this fact is not sufficient to make more than 0.01% of the world anywhere near heroically responsible (~= considering ethics the only emotionally/morally/role-followingly acceptable way of making decisions), so apparently the question is not decided by ethics.

Instead, roles and emotions play a large part in determining what is acceptable. In western society, the role of someone who is responsible for eve... (read more)

In that case, I'm confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruent with it, and why it doesn't just fall under "courage" and "wisdom" (as the emotional fortitude to withstand the inevitable imperfection/partial failure and accurate beliefs respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.

2wedrifid
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult-to-influence circumstance.

I don't. I suspect you are confusing me with someone else.

Yes. Yet for some reason merely seeing an equation and believing it must be maximised is an insufficient guide to optimally managing the human machinery we inhabit. We have to learn other things - including things which can be derived from the equation - in detail and practice them repetitively.

The Virtue of Narrowness may help you. I have different names for "DDR RAM" and "a replacement battery for my Sony Z2 Android" even though I can see how they both relate to computers.

No: the concept that our ethics is utilitarian is independent from the concept that it is the only acceptable way of making decisions (where "acceptable" is an emotional/moral term).

3V_V
What is an acceptable way of making decisions (where "acceptable" is an emotional/moral term) looks like an ethical question; how can it be independent from your ethics?

HPJEV isn't supposed to be a perfect executor of his own advice and statements. I would say that it's not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for trans... (read more)

5wedrifid
Yes, I do. Most other humans do, too, and it's a sufficiently difficult and easy-to-neglect skill that it is well worth preserving as 'wisdom'. Non-human intelligences will not likely have 'serenity' or 'acceptance' but will need some similar form of the generalised trait of not wasting excessive amounts of computational resources exploring parts of solution space that have insufficient probability of significant improvement.

As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs.

This would only be true if the hero had infinite resources and were actually able to redo everyone's work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn't insist on farming her own wheat for her bread (like she would if she did... (read more)

0RobinZ
My referent for 'heroic responsibility' was HPMoR, in which Harry doesn't trust anyone to do a competent job - not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don't know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to be concerned about the continued good behavior of the bullies in question, and then tell her if those bullies attempted to evade her supervision.

And, in the real world, that would be a perfect example of comparative advantage and opportunity cost in action: Harry is a lot better at high-stakes social and magical shenanigans relative to student discipline than McGonagall is, so for her to expend her resources on the latter while he expends his on the former would produce a better overall outcome by simple economics. (Not to mention that Harry should face far worse consequences if he screws up than McGonagall would - even if he has his status as Savior of the Wizarding World to protect him.) (Also, leaving aside whether his plans would actually work.)

I am advocating for people to take the initiative when they can do good without permission. Others in the thread have given good examples of this. But you can't solve all the problems you touch, and you'll drive yourself crazy if you blame yourself every time you "could have" prevented something that no-one should expect you to have.

There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.

No, it doesn't. If you're uncertain about your own reasoning, discount the weight of your own evidence proportionally, and use the new value. In heuristic terms: err on the side of caution, by a lot if the price of failure is high.
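One way to cash that out, as a sketch of my own formalisation (the reliability factor r is my invention, not something from the thread): temper the likelihood ratio of your own inference by r in [0, 1], so that in odds form

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \left( \frac{P(E \mid H)}{P(E \mid \neg H)} \right)^{r}
\]

where r = 1 means fully trusting your own reasoning, r = 0 means ignoring it, and a high price of failure argues for setting r low.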

Philip_W130

You and Swimmer963 are making the mistake of applying heroic responsibility only to optimising some local properties. Of course that will mean damaging the greater environment: applying "heroic responsibility" basically means you do your best AGI impression, so if you only optimise for a certain subset of your morality your results aren't going to be pleasant.

Heroic responsibility only works if you take responsibility for everything. Not just the one patient you're officially being held accountable for, not just the most likely Everett branches, ... (read more)

-1V_V
So "heroic responsibility" just means "total utilitarianism"?

FWIW, this is more commonly known as "cognitive behavioural therapy", with focus on "schema therapy".

I still don't see why repeat castings with hatred would require higher amounts of effort each time,

This is weird: in many cases hatred would peter out into indifference, rather than into anything positively valued, which ought to make AK easier. In fact, the idea that killing gets easier with time because of building indifference is a recognised trope. It's even weirder that the next few paragraphs are an author tract on how baseline humans let people die out of apathy all the time, so it's not like Yudkowsky is unfamiliar with the ease with which people kill.

075th
Perhaps, but this is not likely to happen in the middle of a battle where you're trying to kill each other. And even if you felt indifference, you would still have to think of trying to cast Avada Kedavra from your indifference, not from your hate, which is how you learned to cast AK in the first place and never questioned. You would have to force a new mindset of calm emptiness upon yourself, which would take practice. Even the worst Death Eaters are not likely to have taken an analytical approach to battle, realized the possibility, and then practiced killing people in their spare time with indifference to make sure it was reliable in the (other guy's) heat of the moment.

Concerning historical analogues: from what I understand about their behaviour, it seems like the Rotary Club pattern-matches some of the ideas of Effective Altruism, specifically the earning-to-give and community-building aspects. They have a million members who give on average over $100/yr to charities picked out by Rotary Club International or local groups. This means that in the past decade, their movement has collected one billion dollars towards the elimination of polio (arithmetic sketched after the list). Some noticeable differences include:

  1. I can't find any mention of Rotary spending
... (read more)
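As a sanity check, the billion-dollar figure is just the product of the membership, giving, and time numbers above:

\[
10^{6} \ \text{members} \times \$100/\text{yr} \times 10 \ \text{yr} = \$10^{9}
\]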