It's probably too small scale to be statistically significant. The God acts on large sample sizes and problems with many different bottlenecks. I would guess that most of the cost was tied up in a single technique.
Status works like the OP describes when going from "dregs" to "valued community member". Social safety is a very basic need, and EA membership undermines it for many people by getting them to compare themselves to famous EAs rather than to a more realistic peer group. This is especially true in regions with a lower density of EAs, or where all the 'real' EAs pack up and move to higher-density regions.
I think the OP meant "high" as a relative term, compared to many people who feel like dregs.
People don't have that amount of fine control over their own psychology. Depression isn't something people 'do to themselves' either, at least not with the common implications of that phrase.
Also, this was a minimal definition based on a quick search of relevant literature for demonstrated effects, as I intended to indicate with "at least". Effects of objectification in the perpetrator are harder to disentangle.
Sociology and psychology. Determine patterns in human desires and behaviour, and derive universal rules from them. Either that, or scale up your resources and get yourself an FAI.
'Happiness' is a vague term which refers to various prominent sensations and to a more general state, as vague and abstract as CEV (e.g. "Life, Liberty, and the pursuit of Happiness"). 'Headache', on the other hand, primarily refers to the sensation.
If you take an aspirin for a headache, your head muscles don't stop clenching (or whatever else the cause is); it just feels like it for a while. A better pill would stop the clenching, and a better treatment still would make you aware of the physiological cause of the clenching and allow you to change it to your liking.
Having a good factual model of a person would be necessary, and perhaps sufficient, for making that judgment favourably. When moving beyond making people more equal and free in their means, the model should be significantly better than their self-model. After that, the analyst would probably value the fact that the people thus observed care about self-determination in the territory (so no deceiving them into thinking they're self-determining), and act accordingly.
If people declare that analysing people well enough to know their moral values is itself being a busybody, it ...
In the analogy, water represents the point of the quote (possibly as applied to CEV). You're saying there is no point. I don't understand what you're trying to say in a way that is meaningful, but I won't bother asking because 'you can't do my thinking for me'.
Edit: fiiiine, what do you mean?
Be careful when defining the winner as someone other than the one currently sitting on a mound of utility.
Most LessWrong users at least profess to want to be above social status games, so calling people out on it increases expected comment quality and personal social status/karma, at least a little.
You may not be able to make a horse drink, but you can still lead it to water rather than merely point out it's thirsty. Teaching is a thing that people do with demonstrated beneficial results across a wide range of topics. Why would this be an exception?
I don't think that helps AndHisHorse figure out the point.
Congratulations!
I might just have to go try it now.
'he' in that sentence ('that isn't the procedure he chose') still referred to Joe. Zubon's description doesn't justify the claim, it's a description of the consequence of the claim.
My original objection was that 'they' ("I think they would have given up on this branch already.") have a different procedure than Joe has ("all you have to do is do a brute force search of the space of all possible actions, and then pick the one with the consequences that you like the most."). Whomever 'they' refers to, you're expecting them to care about hu...
What do you mean by "never-entered" (or "entered") states? Ones Joe doesn't (does) declare real to live out? If so, the two probably correlate, but Joe may be mistaken. A full simulation of our universe running on sufficient hardware would contain qualia, so the infinitely powerful process which gives Joe the knowledge he uses to decide which universe is best may contain qualia as well, especially if the process is optimised for making Joe certain of his decision rather than for Joe's utility function.
How about now?
We got married almost a year ago :D. I can't keep track of who-all spouse is dating (it fluctuates a lot) but I have three other nodes on the Big Unruly Chart Thing, one of whom is also dating spouse. Going very smoothly :)
While Joe could follow each universe and cut it off when it starts showing disutility, that isn't the procedure he chose. He opted to create universes and then "undo" them.
I'm not sure whether "undoing" a universe would make the qualia in it not exist. Even if it is removed from time, it isn't removed from causal history, because the decision to "undo" it depends on the history of the universe.
Read it more carefully. One or several paragraphs before the designated-human aliens, it is mentioned that CelestAI found many sources of complex radio waves which weren't deemed "human".
From your username it looks like you're Dutch (it is literally "the flying Dutchman" in Dutch), so I'm surprised you've never heard of the Dutch bible belt and their favourite political party, the SGP. They get about 1.5% of the vote in the national elections and seem pretty legit. And those are just the Christians fervent enough to oppose women's suffrage. The other two Christian parties have around 15% of the vote, and may contain proper believers as well.
I think he means "I cooperate with the Paperclipper IFF it would one-box on Newcomb's problem with myself (with my present knowledge) playing the role of Omega, where I get sent to rationality hell if I guess wrong". In other words: if Eliezer believes that, were Eliezer and Clippy in the situation where Eliezer would prepare for one-boxing if he expected Clippy to one-box and for two-boxing if he expected Clippy to two-box, Clippy would one-box, then Eliezer will cooperate with Clippy. Or in other words still: If Eliezer believes Clippy to be ignorant...
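(A minimal sketch of that conditional rule, in Python of my own; the function name and inputs are purely illustrative, not anything from the thread:)

```python
# Cooperate with Clippy exactly when you predict that Clippy would one-box
# on Newcomb's problem with you playing the role of Omega.
def eliezer_cooperates(predicts_clippy_one_boxes: bool) -> bool:
    return predicts_clippy_one_boxes

print(eliezer_cooperates(True))   # True  -> cooperate
print(eliezer_cooperates(False))  # False -> don't cooperate
```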
In a sense they did eat gold, like we eat stacks of printed paper, or perhaps nowadays little numbers on computer screens.
That doesn't seem true. How can the victim know for sure that the blackmailer is simulating them accurately or being rational?
Suppose you get mugged in an alley by random thugs. Which of these outcomes seems most likely:
1. You give them the money, they leave.
2. You lecture them about counterfactual reasoning, they leave.
3. You lecture them about counterfactual reasoning, they stab you.
Any agent capable of appearing irrational to a rational agent can blackmail that rational agent. This decreases the probability of agents which appear irrational being irrational, but not necessarily to the point that you can dismiss them.
I think I might have been a datapoint in your assessment here, so I feel the need to share my thoughts on this. I would consider myself socially progressive and liberal, and I would hate not being included in your target audience, but for me your wearing cat ears to the CFAR workshop cost you weirdness points, which you later earned back by appearing smart and sane in conversations, through acceptance by the peer group, acclimatisation, etc.
I responded positively because it fell within the 'quirky and interesting' range, but I don't think I would have taken you a...
Ah, "actual" threw me off. So you mean something close to "The lifetime projected probability of being born(/dying) for people who came into existence during the last year".
Thanks, edited.
Karma sink.
If you're on the autism spectrum and think Tell culture is a bad idea, upvote this comment.
If you're on the autism spectrum and think Tell culture is a good idea, upvote this comment.
I'm on the autism spectrum (PDD-NOS), and Tell culture sounds like a good idea to me.
[pollid:807]
birth rate
I wouldn't consider abortion a "birth", per se.
That's just not true. Death rate, as the name implies, is a rate: the number of people who died this year divided by the average total population. If the "death rate" is 100%, then the "birth rate" is 100% by the same reasoning, because 100% of people were born.
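(To make that definition concrete, a tiny worked example; the population figures below are invented purely for illustration:)

```python
# Crude death rate = deaths during the year / average total population
deaths_this_year = 9_000
average_population = 1_000_000

death_rate = deaths_this_year / average_population
print(f"Crude death rate: {death_rate:.2%} per year")  # 0.90% per year
```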
You seem to be talking about what I would call sympathy, rather than empathy. As I would use it, sympathy is caring about how others feel, and empathy is the ability to (emotionally) sense how others feel. The former is in fine enough state - I am an EA, after all - it's the latter that needs work. Your step (1) could be done via empathy or pattern recognition or plain listening and remembering as you say. So I'm sorry, but this doesn't really help.
I'll admit I don't really have data for this. But my intuitive guess is that ...
Have you made efforts to research it? Either by trawling papers or by doing experiments yourself?
students don't just need to be able to attend school; they need a personal relationship with a teacher who will inspire them.
Your objection had already been accounted for: $500 to SCI means around 150 extra people attend school for a year. I estimated the fraction of those students who will have a relationship with their teacher as good as the average you provide at around 1 in 150.
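(Restating that estimate as explicit arithmetic, purely as a sanity check on the numbers quoted above:)

```python
extra_school_years_per_500_usd = 150        # the SCI figure quoted above
comparable_relationship_fraction = 1 / 150  # my 1-in-150 estimate above

relationships_per_500_usd = extra_school_years_per_500_usd * comparable_relationship_fraction
print(relationships_per_500_usd)  # ~1 comparably good student-teacher relationship per $500
```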
...But
Did MIRI answer you? I would expect them to have answered by now, and I'm curious about the answer.
you can do things to change yourself so that you do care.
Would you care to give examples or explain what to look for?
(separated from the other comment, because they're basically independent threads).
I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.
This sounds unlikely. You say you're improving the education and mental health of on-the-order-of 100 students. Deworm the World and SCI improve school attendance by 25%, meaning you would have the same effect, as a first guess and to first order at least, by donating on-the-order-of $500/yr. And that's just one of the si...
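(A hedged back-of-the-envelope version of that comparison, reusing the "$500 to SCI ≈ 150 extra school-years" figure from my other comment in this thread; treat the output as a first-order sanity check, not a real cost-effectiveness estimate:)

```python
students_affected_locally = 100          # "on-the-order-of 100 students"
cost_per_extra_school_year = 500 / 150   # ~$3.33, from the SCI figure quoted elsewhere

donation_for_comparable_effect = students_affected_locally * cost_per_extra_school_year
print(f"~${donation_for_comparable_effect:.0f}/yr")  # ~$333/yr, i.e. on the order of $500/yr
```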
Empathy is a useful habit that can be trained, just as much as rationality can be.
Could you explain how? My empathy is pretty weak and could use some boosting.
Assuming his case is similar to mine: the altruism-sense favours wireheading - it just wants to be satisfied - while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams "THIS IS FAKE, YOU GOTTA WAKE UP, NEO". And that part wouldn't shut up unless I actually believed I was out (or it's shut off, naturally).
When modeling myself as sub-agents, then in my case at least the anti-wireheading and ...
Evidence please?
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility:
There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it.
...and
In ethics, the question would be answered with "yes, this ethical system is the only acceptable way to make decisions" by definition. In practice, this fact is not sufficient to make more than 0.01% of the world anywhere near heroically responsible (~= considering ethics the only emotionally/morally/role-followingly acceptable way of making decisions), so apparently the question is not decided by ethics.
Instead, roles and emotions play a large part in determining what is acceptable. In western society, the role of someone who is responsible for eve...
In that case, I'm confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruous with it, and why it doesn't just fall under "courage" and "wisdom" (as the emotional fortitude to withstand the inevitable imperfection/partial failure, and accurate beliefs, respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don't see a reason to distinguish between things I "can't change" and things I might be able to change but which are simply suboptimal.
No: the concept that our ethics is utilitarian is independent from the concept that it is the only acceptable way of making decisions (where "acceptable" is an emotional/moral term).
HPJEV isn't supposed to be a perfect executor of his own advice and statements. I would say that it's not the concept of heroic responsibility that is at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for trans...
As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs.
This would only be true if the hero had infinite resources and were actually able to redo everyone's work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn't insist on farming her own wheat for her bread (like she would if she did...
No, it doesn't. If you're uncertain about your own reasoning, discount the weight of your own evidence proportionally, and use the new value. In heuristic terms: err on the side of caution, by a lot if the price of failure is high.
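(A toy sketch of "discount the weight of your own evidence proportionally", using a simple log-odds model of my own choosing; none of this is from the original exchange:)

```python
def discounted_update(prior_log_odds: float,
                      evidence_log_odds: float,
                      trust_in_own_reasoning: float) -> float:
    # trust_in_own_reasoning in [0, 1]: 1 = fully trust the reasoning that
    # produced the evidence, 0 = ignore it entirely.
    return prior_log_odds + trust_in_own_reasoning * evidence_log_odds

# Strong apparent evidence (+3 log-odds) produced by reasoning you only
# half trust should move you half as far.
print(discounted_update(0.0, 3.0, 0.5))  # 1.5
```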
You and Swimmer963 are making the mistake of applying heroic responsibility only to optimising some local properties. Of course that will mean damaging the greater environment: applying "heroic responsibility" basically means you do your best AGI impression, so if you only optimise for a certain subset of your morality your results aren't going to be pleasant.
Heroic responsibility only works if you take responsibility for everything. Not just the one patient you're officially being held accountable for, not just the most likely Everett branches, ...
FWIW, this is more commonly known as "cognitive behavioural therapy", with a focus on "schema therapy".
I still don't see why repeat castings with hatred would require higher amounts of effort each time,
This is weird: in many cases hatred would peter out into indifference, rather than positive value, which ought to make AK easier. In fact, the idea that killing gets easier with time because of building indifference is a recognised trope. It's even weirder that the next few paragraphs are an author tract on how baseline humans let people die out of apathy all the time, so it's not like Yudkowsky is unfamiliar with the ease with which people kill.
Concerning historical analogues: From what I understand about their behaviour, it seems like the Rotary Club pattern-matches some of the ideas of Effective Altruism, specifically the earning-to-give and community-building aspects. They have a million members who give on average over $100/yr to charities picked out by Rotary Club International or local groups. This means that in the past decade, their movement has collected one billion dollars towards the elimination of Polio. Some noticeable differences include:
Small correction: you want to buy the widget as long as x > 7/8.
You should also almost never expect x>1, because that means you should immediately spend your money on that cause until x becomes 1 or you run out of credit. x=1 means that something is the best marginal way to allocate money that you know of right now.
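(A toy model of why x should settle near 1; the diminishing-returns curve below is my own invention, not anything from the thread:)

```python
# If a cause is better than your best known marginal use of money (x > 1),
# you keep funding it, and diminishing returns push its marginal value back
# down towards x = 1.
def x(dollars_already_given: float) -> float:
    # Hypothetical diminishing-returns curve, measured relative to your
    # best alternative use of money.
    return 2.0 / (1.0 + dollars_already_given / 1000.0)

given, step = 0.0, 100.0
while x(given) > 1.0:   # x > 1: keep allocating money to this cause
    given += step
print(f"Funding stops around ${given:.0f}, where x has fallen to ~1")
```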