It's not a question of timeframes, but of how likely you are to lose the war, how big the concessions would have to be to prevent the war, and how much the war would cost you even if you win (costs can have flow-through effects into the far future).
Not that any of this matters to the NK discussion.
The idea is that isolationism and destruction aren't cheaper than compromise. Of course this doesn't work if there's no mechanism of verification between the entities, or no mechanism to credibly change the utility functions. It also doesn't work if the utility functions are exactly inverse, i.e. neither side can concede priorities that are less important to them but more important to the other side.
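The gains-from-trade intuition behind this can be shown with a toy numerical sketch (all payoff numbers here are made up purely for illustration): if each side concedes the issue it values less, both do better than under conflict, but this breaks down when the utility functions are exactly inverse.

```python
# Toy illustration with made-up payoffs: two parties, two issues.
# Each party cares more about a different issue, so a trade exists.
a_values = {"issue1": 10, "issue2": 3}   # A cares mostly about issue1
b_values = {"issue1": 2, "issue2": 8}    # B cares mostly about issue2
war_cost = 6                             # paid by both sides in a conflict

# Compromise: each side wins the issue it values more.
a_compromise = a_values["issue1"]
b_compromise = b_values["issue2"]

# Conflict: assume each side has a 50% chance of winning both issues,
# and both pay the war cost regardless of who wins.
a_conflict = 0.5 * (a_values["issue1"] + a_values["issue2"]) - war_cost
b_conflict = 0.5 * (b_values["issue1"] + b_values["issue2"]) - war_cost

print(a_compromise > a_conflict)  # True (10 vs. 0.5)
print(b_compromise > b_conflict)  # True (8 vs. -1)
```

If instead both sides valued both issues identically (a strictly zero-sum case), no concession would leave either side better off, which is the "exactly inverse utility functions" failure mode above. And without verification, each side must also expect the other to defect from the compromise, which is the separate failure mode.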
A human analogy, although an imperfect one, would be to design a law that fulfills the most important priorities of a parliamentary majorit...
The problem is that the same untrustworthiness is true for the US regime. It has shown in the past that it will break its agreements with North Korea if it finds that convenient. Currently, in how the US regime handles Iran, it is lying and has broken its part of the nonproliferation agreement.
This lack of trustworthiness means that, in a game-theoretic sense, there is no way for North Korea to give up the leverage of its nuclear weapons while still being credibly promised economic help in the future.
>The symmetric system is in favor of action.
This post made me think how much I value the actions of others, rather than just their omissions. And I have to conclude that the actions I value most in others are the ones that *thwart* actions of yet other people. When police and military take action to establish security against entities who would enslave or torture me, I value it. But on net, the activities of other humans are mostly bad for me. If I could snap my fingers and all other humans dropped dead (became inactive), I would instrumentally be bette...
>as the world branches, my total measure should decline many orders of magnitude every second
I'm not sure why you think that. From any moment in time, it's consistent to count all future forks toward my personal identity without having to count all other copies that don't causally branch from my current self. Perhaps this depends on how we define personal identity.
>but it doesn't affect my decision making.
Perhaps it should - tempered by the possibilities that your assumptions are incorrect, of course.
Another accounting trick: Coun...
That's a clever accounting trick, but I only care what happens in my actual future(s), not elsewhere in the universe that I can't causally affect.
>Thus, by not signing for cryonics she increases the share of her futures where she will be hostily resurrected in total share of her futures.
But she decreases the share of her futures where she will be resurrected at all, some of which contain hostile resurrection, and therefore she really decreases the share of her futures where she will be hostilely resurrected. She just won't consciously experience those where she doesn't exist, which is better than suffering from the perspective of those who consider suffering negative utility.
>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.
They're almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering, not by 180 degrees, but at a very sharp angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and it will try to spread life to places where it doesn't naturally occur. And that life obviously will contain large amounts ...
>Our life could be eternal and thus have meaning forever.
Or you could be tortured forever without consent and without even being allowed to die. You know, the thing organized religion has spent millennia moralizing through endless spin efforts, which is now a part of common culture, including popular culture.
Let's just look at our culture, as well as contemporary and historical global cultures. Do we have:
I’m confused about OpenAI’s agenda.
Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research that’s not specifically aimed at safety.
If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.
People disagree (individual people over time as OpenAI's policies have changed, and different people within the EAsphere) over whether OpenAI is net positive or harmful. So, if you're confused about "isn't this... just bad?", know that you're not alone in that outlook.
Arguments that OpenAI (and Deepmind) are pursuing reasonable strategies include something like:
Most AI researchers are excited about AI research and are going to keep doing it somewhere, and if OpenAI or Deepmind switched to a "just focus on safety&quo...
Not all proposed solutions to x-risk fit this pattern: if government spends taxes to build survival shelters that will shelter only a chosen few, who will then go on to perpetuate humanity in case of a cataclysm, most taxpayers receive no personal benefit.
Similarly, if government-funded programs solve AI value loading problems and the ultimate values don't reflect my personal self-regarding preferences, I don't benefit from the forced funding and may in fact be harmed by it. This is also true for any scientific research whose effect can be harmful to me personally even if it reduces x-risk overall.
What have you read about it that has caused you to stop considering it, or to overlook it from the start?
I reject impartiality on the grounds that I'm a personal identity and therefore not impartial. The utility of others is not my utility, therefore I am not a utilitarian. I reject unconditional altruism in general for this reason. It amazes me in hindsight that I was ever dumb enough to think otherwise.
Can you teach me how to see positive states as terminally (and not just instrumentally) valuable, if I currently don’t?
Teach, no, but there are some ...
The utility of others is not my utility, therefore I am not a utilitarian. I reject unconditional altruism in general for this reason.
When I say that I'm a utilitarian (or something utilitarian-ish), I mean something like: If there were no non-obvious bad side-effects — e.g., it doesn't damage my ability to have ordinary human relationships in a way that ends up burning more value than it creates — I'd take a pill that would bind my future self to be unwilling to sacrifice two strangers to save a friend (or to save myself), all else being eq...
I observe that you are communicating in bad faith and with hostility, so I will use my right to exit for any further communication with you.
My read of this thread is that your (Andaro's) original comment pointed at a particular subset of relationships, which are 'bad' but seem better than the alternatives to the person inside them, where the reason to trust the judgment of the person inside them is that right to exit means they will leave relationships that are better than their alternatives. Paperclip Maximizer then pointed out that a major class of reasons people stay in abusive relationships is that their alternatives are manipulated by the abuser, either through explicit or ...
What? Why? No sane person would classify "he will murder me if I leave" as "the right to exit isn't blocked". I don't expect much steelmanning from the downvote-bots here, but if you're strawmanning on a rationalist board, good-faith communication becomes disincentivized. It's not like I have skin in the game; all my relationships are nonviolent and I neither give a shit about feminism nor anti-feminism.
Still, if "she's such a nice person but sometimes she explodes" isn't compatible with revealed ...
I didn't read the whole post, but most of that is just the right to exit being blocked by various mechanisms, including socioeconomic pressure and violence. And the socioeconomic ones aren't even necessarily incompatible with revealed preference; if the alternative is homelessness, this may suck, but the partner still has no obligation to continue the relationship and the socioeconomic advantages are obviously a part of the package.
>if we are able to wirehead in an effective manner it might be morally obligatory to force them into wireheading to maximize utility.
Not interested in this kind of "moral obligation". If you want to be a hedonistic utilitarian, use your own capacity and consent-based cooperation for it.
I think it's worth making the distinction between reward hacking, pleasure wireheading, and addiction more clearly. There's some overlap, but these are different concepts with different implications for our utility.
The whole ideological subtext reeks of puritan moralism. You imply that we exist to make humanity's future bigger, rather than to do whatever the hell we actually prefer.
As long as pleasure wireheading is consensual, you longtermists can simply forgo your own pleasure wireheading and instead work very hard on the whole growth an...
The demand for sexual violence in fiction is easy to explain. It allows us to fantasize about behavior that would be prohibitively disadvantageous in practice, and it allows us to reflect on hypothetical situations that are relevant to our interests, such as how to deal with violent people.
My default model for abusive relationships *where the right to exit is not blocked* is indeed revealed preference. Not necessarily revealed preference for the abuse, but for the total package of goods and bads in the relationship.
The sex and romance market is a market af...
Indeed, as mentioned, without altruism, voting behaviour is fairly inexplicable.
I vote to reward or penalize politicians based on their previous choices, rather than to create better outcomes. That is, I look back, not forward.
There are some exceptions, e.g. when a candidate before assuming office is sending unusually credible signals, e.g. glorifying torture or some such. Other than that, I mostly ignore promises, and instead implement reciprocity for past decisions.
Edited after more reflection:
Whereas the expected benefit of voting to you alone is the Br...
I agree with other commenters that the slavery framing is unhelpful. However, I mostly do agree with Jordan Peterson otherwise.
Human rights set expectations for how we treat each other. From my perspective, respect for them is conditional on reciprocity. I will not respect the rights of an individual who doesn't respect mine. Their function is to set standards of behavior that make everybody better off.
A benefit of human rights, rather than mammal rights or just smaller-identity rights is that they benefit everyone who can understand the concept, so they&...
I have no idea what toonalfrink's goals for the conversation are. But when someone writes something like,
>So you find yourself in this volunteering opportunity with some EA's and they tell you some stuff you can do, and you do it, and you're left in the dark again. Is this going to steer you into safe waters? Should you do more? Impress more? Maybe spend more time on that Master's degree to get grades that set you apart, maybe that'll get you invited with the cool kids?
then the only sensible option from my perspective is to take...
I do think it makes sense to step back, but in the opposite order (you can't rederive your entire ontology and goal structure every time something doesn't make sense - it's too much work and you'd never get anything done).
"Why am I seeking status?" and "Why is EA and/or EA-organizations the right way to go about A?" seem like plausible steps-backwards to take given the questions toon is raising here.
"Why altruism?" is a question every altruist should take seriously at least once, but none of the dilemmas r...
If the required kind of multiverse exists, this leads to all kinds of contradictions.
For example, in some universes, Personal Identity X may have given consent to digital resurrection, while in others, the same identity may have explicitly forbidden it. In some universes, their relatives and relationships may have positive preferences regarding X's resurrection; in others, they may have negative preferences.
Given your assumed model of personal identity and the multiverse, you will always find that shared identities have contradicting preferences. They ...
To be precise, this seems like a cost to Alice of Bob having a wide circle, if Alice and Bob are close. If they aren't, and especially if we bring in a veil of ignorance, then Alice is likely to benefit somewhat from Bob having a wide circle.
Yes, but Alice doesn't benefit from Bob's having a circle so wide it contains nonhuman animals, far future entities or ecosystems/biodiversity for their own sake.
and my reaction is that none of that stops children from dying of malaria, which is really actually a thing I care about and don't wan...
But an expanding circle of moral concern increases value differences. If I have to pay for a welfare system, or else pay for a welfare system and also biodiversity maintenance and also animal protection and also development aid and also a Mars mission without a business model and also far-future climate change prevention, I'd rather just pay for the welfare system. Other ideological conflicts would also go away, such as the conflict between preventing animal suffering and maintaining pristine nature, ethical natalism vs. ethical anti-natalism, and so on.
Yes, it certainly cuts both ways. Of course, your country's welfare system is also available to you and your family if you ever need it, and you benefit more directly from social peace and democracy in your country, which is helped by these transfers. It is hard to see how you could have a functioning democracy without poor people voting for some transfers, so unless you think democracy has no useful function for you, that's a cost in your best interest to pay.
The moral circle is not ever expanding, and I consider that a good thing.
A very wide moral circle is actually very costly to a person. Not only can it cause a lot of stress to think of the suffering of beings in the far future or nonhuman animals in farming or in the wild, but it also requires a lot of self-sacrifice to actually live up to this expanded circle.
In addition, it can put you at odds with other well-meaning people who care about the same beings, but in a different way. For example, when I still cared about future generations, I mostly cared ab...
I agree. I certainly didn't mean to imply that the Trump administration is trustworthy.
My point was that the analogy of AIs merging their utility functions doesn't apply to negotiations with the NK regime.