Andaro

Comments

Andaro

I agree. I certainly didn't mean to imply that the Trump administration is trustworthy.

My point was that the analogy of AIs merging their utility functions doesn't apply to negotiations with the NK regime.

Andaro

It's not a question of timeframes, but of how likely you are to lose the war, how big the concessions would have to be to prevent the war, and how much the war would cost you even if you win (costs can have flow-through effects into the far future).
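
Spelled out as a toy expected-value comparison (all numbers below are invented for illustration, and the cost terms are meant to fold in any flow-through effects):

```python
# Toy model: concede if the concessions that prevent the war cost less
# than the expected cost of fighting it. All numbers are invented.

p_lose = 0.3           # probability of losing the war
cost_if_lose = 1000    # total cost if you lose
cost_if_win = 200      # the war still costs you something even if you win
concession_cost = 150  # cost of the concessions that would prevent the war

expected_war_cost = p_lose * cost_if_lose + (1 - p_lose) * cost_if_win  # 440.0

prefer_concessions = concession_cost < expected_war_cost  # True in this example
```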

Not that any of this matters to the NK discussion.

Andaro

The idea is that isolationism and destruction aren't cheaper than compromise. Of course this doesn't work if there's no mechanism of verification between the entities, or no mechanism to credibly change the utility functions. It also doesn't work if the utility functions are exactly inverse, i.e. neither side can concede priorities that are less important to them but more important to the other side.

A human analogy, although an imperfect one, would be to design a law that fulfills the most important priorities of a parliamentary majority, even if each individual would prefer a somewhat different law.
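
A minimal sketch of the merging idea, with invented agents, outcomes, and numbers; it only shows why the scheme needs an outcome both sides prefer to conflict, and why exactly inverse utility functions leave nothing to merge:

```python
# Two agents with utility functions over a small set of outcomes.
# "Merging" is modeled here as maximizing a weighted sum of both
# utilities; the merge only helps if the resulting outcome beats
# conflict for both sides. All names and numbers are invented.

outcomes = ["conflict", "compromise", "A_wins", "B_wins"]

U_A = {"conflict": -50, "compromise": 60, "A_wins": 100, "B_wins": -20}
U_B = {"conflict": -50, "compromise": 55, "A_wins": -20, "B_wins": 100}

def merged_utility(outcome, w_a=0.5, w_b=0.5):
    """Weighted combination of both agents' utilities."""
    return w_a * U_A[outcome] + w_b * U_B[outcome]

best = max(outcomes, key=merged_utility)  # -> "compromise"
both_prefer_merge = U_A[best] > U_A["conflict"] and U_B[best] > U_B["conflict"]  # True

# If U_B were exactly -U_A (fully inverse preferences), every gain for one
# side would be a loss for the other, and no weighting would yield an
# outcome both sides prefer to conflict.
```

Real agents would also need the verification and commitment mechanisms mentioned above, which this toy model simply assumes away.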

I don't think something like this is possible with untrustworthy entities like the NK regime. They're already torturing and murdering people; of course they're going to lie and break agreements too.

Andaro

>The symmetric system is in favor of action.

This post made me think about how much I value the actions of others, rather than just their omissions. And I have to conclude that the actions I value most in others are the ones that *thwart* the actions of yet other people. When police and military take action to establish security against entities who would enslave or torture me, I value it. But on net, the activities of other humans are mostly bad for me. If I could snap my fingers and all other humans dropped dead (became inactive), I would instrumentally be better off than I am now. Sure, I'd lose their company and economic productivity, but it would remove all intelligent adversaries from my universe, including those who would torture me.

>The Good Place system...

I think it's worth noting that you have chosen an example of a system where people will not just be tortured, but tortured *for all eternity without the right to ever actually die*, and not even the moral philosopher character manages to formulate a coherent in-depth criticism of that philosophy. I know it's a comedy show, but it's still premised on the acceptance that there would be a system of eternal torture, that this system would be moralized as justice, and that it would of course be nonconsensual, with no exit option.

Andaro

>It is even possible (in fact, due to resource constraints, it is likely) that they’re at odds with one another.

They're almost certainly extremely at odds with each other. Saving humanity from destroying itself points in the other direction from reducing suffering: not a full 180 degrees, but at a very wide angle. This is not just because of resource constraints, but even more so because humanity is a species of torturers and will try to spread life to places where it doesn't naturally occur. And that life will obviously contain large amounts of suffering. People don't like hearing that, especially in the x-risk-reduction demographic, but it's pretty clear the goals are at odds.

Since I'm a non-altruist, there's not really any reason to care about most of that future suffering (assuming I'll be dead by then), but there's not really any reason to care about saving humanity from extinction, either.

There are some reasons why the angle is not a full 180 degrees: there might be aliens who would also cause suffering and humanity might compete with them for resources, humanity might wipe itself out in ways that also cause suffering (such as AGI), or there might be practical correlations between political philosophies that cause high suffering and also a high probability of extinction, e.g. torturers are less likely to care about humanity's survival. But none of these make the goals point in the same direction.

Andaro

>Our life could be eternal and thus have meaning forever.

Or you could be tortured forever without consent and without even being allowed to die. You know, the thing organized religion has spent millennia moralizing through endless spin efforts, and which is now part of common culture, including popular culture.

Let's just look at our culture, as well as contemporary and historical global cultures. Do we have:

  • a consensus of consensualism (life and suffering should be voluntary)? Nope, we don't.
  • a consensus of anti-torture (torturing people being illegal and immoral universally)? Nope, we don't.
  • a consensus of proportionality (finite actions shouldn't lead to infinite punishments)? Nope, we don't.

You'd need at least one of these to just *reduce* the probability of eternal torture, and then it still wouldn't guarantee an acceptable outcome. And we have none of these.

They would do it if they could, and the only reason you're not already being tortured for all eternity is that they haven't found a way to implement it.

The probability of getting it done is small, but that is not an argument in favor of your suggestion: if it can't be done, you don't get eternal meaning either; if it can be done, you have effectively increased the risk of eternal torture for all of us by working in this direction.

Andaro

I’m confused about OpenAI’s agenda.

Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research that’s not specifically aimed at safety.

If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.
