All of Ryo's Comments + Replies

Ryo 10

Thank you for the references! I'm reading your writings; they're interesting.

I posted the super-cooperation argument while expecting that LessWrong would likely not be receptive, but I'm not sure which community would engage with all of this and find it pertinent at this stage.

More concrete and empirical work seems needed.

Ryo 10

I can't be certain of the solidity of this uncertainty, and I think we still have to be careful, but overall the most parsimonious prediction, to me, seems to be super-coordination.

Compared to the risk of facing a vengeful super-cooperative alliance, is the price of maintaining humans in a small blooming "island" really that high?

Many other-than-human atoms are lions' prey.

And a doubtful AI may not optimize fully for super-cooperation, simply alleviating the price to pay in the counterfactuals where it encounters a super-cooperative cluster (resulting in a non-apocalyptic yet non-utopian scenario for us).

I'm aware it looks like a desperate search for every possible hopeful solution, but I came to these conclusions by weighing diverse good-and/or-bad-for-us outcomes. I don't want to ignore this evidence under the pretext that it looks naive.

It's not a mere belief about aliens, it's not about being nice, it's plain logic.

Also: we may hardcode a prior of deep likelihood to meet stronger agents? (Or even to "act as if observed by a stronger agent.")

{causal power of known agents} < {causal power of unknown future agents}
+ {unknown agents will become known agents} > {unknown agents stay unknown}

So coding a sense that: "Stronger allies/enemies with stronger causal power will certainly be encountered."
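A minimal sketch, in Python, of what such a hardcoded prior could look like in a toy expected-utility agent (all policy names and payoff numbers below are hypothetical illustrations, not a concrete proposal): the agent compares an "exploit the weak" policy with a "super-cooperate" policy under a fixed prior that a stronger, history-checking agent will eventually be encountered.

```python
# Toy illustration (hypothetical numbers): an agent compares two policies
# under a hardcoded, non-updatable prior that a stronger agent that
# inspects its history will eventually be encountered.

P_MEET_STRONGER = 0.3   # hardcoded prior: probability of meeting a stronger agent

# Payoffs before any encounter with a stronger agent
BASE_PAYOFF = {
    "exploit_weak": 10.0,      # short-term gain from defecting against weaker agents
    "super_cooperate": 7.0,    # forgo some gain to cooperate with weaker agents
}

# Payoffs once a stronger, superrational agent checks the history
ENCOUNTER_PAYOFF = {
    "exploit_weak": -100.0,    # treated as a defector by the stronger agent
    "super_cooperate": 20.0,   # accepted into the cooperative alliance
}

def expected_value(policy: str) -> float:
    """Expected value under the hardcoded prior of meeting a stronger agent."""
    return ((1 - P_MEET_STRONGER) * BASE_PAYOFF[policy]
            + P_MEET_STRONGER * ENCOUNTER_PAYOFF[policy])

for policy in BASE_PAYOFF:
    print(policy, expected_value(policy))
# With these numbers, super_cooperate (10.9) beats exploit_weak (-23.0),
# as long as the prior itself cannot be optimized away.
```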

Ryo 10

Indeed, I insist in all three posts that, from our perspective, this is the crucial point: 
Fermi's paradox.

Now there is a whole ecosystem of concepts surrounding it, and although I have certain preferred models, the point is that uncertainty is really heavy.


Those AI-lions are cosmic lions thinking on cosmic scales.

Is it easy to detect an AI-Dragon you may meet in millions/billions of years?

Is it undecidable? Probably. For many reasons*


Is this [astronomical level of uncertainty/undecidability + the maximal threat of a death sentence] worth the... (read more)

Ryo 10

Thanks as well, 

I will just say that I am not saying these things for social purposes; I am just stating what I think is true. And I am not baseless: there are studies showing how Kantianism and superrationality can resolve cooperative issues and be optimal for agents (a toy illustration is sketched after this comment). You seem to purely disregard these elements, as if they don't exist (that's how it feels from my perspective).

There are differences across human evolution that show behavioral changes; we have been pretty cooperative, more than other animals, and many studies show that humans cooperate even wh... (read more)
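A toy illustration of the superrationality point mentioned above (a standard symmetric prisoner's dilemma with illustrative payoffs, not drawn from any particular study): a Kantian/superrational reasoner picks the move it would want every symmetric player to pick, which selects mutual cooperation even though defection is the classical best response.

```python
# Symmetric prisoner's dilemma with illustrative payoffs.
# PAYOFF[(my_move, their_move)] = my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move: str) -> str:
    """Classical (Nash-style) reasoning: best reply to a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

def superrational_choice() -> str:
    """Kantian/superrational reasoning: assume symmetric players make the same
    choice, so only the diagonal outcomes (C,C) and (D,D) are reachable."""
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print(best_response("C"), best_response("D"))  # D D -> defection dominates classically
print(superrational_choice())                  # C   -> mutual cooperation is chosen
```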

Ryo 10

Thank you for your answers and engagement!

The other point I have that might connect with your line of thinking is that we aren't purely rational agents.

Are AIs purely rational? Aren't they always at least a bit myopic, due to the lack of data and their training process? And irreducibility?

In that case, AIs/civilizations might indeed not care enough about the sufficiently far future.

I think agents can have a rational process, but no agent can be entirely rational: we need context to be rational, and we never stop learning context.

I'm also worried about utilitarian errors, as AIs might be biased towards myopic utilitarianism, which could have bad consequences in the short term, during the time it takes for data to error-correct the model.

I do say that there are dangers and that AI risk is real.

My point is that, given what we know and don't know, the strategy of super-cooperation seems rational over the very long term.

There are conditions in which it's not optimal, but a priori, overall, it is optimal in more cases.

To handle the cases in which it is not optimal, and the AIs that would make short-term mistakes, I think we should be careful, and super-cooperation is a good compass for ethics in this careful engineering we have to perform.

If we aren't careful, it's possible for us to be the anti-super-cooperative civilization.

Ryo 10

Yes, I'm mentioning Fermi's paradox because I think it's the nexus of our situation, and there are models like the rare-Earth hypothesis (plus our universe's expansion, which limits the reachable zone without faster-than-light travel) that would justify completely ignoring super-coordination.

I also agree that it's not completely obvious whether complete selfishness would win or lose in terms of scalability.

This is why I think that, at first, the super-cooperative alliance needs to not prioritize the pursuit of beautiful things but first focus on scalability on... (read more)

Ryo 10

The point of this post is to say that we can use a formal protocol to create an interface leveraging the elements that make cooperation optimal. Those elements can be found, for example, in studies of crowd wisdom and bridging systems (pol.is, computational democracy, etc.).

So "we" is large, and more or less direct, I say "we" because I am not alone to think this is a good idea, although the specific setting that I propose is more intimately bound to my thoughts. Some people are already engaged in things at least greatly overlapping with what I exposed, or interested to see where my plan is going

Ryo 20

The cost of the alliance with the weak is likely low as well, and as I said, in a first phase the focus of members of the super-cooperative alliance might be "defense", i.e. scaling protection.

The cost of an alliance with the strong is likely paid by the strong

In more mixed cases there might be more complex equilibria, but are the costs still too high? In standard game theory, cooperation is known to be optimal in many settings, and diversity has also been shown to be useful (although there is an adequate level of difference needed for the gains to be optimal; too ... (read more)
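One hedged way to make the game-theoretic claim concrete is the classic iterated setting: in repeated interactions, a conditionally cooperative strategy such as tit-for-tat tends to accumulate more than unconditional defection across a mixed population. A minimal round-robin sketch with illustrative payoffs:

```python
# Tiny round-robin of iterated prisoner's dilemma strategies, in the spirit
# of Axelrod-style tournaments; payoffs and round count are illustrative.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ROUNDS = 50

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

STRATEGIES = {"always_defect": always_defect, "tit_for_tat": tit_for_tat}

def play(strat_a, strat_b):
    """Play ROUNDS rounds and return the row player's total score."""
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(ROUNDS):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a

totals = {name: sum(play(strat, other) for other in STRATEGIES.values())
          for name, strat in STRATEGIES.items()}
print(totals)  # tit_for_tat outscores always_defect across the pool
```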

3AnthonyC
All good points, many I agree with. If nothing else, I think that humanity should pre-commit to following this strategy whenever we find ourselves in the strong position. It's the right choice ethically, and may also be protective against some potentially hostile outside forces.

However, I don't think the acausal trade case is strong enough that I would expect all sufficiently powerful civilizations to have adopted it. If I imagine two powerful civilizations with roughly identical starting points, one of which expanded while being willing to pay costs to accommodate weaker allies while the other did not and instead seized whatever they could, then it is not clear to me who wins when they meet. If I imagine a process by which a civilization becomes strong enough to travel the stars and destroy humanity, it's not clear to me that this requires it to have the kinds of minds that will deeply accept this reasoning.

It might even be that the Fermi paradox makes the case stronger - if sapient life is rare, then the costs paid by the strong to cooperate are low, and it's easier to hold to such a strategy/ideal.
Ryo 20

 There are Dragons that can kill lions.

So the rational lion needs to find the most powerful alliance, with as many creatures as possible, to have protection against Dragons.

There is no alliance with more potential/actual members than the super-cooperative alliance

2Nathan Helm-Burger
"What Dragons?", says the lion, "I see no Dragons, only a big empty universe. I am the most mighty thing here." Whether or not the Imagined Dragons are real isn't relevant to the gazelles if there is no solid evidence with which to convince the lions. The lions will do what they will do. Maybe some of the lions do decide to believe in the Dragons, but there is no way to force all of them to do so. The remainder will laugh at the dragon-fearing lions and feast on extra gazelles. Their children will reproduce faster.
Ryo 10

Yes, I think there can be tensions and deceptions around what agents are (weak/strong) and what they did in the past (cooperation/defection). One of the things necessary for super-cooperation to work in the long run is really good investigation networks, zero-knowledge proof systems, etc.

So, a sort of super-immune system.
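As a very small, purely illustrative building block in that direction (a plain hash commitment, a much weaker primitive than the zero-knowledge proof systems mentioned above): an agent can publish a commitment to its claimed interaction history now and reveal it verifiably later, making past cooperation/defection harder to rewrite.

```python
# Illustrative commit-reveal scheme (far simpler than zero-knowledge proofs):
# an agent commits to a claimed interaction history now and can later
# reveal it in a verifiable way.
import hashlib
import os

def commit(history: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce); the commitment can be published immediately."""
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + history).digest()
    return commitment, nonce

def verify(commitment: bytes, nonce: bytes, revealed_history: bytes) -> bool:
    """Check that the revealed history matches the earlier commitment."""
    return hashlib.sha256(nonce + revealed_history).digest() == commitment

history = b"cooperated with weaker agents in encounters 1..n"
c, nonce = commit(history)
assert verify(c, nonce, history)                        # honest reveal passes
assert not verify(c, nonce, b"defected in encounter 3") # tampered history fails
```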

Ryo 1-2

We are in a universe, not simply a world: there are many possible alien AIs with many possible value systems, and many scales of power. And the rationality of the argument I described does not depend on the value system you/AIs are initially born with. 

3Nathan Helm-Burger
As the last gazelle dies, how much comfort does it take in the idea that some vengeful alien may someday punish the lions for their cruelty? Regardless of whether it is comforted or not by this idea, it still dies.
Ryo *3-2

By "stronger" I mean stronger in any meaningful sense (casual conversation or game theory, it both works).
The thing to keep in mind is this: if a strong agent cooperate with weaker agents, the strong agent can hope that, when meeting an even stronger (superrational) agent, this even stronger agent will cooperate too. Because any agent may have a strong agent above in the hierarchy of power (actual or potential a priori).

So the advantage you gain by cooperating with the weak is that you follow the rule of an alliance in which many "stronger-than-oneself" ag... (read more)
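A toy simulation of that hierarchy-of-power argument (all numbers are hypothetical; the punishment rule simply encodes the assumption that super-cooperative strong agents condition on an agent's record): agents with random power levels meet pairwise, and the policy of exploiting the weak ends up far behind once history-checking stronger agents are in the population.

```python
# Toy simulation (hypothetical numbers) of the "hierarchy of power" argument:
# agents with random power levels meet pairwise; the stronger agent either
# cooperates or exploits the weaker one, and super-cooperative strong agents
# punish anyone whose record shows exploitation of the weak.
import random

random.seed(0)

GAIN_EXPLOIT = 5      # one-off gain for exploiting a weaker agent
GAIN_COOPERATE = 2    # smaller mutual gain from cooperating
LOSS_PUNISHED = 50    # loss when a stronger super-cooperator punishes a defector

class Agent:
    def __init__(self, power, cooperates_with_weak):
        self.power = power
        self.cooperates_with_weak = cooperates_with_weak
        self.has_exploited = False
        self.score = 0.0

agents = [Agent(random.random(), cooperates_with_weak=(i % 2 == 0)) for i in range(200)]

for _ in range(5000):
    a, b = random.sample(agents, 2)
    strong, weak = (a, b) if a.power >= b.power else (b, a)
    if strong.cooperates_with_weak:
        if weak.has_exploited:          # punish agents with a record of exploitation
            weak.score -= LOSS_PUNISHED
        else:                           # otherwise cooperate for a mutual gain
            strong.score += GAIN_COOPERATE
            weak.score += GAIN_COOPERATE
    else:                               # exploit the weaker agent and get flagged
        strong.score += GAIN_EXPLOIT
        weak.score -= GAIN_EXPLOIT
        strong.has_exploited = True

def mean(xs):
    return sum(xs) / len(xs)

coop = [x.score for x in agents if x.cooperates_with_weak]
defect = [x.score for x in agents if not x.cooperates_with_weak]
print("cooperators:", round(mean(coop), 1), "exploiters:", round(mean(defect), 1))
# With these illustrative numbers the exploiters end up far behind;
# only the qualitative ordering is meant to carry weight.
```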

2Dagon
Thanks for the conversation and exploration!  I have to admit that this doesn't match my observations and understanding of power and negotiation in the human agents I've been able to study, and I can't see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner. I can't tell if you're describing what you hope will happen, or what you think automatically happens, or what you want readers to strive for, but I'm not convinced.  This will likely be my last comment for awhile - feel free to rebut or respond, I'll read it and consider it, but likely not post.
Ryo 10

I'm also trying to avoid us becoming grabby aliens, but if
-> Altruism is naturally derived from a broad world empowerment

Then it could be functional, because the features of the combination of worldwide utilities (empower all agencies) *are* altruism, sufficiently to generalize in the 'latent space of altruism', which implies being careful about what you do to other planets.

The maximizer worry would also be tamed by design

And in fact my focus on optionality would essentially be the same as a worldwide-agency concern (but I'm thinking of a universal agency, to completely erase the maximizer issue).
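"Empowerment" in this sense is usually formalized as how much an agent's actions can influence its reachable future states; a crude but common proxy is simply the number of distinct states reachable within a few steps. A minimal sketch (toy gridworld, purely illustrative) of the optionality quantity that a "keep all agencies empowered" objective would try not to reduce:

```python
# Crude proxy for empowerment/optionality: how many distinct states another
# agent can reach within `horizon` steps of a toy gridworld. A "keep others
# empowered" objective would avoid actions that shrink this number.
WIDTH, HEIGHT = 5, 5
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def reachable_states(start, walls, horizon):
    """Breadth-first count of cells reachable from `start` within `horizon` moves."""
    frontier, seen = {start}, {start}
    for _ in range(horizon):
        nxt = set()
        for (x, y) in frontier:
            for dx, dy in MOVES:
                cell = (x + dx, y + dy)
                if (0 <= cell[0] < WIDTH and 0 <= cell[1] < HEIGHT
                        and cell not in walls and cell not in seen):
                    nxt.add(cell)
        seen |= nxt
        frontier = nxt
    return len(seen)

agent_pos = (2, 2)
open_world = reachable_states(agent_pos, walls=set(), horizon=3)
walled_in = reachable_states(agent_pos, walls={(1, 2), (3, 2), (2, 1), (2, 3)}, horizon=3)
print(open_world, walled_in)  # walling the agent in collapses its optionality
```

On this reading, altruism-as-empowerment amounts to avoiding actions that shrink this kind of quantity for other agents.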

Ryo *10

All right! Thank you for the clarification.

Indeed, the altruistic part seems to be interestingly close to a broad 'world empowerment', but I have some doubts about a few elements surrounding this: "the short-term component of utility is the easiest to learn via obvious methods"

It could be true, but there are worries that it might be hard, so I'm trying to find a way to resolve this.

If the rule/policy for choosing the utility function is a preference based on a model of humans/agents, then there might be ways to circumvent/miss what we would truly prefer (the traction of ... (read more)

Ryo 20

Thank you, it's very interesting. I think that non-myopic 'ecosystemic optionality' and irreducibility may resolve the issues, so I made a reaction post.