Jellychip seems like a necessary tutorial game. I sense comedy in the fact that everyone's allowed to keep secrets and will intuitively try to do something with secrecy despite that being totally wrongheaded. Like, the only real difficulty of the game is reaching the decision to throw away your secrecy.
Escaping the island is the best outcome for you. Surviving is the second best outcome. Dying is the worst outcome.
You don't mention how good or bad they are relative to each other, though :) An agent can't make decisions under uncertainty without knowing that.
I usually try to avoid having to explain this to players by either making it a score game or making the outcomes binary. But the draw towards having more than two outcomes is enticing. I guess in a roleplaying scenario, the question of just how good each ending is for your character is something players would like to decide for themselves. And as long as people are buying into the theme well enough, it doesn't need to be made explicit; in fact, not making it explicit makes it clearer that player utilities aren't comparable, which makes it easier for people to get into the cohabitive mindset.
So now I'm imagining a game where different factions have completely different outcomes. None of them are conquest, nor death. They're all weird stuff like "found my mother's secret garden" or "fulfilled a promise to a dead friend" or "experienced flight".
the hook
I generally think of hookness as "oh, this game tests a skill that I really want to have, and I feel myself getting better at it as I engage with the game, so I'll deepen my engagement".
There's another component of it that I'm having difficulty with, which is "I feel like I will not be rejected if I ask friends to play this with me." (Well, I think I could get anyone to play it once; the second time is the difficult one.) I see this quality in very few board games, and to get there you need to be better than the best board games out there, because you're competing with them, so that's becoming very difficult. But since cohabitive games rule, that should be possible for us.
And on that, I glimpsed something recently that I haven't quite unpacked. There's a certain something about the way Efka talks about Arcs here ... he admitted that it wasn't necessarily all fun. It was an ordeal. And just visually, the game looks like a serious undertaking. Something you'd look brave for sitting in front of. It also looks kind of fascinating. Like it would draw people in. He presents it with the same kind of energy as one would present the findings of a major government conspiracy investigation, or the melting of the clathrates. It does not matter whether you want to play this game, you have to, there's no decision to be made as to whether to play it or not, it's here, it fills the room.
And we really could bring an energy like that, because I think there are some really grim findings along the path to cohabitive enlightenment. But I'm wary of leaning into that, because I think cohabitive enlightenment is also the true name of peace. Arcs is apparently controversial. I do not want cohabitive games to be controversial.
(Plus a certain degree of mathematician crankery: his page on Google Image Search, and how it disproves AI.)
I'm starting to wonder if a lot/all of the people who are very cynical about the feasibility of ASI have some crank belief or other like that. Plenty of people have a private religion, for instance. And sometimes that religion informs their decisions, but they never tell anyone the real reasons underlying those decisions, because they know they could never justify them. They instead say a load of other stuff they made up to support the decisions, which never quite adds up to a coherent position because they're leaving something load-bearing out.
I don't think the "intelligence consistently leads to self-annihilation" hypothesis is possible: at least a few times, intelligence would instead amount to robust self-preservation.
Well... I guess I think it boils down to the dark forest hypothesis. The question is whether your volume of space is likely to contain a certain number of berserkers, and the number wouldn't have to be large for them to suppress the whole thing.
I've always felt the logic of berserker extortion doesn't work, but occasionally you'd get a species that just earnestly wants the forest to be dark and isn't very troubled by their own extinction, no extortion logic required. This would be extremely rare, but the question is, how rare.
Light-speed migrations with no borders mean homogeneous ecosystems, which can be very constrained things.
In our ecosystems, we get pockets of experimentation. There are whole islands where the birds were allowed to be impractical aesthetes (Indonesia) or flightless blobs (New Zealand). In the field-animal world, islands don't exist; pockets of experimentation like this might not occur anywhere in the observable universe.
If general intelligence for field-animals costs a lot and has no immediate advantages (consistently takes, say, a thousand years of ornament status before it becomes profitable), then it wouldn't get to arise. Could that be the case?
We could back-define "ploitation" as "getting shapley-paid".
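For reference, "getting shapley-paid" would mean receiving your Shapley value of the cooperative surplus; the standard (not thread-specific) formula averages your marginal contribution over every order in which the group could have assembled:

$$\varphi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$

where N is the set of players and v(S) is what coalition S could produce on its own.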
Yeah. But if you give up on reasoning about/approximating solomonoff, then where do you get your priors? Do you have a better approach?
Buried somewhere in most contemporary bayesians' epistemology is the solomonoff prior (the prior that holds that the most likely observations are those that have short generating machine encodings). Do we have a standard symbol for the solomonoff prior? Claude suggests that m is the most common, but M is more often used as a distribution function, or perhaps K, for Kolmogorov? (which I like because it can also be thought to stand for "knowledgebase", although really it doesn't represent knowledge, it pretty much represents something prior to knowledge)
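For reference, under one common convention (e.g. Li and Vitányi), the discrete universal prior and its relation to Kolmogorov complexity are:

$$m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}, \qquad -\log_2 m(x) \;=\; K(x) + O(1)$$

where U is a universal prefix machine, ℓ(p) is the length of program p, and K is prefix Kolmogorov complexity; M usually names the analogous semimeasure over sequences, summing over programs whose output merely starts with x. The second identity is the coding theorem, which is why "short generating machine encodings" and "low Kolmogorov complexity" come to the same thing.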
I'd just define exploitation to be precisely the opposite of shapley bargaining: situations where a person is not being compensated in proportion to their bargaining power.
This definition encompasses any situation where a person has grievances and it makes sense for them to complain about them and take a stand, or where striking could reasonably be expected to lead to a stable bargaining equilibrium with higher net utility (not all strikes fall into this category).
This definition also doesn't fully capture the common sense meaning of exploitation, but I don't think a useful concept can.
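A minimal sketch of that definition in code, assuming we operationalize "compensated in proportion to bargaining power" as "paid at least your Shapley value of the surplus"; the shop, the workers, and the wage numbers are all invented for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Made-up example: any two workers can run the shop for 90/day, all three for 120.
def shop_value(coalition):
    return {0: 0, 1: 0, 2: 90, 3: 120}[len(coalition)]

shapley = shapley_values(["ana", "bo", "cy"], shop_value)  # 40.0 each, by symmetry
wages = {"ana": 55, "bo": 40, "cy": 25}                    # hypothetical actual pay
for name, wage in wages.items():
    gap = shapley[name] - wage
    verdict = "exploited" if gap > 0 else "shapley-paid (or better)"
    print(f"{name}: wage {wage}, shapley share {shapley[name]:.1f} -> {verdict}")
```

With symmetric workers the Shapley shares are just the even split, so "exploitation" here picks out whoever is paid below it; asymmetric value functions give less obvious verdicts.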
A moral code is invented[1] by a group of people to benefit the group as a whole; it sometimes demands sacrifice from individuals, but a good one usually has the quality that, at some point in a person's past, they would have voluntarily signed on with it. Redistribution is a good example. If you have a concave utility function, and you don't know where you'll end up in life, you should be willing to sign a pledge to later share your resources with less fortunate people who've also signed the pledge, just in case you become one of the less fortunate. The downside of not being covered in that case is much larger than the upside of not having to share in the other case.
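A toy version of that argument, assuming log utility (any concave utility works the same way) and two equally likely life outcomes; the numbers are invented, only the direction of the comparison matters:

```python
import math

# Two equally likely lives: "fortunate" earns 100, "unfortunate" earns 10.
outcomes = [100.0, 10.0]
u = math.log  # a concave utility function

# No pledge: you keep whatever you end up with.
ev_no_pledge = sum(u(x) for x in outcomes) / len(outcomes)

# Pledge: signatories pool their resources and split evenly, however it turns out.
pooled = sum(outcomes) / len(outcomes)
ev_pledge = u(pooled)

print(f"E[u] without the pledge: {ev_no_pledge:.3f}")  # ~3.454
print(f"E[u] with the pledge:    {ev_pledge:.3f}")     # ~4.007, so you'd sign
```

This is just Jensen's inequality: for concave u, E[u(X)] <= u(E[X]), so the pre-commitment to share always looks good from behind the veil.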
For convenience, we could decide to make the pledge mandatory and the coverage universal (i.e., taxes and welfare), since there aren't a lot of humans who would decline that deal in good faith. (Perhaps some humans are genuinely convex egoists and wouldn't sign it, but we outnumber them, and accommodating them would be inconvenient, so we ignore them.)
If we're pure of heart, we could make the pledge acausal and implicit and adhere to it without any enforcement mechanisms, and I think that's what morality usually is or should be in the common sense.
But anyway, it sometimes seems to me that you often advocate a morality regarding AI relations that doesn't benefit anyone who currently exists, nor the coalition that you are a part of. This seems like a mistake. Or worse.
I wonder if it comes from a place of concern that... if we had public consensus that humans would prefer to retain full control over the lightcone, then we'd end up having stupid and unnecessary conflicts with the AIs over that, while, if we pretend we're perfectly happy to share, relations will be better? You may feel that as long as we survive and get a piece, it's not worth fighting for a larger piece? The damages from war would be so bad for both sides that we'd prefer to just give them most of the lightcone now?
And I think stupid wars aren't possible under ASI-level information technology. If we had the capacity to share information, find out who'd win a war, and skip to the surrender deal, doing so would always have higher EV for both sides than actually fighting. The reason wars aren't skipped that way today is that we still lack the capacity to simultaneously and mutually exchange proofs of force capacity, but we're getting closer to having that every day. Generally, in that era, coexisting under confessed value differences will be pretty easy. Honestly, I feel like it already ought to be easy, for humans, if we'd get serious about it.
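A toy version of the skip-to-the-deal argument, with numbers I'm adding purely for illustration: suppose both sides, having exchanged proofs of force capacity, agree that side A would win with probability p, the contested stake is worth 1, and fighting burns c_a and c_b of value for each side. Splitting the stake at the war odds then beats fighting for both:

```python
# Illustrative parameters (made up): agreed win probability for A, and each side's war costs.
p, c_a, c_b = 0.7, 0.15, 0.15

war_a, war_b = p - c_a, (1 - p) - c_b   # expected payoffs from actually fighting
deal_a, deal_b = p, 1 - p               # surrender deal: split the stake at the war odds

assert deal_a > war_a and deal_b > war_b  # both strictly prefer the deal
print(f"A: war EV {war_a:.2f} vs deal {deal_a:.2f};  B: war EV {war_b:.2f} vs deal {deal_b:.2f}")
```

Any split inside the interval (p - c_a, p + c_b) works, which is why such a deal exists whenever fighting has any cost at all; the hard part, as noted above, is credibly agreeing on p.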
Though, as Singer says, much of morality is invented only in the same sense as mathematics is invented, being so non-arbitrary that it seems to have a kind of external observer-independent existence and fairly universal truths, which powerful AIs are likely to also discover. But the moralities in that class are much weaker (I don't think Singer fully recognises the extent of this), and I don't believe they have anything to say about this issue.