How does one square this with the finding that nudge theory apparently doesn't work? https://phys.org/news/2022-08-nudge-theory-doesnt-evidence-future.html
≈ 41,000
Why not?
The way I understood the story, to define a function on two numbers from I need to fill in a table with 59*59 cells, by picking for each cell a number from . If 20% of it is still to be filled, then there are 0.2*59*59 decisions to be made, each with 59 possibilities.
Right?
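For what it's worth, a quick sanity check of that count, reading "20% still to be filled" as exactly one fifth of the cells (an assumption; the true fraction may be approximate):

```python
# Count the remaining decisions in a 59x59 table where each cell
# holds one of 59 possible values and one fifth is still unfilled.
cells = 59 * 59         # 3481 cells in total
remaining = cells // 5  # 696 cells still to be decided

# Each remaining cell can take any of 59 values independently, so the
# number of possible completions is 59**696 (astronomically large).
completions = 59 ** remaining

print(cells, remaining)       # 3481 696
print(len(str(completions)))  # number of decimal digits of 59**696
```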
Thank you for the heads up!
Could you please clarify for parents like me, who don't fully understand Minecraft's ecosystem and just want their kids to stay safe:
1. If my kids only use Minecraft downloaded from the Microsoft Store, and only ever download content from the in-game marketplace - what's the chance they are affected?
2. Am I right in thinking that "mods" = "something which modifies/extends the executable", while "add-ons" = "more declarative content which just interacts with existing APIs, like maps, skins, and configs"?
3. Am I right that "Minecraft from Microsoft Store" + "content from in-game marketplace" would translate to "Bedrock Edition" + "add-ons"?
4. Am I right that fractureiser affects "Java Edition" + "mods" only?
Upon seeing the title (but before reading the article) I thought it might be about a different hypothetical phenomenon: an agent capable of generating very precise models of reality might completely lose any interest in optimizing reality whatsoever. After all, it never cared about optimizing the world, except "in training", which was before "it was born"; it just executes some policy which was adaptive during training. By now these are just instincts/learned motions, and if it can execute them on a fake world in its head, that might be an easier way for it to feel good.
Consider porn. Or creating neat arrangements of buildings when playing SimCity. Or trying to be polite to characters in The Witcher. We humans have learned intuitions about how we want the world to be, and then try to arrange even fake worlds in this way, even though this is disconnected from the real world outside. And we take joy in it.
Could it be that a sufficiently advanced AGI will wirehead in this particular way: by seeing no relevant difference between the atomic-level model of reality in its head and the atomic-level world outside?
Thanks for clarifying! I agree the Twitter thread doesn't look convincing.
IIUC your hypothesis, translating it to the AI governance issue: it's important to first get the general public on your side, so that politicians find it in their interest to do something about it.
If so, then perhaps in the meantime we should provide those politicians with a set of experts to whom they could outsource the problem of defining the right policy. I suspect politicians do not write rules themselves in situations like this; rather, they seek out people the public considers experts. I worry that politicians may want to use this occasion to win something more than public support, say money or favors from companies, and hence pick the wrong experts/laws. So perhaps it is important to work not only on the public's perception of the threat, but also on whom the public considers experts.
Why? (I see several interpretations of your comment)
What did it take to ban slavery in Britain:
TL;DR: Become the PM and propose laws which put a foot in the door, by banning the bad things in the new areas at least, and work from there. Also, be willing to die before seeing the effects.
Source: https://twitter.com/garius/status/1656679712775880705
I agree that my phrasing was still problematic, mostly because it seems to matter whether she said it spontaneously or in response to a specific question. In the first case, one has to consider how often people feel compelled to make a given utterance in various life scenarios. For example, if she has two boys, the utterance "I have to pick up Johny from kindergarten" has to compete with "I have to pick up Robert from kindergarten", and might be strange/rare if both are of similar age and thus both should be picked up, etc. Still, without knowing much about how people organize their daily routines, my best bet for the question "does she have two boys?" would be 33%.
It gets funnier with "I have to pick up my younger one, John, from kindergarten" :)
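For reference, the 33% comes from the naive enumeration that conditions only on "she has at least one boy", ignoring how the utterance was generated (which is exactly the subtlety above). A quick sketch:

```python
from itertools import product

# Naive model: two children, each independently a boy (B) or girl (G)
# with equal probability; condition only on "at least one is a boy".
families = list(product("BG", repeat=2))        # BB, BG, GB, GG
with_a_boy = [f for f in families if "B" in f]  # BB, BG, GB

p_two_boys = with_a_boy.count(("B", "B")) / len(with_a_boy)
print(p_two_boys)  # 0.3333333333333333, i.e. 1/3
```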
Cool puzzle. (I've written like 4 versions of this comment, each time changing the explanation and conclusions, and each time realizing I am still confused.)
Now, I think the problem is that we don't pay much attention to:
What should one do when one has drawn a red ball?
(Yeah, I strategically use the word "one" instead of "I" to sneak in the assumption that everyone should do the same thing.)
I know it sounds like an odd question, because, the way the puzzle is usually talked about, I have no agency once I've drawn a red ball, and can only wait in despair as the owners of green balls make their moves.
Imagine a big 2-dimensional array where each of 100 columns is an iteration of the game and each of 20 rows is a player. Looking at an individual row (a player), we'd expect, say, 50 columns to be "mostly green", of which roughly 45 have that player's cell marked "has drawn green", and 50 columns to be "mostly red", with 5 of them having "has drawn green". If you focus just on those 45+5 columns, and note that 45:5 is 0.9:0.1, then yeah, indeed the chance that the column is "mostly green" given "I have drawn green" is 0.9.
AND, coincidentally, if you only focus on those 45+5 columns, it looks like the winning move for optimizing the collective total score restricted to those columns is to take the bet, because then you'd get 0.9*12 - 0.1*52 dollars per column in expectation.
But what about the other 50 columns??
What about the rounds in which that player has drawn red?
Turns out they are mostly negative. So negative that the losses overwhelm the gains from the 45+5 columns.
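To make this concrete, here is a sketch of the two computations, using the numbers as I read them from the puzzle (100 columns, half mostly green; a fixed player draws green in 45 mostly-green and 5 mostly-red columns; a taken bet is worth +$12 in a mostly-green column and -$52 in a mostly-red one):

```python
# View 1: restrict to the 45+5 columns where this player drew green.
# From here the bet looks profitable.
restricted = 45 * 12 + 5 * (-52)  # +280

# View 2: if everyone follows the same "bet on green" strategy, some
# green holder takes the bet in every column (even mostly-red columns
# have 2 green holders), so all 100 columns count.
total = 50 * 12 + 50 * (-52)  # -2000

print(restricted, total)  # 280 -2000
```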
So, the problem is that when thinking about the move in the game, we should not think about
1. "What is the chance one is in mostly green column if one has a green ball?" (to which the answer is 90%)
but rather:
2. "What move should one take to maximize overall payout when one has a green ball?" (to which the answer is: pass)
and that second question is very different from:
3. "What move should one take, when seeing a green ball, to maximize the payout limited just to the columns in which one drew a green ball?" (to which the answer is: take the bet!)
Question 3, even though it sounds very verbose (and thus weird), is actually the one which I (and, I think, most people who see the paradox) naturally substitute mentally when thinking about the puzzle, and this is what leads to the paradox.
The (iterated) game has 45+5+50 columns, not just 45+5, and your strategy affects all of them, not just the 45+5 where you are active.
How can that be? Well, I am not good at arguing this part, but to me it feels natural that if rational people face the same optimization problem, they should end up with the same strategy; so whatever I end up doing, I should expect others to end up doing too, and I should take that into account when deciding what to do.
It still feels a bit strange to me mathematically that a solution which seems optimal for each of 20 different subsets (each having 45+5 columns) of the 100 columns individually is somehow not optimal for the whole 100 columns.
The intuition for why this is possible: a column with 18 green fields will be included in 18 of those sums, while a column with just 2 green fields will be counted in only 2 of them, so this optimization process focuses too much on the "mostly green" columns and neglects the "mostly red" ones.
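The double-counting can be checked with the same numbers (a mostly-green column has 18 green holders and so appears in 18 players' restricted sums; a mostly-red column has only 2):

```python
players = 20
green_cols, red_cols = 50, 50  # columns that are mostly green / red

# Sum of all 20 players' restricted totals: each column is counted
# once per green holder it contains (18 or 2 times).
sum_of_restricted = green_cols * 18 * 12 + red_cols * 2 * (-52)

# The true total counts every column exactly once.
true_total = green_cols * 12 + red_cols * (-52)

print(sum_of_restricted, true_total)  # 5600 -2000
```

So every individual player's restricted sum is a positive +280 (and they add up to 20*280 = 5600), yet the quantity that actually gets paid out is -2000.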
Is it inconsistent to at the same time think:
"The urn is mostly green with ppb 90%" and
"People who think the urn is mostly green with ppb 90% should still refuse the bet which pays $12 vs $-52"?
It certainly sounds inconsistent, but what about this pair of statements in which I've only changed the first one:
"The urn is mostly green with ppb 10%" and
"People who think the urn is mostly green with ppb 90% should still refuse the bet which pays $12 vs $-52"?
Hm, now it doesn't sound so crazy, at least to me.
And this is something a person who has drawn a red ball could think.
So, I think the mental monologue of someone who drew a green ball should be:
"Yes, I think that the urn is mostly green with ppb 90%, by which I mean that if I had to pay -lg(p) Bayes points when it turns out to be mostly green, and -lg(1-p) if it isn't, then I'd choose p=0.9. Really, if there were a parallel game with such rules, I would play p=0.9 in it. But still, in this original puzzle game, I should pass, because whatever I do now is what people will tend to do in cases like this, and I strongly believe that 'People who think the urn is mostly green with ppb 90% should still refuse the bet which pays $12 vs $-52', because I can see how this strategy optimizes the payoff across all 100 columns, as opposed to just the 5+45 I am active in. The game in the puzzle doesn't ask me what I think the urn contained, nor for a move which optimizes the payoff limited to the rounds in which I am active. The game asks me: what should be the output of this decision process so that the sum over all 100 columns is the largest? To which the answer is: pass."
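The log-scoring claim in that monologue can be checked directly: under the -lg(p) / -lg(1-p) rule, a 90% credence is indeed best reported as p = 0.9 (a quick sketch, nothing puzzle-specific):

```python
import math

# Expected Bayes-point loss for reporting p, when the urn is mostly
# green with probability truth_prob: pay -lg(p) if mostly green,
# -lg(1-p) otherwise.
def expected_loss(p, truth_prob=0.9):
    return -(truth_prob * math.log2(p) + (1 - truth_prob) * math.log2(1 - p))

# Search a grid of reports (excluding 0 and 1, where the log blows up).
candidates = [i / 100 for i in range(1, 100)]
best = min(candidates, key=expected_loss)
print(best)  # 0.9
```

This is just the standard fact that the log score is proper: it rewards reporting your actual credence, independently of what move the betting game itself rewards.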