Answer by polytope

I assume you're familiar with the case of the parallel postulate in classical geometry being independent of the other axioms? That independence corresponds to the existence of spherical/hyperbolic geometries (i.e. actual models in which the axiom is false) versus normal flat Euclidean geometry (i.e. an actual model in which it is true).

To me, this is a clear example of there being no such thing as an "objective" truth about the validity of the parallel postulate - you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it's just that those theories are applicable to different models, and those models are each useful in different situations, so the only thing it comes down to is which models you happen to want to use or explore or prove things about on a given day.

Similarly for the huge variety of different algebraic or topological structures (groups, ordered fields, manifolds, etc) - it is extremely common to have statements that are independent of the axioms, e.g. in a ring it is independent of the axioms whether multiplication is commutative or not. And both choices are valid. We have commutative rings, and we have noncommutative rings, and both are self-consistent mathematical structures that one might wish to study.

Loosely analogous to how one can write a compiler/interpreter for a programming language within other programming languages, some theories can easily simulate other theories. Set theories are particularly good and convenient for simulating other theories, but one can also simulate set theories within other, seemingly more "primitive" theories (e.g. simulating them in theories of basic arithmetic via Gödel numbering). This might be analogous to, e.g., someone writing a C compiler in Brainfuck. Just like how it's meaningless to talk about whether a programming language, or a given sub-version or feature extension of a programming language, is more "objectively true" than another, there are many who take the position that the same holds for different set theories.
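
To make the "simulation" idea a bit more concrete, here's a tiny, purely illustrative sketch (my own toy example, not anything from an actual formalization): encoding finite sequences of natural numbers as single numbers via prime exponents, so that facts about sequences become facts of arithmetic. Real Gödel numbering of a set theory is of course far more involved.

```python
def first_primes(n):
    """Return the first n primes by simple trial division (fine at toy sizes)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def encode(seq):
    """Encode a finite sequence of naturals as one number: prod_i p_i^(seq[i] + 1)."""
    code = 1
    for p, s in zip(first_primes(len(seq)), seq):
        code *= p ** (s + 1)
    return code

def decode(code):
    """Recover the sequence by reading off the exponents of consecutive primes."""
    seq, candidate = [], 2
    while code > 1:
        if all(candidate % p != 0 for p in range(2, candidate)):  # candidate is prime
            exponent = 0
            while code % candidate == 0:
                code //= candidate
                exponent += 1
            seq.append(exponent - 1)
        candidate += 1
    return seq

assert decode(encode([3, 0, 2])) == [3, 0, 2]
```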

When you say you're "leaning towards a view that maintains objective mathematical truth" with respect to certain axioms, is there some fundamental principle by which you're discriminating the axioms that you want to assign objective truth from axioms like the parallel postulate or the commutativity of rings, which obviously have no objective truth? Or do you think that even in these latter cases there is still an objective truth?

polytope

> This thread analyzes what is going on under the hood with the chess transformer. It is a stronger player than the Stockfish version it was distilling, at the cost of more compute but only by a fixed multiplier, it remains O(1).

I found this claim suspect, because this basically is not a thing that happens in board games. In complex strategy board games like Chess, practical amounts of search on top of a good prior policy and/or eval function (which Stockfish has) almost always outperform any pure forward-pass policy model that doesn't do explicit search, even when that pure policy model is quite large and extensively trained. With any reasonable settings, it's very unlikely that distilling Stockfish into a pure policy model produces a better player than Stockfish.

I skimmed the paper (https://arxiv.org/pdf/2402.04494) and had trouble finding such a claim, and indeed it seems the original poster of that thread later retracted it, attributing it to their own mistake in interpreting the paper's data table. The post where they acknowledge the mistake is much less prominent than the original post; link here: https://x.com/sytelus/status/1848239379753717874 . The chess transformer remains quite a bit weaker than the Stockfish it tries to predict/imitate.

Do you think a vision transformer trained on 2-dimensional images of the board state would also come up with a bag of heuristics, or would it naturally learn a translation-invariant algorithm taking advantage of the uniform way the architecture could process the board? (Let's say the input is an 8x8-pixel image divided into 64 one-pixel patches, perfectly aligned with the 64 board locations, to make it maximally "easy" for both the model and for interpretability work.)

And would it differ based on whether one used an explicit 2D positional embedding, or a learned embedding, or a 1D positional embedding that ordered the patches from top to bottom, right to left?

I know that of course giving a vision transformer the actual board state like this short-circuits the cool part where OthelloGPT tries to learn its own representation of the board. But I'm wondering if even in this supposedly easy setting it would still end up imperfect, with a tiny error rate and a bag-of-heuristics-like way of computing legal moves.

And brainstorming a bit here: a slightly more interesting setting that might not short-circuit the cool part would be if the input to the vision transformer were a 3D "video" of the moves on the board. E.g. input[t][x][y] is 1 if a move was made at (x,y) on turn t, and 0 otherwise. Self-attention would presumably be causally masked on the t dimension but not on x and y. Would we get a bag of heuristics here in the computation of the board state and the legal moves from that state?
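
To be concrete about the "video" encoding I have in mind, here's a minimal sketch (array shapes, function names, and the example moves are just my own illustration):

```python
import numpy as np

def moves_to_video(moves, board_size=8):
    """Encode a game as a (T, board_size, board_size) binary "video":
    frame t is all zeros except a single 1 at the square played on turn t.
    `moves` is a list of (x, y) coordinates in play order."""
    video = np.zeros((len(moves), board_size, board_size), dtype=np.float32)
    for t, (x, y) in enumerate(moves):
        video[t, x, y] = 1.0
    return video

def time_causal_mask(num_turns, squares_per_turn=64):
    """Attention mask over the T*64 flattened tokens: causal across turns,
    but unmasked among the 64 spatial tokens within the same and earlier
    turns. True = allowed to attend."""
    n = num_turns * squares_per_turn
    turn = np.arange(n) // squares_per_turn
    return turn[None, :] <= turn[:, None]

video = moves_to_video([(2, 3), (2, 2), (3, 2)])   # three made-up moves
mask = time_causal_mask(num_turns=video.shape[0])
print(video.shape, mask.shape)                      # (3, 8, 8) (192, 192)
```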

polytope

(KataGo dev here; I also provided a bit of feedback to the authors on an earlier draft.)

@gwern - The "atari" attack is still a cyclic group attack, and the ViT attack is also still a cyclic group attack. I suspect it's not so meaningful to put much weight on the particular variant that one specific adversary happens to converge to. 

This is because the space of "kinds" of different cyclic-group fighting situations is combinatorially large, and it's somewhat arbitrary which local minimum the adversary ends up in, because it doesn't have much pressure to find more once it finds one that works. Even among just the things that are easily put into words without needing a diagram - how big is the cycle? Does the cyclic group have a big eye (>= 4 points, which behaves tactically distinctly) or a small eye (<= 3 points), or no eye? Is the eye a two-headed-dragon-style eye, or not? Does it have more loose connections or is it solid? Is the group inside locally dead/unsettled/alive? Is the cyclic group racing against an outside group for liberties, or only making eyes of its own, or both? How many liberties do all the various groups each have? Are there ko-liberties? Are there approach-liberties? Is there a cycle inside the cycle? etc.

This is the same as how, in Go, the space of different capturing-race situations in general is combinatorially large, with enough complexity that many situations are difficult even for pro players who have studied them for a lifetime.

The tricky bit here is that there seems to not be (enough) generalization between the exponentially large space of large group race situations in Go more broadly and the space of situations with cyclic groups. So whereas the situations in "normal Go" get decently explored by self-play, cyclic groups are rare in self-play so there isn't enough data to learn them well, leaving tons of flaws, even for some cases humans consider "simple". A reasonable mental model is that any particular adversary will probably find one or two of them somewhat arbitrarily, and then rapidly converge to exploit that, without discovering the numerous others.

The "gift" attack is distinct and very interesting. There isn't a horizon effect involved, it's just a straightforward 2-3 move blind spot of both the policy and value heads. Being only 2-3 moves this is why it also gets fixed more easily by search than cyclic groups. As for why it happens, as a bit of context I think these are true:

  • In 99.9...% of positions, the flavor of "superko" rule doesn't affect the value of a position or the correct move.
  • The particular shape used by the gift adversary and similar shapes do occur with reasonable frequency in real games without the superko rule being relevant (due to different order of moves), in which case the "gift shape" actually is harmless rather than being a problem.

I've taken a look at the raw neural net outputs, and it's also clear that the neural net has no idea that the superko rule matters - predictions don't vary as you change the superko rule in these positions. So my best guess is that the neural net perhaps "overgeneralizes" and doesn't easily learn that in this one specific shape, with this specific order of moves, the superko rule, which almost never matters, suddenly does matter and flips the result.

Apparently not a writeup (yet?), but there appears to be a Twitter post here from LC0 with a comparison plot of accuracy on tactics puzzles: https://x.com/LeelaChessZero/status/1757502430495859103?s=20

polytope

Yes, rather than resolving the surprise of "the exact sequence HHTHTTHTTH" by declaring that it shouldn't be part of the set of events, I would prefer to resolve it via something like:

  • It should be part of the set of events I'm allowed to consider just like any other subset of all 10-flip sequences. 
  • We do observe events (or outcomes that, if construed as singleton events, would be events) all the time that we would have predicted to be exceedingly improbable (while they may be improbable individually, a union of them may not be).
  • Observing some particular unlikely event like "the exact sequence HHTHTTHTTH occurs" should in fact raise my relative belief in any hypothesis by a large factor if that hypothesis would have uniquely predicted that to occur, as compared to others that would have made a far more non-specific prediction (up to a factor of at most 2^10, unless the other hypothesis considered that sequence to be unlikelier than uniform; see the small numerical sketch after this list).
  • Even if all this is true, I still do not and should not feel surprised in such a case, because I think surprise has more to do with the amount by which something shifts the beliefs I have that my brain intuits to be important for various reasons. It has little to do with the likelihood of events I observe, other than how they affect those beliefs. I didn't have any prior reason to assign any meaningful weight to hypotheses about the coin that would predict that exact sequence and no others, such that even after scaling them by a large factor, my overall beliefs about the coin and the distribution of likely future flips should remain very similar to before; therefore I feel little surprise.
  • By contrast, I might feel a little more surprise seeing "HHHHHHHHHH". And again the reason is not really the likelihood or unlikelihood of that sequence, and it also has little to do with which sequences I'm told I can define to be a mathematical event or not. Rather, I think it's closer to something like: "this coin is biased heads" or "this coin always flips heads" are competing hypotheses to "this coin is fair" that, while initially extremely unlikely, would not be outlandish to consider, and if true would affect my conception of the coin and predictions of its future flips. So this time the large relative boost comes closer to shifting my beliefs in a way that would impact how I think about the coin and make future predictions; therefore I feel more surprise.
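
As a small numerical illustration of the factor-of-2^10 point (the specific hypotheses and numbers here are purely illustrative):

```python
def sequence_likelihood(seq, p_heads):
    """P(seq) under independent flips with P(H) = p_heads."""
    prob = 1.0
    for flip in seq:
        prob *= p_heads if flip == "H" else (1.0 - p_heads)
    return prob

observed = "HHTHTTHTTH"
fair = sequence_likelihood(observed, 0.5)            # (1/2)^10

# A hypothesis that uniquely predicted exactly this sequence assigns it
# probability 1, so its Bayes factor against the fair coin is 2^10.
print(1.0 / fair)                                    # 1024.0
# But with a prior of ~0 on such an oddly specific hypothesis, multiplying
# it by 1024 still leaves the posterior negligible: little felt surprise.

# Contrast: "HHHHHHHHHH" versus a not-outlandish heads-biased hypothesis,
# which is a hypothesis I might actually care about.
all_heads = "HHHHHHHHHH"
biased = sequence_likelihood(all_heads, 0.9)
print(biased / sequence_likelihood(all_heads, 0.5))  # ~357
```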

polytope

Here's my intuition-driving example/derivation.

Fix a reference frame and suppose you are on a frictionless surface standing next to a heavy box with the same mass as you, and that you and the box always start at rest relative to one another. In every example, you will push the box leftward, adding 1 m/s of leftward velocity to the box and 1 m/s of rightward velocity to yourself.

Let's suppose we didn't know what "kinetic energy" is, but let's suppose such a concept exists, and that whatever it is, an object of your mass has 0 units of it when at rest, and it is a continuous monotonic function of the absolute value of that object's velocity. Let's also take as an assumption that when you perform a push like the above, you are always adding precisely 1 unit of this thing called "kinetic energy" to you and the box combined.

Okay so suppose the box and you are at rest and you perform this push, and start moving at 1m/s left and right, respectively. You and the box started with 0 units of kinetic energy, and you added 1 unit total. Since you and the box have the same absolute value of velocity, your energies are equal, so you each must have gotten 1/2 of a unit. Great, therefore we derive 1 m/s is 1/2 unit of kinetic energy.

Now suppose you and the box start out at a velocity of 1m/s rightward, so you have 1/2 unit of energy each, for a total of 1 unit. You perform the same push, bringing the total kinetic energy to 2 units. The box ends up at 0 m/s, so it has 0 units of energy now. You end up going 2m/s rightward, with all the energy. Great, therefore we derive 2 m/s is 2 units of kinetic energy.

Now suppose you and the box start out at a velocity of 2m/s rightward, so you have 2 units of energy each, for a total of 4 units. You perform the same push, bringing the total kinetic energy to 5 units. The box ends up at 1 m/s, so it has 1/2 unit of energy now, since we derived earlier that 1m/s is 1/2 unit of energy. You end up going 3m/s rightward. So you must have the other 4.5 units of energy. Therefore we derive 3 m/s is 4.5 units of kinetic energy.

We can continue this indefinitely, without running into any inconsistencies or contradictions. This "kinetic energy" thing so far seems to be a self-consistent concept given these assumptions! In general, we derive that an object of our mass moving at velocity v has a kinetic energy of 1/2 v^2 units.

And I hope this makes it clearer why kinetic energy has to behave quadratically. A quadratic function f is precisely the kind of function such that the quantity f(x+c) + f(x-c) - 2f(x) is constant with respect to x. It's the only kind of function that satisfies the property that a fixed "amount of push" of the propellant you are carrying away from you always adds the same total energy to the combined system of you + propellant.

And it also gives some intuition for why you end up with more energy when you fire the propellant while moving faster. When you pushed the box while initially at 0 m/s, your kinetic energy went from 0 units to 0.5 units (+0.5), but when you pushed the box while initially at 1 m/s, your kinetic energy went from 0.5 units to 2 units (+1.5), and when you pushed the box while initially at 2 m/s, your kinetic energy went from 2 units to 4.5 units (+2.5) and in all cases you only added 1 unit of energy yourself. Where does the extra energy come from? From slowing down the box's rightward motion, and/or from not speeding up the box to go leftward from rest.
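
Here's a tiny numerical check of the derivation above (same toy setup: equal masses, each push adds 1 m/s to each side, and the derived form f(v) = 1/2 v^2 per unit mass):

```python
def kinetic_energy(v):
    """The quadratic form derived above: 1/2 v^2 per unit of (equal) mass."""
    return 0.5 * v ** 2

def push(v_you, v_box, dv=1.0):
    """One push: you gain dv rightward, the box gains dv leftward."""
    return v_you + dv, v_box - dv

# Re-run the scenario starting from various shared rightward speeds; with
# f(v) = 1/2 v^2, every push adds exactly 1 unit of total kinetic energy.
for v_start in [0.0, 1.0, 2.0, 3.0]:
    v_you, v_box = push(v_start, v_start)
    added = kinetic_energy(v_you) + kinetic_energy(v_box) - 2 * kinetic_energy(v_start)
    print(f"start {v_start} m/s -> you {v_you}, box {v_box}, energy added: {added}")
    # "energy added" prints 1.0 in every case
```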

> lack of sufficient evidence. 

Perhaps more specifically, evidence that is independent of the person who is to be trusted or not. Presumably, when trusting someone else that something is true, often one does so because one believes the other person is honest and reliable enough that their word is sufficient evidence to then take some action. It's just that there isn't sufficient evidence without that person's word.

> I am also curious why the zero-shot transfer is so close to 0% but not 0%. Why do those agents differ so much, and what do the exploits for them look like?

The exploits for the other agents are pretty much the same exploit; they aren't really different. From what I can tell as an experienced Go player watching the adversary and human players use the exploit, the zero-shot transfer is not so high because the adversarial policy overfits, memorizing specific sequences that let it set up the cyclic pattern, and learns to do so in a relatively non-robust way.

All the current neural-net-based Go bots share the same massive misevaluations in the same final positions. Where they differ is that they may have arbitrarily different preferences among almost equally winning moves, so during the long period when the adversary is in a game-theoretically lost position, a different victim, all the while still never realizing any danger, may nonetheless just so happen to choose different moves. If you consider a strategy A that broadly minimizes the number of plausible ways a general unsuspecting victim might mess up your plan by accident, and a strategy B that leaves more total ways open but where those ways are not the ones the small set of victim networks you are trained to exploit would stumble into (because you've memorized their tendencies enough to know they won't), the adversary is incentivized more towards B than A.

This even happens after the adversary "should" win. Even after it finally reaches a game-theoretically winning position, it often blunders several times and plays moves that make the game game-theoretically lost again, before eventually finally winning. I.e. it seems overfit to the fact that the particular victim net is unlikely to take advantage of its mistakes, so it never learns that they are in fact mistakes. In zero-shot transfer against a different opponent, this may unnecessarily give the opponent, who shares the same weakness but may just so happen to play in different ways, chances to stumble on a refutation and win again - sometimes without the victim even realizing that it was a refutation of anything and that it was in trouble in the first place.

I've noticed human exploiters play very differently than that. Once they achieve a game-theoretically winning position, they almost always close all avenues for counterplay and stop giving the opponent chances that would work if the opponent were to suddenly become aware.

Prior to that point, when setting up the cycle from a game-theoretically lost position, most human players I've seen also play slightly differently. Most human players are far less good at reliably using the exploit, because they haven't practiced and memorized as many of the ways to get any particular bot not to accidentally interfere with them as they set it up. So the adversary does much better than them here. But as they learn to do better, they tend to do so in ways that I think transfer better (i.e. from observation, my feeling is they maintain a much stronger bias towards things like "strategy A" above).

(I'm the main KataGo dev/researcher)

Just some notes about KataGo - the degree to which KataGo has been trained to play well against weaker players is relatively minor. The only notable thing KataGo does is, in some self-play games, to give one side up to an 8x advantage in playouts over the other side, where each side knows this. (KataGo also initializes some games with handicap stones to make them in-distribution, and/or adjusts komi to make the game fair.) So the strong side learns to prefer positions that elicit a higher chance of mistakes by the weaker side, while the weak side learns to prefer simpler positions where shallower search doesn't hurt as much.
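
For concreteness, here's a rough sketch of that kind of asymmetric-playouts game setup (field names and numbers are illustrative only, not KataGo's actual configuration):

```python
import random

def sample_asymmetric_selfplay_game(base_playouts=600):
    """Illustrative only: randomize a self-play game in which one side gets
    up to 8x more playouts than the other, with both sides (the same neural
    net) aware of the asymmetry. Some games also start from handicap
    positions and/or adjust komi to keep the game roughly fair."""
    advantage = random.choice([1, 2, 4, 8])              # up to 8x playouts
    strong_side = random.choice(["black", "white"])
    weak_side = "white" if strong_side == "black" else "black"
    return {
        "playouts": {strong_side: base_playouts * advantage,
                     weak_side: base_playouts},
        "handicap_stones": random.choice([0, 0, 0, 2, 3, 4]),
        "both_sides_know_asymmetry": True,
    }

print(sample_asymmetric_selfplay_game())
```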

This method is cute because it adds pressure to only learn "general high-level strategies" for exploiting a compute advantage, instead of memorizing specific exploits (which one might hypothesize to be less likely to generalize to arbitrary opponents). Any specific winning exploit learned by the stronger side that works too well will be learned by the weaker side (it's the same neural net!) and subsequently will be avoided and stop working.

And it's interesting that "play for positions that a compute-limited yourself might mess up more" correlates with "play for positions that a weaker human player might mess up in".

But because this method doesn't adapt to exploit any particular opponent, and is entirely ignorant of a lot of tendencies of play shared widely across all humans, I would still say it's pretty minor. I don't have hard data, but from firsthand subjective observation I'm decently confident that top human amateurs or pros do a better job than KataGo would of playing high-handicap games (> 6 stones) against players more than that many ranks weaker than them, despite KataGo being stronger in "normal" gameplay. KataGo definitely plays too "honestly", even with the above training method, and lacks knowledge of what weaker humans find hard.

If you really wanted to build a strong anti-human handicap game bot in Go, you'd absolutely start by learning to better model human play, using the millions of games available online.

(As for the direct gap with the very best pro players, without any specific anti-bot exploits, at tournament-like time controls I think it's more like 2 stones rather than 3-4. I could believe 3-4 for some weaker pros, or if you used ultra-blitz time controls, since shorter time controls tend to favor bots over humans).
