Undergraduate student studying Maths in the UK. Particularly interested in probability, game theory, AI alignment, and maths puzzles.
I'm not sure this effect is as strong as one might think. For one, Dario Amodei (CEO of Anthropic) claimed his P(doom) was around 25% (specifically, "the chance that his technology could end human civilisation"). I remember Sam Altman saying something similar, but can't find an exact figure right now. Meanwhile, Yann LeCun (Chief AI Scientist at Meta) maintains approximately the stance you describe. None of this, as far as I'm aware, has led to significant losses for OpenAI or Anthropic.
Is it really the case that making these claims at an institutional level, on a little corner of one's website, is so much stronger than the CEO of one's company espousing these views very publicly in interviews? Intuitively, this seems like it wouldn't make a massive difference.
In a worst-case scenario, I can imagine that this puts OpenAI directly in the firing line of regulators, whilst Meta gets off far more lightly.
I'm interested to know if there's any precedent for this, ie. a company being regulated further because they claimed their industry needed it, while those restrictions weren't applied universally.
These are the probabilities of each state:
| State | Probability |
| --- | --- |
with x being the probability of all three parts of a component being fine. (Obviously, , because .)
This is not enough information to solve for x, of course, but note that . Note also that and (ie A is not strongly correlated or anti-correlated with or ). However, by quite a long way: is fairly strongly anti-correlated with .
Now here's the estimation bit, I suppose: given that holds, we'd probably expect a similar distribution of probabilities across values of and , given that is not (strongly) correlated with or . So etc. This resolves to .
| State | Probability |
| --- | --- |
This seems... not super unreasonable? At least, it appears slightly better than going for the most basic method, which is , so split the difference and say it's or thereabouts.
The key assumption here is that "if is pretty much uncorrelated with and , it's probably uncorrelated with the conjunction ". This is not always strictly true as a matter of probability theory, but we're making assumptions on incomplete information about a real-world scenario, so I'd say that skewing our guess by a factor of 10% away from the most naive approach is probably helpful on net.
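That caveat can be made concrete with a tiny counterexample (the variable names here are my own placeholders, not the problem's): take B and C to be independent fair coin flips and set A = B XOR C. Then A is independent of B alone and of C alone, yet perfectly anti-correlated with the conjunction B-and-C:

```python
from itertools import product

# B and C are independent fair coin flips; A = B XOR C.
# Each of the four (b, c) outcomes has probability 1/4.
outcomes = [(b, c, b ^ c) for b, c in product([0, 1], repeat=2)]

def prob(event):
    # Probability of an event over the four equally likely outcomes.
    return sum(1 for (b, c, a) in outcomes if event(b, c, a)) / len(outcomes)

p_a = prob(lambda b, c, a: a == 1)  # P(A) = 0.5
p_a_given_b = (prob(lambda b, c, a: a == 1 and b == 1)
               / prob(lambda b, c, a: b == 1))  # P(A | B) = 0.5, so A is independent of B
p_a_given_bc = (prob(lambda b, c, a: a == 1 and b == 1 and c == 1)
                / prob(lambda b, c, a: b == 1 and c == 1))  # P(A | B and C) = 0.0
```

So pairwise uncorrelatedness really can coexist with strong (anti-)correlation with the conjunction; the assumption is a judgement call about the real-world setup, not a theorem.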
This means that, in expectation, we guess the in-house machine produces good widgets. I'd take that many from the Super Reliable Vendor if offered, but if they were offering less than that I'd roll the dice with the Worryingly Inconsistent In-House Machine. That is, I'm indifferent at .
The trick I usually use for this is to look at my watch (which is digital and precise to the second) and take the seconds counter, which is uniformly distributed across {0, 1, ..., 59}. I then take option 0 if this is even and option 1 if it is odd.
(This also extends nicely to decisions with any small number of options: 3, 4, 5, and 6 are also factors of 60, so seconds modulo X is uniformly distributed across {0, 1, ..., X-1}. Plus, I'm not very good at coming up with words.)
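As a sketch (the function name is mine), the watch trick might look like:

```python
import time

def pick_option(n_options: int) -> int:
    """Pick among n_options using the current seconds counter.

    Only uniform when n_options divides 60 -- i.e. for 2, 3, 4, 5, 6
    and the other factors of 60, as noted above.
    """
    if 60 % n_options != 0:
        raise ValueError("seconds mod n is only uniform when n divides 60")
    return time.localtime().tm_sec % n_options
```

The `% 2` case is exactly the even/odd rule: option 0 on even seconds, option 1 on odd.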
Question 1:
For any given value of p (the fraction of squares left intact), consider the probability of a queen being able to traverse the board vertically as being q_p, and the probability of a rook being able to traverse the board horizontally (ie. from left edge to right edge) as being r_p. This notation is to emphasise that these values are dependent on p.
The key observation, as I see it, is that for any given board, exactly one of the conditions "queen can traverse the board vertically" and "rook can traverse the board horizontally using the missing squares of the board" is true. If the rook has a horizontal path across the missing squares, then this path cuts off the queen's access to a vertical path. This "virtual board" consisting of the missing squares has fraction 1-p of its squares intact.
Next, notice that every board with fraction p of the squares intact has a "transpose board" (flipped along the leading diagonal) which has the exact same fraction p intact. Any piece which can traverse a board vertically can traverse its transpose board horizontally, and vice-versa. This means for a given p, the fraction of boards the rook can traverse horizontally is the same as the fraction of boards it can traverse vertically.
Thus (again for fixed p) the probability of a rook being able to traverse a random board's complement horizontally is the same as the probability of a rook being able to traverse a random board's complement vertically. But we know that exactly one of "a queen can traverse the board vertically" and "a rook can traverse the board horizontally on the missing squares" is true, so q_p + r_(1-p) = 1.
This gives us q_p = 1 - r_(1-p). At p = 1/2 this reads q_(1/2) + r_(1/2) = 1, and q_(1/2) >= r_(1/2) (a queen can make any move a rook can), so on large boards, where each crossing probability tends to either 0 or 1, we get q_(1/2) = 1 and r_(1/2) = 0. But this is only true if 1/2 is not the critical value for the rook, ie. the rook's jump from 0 to 1 doesn't happen exactly at p = 1/2.
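The "exactly one" claim can be sanity-checked empirically. Here is a rough Monte Carlo sketch (board size, trial count, and seed are arbitrary choices of mine): it generates random boards, runs a BFS for the queen on the intact squares and for the rook on the missing squares, and confirms that exactly one of the two crossings holds every time.

```python
import random
from collections import deque

# Queen steps to any of the 8 neighbouring squares; rook to the 4 cardinal ones.
QUEEN_MOVES = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
ROOK_MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def crosses(board, moves, vertical):
    """BFS: can a piece with these one-step moves cross the n x n board?"""
    n = len(board)
    if vertical:
        frontier = [(0, c) for c in range(n) if board[0][c]]
        reached_far_edge = lambda r, c: r == n - 1
    else:
        frontier = [(r, 0) for r in range(n) if board[r][0]]
        reached_far_edge = lambda r, c: c == n - 1
    seen = set(frontier)
    queue = deque(frontier)
    while queue:
        r, c = queue.popleft()
        if reached_far_edge(r, c):
            return True
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and board[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def duality_holds(n=15, p=0.5, trials=200, seed=0):
    """Check that on every sampled board, exactly one of the two crossings holds."""
    rng = random.Random(seed)
    for _ in range(trials):
        board = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        complement = [[not cell for cell in row] for row in board]
        q = crosses(board, QUEEN_MOVES, vertical=True)
        r = crosses(complement, ROOK_MOVES, vertical=False)
        if q == r:  # both or neither would contradict the claim
            return False
    return True
```

On the samples I'd expect `duality_holds` to return True at any p, since the underlying fact is a theorem about site percolation on the square lattice and its matching (diagonals-added) lattice, not just an asymptotic statement.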
I suppose the reason you asked for both probabilities together is that it helps to consider when either or both pieces can or cannot traverse certain boards, as I did. It would have been significantly more effort, if not impossible, to deduce either probability individually without any consideration of the other.
Question 2:
We use the same argument as with the queen and the rook, but instead consider the bondsman and antibondsman. By symmetry, they have the same probabilities, and the bondsman can traverse a board vertically if and only if an antibondsman cannot traverse the board's complement horizontally. Writing b_p for this common crossing probability, the same reasoning as before gives b_p + b_(1-p) = 1, ie. b_(1/2) = 1/2.
With regard to your bonus question, I have a hunch that this is somehow related to the bond problem in percolation theory (your title may have provided a hint here). Maybe this is the transition / critical probability for when bonds form in some specific type of lattice.
Considering squares as representing nodes here and the bondsman's ability to travel between two squares to be an edge, this means every square is linked to exactly six squares (four cardinal directions and the two possible diagonals). Six edges per node in a planar graph suggests that we might be dealing with a triangular lattice (whose site percolation threshold is known to be exactly 1/2)?
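As a quick empirical check (the bondsman's exact move set is my assumption: the four cardinal steps plus one pair of opposite diagonals; board size, trial count, and seed are also mine), a Monte Carlo estimate of the bondsman's vertical crossing probability at p = 1/2 should come out near 1/2:

```python
import random
from collections import deque

# Assumed bondsman moves: cardinals plus the down-right / up-left diagonals.
BONDSMAN_MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (1, 1), (-1, -1)]

def crosses_vertically(board, moves):
    """BFS from the top row; can the piece reach the bottom row?"""
    n = len(board)
    frontier = [(0, c) for c in range(n) if board[0][c]]
    seen = set(frontier)
    queue = deque(frontier)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and board[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def estimate_crossing_probability(n=11, p=0.5, trials=3000, seed=0):
    """Fraction of random boards the bondsman crosses vertically."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        board = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += crosses_vertically(board, BONDSMAN_MOVES)
    return hits / trials
```

If the symmetry argument above is right, the estimate should hover around 0.5 at p = 1/2 regardless of board size.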
Reflections:
I'm very interested to see the answers to these questions. I have never met percolation theory before, and found these problems really compelling (I'm a fairly new reader of this site, and this post inspired me to make my first comment here). I hope I've not done anything wrong with this comment (the spoiler tags should be working, and I don't think I've broken any rules). Thank you for the puzzles!
NB. A previous version of this comment had the right answer to Q1 by a mostly right method with an error, and the right answer to Q2 by a completely incorrect method. My comment has since been edited to reflect the correct version.
I originally saw this on Twitter, and posted mine in response, but I feel in-depth discussion is probably more productive here, so I appreciate you cross-posting this :)
One thing I'm interested in is your position on technical research vs. policy work. At least for me, seeing someone from an organisation focused on technical alignment research claim that "technical research is ~90% likely to be [less] useful" is a little worrying. Is this position mainly driven by timeline worries ("we don't have long, so the most important thing is getting governments to slow capabilities") or by a general pessimism about the field of technical alignment research panning out at all?