
I'm not sure this effect is as strong as one might think. For one, Dario Amodei (CEO of Anthropic) claimed his P(doom) was around 25% (specifically, "the chance that his technology could end human civilisation"). I remember Sam Altman saying something similar, but can't find an exact figure right now. Meanwhile, Yann LeCun (Chief AI Scientist at Meta) maintains approximately the stance you describe. None of this, as far as I'm aware, has led to significant losses for OpenAI or Anthropic.
Is it really the case that making these claims at an institutional level, on a little corner of one's website, is so much stronger than the CEO of one's company espousing these...
These are the probabilities of each state:
| State | Probability |
|---|---|
with x being the probability of all three parts of a component being fine. (Obviously, , because .)
This is not enough information to solve for x, of course, but note that . Note also that and (i.e. A is not strongly correlated or anti-correlated with or ). However, by quite a long way: is fairly strongly anti-correlated with .
Now here's the estimation bit, I suppose: given that holds, we'd probably expect a similar distribution of probabilities across values of and , given that is not (strongly) correlated with or . So etc. This resolves to .
| State | Probability |
|---|---|
This seems... not super unreasonable? At least, it appears slightly better than going for the most basic method, which is , so split the difference and say it's or thereabouts.
The...
The trick I usually use for this is to look at my watch (which is digital and precise to the second) and take the seconds counter, which is uniformly distributed across {0, 1, ..., 59}. I then take option 0 if this is even and option 1 if it is odd.
(This also extends nicely to decisions with any small number of options: 3, 4, 5, and 6 are also factors of 60, so seconds modulo X is uniformly distributed across {0, 1, ..., X-1}. Plus, I'm not very good at coming up with words.)
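For what it's worth, here's a minimal sketch of the same trick in Python (using the system clock rather than a wristwatch; the function name and the example options are just placeholders of mine, not anything from the original):

```python
from datetime import datetime

def clock_choice(options):
    """Pick one of the options using the current seconds counter.

    Works cleanly whenever len(options) divides 60 (2, 3, 4, 5, 6, ...),
    since seconds % len(options) is then uniform over the option indices.
    """
    seconds = datetime.now().second  # 0..59, effectively uniform
    return options[seconds % len(options)]

# Example: a two-way decision, as in the original trick
print(clock_choice(["option 0", "option 1"]))
```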
Question 1:
For any given value of p, consider the probability of a queen being able to traverse the board vertically as being Q(p), and the probability of a rook being able to traverse the board horizontally (i.e. from left edge to right edge) as being R(p). This notation is to emphasise that these values are dependent on p.
The key observation, as I see it, is that for any given board, exactly one of the conditions "queen can traverse the board vertically" and "rook can traverse the board horizontally using the missing squares of the board" is true. If the rook has a horizontal path across the missing squares, then this path cuts off the queen's access to the far side of the board.
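If it's helpful, here's a quick Monte Carlo sanity check of that observation (my own sketch, not part of the original argument). It treats a traversal as a chain of single-square steps, so the queen's reachability on present squares uses 8-directional adjacency and the rook's reachability on missing squares uses 4-directional adjacency; the board size n, the removal probability p, and the trial count below are arbitrary choices.

```python
import random
from collections import deque

# 8-directional steps (queen moving square by square) and 4-directional steps (rook).
QUEEN_MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
ROOK_MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def crosses(grid, want, top_to_bottom, moves):
    """BFS over cells whose value equals `want`; report whether a crossing exists.

    top_to_bottom=True  -> path from the top edge to the bottom edge.
    top_to_bottom=False -> path from the left edge to the right edge.
    """
    n = len(grid)
    if top_to_bottom:
        frontier = [(0, c) for c in range(n) if grid[0][c] == want]
        reached_far_side = lambda r, c: r == n - 1
    else:
        frontier = [(r, 0) for r in range(n) if grid[r][0] == want]
        reached_far_side = lambda r, c: c == n - 1
    seen, queue = set(frontier), deque(frontier)
    while queue:
        r, c = queue.popleft()
        if reached_far_side(r, c):
            return True
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen and grid[nr][nc] == want:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

n, p, trials = 8, 0.5, 10_000  # arbitrary: board size, square-removal probability, sample count
for _ in range(trials):
    # True = square present, False = square missing (removed independently with probability p)
    grid = [[random.random() >= p for _ in range(n)] for _ in range(n)]
    queen_path = crosses(grid, True, True, QUEEN_MOVES)   # queen: vertical, on present squares
    rook_path = crosses(grid, False, False, ROOK_MOVES)   # rook: horizontal, on missing squares
    assert queen_path != rook_path, "exactly one of the two crossings should exist"
print("exactly one crossing existed on every sampled board")
```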
I originally saw this on Twitter, and posted mine in response, but I feel in-depth discussion is probably more productive here, so I appreciate you cross-posting this :)
One thing I'm interested in is your position on technical research vs. policy work. At least for me, seeing someone from an organisation focused on technical alignment research claim that "technical research is ~90% likely to be [less] useful" is a little worrying. Is this position mainly driven by timeline worries ("we don't have long, so the most important thing is getting governments to slow capabilities") or by a general pessimism about the field of technical alignment research panning out at all?