I'm not sure this effect is as strong as one might think. For one, Dario Amodei (CEO of Anthropic) claimed his P(doom) was around 25% (specifically, "the chance that his technology could end human civilisation"). I remember Sam Altman saying something similar, but can't find an exact figure right now. Meanwhile, Yann LeCun (Chief AI Scientist at Meta) maintains approximately the stance you describe. None of this, as far as I'm aware, has led to significant losses for OpenAI or Anthropic.
Is it really the case that making these claims at an institutional level...
These are the probabilities of each state:
| State | Probability |
|---|---|
with $x$ being the probability of all three parts of a component being fine. (Obviously, $x \leq \min(P(A), P(B), P(C))$, because $A \cap B \cap C$ is contained in each of $A$, $B$, and $C$.)
This is not enough information to solve for $x$, of course, but note that $x \geq P(A) + P(B) + P(C) - 2$. Note also that $P(A \cap B) \approx P(A)\,P(B)$ and $P(A \cap C) \approx P(A)\,P(C)$ (ie $A$ is not strongly correlated or anti-correlated with $B$ or $C$). However, ...
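To make the "not enough information" point concrete: with only the marginals known, $x$ is only pinned down to its Fréchet interval. Here's a minimal Python sketch of that, with made-up marginals since the actual numbers aren't reproduced here:

```python
# Fréchet bounds: knowing only the marginals P(A), P(B), P(C) constrains
# x = P(A and B and C) to an interval rather than determining it.
# Illustrative numbers only -- the actual marginals aren't given above.
pa, pb, pc = 0.9, 0.9, 0.9

lower = max(0.0, pa + pb + pc - 2)  # Bonferroni/Fréchet lower bound
upper = min(pa, pb, pc)             # the intersection can't exceed any marginal

print(f"x is only constrained to [{lower:.2f}, {upper:.2f}]")  # [0.70, 0.90]
```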
The trick I usually use for this is to look at my watch (which is digital and precise to the second) and take the seconds counter, which is uniformly distributed across $\{0, 1, \ldots, 59\}$. I then take option 0 if this is even and option 1 if it is odd.
(This also extends nicely to decisions with any small number of options: 3, 4, 5, and 6 are also factors of 60, so seconds modulo $X$ is uniformly distributed across $\{0, 1, \ldots, X-1\}$. Plus, I'm not very good at coming up with words.)
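For what it's worth, here's a minimal sketch of the trick in Python (the function name and interface are mine, nothing standard):

```python
import datetime

def pick_option(n_options: int = 2) -> int:
    """Pick one of n_options using the current seconds counter.

    Exactly uniform over {0, ..., n_options - 1} whenever n_options
    divides 60, since the seconds counter -- read at an arbitrary
    moment -- is effectively uniform over {0, ..., 59}.
    """
    assert 60 % n_options == 0, "need a divisor of 60 for exact uniformity"
    seconds = datetime.datetime.now().second  # 0-59, as on a digital watch
    return seconds % n_options

print(pick_option(2))  # option 0 if the seconds are even, option 1 if odd
```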
Question 1:
For any given value of $p$, consider the probability of a queen being able to traverse the board vertically as being $Q(p)$, and the probability of a rook being able to traverse the board horizontally (ie. from left edge to right edge) as being $R(p)$. This notation is to emphasise that these values are dependent on $p$.
The key observation, as I see it, is that for any given board, exactly one of the conditions "queen can traverse the board vertically" and "rook can traverse the board horizontally using the missing squares of the board" holds.
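Here's a quick Monte Carlo sanity check of that observation (a sketch, not a proof: the board size, the per-square probability, and the function names are all my choices). I've modelled both pieces as moving one square at a time, which gives the same connectivity as sliding, since a slide needs every intermediate square to be usable anyway:

```python
import random
from collections import deque

QUEEN_STEPS = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]  # diagonals allowed
ROOK_STEPS  = [(-1,0),(1,0),(0,-1),(0,1)]                              # orthogonal only

def crosses(board, want, steps, starts, done):
    """BFS over squares whose value equals `want`, from `starts`, until `done`."""
    n = len(board)
    frontier = deque((r, c) for r, c in starts if board[r][c] == want)
    seen = set(frontier)
    while frontier:
        r, c = frontier.popleft()
        if done(r, c):
            return True
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen \
                    and board[nr][nc] == want:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

def queen_crosses_vertically(board):
    """Queen walks present squares (True) from the top edge to the bottom edge."""
    n = len(board)
    return crosses(board, True, QUEEN_STEPS,
                   [(0, c) for c in range(n)], lambda r, c: r == n - 1)

def rook_crosses_horizontally(board):
    """Rook walks missing squares (False) from the left edge to the right edge."""
    n = len(board)
    return crosses(board, False, ROOK_STEPS,
                   [(r, 0) for r in range(n)], lambda r, c: c == n - 1)

# Exactly one of the two conditions should hold on every random board:
for _ in range(1000):
    p = random.random()
    board = [[random.random() < p for _ in range(8)] for _ in range(8)]
    assert queen_crosses_vertically(board) != rook_crosses_horizontally(board)
print("duality held on 1000 random boards")
```

The asymmetry in the move sets is what makes the duality exact: the queen's diagonal steps (8-connectivity) on present squares are blocked precisely when a 4-connected rook path of missing squares cuts across them, and vice versa.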
I originally saw this on Twitter, and posted mine in response, but I feel in-depth discussion is probably more productive here, so I appreciate you cross-posting this :)
One thing I'm interested in is your position on technical research vs. policy work. At least for me, seeing someone from an organisation focused on technical alignment research claim that "technical research is ~90% likely to be [less] useful" is a little worrying. Is this position mainly driven by timeline worries ("we don't have long, so the most important thing is getting governments to...")?