Oops, I meant lambda! edited :)
Thanks for the confirmation!
In addition to what you say, I would also guess that $e^{-\hat{\lambda} t}$ is a reasonable guess for P(no events in time t) when t > T, if it's reasonable to assume that events are Poisson-distributed. (but again, open to pushback here :)
Great post, thanks for sharing!
I don't have good intuitions about the Gamma distribution, and I'd like to have good intuitions for computing your Rule's outcomes in my head. Here's a way of thinking about it -- do you think it makes sense?
Let $\hat{\lambda}$ denote either $n/T$ or $(n+1)/T$ (whichever your rule says is appropriate), where $n$ is the number of events observed in time $T$.
I notice that for $t \ll T$, your probability of zero events is $\approx e^{-\hat{\lambda} t}$, where $\hat{\lambda}$ is what I'd call the estimated event rate.
So one nice intuitive inter...
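To spell out the algebra I'm gesturing at (a sketch under my own assumptions, which may not match the post's exact constants: I'm taking the posterior over the Poisson rate $\lambda$ to be Gamma with shape $n+1$ and rate $T$ after observing $n$ events in time $T$):

```latex
% Posterior predictive for "no events in the next t", assuming a
% Gamma(shape = n+1, rate = T) posterior over the Poisson rate \lambda:
P(\text{no events in } t)
  = \int_0^\infty e^{-\lambda t}\,
      \frac{T^{\,n+1}}{\Gamma(n+1)}\,\lambda^{n} e^{-\lambda T}\, d\lambda
  = \left(\frac{T}{T+t}\right)^{\!n+1}
  = \left(1 + \frac{t}{T}\right)^{\!-(n+1)}

% For t \ll T, using \log(1+x) \approx x:
\left(1 + \frac{t}{T}\right)^{\!-(n+1)}
  = e^{-(n+1)\log(1 + t/T)}
  \approx e^{-(n+1)\,t/T}
  = e^{-\hat{\lambda} t},
  \qquad \hat{\lambda} = \frac{n+1}{T}
```

One thing this makes clear: the exact form $(1+t/T)^{-(n+1)}$ holds for any $t$; it's only the $e^{-\hat{\lambda} t}$ reading that's a small-$t$ approximation. (And since $\log(1+x) < x$, the exponential underestimates the true probability of zero events as $t$ grows.)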
In general, this post has prompted me to think more about the transition period between AI that's weaker than humans and stronger than all of human civilization, and that's been interesting! A lot of people assume that this takeoff will happen very quickly, but if it lasts for multiple years (or even decades), then the dynamics of that transition period could matter a lot, and trade is one aspect of that.
Some stray thoughts on what that transition period could look like:
I love the genre of "Katja takes an AI risk analogy way more seriously than other people and makes long lists of ways the analogous thing could work." (the previous post in the genre being the classic "Beyond fire alarms: freeing the groupstruck.")
Digging into the implications of this post:
...In sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way.
Maybe one useful thought experiment is whether we could train a dog-level intelligence to do most of these tasks if it had the actuators of an ant colony, given our good understanding of dog training (~= "communication") and the fact that dogs still lack a bunch of key cognitive abilities humans have (so dog-human relations are somewhat analogous to human-AI relations).
(Also, ant colonies in aggregate do pretty complex things, so maybe they're not that far off from dogs? But I'm mostly just thinking of Douglas Hofstadter's "Aunt Hillary" here :)
My gu...
Yeah. It's conceivable you have an AI with some sentimental attachment to humans that leaves part of the universe as a "nature preserve" for humans. (Less analogous to our relationship with ants and more to charismatic flora and megafauna.)
In light of the FTX thing, maybe a particularly important heuristic is to notice cases where the worst case is not lower-bounded at zero. Examples:
Thanks for your posts, Scott! This has been super interesting to follow.
Figuring out where to set the AM-GM boundary strikes me as maybe the key consideration wrt whether I should use GM -- otherwise I don't know how to use it in practical situations, plus it just makes GM feel inelegant.
From your VNM-rationality post, it seems like one way to think about the boundary is commensurability. You use AM within clusters whose members are willing to sacrifice for each other (are willing to make Kaldor-Hicks improvements, and have some common currency s.t. ...
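To check my understanding, here's a toy version of the aggregation I'm imagining (my own sketch, not something from your post -- the names and structure are made up): AM within each commensurable cluster, then GM across clusters.

```python
import math

def am(values):
    """Arithmetic mean: within a cluster whose members share a common
    currency and accept Kaldor-Hicks tradeoffs among themselves."""
    return sum(values) / len(values)

def gm(values):
    """Geometric mean: across clusters whose utilities aren't
    commensurable (no common currency for tradeoffs).
    Assumes all values are positive."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def aggregate(clusters):
    """GM across clusters of the AM within each cluster."""
    return gm([am(cluster) for cluster in clusters])

# Toy example: two commensurable clusters of utilities.
print(aggregate([[1.0, 3.0], [2.0, 2.0]]))  # gm([2.0, 2.0]) = 2.0
```

(The interesting/hard part, of course, is the boundary itself: deciding which utilities go in the same cluster.)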
FWIW, I went through pretty much the same sequence of thoughts, which jarred me out of what was otherwise a pleasant/flowing read. Given the difficulty people unfamiliar with the notation faced in looking it up, maybe you could say "∃ (there exists)", and/or link to the relevant Wiki page (https://en.wikipedia.org/wiki/Existential_quantification)?
If you're comfortable rephrasing the sentence a little more for clarity, I'd suggest replacing the part after the quantifier with something like "some length of delay between behavior and
Oops yes, sorry!