
Pushing writing ideas to external memory for my less burned-out future self:

  • agent foundations need a path-dependent notion of rationality

    • the economic world of average expected values / amortized big-O applies if f(x) can go negative or you start very high
    • vs min-maxing / worst case / risk-averse scenarios if there is a bottom (death) - see the toy simulation right after this list
    • Pareto recipes
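A minimal sketch of that first bullet, with made-up numbers (the bets, payoffs, and the play function are all my illustrative assumptions, not a worked-out model): when there is an absorbing bottom, a policy that wins on expected value can still be the one you should refuse.

```python
import random

# Toy repeated-bet model (all numbers invented for illustration).
# "ev" bet: higher expected value per round, but it can push you to the bottom.
# "safe" bet: lower expected value; its worst case can't reach the bottom in 100 rounds.

def play(policy: str, wealth: float = 10.0, rounds: int = 100) -> float:
    for _ in range(rounds):
        if wealth <= 0:  # the bottom is absorbing: no path continues from here
            return 0.0
        if policy == "ev":
            wealth += 3.0 if random.random() < 0.5 else -2.0   # E = +0.5/round
        else:
            wealth += 0.4 if random.random() < 0.5 else -0.05  # E = +0.175/round
    return max(wealth, 0.0)

random.seed(0)
trials = 10_000
for policy in ("ev", "safe"):
    results = [play(policy) for _ in range(trials)]
    ruin_rate = sum(r == 0.0 for r in results) / trials
    print(f"{policy}: mean wealth {sum(results) / trials:.1f}, ruin rate {ruin_rate:.3f}")
```

Expected value still prefers the risky policy even after pricing in the ruined paths; whether that's "rational" depends on whether the paths that hit the bottom count as just another outcome - which is the path-dependence.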
  • alignment is a capability

    • alignment and capabilities might sound different in the limit, but the difference disappears in practice (even close to the limit? 🤔)
  • in a universe with infinite Everett branches, I was born in the subset that wasn't destroyed by nuclear winter during the Cold War - no matter how unlikely it was that humanity didn't destroy itself (it could have done so in most worlds, and I wasn't born in those; I live in the one where Petrov heard the Geiger counter beep in some particular pattern that made him more suspicious or something... something something anthropic principle)

    • similarly, people alive in 100 years will find themselves in a world where AGI didn't destroy the world, no matter what the odds are - as long as there is at least one world with non-zero probability (something something Born rule... only if some decision along the way is a wave function, not if all decisions are classical and the uncertainty comes from subjective ignorance)
    • if you took quantum risks in the past, you now live only in the branches where you survived (but you could be in pain or whatever)
    • if you personally take a quantum risk now, your future self will find itself only in a subset of the futures, but your loved ones will experience all your possible futures, including the branches where you die ... and you will experience everything until you actually die (something something s-risk vs x-risk)
    • if humanity finds itself in unlikely branches where we didn't kill our collective selves in the past, does that bring any hope for the future? (the toy simulation below pokes at this)
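A toy Monte Carlo for the anthropics cluster above (my framing; simulate, p_past, and p_future are invented for illustration, and the branch-counting is naive - no Born-rule measure): observers exist only in branches that survived the past, so every observer sees an unbroken history no matter how small p_past was, yet in this model that observation says nothing about p_future.

```python
import random

# Branches survive the past filter independently with probability p_past.
# Observers exist only in surviving branches; they all "remember" survival.
# Future survival is a separate, independent coin here (a deliberate assumption).

def simulate(p_past: float, p_future: float, branches: int = 100_000) -> None:
    observed = [b for b in range(branches) if random.random() < p_past]
    future = sum(random.random() < p_future for _ in observed)
    print(f"p_past={p_past}: all {len(observed)} observers saw an unbroken past;"
          f" {future / len(observed):.3f} of their futures survive")

random.seed(0)
simulate(p_past=0.01, p_future=0.5)  # past survival was nearly impossible
simulate(p_past=0.99, p_future=0.5)  # past survival was nearly certain
```

Under independence the answer to the last question is "no" - conditioning on an unbroken past moves nothing about the future. Any hope has to come from correlation between past and future survival (shared causes like institutions, norms, or Petrovs), which is exactly what the independence assumption deletes.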