Concerning the first paragraph: I, the government, and most other Israelis disagree with that assessment. Iran has never given any indication that working on a two-state solution would appease them. As mentioned in the OP, Iran's stated goal is usually the complete removal of Israel, not the creation of a Palestinian state. Containment of their nuclear capabilities, on the other hand, is definitely possible: repeated bombings of nuclear facilities can continue indefinitely, and that has been our policy, successfully, since the 1980s (before Iran there was Iraq, which had its own nuclear program).
This seems silly to me - it is true that in any single instance, a quantum coin flip probably can't save you if classical physics has already decided that you're going to die. But the exponential butterfly effect from all the minuscule changes accumulating across splits between now and then should add up to a huge spread of possible universes by the time AGI arrives. In some of them the AI will be deadly; in others, the seed of the AI will happen to be picked just right for it to turn out good, or the exact right method for successful alignment will be the first one discovered.
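To make the "exponential" part concrete, here's a toy sketch of my own (using the chaotic logistic map as a stand-in for real dynamics, not any kind of physics simulation): two trajectories that start 1e-15 apart become completely uncorrelated within a few dozen steps.

```python
# Toy illustration of the butterfly effect: the chaotic logistic map.
# Two trajectories that differ by 1e-15 at step 0 are doing completely
# different things after a few dozen iterations.

def logistic_map(x0, r=4.0, steps=60):
    """Iterate x -> r * x * (1 - x), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.4)
b = logistic_map(0.4 + 1e-15)  # one "minuscule change"

for step in (0, 20, 40, 60):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")
# The gap grows roughly exponentially (doubling-ish per step) until it is
# order 1, i.e. the two histories have fully diverged.
```

The point is only the growth rate: a difference at the 15th decimal place dominates the whole trajectory within ~50 steps, and the years between now and AGI contain vastly more than 50 "steps" of chaotic amplification.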
Damn, being twenty sucks.
Does anyone have an alternative? I can't apply to The Thiel Fellowship either, because I've already completed my undergraduate degree (I started early).
Yes, though I actually think "belief" is more correct here. I assume that if MWI is correct then there will always exist a future branch in which humanity continues to exist. This doesn't concern me very much, because at this point I don't believe humanity is nearing extinction anyway (I'm a generally optimistic person). I do think that if I shared MIRI's outlook on AI risk, this would become very relevant to me as a concrete hope, since my credence in MWI is higher than the probability Eliezer stated for humanity surviving AI.
Stress is a major motivator for everyone. Giving a fake, overly optimistic deadline means that on every single project you feel stressed (because you are missing a deadline) and work faster. You don't finish on time, but you finish faster than if you had given an accurate estimate. I don't know how many people internalize this, but I think it makes sense that a manager would want you to "promise" to do something faster than possible - it'll just make you work harder.
Taking this into account, whenever I am asked for an estimate, I try to give a best-case estimate (which is what most people give naturally anyway). If I took the time I spend aimlessly on my phone into account when planning, I'd just spend even more time on my phone, because I wouldn't feel guilty about it.
GPT-4 is smart enough to understand what's happening if you explain the situation to it (I copied over the explanation). See this: