Reading your comment, I think I feel a mix of love and frustration toward it similar to the one your comment expresses toward the post.
Let me be a bit theoretical for a moment. It makes sense for me to think of utilities as a sum $U = \alpha U_{\text{post}} + \beta U_{\text{pre}}$, where $U_{\text{post}}$ is the utility of things after singularity/superintelligence/etc and $U_{\text{pre}}$ the utility of things before then (assuming both are scaled to have similar magnitudes, so the relative importance is given by the scaling factors $\alpha$ and $\beta$). There's no arguing about the shape of these or what factors people chose becaus...
I chose the college example because it's especially jarring / especially disrespectful of the attempt to separate the world into two "pre-AGI versus post-AGI" magisteria.
A more obvious way to see that x-risk matters for ordinary day-to-day goals is that parents want their kids to have long, happy lives (and nearly all of the variance in length and happiness is, in real life, dependent on whether the AGI transition goes well or poorly). It's not a separate goal; it's the same goal, optimized without treating 'AGI kills my kids' as though it's somehow better than...
I think you picked a good suggestion for a bad reason: both because of the difference between market cap and price per coin, as the sibling comment points out, and because you give no reason for this to change in the next year when it's held for the last N years. Here's what I think is a better reason.
Suggestion: Ethereum (ETH)
Reasoning: There are a number of upgrades planned in the next few years. The biggest problem for cryptocurrencies in general is the low throughput of transactions (leading to high fees due to high demand for a scarce resource). The Ethereum project has long-term plans to improve this with sharding and zero-knowledge proofs, but the sharding upgrade is not planned until 2023 and it's not clear how much of the value of zero-knowledge proofs will be captured by Ethereum as opposed to the Layer-2 chains that build on top of Ethereum....
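Since "high fees from scarce throughput" is doing the work here, a minimal sketch of the EIP-1559 base-fee mechanism (my illustration, not from the comment; the constants are Ethereum mainnet's post-London values) shows how sustained demand for full blocks compounds the fee:

```python
# Minimal sketch of EIP-1559-style base-fee dynamics: when blocks stay full,
# the base fee compounds upward, which is one concrete mechanism behind
# "high fees due to high demand for a scarce resource".
GAS_TARGET = 15_000_000      # target gas per block (mainnet post-London value)
MAX_CHANGE_FRACTION = 1 / 8  # base fee moves at most 12.5% per block

def next_base_fee(base_fee: float, gas_used: int) -> float:
    """Update rule: move the base fee proportionally to how far the block
    deviated from the gas target, capped at +/-12.5% per block."""
    delta = MAX_CHANGE_FRACTION * (gas_used - GAS_TARGET) / GAS_TARGET
    return base_fee * (1 + delta)

fee = 20.0  # starting base fee in gwei (arbitrary)
for _ in range(20):
    fee = next_base_fee(fee, gas_used=30_000_000)  # sustained full blocks
print(f"base fee after 20 full blocks: {fee:.1f} gwei")  # ~20 * 1.125**20 ≈ 211 gwei
```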
It might be worth separating self-consciousness (awareness of how your self looks from within) from face-consciousness (awareness of how your self looks from outside). Self-consciousness is clearly useful as a cheap proxy for face-consciousness, and so we develop a strong drive to be able to see ourselves as good so that others will see us that way as well. The difference between this separation and the view that being a good person is only a social concept (suggested by Ruby) shows up when we consider something like the events in "Self-consciousness in social justice"...
One way to see this is to point out that when Alice tells Bob that everybody knows X, either Alice is asserting X because people act as if they don't know X, or Bob does not know X. That's why Alice is telling Bob in the first place.
It could also be that everybody (suitable quantification might be limited to: every student in this course/everyone at this party/every thinker on this site/every co-conspirator of our coup/etc) does in fact know X, but not everybody knows that everybody knows X. Depending on the circumstances in which this is pointed out, this can be...
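For reference, the standard epistemic-logic notation for the distinction being drawn here (my addition, not from the original comment):

```latex
% E = "everybody knows"; C = common knowledge
\begin{align*}
  E\,X &: \text{everybody knows } X \\
  E^{2}X = E(E\,X) &: \text{everybody knows that everybody knows } X \\
  C\,X = \bigwedge_{n \geq 1} E^{n}X &: \text{common knowledge: every finite level holds}
\end{align*}
```

Telling the group X out loud is one way to jump from $E\,X$ to something close to $C\,X$.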
Nice proof with a thought-provoking example! I think it could benefit from being translated into a more AI-relevant setting with the following key (a toy expected-utility sketch follows the list):
- Northland winning = no shutdown
- Southland winning = shutdown
- Send messenger to Northland = Act in a way that looks dangerous and causes evaluators to probably shut down
- Send messenger to Southland = Act in a way that looks safe and causes evaluators to probably not shut down
- Bet on Northland = Set up costly processes to do real work in future to attain high utility (e.g. build factories and labs that would need to be...
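To make the incentive structure concrete, here's a toy expected-utility calculation with made-up numbers (the original proof's payoffs are truncated above); it only illustrates the shape of the argument the key suggests, not the proof itself:

```python
# Toy sketch with hypothetical numbers: the agent's "messenger" action shifts
# the shutdown probability, and its costly "bet" (factories, labs) pays off
# only if no shutdown occurs.
P_NO_SHUTDOWN = {
    "look_dangerous": 0.2,  # messenger to Northland: evaluators likely shut down
    "look_safe":      0.9,  # messenger to Southland: evaluators likely don't
}
BET_COST, BET_PAYOFF = 3.0, 10.0  # hypothetical: pay 3 now, get 10 iff no shutdown

for action, p in P_NO_SHUTDOWN.items():
    for bet in (False, True):
        expected_utility = (-BET_COST + p * BET_PAYOFF) if bet else 0.0
        print(f"{action:14s} bet={bet} EU={expected_utility:+.1f}")
# Betting is worth it exactly when p * BET_PAYOFF > BET_COST, so an agent that
# has placed such bets gains an incentive to steer toward "no shutdown" --
# the incentive structure the translation key above is pointing at.
```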