Bostrom's argument may be underappreciated. You might like Roman Yampolskiy's work if you're deeply interested in exploring the Simulation argument.
Can you tell me your p(doom) and AGI timeline? Because I think we can theoretically settle this:
I give you $x now, and in y years you give me back $x·r.
Please tell me the acceptable y and r for you (of course in the sense of least-convenient-but-still-profitable).
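A minimal sketch of the arithmetic from my side as the lender, assuming you only repay if there's no doom, and taking a hypothetical annual market return m as my opportunity cost (m and the expected-value framing are my additions for illustration, not part of the offer): the bet beats the market for me only if

$$(1 - p_{\text{doom}}) \cdot x r > x(1+m)^y \quad\Longleftrightarrow\quad r > \frac{(1+m)^y}{1 - p_{\text{doom}}}$$

For example, with my p(doom) = 0.5, y = 5, and m = 5%, I'd need r > 1.05^5 / 0.5 ≈ 2.55 to come out ahead in expectation.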
I think we can conceivably gather data on the conjunction "anthropic shadow is real & alignment is hard".
Predictions would be:
conditional on finding alien civilizations that reached the same technological level as us, most of them will have been wiped out by AI.
2. is my guess as to why there is a Great Filter, more so than Grabby Aliens.
That's good to know! Best of luck with your project.
Feels deep, but I don't get it.
Would you mind elaborating?
ANTHROPIC IMMORTALITY
Do other people here have the feeling of "we actually probably messed up AI alignment, but I think we are going to survive for weird anthropic reasons"?
[Sorry if this is terrible formatting, sorry if this is bad etiquette]
I think the relevant idea here is the concept of anthropic immortality. It has been alluded to on LW more times than I can count and has even been discussed explicitly in this context: https://alignmentforum.org/posts/rH9sXupnoR8wSmRe9/ai-safety-via-luck-2
Eliezer wrote somewhat cryptic tweets referencing it recently:
https://x.com/ESYudkowsky/status/1138936939892002816
https://x.com/ESYudkowsky/status/1866627455286648891
But for several weeks I've wished there were a definitive place on the internet where the idea is examined, because I have trouble wrapping my mind around it: its value, its theoretical defects, its likelihood (even though it seems to break probability calculations: https://x.com/ESYudkowsky/status/1138938670881239040 )
It doesn't help that it is related to, and/or confused with, quantum immortality (QI), which actually shows up on the internet (see in particular: https://www.lesswrong.com/posts/cjK6CTW9DyFAFtKHp/false-vacuum-the-universe-playing-quantum-suicide and https://www.lesswrong.com/posts/hB2CTaxqJAeh5jdfF/quantum-immortality-a-perspective-if-ai-doomers-are-probably), has its own LessWrong entry, and has a Wikipedia article. It doesn't help either that QI has become something of a meme at this point.
If you check the context, EY is making the point that anthropic immortality is distinct from QI: https://x.com/knosciwi/status/1866619917979754593, which may be a sign that people got them mixed up?
I feel like there are multiple people "reinventing the wheel" and describing the concept independently.
All this to say:
- maybe someone should compile a broadly accessible entry!
- I'm thinking about doing it myself, but I don't know how valuable it would be (maybe everyone here nodded along to EY's tweets and has a clear mind on this topic)
- could the curious coordinate to explore and document the concept together? Perhaps we can start a thread to discuss it further.
Humbly pinging relevant people, mainly authors from articles I linked to: @avturchin @Jozdien @James_Miller @Halfwit @Vladimir_Nesov
To me, Feynman seems to fall squarely on the von Neumann side of the spectrum.
Yes, they seem to represent two completely different types of extreme intelligence, which is very interesting. I also agree that vN's ideas are more relevant for the community.
Yes. Grothendieck is undoubtedly less innovative and curious across the board.
But I should have mentioned that they are not of the same generation: vN helped build the atomic bomb while G grew up in a concentration camp.
vN came up during a scientific golden age; I'd argue it was probably harder to have the same impact on science in the 1960s.
I also model G as having disdain for applying mathematical ideas to "impure" subjects, maybe because of the Manhattan Project itself as well as the escalation of the Cold War.
This would be consistent with a whole school of French mathematicians deifying pure math (N. Bourbaki in general) while being generally skeptical of pure math's potential to improve society, Roger Godement being the stereotype.
My point was that Grothendieck's mind is interesting to dissect for someone interested in a general theory of intelligence and AI alignment (and that the von Neumann metaphor becomes kinda tiring).
Promising. Where can interested researchers discuss this, and what does the question bank look like so far?