Bostrom's argument may be underappreciated. You might like Roman Yampolskiy's work if you're deeply interested in exploring the Simulation argument.
Can you tell me your p(doom) and AGI timeline? Because I think we can theoretically settle this:
I give you $x now, and in y years you give me back $x·r.
Please tell me what values of y and r are acceptable to you (of course in the least-convenient-but-still-profitable sense).
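To make the structure of the bet concrete, here is a minimal sketch (my own illustration, not anything proposed in the thread) of how each side might compute a break-even repayment multiplier r, assuming "doom within y years" means the repayment never matters and that both parties otherwise discount money at some annual rate d; the names p_doom, d, y are all illustrative.

```python
# Rough sketch of the bet's break-even multiplier r, under the simplifying
# assumptions that (a) doom within y years means the repayment never matters
# and (b) money could otherwise be invested at an annual rate d.
# Each party plugs in their own p_doom; all names and numbers are illustrative.

def break_even_r(p_doom: float, y: float, d: float = 0.05) -> float:
    """Repayment multiplier r at which lending $x now and receiving $x*r in
    y years (only if no doom) exactly matches just investing at rate d."""
    return (1 + d) ** y / (1 - p_doom)

# Example: a lender with p_doom = 0.5 over y = 10 years and a 5% alternative
# return needs r > ~3.26 for the bet to be worthwhile in expectation.
print(break_even_r(p_doom=0.5, y=10))  # ~3.26
```

Roughly, a mutually profitable r exists when the lender's p(doom) is lower than the borrower's, which is presumably what the "least-convenient-but-still-profitable" qualifier is probing.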
I think we can conceivably gather data on the combination of "anthropic shadow is real & alignment is hard".
Predictions would be:
Conditional on us finding alien civilizations that reached the same technological level, most of them will have been wiped out by AI.
2. is my guess as to why there is a Great Filter. More so than Grabby Aliens.
That's good to know! Best of luck with your project.
Feels deep but I don't get it.
Would you mind elaborating?
ANTHROPIC IMMORTALITY
Are other people here having the feeling of "we actually probably messed up AI alignment but I think we are going to survive for weird anthropic reasons"?
[Sorry if this is terrible formatting, sorry if this is bad etiquette]
I think the relevant idea here is the concept of anthropic immortality. It has been alluded to on LW more times than I can count and has even been discussed explicitly in this context: https://alignmentforum.org/posts/rH9sXupnoR8wSmRe9/ai-safety-via-luck-2
Eliezer wrote somewhat cryptic tweets referencing it rece...
You don't survive for anthropic reasons. Anthropic reasons explain the situations where you happen to survive by blind luck.
To me Feynman seems to fall quite on the von Neumann side of the spectrum.
Yes, they seem to represent two completely different types of extreme intelligence which is very interesting. I also agree that vN's ideas are more relevant for the community.
Yes. Grothendieck is undoubtedly less innovative and curious all across the board.
But I should have mentioned that they are not of the same generation: vN helped build the atomic bomb while G grew up in a concentration camp.
vN also came along during a scientific golden age; I'd argue it was probably harder to have the same impact on science in the 1960s.
I also model G as having had disdain for applying mathematical ideas to "impure" subjects, maybe because of the Manhattan Project itself as well as the escalation of the Cold War.
This would be consistent with a whole ...
Pet peeve: the AI community somehow defaulted to von Neumann as the ultimate smart human, and therefore as the baseline for every ASI/human intelligence comparison, when the mathematician Alexander Grothendieck exists.
Von Neumann arguably had the highest processor-type "horsepower" we know of, and his breadth of intellectual achievements is unparalleled.
But imo Grothendieck is a better comparison point for ASI: his intelligence, while strangely similar to LLMs in some dimensions, arguably more closely resembles what an alien-like intelligence would be:
- ...
Hi! I'm Embee but you can call me Max.
I'm a graduate student in mathematics for quantum physics, considering redirecting my focus toward AI alignment research. My background includes:
- Graduate-level mathematics
- Focus on quantum physics
- Programming experience with Python
- Interest in type theory and formal systems
I'm particularly drawn to MIRI-style approaches and interested in:
- Formal verification methods
- Decision theory implementation
- Logical induction
- Mathematical bounds on AI systems
My current program feels too theoretical and disconnected from urgen...
The best pathway towards becoming a member is to produce lots of great AI Alignment content, and to post it to LessWrong and participate in discussions there. The LessWrong/Alignment Forum admins monitor activity on both sites, and if someone consistently contributes to Alignment discussions on LessWrong that get promoted to the Alignment Forum, then it’s quite possible full membership will be offered.
Got it. Thanks.
I've noticed that the karma system makes me gravitate towards posts of very high karma. Are there low-karma posts that impacted you? Maybe you think they are underrated or that they fail in interesting ways.
I'm still bothering you with inquiries about user information. I would like to check this in order to write a potential LW post. Do we have data on the prevalence of "mental illnesses", and do we have a rough idea of the average IQ among LWers (or SSCers, since the community is adjacent)? I'm particularly interested in the prevalence of people with autism and/or schizoid disorders. Thank you very much. Sorry if I used offensive terms; I'm not a native speaker.
What happens if and when a slightly unaligned AGI crowds the forum with its own posts? I mean, how strong is our "are you human?" protection?
Thank you so much.
Does someone have a guesstimate of the ratio of lurkers to posters on LessWrong? With 'lurker' defined as someone who habitually reads content but never posts anything (or posts only clarification questions).
In other words, what is the size of the LessWrong community relative to the number of active contributors?
You could check out the LessWrong analytics dashboard: https://app.hex.tech/dac32525-33e6-44f9-bbcf-65a0ba40152a/app/9742e086-54ca-4dd9-86c9-25fc53f90f80/latest
In any given week there are around 40k unique logged-out users, around 4k unique logged-in users, and around 400 unique commenters (with roughly 1-2k comments). So the ratio of lurkers to commenters is about 100:1, though more like 20:1 if you compare regular visitors to people who comment.
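A back-of-envelope check of the quoted ratio, using only the weekly figures stated above (numbers as reported, not re-measured from the dashboard):

```python
# Back-of-envelope check of the ~100:1 lurker-to-commenter ratio,
# using the weekly figures quoted above.
weekly_logged_out_readers = 40_000  # unique logged-out users per week
weekly_commenters = 400             # unique commenters per week

ratio = weekly_logged_out_readers / weekly_commenters
print(f"lurkers per commenter: ~{ratio:.0f}:1")  # ~100:1
```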
Promising. Where can interested researchers discuss this and what does the question bank look like so far?