All of Embee's Comments + Replies

Embee10

Promising. Where can interested researchers discuss this and what does the question bank look like so far?

2lunatic_at_large
The question bank doesn't exist yet because the language to ask the questions doesn't exist yet. I spent a few weeks after writing this post trying to familiarize myself with Lean as quickly as possible and I found out that people in the Lean community simply haven't formalized most of the objects I'd want to talk about (probabilistic networks, computational complexity, Nash equilibria, etc.). I tried to get a project off the ground formalizing these objects -- you can see the GitHub repository here and the planning document here. Unfortunately this project quickly ballooned beyond what I can handle alone -- I'm just an undergraduate student and winter break is over now. I still think it's insane that some kind of crash formalization program isn't currently underway. If you're interested in pursuing a project like this then I'd be happy to talk through where I left off and what the next steps could look like!
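As a rough illustration of the kind of formalization in question, here is a minimal Lean 4 sketch of one of the objects mentioned above (a Nash equilibrium), assuming Mathlib; the names `Game` and `IsNashEquilibrium` and their definitions are mine, not from any existing library:

```lean
import Mathlib

/-- A strategic-form game: each player's payoff depends on the full strategy profile.
(Sketch only; not a Mathlib definition.) -/
structure Game (Player Strategy : Type) where
  payoff : Player → (Player → Strategy) → ℝ

/-- A profile `σ` is a Nash equilibrium if no player can improve their payoff
by unilaterally deviating to some other strategy `s`. -/
def IsNashEquilibrium {Player Strategy : Type} [DecidableEq Player]
    (G : Game Player Strategy) (σ : Player → Strategy) : Prop :=
  ∀ (p : Player) (s : Strategy),
    G.payoff p (Function.update σ p s) ≤ G.payoff p σ
```

Mixed strategies, probabilistic networks, and complexity classes would need substantially more infrastructure than this, which is presumably part of why the project ballooned.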
Answer by Embee10

Bostrom's argument may be underappreciated. You might like Roman Yampolskiy's work if you're deeply interested in exploring the Simulation argument.

Embee10

Can you tell me your p(doom) and AGI timeline? Because I think we can theoretically settle this:

I give you $x now, and in y years you give me back $x times r.

Please tell me acceptable values of y and r for you (of course, in the sense of least-convenient-but-still-profitable).
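A minimal sketch of how such a bet might resolve, under my own simplifying assumption (not stated above) that the repayment only matters in worlds where doom has not occurred; the function name and numbers are made up for illustration:

```python
# Hypothetical illustration of the proposed doom bet.
def expected_value_to_receiver(x: float, r: float, p_doom: float) -> float:
    """Expected net value to the person who receives $x now and owes back $x*r
    in y years, assuming the repayment only matters if doom has not occurred."""
    return x - (1 - p_doom) * x * r

# A doomer with p_doom = 0.8 over the bet's horizon still comes out ahead at r = 3:
print(expected_value_to_receiver(x=100, r=3, p_doom=0.8))  # 40.0
```

A real version of such a bet would also have to account for the time value of money and counterparty risk, which this sketch ignores.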

Embee10

I think we can conceivably gather data on the combination of "anthropic shadow is real & alignment is hard".

Predictions would be:

  1. We will survive this.
  2. Conditional on us finding alien civilizations that reached the same technological level, most of them will have been wiped out by AI.

Prediction 2 is my guess as to why there is a Great Filter, more so than Grabby Aliens.

Embee30

That's good to know! Best of luck in your project

Embee10

Feels deep but I don't get it.

Would you mind elaborating?

Embee50

ANTHROPIC IMMORTALITY

Are other people here having the feeling of "we actually probably messed up AI alignment but I think we are going to survive for weird anthropic reasons"?

[Sorry if this is terrible formatting, sorry if this is bad etiquette]

I think the relevant idea here is the concept of anthropic immortality. It has been alluded to on LW more times than I can count and has even been discussed explicitly in this context: https://alignmentforum.org/posts/rH9sXupnoR8wSmRe9/ai-safety-via-luck-2

Eliezer wrote somewhat cryptic tweets referencing it rece... (read more)

3[anonymous]
The first link is from 2019. (Also those seem like standard EY tweets) Edit: although there is now also this recent one, from a few hours after your post https://x.com/ESYudkowsky/status/1880714995618767237
4avturchin
It actually not clear what EY means by "anthropic immortality". May be he means "Big Wold immortality", that is, the idea that in inflationary large universe has  infinitely many copies of Earth. From observational point of view it should not have much difference from quantum immortality. There are two different situations that can follow: 1. Future anthropic shadow. I am more likely to be in the world in which alignment is easy or AI decided not to kill us for some reasons 2. Quantum immortality. I am alone on Earth fill of aggressive robots and they fail to kill me.  We are working in a next version of my blog post "QI and AI doomers" and will transfrom it into as proper scientific article. 

You don't survive for anthropic reasons. Anthropic reasons explain the situations where you happen to survive by blind luck.

Embee10

To me, Feynman seems to fall squarely on the von Neumann side of the spectrum.

Embee30

Yes, they seem to represent two completely different types of extreme intelligence, which is very interesting. I also agree that vN's ideas are more relevant for the community.

Embee10

Yes. Grothendieck is undoubtedly less innovative and curious all across the board. 

But I should have mentioned they are not of the same generation. vN helps build the atom bomb while G grows up in a concentration camp.

vN came up during a scientific golden age. I'd argue it was probably harder to have the same impact on science in the 1960s.

I also model G as having disdain for applying mathematical ideas to "impure" subjects, maybe because of the Manhattan Project itself as well as the escalation of the Cold War.

This would be consistent with a whole ... (read more)

Embee25-1

Pet peeve: the AI community has defaulted to von Neumann as the ultimate smart human, and therefore the basis of all ASI/human intelligence comparisons, when the mathematician Alexander Grothendieck somehow exists.

Von Neumann arguably had the highest processor-type "horsepower" we know of, and his breadth of intellectual achievements is unparalleled.
But imo Grothendieck is a better comparison point for ASI, as his intelligence, while strangely similar to LLMs in some dimensions, arguably more closely resembles what an alien-like intelligence would be:
- ... (read more)

4MinusGix
I agree Grothendieck is fascinating, but I mostly just see him as interesting in different ways than von Neumann. von Neumann is often focused on because the areas he worked in are relevant to LessWrong's focuses, or (for the cloning posts) because the subjects he was skilled at and his polymath capabilities would help with alignment.
4Garrett Baker
I mean, one of them’s math built bombs and computers & directly influenced pretty much every part of applied math today, and the other one’s math built math. Not saying he wasn’t smart, but there’s no question bombs & computers are more flashy.
5Nathan Helm-Burger
Personally, I'd pick Feynman, but yeah, I agree that von Neumann seems an odd choice.
Embee100

Hi! I'm Embee but you can call me Max.

I'm a graduate student in mathematics for quantum physics, considering redirecting my focus toward AI alignment research. My background includes:
- Graduate-level mathematics
- Focus on quantum physics
- Programming experience with Python
- Interest in type theory and formal systems

I'm particularly drawn to MIRI-style approaches and interested in:
- Formal verification methods
- Decision theory implementation
- Logical induction
- Mathematical bounds on AI systems

My current program feels too theoretical and disconnected from urgen... (read more)

1Cole Wyeth
Check out my research program: https://www.lesswrong.com/s/sLqCreBi2EXNME57o Particularly the open problems post (once you know what AIXI is). For a balance between theory and implementation, I think Michael Cohen’s work on AIXI-like agents is promising. Also look into Alex Altair’s selection theorems, John Wentworth’s natural abstractions, Vanessa Kosoy’s infra-Bayesianism (and more generally the learning theoretic agenda, which I suppose I’m part of), and Abram Demski’s trust tiling. If you want to connect with alignment researchers you could attend the agent foundations conference at CMU; apply by tomorrow: https://www.lesswrong.com/posts/cuf4oMFHEQNKMXRvr/agent-foundations-2025-at-cmu
5eigenblake
If you're interested in mathematical bounds on AI systems and you haven't seen it already, check out https://arxiv.org/pdf/quant-ph/9908043 (Ultimate Physical Limits to Computation by Seth Lloyd) and related works. Online I've been jokingly saying "Intelligence has a speed of light." Well, we know intelligence involves computation, so there has to be some upper bound at some point. But until we define some notion of an Atomic Standard Reasoning Unit of Inferential Distance, we don't have a good way of talking about how much more efficient a computer like you or me is compared to Claude at natural language generation, for example.
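For a sense of scale, a back-of-the-envelope sketch (my own numbers, using the bound from the linked Lloyd paper, which caps the rate of logical operations at 2E/(πħ)):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def max_ops_per_second(mass_kg: float) -> float:
    """Lloyd's bound: a computer of energy E = m*c^2 can perform
    at most 2E / (pi * hbar) logical operations per second."""
    energy_joules = mass_kg * C ** 2
    return 2 * energy_joules / (math.pi * HBAR)

# Lloyd's 1 kg "ultimate laptop": roughly 5.4e50 operations per second.
print(f"{max_ops_per_second(1.0):.2e}")
```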
Embee10

> The best pathway towards becoming a member is to produce lots of great AI Alignment content, and to post it to LessWrong and participate in discussions there. The LessWrong/Alignment Forum admins monitor activity on both sites, and if someone consistently contributes to Alignment discussions on LessWrong that get promoted to the Alignment Forum, then it’s quite possible full membership will be offered.

Got it. Thanks.

Embee50

I've noticed that the karma system makes me gravitate towards posts of very high karma. Are there low-karma posts that impacted you? Maybe you think they are underrated or that they fail in interesting ways.

Embee20

I'm still bothering you with inquiries about user information. I would like to check this in order to write a potential LW post. Do we have data on the prevalence of "mental illnesses", and do we have a rough idea of the average IQ among LWers (or SSCers, since the community is adjacent)? I'm particularly interested in the prevalence of people with autism and/or schizoid disorders. Thank you very much. Sorry if I used offensive terms; I'm not a native speaker.

5Screwtape
I think the best Less Wrong Census for mental illness would be 2016, though 2012 did ask about autism. You're probably going to have better luck using the 2024 SSC/ACX survey data, as it's more recent and bigger. Have fun! 
4ChristianKl
If you search for "Less Wrong Census" you will find the existing surveys of the LessWrong readership. 
Embee51

What happens if and when a slightly unaligned AGI crowds the forum with its own posts? I mean, how strong is our "are you human?" protection?

Embee30

Thank you so much.

Embee102

Does someone have a guesstimate of the ratio of lurkers to posters on LessWrong? With 'lurker' defined as someone who has a habit of reading content but never posts (or posts only clarification questions).

In other words, what is the size of the LessWrong community relative to the number of active contributors?

habryka*312

You could check out the LessWrong analytics dashboard: https://app.hex.tech/dac32525-33e6-44f9-bbcf-65a0ba40152a/app/9742e086-54ca-4dd9-86c9-25fc53f90f80/latest 

In any given week there are around 40k unique logged-out users, around 4k unique logged-in users, and around 400 unique commenters (with about 1-2k comments). So the ratio of lurkers to commenters is about 100:1, though more like 20:1 if you compare people who visit more regularly to people who comment.

7Stephen McAleese
There's a rule of thumb called the "1% rule" on the internet that 1% of users contribute to a forum and 99% only read the forum.