zrezzed

Which do you agree would be better? I’m assuming the latter, but correct me if I’m wrong.
I haven’t thought this through, but a potential argument against: 1) agreement/alignment on what the heavy-tailed problems are and their relative weights is a necessary condition for the latter to be the better strategy; 2) neither this community nor broader society has that; thus 3) we should still focus on correctness overall.
That does reflect my own thinking about these things.
And idk, maybe I am kind of convinced? Like, consistency checks are a really powerful tool, and if I imagine a young person being like "I will just throw myself into intellectual exploration and go deep wherever I feel like without trying to orient too much to what is going on at large, but I will make sure to do this in two uncorrelated ways", then I do notice I feel a lot less stressed about the outcome.
I worry this gives up too much. Being embedded in multiple communities/cultures with differing or even conflicting values and worldviews is exceedingly common. Noticing that, and explicitly playing with the idea of “holding multiple…
One other thing to mention is that the speed of sound of the exhaust matters quite a lot. Given the same nozzle area ratio and the same gamma in the gas, the exhaust Mach number is constant; a higher speed of sound thus yields a higher exhaust velocity.
My understanding is that this effect is a re-framing of what I described: for a similar temperature and gamma, a lower molecular weight (or specific heat) will result in a higher speed of sound (or exit velocity).
However, I feel like this framing fails to provide a good intuition for the underlying mechanism (at the limit, anyway), so it's harder (for me at least) to understand how gas properties relate to sonic properties. Yes, holding other things constant, a lower molecular weight increases the speed of sound. But crucially, it also means there's more kinetic energy to be extracted to start with (see the sketch below).
Is that not right?
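A quick numeric sketch of the standard isentropic-nozzle relations, to check this scaling. The specific numbers (gamma = 1.2, a 3500 K chamber, a 70:1 chamber-to-exit pressure ratio) are assumed round values for illustration, not figures from the thread:

```python
# Check the claim: exit Mach is fixed by area ratio and gamma, so speed
# of sound scales exit velocity. Gamma, chamber temperature, and the
# pressure ratio below are assumed round numbers, not from the thread.
import math

R_U = 8.314  # universal gas constant, J/(mol*K)

def speed_of_sound(gamma: float, molar_mass: float, temp_k: float) -> float:
    """a = sqrt(gamma * R * T / M), with molar_mass in kg/mol."""
    return math.sqrt(gamma * R_U * temp_k / molar_mass)

def ideal_exhaust_velocity(gamma: float, molar_mass: float,
                           chamber_temp_k: float,
                           p_exit_over_p_chamber: float) -> float:
    """Ideal exhaust velocity for isentropic expansion through a nozzle."""
    term = 1.0 - p_exit_over_p_chamber ** ((gamma - 1.0) / gamma)
    return math.sqrt(2.0 * gamma / (gamma - 1.0)
                     * R_U * chamber_temp_k / molar_mass * term)

# Hold gamma, chamber temperature, and pressure ratio fixed; vary molar mass.
for m in (0.010, 0.018):  # kg/mol: hydrogen-rich exhaust vs. pure steam
    a = speed_of_sound(1.2, m, 3500.0)
    v_e = ideal_exhaust_velocity(1.2, m, 3500.0, 1.0 / 70.0)
    print(f"M = {m * 1000:4.0f} g/mol   a = {a:5.0f} m/s   v_e = {v_e:5.0f} m/s")
```

Both quantities scale as sqrt(T/M): holding temperature, gamma, and the expansion ratio fixed, cutting the molecular weight raises the speed of sound and the exit velocity by the same factor, matching the re-framing above.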
Fun nerd snipe! I gave it a quick go and was mostly able to deconfuse myself, though I'm still unsure of the specifics. I would still love to hear an expert take.
First, what exactly is the confusion?
For an LOX/LH2 rocket, the most energy-efficient fuel ratio is stoichiometric, at 8:1 by mass. However, real rockets apparently use ratios with an excess of hydrogen to boost performance [1] -- somewhere around 4:1[2] seems to provide the best overall performance. This is confusing, as my intuition is telling me: for the same mass of propellant, a non-stoichiometric fuel ratio is less energetic. Less energy being put into a gas with more moles should mean lower-enough temperatures that the…
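To ground the 8:1 figure and the effect of running hydrogen-rich, here is a toy composition model (my own simplification, not the commenter's: all O2 is assumed to burn to H2O, and the excess H2 simply remains in the exhaust):

```python
# Toy fuel-rich LOX/LH2 composition model (my simplification: all O2
# burns via 2 H2 + O2 -> 2 H2O; leftover H2 stays as diluent).
M_H2, M_O2, M_H2O = 2.016e-3, 31.999e-3, 18.015e-3  # kg/mol

def mean_exhaust_molar_mass(of_ratio: float) -> float:
    """Mean exhaust molar mass for an O/F mass ratio at or below ~7.94:1
    (stoichiometric with real atomic masses; '8:1' rounds H and O)."""
    m_fuel, m_ox = 1.0, of_ratio    # kg of H2 and kg of O2
    n_h2 = m_fuel / M_H2            # mol H2 supplied
    n_o2 = m_ox / M_O2              # mol O2 supplied (fully consumed)
    n_h2o = 2.0 * n_o2              # water produced
    n_h2_left = n_h2 - n_h2o        # unburned hydrogen
    return (m_fuel + m_ox) / (n_h2o + n_h2_left)

for r in (4.0, 5.0, 6.0, 7.9):
    print(f"O/F {r}:1 -> mean exhaust M ~ {mean_exhaust_molar_mass(r) * 1e3:.1f} g/mol")
```

At 4:1 the mean exhaust molecular weight comes out near 10 g/mol versus roughly 18 g/mol for pure steam, so by the speed-of-sound framing above the same temperature buys a substantially higher exit velocity; whether that outweighs the lower combustion energy per unit mass is exactly the trade-off the excerpt is puzzling over.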
I think this is a great goal, and I’m looking forward to what you put together!
This may be a bit different from the sort of thing you’re asking about, but I’d love to see more development/thought around topics related to https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline .
Rationality is certainly a skill, and one that better/more concise exposition on rationality itself can help people develop. But once you learn to think right, what are some of the most salient object-level ideas that come next? How do we better realize values in the real world, and make use of / propagate these better ways of thinking? Why is this so hard, and what are strategies to make it easier?
SSC/ACX is a great example of better exploring object-level ideas, and I’d love to see more of that type of work pulled back into the community.
What could a million perfectly-coordinated, tireless copies of a pretty smart, broadly skilled person running at 100x speed do in a couple years?
I think this feels like the right analogy to consider.
And in considering this thought experiment, I'm not sure trying to solve alignment is the only/best way to reduce risks. This hypothetical seems open to reducing risk by 1) better understanding how to detect these actors operating at large scale, and 2) researching resilient plug-pulling strategies.
Moreover, even if these things don't work that way and we get a slow takeoff, that doesn't necessarily save humanity. It just means that it will take a little longer for AI to be the dominant form of intelligence on the planet. That still sets a deadline to adequately solve alignment.
If a slow takeoff is all that's possible, doesn't that open up other options for saving humanity besides solving alignment?
I imagine far more humans will agree p(doom) is high if they see AI isn't aligned and it's growing to be the dominant form of intelligence that holds power. In a slow takeoff, people should be able to realize this is happening, and effect non-alignment-based solutions (like bombing compute infrastructure).
a superintelligence will be at least several orders of magnitude more persuasive than character.ai or Stuart Armstrong.
Believing this seems central to believing in a high P(doom).
But I think it's not a coherent enough concept to justify believing it. Yes, some people are far more persuasive than others. But how can you extrapolate that far beyond the distribution we observe in humans? I do think AI will prove to be better than humans at this, and likely much better.
But "much" better isn't the same as "better enough to be effectively treated as magic".
This isn't where the community is supposed to have ended up. If rationality is systematized winning, then the community has failed to be rational.
Great post, and timely for me personally. I found myself having similar thoughts recently, and this was a large part of why I decided to start engaging with the community more (so apologies for coming on strong in my first comment, while likely lacking good norms).
Some questions I'm trying to answer, and this post certainly helps a bit:
I’m fairly sure you have, in fact, made the same mistake you have pointed out! Most people… have exactly no idea what a computer is. They do not understand what software is, or that it is something an engineer implements. They do not understand the idea of a “predictable” computer program.
I find it deeply fascinating that most of the comments here are providing pushback in the other direction :)