Agent foundations, AI macrostrategy, human enhancement.
I endorse and operate by Crocker's rules.
I have not signed any agreements whose existence I cannot mention.
This Google search is empty (and it's also empty on the original Arbital page, so it's not a porting issue).
LUCA lived around 4 billion years ago with some chirality chosen at random.
Not necessarily: https://en.wikipedia.org/wiki/Homochirality#Deterministic_theories
E.g.
Deterministic mechanisms for the production of non-racemic mixtures from racemic starting materials include asymmetric physical laws, such as the electroweak interaction (via cosmic rays), and asymmetric environments, such as those caused by circularly polarized light, quartz crystals, the Earth's rotation, β-radiolysis, or the magnetochiral effect. The most accepted universal deterministic theory is the electroweak interaction. Once established, chirality would be selected for.
Especially given how concentrated-sparse it is.
It would be much better to have it as a google sheet.
My model is that
Making slogans more ~precise might help with (2) and (3)
Some people misinterpret/mispaint them(/us?) as "luddites" or "decels" or "anti-AI-in-general" or "anti-progress".
Is it their(/our?) biggest problem, one of their(/our?) bottlenecks? Most likely no.
It might still make sense to make marginal changes that make it marginally harder to do that kind of mispainting / reduce misinterpretative degrees of freedom.
You can still include it in your protest banner portfolio to decrease the fraction of people whose first impression is "these people are against AI in general" etc.
This closely parallels the situation with the immune system.
One might think "I want a strong immune system. I want to be able to fight every dangerous pathogen I might encounter."
You go to your local friendly genie and ask for a strong immune system.
The genie fulfills your wish. No more seasonal flu. You don't need to bother with vaccines. You even consider giving up washing your hands, but then you realize that other people are still not immune to whatever bugs there might be on your skin.
Then, a few weeks in, you go into anaphylactic shock while eating your favorite peanut butter sandwich. An ambulance takes you to the hospital, where they also tell you that you have Hashimoto's disease.
You go to your genie to ask, "WTF?", and the genie replies, "You asked for a strong immune system, not a smart one. It was not my task to ensure that it knows that peanut protein is not the protein of some obscure worm, even though they might look alike, or that the thyroid is a part of your own body."
So... there surely are things like (overlapping, likely non-exhaustive):
So, as usual, the law of equal and opposite advice applies.
Still, the thing Jan describes is real and often a big problem.
I also think I somewhat disagree with this:
Meanings are often subtle, intuited but not fully grasped, in which case a (premature) attempt to explicitize them risks collapsing their reference to the important thing they are pointing at. Many important concepts are not precisely defined. Many are best sorta-defined ostensively: "examples of X include A, B, C, D, and E; I'm not sure what makes all of them instances of X, maybe it's that they share the properties Y and Z ... or at least my best guess is that Y and Z are important parts of X, and I'm pretty sure that X is a Thing™".
Eliezer has a post (which I couldn't find at the moment) where he notices that the probabilities he gave were inconsistent. He asks something like, "Would I really not behave as if God existed if I believed that P(Christianity) = 1e-5?" and then concludes, "Oh well, too bad, but I don't know which way to fix it, and fixing it either way risks losing important information, so I'm deciding to live with this inconsistency for now."