Pandemic Prediction Checklist: H5N1
Pandemic Prediction Checklist: Monkeypox
Correlation may imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance, will make your unknowns shrink.
Replications show there's something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by what has already begun?
Or was it their coincidence that caused it to be done?
other than very narrow pathological cases.
I think the more common underlying issue here is that people are confused about the tax code. Tax codes are very confusing. Income/benefits cliffs do exist. People get confused about what is and isn't an income cliff based on what they heard from more or less equally tax-confused people.
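A cliff, as opposed to a gradual phase-out, means that crossing a threshold can actually lower total income. A minimal sketch of the idea, with entirely invented numbers (the benefit amount and eligibility cutoff below are hypothetical and do not reflect any real tax code or program):

```python
# Hypothetical benefits cliff: a fixed benefit that vanishes entirely
# once gross income reaches an eligibility cutoff. All numbers invented.
def net_income(gross: float) -> float:
    BENEFIT = 5_000   # hypothetical annual benefit
    CLIFF = 30_000    # hypothetical eligibility cutoff
    benefit = BENEFIT if gross < CLIFF else 0
    return gross + benefit

# Earning one more dollar across the cliff *reduces* total income:
print(net_income(29_999))  # 34999
print(net_income(30_000))  # 30000
```

Real phase-outs are usually gradual (benefits taper as income rises), which is exactly why people who half-remember a true cliff often misapply the idea elsewhere.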
It was kind to point out to some of them what was believed to be a true argument for why that was not the case here.
I don't see evidence in the post comments that it was received that way, though it's possible that those who read it as true, helpful, and kind didn't respond, or responded elsewhere.
Eliezer was perfectly open to evidence that he was mistaken.
I don't think he's a schemer or engaging in some kind of systematic project to silence dissent.
I get why you read it as "kind." But I have an alternative thesis:
If you're interested, I can expand on this.
Edit: Clarifying changes, especially to emphasize that I interpret the essay as containing motivated reasoning and self-interested spin, not that Eliezer is lying.
No collective entity is a monolith.
If it wasn't obvious, I meant the term "ally" not in the sense of a formally codified relationship, but to point out the uniquely high level of affinity, overlap, and shared concerns between the AI safety movement and EA.
There is a reason I said "ally," rather than literally identifying EA as part of the AI safety movement or vice versa.
Yeah, that's the problem. EA is the most obvious community clearly invested and interested in the kind of AI safety issues Eliezer focuses on. There's huge overlap between the AI safety and EA movements. To fail to recognize that, and to carve time out of his day to compose naked, petty invective against EA over his disagreements, seems quite unpromising to me.
Institutional support, funding, positive and persistent community interest, dialogue, and professional participation. Examples:
Take the above as my beliefs and understanding based on years of interaction, but no systematic up-to-date investigation.
You don't think EA is an ally of the AI safety movement?
I think it’s important to distinguish irritation from insult. The internet is a stressful place to communicate. Being snappish and irritable is normal. And many people insult specific groups they disagree with, at least occasionally.
What sets Eliezer apart from Gwern, Scott Alexander, and Zvi is that he insults his allies.
That is not a recipe for political success. I think it makes sense to question whether he’s well suited to the role of public communicator about AI safety issues, given this unusual personality trait of his.
It's not obvious. What makes physical violence bad is that it's violence, and the badness of violence is in principle separate from the mechanism by which it's delivered. What's worse:
The answer really is not obvious to me.
When we're introducing a new term because a new technology has created a new phenomenon, we need to be careful about how we map old terms onto that phenomenon.
How important is it that the cell and nucleus remain intact for your application? Can other chromosomes be genetically engineered? What will happen to the chromosome once identified? Do you need to be able to identify chromosomes during M phase, or is interphase OK? How many chromosomes do you need to identify and extract?