US population in 1950 was about half what it is now, and population density in cities has probably increased even more. The counterfactual with a 0% vaccination rate today is surely much worse than 400-500 deaths per year.
Wouldn't this be analogous to an LLM with a very tiny context window in addition to frozen weights?
This confuses me. Are you saying the CDT agent does not have "the ability to alter outcomes of future interactions"?
Yes, but EY's statement implies that all (1, 2, 3) must be true for reciprocity to be strategic. There are iterated contexts where 1 and/or 2 do not hold (for example, a CDT agent playing iterated prisoner's dilemma against a simple tit-for-tat bot).
Eliezer Yudkowsky: Reciprocity in humans is an executing adaptation. It is not strategically convergent for all minds toward all other minds. It’s strategic only
- By LDT agents
- Toward sufficiently strong LDT-agent-predictors
- With negotiating power.
I assume this is referring to a one-shot context? Reciprocity seems plenty strategic for other sorts of agents/counterparties in an iterated context.
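To make the iterated-context point concrete, here is a minimal sketch (my own illustration, with the standard prisoner's dilemma payoffs T=5 > R=3 > P=1 > S=0 assumed): against a tit-for-tat bot, even a purely causal defector does worse over repeated rounds than an agent that reciprocates cooperation.

```python
# Payoff to the row player: T=5 (defect vs cooperate), R=3 (mutual
# cooperation), P=1 (mutual defection), S=0 (cooperate vs defect).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_vs_tit_for_tat(strategy, rounds=10):
    """Play `strategy` against tit-for-tat; return the strategy's total payoff."""
    total, tft_move = 0, "C"  # tit-for-tat opens by cooperating
    for _ in range(rounds):
        move = strategy(tft_move)
        total += PAYOFF[(move, tft_move)]
        tft_move = move  # tit-for-tat copies our previous move
    return total

always_defect = lambda last: "D"
always_cooperate = lambda last: "C"

print(play_vs_tit_for_tat(always_defect))     # 5 + 9*1 = 14
print(play_vs_tit_for_tat(always_cooperate))  # 10*3   = 30
```

Cooperating earns 30 to the defector's 14 over ten rounds, so a plain CDT agent that simply chases causal payoffs has reason to reciprocate here, with no LDT machinery required.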
Perhaps for much of the planet's lifetime, the earth was a graveyard of pristine corpses, forests of bodies, oceans of carcasses, a world littered with the indigestible dead.
Why wouldn't corpses have been claimed by macroscopic scavengers?
I agree. Either "worthwhile" or "valuable" would be better here.
The ingredients do not contain real wasabi root.
Is the "wasabi" listed in the ingredients (8th ingredient) a different part/extract of real wasabi?
To guarantee objectivity, I turned to Grok and asked whether Elon Musk has ever done this. The answer is yes.
Independent of whether I agree with this, I would like to point out that it is perfectly consistent and reasonable to both want [X] to be banned and also keep doing [X] yourself unless and until it is actually banned.
Can't they just check whether they enjoy smoking?