Daniel Taylor

Comments

I suspect that if the average citizen understands confirmation bias, economics 101, the prisoner's dilemma, decision theory, coordinated action, the scientific method and the Pareto frontier... most of Moloch goes away or never arises in the first place.

You can still have adversarial equilibria, sure, but if everyone is smart, aware of hidden consequences, and properly understands the distinction between zero-sum and non-zero-sum games, you don't get many adversarial equilibria that destroy net value.
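A minimal sketch of that distinction, using the standard prisoner's dilemma payoffs (the specific numbers are illustrative, not from anything above): the individually rational equilibrium destroys net value relative to cooperation, which is exactly the Moloch pattern.

```python
# Classic prisoner's dilemma payoff table: (row player, column player).
# Any numbers with temptation > reward > punishment > sucker give the
# same structure; these particular values are just for illustration.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def joint_value(row, col):
    """Total payoff across both players - the net value Moloch eats."""
    a, b = PAYOFFS[(row, col)]
    return a + b

# Mutual defection is the equilibrium for naive agents, yet it is
# strictly worse in joint value than mutual cooperation:
print(joint_value("defect", "defect"))        # 2
print(joint_value("cooperate", "cooperate"))  # 6
```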

I think we're supposed to assume as part of the premise that dath ilan's materials science and neuroscience are good enough that they got preservation right - that is, they are justly confident that the neurology is preserved at the biochemical level. Since even we are reasonably certain that that's the necessary data, the big question is just whether we can ever successfully data-mine and model it.

It's hardly wilful ignorance; it's a deliberate rejection. A good decision theory, by nature, should produce results that don't actually depend on visible precommitments to achieve negotiation equilibrium, since an ideal negotiating agent ought to be able to accept postcommitment to anything it would predictably wish it had precommitted to. And if a decision theory doesn't allow you to hold out for fairness in the face of an uneven power dynamic, why even have one?
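One way to make "holding out for fairness" concrete - this is my own sketch of the standard ultimatum-game construction, not anything stated above: accept an unfair offer with just enough probability that the proposer can never expect to do better than the fair split. No visible precommitment is needed; the predictable disposition does all the work.

```python
def acceptance_probability(offer, fair_share=0.5):
    """Probability of accepting `offer` (your slice of a pie of 1) such
    that the proposer's expected take never exceeds the fair split.

    The proposer keeps (1 - offer) if you accept, so accepting with
    probability fair_share / (1 - offer) caps their expectation at
    fair_share. Exploitation is then never profitable in expectation.
    """
    if offer >= fair_share:
        return 1.0  # fair and generous offers are always accepted
    return fair_share / (1.0 - offer)

# A proposer offering you 0.2 keeps 0.8 if you accept, but you accept
# only 62.5% of the time, so their expectation is 0.8 * 0.625 = 0.5 -
# no better than just offering the fair split in the first place.
print(acceptance_probability(0.2))  # 0.625
```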

There's a difference between "technology that we don't know how to do but is fine in theory", "technology that we don't even know is possible in principle" and "technology that we believe isn't possible at all". Uploading humans is the first; we have a good theoretical model for how to do it, and we know physics allows objects with human-brain-level computing power.

Time travel is the last.

It's perfectly reasonable for a civilisation to estimate that problems of the first type will be solved without thereby becoming committed to believing in time travel. Being ignorant of a technology isn't the same as being ignorant of the limits of physics.

But this prediction market is exactly the one case where, if the Keepers are concerned about AGI existential risk, signalling to the market not to do this thing is much, much more important than preserving the secret. Preventing this thing is what you're preserving the secret for; if Civilization starts advancing computing too quickly, the Keepers have lost.

To deceive in a prediction market is to change the outcome - in this case, in the opposite of the way the Keepers want. The whole point of the Keepers having an utterly trustworthy reputation is so that they can make unexplained bids that strongly signal "you shouldn't do this, and also you shouldn't ask too hard why not" and have people believe them.
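A toy illustration of how a bid carries that signal, using Hanson's logarithmic market scoring rule (a standard automated market maker; the liquidity parameter and trade size here are made up): a large block purchase visibly moves the quoted probability, and traders who trust the buyer can update on the move alone.

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Quoted probability of YES under the logarithmic market scoring
    rule. b is the liquidity parameter: larger b means a given trade
    moves the price less."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

q_yes, q_no = 0.0, 0.0
print(f"before: P(yes) = {lmsr_price(q_yes, q_no):.2f}")  # 0.50

# An utterly trustworthy party buys a large block of NO shares:
q_no += 200.0
print(f"after:  P(yes) = {lmsr_price(q_yes, q_no):.2f}")  # ~0.12
```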

Given the sheer complexity of the human brain, it seems very unlikely that anyone could assess a 97% probability of revival without conditioning on AI of some strong form, if not a full AGI.
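A quick decomposition shows why - the conditional probabilities below are illustrative guesses of mine, not figures from the text. Writing P(revival) = P(revival | strong AI) * P(strong AI) + P(revival | no strong AI) * P(no strong AI), an overall 97% forces P(strong AI) to be very high unless revival is nearly certain even without strong AI.

```python
def required_p_ai(p_revival, p_given_ai, p_without_ai):
    """Solve p_revival = p_given_ai * p + p_without_ai * (1 - p)
    for p = P(strong AI arrives in time)."""
    return (p_revival - p_without_ai) / (p_given_ai - p_without_ai)

# Illustrative guesses: revival is near-certain given strong AI (0.99),
# and at most 20% likely without it. Then 97% overall requires:
print(required_p_ai(0.97, 0.99, 0.20))  # ~0.97
```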

In a sane society it would be a task that an average citizen understood and could take on if necessary.

No, for a similar reason: to be the sort of person who gives in to threats is to motivate threats against you.

You should only negotiate for the things that you predict are actually part of an agent's utility function, not things that you believe to be part of a hostile function adopted only to impose utility costs on you.
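The same point in expected-value terms, with toy numbers I've assumed purely for illustration: if you are known to refuse, issuing a threat that is costly to carry out has negative expectation for the threatener, so it never gets made.

```python
def threat_ev(p_target_yields, gain_if_yields, cost_to_carry_out):
    """Threatener's expected value of issuing a threat, assuming that
    if the target refuses, the threat must be carried out at a cost
    (otherwise the threatener's reputation for threats collapses)."""
    return (p_target_yields * gain_if_yields
            - (1 - p_target_yields) * cost_to_carry_out)

print(threat_ev(0.9, 10, 5))  # 8.5: threatening a known pushover pays
print(threat_ev(0.0, 10, 5))  # -5.0: threatening a known refuser never pays
```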