I sympathize somewhat with this complexity point but I'm worried that training will be extremely non-Bayesian in a way that makes complexity arguments not really work. So I feel like the point about optimization power at best cuts the worry about hyperstition by about a factor of 2. Perhaps there should be research on how "sticky" the biases from early in training can be in the face of later optimization pressure.
In general, computing a boolean expression with terms without the signal being drowned out by the noise will require if the noise is correlated, and if the noise is uncorrelated.
Shouldn't the second one be ?
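For reference, here is the standard back-of-the-envelope scaling behind this kind of claim, in my own notation (each of the $k$ terms carries an $O(1)$ signal plus noise of magnitude $\epsilon$); this is a sketch of the usual argument, not a quote from the post:

```latex
% Summing k noisy terms:
%   correlated noise adds linearly:        total noise \approx k\,\epsilon
%   uncorrelated noise adds in quadrature: total noise \approx \sqrt{k}\,\epsilon
% Keeping the O(1) signal above the noise therefore requires
\[
\epsilon = O\!\left(\frac{1}{k}\right) \ \text{(correlated)},
\qquad
\epsilon = O\!\left(\frac{1}{\sqrt{k}}\right) \ \text{(uncorrelated)}.
\]
```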
the past token can use information that is not accessible to the token generating the key (as it is in its “future” – this is captured e.g. by the attention mask)
Is this meant to say "last token" instead of "past token"?
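For concreteness, here is the asymmetry the quoted sentence is pointing at, in a toy causal-attention sketch (my own illustration in NumPy; the shapes and names are made up): a query at a later position can read an earlier key, but that key was produced before the later tokens existed, so it cannot encode anything about them.

```python
import numpy as np

# Toy causal self-attention over T positions (illustrative only).
rng = np.random.default_rng(0)
T, d = 5, 8
Q = rng.normal(size=(T, d))  # queries, one per position
K = rng.normal(size=(T, d))  # keys, one per position

scores = Q @ K.T / np.sqrt(d)                # (T, T) attention logits
causal = np.tril(np.ones((T, T), dtype=bool))
scores = np.where(causal, scores, -np.inf)   # position i attends only to j <= i

# The mask encodes the causal structure: in a real transformer the key at
# position j is computed from tokens 0..j only, so even though a later query
# (i > j) can attend to it, the key carries no information about tokens
# j+1..i, which lie in the key's "future".
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(np.round(weights, 2))
```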
I made a few edits to this post today, mostly in response to feedback from Ryan and Richard:
In the fictional dialogue, Claude Shannon's first answer is more correct -- info theory is useful far outside the original domain of application, and its elegance is the best way to predict that.
Slightly more spelled-out thoughts about bounded minds:
I suspect there is some merit to the Scientist's intuition (and the idea that constant returns are more "empirical") which nobody has managed to explain well. I'll try to explain it here.[1]
The Epistemologist's notion of simplicity is about short programs with unbounded runtime which perfectly explain all evidence. The [non-straw] empiricist notion of simplicity is about short programs with heavily-bounded runtime which approximately explain a subset of the evidence. The Epistemologist is right that there is nothing of value in the empiricist's notion if you are an unbounded Solomonoff inductor. But for a bounded mind, two important facts come into play:
Therefore a bounded mind will sometimes get more evidence from "fast-program induction on local data" (i.e. just extrapolate without a gears-level model) than from highly conjunctive arguments about gears-level models.
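One way to make the contrast concrete, using standard definitions from algorithmic information theory (my gloss, not the dialogue's): the Epistemologist's notion is roughly Kolmogorov complexity, which ignores runtime, while the empiricist's notion looks more like a runtime-penalized variant such as Levin's Kt.

```latex
% U is a universal machine, |p| the length of program p, t_U(p) its runtime.
% Epistemologist-style simplicity: shortest program, runtime unbounded.
\[
K(x)  = \min_{p} \{\, |p| : U(p) = x \,\}
\]
% Bounded-mind simplicity: length plus log-runtime, so short-but-slow
% gears-level programs are penalized relative to fast local extrapolations.
\[
Kt(x) = \min_{p} \{\, |p| + \log t_U(p) : U(p) = x \,\}
\]
```

On that reading, "fast-program induction on local data" is what wins exactly when the $\log t_U(p)$ term dominates the comparison.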
FWIW, I agree with the leading bit of Eliezer's position -- that we should think about the object-level and not be dismissive of arguments and concretely imagined gears-level models.
I'd be capable of helping aliens optimize their world, sure. I wouldn't be motivated to, but I'd be capable.
@So8res How many bits of complexity is the simplest modification to your brain that would make you in fact help them? (asking for an order-of-magnitude wild guess)
(This could be by actually changing your values-upon-reflection, or by locally confusing you about what's in your interest, or by any other means.)
Sigmoid is usually what "straight line" should mean for a quantity bounded at 0 and 1. It's a straight line in logit-space, the most natural space which complies with that range restriction.
(Just as exponentials are often the correct form of "straight line" for things that are required to be positive but have no ceiling in sight.)
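Spelled out (my notation, nothing assumed beyond the standard logit transform):

```latex
% A straight line in logit-space ...
\[
\operatorname{logit}(p) = \log\frac{p}{1-p} = a + bt
\]
% ... is exactly a sigmoid back in the bounded [0, 1] space:
\[
p(t) = \sigma(a + bt) = \frac{1}{1 + e^{-(a + bt)}}
\]
% Analogously, a straight line in log-space, \log y = a + bt, is the
% exponential y(t) = e^{a + bt} for quantities constrained to be positive.
```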
Do you want to try playing this game together sometime?
Wouldn't it be better to have a real nighttime view of North America? I also found it jarring...