Homeostatic Bruce
Epistemic Status: Speculative. CW: trauma

Humans are Adaptation Executors, Not Fitness Maximizers. — some guy

0. Summary

Bruce is a weird guy, and I’m not the first to hypothesize that he serves an adaptive purpose. What I hope to add to the discussion is to distinguish between optimistic and pessimistic flavors of the adaptive explanation, and to introduce possible mechanisms for responding optimally to whichever mix of the ancestral flavors best corresponds to our modern gene pool.

I. Intro

For as long as I can remember, I’ve been astounded by the ability of humans to underperform their potential. While surely there is a real cognitive-processing “speed limit” for everyone — not everyone can pull a von Neumann, as it were — the vast majority of humans seem to be driving 15 in an 85 in terms of life success, however they define it.

My sense is that this claim is not so controversial, so I won’t dwell on it. Suffice it to say I recently read a popular and successful book, recommended by a teenage family member, whose author, judging from the anecdotes in the book itself, would likely score around 70 on an IQ test. While the book was no Gödel, Escher, Bach, it was not undeserving of its success; in any case, its author certainly was not. One could say that he was driving 45 in a 50 — and crushing the game of talents. There are people five to six standard deviations above this individual in intelligence who will massively underperform him, even according to their own personal value paradigms. I probably know a few at university. And while this case might be especially prototypical, I don’t believe it’s exceptional.

What’s more, it seems to me that the achievement gap increases with intelligence, to a degree exceeding what one would expect from naive economic adjustments. That is to say, yes, using simple Econ 101 principles we’d expect highly intelligent people to work somewhat less than less intelligent people in order to “buy” more leisure, but not nearly this much less.
Of course mind uploading would work hypothetically. The question is, how much of the mind must be uploaded? A directed graph and an update rule? Or an atomic-level simulation of the entire human body? The same principle applies to evolutionary algorithms, reinforcement learning (not the DL sort imo tho, it's a dead end), etc. I actually think it would be possible to get at least a decent lower bound on the complexity needed by each of these approaches. Do the AI safety people do anything like this? That would be a paper I'd like to read.
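To make the low-complexity end of that spectrum concrete, here is a minimal sketch, in Python, of a "mind" as nothing more than a directed graph plus an update rule. The class name, the random wiring, and the threshold update are illustrative assumptions of mine, not anyone's actual uploading proposal; the point is only how compactly such a description can be stated compared to an atomic-level simulation.

```python
import random


class GraphMind:
    """Illustrative toy: a directed graph of scalar node states with one threshold update rule."""

    def __init__(self, n_nodes: int, edge_prob: float = 0.1, seed: int = 0):
        rng = random.Random(seed)
        # One scalar state per node.
        self.state = [rng.random() for _ in range(n_nodes)]
        # edges[i] lists (source_node, weight) pairs feeding node i,
        # wired at random purely for illustration.
        self.edges = [
            [(j, rng.uniform(-1.0, 1.0)) for j in range(n_nodes)
             if j != i and rng.random() < edge_prob]
            for i in range(n_nodes)
        ]

    def step(self) -> None:
        """One synchronous update: each node thresholds the weighted sum of its inputs."""
        new_state = []
        for inputs in self.edges:
            total = sum(w * self.state[j] for j, w in inputs)
            new_state.append(1.0 if total > 0.0 else 0.0)
        self.state = new_state


if __name__ == "__main__":
    mind = GraphMind(n_nodes=100, edge_prob=0.05)
    for _ in range(10):
        mind.step()
    print(sum(mind.state), "nodes active after 10 steps")
```

Even scaled to something brain-like (roughly 10^11 nodes and 10^14 edges, if the usual neuron and synapse counts are in the right ballpark), this kind of description is many orders of magnitude smaller than an atomic-level simulation of a body with on the order of 10^27 atoms, and that gap is roughly what a lower-bound argument would need to pin down.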
I don't know whether to respond to the "Once you know how to do it, you've done it" bit. Should I claim that this is not the case in other fields? Or will AI be "different"? What is the standard under which this statement could be falsified?