Cost, Not Sacrifice
In a recent bonus episode of the Bayesian Conspiracy podcast, Eneasz Brodski shared a thought experiment that caused no small amount of anguish. In the hypothetical, an eccentric but trustworthy entity offers to give you an escalating amount of money for your fingers, starting at $10,000 for the first one and increasing 10x per finger, up to $10 trillion for all of them.[1] On encountering this thought experiment, Eneasz felt (not without justification) that he mostly valued his manual dexterity more than wealth. Then two acquaintances pointed out that one could use the $10 trillion to do a lot of good, and Eneasz proceeded to feel terrible about his decision.

I had several responses to this episode, but today I'm going to focus on one of them: the difference between cost and sacrifice.

How Ayn Rand Made Me a Better Altruist

But first, a personal anecdote. I was raised Catholic, and like the good Catholic boy that I was, I once viewed altruism through the lens of personal sacrifice. For the uninitiated, Catholic doctrine places a strong emphasis on this notion of sacrifice - an act of self-abnegation that places The Good firmly above one's own wants or needs. I felt obligated to help others because it was the Right Thing to Do, and I accepted that being a Good Person meant making personal sacrifices for the good of others, regardless of my own feelings. I divided my options into "selfish" and "selfless" categories, and felt guilty when choosing the former.

Even as I grew older and my faith in Catholicism began to wane, this sense of moral duty persisted. It was a source of considerable burden and struggle for me, made worse by the fact that the associated cultural baggage was so deeply ingrained as to be largely invisible to me. Then, in a fittingly kabbalistic manner, Atlas Shrugged flipped my world upside down.[2]

Ayn Rand, you see, did not believe in sacrifice. In her philosophy, the only real moral duty is the duty to oneself and one's own princip…
Thank you for attempting to spell this out more explicitly. If I understand correctly, you are saying singular learning theory suggests that AIs with different architectures will converge on a narrow range of similar functions that best approximate the training data.
With less confidence, I understand you to be claiming that this convergence implies that (in the context of the metaphor) a given [teal thing / dataset] may reliably produce a particular shape of [black thing / AI].
So (my nascent Zack model says) the summary is incorrect to analogize the black thing to "architectures" instead of "parametrizations" or "functions", and more importantly incorrect to claim that the black shape's many degrees of freedom…