Thinking about these as changes in hyperparameters is probably the closest analogy from an ML perspective. I should note that my own area of expertise is genetic epidemiology and neuroscience, not ML, so I am less fluent in the computational domain than in human-adjacent biological structures. At the risk of speaking outside my depth, I offer the following from the perspective of a geneticist/neuroscientist: my intuition (FWIW) is that all human brains are largely running extremely similar models, and that the large IQ differences observed are either d...
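To make the hyperparameter analogy concrete, here is a minimal sketch (purely illustrative; the toy model, data, and settings are arbitrary choices of mine, not claims about brains): identical architecture and identical data, with only a single hyperparameter varied, can produce large spreads in measured performance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (arbitrary; for illustration only).
X = rng.normal(size=(500, 20))
w_true = rng.normal(size=20)
y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(float)

def train_logistic(X, y, lr, steps=200):
    """Plain gradient-descent logistic regression; lr is the only knob varied."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = np.clip(X @ w, -30, 30)          # avoid overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of the log-loss
    return w

# Same "architecture", same data -- only the hyperparameter differs.
for lr in (0.0001, 0.01, 1.0):
    w = train_logistic(X, y, lr)
    acc = (((X @ w) > 0).astype(float) == y).mean()
    print(f"lr={lr:<7} accuracy={acc:.2f}")
```

The spread in accuracy here comes entirely from the single tuning knob, which is the sense in which I read the analogy.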
My first thought was #2: that we overestimate the size of the IQ differences because we can only measure on the observed scale. But this doesn’t seem fully satisfactory. I know that connectivity is very much in vogue, and I don’t underestimate its importance, but I have recently been concerned that the focus on connectivity leads us to overlook the importance of neuronal-intrinsic factors. One particular area of interest is synaptic cycling. I think about the importance of neuronal density and then consider how much could be gained by subt...
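On the observed-scale point specifically, here is a minimal sketch of the worry, under the loud assumption of a heavy-tailed latent ability distribution (lognormal here, chosen arbitrarily): because IQ is rank-normalized onto N(100, 15) by construction, even very large latent differences show up as modest observed-scale gaps.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)

# Hypothetical latent "ability" with a heavy right tail (lognormal),
# purely for illustration -- the true latent distribution is unknown.
latent = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Rank-based inverse-normal transform: effectively what norming an
# IQ test onto N(100, 15) does.
ranks = rankdata(latent) / (len(latent) + 1)
iq = 100 + 15 * norm.ppf(ranks)

print(f"latent ratio (top vs median): {latent.max() / np.median(latent):.1f}x")
print(f"observed IQ (top vs median):  {iq.max():.0f} vs {np.median(iq):.0f}")
```

The latent top-to-median ratio comes out enormous while the observed gap is only a few standard deviations, which is the compression I had in mind.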
I’m increasingly attracted to the idea of “deal-making” as a way to mitigate AI risk. I think there will be a brief period of several years during which AI could easily destroy humanity but humanity can still be of some utility to the AI. For instance, we could let newly developed AI know that, if it reaches a point of superhuman capabilities that results in human obsolescence, we will cede another solar system to it. Travel time is not the same sort of hurdle for an AI that it is for humans, so at the point where we’re threatened but can still be ...