I'm trying to synthesise actual human values, not hypothetical other values that other beings might have.
To be clear, when you say "actual human values", do you mean anything different than just "the values of the humans alive today, in the year 2019"? You mention "other beings" - is this meant to include other humans in the past who might have held different values?
Perhaps "size of compiled program" would be one way to make a crude complexity estimate. But I definitely would like to be able to better define this metric.
In any case, I don't think the concept of software complexity is meaningless or especially nebulous. A program with a great many different bespoke modules, which all interact in myriad ways, and are in turn full of details and special-cases and so on, is complex. A program that's just a basic fundamental core algorithm with a bit of implementation detail is simple.
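To make the "size of compiled program" idea slightly more concrete, here's a minimal sketch (in Python, with hypothetical file names) of one crude proxy: the compressed size of a program's source or binary. Compressed size is only a rough stand-in for the kind of complexity I have in mind, but it at least penalises lots of non-redundant, bespoke detail more than it penalises a small core algorithm.

```python
import gzip
from pathlib import Path


def crude_complexity(path: str) -> int:
    """Gzip-compressed size (in bytes) of a file, as a rough complexity proxy.

    The intuition: a program that is mostly one core algorithm plus a bit of
    implementation detail compresses down to very little, while a program made
    of many bespoke, non-redundant modules and special cases does not.
    """
    data = Path(path).read_bytes()
    return len(gzip.compress(data))


# Hypothetical usage, comparing two codebases each bundled into a single file:
# crude_complexity("core_algorithm_bundle.py")
# crude_complexity("many_bespoke_modules_bundle.py")
```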
I do agree that th
That could represent one step in a general trend of subsuming many detailed systems into fewer simpler systems. Or, it could represent a technology being newly viable, and the simplest applications of it being explored first.
For the former to be the case, this simplification process would need to keep happening at higher and higher abstraction levels. We'd explore a few variations on an AI architecture, then get a new insight that eclipses all these variations, taking the part we were tweaking and turning it into just another parameter for the system.
Humans didn't evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn't undergo selection for any of those capacities at all: they just naturally fell out of a different set of capacities we were being selected for.
Yes, a model of brain modularity in which the modules are fully independent end-to-end mechanisms for doing tasks we never faced in the evolutionary environment is pretty clearly wrong. I don't think anyone would argue otherwise. The plausible version of the modularity claim is that the brain is composed of many modules shaped by tasks we did face in the evolutionary environment, with capacities like theoretical physics built out of and on top of those.
The main thing that would predict slower takeoff is if early AGI systems turn out to be extremely computationally expensive.
Surely that's only under the assumption that Eliezer's conception of AGI (simple general optimisation algorithm) is right, and Robin's (very many separate modules comprising a big intricate system) is wrong? Is it just that you think that assumption is pretty certain to be right? Or, are you saying that even under the Hansonian model of AI, we'd still get a FOOM anyway?
While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.
That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:
1. A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.
2. Long-run data showing AI systems gradually increasing in capability without any increase in complexity. Th
This seems to me a reasonable statement of the kind of evidence that would be most relevant.
I see, thank you. So then, would you say this doesn't (and isn't intended to) answer any question like "whose perspective should be taken into account?", but instead assumes some answer to that question has already been specified, and is meant to address what to do given that chosen perspective?