Informally, it's the kind of intelligence (usually understood as something like "the capacity to achieve goals in a wide variety of environments") that is capable of doing whatever is instrumental to achieving a given goal. Given a goal, it is the capacity to achieve that goal, i.e. to do what is instrumental to achieving it.
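For what it's worth, that informal definition has a well-known formalisation (Legg and Hutter's "universal intelligence" measure), which makes the "achieving goals in a wide variety of environments" reading explicit. Roughly (a sketch from memory, not a quotation of their paper):

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is a set of computable reward-bearing environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward the agent (policy) $\pi$ obtains in $\mu$. Note that the measure takes each environment's reward/goal structure as given; it says nothing about how the goals themselves are chosen.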
Bostrom, in Superintelligence (2014), speaks of it as "means-end reasoning".
So, strictly speaking, it does not involve reasoning about the ends or goals in service of which the intelligence/optimisation is being pre...
I haven't read your full paper yet, but from your summary, it's unclear to me how such an understanding of intelligence would be inconsistent with the "Singularity" claim:
* Instrumental superintelligence seems to be feasible: a system that is better at achieving a given goal than the most intelligent human.
* Such a system can also self-modify to better achieve its goal, leading to an intelligence explosion.