Comments

I think that language, together with our acquired ability to make quasi-permanent records of human utterances, is the biggest differentiator.

However, Whole Brain Emulation is likely to be much more resource-intensive than other approaches, and if so, it will probably be no more than a transitional form of AGI.

I think that the process that he describes is inevitable unless we do ourselves in through some other existential risk. Whether this will be for good or bad will largely depend on how we approach the issues of volition and motivation.

Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think the coding and debugging parts will not require AGI-level intelligence; however, deciding what to do definitely needs at least human-like capacity for most non-trivial problems.

The following are some attributes and capabilities which I believe are necessary for superintelligence. Depending on how these capabilities are realized, they can become anything from early warning signs of potential problems to red alerts. It is very unlikely that, on their own, they are sufficient.

  • A sense of self. This includes a recognition of the existence of others.
  • A sense of curiosity. The AI finds it attractive (in some sense) to investigate and try to understand the environment that it finds itself in.
  • A sense of motivation. The AI has attributes similar in some way to human aspirations.
  • A capability to (in some way) manipulate portions of its environment, including not only its own software but also objects and beings external to its own physical infrastructure.