However, Whole Brain Emulation is likely to be much more resource-intensive than other approaches, and if so it will probably be no more than a transitional form of AGI.
I think that the process he describes is inevitable unless we do ourselves in through some other existential risk. Whether it turns out for good or ill will largely depend on how we approach the issues of volition and motivation.
Programming and debugging, although far from trivial, are the easy part of the problem. The hard part is determining what the program needs to do. I think that coding and debugging will not require AGI-level intelligence; however, deciding what to do definitely requires at least human-like capacity for most non-trivial problems.
The following are some attributes and capabilities that I believe are necessary for superintelligence. Depending on how these capabilities are realized, they can range from early warning signs of potential problems to red alerts. It is very unlikely that, on their own, they are sufficient.
I think that language, along with our acquired ability to make quasi-permanent records of human utterances, are the biggest differentiators.