Holden presumably thinks that many academic AGI approaches are too risky since they are agent designs:
He's an example of selection bias among critics: we wouldn't have heard a detailed critique from him if he hadn't taken the subject seriously enough in the first place.
Nick Szabo thinks working on mind uploading is a waste of time.
You don't work on mind uploading today; you work on neurology, which solves a lot of practical problems, including treatments for disorders, and which may or may not eventually lead to uploading. I am rather sceptical that future mind uploading is a significant contributor to the utility of such work.
I personally promoted intelligence amplification and argued that working on security is of little utility.
I do think it is of little utility, because I do not believe in some over-the-internet foom. But if such a foom is taken as given, then security can stop it (or rather, work on the tools that would allow provably unhackable software can). Ultimately the topic is entirely speculative, and you can only make arguments by adopting some of the assumptions. With regard to 'provably friendly AGI', once again the important bit is 'provably': that requires techniques and tools that are useful across the board whatever comes in the future (by improving our degree of reliable understanding and control over our creations of any kind), while the 'friendly' is something you can't even work on without knowing how the 'provably' is going to be accomplished.
David Dalrymple criticized FAI and is working directly on mind uploading today, so apparently he disagrees with both you and Nick Szabo.
Nick Szabo explicitly suggested working on computer security, so he seems to disagree with you about its utility. I disagree with both of you about whether provably unhackable software is feasible.
Do you think I've satisfied your request for examples of substantive disagreements? (I'd rather not go into object-level arguments since that's not what this post is about.)
Related Posts: A cynical explanation for why rationalists worry about FAI, A belief propagation graph
Lately I've been pondering the fact that while there are many critics of SIAI and its plan to form a team to build FAI, few of those critics seem to agree on what SIAI, or the rest of us, should do instead. Here are some of the alternative suggestions offered so far: