You don't have enough information to arrive at that level of certainty. He was not, for example, a general practitioner, and I was not a client of his. I was actually working with him in medical education at the time. Come to think of it, bizarrely enough and by pure happenstance, that does put the subject into the realm of his specialist knowledge.
I don't present that as a reason to be persuaded; I actually think not taking official status seriously, particularly medicine-related official status, is a good thing. It is just a reply to your presumption.
While I don't expect you to take my (or his) word for anything, I also wouldn't expect you to need to. This is exactly the finding I would expect based on general knowledge of human behavior. When people are constantly exposed to emotionally laden stimuli, they will tend to become desensitized to them. There are whole schools of cognitive therapy based on this fact. If someone has taken on the role of a torturer, then their emotional response to witnessing torture will be drastically altered: either it will undergo extinction, or the individual will be crippled with PTSD. This can be expected to apply even more when they fully identify with their role due to, for example, the hazing processes involved in joining military and paramilitary organisations.
Part of what seemed iffy was the claim that it was good for both the patients and the practitioner, when (from what you said) it was correlated with experience, with no mention of quality of care.
When someone says their source is "a doctor", what are the odds that it's a researcher specializing in that particular area? Especially when the information could as easily come from a fluffy popular report as from something clearly related to a specialty?
Also, I had a prior from Bernard Siegal, which is also intuitively plausible: that doctors who are emotionally numb around their patients are more likely to burn out. This was likely based on anecdote, but it's not a crazy hypothesis.
[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.
[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)
So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.
[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
The line of argument makes sense, if you accept the premises.
But, I don't.
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29, 2010. Thanks to XiXiDu for the pointer.