rhollerith_dot_com comments on Preface to a Proposal for a New Mode of Inquiry - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (83)
For rational people skeptical about hard takeoff, consider the Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures. Most economists I've talked to are also quite skeptical, much more so than I. Dismissing such folks because they haven't read enough of your writings or attended your events seems a bit biased to me.
Hi Robin!
If a professional philosopher or an economist gives his probability that AGI researchers will destroy the world, I think a curious inquirer should check for evidence that the philosopher or economist has actually learned the basics of the skills and domains of knowledge the AGI researchers are likely to use.
I am pretty sure that you have, but I do not know that, e.g., Daniel Dennett has, excellent rationalist though he is. All I was saying is that my interlocutor should check that before deciding how much weight to give Dennett's probability.
But in the above you explicitly choose to exclude AGI researchers. Now you also want to exclude those who haven't read a lot about AGI? Seems like you are trying to exclude as irrelevant everyone who isn't an AGI amateur like you.
I guess it depends on where exactly you set the threshold. Require too much knowledge and the pool of opinions, and the diversity of the sources of those opinions, will be too small (i.e., just "AGI amateurs"). On the other hand, the minimum amount of research required to properly understand the AGI issue is substantial, and if someone demonstrates a serious lack of understanding, such as claiming that AI will never be able to do something that narrow AIs can do already, then I have no problem excluding their opinion.