Politically, people who fear AI might go after companies like Google.
But if the public at large started really worrying about uFAI, that's kind of the goal here.
I don't think the public at large is the target audience. The important thing is that the people who could potentially build an AGI understand that they are not smart enough to contain it.
If you have a lot of people making bad arguments for why uFAI is a danger, smart MIT people might just say, "Hey, those people are wrong; I'm smart enough to program an AGI that does what I want."
Take a topic like genetic engineering. There are valid dangers involved in genetic engineering. On the other hand, the people who think that all genetically modified food is poisonous are wrong. As a result, a lot of self-professed skeptics and atheists see it as their duty to defend genetic engineering.
Overexposure of an idea can be harmful as well. Look at how Kurzweil promoted his idea of the singularity. While many of the ideas (such as intelligence explosion) are solid, people largely don't take Kurzweil seriously anymore.
It would be useful to debate why Kurzweil isn't taken seriously anymore. Is it because of the fraction of wrong predictions? Or is it simply because of the way he's presented them? Answering these questions would help us avoid ending up where Kurzweil has.
While not doubting the accuracy of the assertion: why precisely do you believe Kurzweil isn't taken seriously anymore, and in what specific ways is this bad for him, his goals, or his effect on society?