Discovery Institute Fellow Erik J Larson
He has held the title of Chief Scientist at an AI startup whose first customer was Dell (Dell Legal), served as Senior Research Engineer at the AI company 21st Century Technologies in Austin, worked as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other Austin companies on designing AI systems that solve problems in natural language understanding.
Larson has been writing plenty of material critical of AI risk discussion lately; apparently even The Atlantic is keen to hear the creationist viewpoint.
Perhaps more accurate: because that is a likely side effect of the most effective way (etc.).
Not a side effect. The most effective way is to consume the entire cosmic commons, just in case all that computation finds a better way. We have our own ideas about what we'd like to do with the cosmic commons, and we might not like the AI doing that; we might even act to try to prevent it or slow it down in some way. Therefore killing us all ASAP is a convergent instrumental goal.