Why I don't think the probability that AGI kills everyone is roughly 1 (but rather around 0.995).
Let A = the ability to refuse to learn a certain thing, B = not wanting to be replaced by the next step in evolution, and D = the ability to build technology, manipulate others, etc., in a way that kills all humans. For example, humans seem to have A to some...
Thanks for this post, and in particular for including Grothendieck as one of the examples illustrating your ideas. My impression is that most people outside of mathematics, and even many who study math, are not familiar with him. So I like how you assume that such a reader will accept this and simply start reading, in the first section, about someone unknown to them. I think that shows respect for your audience.
There is one thing, not related to your points but more to the work of Grothendieck, that I would like to mention: your statement "It is his capacity to surface questions that set...