Because
"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary swtich operation"
https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s
AI will be quantitatively smarter because it will be able to think over 10000 times faster (an arbitrary, conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
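A rough sanity check on the speed claim, assuming hardware clocked at roughly $10^9$ Hz (my assumption; the 100 Hz firing rate is from the quoted talk):

$$\frac{f_{\text{hardware}}}{f_{\text{brain}}} \approx \frac{10^9\ \text{Hz}}{10^2\ \text{Hz}} = 10^7$$

Even if serial bottlenecks eat three orders of magnitude of that, the $10^4$ lower bound stays intact.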
"Less than a third of students by their own self-appointed worst-case estimate *1."
Missing a word here, I think.
I think your post is spot on.
First question: I know you admire Trump's persuasion skills, but what I want to know is why you think he's a good person/president, etc.
Answer: [talks about Trump's persuasion skills]
Yeah, okay.
This is an exceptionally well-reasoned article, I'd say. Particular props for the appropriate amount of uncertainty.
Well, if you put it like that, I fully agree. Generally, I believe that "if it doesn't work, try something else" isn't followed as often as it should be. There's probably a fair number of people who'd benefit from following this article's advice.
I don't quite know how to make this response more sophisticated than "I don't think this is true". It seems to me that whether classes or lone-wolf improvement works better is a pretty complex question with a fairly balanced answer, though overall I'd give the edge to lone-wolf.
It doesn't really matter whether the AI uses its full computational capacity. If the AI has 100000 times the capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1000 times as smart as a human at full capacity.
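Spelled out, under those assumed numbers:

$$10^5 \times 10^{-2} = 10^3$$

i.e. 1% of a 100000× capacity is still 1000× a human's full capacity.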
An AGI's algorithm will be better because it will have instant access to more facts than any human has time to memorize, and it won't have all of the biases that humans have. The entire point of the Sequences is to list dozens of ways that the human brain reliably fails.