Because it is deceiving you.
Even granting these assumptions, it seems like the conclusion should be “it could take an AGI as long as three years to wipe out humanity rather than the six to 18 months generally assumed.”
I.e., even if the AGI relies on humans for longer than predicted, that reliance is not going to hold beyond the medium term.
I may have missed the deadline, but in any event:
At the rate AI is developing, we will likely develop an artificial superhuman intelligence within our lifetimes. Such a system could alter the world in ways that seem like science fiction to us, but would be trivial for it. This comes with terrible risks for the fate of humanity. The key danger is not that a rival nation or unscrupulous corporate entity will control such a system, but that no one will. As such, the system could quite possibly alter the world in ways that no human woul...
Hi Aiyen, thanks for the clarification.
(Warning: this response is long, and much of it is covered by what Tamgen and others have said.)
The way I understand your fears, they fall into four main categories. In the order you raise them and, I think, in order of importance, these concerns are as follows:
1) Regulations tend to cause harm to people, therefore we should not regulate AI.
I completely agree that a Federal AI Regulatory Commission will impose costs in the form of human suffering. This is inevitable, since Policy Debates Should Not Appear One Sided. M...
I take Aiyen's concern very, very seriously. I think the most immediate risk is that the AI Regulatory Bureau (AIRB) would regulate real AI safety work, so MIRI wouldn't be able to get anything done. Even if you wrote the law saying "this doesn't apply to AI Alignment research," the courts could interpret that narrowly enough that the moment you turn on an actual computer, you are now a regulated entity per AIRB Ruling 3A.
In this world, we thought we were making it harder for DeepMind to conduct AI research. But they have plenty of money to throw at c...
These are both supremely helpful replies. Thank you.
The Lancet just published a study that suggests "both low carbohydrate consumption (<40%) and high carbohydrate consumption (>70%) conferred greater mortality risk than did moderate intake." Link: https://www.thelancet.com/action/showPdf?pii=S2468-2667%2818%2930135-X
My inclination is to say that observational studies like this are really not that useful, but if cutting out carbs is bad for life expectancy, I do want to know about it. Wondering what everyone else thinks?
Really like this. Seems like an instance of the general case of ignoring marginal returns: "People will steal even with police, so why bother having police..." The flip side of your post is that marginal returns diminish. It's a good investment to have a few cops around to prevent bad actors from walking off with your grand piano--but it's a very bad idea to keep hiring more police until crime is entirely eliminated. Similarly, it's good to write clearly, but if you find yourself obsessing over every word, your efforts are likely to be misplaced.
This is a fun thought experiment, but taken seriously it has two problems:
This is about as difficult as a horse convincing you that you are in a simulation run by AIs that want you to maximize the number and wellbeing of horses. And I don't mean a superintelligent humanoid horse. I mean an actual horse that doesn't speak any human language. It may be the case that the gods cre...