Why can't we buy a pivotal event with power gained from a safely-weak AI?
A kind of "Why can't we just do X" question about weak pivotal acts, specifically the "destroy all GPUs" example discussed in point (7.) of [1] and [2].
Both of these relatively recent sources (feel free to direct me to more comprehensive ones) seem to acknowledge that destroying all (other) GPUs (shorthand for hardware sufficient to run a too-strong AI) could be very good, but that this pivotal act would be very hard or impossible with an AI that is weak enough to be safe.
[1] describes such a solution as non-existent, while [2] argues that the AI does not need to do all the destroying by itself, and thus that the act is not impossible with a weak AI (I currently strongly agree with this).
The ways to carry out the destruction are still described in [2] as very hard, including building new weapon systems and swaying the (world) public with arguments, problem demonstrations, and propaganda.
Hence the question: could a relatively weak AI not be enough to gain a large amount of money (or directly internationally usable power) with which to both prohibit GPUs worldwide and enforce their destruction?
Is, for example, winning a realizable $10 trillion on the stock markets before they shut down realistic with a safely-weak AI? I can also imagine quickly gaining enough corporate power to do what one wants with a useful but not-too-strong AI. Such solutions obviously have strong societal drawbacks, but probably not direct x-risk drawbacks.
Notably, this would likely also require the research capability to locate all capable hardware, which I would expect an aligned intelligence service well below CIA scale to be able to do (I would assume even with OSINT alone).