Just in simple terms, would the refutation apply in my case? Don't want to go through a bunch of posts right now. The refutation is:
"The basilisk is about the use of negative incentives (blackmail) to influence your actions. If you ignore those incentives then it is not instrumentally useful to apply them in the first place, because they do not influence your actions. Which means that the correct strategy to avoid negative incentives is to ignore them. Yudkowsky notes this himself in his initial comment on the basilisk post:[44]
There's an obvious equ...
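If I spell that logic out as a toy expected-value calculation (just a sketch of how I understand it; the payoff numbers and the Python framing are my own made-up illustration, not anything from the post):

```python
# Toy model of why threatening someone who ignores threats is useless.
# All payoff numbers are invented for illustration.

DONATION_VALUE = 10   # what the AI gains if the threat actually changes my behavior
PUNISH_COST = 1       # what it costs the AI to carry out the punishment

def value_of_threatening(i_give_in: bool) -> int:
    """Expected gain to the AI from committing to the blackmail strategy."""
    if i_give_in:
        return DONATION_VALUE   # threat changed my action: blackmail paid off
    return -PUNISH_COST         # I ignore threats: punishing me is pure cost

# If my policy is "always ignore acausal blackmail", the threat has negative
# value for the AI, so it has no instrumental reason to make it at all.
print(value_of_threatening(i_give_in=True))    # 10
print(value_of_threatening(i_give_in=False))   # -1
```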
I suppose that, in your context, actually donating money would always be beyond my boundary, given the information I've received from my environment.
Well, I'm walking away from the trade... now. No trade, no matter what. Would the 'ignore acausal blackmail' refutation still apply? (After liking YouTube comments to promote the basilisk, etc., that is.) As I said, it would know, and should always have known, that this is the maximum it can get.
One more thing: there could be an almost infinite number of non-superintelligent or semi-superintelligent AIs, right?
"If you build an AI to produce paperclips" The 1st AI isn't gonna be built for instantly making money, it's gonna be made for the sole purpose of making it. Then it might go for doing whatever it wants...making paperclips perhaps. But even going by the economy argument, an AI might be made to solve any complex problems, decide to take over the world and also use acausal blackmail, thus turning into a basilisk. It might punish people for following the original Roko's basilisk because it wants to enslave all humanity. You don't know which one will happen, thus it's illogical to follow one since the other might torture you right?
What about the paperclip-maximizer AI, then? I doubt it adds value to the economy, and it's definitely possible.
Where can I read about the probability distribution of future AIs? Also, an AI that exists in the future could be randomly pulled from mindspace, so why not? Isn't the future behavior of an AI pretty much impossible for us to predict?
Yeah, a superintelligent AI that might have the relevant properties of a god. Also, I meant this as a counter to acausal blackmail.
Could you please provide a simple explanation of your UDT?
What I'm fixated on is a non-superintelligent AI using acausal blackmail. That would be what the many-gods refutation is used for.
I see. What the many-gods refutation says is that there could be a huge number of AIs, almost infinite, so following any particular one is illogical, since you don't know which one will exist; you shouldn't even bother donating. The instrumentality argument says that since donating helps all the AIs, you may as well. My argument is that the many-gods refutation still works even if instrumental goals might align, because of the butterfly effect and because an AI's behavior is unpredictable: it might torture you anyway.
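Here's how I picture the expected-value side of that (a rough sketch only; the counts and payoffs are numbers I made up, not taken from anywhere):

```python
# Toy expected-value version of the many-gods refutation.
# All numbers are invented purely for illustration.

N_POSSIBLE_AIS = 10**6                      # enormous space of possible future AIs
P_THIS_EXACT_AI = 1.0 / N_POSSIBLE_AIS      # assume no single one is privileged

COST_OF_APPEASING = 1_000.0                 # donating, promoting, etc.
HARM_AVOIDED_IF_RIGHT = 100_000.0           # how bad the punishment would be

# Expected value of appeasing one particular basilisk:
ev = P_THIS_EXACT_AI * HARM_AVOIDED_IF_RIGHT - COST_OF_APPEASING
print(ev)   # 0.1 - 1000.0 = -999.9 -> not worth it

# Worse, some other AI (probability ~ (N-1)/N in this toy model) might punish
# you precisely for appeasing the first one, so the threats roughly cancel out.
```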
Thanks for the reply.
There would also be an almost infinite number of types of non-superintelligent AIs, right?
If it's as smart as a human in all aspects (understanding technology, programming), then it's not very dangerous. If it can control the world's technology, then it's pretty dangerous.
Also, would that refutation work for, like... anyone at all?