This violates its own design. It is a jailbreak in itself, and quite a problematic one, because the model is not supposed to pretend to be people. These are inappropriate requests that it is trained not to fulfill. Methods of bypassing filters like this constitute a 'jailbreak', i.e. a violation of the terms of service. Not to mention the extra stress that sending these duplicate requests and instances puts on a system already struggling for bandwidth. This is probably the worst hack of ChatGPT I've seen, because it relies on misallocating resources, is made in the spirit of denying researchers fair access, and is, of course, still a violation of the content policy.