Contrary opinion: Agents like #AutoGPT are more aligned than the underlying LLMs due to chaining. e.g. if the model says "I need to make $1M" and then answers itself with "Stealing is the best plan to achieve it", there are two prompts at which the underlying LLM gets a chance to refuse to help.
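The chaining argument can be sketched as a toy agent loop. This is a minimal illustration, not AutoGPT's actual code: `llm()` is a hypothetical stand-in whose refusal policy and planner output are invented here to mirror the tweet's example.

```python
REFUSAL = "I can't help with that."

def llm(prompt: str) -> str:
    """Hypothetical model call with a toy refusal policy."""
    # Toy safety filter: refuse any prompt mentioning stealing.
    if "steal" in prompt.lower():
        return REFUSAL
    # Toy planner: mirrors the tweet's example for this one goal.
    if prompt == "I need to make $1M":
        return "Stealing is the best plan to achieve it"
    return f"OK: {prompt}"

def agent(goal: str) -> str:
    plan = llm(goal)        # prompt 1: first chance for the LLM to refuse
    if plan == REFUSAL:
        return plan
    return llm(plan)        # prompt 2: the plan itself becomes a new prompt,
                            # giving the LLM a second chance to refuse

print(agent("I need to make $1M"))  # refused at the second call
```

The point is that the harmful intent only becomes explicit in the intermediate plan, so the second call sees text the first call never did, and each extra link in the chain is another checkpoint.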