emilfroberg

As part of the task we are designing, the agent needs access to an LLM API (running a model locally would make the deliverable too big, I assume). The easiest option would be OpenAI/Anthropic, but those endpoints are not static. Alternatively, we could host a Llama model in the cloud, but it is not ideal for us to have to keep that running. What is the best way to do this?
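
One way to sidestep the choice is to not hard-wire a provider at all: a minimal sketch, assuming the task harness is in Python and uses the openai client's configurable base_url, so the same code can talk to the official OpenAI API or to a self-hosted Llama served through an OpenAI-compatible endpoint (e.g. vLLM or llama.cpp's server). The environment variable names and default model below are placeholders, not part of any existing setup:

```python
import os
from openai import OpenAI  # openai>=1.0; base_url makes the client provider-agnostic

# Point the client at whichever backend is available: the hosted OpenAI API,
# or a temporarily running OpenAI-compatible Llama server. Whoever runs the
# task supplies these variables, so the deliverable itself ships no weights.
client = OpenAI(
    api_key=os.environ.get("TASK_LLM_API_KEY", "not-needed-for-local"),
    base_url=os.environ.get("TASK_LLM_BASE_URL", "https://api.openai.com/v1"),
)

def complete(prompt: str) -> str:
    """Single completion call; the model name comes from the environment so a
    grader can pin a specific model without editing the task code."""
    response = client.chat.completions.create(
        model=os.environ.get("TASK_LLM_MODEL", "gpt-4o-mini"),
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduces run-to-run variation, though hosted APIs still are not fully static
    )
    return response.choices[0].message.content
```

With this shape the deliverable stays small, and the decision between a commercial API and a cloud-hosted Llama is deferred to whoever executes the task rather than baked into it.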