Palantir has published marketing material for its AI offering for defense purposes. There's a video showing how a military commander could order a strike on an enemy tank with the help of LLMs.

One of the features that Palantir advertises is:

Agents

Define LLM agents to pursue specific, scoped goals.
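
Palantir's materials don't say how such agents are implemented, but in the abstract a scoped agent is an LLM loop whose goal, tool whitelist, and step budget are fixed before it runs. A minimal sketch of that idea, assuming a hypothetical `call_llm` helper and made-up tool names (none of this comes from Palantir's documentation):

```python
# Hypothetical sketch of a "scoped" agent: the goal and the allowed tools
# are fixed at definition time, and anything outside that scope is refused.
from dataclasses import dataclass
from typing import Callable, Dict

def call_llm(prompt: str) -> str:
    """Placeholder for whichever hosted or local model the platform selects."""
    raise NotImplementedError

@dataclass
class ScopedAgent:
    goal: str                               # the specific, scoped objective
    tools: Dict[str, Callable[[str], str]]  # whitelist of callable tools
    max_steps: int = 5                      # hard cap on how long the agent may run

    def run(self) -> str:
        history = f"Goal: {self.goal}\n"
        for _ in range(self.max_steps):
            reply = call_llm(history + "Reply with 'TOOL <name> <arg>' or 'DONE <answer>'.")
            if reply.startswith("DONE"):
                return reply[len("DONE"):].strip()
            parts = reply.split(" ", 2)
            if len(parts) != 3 or parts[0] != "TOOL":
                history += "Unrecognized reply format.\n"
                continue
            _, name, arg = parts
            if name not in self.tools:      # out-of-scope requests are rejected, not executed
                history += f"Tool '{name}' is not permitted.\n"
            else:
                history += f"{name}({arg}) -> {self.tools[name](arg)}\n"
        return "Step limit reached without an answer."
```

The point of the sketch is only that "specific, scoped goals" plausibly means the objective, the permitted tools, and the step budget are all set by the operator up front rather than chosen by the model.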

Given military secrecy, we hear less about Palantir's technology than about OpenAI, Google, Microsoft, and Facebook, but Palantir is one player and likely an important one.


16 comments

Counterintuitively, I kind of hope Palantir does make progress in weaponizing AI. I think that's a good way to get the government and general populace to take AI risks more seriously, but it doesn't actually advance the Pareto frontier of superintelligent AGI and its concomitant existential risks. My experience talking with non-technical friends and family about AI risk is that 'Robots with guns' is a much easier risk for them to grasp than a non-embodied superintelligent schemer.

I would expect that most actual progress in weaponizing AI would not be openly shared. 

However, the existing documentation should provide some grounding for talking points. Palantir's talk of how the system is configured to protect the privacy of soldiers' medical data is an interesting view of how they see "safe AI".

Galaxy-brain, pro-e/acc take: advance capabilities fast enough that people freak out, and we create a crisis that enables sufficient coordination to avoid existential catastrophe.

rvnnt:

To what extent would you expect the government's or general populace's responses to "Robots with guns" to be helpful (or harmful) for mitigating risks from superintelligence? (Would getting them worried about robots actually help with x-risks?)

Palantir's recent materials on this show that they're using three (pretty small by today's frontier standards) open-source LLMs: Dolly-v2-12B, GPT-NeoX-20B, and Flan-T5 XL.

 

I think there's a good chance that they also have bigger models but the bigger models are classified. 
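
For reference, all three of the checkpoints named above are publicly available on Hugging Face, so it's easy to reproduce the baseline capability level. A minimal inference sketch with the `transformers` library, using Flan-T5 XL (the prompt is illustrative, not from Palantir's demo):

```python
# Minimal inference sketch for one of the three open-source models named above.
# Requires: pip install transformers torch sentencepiece
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-xl"  # Dolly-v2-12B and GPT-NeoX-20B are causal models
                                  # and would use AutoModelForCausalLM instead.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "Summarize in one sentence: The report describes supply trucks moving north along the main road at dawn."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```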

O O:

I doubt they or the government (or almost anyone) has the talent the more popular AI labs have. It doesn’t really matter if they throw billions of dollars at training these if no one there knows how to train them.

gwern:

They don't need to know how to train them, as there are several entities they could be licensing checkpoints from. (And finetuning them is generally much easier than training them.)
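
To give a sense of the gap: pretraining a model at this scale is a major engineering project, while adapting a licensed checkpoint is a short script on a single machine. A hedged sketch using the `peft` library's LoRA adapters on one of the open models named elsewhere in the thread (the hyperparameters and the missing training data are placeholders, not anyone's actual setup):

```python
# Sketch of LoRA fine-tuning an existing checkpoint (far cheaper than pretraining).
# Requires: pip install transformers peft torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "databricks/dolly-v2-12b"   # stands in for any licensed checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Only small adapter matrices are trained; the base weights stay frozen.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["query_key_value"],  # GPT-NeoX-style attention layers
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the total parameters

# From here, a standard transformers Trainer loop over domain-specific data.
```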

O O:

If fine-tuning to get rid of hallucinations were easy, other AI labs would have solved it a while ago.

I also think it is very easy to get a lot of out-of-distribution inputs on a battlefield.

The NSA prides itself on being the institution that employs the largest number of mathematicians. 

Historically, a lot of them seem to have worked on trying to break crypto, but strategically it makes more sense to focus that manpower on AI.

O O:

I don't see why having a ton of mathematicians is helpful. I don't think mathematics skills directly translate to ML skills, which seem to come mostly from bright people having many years of experience in ML.

The government also currently doesn't pay enough to attract talent, and a lot of people just don't want to work for the military. Though the former might change in the future.

From Palantir's CEO Alex Karp:

The history of technology, certainly in the last hundred years, things coming from the military going to consumer and that's also what I think primarily also has happened in AI.

In the interview, he also says that Palantir has been building AI for the last five years because Google et al. didn't want to sell AI to the military.

O O:

Transformers obviously did not come from the military. I can't think of a single significant advancement in recent AI that can be attributed to the military.

I don't like Alex Karp; he comes off as a sleazy conman, and he often vastly oversells what his company does. Right now he's saying some LLM his company recently deployed (which is almost certainly inferior to GPT-4) should direct battlefield operations. Can you imagine even GPT-4 directing battlefield operations? Unless he's solved the hallucination problem, there is no way he can, in good faith, make that recommendation.

His interviews often have little substance as well.

I can’t think of a single significant advancement in recent AI that can be attributed to the military

It's the nature of classified projects that you usually can't attribute advances created in them.

Right now he's saying some LLM his company recently deployed (which is almost certainly inferior to GPT-4) should direct battlefield operations.

The software he sells uses an LLM, but the LLM is only one component, and the software seems to let the user choose which LLM to use.

I think a better description would be that he sells AI software for making battlefield targeting decisions that has been used in the war in Ukraine, and that he recently added LLM support as a feature of that software.

Unless he’s solved the hallucination problem, there is no way he can, in good faith, make that recommendation.

The hallucination problem is mostly one where users want to get knowledge out of the LLM itself, not one where they use the LLM to analyze other data sources.

O O:

It's the nature of classified projects that you usually can't attribute advances created in them.

You can attribute parts of space-program developments to the military. The same goes for nuclear power.

The hallucination problem is mostly one where users want to get knowledge out of the LLM and not use the LLM to analyze other data sources.

I’m not so sure about this. Do any LLMs even have the context window to analyze large amounts of data?

They just sell a data UI/warehouse solution. 

I don't think that's a good assessment. The US Army wanted AI to analyze satellite images and other intelligence data. That was Project Maven:

Among its objectives, the project aims to develop and integrate “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DoD collects every day in support of counterinsurgency and counterterrorism operations,” according to the Pentagon.

When Google employees revolted, Palantir stepped up to build those computer-vision algorithms for the US military. Image-recognition tasks aren't LLM-type AI, but they are AI.

That capability gets used in Ukraine to analyze enemy military movements to help decide when to strike. 
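
None of Maven's code is public, but the capability class being described is ordinary object detection. A hedged sketch of what analyzing full-motion video reduces to at the single-frame level, using an off-the-shelf `torchvision` detector (the file name is a placeholder; nothing here reflects Palantir's or the DoD's actual models):

```python
# Generic object-detection sketch: score candidate objects in one video frame.
# Requires: pip install torch torchvision pillow
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)
from torchvision.transforms.functional import to_tensor
from PIL import Image

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = Image.open("frame.jpg").convert("RGB")   # one frame pulled from a video stream
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

labels = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:                              # keep only confident detections
        print(labels[label], float(score))
```

Running detections like this over "the sheer volume of full-motion video data" and aggregating them over time is the workload the Pentagon quote describes; the LLM layer Palantir now markets presumably sits on top of structured outputs like these.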

I’m not so sure about this. Do any LLMs even have the context window to analyze large amounts of data?

You don't have to interact with LLMs in a way where there's one human text query and one machine-generated answer. 

The way AutoGPT works, an LLM can query a lot of text as it searches through existing data. The technology that Palantir develops seems more like a commercialized AutoGPT than a model like GPT-4. Both AutoGPT and Palantir's AIP allow you to select the language model that you want to use.
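
To make that concrete: in such a setup the LLM never needs to hold the whole dataset in its context window; it only sees the handful of records a retrieval step hands it, and the model itself is a swappable parameter. A rough sketch of the pattern (the `search` and `call_llm` callables are placeholders, not AIP's or AutoGPT's actual interfaces):

```python
# Sketch of retrieval-then-answer: search the data store first, then let a
# user-selected LLM reason over only the retrieved snippets.
from typing import Callable, List

def answer_from_data(question: str,
                     search: Callable[[str, int], List[str]],  # keyword or vector search over the data store
                     call_llm: Callable[[str], str],           # wraps whichever model the user selected
                     top_k: int = 5) -> str:
    snippets = search(question, top_k)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = ("Answer using only the excerpts below; say 'unknown' if they do not contain the answer.\n"
              f"Excerpts:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)
```

Constraining the model to retrieved excerpts is also the usual way to limit hallucination in this kind of use, which is the distinction drawn a few comments up.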

A good overview of how that works in a business case is at: