What pickup trucks are you using? How much do the pickups cost? What armour do they have if any?
There's one drone operator per pickup, and these are FPV drones, so they're limited to at most a handful of drones in the air at a time. You can't swarm the Abrams, and unlike what the videos would have you believe, the chance of an individual drone taking out an Abrams is tiny. The tank has plenty of time and opportunity to blow up the pickup.
The drones aren't as cheap as you believe - the fibre-optic FPV drones used in Ukraine cost many thousands of dollars each. Each pickup, with its operators and drones, is worth many hundreds of thousands, and is likely a sitting duck for artillery, tank fire, and yes, counter-drones, especially when it moves.
The play would be to sneak the truck in under cover of darkness, set up shop somewhere camouflaged, and then use the drones to help defend the current area and attrit enemy forces. Basically the same thing as is happening now in Ukraine. It helps in a slow grinding war, but doesn't help you in a manoeuvre war.
NATO doctrine is all about manoeuvrability and air power. Once air superiority is achieved, your pickups are sitting ducks. Only dispersed individuals can act effectively against air superiority.
The aim of the tank in that situation is rapid movement and firepower, whilst being protected from most attacks. The pickup can easily be blown up by an enemy ATGM, RPG, or drone operator so just isn't as useful in manoeuvre warfare. The driver can easily be killed by an assault rifle.
Giving individual troops drones is obviously a force multiplier, but with current drone technology I don't think a drone carrier makes much sense - too exposed in a manoeuvre war, and no different to what's currently going on in a war of attrition.
(Of course all this changes once we can coordinate fully autonomous drones at scale and low price)
In a full out war, the side with a disadvantage in space would probably try to introduce Kessler syndrome.
On average I think people suffer more from the opposite mistake. Refusing to go all in on something and commit, because they want to keep optionality open.
It could be drifting from one relationship to another, pushing off having children (but freezing eggs just in case), never buying a house and settling down in a community you like, never giving up everything to get that job you've always dreamed of, whatever it is that matters to you.
Life is often much richer and more fulfilling when you give up optionality for the sake of having your best shot on the things that are most important to you.
That said, the extent to which these things remove your optionality is overstated. You can always get divorced, sell your house, move locations, find a new job, go back home, put your kid up for adoption, etc. Scrap that last one, having a child really does pretty permanently limit your optionality. But they go better when your mindset is one where making this work is your only option and there are no other alternatives.
For example, marriage goes best when you act as though divorce simply isn't an option, even though it is.
Doing so requires a kind of doublethink, but most people are capable of it fairly easily.
Quick thoughts on Gemini 3 pro:
It's a good model, sir. Whilst it doesn't beat every other model on everything, it's definitely pushed the Pareto frontier a step further out.
It hallucinates pretty badly. GPT-5 did too when it was released; hopefully they can fix this in future updates and it's not inherent to the model.
To those who were hoping/expecting we'd hit a wall: clearly that hasn't happened yet (although neither have we proved that LLMs can take us all the way to AGI).
Costs are slightly higher than 2.5 Pro, much higher than GPT-5.1, and none of Google's models have seen any price reduction in the last couple of years. This suggests that it's not quickly getting cheaper to run a given model, and that pushing the Pareto frontier forward is costing ever more in inference. (However, we are learning how to get more intelligence out of a fixed model size, as newer small models show.)
I would say Google currently has the best image models and the best LLM, but that doesn't prove they're in the lead. I expect OpenAI and Anthropic to drop new models in the next few months, and Google won't release a new one for another 6 months at best. Its lead is not strong enough to last that long.
However, we can firmly say that Google is capable of creating SOTA models that give OpenAI and Anthropic a run for their money, something many were doubting just a year ago.
Google has some tremendous structural advantages:
Now that they've proven they can execute, they should likely be considered frontrunners for the AI race.
On the other hand, ChatGPT has much greater brand recognition, and LLM usage is sticky. Things aren't looking great for Anthropic though, with neither deep pockets nor high usage.
In terms of existential risk: this is likely to make the race more desperate, which is unlikely to lead to good things.
95%+ of all studies of the human body study living bodies. Surgeons cut into living flesh umpteen times a day, and biologists do horrible things to living lab rats in a million different ways. Every study that comes out of today's universities on behaviour, medicine, optics, or what have you, is performed on living volunteers.
Many of the most important fields in biology focus on dynamic systems, such as physiology, neurology, and yes, anatomy.
I'm not sure what justification there is for saying that biology is too focused on the dead, or on static systems.
Hi and welcome to LessWrong.
Please see the policy on AI generated content: https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
In particular:
Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong's standards. Please do not submit unedited or lightly-edited LLM content. You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.
I'm not claiming that we need any extra laws of physics to explain consciousness. I'm saying that even if you showed me the equations that proved I would behave like a conscious being, I still wouldn't feel like the problem was solved satisfactorily, until you explained why that would also make me feel like a conscious being.
I think that's fairly limited evidence; I would want to see more data than that before claiming anything is vindicated.
Lots of individual mistakes here, which together severely overstate the case being made:
The Houthis were unable to touch US naval power. What the US couldn't do was defend ships in a very narrow stretch of water from drone, missile, and speedboat attacks. This is a very specific situation; it's like saying US ground power is at an end because they couldn't decisively defeat the Taliban. Asymmetric warfare works, more news at 10.
Also note the drones they were using were far from cheap, often costing hundreds of thousands of dollars.
You do see the videos where the tank gets blown up by the (closer to $5,000) drone; you don't see the vast majority of videos where the drone does nothing. Meanwhile, how many people were killed by the Abrams tank, and how useful was it, when used correctly, at breaking through enemy lines and restoring movement - something drones are not really capable of?
Given that NATO's doctrine assumes air superiority, which drones barely impact at all, I don't see how you could possibly draw such a conclusion from the war in Ukraine, where both sides decisively lack air superiority.
The seas are huge, and cheap drones are short-range and slow. Enemy ships are difficult to find in a vast empty sea. Ships are extremely difficult to destroy or cripple. Communications are almost certainly jammed. For drones to be useful at sea, they need to be fast, long-range, able to navigate autonomously, and able to carry a large warhead.
We call such drones "cruise missiles" and they are extensively deployed in all NATO Navies. If you know a way to make them cheaper, then the DoD will almost certainly be very interested.
Current drone warfare is all about low-cost, slow, short-range, non-autonomous technology. Autonomous wingmen would be high-cost, fast, long-range, and autonomous. I don't see how you could possibly draw conclusions about them from current events.
This is basically current NATO doctrine, and has been for years.
This is true, but note that China is investing heavily into this stuff (and aircraft carriers), not autonomous drone swarms.
I have seen zero evidence of this. Indeed, hypersonic missiles are about as vulnerable to air defences as non-hypersonic ones, and come with a whole host of problems of their own.