Umm. IPFS is a file/block distribution mechanism, not a general transmission protocol like IP. More importantly, I can't imagine a colony or ship design (including submarine and deep-ocean vessels today, which do actually face these issues) that doesn't have a local cluster for storage and proxy to anything outside.
The ones I've seen (for long-term battery-health monitoring of intermittently-connected transport) have a local server which connects to the onboard devices and makes real-time decisions and updates, and also connects periodically to the mainland.
IP is fine for very long latencies and some amount of packet loss.
More importantly, I can't imagine a colony or ship design (including submarine and deep-ocean vessels today, which do actually face these issues) that doesn't have a local cluster for storage and proxy to anything outside.
Submarines today don't need to face all of these issues. It's okay if a submarine can't connect to npm, because there's likely nobody on the submarine who needs npm.
I expect a submarine to have a bunch of custom-built software for the tasks that are supposed to be done on a submarine, while all the other tasks you could do on a computer that need an internet connection simply fail.
If you take a cruise today on a deep-ocean vessel and want to update Windows on your laptop, I don't think the average cruise vessel gives you access to a local storage cluster that lets you do this.
Given the NASA approach to ship design, NASA would likely contract out a bunch of custom software for the tasks they think should be done on a spaceship. That software works, and if someone wants to do something different, they're out of luck.
When it comes to a colony, you want normal software like npm, which isn't specifically built for the colony, to still work.
My company runs its own servers for Windows updates because our IT department tests (or at least delays) them - I update my laptop from a local server, not from the internet. If I went on a cruise, I'd probably just put off updating until I was in port. That's economics, not technology - the ships aren't on low-bandwidth links for long enough to justify the expense of local servers.
A colony will have different economics, but similar fundamentals - there'll be different metering and expectations for links to earth than locally, and application-specific proxies and local caches/resources will be the rule, not the exception.
Mirroring of servers is old technology (common in the '80s), and content-distribution networks are newer but not actually new. This topic is a major part of most scalable system designs - in fact, npm is a good example because it's near-trivial to use a local server for updates.
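For what it's worth, pointing npm at a local mirror is a one-line configuration change; the hostname below is made up, and Verdaccio is one off-the-shelf registry mirror that could sit behind it:

```
# Point npm at a local mirror instead of the public registry
# (hostname is a placeholder; 4873 is Verdaccio's default port):
npm config set registry http://registry.ship.local:4873/

# or, per project, in .npmrc:
registry=http://registry.ship.local:4873/
```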
A colony will have different economics, but similar fundamentals - there'll be different metering and expectations for links to earth than locally, and application-specific proxies...
Application-specific proxies mean that the end-user on Mars can't simply use whatever software from earth they want; they have to ask whoever manages the proxies to specifically set up a proxy for each application they want to use.
IPFS skips that and allows everything to work without setting up application-specific proxies.
To the extent that it's old technology, we now have more access-control technology that makes sure software gets downloaded from the official server and not from any other provider.
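For contrast, content addressing gives you that guarantee from the content itself rather than from the server it came from: an IPFS CID is derived from a hash of the data, so any client can check what it received locally. A rough sketch of the idea (plain SHA-256 standing in for IPFS's actual multihash/CID encoding, with made-up package bytes):

```python
import hashlib

def content_address(data: bytes) -> str:
    # IPFS really uses a multihash/CID encoding; plain SHA-256 hex stands in here.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    # It doesn't matter which peer served the bytes: if they hash to the
    # address they were requested by, the content is what the publisher published.
    return content_address(data) == expected

# Hypothetical usage: the publisher announces the address once...
address = content_address(b"package contents v1.0")
# ...and a client can verify whatever a mirror, cache, or Mars relay hands back.
print(verify(b"package contents v1.0", address))  # True
print(verify(b"tampered contents", address))      # False
```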
If everything goes according to plan for SpaceX, they will start permanent bases on the Moon and Mars in the next decade. One challenge for Moon and Mars bases will be that a good chunk of the software we have on earth stops working when there's no internet connection.
Package managers, whether they're Ubuntu's Snap or programming tools like npm, download their packages from the internet and fail when there are minutes of delay for traffic between earth and Mars. The same goes for app stores and operating-system updates.
If those services used IPFS instead of TCP/IP, they would still work on Mars as long as there's an IPFS bridge between earth and Mars that carries over any content that gets requested on Mars but isn't available there yet.
While it's possible to write custom software for Mars that works without TCP/IP, it would benefit a Mars base a lot if as much normal software as possible used IPFS to transfer its data.
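To make that concrete, here's a rough sketch of what "normal software using IPFS" could look like: the application asks a local IPFS node for a CID over its HTTP gateway and never needs to know whether the data was already cached on Mars or had to cross the earth-Mars bridge. The gateway address is Kubo's default; the CID is a placeholder, not a real published object:

```python
from urllib.request import urlopen

# Assumes a local IPFS node (e.g. Kubo) with its HTTP gateway on the
# default 127.0.0.1:8080. The CID below is a placeholder.
GATEWAY = "http://127.0.0.1:8080"
CID = "bafy...placeholder-cid"

def fetch(cid: str) -> bytes:
    # The local node answers from its own cache, from other nodes on the
    # local network, or, only if necessary, over the earth-Mars bridge.
    with urlopen(f"{GATEWAY}/ipfs/{cid}") as resp:
        return resp.read()

if __name__ == "__main__":
    print(len(fetch(CID)), "bytes")
```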
One reason why IPFS currently isn't used more is that every node that requests data from the IPFS network has to openly broadcast its IP address together with the content it wants to access. In that configuration an attacker can learn which software is being run inside a company network, which is undesirable.
From a privacy perspective it would be advantageous if a user only had to reveal to their ISP which content they want to access via IPFS. If users only had to reveal the requested content to their ISP, they would have better privacy than they currently have with TCP/IP.
This setup would be advantageous for the ISP as well, because content that's downloaded by many users doesn't have to be requested multiple times from its original source, which saves both the ISP and the original data source bandwidth costs.
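As a rough sketch of what such an ISP-side cache could look like (the upstream gateway and port are just examples): a tiny HTTP service keyed by CID that fetches each piece of content once and serves every later request locally. Because the address is a hash of the content, users don't have to blindly trust the ISP's copy; they can re-verify it.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "https://ipfs.io"  # an upstream public gateway, as an example
CACHE = {}                    # path -> bytes; a real deployment would use disk

class CachingGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.startswith("/ipfs/"):
            self.send_error(404)
            return
        if self.path not in CACHE:
            # First request for this CID on this network: fetch once upstream.
            with urlopen(UPSTREAM + self.path) as resp:
                CACHE[self.path] = resp.read()
        body = CACHE[self.path]  # every later request is served locally
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8081), CachingGateway).serve_forever()
```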
Legacy ISPs profit a bit from the current setup by billing Netflix, YouTube, and Amazon Prime for hosting servers on their premises that do a similar job, but this predatory setup is bad for the open internet. It means that only companies that can afford servers at every ISP can get comparable performance for their video service, whereas if the ISP did this job itself, it would pay for the servers on its own network.
IPFS being superior to TCP/IP for a lot of content is a reason for SpaceX to provide IPFS proxies to their Starlink customers. Providing IPFS proxies would be an additional selling point that distinguishes SpaceX from legacy ISPs. But overall, the ability to incentivize software companies to save bandwidth by using IPFS to transfer data in a way that also works on Mars is more important to SpaceX's overall goals.