See also https://en.wikipedia.org/wiki/Metric_time and https://qntm.org/calendar.
I've been in the Unix world since before it was mostly Linux, and this kind of proposal comes up fairly regularly, but even back in the day it was pretty rare to see the raw number - everyone used ctime() or the equivalent in their shell or language of choice for human use. If you're going to format it for human use while still being easily computer-usable, just use ISO8601.
I did once use epoch-seconds as the timestamps on a resume. I don't know if it helped or hurt, but I got the job.
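To make that comparison concrete, here's a minimal Python sketch (my own illustration, using the article's example timestamp) showing the same instant three ways: the raw epoch number, a ctime()-style rendering, and ISO 8601.

```python
import datetime

ts = 1662768531  # the raw epoch-seconds value from the article

# Interpret the timestamp as UTC and render it for humans.
dt = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)

print(ts)                                 # 1662768531
print(dt.ctime())                         # Sat Sep 10 00:08:51 2022
print(dt.strftime("%Y-%m-%dT%H:%M:%SZ"))  # 2022-09-10T00:08:51Z
```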
Do I expect this to catch on? Not really.
I guess I don't expect it to catch on either, but I think I wish it would. It could make the timestamps much more readable.
One suggestion I would like to put out there in case it does catch on is to make a clearer distinction between the "year"-"week"-"date" part and the "hour"-"minute"-"second" part, since that would make it even easier to catch the order of magnitude at a glance. Most datetime formats do this anyway.
Though in order to avoid ambiguity, it should probably still be clearly distinct from the usual ways of writing it, as you've also done. I'd suggest maybe 16'62'7 68'531 or 16'62'7:68'531.
Unix-like computers keep time in seconds since the Unix Epoch—the 0 second, which is set to midnight, January 1st, 1970, Zulu/UTC. We call a particular second since the Unix Epoch a timestamp.
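For example, reading that raw counter is a one-liner (a quick sketch in Python):

```python
import time

# Seconds since the Unix Epoch (1970-01-01T00:00:00Z), truncated to a whole second.
timestamp = int(time.time())
print(timestamp)  # e.g. 1662768531
```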
Keeping time this way is pretty handy, but a long string of digits is hard to parse at a glance. What to do?
Right now, as I'm typing, the current Unix timestamp is 16'62'7'68'531. See how I put those ' in there? That's how I try to break up the number into parts to make sense of it.

Starting from the right, the two rightmost digits make up what I think of roughly as a Unix minute, but rather than 60 seconds it's 100, so that's 1 2/3 standard minutes. One more unit over and we get the Unix hour, which is about a quarter of a standard hour long at 1000 seconds or 16 2/3 standard minutes.
Calling 1000 seconds an hour might seem confusing, but the purpose of hours is to have a convenient unit for dividing up days, just like minutes are a convenient way of dividing hours. Speaking of which, 100 Unix hours equates to 1 Unix day, or 27 7/9 standard hours. This is a bit longer than a standard day, but since some people seem to operate on 28 hour days anyway, they might find this appealing.
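A quick arithmetic check of those conversions, if you want to see where the fractions come from (just plain division):

```python
# 1 Unix minute = 100 s, 1 Unix hour = 1,000 s, 1 Unix day = 100,000 s
print(100 / 60)        # 1.666...  -> about 1 2/3 standard minutes
print(1_000 / 60)      # 16.666... -> about 16 2/3 standard minutes
print(100_000 / 3600)  # 27.777... -> about 27 7/9 standard hours
```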
So that takes care of the 5 rightmost digits: 3 to track the seconds and minutes within an hour and 2 to track the hour within a day. The 6th digit from the right tracks the day of the Unix week, which is 10 Unix days, or 11 31/54 standard days. It's a bit longer than our standard 7-day lunar weeks, but 10-day weeks were good enough for the ancient Greeks, so they're good enough for Unix nerds.
Since months are tied to the cycle of the moon, there are no real months in Unix time. Instead 100 weeks make up a Unix year, which is approximately 3.17 standard years long. If it helps, though, a tenth of a Unix year is a bit under four months long, so you can think of each ten-week as like the three-month quarters or seasons we informally divide our calendar by.
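And the same sort of check for the bigger units:

```python
# 1 Unix week = 1,000,000 s, 1 Unix year = 100,000,000 s
print(1_000_000 / 86_400)                 # 11.574... -> about 11 31/54 standard days
print(100_000_000 / (365.2425 * 86_400))  # 3.168...  -> about 3.17 standard years
print(10_000_000 / 86_400)                # 115.74 days in a tenth of a Unix year
```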
Now we've accounted for all the digits. If we break down my example timestamp of 16'62'7'68'531, we can say that it was the 31st second of the 5th minute of the 68th hour of the 7th day of the 62nd week of the 16th year of the Unix Epoch. Breaking it down in a table:

Year:   16
Week:   62
Day:     7
Hour:   68
Minute:  5
Second: 31
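If you want to print timestamps this way yourself, here's a rough sketch of how it could be done in Python; the unixish name and the digit grouping are just my illustration, not a standard utility:

```python
import time

def unixish(ts: int) -> str:
    """Format epoch seconds as year'week'day'hour'minute-and-second groups."""
    ts_str = str(ts)
    # Group sizes, right to left: 3 (minute + second), 2 (hour), 1 (day),
    # 2 (week); whatever is left over is the year. Assumes a present-day
    # timestamp with at least 9 digits.
    parts = []
    for width in (3, 2, 1, 2):
        parts.append(ts_str[-width:])
        ts_str = ts_str[:-width]
    parts.append(ts_str)  # the remaining year digits
    return "'".join(reversed(parts))

print(unixish(1662768531))        # 16'62'7'68'531
print(unixish(int(time.time())))  # whatever "now" is
```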
Do I expect this to catch on? Not really. This is all just a bit of fun to make sense of Unix timestamps. I can't recall seeing anyone else try to break down the Unix timestamp like this into human-sized units—at best I've seen people talk about things like megaseconds, and I think I can recall others referring to 100,000,000 seconds as a Unix year—so I figured I'd share it. I've been using this in my terminal for years to help me make sense of what I'm looking at. Maybe you'll also find it useful.