XFS File-System With Linux 5.10 Punts Year 2038 Problem To The Year 2486

Originally posted by Zan Lynx View Post
Nanoseconds might be overkill per se, but OTOH second resolution isn't enough, and the next natural size up from 32 bits is 64, so you might as well use nanoseconds. Particularly as a lot of other timing stuff is using nanoseconds anyway, so less chance of messing up some conversion.
Originally posted by cb88 View Post
Nanoseconds... the timers in a PC aren't even close to that accurate. If a build system wants to track file changes for speed it should do so with a small database rather than kludging the filesystem up with ridiculous precision to do it.
Do you even know how timers work?
You take the last timer read and apply the current TSC to it.
CPUs are running 5 cycles per nanosecond. So YES the timers can be that accurate.
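A rough sketch of that "last timer read plus current TSC" idea, assuming the 5-ticks-per-nanosecond figure claimed above; clock_gettime() and __rdtsc() are just one x86 user-space way to express it, not necessarily what the kernel's clocksource code actually does:
Code:
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>   /* __rdtsc(), x86-specific */

/* Illustration only: 5 TSC ticks per nanosecond is the figure claimed
 * above; real code would calibrate this against the hardware. */
#define TSC_TICKS_PER_NSEC 5

struct clock_base {
    uint64_t base_ns;   /* last full timer read, in nanoseconds */
    uint64_t base_tsc;  /* TSC value captured at that read */
};

/* Capture a base point: one full clock read plus the TSC at that moment. */
static void capture_base(struct clock_base *b)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    b->base_ns  = (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    b->base_tsc = __rdtsc();
}

/* Later reads just extrapolate: base time + elapsed TSC ticks / ticks-per-ns. */
static uint64_t read_clock_ns(const struct clock_base *b)
{
    uint64_t now_tsc = __rdtsc();
    return b->base_ns + (now_tsc - b->base_tsc) / TSC_TICKS_PER_NSEC;
}

int main(void)
{
    struct clock_base b;
    capture_base(&b);

    uint64_t t1 = read_clock_ns(&b);
    uint64_t t2 = read_clock_ns(&b);
    printf("two back-to-back reads: %llu ns, %llu ns (delta %llu ns)\n",
           (unsigned long long)t1, (unsigned long long)t2,
           (unsigned long long)(t2 - t1));
    return 0;
}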
Originally posted by Zan Lynx View Post
I guess I won't bother trying to convince you. But you're wrong.
Personally, I'd go for 128 bit Planck time. With that our timestamps would be at the limit of the resolution of the universe itself.
Just store a 1024-bit timestamp since the start of the Universe, to be a cool kid.
For any bits below 100ns precision just randomize them, not like it makes a fucking difference, since it's literally just measurement noise.
But that makes you a cool kid, right? "oh look I can store picosecond precision I'm cool af"
Originally posted by Zan Lynx View Post
Do you even know how timers work?
You take the last timer read and apply the current TSC to it.
CPUs are running 5 cycles per nanosecond. So YES the timers can be that accurate.
Anything below 100ns is likely to just be statistical measurement noise. Wanna do an experiment and randomize those bits and see if anything breaks? It's literally random and nobody gives a shit. In fact there wouldn't be a difference anyway between randomizing them and an actual measurement.
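The "randomize the low bits" experiment described here could be sketched roughly like this; fuzz_below_100ns() is a made-up helper for illustration, not taken from any kernel or filesystem code:
Code:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Replace everything below 100 ns in a nanosecond timestamp with
 * random noise, as proposed above. Purely illustrative. */
static uint64_t fuzz_below_100ns(uint64_t ns)
{
    return (ns / 100) * 100 + (uint64_t)(rand() % 100);
}

int main(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    srand((unsigned)ts.tv_nsec);

    uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    printf("measured: %llu ns\n", (unsigned long long)ns);
    printf("fuzzed:   %llu ns\n", (unsigned long long)fuzz_below_100ns(ns));
    return 0;
}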
Originally posted by jabl View Post
Nanoseconds might be overkill per se, but OTOH second resolution isn't enough, and the next natural size up from 32 bits is 64, so you might as well use nanoseconds. Particularly as a lot of other timing stuff is using nanoseconds anyway, so less chance of messing up some conversion.
Yes, nanoseconds are overkill, but you've got 64 bits, so you might as well do something with them. You can either push the range out to an enormous size, or increase precision.
Nanoseconds + 500 years is a good compromise in both directions, because a filesystem should never need to exceed either of those limits within the currently foreseeable future.
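A quick back-of-the-envelope check of those figures; the December 1901 starting point is an assumption inferred from the 2486 date in the headline rather than something stated in this thread:
Code:
#include <stdio.h>

int main(void)
{
    /* How far does an unsigned 64-bit nanosecond counter reach? */
    const double ns_per_year = 365.25 * 24.0 * 3600.0 * 1e9;
    const double range_years = 18446744073709551615.0 / ns_per_year; /* 2^64 - 1 */

    /* Prints roughly 584.5 years. */
    printf("64-bit nanosecond counter spans ~%.1f years\n", range_years);

    /* Assumption: the counter starts near the old signed 32-bit minimum,
     * December 1901 (effectively 1902), which is what would make the end
     * of the range land around the headline's year 2486. */
    printf("1902 + %d full years = around the year %d\n",
           (int)range_years, 1902 + (int)range_years);
    return 0;
}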