XFS File-System With Linux 5.10 Punts Year 2038 Problem To The Year 2486

  • #21
    Originally posted by Zan Lynx View Post

    Nanoseconds are barely enough. It is very important for build dependency tools like Make or Ninja that file times clearly show whether a file was created after its source file. Otherwise files either don't get rebuilt when needed, or files are rebuilt when they don't need to be.

    This isn't a real problem with complex builds, which might take half a second to produce their output, but every build also contains some small files that are essentially just copied into the output. A cached RAM-to-RAM file copy can be done in a few nanoseconds, especially once NVDIMM storage gets involved.
    Nanoseconds... the timers in a PC aren't even close to that accurate. If a build system wants to track file changes for speed it should do so with a small database rather than kludging the filesystem up with ridiculous precision to do it.
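
    To make the dependency check concrete: here is a minimal sketch (not taken from Make or Ninja, and the file names are placeholders) of how a make-style tool compares the st_mtim timespecs that stat(2) reports on Linux. The nanosecond field is only consulted when both files land in the same whole second, which is exactly the copy-a-small-file case described above.

    /* Minimal sketch of a make-style "is the target older than its source?"
     * check using the nanosecond mtime fields from stat(2) on Linux.
     * File names are placeholders; real build tools layer caching on top. */
    #include <stdio.h>
    #include <sys/stat.h>

    /* Returns 1 if 'target' is missing or strictly older than 'source'. */
    static int needs_rebuild(const char *target, const char *source)
    {
        struct stat t, s;

        if (stat(target, &t) != 0)
            return 1;                      /* no output yet: rebuild */
        if (stat(source, &s) != 0)
            return 0;                      /* no source: nothing to do */

        if (t.st_mtim.tv_sec != s.st_mtim.tv_sec)
            return t.st_mtim.tv_sec < s.st_mtim.tv_sec;
        /* Same second: only the nanosecond field can break the tie. */
        return t.st_mtim.tv_nsec < s.st_mtim.tv_nsec;
    }

    int main(void)
    {
        printf("rebuild foo.o? %d\n", needs_rebuild("foo.o", "foo.c"));
        return 0;
    }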



    • #22
      I also think that nanosecond resolution is overkill, that your stated use case of build systems is totally made up, and that timestamps have nothing to do with dependencies.



      • #23
        Nanoseconds might be overkill per se, but OTOH second resolution isn't enough, and the next natural size up from 32 bits is 64, so you might as well use nanoseconds. Particularly as a lot of other timing code uses nanoseconds anyway, so there's less chance of messing up a conversion.



        • #24
          Originally posted by cb88 View Post

          Nanoseconds... the timers in a PC aren't even close to that accurate. If a build system wants to track file changes for speed it should do so with a small database rather than kludging the filesystem up with ridiculous precision to do it.
          Do you even know how timers work?

          You take the last timer read and apply the current TSC to it.

          CPUs are running 5 cycles per nanosecond. So YES the timers can be that accurate.
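
          For what it's worth, the mechanism looks roughly like this from user space. A simplified sketch, x86 only: the 5 cycles per nanosecond constant is just the figure quoted above, whereas the kernel actually calibrates the TSC frequency and uses scaled integer math.

          /* Rough illustration of "last timer read + TSC delta", as described
           * above. x86 only; ASSUMED_CYCLES_PER_NS is the 5 GHz figure from the
           * post, not a measured value. */
          #include <stdint.h>
          #include <stdio.h>
          #include <time.h>
          #include <x86intrin.h>           /* __rdtsc() on GCC/Clang */

          #define ASSUMED_CYCLES_PER_NS 5  /* ~5 GHz, per the post above */

          int main(void)
          {
              struct timespec base;
              clock_gettime(CLOCK_MONOTONIC, &base);   /* the "last timer read" */
              uint64_t base_tsc = __rdtsc();

              for (volatile int i = 0; i < 1000000; i++)
                  ;                                    /* ... time passes ... */

              /* Extrapolate "now" from the base reading plus elapsed TSC cycles. */
              uint64_t cycles = __rdtsc() - base_tsc;
              uint64_t now_ns = (uint64_t)base.tv_sec * 1000000000ull
                              + (uint64_t)base.tv_nsec
                              + cycles / ASSUMED_CYCLES_PER_NS;

              printf("elapsed: %llu cycles (~%llu ns at the assumed clock)\n",
                     (unsigned long long)cycles,
                     (unsigned long long)(cycles / ASSUMED_CYCLES_PER_NS));
              printf("extrapolated monotonic time: %llu ns\n",
                     (unsigned long long)now_ns);
              return 0;
          }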



          • #25
            Originally posted by Zan Lynx View Post

            Do you even know how timers work?

            You take the last timer read and apply the current TSC to it.

            CPUs are running 5 cycles per nanosecond. So YES the timers can be that accurate.
            That's typical lame-brained programmer thinking... reading the clock that way costs more time than the granularity of the timestamp itself, so probably 99% of developers get it wrong. There is definitely vastly more jitter than that in the IO subsystem, to the point that sub-microsecond timestamps are entirely pointless.
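
            Easy enough to check rather than argue about: time a batch of back-to-back clock_gettime() calls and look at the average cost of a read and the smallest step the clock actually advances by. A quick sketch; the numbers will vary with the machine and the selected clocksource.

            /* Measure the cost and observable granularity of clock_gettime(). */
            #include <stdint.h>
            #include <stdio.h>
            #include <time.h>

            static uint64_t to_ns(const struct timespec *ts)
            {
                return (uint64_t)ts->tv_sec * 1000000000ull + (uint64_t)ts->tv_nsec;
            }

            int main(void)
            {
                enum { SAMPLES = 1000000 };
                struct timespec prev, cur;
                uint64_t min_step = UINT64_MAX, total = 0;

                clock_gettime(CLOCK_MONOTONIC, &prev);
                for (int i = 0; i < SAMPLES; i++) {
                    clock_gettime(CLOCK_MONOTONIC, &cur);
                    uint64_t d = to_ns(&cur) - to_ns(&prev);
                    if (d > 0 && d < min_step)
                        min_step = d;
                    total += d;
                    prev = cur;
                }

                printf("average cost per clock_gettime(): %.1f ns\n",
                       (double)total / SAMPLES);
                printf("smallest non-zero step observed:  %llu ns\n",
                       (unsigned long long)min_step);
                return 0;
            }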



            • #26
              Originally posted by Zan Lynx View Post
              I guess I won't bother trying to convince you. But you're wrong.

              Personally, I'd go for 128-bit Planck time. With that, our timestamps would be at the limit of the resolution of the universe itself.
              You can already do that easily, dude.

              Just store a 1024-bit timestamp since the start of the Universe, to be a cool kid.

              For any bits below 100ns precision, just randomize them; it's not like it makes a fucking difference, since it's literally just measurement noise.

              But that makes you a cool kid, right? "Oh look, I can store picosecond precision, I'm cool af."



              • #27
                Originally posted by Zan Lynx View Post
                Do you even know how timers work?

                You take the last timer read and apply the current TSC to it.

                CPUs are running 5 cycles per nanosecond. So YES the timers can be that accurate.
                CPUs are also out of order, so your timestamp is way off compared to what you expect out of it. In fact, this could easily be a security hole (à la Spectre) if it truly were nanosecond accurate, which it likely isn't.

                Anything below 100ns is likely to be just statistical measurement noise. Wanna do an experiment and randomize those bits and see if anything breaks? It's literally random and nobody gives a shit. In fact, there wouldn't be any difference between randomizing them and an actual measurement.
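
                (For completeness: when people do use the TSC for measurement, the usual workaround for the reordering problem is to fence the reads, e.g. the common lfence/rdtscp idiom sketched below. That only bounds CPU reordering; it does nothing about interrupt or scheduler noise.)

                /* Sketch of the usual fencing idiom for ordered TSC reads on x86:
                 * lfence + rdtsc at the start, rdtscp + lfence at the end, so the
                 * reads cannot drift around the code being measured. */
                #include <stdint.h>
                #include <stdio.h>
                #include <x86intrin.h>

                static inline uint64_t tsc_start(void)
                {
                    _mm_lfence();                 /* earlier instructions finish first */
                    return __rdtsc();
                }

                static inline uint64_t tsc_stop(void)
                {
                    unsigned int aux;
                    uint64_t t = __rdtscp(&aux);  /* waits for prior instructions */
                    _mm_lfence();                 /* keeps later ones from moving up */
                    return t;
                }

                int main(void)
                {
                    uint64_t t0 = tsc_start();
                    for (volatile int i = 0; i < 1000; i++)
                        ;                         /* code being measured */
                    uint64_t t1 = tsc_stop();

                    printf("measured region: %llu TSC cycles\n",
                           (unsigned long long)(t1 - t0));
                    return 0;
                }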



                • #28
                  I'd better set an Outlook reminder.



                  • #29
                    Originally posted by jabl View Post
                    Nanoseconds might be overkill per se, but OTOH second resolution isn't enough, and the next natural size up from 32 bits is 64, so you might as well use nanoseconds. Particularly as a lot of other timing code uses nanoseconds anyway, so there's less chance of messing up a conversion.
                    This is the correct answer.

                    Yes, nanoseconds are overkill, but you've got 64 bits, so you might as well do something with them. You can either push out the range to an enormous size or increase precision.

                    Nanoseconds + 500 years is a good compromise in both directions, because a filesystem should never need to exceed either of those limits within the currently foreseeable future.
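
                    For the curious, the arithmetic works out neatly. If I understand the new on-disk encoding correctly, it's an unsigned 64-bit nanosecond counter that starts at the old 32-bit minimum (December 1901), which is where the year 2486 in the headline comes from:

                    /* Back-of-the-envelope check of the "nanoseconds + ~500 years" trade-off:
                     * 2^64 nanoseconds is roughly 584 years, and counting from the old
                     * signed-32-bit floor (December 1901) that runs out around 2486. */
                    #include <stdio.h>

                    int main(void)
                    {
                        const double sec_per_year = 86400.0 * 365.25;         /* Julian year */
                        const double start = -2147483648.0;                   /* old 32-bit floor, seconds before 1970 */
                        const double range_s = 18446744073709551615.0 / 1e9;  /* 2^64 - 1 ns, in seconds */

                        printf("a 64-bit nanosecond counter spans about %.1f years\n",
                               range_s / sec_per_year);
                        printf("counted from the old 32-bit minimum it runs out around the year %.0f\n",
                               1970.0 + (start + range_s) / sec_per_year);
                        return 0;
                    }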



                    • #30
                      Originally posted by smitty3268 View Post
                      because a filesystem should never need to exceed either of those limits within the currently foreseeable future.
                      That's what everyone said in 1980 with 32-bit timestamps.

