XFS File-System With Linux 5.10 Punts Year 2038 Problem To The Year 2486


  • #11
    Would have liked year 80486 for trolling Intel...



    • #12
      Originally posted by jacob View Post
      Is this a backward compatible change or does it require reformatting?
      As I recall, the stated plan was that it will not require a reformat, just an upgrade of the on-disk filesystem's feature flags (using one of the tools from the xfsprogs utilities). That is not exactly backward compatible: old kernels without support for the new (bigtime) on-disk format will not be able to mount such a filesystem once the feature is enabled, and as I recall a filesystem cannot be downgraded to pre-bigtime after that. But it does let you transition a filesystem in place once you have the appropriate kernel support and no longer need to mount it on older systems.
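      A minimal sketch of that in-place upgrade path. This assumes a recent xfsprogs that supports feature upgrades via `xfs_admin -O` (added well after the 5.10 kernel release), and the device/mount names are placeholders; check your distribution's xfsprogs version before relying on it:

      ```shell
      # Inspect the current feature flags; bigtime=1 means the new
      # timestamp format is already enabled on this filesystem.
      xfs_info /mnt/data | grep -o 'bigtime=[01]'

      # Upgrade an *unmounted* filesystem in place (irreversible:
      # older kernels will no longer be able to mount it afterwards).
      umount /mnt/data
      xfs_admin -O bigtime=1 /dev/sdXN
      ```
      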



      • #13
        And then engineers forgot about the hard limit on the date, which led to a bug allowing da Vinci to become evil, acquire the holo-emitter to escape the holodeck, and start the Hologram Wars with all the EMH Mk 1s retired into labor.
        Last edited by skeevy420; 15 October 2020, 08:57 AM.



        • #14
          Originally posted by JackLilhammers View Post

          Out of pure curiosity, are nanoseconds really needed? Why aren't microseconds enough?
          Nanoseconds are barely enough. It is very important for build dependency tools like Make or Ninja that file times clearly show when a file was created after its source file. Otherwise files either don't get rebuilt when needed, or files are rebuilt when they don't need to be.

          This isn't a real problem with complex builds which might take half a second to produce the output but every build also contains some small files which are essentially just copied into the output. A cached RAM to RAM file copy can be done in a few nanoseconds especially once NVDIMM storage gets involved.
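          To illustrate the staleness check that Make-style tools perform, here is a small Python sketch (not from any actual build tool; the filenames are made up for the demo). It compares `st_mtime_ns`, the integer nanosecond timestamp, rather than the float `st_mtime`, which loses precision:

          ```python
          import os
          import tempfile

          def is_stale(target: str, source: str) -> bool:
              """Return True if target must be rebuilt: it is missing, or its
              modification time is not strictly newer than the source's.
              Uses st_mtime_ns (integer nanoseconds) to avoid the rounding
              that the floating-point st_mtime field introduces."""
              try:
                  target_ns = os.stat(target).st_mtime_ns
              except FileNotFoundError:
                  return True
              return target_ns <= os.stat(source).st_mtime_ns

          # Two files written back to back: whether the "object" looks stale
          # depends entirely on the filesystem's timestamp resolution.
          with tempfile.TemporaryDirectory() as d:
              src = os.path.join(d, "main.c")
              obj = os.path.join(d, "main.o")
              open(src, "w").close()   # "source" written first
              open(obj, "w").close()   # "object" written immediately after
              print(is_stale(obj, src))
          ```

          On a filesystem with only one-second timestamps the two files above would always share a timestamp and the target would always look stale, forcing a rebuild; finer resolution shrinks that window.
          
          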



          • #15
            I have seen good comments regarding this filesystem and also the skill of its developers.

            Does anyone know how it compares with other Linux filesystems when used in a personal Desktop/Workstation use case?



            • #16
              Originally posted by jacob View Post
              Is this a backward compatible change or does it require reformatting?
              It requires a Flux Capacitor to go backwards.



              • #17
                Originally posted by Zan Lynx View Post
                Nanoseconds are barely enough. It is very important for build dependency tools like Make or Ninja that file times clearly show when a file was created after its source file. Otherwise files either don't get rebuilt when needed, or files are rebuilt when they don't need to be.

                This isn't a real problem with complex builds which might take half a second to produce the output but every build also contains some small files which are essentially just copied into the output. A cached RAM to RAM file copy can be done in a few nanoseconds especially once NVDIMM storage gets involved.
                I think that's bullshit. Also RAM to RAM has nothing to do with NVDIMM since it's RAM to RAM? And you can easily do that with tmpfs. tmpfs btw is not XFS. nanosecond precision is the biggest bullshit in filesystems.



                • #18
                  I guess they didn't go with an epoch counter? https://lkml.org/lkml/2014/6/2/793



                  • #19
                    Originally posted by Weasel View Post
                    I think that's bullshit. Also RAM to RAM has nothing to do with NVDIMM since it's RAM to RAM? And you can easily do that with tmpfs. tmpfs btw is not XFS. nanosecond precision is the biggest bullshit in filesystems.
                    I guess I won't bother trying to convince you. But you're wrong.

                    Personally, I'd go for 128 bit Planck time. With that our timestamps would be at the limit of the resolution of the universe itself.



                    • #20
                      Originally posted by Zan Lynx View Post

                      I guess I won't bother trying to convince you. But you're wrong.

                      Personally, I'd go for 128 bit Planck time. With that our timestamps would be at the limit of the resolution of the universe itself.
                      I'm expecting my modded thinkpad to last longer than that. Suboptimal solution clearly. /s

