Testing The First PCIe Gen 5.0 NVMe SSD On Linux Has Been Disappointing


  • #21
    Originally posted by schmidtbag View Post
    Same here. In real-world tests, there's very little benefit in having anything faster than SATA. Sure, maybe a game takes another 3 seconds to load, or if you do UHD video editing you're definitely going to need a faster drive, but otherwise I'd rather just get what's cheapest and doesn't need a fan to cool it.
    I don't know about you, but after trying out running larger applications and building larger projects off a PCIe SSD back in 2014, I don't think I want to go back. Sure, it's a quality-of-life kind of thing, as it won't reduce compile times from bathroom breaks to the blink of an eye compared to a SATA SSD, but I still don't want to go back. Almost halving load and compile times for big projects is nothing to sneeze at.

    Also, with games getting pretty big these days, the differences in load times aren't that small. We're talking reductions from around 50 seconds on a SATA SSD to less than 20 on a good PCIe one. With games also doing more and more streaming of higher-LoD assets and whole parts of the map, a high-speed SSD reduces the pop-in effect this kind of streaming is vulnerable to. What's more, with previous-generation consoles now starting to be abandoned by developers, more and more games written specifically for current-gen consoles are coming out. All of which* have a fast PCIe SSD.



    • #22
      Do these tests ignore the cache RAM on the SSD? I have a very lame SSD from Intel; once the buffer is full, the speeds drop extremely low, even lower than the SD card on the Steam Deck.
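
      For illustration, here's a rough sketch of how one could check that on Linux: keep writing far past the drive's pseudo-SLC cache and log throughput per gigabyte. The path and sizes below are placeholders, not anything from the article's test setup.

      # Hypothetical sketch: write incompressible data and watch the speed;
      # many consumer SSDs hold full speed for the first tens of GB, then
      # drop sharply once the pseudo-SLC cache is exhausted.
      import os
      import time

      PATH = "/mnt/testdrive/fillfile.bin"   # placeholder path on the SSD under test
      CHUNK = 64 * 1024 * 1024               # 64 MiB per write
      TOTAL = 200 * 1024**3                  # 200 GiB, well past typical cache sizes

      buf = os.urandom(CHUNK)                # incompressible data defeats controller compression
      written = 0
      t_mark = time.time()

      with open(PATH, "wb", buffering=0) as f:
          while written < TOTAL:
              f.write(buf)
              written += CHUNK
              if written % (1024**3) == 0:   # report once per GiB
                  f.flush()
                  os.fsync(f.fileno())       # push data to the drive, not just the page cache
                  now = time.time()
                  mb_s = (1024**3) / (now - t_mark) / 1e6
                  print(f"{written // 1024**3:4d} GiB: {mb_s:7.1f} MB/s")
                  t_mark = now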



      • #23
        Originally posted by L_A_G View Post
        I don't know about you, but after trying out running larger applications and building larger projects off a PCIe SSD back in 2014, I don't think I want to go back. Sure, it's a quality-of-life kind of thing, as it won't reduce compile times from bathroom breaks to the blink of an eye compared to a SATA SSD, but I still don't want to go back. Almost halving load and compile times for big projects is nothing to sneeze at.
        Of course, there will be a handful of use-cases where the extra bandwidth will make a big difference. For everyday non-productivity use, SATA is fine.
        Also, with games getting pretty big these days, the differences in load times aren't that small. We're talking reductions from around 50 seconds on a SATA SSD to less than 20 on a good PCIe one. With games also doing more and more streaming of higher-LoD assets and whole parts of the map, a high-speed SSD reduces the pop-in effect this kind of streaming is vulnerable to. What's more, with previous-generation consoles now starting to be abandoned by developers, more and more games written specifically for current-gen consoles are coming out. All of which* have a fast PCIe SSD.
        I haven't played any modern very demanding games so it's hard for me to speak on this. It's weird to me that there could be games that take 20 seconds to load on an NVMe disk. To me, that's a sign of a poorly designed game. CPU cycles are cheap these days - it's far better to compress game data. Not only does compression speed up load times, but it's cheaper to the user (since they don't have to use up so much disk space for a single game) and the game is faster to download. Even on 1Gbps connections, modern games would be annoyingly slow to download.
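
        To put rough numbers on that trade-off (all figures below are invented for illustration, not taken from the article): loading compressed assets wins whenever reading the smaller file plus decompressing it beats reading the raw data.

        # Toy back-of-the-envelope model of "compress the game data" vs. raw reads.
        def load_time(raw_gb, ratio, drive_mbps, decomp_mbps):
            raw = raw_gb * 1024                    # uncompressed asset size in MB
            compressed = raw / ratio               # on-disk size after compression
            uncompressed_load = raw / drive_mbps
            compressed_load = compressed / drive_mbps + raw / decomp_mbps
            return uncompressed_load, compressed_load

        # 20 GB of assets, 2:1 compression, SATA-class drive, fast LZ-style decompressor
        print(load_time(raw_gb=20, ratio=2.0, drive_mbps=550, decomp_mbps=2000))
        # roughly (37 s, 29 s): compression helps most when the drive is the bottleneck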



        • #24
          Originally posted by schmidtbag View Post
          Of course, there will be a handful of use-cases where the extra bandwidth will make a big difference. For everyday non-productivity use, SATA is fine.
          Depending on how you define "productivity tasks", even one of those hybrids between an SSD and a mechanical HDD will do fine. I'd also hardly call it a "handful of tasks" when it's basically everything that involves reading and writing significant amounts of data. It's basically anything that goes beyond the remit of thin clients and similarly simple systems.

          I haven't played any modern very demanding games so it's hard for me to speak on this. It's weird to me that there could be games that take 20 seconds to load on an NVMe disk. To me, that's a sign of a poorly designed game. CPU cycles are cheap these days - it's far better to compress game data. Not only does compression speed up load times, but it's cheaper to the user (since they don't have to use up so much disk space for a single game) and the game is faster to download. Even on 1Gbps connections, modern games would be annoyingly slow to download.
          Game developers have always practiced fairly strong compression, even back in the 8-bit days. The PS5 has dedicated hardware for it, and the Xbox Series, like most consoles from the last couple of decades, has dedicated libraries for it. There's a limit to how much you can practically compress the often hundreds of gigabytes of textures, meshes, shaders, etc. the art department conjures up for a AAA release these days. Much of it in multiple LoDs, I might add. AAA games have been 50+ GB for a decade already, and even that's only achievable through heavy compression.

          The only time developers have really had the ability to go crazy with game sizes was with PS3 exclusives on up to 50 GB discs, back when there wasn't a real need for games to get that big. Like early CD games, they used that disc space for huge amounts of pre-rendered cutscenes.



          • #25
            Originally posted by L_A_G View Post
            Depending on how you define "productivity tasks", even one of those hybrids between an SSD and a mechanical HDD will do fine. I'd also hardly call it a "handful of tasks" when it's basically everything that involves reading and writing significant amounts of data. It's basically anything that goes beyond the remit of thin clients and similarly simple systems.
            I'm saying it's only a handful of tasks that see a substantial enough performance increase that high-speed NVMe makes sense to get.
            Game developers have always practiced fairly strong compression, even back in the 8-bit days. The PS5 has dedicated hardware for it, and the Xbox Series, like most consoles from the last couple of decades, has dedicated libraries for it. There's a limit to how much you can practically compress the often hundreds of gigabytes of textures, meshes, shaders, etc. the art department conjures up for a AAA release these days. Much of it in multiple LoDs, I might add. AAA games have been 50+ GB for a decade already, and even that's only achievable through heavy compression.
            Indeed but something you might not be aware of is consoles tend to have much smaller install sizes. This is primarily for 2 reasons:
            1. Some game devs don't compress [as much] for PC, even though they will for consoles.
            2. Consoles only need a single set of assets since they aren't expected to swap out of them.



            • #26
              Originally posted by schmidtbag View Post

              I haven't played any modern very demanding games so it's hard for me to speak on this. It's weird to me that there could be games that take 20 seconds to load on an NVMe disk. To me, that's a sign of a poorly designed game. CPU cycles are cheap these days - it's far better to compress game data. Not only does compression speed up load times, but it's cheaper to the user (since they don't have to use up so much disk space for a single game) and the game is faster to download. Even on 1Gbps connections, modern games would be annoyingly slow to download.
              Well, the PS5 shows that this is largely a software problem: those extra NVMe speeds can easily be used in applications such as games, but you need to redesign all of the libraries/interfaces to take advantage of that (along with tricks like reading data directly from NVMe to the graphics card).

              Not sure what's happening with DirectStorage or other techniques, but a large part of why you are not seeing a big difference past a certain point is that a lot of software is still written using APIs built around hard-disk-era assumptions, where the bottlenecks are completely different (or not there at all).
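
              A crude way to see that effect, independent of DirectStorage (everything below is a made-up illustration, not anything from the article): read the same file with one blocking request at a time versus many requests in flight. On an NVMe drive the parallel version usually gets far closer to the rated bandwidth, because the drive needs a deep queue to shine.

              # Hypothetical sketch: queue depth 1 vs. many outstanding reads.
              # Drop the page cache between runs (echo 3 > /proc/sys/vm/drop_caches)
              # or use a file larger than RAM, otherwise you only measure memory.
              import os
              import time
              from concurrent.futures import ThreadPoolExecutor

              PATH = "/tmp/testfile.bin"   # placeholder: any large file on the NVMe drive
              CHUNK = 1 << 20              # 1 MiB per request

              def read_serial(path):
                  # One outstanding request at a time, the classic HDD-era pattern.
                  with open(path, "rb", buffering=0) as f:
                      while f.read(CHUNK):
                          pass

              def read_parallel(path, workers=16):
                  # Many outstanding requests let the NVMe controller pipeline them.
                  size = os.path.getsize(path)
                  fd = os.open(path, os.O_RDONLY)
                  try:
                      offsets = range(0, size, CHUNK)
                      with ThreadPoolExecutor(max_workers=workers) as pool:
                          list(pool.map(lambda off: os.pread(fd, CHUNK, off), offsets))
                  finally:
                      os.close(fd)

              for fn in (read_serial, read_parallel):
                  t0 = time.time()
                  fn(PATH)
                  print(f"{fn.__name__}: {time.time() - t0:.2f} s")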



              • #27
                Turn-based strategy games are notorious for having each of their assets sitting in its own file. The load times on these relatively small games are way out of proportion considering their overall install size and resource demands.



                • #28
                  Originally posted by schmidtbag View Post
                  I'm saying it's only a handful of tasks that see a substantial enough performance increase that high-speed NVMe makes sense to get.
                  When we're talking about basically anything that involves heavy disk I/O, then it's absolutely not a "handful" of tasks. Maybe I was a little too subtle in my sarcasm there; we are talking almost all serious software development outside of embedded and web dev, along with pretty much everything that used to be done on a workstation.

                  Indeed but something you might not be aware of is consoles tend to have much smaller install sizes. This is primarily for 2 reasons:
                  1. Some game devs don't compress [as much] for PC, even though they will for consoles.
                  2. Consoles only need a single set of assets since they aren't expected to swap out of them.
                  1. Not only are there no AAA "PC" devs these days, just developers who target both, but what you're claiming isn't true either. Consoles used to have far less memory than PCs, so games used to only include the highest-LoD (Level of Detail) assets on PC. Some games offered these as optional "HD texture packs" so as not to bloat up everyone's installs. Nowadays consoles have relatively similar amounts of memory to PCs, so their games are about the same size as the PC versions of the same games.

                  2. Consoles absolutely do use multiple LoDs of assets and have done so for the last 20 years. It's a damn near ubiquitous optimization technique that was originally developed in the 1970s, and its first notable console uses were in PlayStation 1 games like Crash Bandicoot 3 (1998) and the original Spyro the Dragon games (1998-2000). On PC it first showed up in the original Unreal Tournament (1999) and Serious Sam (2001).



                  • #29
                    Originally posted by L_A_G View Post
                    When we're talking about basically anything that involves heavy disk I/O, then it's absolutely not a "handful" of tasks. Maybe I was a little too subtle in my sarcasm there; we are talking almost all serious software development outside of embedded and web dev, along with pretty much everything that used to be done on a workstation.
                    I can't tell if you're just too literal, hyper-focused on what you're trying to say, or you're being deliberately argumentative. Perhaps it's just me not writing clearly enough. Ultimately, my point is there are only a handful of different/unrelated tasks that benefit from NVMe. So, I'm not talking about compiling one software project vs another.
                    In other words, you can probably count on both your hands the kinds of workstation workloads that demand high bandwidth drives, such as video editing, high-definition raw data recording, large databases without indexes, code compiling, and so on. If you think there's a lot more than a dozen different tasks, you might be entering oddly specific niches or something that warrants a server rather than a desktop PC.
                    1. Not only are there no AAA "PC" devs these days, just developers who target both, but what you're claiming isn't true either. Consoles used to have far less memory than PCs, so games used to only include the highest-LoD (Level of Detail) assets on PC. Some games offered these as optional "HD texture packs" so as not to bloat up everyone's installs. Nowadays consoles have relatively similar amounts of memory to PCs, so their games are about the same size as the PC versions of the same games.
                    Again, you're taking things needlessly literally, and you don't know the entire game development industry if you insist I'm wrong about this. Call of Duty Black Ops Cold War is the most egregious example where the PC version was quite a lot larger. But there are other examples like Horizon Zero Dawn, Red Dead Redemption 2, Final Fantasy XV, Middle Earth Shadow of War, and so on. In a lot of cases, we're not just talking a GB here and there, but sometimes tens of GB.
                    2. Consoles absolutely do use multiple LoDs of assets and have done so for the last 20 years. It's a damn near ubiquitous optimization technique that was originally developed in the 1970s, and its first notable console uses were in PlayStation 1 games like Crash Bandicoot 3 (1998) and the original Spyro the Dragon games (1998-2000). On PC it first showed up in the original Unreal Tournament (1999) and Serious Sam (2001).
                    *sigh* I'm aware. I'm not referring to the techniques used to lower detail levels of things, for reasons such as distance, because that only requires specific assets to be downscaled. Some assets may remain at/near their max detail for the entirety of them being rendered. There are also plenty of games that can't take advantage of such techniques, such as practically any game with a fixed camera distance.
                    Last edited by schmidtbag; 07 March 2023, 11:18 AM.



                    • #30
                      Originally posted by schmidtbag View Post
                      I can't tell if you're just too literal, hyper-focused on what you're trying to say, or you're being deliberately argumentative. Perhaps it's just me not writing clearly enough. Ultimately, my point is there are only a handful of different/unrelated tasks that benefit from NVMe. So, I'm not talking about compiling one software project vs another.
                      Looks like I'll have to re-phrase my point again: no. There is a whole host of different use cases where high-bandwidth storage is useful. Even just working on larger projects in a compiled language like C++ is far from a niche use. On top of the examples you've brought up, there are major things like modern-day CAD/CAM work (those are fairly big data sets these days), any sort of modelling, simulation and data visualization work (which again works on very large datasets), and 3D art/asset creation, just to name major categories I can think of off the top of my head.

                      Again, you're taking things needlessly literally, and you don't know the entire game development industry if you insist I'm wrong about this. Call of Duty Black Ops Cold War is the most egregious example where the PC version was quite a lot larger. But there are other examples like Horizon Zero Dawn, Red Dead Redemption 2, Final Fantasy XV, Middle Earth Shadow of War, and so on. In a lot of cases, we're not just talking a GB here and there, but sometimes tens of GB.
                      All of which are previous-generation titles and, as I already explained to you, contain additional versions/LoDs of assets that aren't in the console versions of those games. Just look at the backport of the enhanced PC version of GTA5 to the current-gen consoles (PlayStation 5 and Xbox Series). It's about 87 GB, or twice the size of the previous-generation (PlayStation 4 and Xbox One) release. You can see a similar, albeit not as extreme, case in the PS4/XB1 versions of Cyberpunk 2077 versus the enhanced PS5/XBS version.

                      *sigh* I'm aware. I'm not referring to the techniques used to lower detail levels of things, for reasons such as distance, because that only requires specific assets to be downscaled. Some assets may remain at/near their max detail for the entirety of them being rendered. There are also plenty of games that can't take advantage of such techniques, such as practically any game with a fixed camera distance.
                      Uuuh... Games don't down- or upscale assets on the fly and never have. They contain multiple copies of the same high quality base asset that have been progressively more downscaled in terms of texture quality and polygon count. The few games that can't take advantage of this technique are also generally smaller (in terms of development team and disc space use) indie titles and mobile games. Even fairly heavily zoomed in platformers like the aforementioned Crash Bandicoot 3 and Spyro took advantage of LoDs for objects in the background/distance.
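
                      As a rough sketch of what that looks like in practice (all names and thresholds below are invented), the engine simply picks one of the pre-baked copies by camera distance; nothing is rescaled at runtime:

                      # Hypothetical LoD selection: several pre-baked variants per asset,
                      # chosen by distance, never rescaled on the fly.
                      from dataclasses import dataclass

                      @dataclass
                      class AssetVariant:
                          file: str            # which copy to stream from disk
                          triangles: int
                          max_distance: float  # farthest distance this copy is used at

                      ROCK_LODS = [
                          AssetVariant("rock_lod0.mesh", 50_000, 15.0),
                          AssetVariant("rock_lod1.mesh", 12_000, 60.0),
                          AssetVariant("rock_lod2.mesh", 2_000, 250.0),
                          AssetVariant("rock_lod3.mesh", 300, float("inf")),
                      ]

                      def pick_lod(variants, distance):
                          # Variants are ordered nearest to farthest; take the first
                          # whose range covers the current camera distance.
                          for v in variants:
                              if distance <= v.max_distance:
                                  return v
                          return variants[-1]

                      print(pick_lod(ROCK_LODS, 8.0).file)    # rock_lod0.mesh
                      print(pick_lod(ROCK_LODS, 120.0).file)  # rock_lod2.mesh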

