Kioxia KCD8XPUG1T92 CD8P-R & KCMYXVUG3T20 CM7-V PCIe 5.0 SSDs


  • ssokolow
    replied
    Originally posted by Weasel View Post
    I don't know what the angled insert's pressure has to do with a proper connection, though; nothing else needs it. Imagine having to insert the GPU at an angle and then press it down while holding it. Yikes.
    I imagine the reasoning is more related to why CPUs use ZIF sockets: M.2 drives look kinda fragile.



  • Weasel
    replied
    Originally posted by F.Ultra View Post
    Having to push it down ensures proper connectivity due to the pressure; these are, after all, drives meant to be mounted and not constantly swapped like U.2 drives. Even with heatsinks, NVMe drives take up less space than the 2.5" ones, and you simply won't see a motherboard-mounted U.2/U.3 drive the way you do with M.2, so it's not surprising at all that motherboards use M.2 instead of U.2/U.3.

    The use of a tiny screw vs. the modern latch was a major design flaw, yes, but the latch was an easy fix.
    I mean, of course you can't find desktop motherboards with U.2; that was my whole complaint, no?

    I don't know what the angled insert's pressure has to do with a proper connection, though; nothing else needs it. Imagine having to insert the GPU at an angle and then press it down while holding it. Yikes.



  • F.Ultra
    replied
    Originally posted by Weasel View Post
    The angled insert, and the pathetic tiny screw it requires because you still have to push the drive down against its "natural" insertion angle, is legit the most brain-damaged design in the history of computing interconnects. So much so that now some of them are screwless and it's a "new feature", you know. If it had been designed properly from the get-go we wouldn't need such band-aid "features".

    And yes, as others said, a heatsink has to be added and heat dissipation is terrible. It's not that tiny anymore with a heatsink either, so it loses its only supposed advantage.

    Maybe it wasn't a problem with Gen 3 drives, but we're at Gen 5 now; no heatsink is suicide. So size is not an argument anymore.
    Having to push it down ensures proper connectivity due to the pressure; these are, after all, drives meant to be mounted and not constantly swapped like U.2 drives. Even with heatsinks, NVMe drives take up less space than the 2.5" ones, and you simply won't see a motherboard-mounted U.2/U.3 drive the way you do with M.2, so it's not surprising at all that motherboards use M.2 instead of U.2/U.3.

    The use of a tiny screw vs. the modern latch was a major design flaw, yes, but the latch was an easy fix.



  • Weasel
    replied
    Originally posted by F.Ultra View Post
    It's because M.2 is good enough, is smaller, and doesn't require 12 V, and the angled insert is perfect for the small NVMe drives.
    The angled insert, and the pathetic tiny screw it requires because you still have to push the drive down against its "natural" insertion angle, is legit the most brain-damaged design in the history of computing interconnects. So much so that now some of them are screwless and it's a "new feature", you know. If it had been designed properly from the get-go we wouldn't need such band-aid "features".

    And yes, as others said, a heatsink has to be added and heat dissipation is terrible. It's not that tiny anymore with a heatsink either, so it loses its only supposed advantage.

    Maybe it wasn't a problem with Gen 3 drives, but we're at Gen 5 now; no heatsink is suicide. So size is not an argument anymore.



  • torsionbar28
    replied
    Originally posted by dnebdal View Post
    M.2 is tiny, and that means you can mount the drives directly to a laptop motherboard - and fit two or three on a full-size ATX motherboard. It's a real benefit.
    That's the problem. They're so tiny that they become thermally constrained even at consumer-grade power levels. Adding a heatsink often interferes with other components, blocks PCIe card insertion, etc. The power/cooling situation around M.2 is more suited to laptop use than anything else; it's a performance loser on the desktop. Contrast that with U.2/U.3, which can consume 25 W of power and has no challenges around cooling, no aftermarket heatsink nonsense.
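    (A rough back-of-the-envelope comparison, not from the thread: the ~11 W Gen 5 M.2 draw and the nominal form-factor dimensions below are assumptions, while the 25 W figure is the one quoted above. It only illustrates why the small board runs into thermal trouble despite pulling less total power.)

    ```python
    # Back-of-the-envelope power density: M.2 2280 vs. 2.5" U.2/U.3.
    # Assumed: ~11 W peak for a Gen 5 M.2 drive; 25 W is the figure from the post above.
    # Areas are nominal form-factor footprints (one face), not real heat-spreading surfaces.

    m2_area_cm2 = 2.2 * 8.0    # M.2 2280: 22 mm x 80 mm board
    u2_area_cm2 = 7.0 * 10.0   # 2.5" 15 mm U.2/U.3: ~70 mm x 100 mm case, one face

    m2_watts = 11.0            # assumed Gen 5 M.2 peak draw
    u2_watts = 25.0            # U.2/U.3 figure quoted above

    print(f"M.2:  {m2_watts / m2_area_cm2:.2f} W/cm^2")  # ~0.62 W/cm^2
    print(f"U.2:  {u2_watts / u2_area_cm2:.2f} W/cm^2")  # ~0.36 W/cm^2
    ```

    Even with these rough numbers the M.2 stick lands around 0.6 W/cm² versus roughly 0.36 W/cm² for the 2.5" case, before counting the case's extra thermal mass.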



  • dnebdal
    replied
    Originally posted by Jonjolt View Post
    I really wish some of those enterprise connectivity options would become more mainstream.
    I don't know if any of them work at PCIe 5.0, but you can get fairly cheap PCIe cards that you plug a U.2 or U.3 drive into. I wish there were better solutions, but I have used two of those cards to add some U.3 drives to a workstation, and it works quite well. Shame the only other way is a trimode SAS controller and a backplane - you can't even use fan-out cables, because of some detail about PCIe clock signals.
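    (Side note, not from the thread: if you want to confirm what link a drive actually negotiated behind one of those adapter cards, Linux exposes it through sysfs. A minimal sketch, assuming a Linux box with NVMe drives and a kernel that provides the standard current_link_speed/current_link_width PCI attributes:)

    ```python
    # Print each NVMe controller's negotiated PCIe link speed and width via sysfs.
    # Assumes Linux; /sys/class/nvme/nvme*/device points at the underlying PCI device.
    import glob
    import os

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
        pci_dev = os.path.join(ctrl, "device")
        try:
            with open(os.path.join(pci_dev, "current_link_speed")) as f:
                speed = f.read().strip()
            with open(os.path.join(pci_dev, "current_link_width")) as f:
                width = f.read().strip()
        except OSError:
            continue  # not a PCIe-attached controller, or attribute missing
        print(f"{os.path.basename(ctrl)}: {speed}, x{width}")
    ```

    On a Gen 4 x4 link this prints something like "nvme0: 16.0 GT/s PCIe, x4", so you can tell at a glance whether the adapter or cabling dropped the link to a lower generation.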



  • dnebdal
    replied
    Originally posted by Weasel View Post
    I don't understand why M.2 is still used when U.2 exists. That stupid angled way to insert it is also laughably designed. WTF?
    M.2 is tiny, and that means you can mount the drives directly to a laptop motherboard - and fit two or three on a full-size ATX motherboard. It's a real benefit.



  • F.Ultra
    replied
    Originally posted by Weasel View Post
    I don't understand why M.2 is still used when U.2 exists. That stupid angled way to insert it is also laughably designed. WTF?
    It's because M.2 is good enough, is smaller, and doesn't require 12 V, and the angled insert is perfect for the small NVMe drives.



  • Weasel
    replied
    I don't understand why M.2 is still used when U.2 exists. That stupid angled way to insert it is also laughably designed. WTF?



  • Teggs
    replied
    The PCI-SIG increasing its tempo may be all well and good. Perhaps it was even necessary and will get manufacturers to move faster. But from here it looks like by the time PCIe 7 is ratified, PCIe 6 devices will be rare as hen's teeth, and when PCIe 8 is ratified, PCIe 7 devices simply won't exist. How far behind will things fall, and will it turn PCIe development into a farce? If you ratify a standard while no one can implement even the previous version in production, how can you know the standard is sound?

