Linux Thunderbolt Support Can Work On Arm Systems

  • #21
    Originally posted by starshipeleven View Post
    From what was described in those threads, there seems to be no protocol involved in the TBT header on that chipset. It's just a PCIe device doing its thing downstream. That's kind of surprising for an Intel pet technology like that.

    See this post: https://egpu.io/forums/postid/44262/
    Jumping pin 3 to VCC pin 5 seems to keep them awake for detection at boot, without any devices needing to be attached.

    and also this https://egpu.io/forums/postid/44564/

    Someone mentioned on a forum that to get their Titan Ridge card working with an Asus Intel X299 board, they had to set "GPIO3 Force Pwr" to True in the BIOS.
    Which is likely the same as what you've done.
    ---

    What is interesting is that Titan Ridge is always awake, unlike Alpine Ridge, and yet the force-power setting still helps at boot time.


    The header seems to just be a GPIO that lets the motherboard turn the controller on or off, and the jumper trick simply forces it "always on" by feeding VCC to that pin.

    So as long as it is powered on, the board firmware should find it (and whatever is connected to it) at boot. Maybe there is a UEFI driver blob, like for SAS HBA/RAID cards or GPUs.
    Thanks for all the info! I think I'll pick up a Titan Ridge and give it a shot on one of my POWER boxes...fingers crossed...
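
    For reference, here is a minimal sketch (assuming a Linux kernel with the in-tree thunderbolt driver loaded and sysfs mounted at /sys) for checking from userspace whether the controller actually got enumerated at boot; if the force-power trick works, the host entries should appear on the thunderbolt bus even with nothing plugged in:

#!/usr/bin/env python3
"""Rough check: did the Thunderbolt controller get enumerated at boot?

Assumes a Linux system with the in-kernel 'thunderbolt' driver loaded and
sysfs mounted at /sys; adjust paths if your setup differs.
"""
import os

TB_BUS = "/sys/bus/thunderbolt/devices"

def read_attr(dev_path, name):
    """Return a sysfs attribute as a stripped string, or None if absent."""
    try:
        with open(os.path.join(dev_path, name)) as f:
            return f.read().strip()
    except OSError:
        return None

def main():
    if not os.path.isdir(TB_BUS):
        print("No thunderbolt bus in sysfs -- controller not enumerated "
              "(or the thunderbolt driver is not loaded).")
        return
    for entry in sorted(os.listdir(TB_BUS)):
        dev = os.path.join(TB_BUS, entry)
        vendor = read_attr(dev, "vendor_name")
        device = read_attr(dev, "device_name")
        print(f"{entry}: vendor={vendor} device={device}")

if __name__ == "__main__":
    main()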



    • #22
      Originally posted by kcrudup View Post
      ... but it IS PCIe; technically you could put RAM there, but you'd need a PCIe DRAM controller too. I'm sure there's one out there.
      Yeah, I figured something common like that would indeed exist.

      However, I'm wondering: is it technically possible to share a single PCIe lane between multiple devices? I have been unable to find a clear-cut answer to that. It is implied that every PCIe card requires (at least) one exclusive lane, since it's apparently a point-to-point interface, unlike the bus architecture of conventional PCI. I didn't know that until I started googling and perusing Wikipedia on the matter. Seems like a step backward to me, at least in terms of flexibility and extensibility. But I'm sure there was a good reason for that.



      • #23
        Originally posted by SteamPunker View Post
        However, I'm wondering: is it technically possible to share a single PCIe lane between multiple devices?
        Yes, PCIe switches (often sold as "multipliers") exist, and they work much like a network switch. The affordable ones take one PCIe lane and split it into 2 to 4 lanes.
        They are/were mostly used for GPU mining, but you can also get ones that take a full x4 link or more, at much higher prices, as that's business hardware territory.
        For example this https://www.amazon.com/Express-Multi.../dp/B0167MCHI2 (note: it uses a USB 3.0 connector and cable to carry a PCIe x1 data lane, without power; this is NOT a USB 3.0 card).

        There are also splitters that break a single PCIe port into its individual lanes, for example an x16 card that splits into four independent x4 PCIe ports. These are mostly used for NVMe in servers, since a high-end NVMe SSD needs 4 lanes. Motherboards have also done this for ages to support SLI/Crossfire, "stealing" 8 PCIe lanes from the main PCIe slot to create another x8 slot, so you end up with two x8 PCIe slots.
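
        If you want to see where a switch or splitter puts things, here is a minimal sketch (assuming Linux with sysfs mounted at /sys) that prints each PCI device indented by how many bridges/switches sit above it:

#!/usr/bin/env python3
"""Print the PCI(e) device hierarchy from sysfs.

A rough sketch assuming Linux with sysfs mounted at /sys; each entry in
/sys/bus/pci/devices is a symlink into /sys/devices, and the nesting in
that path reflects which bridge or switch a device sits behind.
"""
import os
import re

PCI_DEVS = "/sys/bus/pci/devices"
BDF = re.compile(r"^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$")

def read_attr(path, name):
    try:
        with open(os.path.join(path, name)) as f:
            return f.read().strip()
    except OSError:
        return "?"

def main():
    rows = []
    for entry in os.listdir(PCI_DEVS):
        real = os.path.realpath(os.path.join(PCI_DEVS, entry))
        # Count how many PCI addresses appear in the resolved path:
        # more addresses means the device sits deeper behind bridges/switches.
        depth = sum(1 for part in real.split(os.sep) if BDF.match(part))
        rows.append((real, depth, entry))
    for real, depth, entry in sorted(rows):
        vendor = read_attr(os.path.join(PCI_DEVS, entry), "vendor")
        device = read_attr(os.path.join(PCI_DEVS, entry), "device")
        print("  " * (depth - 1) + f"{entry} [{vendor}:{device}]")

if __name__ == "__main__":
    main()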

        It is implied that every PCIe card requires (at least) one exclusive lane, since it's apparently a point-to-point interface, unlike the bus architecture of conventional PCI. I didn't know that until I started googling and perusing Wikipedia on the matter. Seems like a step backward to me, at least in terms of flexibility and extensibility.
        PCI was hot garbage precisely because it was not a point-to-point interface,
        because that means:
        - Shared bandwidth: if any of your cards is using a lot of bandwidth, all your other cards get less. This is one of the main reasons GPUs jumped to AGP as soon as possible; AGP gave them dedicated bandwidth, so they knew exactly what they could count on.
        - Everyone must run at the speed/mode of the slowest device. If one card in the bunch needed a lower PCI speed or mode, everyone else had to adapt. For example, if you had a PCI-X card (the server version of PCI: a longer slot with a 66 MHz clock and a 64-bit interface) that for some reason could only run in PCI mode, every PCI-X slot in your server dropped to PCI speed (halving the bandwidth). That would be a huge pain in a PCIe environment like today's, where plenty of cards are still PCIe 1.0 or 2.0 because that's really all they need; on a shared bus, every card would be forced down to PCIe 1.0 or 2.0 speed, which would be very bad. A rough bandwidth comparison is sketched below.
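
        To put rough numbers on it, here is a back-of-the-envelope comparison using the commonly quoted theoretical peaks (real-world throughput is lower): a shared 32-bit/33 MHz PCI bus tops out around 133 MB/s for every card combined, while each PCIe lane is dedicated bandwidth per device.

#!/usr/bin/env python3
"""Back-of-the-envelope bus bandwidth comparison (theoretical peaks).

Shared buses (PCI, PCI-X) split one figure among all devices; PCIe gives
each device its own dedicated lane(s). Numbers are the commonly quoted maxima.
"""

# Shared parallel buses: bytes/s = (bus width in bits / 8) * clock
pci_32_33 = (32 / 8) * 33e6        # ~133 MB/s, shared by every card on the bus
pcix_64_66 = (64 / 8) * 66e6       # ~533 MB/s, still shared

# PCIe: per-lane, per-direction, after line-coding overhead
pcie_per_lane = {
    "PCIe 1.0": 2.5e9 * 8 / 10 / 8,     # 2.5 GT/s, 8b/10b   -> ~250 MB/s
    "PCIe 2.0": 5.0e9 * 8 / 10 / 8,     # 5.0 GT/s, 8b/10b   -> ~500 MB/s
    "PCIe 3.0": 8.0e9 * 128 / 130 / 8,  # 8.0 GT/s, 128b/130b -> ~985 MB/s
}

print(f"PCI 32-bit/33 MHz (shared):   {pci_32_33 / 1e6:6.0f} MB/s total")
print(f"PCI-X 64-bit/66 MHz (shared): {pcix_64_66 / 1e6:6.0f} MB/s total")
for gen, bw in pcie_per_lane.items():
    print(f"{gen} (per lane, dedicated): {bw / 1e6:6.0f} MB/s")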

        Also, the ability to enlarge or shrink the PCIe connector (or the pin count, for embedded designs) depending on how many PCIe lanes you allocate is a massive saving in space, cost, and power budget.

        The only thing PCI had going for it was that it was better than ISA; PCIe is simply better on every metric.
        Last edited by starshipeleven; 21 May 2020, 07:00 PM.



        • #24
          Thanks for the clear explanation, @starshipeleven.



          • #25
            Originally posted by madscientist159 View Post

            Thanks for all the info! I think I'll pick up a Titan Ridge and give it a shot on one of my POWER boxes...fingers crossed...
            Keep us updated on whether it works; if our dual-G5 PowerMac (PCIe) hadn't died, I would have put the Gigabyte Titan Ridge card in there as well.

