AMD Ryzen/Zen Currently Doesn't Support Coreboot Today

Collapse
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • #31
    Originally posted by bridgman View Post
    Hopefully you remember your complete lack of interest in KMS at the time, and even pushing back on using drm for acceleration a bit earlier?
    Well, of course; this was 2007/2008, and at that exact time we were writing a graphics driver for everyone to use: a display driver that was portable, reliable, and happily ran on the then-supported SLE, RHEL, and Debian stable releases.

    KMS was also a piss-poor brainfart at the time. If keithp had not been hired into Intel, and someone with display experience had been hired instead, we would've had something close to atomic modesetting, with helper functions, in 2008. Instead, it was an idiotic copy of the limited set of my ideas that made it into RandR 1.2. It really was ridiculous.

    KMS only got pushed through for two reasons:
    1) Radeon KMS was suddenly important because it offered a means to sidestep AMD management when they told you and the forkers to play nice with respect to radeonhd. You got radeonhd to depend on more of AtomBIOS at that time, and then a new stick to beat us with was found. This was the first bigger user of KMS; nobody cared about the header files Jakob Bornecrantz did, or the early prototyping work jbarnes did for Intel. Radeon was the first big user, and it only happened because of politics and power play. The perfect reason to make a perfect technical solution. Not.
    2) Dave Airlie's power play with respect to DRM: piggybacking this code onto DRM (a mistake that still hurts KMS today), taking the back door into the kernel. Just like the power play surrounding the xf86-video-ati driver, forcing everyone to take it through the back door instead of starting cleanly from scratch with what obviously was a completely separate display driver.

    Having said all that: stop deflecting. Did you, or did you not, help hand kgrids to the forked driver weeks before you handed it to your supposed technology partner? That was the biggest smoking gun in all of your games, and it is what caused that [EXEC] to ask me to write the timeline email that I posted a few weeks ago.



    • #32
      bridgman libv


      At the end of all this, it comes down to how we make a way forward; the devil is in the details. Apart from the GPIO multiplexing stuff that partners would consider IP, how do a few PLL configuration registers and the simple stuff protect anyone? Why are those in AtomBIOS? Also, could you please point me to those native calls in DAL/DC you were talking about? I would be very interested to see them.

      Thanks so much!



      • #33
        Originally posted by funfunctor View Post
        At the end of all this, it comes down to how we make a way forward; the devil is in the details.
        AFAIK the discussion is more about "who said what" nearly a decade ago than about what we should be doing today, although we could spawn a side-thread about what might be possible today in that area.

        Originally posted by funfunctor View Post
        Apart from the GPIO multiplexing stuff that partners would consider IP, how do a few PLL configuration registers and the simple stuff protect anyone? Why are those in AtomBIOS?
        I believe libv's complaint was the opposite - that the PLL details were *not* in the AtomBIOS tables while I had been told they *were*.

        It wasn't the registers themselves AFAIK, it was how to calculate the divider and loop filter values that were programmed into those registers. They were supposed to be in AtomBIOS (and I was told they were) because they would have been generally useful.

        Don't remember the details but I imagine the BIOS tables only contained "program these values for this mode" lists which were not much help for implementing logic to generate arbitrary pixel clock frequencies.

        When we tried to dig up programming info for the PLLs, they turned out to be licensed IP with limited and hard-to-find documentation. The HW folks we were talking to weren't sure who we had licensed the IP from, so we also had to do some dumpster-diving in Legal to find the IP owner and get permission to pass on the documentation we were able to get (which wasn't much). That all took time and meant that libv and others had to do a lot of late-night trial-and-error work in order to make progress in the meantime.
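
        To make the PLL point concrete, here is a minimal, hypothetical sketch (not taken from any driver discussed here) of the kind of brute-force divider search a display driver ends up doing when the tables only list canned per-mode values: pick reference/feedback/post dividers so that pixel_clock ~= ref_clock * fb_div / (ref_div * post_div). The divider ranges are placeholders rather than real Radeon constraints, and loop filter programming is ignored entirely.

        Code:
        /* Hypothetical PLL divider search; ranges are placeholders, not real HW limits. */
        #include <stdint.h>
        #include <stdio.h>

        struct pll_dividers {
            unsigned ref_div, fb_div, post_div;
        };

        static struct pll_dividers pick_pll_dividers(uint64_t ref_khz, uint64_t target_khz)
        {
            struct pll_dividers best = { 1, 1, 1 };
            uint64_t best_err = UINT64_MAX;

            for (unsigned post = 1; post <= 12; post++)        /* placeholder range */
                for (unsigned ref = 1; ref <= 10; ref++)       /* placeholder range */
                    for (unsigned fb = 4; fb <= 255; fb++) {   /* placeholder range */
                        uint64_t out = ref_khz * fb / (ref * post);
                        uint64_t err = out > target_khz ? out - target_khz
                                                        : target_khz - out;
                        if (err < best_err) {
                            best_err = err;
                            best = (struct pll_dividers){ ref, fb, post };
                        }
                    }
            return best;
        }

        int main(void)
        {
            /* Example: 27 MHz reference clock, 148.5 MHz pixel clock (1080p60) */
            struct pll_dividers d = pick_pll_dividers(27000, 148500);
            printf("ref_div=%u fb_div=%u post_div=%u\n", d.ref_div, d.fb_div, d.post_div);
            return 0;
        }

        On real hardware the search is further constrained by things like the allowed VCO range and the loop filter settings, which is exactly the kind of documentation that was hard to get hold of.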

        Originally posted by funfunctor View Post
        Also could you please point me to those native calls in DAL/DC you were talking about, I would be very interested to see them.
        OK, will ask the DAL devs... they were the ones who told me about this.
        Last edited by bridgman; 05 March 2017, 05:22 PM.


        • #34
          Originally posted by Holograph View Post

          You can find Intel motherboards that allow bifurcation and have x4 slots that actually work and don't disable half the features on the board when used. And from every source I've heard, 4 of the lanes on Ryzen are NVMe only, not general purpose. Is that true, or did reviewers phrase "these 4 lanes almost always physically go to an M.2 slot" incorrectly (because you could still use an adapter if it's just a physical thing, but reviews I've seen have specifically said it's only for NVMe)?

          And why is bifurcation on Ryzen x8/x8 at best?...
          These four lanes have been earmarked for NVMe, but they are really just standard PCIe 3.0 lanes. What you can't do right now is use a drive there that expects to communicate with a SATA controller over SATA Express. This is because the four lanes are routed directly out of the CPU and bypass the chipset. While in theory you could get a driver to make the CPU emulate a SATA controller, performance would be horrible. If you want to use such a drive, the solution will likely be to hook it to a SATA Express port from the chipset, or connect it to two of the PCIe lanes coming out of the chipset.

          That said, with BIOS enablement and/or a PCH chip, you should be able to run any PCIe device on those lanes. It's just that the platform is both new and rushed, so the focus was on getting X things done, where X provided a tangible improvement over past platforms and where X was really hurting market value. In 4-6 months, when the R5 and R3 parts have landed, you should be seeing some motherboards with all sorts of niche options. With the R7 launch, anything that didn't give more FPS was second priority. A SATA M.2 drive doesn't give you any better performance than a SATA III 2.5" drive connected to a SATA Express port. Anything more than two graphics cards decreases FPS and increases jitter, given coherency issues.

          Also, some of the "NVMe only" framing (really just direct PCIe only) may indicate that Naples is looking to leverage Optane-type memory/cache on future server and high-compute packages.
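
          If you want to verify what those lanes are actually doing on a given board, here is a minimal sketch using the standard Linux sysfs attributes current_link_speed and current_link_width; the PCI address below is only an example and would need adjusting for your NVMe device.

          Code:
          /* Print the negotiated PCIe link speed/width of a device via sysfs. */
          #include <stdio.h>

          static void print_attr(const char *dev, const char *attr)
          {
              char path[256], buf[64];
              FILE *f;

              snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, attr);
              f = fopen(path, "r");
              if (f && fgets(buf, sizeof(buf), f))
                  printf("%s: %s", attr, buf);     /* sysfs value already ends in '\n' */
              else
                  printf("%s: <unavailable>\n", attr);
              if (f)
                  fclose(f);
          }

          int main(void)
          {
              const char *dev = "0000:01:00.0";        /* example address; adjust for your SSD */
              print_attr(dev, "current_link_speed");   /* e.g. "8 GT/s" for PCIe 3.0 */
              print_attr(dev, "current_link_width");   /* e.g. "4" for x4 */
              return 0;
          }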






          • #35
            libv

            You are acting like a serious douche. I can't believe that bridgman has the patience to deal with you.



            • #36
              Originally posted by arakan94 View Post
              libv

              You are acting like a serious douche. I can't believe that bridgman has the patience to deal with you.
              Lining ourselves up for a job at Red Hat, are we?



              • #37
                Originally posted by bridgman View Post
                Fair point. My impression was that splitting across more than two GPUs was becoming unattractive because anything less than PCIe 3.0 x8 was not sufficient to feed the high-end GPUs that were typically plugged into those connectors.
                Not only GPUs. Any kind of PCIe device is affected by this limitation.

                If you want two M.2 SSDs, for example, it is not possible to use PCIe 3.0 x4 for both and two PCIe slots at the same time.
                AM4 motherboards that offer two M.2 slots have only PCIe 2.0 for the second one, which seriously limits performance.
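
                For a rough sense of how much that hurts, here is a back-of-the-envelope sketch (my own numbers, counting only line-encoding overhead, not protocol overhead) of the theoretical x4 bandwidth in each case.

                Code:
                /* Theoretical x4 link bandwidth: PCIe 2.0 vs PCIe 3.0 */
                #include <stdio.h>

                int main(void)
                {
                    /* Gen2: 5 GT/s per lane, 8b/10b encoding    -> ~500 MB/s per lane */
                    double gen2_lane = 5e9 * 8.0 / 10.0 / 8.0 / 1e6;
                    /* Gen3: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s per lane */
                    double gen3_lane = 8e9 * 128.0 / 130.0 / 8.0 / 1e6;

                    printf("PCIe 2.0 x4: ~%.0f MB/s\n", 4 * gen2_lane);   /* ~2000 MB/s */
                    printf("PCIe 3.0 x4: ~%.0f MB/s\n", 4 * gen3_lane);   /* ~3939 MB/s */
                    return 0;
                }

                So a fast NVMe SSD capable of roughly 3 GB/s of sequential reads would be capped at around 2 GB/s in that second slot.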



                • #38
                  Originally posted by bridgman View Post
                  Fair point. My impression was that splitting across more than two GPUs was becoming unattractive because anything less than PCIe 3.0 x8 was not sufficient to feed the high-end GPUs that were typically plugged into those connectors. For most other scenarios I *think* you are talking about using a desktop chipset for server or workstation applications - let me know if I'm getting that wrong.
                  Traditionally somewhat of a workstation application, yes, but I don't think that's a reasonable explanation either, because consumers are now starting to get a lot of technology that is far beyond what enterprises had, say, 5 years ago. For example, consumer NVMe SSDs like the 960 Evo are quite affordable, to the degree that power users will start wanting multiples. Basic users buying Dell machines will still only want 0 to 1 SSDs in their machines, but please realize that power users are going to be expecting more. I honestly and truly think this was a bad time to bring out a platform that lacks PCIe connectivity. If you had brought this out 2-3 years ago, I think it would have been considered just fine. Personally I want a GPU, multiple NVMe SSDs, and either 10+ SATA ports and/or a SAS HBA (probably something like an LSI SAS 9207). Do many home users use LSI SAS cards? No, but I'm far from the only home user who wants lots of storage. And I am not the only home user to realize that building a separate server is still going to add 30-50 watts of power usage to the house and is rarely a smart idea if you've only got 1-2 users.



                  Originally posted by bridgman View Post
                  I'll try to find out re: whether those PCIE lanes have to be dedicated to NVME. That was not my impression but will check.
                  OK. If this is not true (which would be good - the fewer the limitations, the better), then I imagine some press contacts over there may have worded this incorrectly to some reviewers, because there seriously are reviewers claiming that these lanes are very limited in use. One such reviewer is "Gamers Nexus", but he is far from the only one I saw claiming that. I'm tempted to claim that almost every review (around 5 or 6) that I saw pointed this out, though I'm not completely confident in that; maybe 1 or 2 reviews did not claim it. I am essentially positive that I did not see any reviews pointing out that these lanes are general purpose and just usually happen to physically go to an M.2 slot. I'm pretty dang sure that every single review I saw either said NVMe specifically or did not address this point at all.


                  (Side note: yes, I did say I want NVMe, but this question is about whether these PCIe lanes are general purpose or not. If they are, then they could be split up into two x2 NVMe drives, or sent to a PLX chip, or something else. An NVMe SSD could still be used, still getting decent though not maximum speed, and I could also get more flexibility with the PCIe lanes. Ideally I'd still want more PCIe lanes, but at the very least the existing lanes could be used in a more flexible manner.)


                  Originally posted by bridgman View Post
                  Can you revisit this after the server parts launch ?
                  Yes, I will be following your products, both the motherboards and your product refreshes over the following few years. You may still get my purchase if someone brings out a motherboard with a PLX chip (which I would be okay with because I don't often use my storage heavily and game at the same time; it's generally one or the other). Especially if PSP gets open-sourced and/or a new stepping comes out that improves the stability of your memory controller at higher memory clocks, which help core scaling in some applications, though that's not quite as important for gaming. And to be clear: I game, I store, I run Gentoo (which is a source-based distro, and thus my machine spends a lot of time compiling stuff), and I'm also a developer myself (though I don't often use my personal machine for that anymore). Not sure if that was unclear or not. I use my machine for several things, but in my opinion this should be expected of home users these days, especially users who would consider buying $499 (or even $399) 8c16t processors.

                  That said, I am also keeping an eye on what Intel does, though I already have a Haswell CPU and none of their newer CPUs excite me at all, so your company does have some time to expand its offerings.

                  Anyway, thanks for your replies. I do appreciate them possibly more than it may seem.



                  Originally posted by WorBlux View Post

                  These four lanes have been earmarked for NVMe, but they are really just standard PCIe 3.0 lanes. What you can't do right now is use a drive there that expects to communicate with a SATA controller over SATA Express.
                  ...
                  Thanks for your reply. What you said is sort of what I would have expected, but some reviews phrased related information in a way that was confusing (at least to me).
                  Last edited by Holograph; 06 March 2017, 05:22 PM.
