NVIDIA To Begin Publishing Open GPU Documentation


  • #81
    Cop-out?

    Is it just me, or does this sound more like a way for them not to get left in the dark with Wayland and/or Mir? They need KMS to run, but binary blobs don't really allow that to happen - yet, anyway. If they want their hardware to at least function with either of those, they need the open source drivers to advance faster. Now they don't have to worry so much about making the binary blob work for Wayland and/or Mir, because they can say "we gave you specs, make the open source driver work." Maybe I'm paranoid or jaded, but Nvidia never seems to come off as actually liking the open source community.



    • #82
      Originally posted by bridgman View Post
      That's something different though -- not the microcode doing bad things, but the microcode providing an interface which allows the driver to potentially do bad things.
      Hmm...

      To me that already counts as "the microcode doing bad things" - be it intentionally by the company or by plain poor design decisions.



      • #83
        Originally posted by entropy View Post
        To me that already counts as "the microcode doing bad things" - be it intentionally by the company or by plain poor design decisions.
        Let's get rid of the "rm" command on Linux because you can do bad things with it (like "rm -rf /" as root.) The microcode is not there to protect you. It's there to give you full access to the hardware. That means you can do "bad things" with it.



        • #84
          Originally posted by migizi View Post
          Is it just me, or does this sound more like a way for them not to get left in the dark with Wayland and/or Mir? They need KMS to run, but binary blobs don't really allow that to happen - yet, anyway. If they want their hardware to at least function with either of those, they need the open source drivers to advance faster. Now they don't have to worry so much about making the binary blob work for Wayland and/or Mir, because they can say "we gave you specs, make the open source driver work." Maybe I'm paranoid or jaded, but Nvidia never seems to come off as actually liking the open source community.
          I'm pretty sure that the binary driver can work with Wayland and Mir.



          • #85
            Originally posted by dee. View Post
            We can't trust the law to protect us from totalitarian spying.

            Like said, it's not a matter of Nvidia's motives. They may not have any choice in the matter.
            It's people we trust. Law's just a word.



            • #86
              Originally posted by MartinN View Post
              It's people we trust. Law's just a word.
              You can't trust people either, if those people can be forced by other people to do bad things and to keep it secret.



              • #87
                Originally posted by RealNC View Post
                Let's get rid of the "rm" command on Linux because you can do bad things with it (like "rm -rf /" as root.) The microcode is not there to protect you. It's there to give you full access to the hardware. That means you can do "bad things" with it.
                Did you read Marcin's post on the ML?

                If I got him right, he says that the microcode introduces an interface/functionality
                which allows far more than would be necessary in a good design.
                Do you think that's a good thing?



                • #88
                  Originally posted by entropy View Post
                  If I got him right, he says that the microcode introduces an interface/functionality
                  which allows far more than would be necessary in a good design.
                  Do you think that's a good thing?
                  Yes, but the microcode is the hardware interface to the card. It should be the kernel/graphics driver's job to prevent people from abusing it. I mean, if I have hardware level access to the SATA controller, I can read everyone's data regardless of permissions. How is this any different?

                  But, I'm not a Nouveau developer, and have no experience writing kernel code, so there may be some nuance to this I'm not understanding.



                  • #89
                    Originally posted by kwahoo View Post
                    I don't think it's very useful...
                    Well, that's a first step. In the beginning, ATI/AMD didn't release much either.

                    Originally posted by xnor View Post
                    Pretty much everything.. so far anyway.
                    But the good part is that NVIDIA has provided contacts, so Nouveau developers can discuss things and request more of the stuff they need.
                    For example, in the mailing list discussion they have already started asking for the critical missing parts in power management.
                    Sadly, according to NVIDIA, that is going to be a bit more complex to get through the validation and "authorised-to-publish" pipeline.

                    Originally posted by xeekei View Post
                    I bet the open source Nvidia driver will catch up to AMD's and embarrass them there too like they do with the proprietary ones. Nvidia always take drivers more seriously.
                    Well, on the other hand, there are some differences in strategy.
                    - AMD started by releasing small bits, but their big plan has always been to dump ALL the info they can as soon as possible. This includes streamlining the "release the info" pipeline so they can publish faster. Their longer-term goal is to design newer hardware while keeping the open source world in mind. The only area where AMD has been more cautious is the video unit, due to DRM, and even there their long-term strategy is to design future generations of the video core so that DRM doesn't prevent releasing the docs needed for open source. In short, they are moving toward a process where open source drivers are a normal part of their development pipeline.
                    - NVIDIA, according to the mailing list discussion, is currently NOT considering a completely open process. Instead they will provide contacts for the Nouveau team and release info when asked about a "needed" feature. Do not expect a full dump of everything from them. But at least when the Nouveau team hits a problem, they can ask for specific help instead of wasting resources reverse engineering everything.

                    In short, it seems that AMD will have open source drivers almost completely integrated into their strategy (including paid developers), whereas NVIDIA will limit itself to the collaboration necessary for Nouveau to succeed. But nonetheless, that's still good enough for now.

                    Originally posted by gamerk2 View Post
                    Disagree. Due to power draw, APU's are never going to advance past entry-level performance. Desktops and high end systems are still going to need dedicated GPU's as a result.
                    APUs increasingly provide adequate power for average use. Yup, you won't be able to play "Crysis 4" on just the on-die GPU anytime soon, but for lots of other uses, including mid-range desktops and even gaming, they are already enough.
                    So yes, you'll still need discrete GPUs for high-end tasks. Workstations doing intensive scientific or artistic work will need them. Ultra high-end gaming rigs will need them, too. But the average Joe will probably be happy playing his games on an APU, just like he's happy checking e-mail/tweeting/checking Facebook on a smartphone or tablet instead of lugging a laptop around.

                    Originally posted by TemplarGR View Post
                    Plus, there is no real reason why TDPs for consumer APU's have to be in the range of 100W... Why not 150 or 200 watts?
                    Well, huh, perhaps because the waste heat needs to be removed, and 150W TDP coolers are already big enough towers? :-D

                    Originally posted by TemplarGR View Post
                    Discrete gpus will go the way of discrete fpus.
                    Well, discrete FPUs were always seen as a cost-cutting measure (sell a cheaper FPU-less CPU) and were expected to move back into the CPU once processes and prices made it possible.
                    Whereas it took quite some time (until the appearance of GPGPU concepts like CTM, CUDA, OpenCL, etc.) before GPUs were recognised as "CPU-worthy".

                    But the comparison is still valid: nowadays CPUs pack enough FPU power for almost everyone. Only specific high-end usage (like scientific computing) requires discrete accelerators (massively parallel FPUs and/or SIMD, headless GPGPUs, FPGAs and other specialist hardware).
                    Probably the same will happen over time with APUs: the average laptop will be able to play games on the onboard GPU, while scientific clusters, render farms, etc. will still harbour discrete multi-GPU cards (probably cards carrying the same APUs in massively parallel configurations).

                    But still, for vendors like NVIDIA that means their high-end market will be shrinking: from selling 3D hardware to almost everyone wanting 3D, to selling 3D hardware only to enthusiast gamers while average users settle for APUs, to selling 3D hardware only to a few specific buyers (universities, industry and government computing clusters) while virtually everyone else does 3D with APUs.


                    Originally posted by MartinN View Post
                    If you want to play games - buy a console. For work, buy a laptop with integrated GPUs (such as Intel's HD/Iris). Let us stop this dogma that one computer has to do it all and do it best - it shouldn't - unless you're willing to fork over a premium, and most people aren't.
                    You don't necessarily need to fork over a premium to get a machine able to play games. You do if you always want the "latest, most powerful machine able to play the next Crysis on max settings". For everybody else, a decent machine can be had for a couple of hundred euros. It might not be the best, but it's still very good for most games. And the best part is that you still have money left, so in 6 months or a year you can upgrade some parts. Again, you don't buy the most expensive parts: they won't be the absolute best of their time either, but for a rather cheap price they perform much better than the previous ones, probably on par with, or even better than, what was top of the line when you bought the first machine.

                    Don't buy an X000 for machine A. Buy an X00 for machine B (which is a bit slower, but good enough), then buy an X00 for upgrade B2 (which is better than B, and probably even a bit better than A, although it now costs a fraction of what A used to cost back then [and of what A2 costs now]).

                    Also, PCs have advantages beyond simple hardware modularity. They are much less controlled platforms, with much more freedom to try anything. PCs are the indie developers' free playing field, whereas consoles have always been part of big publishing networks, with indie development only a recent and much more limited arrival. One of the main reasons Gabe gave for Valve's move to Linux is to keep this freedom for PC gaming and avoid the Apple-like/console-like walled garden that Microsoft is currently building by taking more and more control over its Windows platform.

                    Originally posted by Cyber Killer View Post
                    We're in an important moment of GNU/Linux history - X is going out the door and there are 2 contenders waiting to take its place. For GPU makers it means more work to support the new stuff, and probably more work than it was for X.
                    Well, not necessarily that much. X is a very complicated beast (simply due to its long lineage, from a server mainly executing drawing commands as they arrive, up to the modern buffered, hardware-accelerated monster, all while staying backward compatible across those disparate generations). Providing a functioning X server is a huge chunk of work.
                    Meanwhile, Wayland and Mir are pure compositors. They require not much more than a glorified blitter, with most of the logic going into the compositor and the toolkits. (Well, nowadays compositors and toolkits ARE ALREADY doing most of the work, except that they have to do it while coping with the big pile of backward-compatibility mess. The point is that in the future, Wayland and Mir will rely only on a few basic hardware functions - EGL and co. - with the mess confined to a compatibility layer running as a separate composited application: XWayland and XMir.)
                    So in the long term, once Wayland has stabilised and Mir has died, maintaining drivers will be less complicated than back in the X server days.

                    Originally posted by pingufunkybeat View Post
                    Originally posted by dee. View Post
                    Yes, that sounds entirely reasonable. I'm sure we'll all be happy with NSA-approved firmware blobs in our open drivers...
                    The blobs run exclusively on the graphics card.
                    What harm can they do there?
                    Well, being a GPU, it has full access to system memory and can execute quite complex routines on its own. It's as if you had a second processor, but one that boots and runs a completely different BIOS and its own proprietary OS, completely outside your control.
                    (It gives the same kind of access as the DMA access of FireWire and co.)

                    Originally posted by Sonadow View Post
                    • a proper motherboard (BIOS firmware blob)
                    • x86 CPUs (x86 microcode blob loaded on boot)
                    • Atheros Wifi cards (firmware blob)
                    • Ralink Wifi cards (firmware blob)
                    • Intel Wifi cards (firmware blob)
                    • AMD GPUs (firmware blobs)
                    All these examples have different levels of access:
                    • the BIOS is completely shut off after boot and the OS takes over control of the machine (only ACPI remains in use after the takeover, and it only controls a few hardware configuration registers). Unless a complex chain of patching and corrupting takes place, it's hard to compromise the main OS (but it's possible: the BIOS could launch a hypervisor which in turn patches the running OS to make itself invisible. That still requires lots of work to stay invisible to every single OS of the big Linux/BSD zoo).
                    • CPU microcode (AMD and Intel) is extremely limited. It only controls how machine instructions get translated into CPU micro-ops. If you touch anything fundamental (like a logic block), you corrupt not only the security functions (encryption, signing, etc.) but basically everything else, and the OS will probably be unable to run at all (everything needs "XOR"s etc.). If you only touch the few crypto functions (hardware random number generation, hardware-accelerated crypto), these are known quantities and can be worked around - see the recent debate between Linus and a guy insisting on hardware-only random number generation. A dubious RNG can be mixed with classic entropy sources in a way that keeps the result secure (see the sketch just after this list).
                    • network/motherboard firmware: especially the kind found on enterprise hardware, this is *really* dangerous. These ARE separate (embedded) CPUs, running their own embedded OS, with almost full access to the hardware. Canonically, this is used to allow remote administration of everything (including VNC-like access, remote EFI/BIOS flashing, etc.). In practice, they could be abused to do virtually anything with the target machine (spy on a user's session, steal passwords and crypto keys from RAM, or any other form of remote tampering you could dream of).
                    • wifi: depending on how the hardware is designed, the adapter might not be able to see anything (USB 2.0 has no DMA). Still, on principle you shouldn't trust anything coming down a Wi-Fi adapter. Even with WPA2 enabled, consider the channel untrustworthy. Either use it for things you wouldn't be embarrassed to shout to your whole neighbourhood with a megaphone from your roof - like basic browsing - or establish an actual encrypted end-to-end channel (OTR chat, ZRTP voice call, GPG-encrypted mail, e-banking with an actual security token, etc.) for *everything else*.
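
                    To illustrate that RNG-mixing point, here is a rough sketch (Python, with a made-up helper name, and far simpler than what the kernel actually does): as long as the sources are combined through a hash, a backdoored hardware RNG cannot steer the output unless it also controls every other entropy source.
                    Code:
                    import hashlib

                    def mix_entropy(hw_rng_bytes: bytes, pool_bytes: bytes) -> bytes:
                        """Hash two entropy sources together. If at least one input is
                        unpredictable, the combined output is too, so a compromised
                        hardware RNG can't control the result on its own.
                        (Illustrative only - real kernel mixing is far more elaborate.)"""
                        return hashlib.sha256(hw_rng_bytes + pool_bytes).digest()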


                    Originally posted by RealNC View Post
                    Let's get rid of the "rm" command on Linux because you can do bad things with it (like "rm -rf /" as root.) The microcode is not there to protect you. It's there to give you full access to the hardware. That means you can do "bad things" with it.
                    Nonetheless, in all of the above cases two things could help:
                    • implementing sufficient security, like using the IOMMU to shield the CPU's RAM from unauthorised access by DMA-capable hardware, or using proper cryptographic mixing so that a single compromised hardware component can't be exploited;
                    • having open source firmware, so its code can be audited for security bugs (and/or hidden backdoors).


                    In the case of graphics drivers, the IOMMU can help. But closed firmware means you can't audit it.
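
                    As for the IOMMU point: on Linux you can get a rough idea of whether DMA remapping is actually active by checking if the kernel created any IOMMU groups in sysfs. A small sketch (the function name is mine; the path assumes a standard Linux sysfs layout):
                    Code:
                    import os

                    def iommu_active(sysfs_path: str = "/sys/kernel/iommu_groups") -> bool:
                        """True if the kernel has created at least one IOMMU group,
                        i.e. DMA remapping is available to fence devices off from
                        arbitrary RAM access."""
                        try:
                            return len(os.listdir(sysfs_path)) > 0
                        except FileNotFoundError:
                            return False

                    print("IOMMU active:", iommu_active())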



                    • #90
                      Originally posted by TemplarGR View Post
                      Plus, there is no real reason why TDPs for consumer APU's have to be in the range of 100W... Why not 150 or 200 watts? If that means no discrete GPU, why not?
                      It's difficult to pump that much power through packed transistors without accumulating so much heat that the chip simply melts itself.

                      That's why the P4 chips were considered a failure - they could potentially clock up to massive speeds, but quickly fried themselves by using too much power, forcing Intel to only release the lower clocked versions.

                      There's a reason all chips since then have been very carefully designed to fit within certain power envelopes, and why those envelopes haven't changed much over the years. Getting more power-efficient simply allows them to pack on more transistors at once without melting.

