The Open-Source Graphics Card Is Dead

  • #11
    Well, it turns out it's not so easy to build a circuit of 3 billion transistors (a 2010-era GPU) and sell it for 200 bucks. What a surprise.

  • #12
    Sounds like the project could have taken off if:
    a) there were only one dominant graphics card company charging arm-and-leg prices for each minor generational upgrade, or
    b) driver support were poorly implemented, tightly closed, and strictly limited to one major OS.

    Since at present AMD/ATI, Intel, and Nvidia at least give second-class-citizen driver support, and there is competition on hardware performance and pricing, I don't see any open-source graphics card making it past the planning stage, much less actual fab production.

  • #13
    Originally posted by tweak42:
    Sounds like the project could have taken off if:
    a) there were only one dominant graphics card company charging arm-and-leg prices for each minor generational upgrade, or
    b) driver support were poorly implemented, tightly closed, and strictly limited to one major OS.

    Since at present AMD/ATI, Intel, and Nvidia at least give second-class-citizen driver support, and there is competition on hardware performance and pricing, I don't see any open-source graphics card making it past the planning stage, much less actual fab production.

    Pretty much. I'm sure there are *some* people who care about whether their graphics hardware is open or not. And I'm sure that *some* of those people care about it enough to try making their own hardware. But that fraction of a fraction really isn't enough to actually achieve much...

  • #14
    It was never intended to compete with mainstream hardware, because everyone knew from the beginning that would be impossible without millions of preorders providing scale.

    The point was to get something out there that people could play with. It was intended primarily for students and people who wanted to play around with huge FPGAs.

  • #15
    Originally posted by brouhaha:
    When the project started, there was no recent-generation ATI or Nvidia chip that had public documentation. Only an ATI chip that was already about four generations old had docs. ATI (now AMD) has since started providing documentation that covers most of the chip (UVD being the notable exception), so the need for the OGP is much less than it was when the project started.

    I'm sorry.

    It sounded like a good idea at the time. At least Egbert shares the blame on this one, though.

  • #16
    I have a proposal: just take OpenCores, at only a million transistors per core for 2.5 DMIPS/MHz, or 32 instructions per cycle (512-bit vector FMAC). Intel needs 40-60 million transistors for that, ARM 13 M, MIPS 2-3 M, and OpenCores 1 M. Then add transcoding instructions for fast emulation (QEMU) of other processors, like the Godson CPU does; in China, the maths are patent-free. Then add graphics instructions like MIPS-3D (no ASIC units such as rasterizers or TMUs, only a hardware-accelerated software rasterizer, LLVMpipe+). Your GPU is ready with 2-4 Tflops/W at 28 nm, or more on newer lithography, as calculated by me. Then run your core on a super-fast FPGA like Abax-3D (make a deal with a company for a cheap FPGA), or make your own chip with TSMC, for example.
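
    For what it's worth, the post never shows where 2-4 Tflops/W comes from. Here is one hedged back-of-the-envelope reading that lands in that range, assuming a 512-bit FMA vector unit (16 FP32 lanes, 2 flops each per cycle), an assumed 1 GHz clock at 28 nm, and a purely hypothetical ~16 mW per 1M-transistor core; none of these numbers are from the post itself:

    # Back-of-the-envelope check of the 2-4 Tflops/W claim above.
    # All inputs are assumptions, not measured values.
    lanes = 512 // 32              # FP32 lanes in a 512-bit vector unit
    flops_per_cycle = lanes * 2    # a fused multiply-add counts as 2 flops
    clock_hz = 1.0e9               # assumed 1 GHz core clock at 28 nm
    core_watts = 0.016             # hypothetical per-core power draw (16 mW)

    flops_per_core = flops_per_cycle * clock_hz   # 32 Gflops per core
    flops_per_watt = flops_per_core / core_watts

    print(f"{flops_per_core / 1e9:.0f} Gflops/core, "
          f"{flops_per_watt / 1e12:.1f} Tflops/W")
    # -> 32 Gflops/core, 2.0 Tflops/W: the claim only holds if a
    #    1M-transistor core really stays near 16 mW at full tilt.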

  • #17
    Originally posted by libv:
    I'm sorry.

    It sounded like a good idea at the time. At least Egbert shares the blame on this one, though.

    Explain more; I'm a little ignorant of the players in this project. Are you implying you helped push and design this hardware? If so, good stuff, man! Even though it wasn't a raging success, I love projects like these.

  • #18
    Originally posted by brouhaha:
    The $750 OGD1 card was not just a framebuffer VGA; the point was to run 3D acceleration in the FPGA. The FPGA was not able to hold as many shaders or run as fast as an ATI or Nvidia chip, but it was a developmental prototype, and if the project had been successful, an ASIC would have been developed. It still would have been difficult to match what the big guys can do, but it at least would have wound up somewhere in the ballpark.

    The $750 OGD1 card was also not actually intended to sell as a product to end users as a graphics card. It was intended for developers, either of the OGP or anyone who wanted a fairly beefy FPGA.

    When the project started, there was no recent-generation ATI or Nvidia chip that had public documentation. Only an ATI chip that was already about four generations old had docs. ATI (now AMD) has since started providing documentation that covers most of the chip (UVD being the notable exception), so the need for the OGP is much less than it was when the project started.

    Considering that you can get a better board for doing this sort of thing from Avnet for $500 (it uses a Spartan 6 with a much higher gate count... only has one DVI connector, though...), it was a bit of a hard sell. There's still room for a bit of trying at this sort of thing in the embedded space, but if ARM wises up and helps the RE effort by giving out key pieces of info for the Mali, like AMD did with the Radeon, there may be less "need" to do this. I still think there's room for trying at innovation in the space; you're just not going to get a full-fledged beast out of an FPGA. :-D

  • #19
    There's no hope of someone coming up with a full-fledged 3D pipeline on an FPGA that can even remotely compete with anything NVIDIA or ATI sells, so why bother with a 3D pipeline? On the other hand, here's what I want: a USB board with a decent 8k+ LE FPGA and DDR2 memory supplying one, three, or more decent video outputs, so it can be used to drive a number of auxiliary monitors. In addition, users could upload custom display driver designs to the FPGA for specialized features: for instance, implementing X primitives right on the FPGA, accelerated modes for rendering text or graphs (i.e. buffer-less oscilloscope/waterfall scopes handled entirely by the FPGA), or accelerated rendering of tile-based display modes (game-console or Amiga graphics modes); a toy model of that last idea is sketched below. Currently I dabble with a TS-7300, which is an ARM-based board with a pure FPGA-implemented video out, but having this on a small USB board would be lovely.

    Originally posted by Svartalf:
    Considering that you can get a better board for doing this sort of thing from Avnet for $500 (it uses a Spartan 6 with a much higher gate count... only has one DVI connector, though...), it was a bit of a hard sell. There's still room for a bit of trying at this sort of thing in the embedded space, but if ARM wises up and helps the RE effort by giving out key pieces of info for the Mali, like AMD did with the Radeon, there may be less "need" to do this. I still think there's room for trying at innovation in the space; you're just not going to get a full-fledged beast out of an FPGA. :-D
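
    Since the tile-based display modes are the easiest piece of that wishlist to picture, here is a minimal, purely illustrative sketch of buffer-less tile scanout. Real hardware would do this per pixel clock in FPGA logic, not Python; the tile size, map layout, and tile data are all invented for the example:

    # Toy model of tile-based scanout: each scanline is generated on the
    # fly from a tile map plus a tile set (console/Amiga style), so no
    # full framebuffer is ever stored.
    TILE_W, TILE_H = 8, 8      # 8x8 pixel tiles
    MAP_W = 4                  # map row of 4 tiles -> 32-pixel scanlines

    # Two tiles: tile 0 is blank, tile 1 is a checkerboard.
    tiles = [
        [[0] * TILE_W for _ in range(TILE_H)],
        [[(x ^ y) & 1 for x in range(TILE_W)] for y in range(TILE_H)],
    ]
    tile_map = [1, 0, 1, 0]    # which tile appears in each map column

    def scanline(y):
        """Produce the pixels of display line y straight from tile data."""
        row = y % TILE_H
        line = []
        for col in range(MAP_W):
            line.extend(tiles[tile_map[col]][row])
        return line

    # "Scan out" eight lines as ASCII art instead of a video signal.
    for y in range(TILE_H):
        print("".join("#" if p else "." for p in scanline(y)))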

  • #20
    Some FPGAs are more powerful than today's GPUs. And I was speaking of a non-FPGA alternative: many OpenCores cores (100+), with 1 million transistors each, in a smartphone.
