The Open-Source Graphics Card Is Dead


  • maiden
    replied
    Open Graphics is not completely dead.

    We (ORSoC Graphics accelerator) are still active:
    [link preview unavailable]



    That is an old release; a new one with support for vector graphics and 3D graphics will come soon.



    Something might come out of this too:
    [link preview unavailable]



  • maiden
    replied
    But we are still alive!

    Hello, we at the ORSOC graphics accelerator project are still active:

    [link preview unavailable]


    And there might be something here soon:
    [link preview unavailable]



  • rabit
    replied
    Sadly, not really true. A massive chunk of space on an FPGA is devoted not to logic elements but to all the possible reroutable interconnections between those logic elements, and due to the complexity and manufacturing yields of very big FPGAs, one that might barely match the complexity of the lowest-end GPUs will cost more than your entire Alienware gaming rig. IMHO, using an FPGA as a computation device is far more interesting than so-called many-core CPUs.

    Performance is NOT the number of cores that you can count but the number of logic gates that you can effectively utilize, and a 'core' implies a lot of wasted logic that an FPGA can instead devote purely to the task at hand (a rough back-of-envelope illustration follows after the quoted post below). This is the real strength of programmable logic, and it will only be realized once FPGA makers offer better tools for partial reprogrammability. (Xilinx only briefly did, with JBits.)

    Originally posted by artivision View Post
    Some FPGAs are more powerful than today's GPUs. And I was speaking of a non-FPGA alternative: a many-core OpenCores design (100+ cores) with 1 million transistors each, in a smartphone.
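
    A rough back-of-envelope sketch of the point rabit is making. Every number below is an illustrative assumption, not a measurement of any real CPU or FPGA; the only point is that throughput tracks how many gates do useful work each cycle, not how many cores you can count.

    # Toy comparison: fixed general-purpose cores vs. fully dedicated FPGA logic.
    # All figures are illustrative assumptions, not measurements.

    GATE_BUDGET = 100_000_000     # total logic gates available (assumed)
    CPU_CLOCK_HZ = 3.0e9          # assumed CPU clock
    FPGA_CLOCK_HZ = 0.3e9         # assumed (10x slower) FPGA clock

    # A general-purpose core spends most of its gates on fetch/decode, caches,
    # branch prediction, etc.; only a small fraction does arithmetic each cycle.
    GATES_PER_CORE = 10_000_000   # assumed
    CPU_USEFUL_FRACTION = 0.05    # assumed fraction of gates doing real work

    # A design compiled for one fixed task can devote most of the fabric to its
    # datapath (ignoring the interconnect overhead rabit mentions above).
    FPGA_USEFUL_FRACTION = 0.60   # assumed

    cores = GATE_BUDGET // GATES_PER_CORE
    cpu_useful_gate_ops = cores * GATES_PER_CORE * CPU_USEFUL_FRACTION * CPU_CLOCK_HZ
    fpga_useful_gate_ops = GATE_BUDGET * FPGA_USEFUL_FRACTION * FPGA_CLOCK_HZ

    print(f"CPU:  {cpu_useful_gate_ops:.2e} useful gate-ops/s across {cores} cores")
    print(f"FPGA: {fpga_useful_gate_ops:.2e} useful gate-ops/s")
    # With these made-up numbers the dedicated fabric comes out ahead despite a
    # 10x slower clock, which is the shape of the argument: gate utilization,
    # not core count.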



  • artivision
    replied
    Some FPGAs are more powerful than today's GPUs. And I was speaking of a non-FPGA alternative: a many-core OpenCores design (100+ cores) with 1 million transistors each, in a smartphone.



  • rabit
    replied
    There's no hope of someone coming up with a full-fledged 3D pipeline on an FPGA that can even remotely compete with anything NVIDIA or ATI sells, so why bother with a 3D pipeline? On the other hand, here's what I want: a USB board with a decent 8k+ LE FPGA and DDR2 memory supplying one, three, or more decent video outputs, so it can be used to drive a number of auxiliary monitors. In addition, users could upload custom display-driver designs to the FPGA for specialized features: for instance, implementing X primitives right on the FPGA, accelerated modes for rendering text or graphs (i.e. buffer-less oscilloscope/waterfall scopes handled entirely by the FPGA), or accelerated rendering of legacy hardware (tile-based display modes of game consoles, or Amiga graphics modes), etc. Currently I dabble with a TS-7300, which is an ARM-based board with a purely FPGA-implemented video output, but having this on a small USB board would be lovely. (A toy sketch of what a host-side command stream for such a board might look like follows after the quoted post below.)


    Originally posted by Svartalf View Post
    Considering that you can get a better board for doing this sort of thing from Avnet for $500 (uses a Spartan-6 with a much higher gate count... only has one DVI connector, though...), it was a bit of a hard sell. There's still room for a bit of trying at this sort of thing in the embedded space, but if ARM wises up and helps the RE effort by giving out key pieces of info for the Mali, like AMD did with the Radeon, there may be less "need" to do this. I still think there's room for trying at innovation in the space; you're just not going to get a full-fledged beast out of an FPGA. :-D
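
    A toy sketch of what a host-side command stream for such a hypothetical USB display board might look like. The board, the opcodes, and the packet layout are all invented here for illustration; the point is only that the host sends compact drawing commands and the FPGA does the rasterization itself, which is the "buffer-less" idea above.

    import struct

    # Hypothetical opcodes for a small FPGA display adapter (invented values).
    OP_SET_MODE  = 0x01   # width, height, bits per pixel
    OP_FILL_RECT = 0x02   # x, y, w, h, rgb565 colour
    OP_DRAW_TEXT = 0x03   # x, y, byte length, then raw ASCII text

    def set_mode(width, height, bpp):
        return struct.pack("<BHHB", OP_SET_MODE, width, height, bpp)

    def fill_rect(x, y, w, h, rgb565):
        return struct.pack("<BHHHHH", OP_FILL_RECT, x, y, w, h, rgb565)

    def draw_text(x, y, text):
        data = text.encode("ascii")
        return struct.pack("<BHHH", OP_DRAW_TEXT, x, y, len(data)) + data

    # Build a command buffer; a real driver would hand this to the USB endpoint,
    # and the FPGA would render the text glyphs itself, with no host framebuffer.
    commands = (set_mode(800, 600, 16)
                + fill_rect(0, 0, 800, 600, 0x0000)
                + draw_text(10, 10, "load average: 0.42"))
    print(len(commands), "bytes of command stream")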

    Leave a comment:


  • Svartalf
    replied
    Originally posted by brouhaha View Post
    The $750 OGD1 card was not just a framebuffer VGA; the point was to run 3D acceleration in the FPGA. The FPGA was not able to hold as many shaders or run as fast as an ATI or Nvidia chip, but it was a developmental prototype, and if the project had been successful, an ASIC would have been developed. It still would have been difficult to match what the big guys can do, but it at least would have wound up somewhere in the ballpark.

    The $750 OGD1 card was also not actually intended to be sold to end users as a graphics card. It was intended for developers, either within the OGP or anyone who wanted a fairly beefy FPGA.

    When the project started, there was no recent-generation ATI or Nvidia chip that had public documentation; only an ATI chip that was already about four generations old had docs. ATI (now AMD) started providing documentation that covers most of the chip (UVD being the notable exception), so the need for the OGP is much less than it was when the project started.

    Considering that you can get a better board for doing this sort of thing from Avnet for $500 (uses a Spartan-6 with a much higher gate count... only has one DVI connector, though...), it was a bit of a hard sell. There's still room for a bit of trying at this sort of thing in the embedded space, but if ARM wises up and helps the RE effort by giving out key pieces of info for the Mali, like AMD did with the Radeon, there may be less "need" to do this. I still think there's room for trying at innovation in the space; you're just not going to get a full-fledged beast out of an FPGA. :-D



  • Tgui
    replied
    Originally posted by libv View Post
    I'm sorry

    It sounded like a good idea at the time. At least Egbert shares the blame on this one though.
    Explain more. I'm a little ignorant of the players in this project. Are you implying you helped push and design this hardware? If so, good stuff man! Even though it wasn't a raging success, I love projects like these.



  • artivision
    replied
    I have a proposal. Take an OpenCores core, which needs only about a million transistors for 2.5 DMIPS/MHz, or 32 instructions per cycle (512-bit vector * FMAC). Intel needs 40-60 million transistors for that, ARM about 13 million, MIPS 2-3 million, and OpenCores about 1 million. Then add transcoding instructions for fast emulation (QEMU) of other processors, as the Godson CPU does; in China the math is patent-free. Then add graphics instructions like MIPS-3D (no ASIC units such as rasterizers or TMUs, only a hardware-accelerated software rasterizer, something like an enhanced LLVMpipe). Your GPU is ready, with 2-4 TFLOPS/watt at 28 nm, or more on newer lithography, by my own calculation. Then run your core on a very fast FPGA like Abax-3D (make a deal with a company for a cheap FPGA), or make your own chip with TSMC, for example.
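
    A rough re-derivation of the throughput figure above. The core count and the 32 FLOPs per cycle (a 512-bit vector FMAC, i.e. 16 fp32 multiply-adds) come from the post; the clock speed and power budget are assumptions filled in for illustration, since the post does not state them.

    # Back-of-envelope check of the 2-4 TFLOPS/watt claim.
    cores = 100              # from the post: 100+ cores
    flops_per_cycle = 32     # from the post: 512-bit vector FMAC per core
    clock_hz = 1.0e9         # assumed 1 GHz
    power_w = 1.5            # assumed total power budget in watts

    total_flops = cores * flops_per_cycle * clock_hz    # 3.2e12 = 3.2 TFLOPS
    flops_per_watt = total_flops / power_w               # ~2.1 TFLOPS/W

    print(f"{total_flops / 1e12:.1f} TFLOPS total, "
          f"{flops_per_watt / 1e12:.1f} TFLOPS per watt")
    # The claim only works out with assumptions in roughly this range; at lower
    # clocks or a higher power budget the TFLOPS/W figure drops accordingly.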



  • libv
    replied
    Originally posted by brouhaha View Post
    When the project started, there was no recent-generation ATI or Nvidia chip that had public documentation; only an ATI chip that was already about four generations old had docs. ATI (now AMD) started providing documentation that covers most of the chip (UVD being the notable exception), so the need for the OGP is much less than it was when the project started.
    I'm sorry

    It sounded like a good idea at the time. At least Egbert shares the blame on this one though.



  • smitty3268
    replied
    It was never intended to compete with mainstream hardware

    because everyone knew from the beginning that would be impossible without millions of preorders providing scale.

    The point was to get something out there that people could play with. It was intended primarily for students and people who wanted to play around with huge FPGAs.

