GPU Drivers, Crocodile Petting Zoos & LLVMpipe

  • GPU Drivers, Crocodile Petting Zoos & LLVMpipe

    Phoronix: GPU Drivers, Crocodile Petting Zoos & LLVMpipe

    Zack Rusin has written a new blog post where he compares writing free software graphics drivers to running a crocodile petting zoo and wireless bungee jumping...

  • #2
    Eh, wouldn't that slow things down a lot?

  • #3
    One possible interpretation of Zack's post is that he is proposing to replace TGSI with LLVM IR as the common language of the driver stack. Drivers could either go directly from LLVM IR to GPU code or could use an existing IR as an intermediate step (e.g. TGSI, Mesa IR, or something like the "il" we use in the proprietary stack).

    Just a guess, but if that were the case it wouldn't necessarily slow things down, although it might make the shader compiler portion of the driver larger and more complex. Hard to say.
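
    If that reading is right, the visible change would be at the point where a Gallium driver today receives TGSI tokens. The sketch below is purely illustrative: the "acme_" names and pipe_llvm_shader_state are invented, and the real pipe_context hook takes a pipe_shader_state holding TGSI tokens, not an LLVM module.

    /* Hypothetical sketch only: what a pipe driver's fragment-shader hook
     * might look like if LLVM IR replaced TGSI at the Gallium boundary.
     * Nothing here is actual Mesa code. */
    #include <llvm-c/Core.h>

    struct pipe_context;

    /* invented stand-in for pipe_shader_state */
    struct pipe_llvm_shader_state {
        LLVMModuleRef module;      /* the shader arrives as an LLVM module */
    };

    static void *acme_create_fs_state(struct pipe_context *ctx,
                                      const struct pipe_llvm_shader_state *st)
    {
        (void)ctx;
        /* Option A: code-generate straight from st->module to GPU instructions.
         * Option B: lower st->module to an existing IR first (TGSI, Mesa IR, or
         * an "il"-style IR) and reuse the driver's current shader backend. */
        return NULL;   /* stub; a real driver would return its compiled shader object */
    }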

  • #4
    Good thing I haven't gotten around to studying Gallium3D drivers yet.

  • #5
    It's also possible that Zack is talking about keeping TGSI and going back to using LLVM as part of the standard GPU shader compiler, in which case we would have:

    GLSL => TGSI => LLVM IR => GPU shader code

    or, if you're into compute:

    OpenCL C99 => LLVM IR => TGSI => LLVM IR => GPU shader code

    I guess I should stop looking through Evergreen Mesa code and go read some Gallium3D code, but I'd kinda like to get Evergreen support out first.
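
    To make the division of labour in those two pipelines explicit, here is a toy C sketch. None of these helper functions exist in Mesa; only the IR names (TGSI, LLVM IR) are real. It just spells out which component would own each translation step, including the LLVM => TGSI => LLVM round trip on the compute path.

    /* Illustrative declarations only -- every function name is invented. */
    #include <llvm-c/Core.h>

    struct tgsi_token;                                               /* Gallium's common shader IR */

    /* GL path: GLSL => TGSI => LLVM IR => GPU shader code */
    struct tgsi_token *glsl_to_tgsi(const char *glsl_source);        /* state tracker */
    LLVMModuleRef      tgsi_to_llvm(const struct tgsi_token *toks);  /* shared shader compiler */
    void              *llvm_to_gpu(LLVMModuleRef m);                 /* hardware backend */

    /* Compute path: OpenCL C99 => LLVM IR => TGSI => LLVM IR => GPU shader code */
    LLVMModuleRef      opencl_to_llvm(const char *cl_source);        /* e.g. via clang */
    struct tgsi_token *llvm_to_tgsi(LLVMModuleRef m);                /* to cross the Gallium interface */

    static void *compile_gl_shader(const char *glsl)
    {
        return llvm_to_gpu(tgsi_to_llvm(glsl_to_tgsi(glsl)));
    }

    static void *compile_cl_kernel(const char *cl)
    {
        /* note the double pass through LLVM IR that the post describes */
        return llvm_to_gpu(tgsi_to_llvm(llvm_to_tgsi(opencl_to_llvm(cl))));
    }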

  • #6
    The thing I don't really understand is how being able to run shaders is going to help without the rest of the hardware driver. On ATI graphics at least you need to set up things like the colour buffer to handle writeback of the results, and once you do that you might as well set up the depth buffer and texture samplers...

    ... and at that point you've pretty much got a GL driver.

    Still, I think the idea is good... looking for the smallest possible bit of porting required to get a modern stack running on new hardware, then adding features from there. I guess Zack is thinking about running texture processing in shader code, which is an interesting idea if you can get the addressing set up right. It wouldn't be as fast as dedicated hardware, but it would let you get something running and usable faster.
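
    For a sense of scale, this is roughly the surrounding state a Gallium driver has to accept before a shader result can land anywhere. Member names follow the pipe_context interface of about that era, but treat the signatures as approximate; this is a sketch, not buildable outside a Mesa tree.

    /* Approximate sketch: minimal state setup before draw results can be
     * written back, per the point above.  Details are from memory of the
     * Gallium interface of the time and may not match any given release. */
    #include "pipe/p_context.h"
    #include "pipe/p_state.h"

    static void run_one_shader(struct pipe_context *ctx,
                               void *fs,                           /* from ctx->create_fs_state() */
                               struct pipe_framebuffer_state *fb,  /* colour (and depth) buffers */
                               const struct pipe_draw_info *info)
    {
        ctx->set_framebuffer_state(ctx, fb);   /* where write-back of results goes */
        ctx->bind_fs_state(ctx, fs);           /* the shader itself */
        /* ...plus vertex buffers, rasterizer/blend/depth-stencil state, sampler
         * states and sampler views for texturing -- which is most of a GL driver. */
        ctx->draw_vbo(ctx, info);
    }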