Intel Enables Tessellation Shader Support In Open-Source Linux Driver


  • #11
    Originally posted by M@GOid View Post


    I know, right? They have been selling 16-core/32-thread processors since 2014.
    desktop?



    • #12
      Originally posted by pal666 View Post
      An 8-core desktop processor is so 2011.
      So socket 2011?



      • #13
        Originally posted by ultimA View Post

        Not necessarily. Tessellation trades memory bandwidth for GPU compute resources. So if your application+hardware combination was bandwidth-limited and was not fully utilizing the GPU otherwise, tessellation can improve performance by sending fewer vertices to the GPU while still retaining the same quality.
        That presumes you are using tessellation to replace your old code that was providing identical vertices the old way. From what I've seen, that has never really happened in real apps. Maybe on mobile?

        Games seem to just use tessellation to add additional vertices on top of the old ones, providing a higher level of quality (for more GPU work).
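        A rough GLSL sketch of that "extra detail on top" pattern, purely for illustration: a tessellation evaluation shader that places the newly generated vertices and pushes them out along the normal with a height map (displacement mapping). The names uHeightMap, uDisplace and uMVP are made up for the example, not anything from the driver or games discussed here.

        #version 400 core
        // Tessellation evaluation shader: the fixed-function tessellator has already
        // generated new vertices inside each triangle patch; this stage positions them
        // and displaces them along the interpolated normal for extra surface detail.
        layout(triangles, equal_spacing, ccw) in;

        in vec3 tcPosition[];   // control points passed through by the control shader
        in vec3 tcNormal[];
        in vec2 tcTexCoord[];

        uniform sampler2D uHeightMap;  // assumed displacement texture
        uniform float     uDisplace;   // assumed displacement scale
        uniform mat4      uMVP;

        void main()
        {
            // gl_TessCoord holds the barycentric coordinate of the generated vertex.
            vec3 p  = gl_TessCoord.x * tcPosition[0] + gl_TessCoord.y * tcPosition[1] + gl_TessCoord.z * tcPosition[2];
            vec3 n  = normalize(gl_TessCoord.x * tcNormal[0] + gl_TessCoord.y * tcNormal[1] + gl_TessCoord.z * tcNormal[2]);
            vec2 uv = gl_TessCoord.x * tcTexCoord[0] + gl_TessCoord.y * tcTexCoord[1] + gl_TessCoord.z * tcTexCoord[2];

            // The original mesh is untouched; the added vertices carry the extra detail.
            p += n * texture(uHeightMap, uv).r * uDisplace;
            gl_Position = uMVP * vec4(p, 1.0);
        }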



        • #14
          Intel figured out how to fix the GPU crashes on Gen7 hardware, and there are patches enabling it there now too.
          Last edited by smitty3268; 25 December 2015, 05:12 AM.



          • #15
            Originally posted by smitty3268 View Post
            Intel figured out how to fix the GPU crashes on Gen7 hardware, and there are patches enabling it there now too.
            Aww yiss.


            Unigine Heaven doesn't render correctly on Ivy Bridge, but you can see tessellation works:

            It looks black without tessellation too. And yes, ~/.driconf is up to date, and with radeonsi it works fine in the same configuration.

            By the way: is Intel going to do something about X getting sluggish under heavy OpenGL loads? Running Unigine Heaven on my Ivy Bridge GPU not only makes X less responsive, it also makes the mouse pointer sluggish and jumpy...



            • #16
              Originally posted by ultimA View Post

              Not necessarily. Tessellation trades memory bandwidth for GPU compute resources. So if your application+hardware combination was bandwidth-limited and was not fully utilizing the GPU otherwise, tessellation can improve performance by sending fewer vertices to the GPU while still retaining the same quality.
              On its own, tessellation doesn't trade anything; it's a shader stage that generates varying numbers of additional triangles from something called "patches" (geometry that isn't renderable on its own). So, just like with geometry shaders, you get additional computation overhead and a bigger memory constraint (as smitty already pointed out).

              Using that additional geometry to send fewer total vertices to the GPU isn't, to me, tessellation itself but an advanced rendering technique that makes use of it. But the point is: with statically generated geometry you upload it once, whereas with tessellation that shader stage runs on every draw call, unless of course you capture the generated vertices back into buffers and plain-draw them from there on, something that is apparently often done with geometry shaders (I don't know whether that's actually viable with tessellation, especially since you often want varying level of detail as you walk through the 3D world).

              It really all depends on what the application is doing.
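              To make the "patches in, triangles out" point concrete, here is a minimal, illustrative tessellation control shader; the constant levels are just an example. On the API side the application submits patches rather than triangles, e.g. glPatchParameteri(GL_PATCH_VERTICES, 3) followed by glDrawArrays(GL_PATCHES, 0, count), and nothing is renderable until the tessellator and the evaluation shader have produced real triangles, which happens again on every draw call.

              #version 400 core
              // Tessellation control shader: decides, per patch, how finely the
              // fixed-function tessellator subdivides it. Runs on every draw call,
              // once per output control point.
              layout(vertices = 3) out;

              in  vec3 vPosition[];   // from the vertex shader, one per control point
              out vec3 tcPosition[];

              void main()
              {
                  // Pass the control point through unchanged.
                  tcPosition[gl_InvocationID] = vPosition[gl_InvocationID];

                  // Per-patch tessellation levels only need to be written once.
                  if (gl_InvocationID == 0) {
                      gl_TessLevelInner[0] = 4.0;   // constant level, illustration only
                      gl_TessLevelOuter[0] = 4.0;
                      gl_TessLevelOuter[1] = 4.0;
                      gl_TessLevelOuter[2] = 4.0;
                  }
              }

              Computing those levels from camera distance instead of a constant is the usual way to get the varying level of detail mentioned above.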



              • #17
                Originally posted by pal666 View Post
                desktop?
                It works in desktop motherboards. The only thing stopping you from having one is money :-P



                • #18
                  Originally posted by smitty3268 View Post

                  That presumes you are using tessellation to replace your old code that was providing identical vertices the old way. From what I've seen, that has never really happened in real apps. Maybe on mobile?

                  Games seem to just use tessellation to add additional vertices on top of the old ones, providing a higher level of quality (for more GPU work).
                  Depends on how you look at it. Even if you use tessellation to generate additional geometry for improved picture quality, often it will be because it would have been prohibitive to send that much geometry in the first place. So your app might get slower using it, but still, without tessellation it would have gotten even slower. In the end, tessellation was used to save bandwidth. The only difference is, in one case the saved bandwidth is used to increase FPS at current quality levels, while in the other it is used to enable higher quality on the same hardware.
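                  To put rough, purely illustrative numbers on that trade-off: with equal_spacing, a uniform tessellation level of 8 splits each triangle patch into 8 × 8 = 64 triangles, so a coarse 24,000-triangle patch mesh expands to roughly 1.5 million triangles on-chip. Getting the same density without tessellation would mean storing and fetching about 64× as much vertex data, while with tessellation the vertex fetch stays small and the expansion is redone by the tessellation stages on every draw call.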



                  • #19
                    Originally posted by siavashserver
                    Less vertex shader work. When dealing with animated (skinned) geometry, usually 4 bone/joint transformations per vertex have to be read from video memory and blended together. By using tessellation, that time-consuming work is done only for the key vertices by the vertex shader, and the extra level of detail is simply added on top during the tessellation stage.

                    I'm not sure I understand this scenario fully. Since the vertex shader runs after the tess ones, and on every vertex that was generated by tess, how do you save on expensive vert shader invocations? Or are you talking about running a non-tess pipeline on the few vertices first, capturing that via transform feedback, and then running it through tess with a simpler vert shader at the end?



                    • #20
                      Originally posted by siavashserver
                      No, the vertex shader runs before the tessellation stage:
                      You're totally right, not sure how I fucked that up in my head, haha. I actually noticed when I edited some of the GLSL in my engine and thought, "wait, what I wrote on Phoronix this morning didn't make any sense lol".
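                      For reference, the OpenGL 4.x order is vertex shader → tessellation control shader → fixed-function tessellator → tessellation evaluation shader → (optional) geometry shader → rasterizer → fragment shader. So the skinning scenario quoted above could look roughly like the sketch below: the vertex shader runs once per control point of the coarse patch mesh and pays for the 4-bone blend there, and the tessellation stages that follow only interpolate the already-skinned result. uBones, MAX_BONES and the attribute layout are made up for the example.

                      #version 400 core
                      // Hypothetical skinning vertex shader: runs before tessellation,
                      // once per control point, so the expensive bone blend is paid only
                      // for the coarse mesh. The detail vertices generated downstream
                      // just interpolate these already-skinned positions.
                      #define MAX_BONES 64

                      layout(location = 0) in vec3  aPosition;
                      layout(location = 1) in vec3  aNormal;
                      layout(location = 2) in ivec4 aBoneIndices;
                      layout(location = 3) in vec4  aBoneWeights;   // assumed to sum to 1.0

                      uniform mat4 uBones[MAX_BONES];               // bone matrices read from memory

                      out vec3 vPosition;   // consumed by the tessellation control shader
                      out vec3 vNormal;

                      void main()
                      {
                          // Blend the four bone matrices for this control point.
                          mat4 skin = aBoneWeights.x * uBones[aBoneIndices.x]
                                    + aBoneWeights.y * uBones[aBoneIndices.y]
                                    + aBoneWeights.z * uBones[aBoneIndices.z]
                                    + aBoneWeights.w * uBones[aBoneIndices.w];

                          vPosition = vec3(skin * vec4(aPosition, 1.0));
                          vNormal   = mat3(skin) * aNormal;
                          // gl_Position is written later by the tessellation evaluation shader.
                      }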

