Intel Enables Tessellation Shader Support In Open-Source Linux Driver


  • haagch
    replied
Didn't they omit tessellation in the Linux port of Metro Redux? Have they added it since?



  • Kano
    replied
I think the patches needed are the same for Haswell, but I can wait till they are merged. Metro Redux would be an interesting test. MLL was unstable with Intel before.
    Last edited by Kano; 27 December 2015, 08:30 AM.



  • haagch
    replied
For Haswell it should already be merged, right?

For Ivy Bridge, it's more convenient here: http://cgit.freedesktop.org/~kwg/mes...tess-ivb-pairs; it merges with no conflicts.



  • Kano
    replied
Nice, you found a hack. I'll wait till the Intel patches show up in Mesa git, then I could try Haswell.



  • haagch
    replied
    Originally posted by haagch View Post
    Unigine Heaven doesn't render correctly on ivy bridge, but you can see tessellation works:

    It looks black without tessellation too. And yes, ~/.driconf is up to date, and with radeonsi it works fine in the same configuration.
    Found the problem here: https://bugs.freedesktop.org/show_bug.cgi?id=92233
So we know exactly what the problem is, and it's a one-line fix, yet the bug is 2.5 months old with no activity. *sigh*

    With
    Code:
    disable_blend_func_extended=true unigine-heaven


Glorious 17 fps at 1024x576, and a very sluggish kwin/X with an unusably sluggish mouse pointer.
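For anyone who'd rather not prefix every launch with the environment variable, the same option can be made persistent through driconf's XML in ~/.drirc (a sketch; the executable name for Heaven is an assumption and depends on how the launcher script is named on your system):

```xml
<!-- ~/.drirc: apply the workaround only to Unigine Heaven.
     executable name "heaven_x64" is an assumption, check yours. -->
<driconf>
    <device>
        <application name="Unigine Heaven" executable="heaven_x64">
            <option name="disable_blend_func_extended" value="true" />
        </application>
    </device>
</driconf>
```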

Does any real-world application actually exist that uses tessellation and runs at 1920x1080@60fps on Ivy Bridge?

    Edit: Tessmark with the X8 setting barely stays above 60 fps in a maximized window, yay!
    Last edited by haagch; 26 December 2015, 06:22 PM.



  • Ancurio
    replied
    Originally posted by siavashserver
    No, the vertex shader runs before tessellation stage:
    You're totally right, not sure how I fucked that up in my head, haha. I actually noticed when I edited some of the GLSL in my engine, and thought "wait, what I wrote on phoronix this morning didn't make any sense lol".
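For reference, the fixed ordering of the programmable stages in the GL 4.x pipeline (standard stage names, nothing specific to this thread), sketched as comments:

```glsl
// GL 4.x programmable stage order:
//   vertex shader           -> runs FIRST, once per input control point
//   tessellation control    -> once per output control point; sets tess levels
//   (fixed-function tessellator generates the new vertices)
//   tessellation evaluation -> once per generated vertex
//   geometry shader         -> optional
//   fragment shader         -> once per fragment
```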



  • Ancurio
    replied
    Originally posted by siavashserver
    Less vertex shader work. When dealing with animated (skinned) geometries, usually 4 bone/joint transformations per vertex should be read from video memory and blended together. By using tessellation, the time consuming work will be done only for key vertices by vertex shader, and then extra level of detail will be simply added on top during tessellation stage.

    I'm not sure I understand this scenario fully. Since the vertex shader runs after the tess ones, and on every vertex that was generated by tess, how do you save on expensive vert shader invocations? Or are you talking about running a non-tess pipeline on the few vertices first, capturing that via transform feedback, and then running it through tess with a simpler vert shader at the end?



  • ultimA
    replied
    Originally posted by smitty3268 View Post

    That presumes you are using tessellation to replace your old code that was providing identical vertices the old way. From what I've seen, that has never really happened in real apps. Maybe on mobile?

    Games seem to just use tessellation to add additional vertices on top of the old ones, providing a higher level of quality (for more gpu work).
Depends how you look at it. Even if you use tessellation to generate additional geometry for improved picture quality, it will often be because it would have been prohibitive to send that much geometry in the first place. So your app might get slower using it, but without tessellation it would have been slower still. In the end, tessellation is used to save bandwidth either way. The only difference is that in one case the saved bandwidth is used to increase FPS at current quality levels, while in the other it is used to enable higher quality on the same hardware.



  • M@GOid
    replied
    Originally posted by pal666 View Post
    desktop?
It works on desktop motherboards. The only thing stopping you from having one is money :-P



  • Ancurio
    replied
    Originally posted by ultimA View Post

Not necessarily. Tessellation trades memory bandwidth for GPU compute resources. So if your application+hardware combination was bandwidth limited and was not otherwise fully utilizing the GPU, tessellation can improve performance by sending fewer vertices to the GPU while still retaining the same quality.
On its own, tessellation doesn't trade anything: it's a shader stage that generates varying numbers of additional triangles from something called "patches" (geometry that isn't renderable on its own). So, just like with geometry shaders, you get additional computation overhead and a bigger memory footprint (as smitty already pointed out).

Using that additional geometry to then send fewer total vertices to the GPU isn't, to my mind, tessellation itself but an advanced rendering technique that makes use of it. The point is: statically generated geometry is uploaded only once, whereas the tessellation stage runs on every draw call, unless of course you capture the generated vertices back into buffers and plain-draw them from there on, something that is apparently often done with geometry shaders (I don't know whether it's actually viable with tessellation, especially since you often want varying level of detail as you walk through the 3D world).

It really all depends on what the application is doing.
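As a concrete illustration of that per-draw-call stage (a minimal GLSL 4.00 sketch for triangle patches, not taken from the thread): the control shader picks how finely each patch is subdivided, the fixed-function tessellator emits the new vertices, and the evaluation shader then runs once for every generated vertex, on every draw.

```glsl
#version 400
// Tessellation control shader (one invocation per output control point).
// Passes the 3 control points through and asks the tessellator to split
// each patch edge into 4 segments; varying these levels per frame is how
// engines get distance-dependent level of detail.
layout(vertices = 3) out;
void main() {
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
        gl_TessLevelInner[0] = 4.0;
    }
}

// ---- separate shader object: tessellation evaluation shader ----
#version 400
// Runs once per *generated* vertex; gl_TessCoord holds the barycentric
// coordinates of the new vertex inside the original patch.
layout(triangles, equal_spacing, ccw) in;
void main() {
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}
```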

