A Look At OpenGL ES 3.0: Lots Of Good Stuff

  • curaga
    replied
    Yes, delta compression in general parallelizes badly. But you may still be able to send the vertices as 8-bit values with clever packing, in non-delta ways.

    I was only talking about your point of sending less data over. Geometry shaders are needed to generate more, I agree there.
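    [Editor's note] The non-delta 8-bit idea above can be sketched outside of GL: quantize positions to normalized signed bytes relative to a known range on the CPU, and the "unpack" is a single scale-and-bias that a vertex shader would apply. A minimal Python sketch of the round trip; the function names and tolerance are illustrative, not from any GL API:

    ```python
    # Quantize vertex coordinates to signed 8-bit "normalized" values,
    # mimicking what a byte attribute with normalization would carry,
    # then reconstruct with the scale/bias a vertex shader would apply.

    def quantize(coords, lo, hi):
        # Map [lo, hi] -> [-127, 127] (symmetric signed-normalized convention)
        return [round((c - lo) / (hi - lo) * 254 - 127) for c in coords]

    def dequantize(bytes_, lo, hi):
        # The "vertex shader" side: one multiply-add per component
        return [(b + 127) / 254 * (hi - lo) + lo for b in bytes_]

    coords = [0.0, 1.25, 3.7, 9.99, 10.0]
    lo, hi = 0.0, 10.0
    packed = quantize(coords, lo, hi)
    unpacked = dequantize(packed, lo, hi)

    # Worst-case error is half a quantization step: (hi - lo) / 254 / 2
    step = (hi - lo) / 254
    assert all(abs(a - b) <= step / 2 + 1e-9 for a, b in zip(coords, unpacked))
    ```

    The catch, as noted, is that 8 bits only buys you 255 distinct positions per axis over the chosen range, so the range has to be tight (e.g. per-mesh bounding box) for the error to be acceptable.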



  • ecloud
    replied
    Originally posted by curaga View Post
    Vert packing is an old trick, you can create any complex system you like as long as you can also unpack in a vertex shader.

    You sure it will be a benefit though?
    How can a vertex shader look at more than one vertex at a time? With delta compression, every vertex would depend on the previous one, so the GPU could not unpack them in parallel. However, multiple delta-compressed sequences could be unpacked in parallel. So maybe each vertex given could be the start of a delta-compressed path, and a supplementary buffer provides the deltas?

    But a vertex shader is restricted to producing one output vertex for each input vertex, and what I want is a way to generate vertices in shader code. Bezier rendering is adaptive: you generate more vertices in the areas of the curve which make tight bends, and fewer in the areas which are closer to straight, so it's hard to predict the total number you need in advance. When rendering delta-compressed sequences, the total number of output vertices can be known in advance, but it's still a vertex-generation problem because each vertex must be visited in order to apply the diffs from the previous vertex. Isn't this type of problem the point of having geometry shaders?
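    [Editor's note] The per-sequence scheme described here (a full-precision start vertex plus small deltas, with sequences independent of each other) can be sketched in Python. Decoding within one sequence is a serial prefix sum, but each sequence is an independent work item, which is where the parallelism would come from; the data layout is purely illustrative:

    ```python
    # Each sequence: a full-precision start vertex plus small 8-bit deltas.
    # Decoding within a sequence is serial (a prefix sum), but sequences are
    # independent, so a GPU could unpack them in parallel, one thread each.

    from itertools import accumulate

    def decode_sequence(start, deltas):
        # Prefix-sum the deltas onto the start vertex
        return list(accumulate(deltas, initial=start))

    sequences = [
        (100.0, [1, 2, -1]),   # decodes to 100, 101, 103, 102
        (50.0,  [5, 5]),       # decodes to 50, 55, 60
    ]

    # Each tuple is an independent work item
    decoded = [decode_sequence(s, d) for s, d in sequences]
    assert decoded == [[100.0, 101.0, 103.0, 102.0], [50.0, 55.0, 60.0]]
    ```

    Note the trade-off: shorter sequences mean more parallelism but more full-precision start vertices, i.e. less compression.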



  • curaga
    replied
    Vert packing is an old trick, you can create any complex system you like as long as you can also unpack in a vertex shader.

    You sure it will be a benefit though?



  • ecloud
    replied
    Why still no geometry shaders?

    The one thing I want most from some next-gen ES spec is a way to accelerate 2d elements such as bezier curves and elliptical arcs. It should be possible for the GPU to generate the interpolated points from the control points. Any ideas how to get that? If only they had included geometry shaders.
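    [Editor's note] The adaptive point generation described here is typically done on the CPU by recursive subdivision with a flatness test: keep splitting the curve until each piece is straight enough. A Python sketch for a quadratic bezier; the flatness test and tolerance are illustrative choices, not from any GL/ES API:

    ```python
    # Adaptive flattening of a quadratic Bezier: recursively split until each
    # segment is "flat enough", so tight bends get more points than straight
    # runs -- which is why the total point count is hard to predict up front.

    def midpoint(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def flat_enough(p0, p1, p2, tol):
        # Distance of the control point from the chord midpoint as a flatness proxy
        mx, my = midpoint(p0, p2)
        return abs(p1[0] - mx) <= tol and abs(p1[1] - my) <= tol

    def flatten(p0, p1, p2, tol=0.1, out=None):
        if out is None:
            out = [p0]
        if flat_enough(p0, p1, p2, tol):
            out.append(p2)
            return out
        # de Casteljau split into two halves, recurse on each
        a, b = midpoint(p0, p1), midpoint(p1, p2)
        m = midpoint(a, b)
        flatten(p0, a, m, tol, out)
        flatten(m, b, p2, tol, out)
        return out

    pts = flatten((0, 0), (1, 2), (2, 0))
    assert pts[0] == (0, 0) and pts[-1] == (2, 0)
    assert len(pts) > 3  # the bend forces subdivision
    ```

    Doing this on the GPU is exactly a vertex-amplification problem: the input is three control points and the output is a data-dependent number of vertices, which vertex shaders cannot express.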

    Another thing I want (and another way to accelerate curve rendering) is a way to send vertices across as 8-bit deltas rather than full 32-bit floats. Haven't figured out how to do that with ES 2.0 either.



  • jrch2k8
    replied
    Originally posted by curaga View Post
    Fine, prepare for an elanthis-like essay nobody bothers to read, because it's way too long.


    The new linking model, while more flexible, requires much more boilerplate. This is a
    clear step backwards. Before you didn't have to do the several calls binding names to numbers.

    A much better model would be to default to the current ones, and if *and only if* the user
    wishes to override, then override. This would let one use gl_FragData[3] and yet bind an
    input to their desired name if they wish.

    The syntax is just plain bad. I just woke up and yet I could do much better.


    Case 1: "precision mediump float;"

    Precision is entirely redundant, it's there in the "p". Nuke it. New syntax: "medp float;".


    Case 2: "layout(location = 0) out vec4 data;"

    Sweet jesus how horrifying. That thing is more verbose than Java. And Java's trying really
    hard.

    New try: "out vec4 data: 0;". I still don't like in and out, but can't think of a change
    there right now. Even varying was better, but with the coming geometry shaders that wouldn't
    be sensible any longer.

    I say "the new syntax is bad". You want me to write an essay on the topic? Surely you can
    see yourself how it's bad.

    Ad-hom, yeah. Nothing like personal insults to get a day started.

    It is Khronos who decides the syntax. The syntax happens to be absolutely horrible.
    How is this related, at all, to the intelligence of a random dev at corp $foo or some forum
    poster.
    Well, to be honest, it's a question of personal preference: some people prefer to work with condensed syntax and others prefer more expanded versions. I particularly prefer verbose coding, since in the long run, when things get massive in line count, it makes your life easier.

    For example, in C++:

    The case I hate:

    my_type DecImg(my_vector x, my_vector y, bool cond) <-- many of the devs I work with like to make me suffer with this

    The case I like:

    Ect2_img_data DecodeECT2ImageAVX(m_vector_256 BlockAfromSwizzle, m_vector_256 BlockBfromSwizzle, bool FrameEndedSignaliing)

    Sure, case 1 is easier to write, but when you need to debug a 200k-line program it's insanity, while case 2 is a breeze. I would love it if C++ syntax could be even more verbose, but as you can see, it's a matter of taste.



  • madjr
    replied
    Bye, DirectX!



  • curaga
    replied
    Fine, prepare for an elanthis-like essay nobody bothers to read, because it's way too long.


    The new linking model, while more flexible, requires much more boilerplate. This is a
    clear step backwards. Before you didn't have to do the several calls binding names to numbers.

    A much better model would be to default to the current ones, and if *and only if* the user
    wishes to override, then override. This would let one use gl_FragData[3] and yet bind an
    input to their desired name if they wish.

    The syntax is just plain bad. I just woke up and yet I could do much better.


    Case 1: "precision mediump float;"

    Precision is entirely redundant, it's there in the "p". Nuke it. New syntax: "medp float;".


    Case 2: "layout(location = 0) out vec4 data;"

    Sweet jesus how horrifying. That thing is more verbose than Java. And Java's trying really
    hard.

    New try: "out vec4 data: 0;". I still don't like in and out, but can't think of a change
    there right now. Even varying was better, but with the coming geometry shaders that wouldn't
    be sensible any longer.


    Originally posted by F i L View Post
    Some people are so quick to reject anything different than what they're already used
    to, without giving the new ideas thorough, unbiased consideration. Granted every new idea
    isn't good, but you should at least give an alternative or explanation to your
    sensationalist statements.
    I say "the new syntax is bad". You want me to write an essay on the topic? Surely you can
    see yourself how it's bad.

    Originally posted by mark45 View Post
    Don't worry, you're not smarter than the devs from AMD, Nvidia, Intel etc who worked
    on the specs, so the specs are fine and carefully crafted.
    Ad-hom, yeah. Nothing like personal insults to get a day started.

    It is Khronos who decides the syntax. The syntax happens to be absolutely horrible.
    How is this related, at all, to the intelligence of a random dev at corp $foo or some forum
    poster.



  • elanthis
    replied
    Yes, the syntax sucks. The actual _feature_ of the syntax is 1000x better than what was there before. Sometimes great advancements come wrapped in terrible packages.

    If you want to know one of the reasons why I constantly complain about OpenGL/GLSL being so crappy and D3D/HLSL is so much better, now you know. HLSL has had the "new" way of linking shader stages for years (which among other things is a hard prerequisite for separable shaders), but without the horrible syntax Khronos saddled GLSL with. Why in $DEITY's name Khronos picked the awkward obtuse location thing instead of doing it like HLSL did -- which is clean, readable, and concise -- is beyond me.

    Leave a comment:


  • F i L
    replied
    Originally posted by mark45 View Post
    What, you saw the line with "location" and that makes you vomit? Are you so feeble minded? It just shows that you can specify the location in the new version.

    There's other nice stuff, "varying" is gone, "in" is in, you can use your own (mnemonic) words instead of gl_FragColor, and other neat stuff.

    Don't worry, you're not smarter than the devs from AMD, Nvidia, Intel etc who worked on the specs, so the specs are fine and carefully crafted.
    +1

    Some people are so quick to reject anything different than what they're already used to, without giving the new ideas thorough, unbiased consideration. Granted, not every new idea is good, but you should at least give an alternative or explanation to your sensationalist statements. For the most part, GLES 3.0 looks like a great improvement all around, and with Intel bringing it to Mesa by early next year (I'm sure Mac will follow suit as well), I'm hoping it becomes the new "base line" architecture we need to build against for modern applications.



  • mark45
    replied
    Originally posted by curaga View Post
    Okay everyone, take a look at the shader code porting slide.

    Now, hands up anyone who wanted to vomit after reading the "new and improved" glsl es 3.0 syntax.
    What, you saw the line with "location" and that makes you vomit? Are you so feeble minded? It just shows that you can specify the location in the new version.

    There's other nice stuff, "varying" is gone, "in" is in, you can use your own (mnemonic) words instead of gl_FragColor, and other neat stuff.

    Don't worry, you're not smarter than the devs from AMD, Nvidia, Intel etc who worked on the specs, so the specs are fine and carefully crafted.
    Last edited by mark45; 12 August 2012, 07:01 PM.

