S3TC Is Still Problematic For Mesa Developers, Users


  • #16
    Originally posted by -jK- View Post
    How are ASTC or ETC1/2 replacements for S3TC?
    • ETC is implemented in software in all desktop drivers atm (and it is very unlikely Mesa implements it in hardware before the blobs do)
    • ASTC isn't exposed by any driver yet, as far as I know (nor is it known how many GPUs can do it in hardware)
    So is the future of Mesa really to decompress all current & future texture formats and not support a single hardware-accelerated compressed texture format?

    My view of the future:
    • ASTC is too complicated, so there won't be many compression libs (see M$'s BC6/7), nor will there be much driver/hw support. So it won't ever get popular.
    • ETC could easily gain popularity (better quality than S3TC) if it were supported by drivers, but I don't see that happening in the (near) future
    So it still isn't the fault of the engine/content devs that S3TC is preferred.
    Eh? So you're saying that you think it'll fail due to it being too new? IIRC ASTC is part of the OpenGL spec, so drivers will have to support it if they want to claim OpenGL compliance. Not necessarily in hardware, but if you're supporting it in software, you might as well also have it accelerated. So yes, it's still not the fault of the developers that S3TC is preferred, but it will be in a few years. Or, you know, they will stop preferring it.
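As an aside, an application can check at runtime which compressed texture formats a driver actually advertises before preferring one. Below is a minimal C sketch, not from any post in this thread, assuming a current OpenGL context created elsewhere (e.g. via GLFW or SDL, not shown):

```c
/* List the compressed texture formats the driver exposes.
 * Assumes a current GL context; enum values are from the GL registry. */
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

#ifndef GL_NUM_COMPRESSED_TEXTURE_FORMATS
#define GL_NUM_COMPRESSED_TEXTURE_FORMATS 0x86A2
#endif
#ifndef GL_COMPRESSED_TEXTURE_FORMATS
#define GL_COMPRESSED_TEXTURE_FORMATS 0x86A3
#endif
#define GL_COMPRESSED_RGBA_S3TC_DXT5_EXT 0x83F3
#define GL_COMPRESSED_RGBA_ASTC_4x4_KHR  0x93B0

void list_compressed_formats(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

    GLint *formats = malloc((size_t)count * sizeof *formats);
    if (!formats)
        return;
    glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats);

    for (GLint i = 0; i < count; ++i) {
        if (formats[i] == GL_COMPRESSED_RGBA_S3TC_DXT5_EXT)
            printf("S3TC/DXT5 exposed\n");
        else if (formats[i] == GL_COMPRESSED_RGBA_ASTC_4x4_KHR)
            printf("ASTC 4x4 exposed\n");
        else
            printf("format 0x%04X\n", formats[i]);
    }
    free(formats);
}
```

On Mesa at the time of this thread, the list would typically show S3TC only when libtxc_dxtn or the force-enable option was present, which is the problem the article describes.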



    • #17
      Originally posted by doom_Oo7 View Post
      Oh, it's been a long time since we've seen a Matrox graphics card; they must be dead too
      Matrox at least sold stuff like the TripleHead2Go, though that is largely unnecessary now that GPUs have as many as 6 outputs these days; I doubt there were all that many people actually connecting them to laptops that only have a single video out.

      If they are just sitting back and collecting licensing checks then they as a company shouldn't exist anymore. Produce or die.



      • #18
        Originally posted by Kivada View Post
        Matrox at least sold stuff like the TripleHead2Go, though that is largely unnecessary now that GPUs have as many as 6 outputs these days; I doubt there were all that many people actually connecting them to laptops that only have a single video out.
        They are still on top with respect to "head count": http://www.matrox.com/graphics/en/pr.../m9188pciex16/ 8 heads!



        • #19
          Originally posted by Kivada View Post
          If they are just sitting back and collecting licensing checks then they as a company shouldn't exist anymore. Produce or die.
          So you are really not aware? They just stopped selling to the consumer market and now focus only on professional markets, like medical imaging (some MRI systems are built around Matrox hardware, for instance).

          I guess it's the same for VIA.



          • #20
            Originally posted by doom_Oo7 View Post
            So you are really not aware? They just stopped selling to the consumer market and now focus only on professional markets, like medical imaging (some MRI systems are built around Matrox hardware, for instance).

            I guess it's the same for VIA.
            Actually, no, I was not. I take it this is strictly embedded?

            A while back, when I was looking around for a new monitor in the same resolution bracket as ye olde IBM T221, none of the high-end screens I could find, like the Eizo FDH3601, mentioned anything about Matrox, even though they claimed they were for the medical/engineering/etc. market.

            Even so, you'd think that the bigger players would be pushing them out; it's not like AMD and Intel don't make embedded x86 systems. Hell, Intel is trying to shoehorn x86 into a cellphone power envelope these days.

            Originally posted by droste View Post
            They are still on top with respect to "head count": http://www.matrox.com/graphics/en/pr.../m9188pciex16/ 8 heads!
            Uh, if you need to go THAT big, why not go projection or get a few 100" screens? If you actually had to render something on screens of a decent resolution, instead of just playing back video or static images, then even a GTX Titan wouldn't have anywhere near enough grunt to avoid ending up a choppy mess.
            Last edited by Kivada; 08-15-2013, 01:20 AM.



            • #21
              Originally posted by Kivada View Post
              Uh, if you need to go THAT big, why not go projection or get a few 100" screens? If you actually had to render something on screens of a decent resolution, instead of just playing back video or static images, then even a GTX Titan wouldn't have anywhere near enough grunt to avoid ending up a choppy mess.
              Getting one large screen isn't the only use case for such cards. If you want to display simple things at different locations, this is a perfect way to do it.



              • #22
                Originally posted by droste View Post
                Getting one large screen isn't the only use case for such cards. If you want to display simple things at different locations, this is a perfect way to do it.
                As opposed to lining out to coax and using a splitter and a spool of RG-6 cable to run it to a bunch of cheap TVs? Or converting to video over Cat5/5e/6 and back again?

                If you don't already have the screens, it's cheaper than VGA/DVI/HDMI/DisplayPort cabling if you need to run it more than the few feet of cable the screens came with.

                If you don't already have some of the pieces on hand, there are a lot of ways to skin this cat.



                • #23
                  Yes, VIA is still fairly strong in the embedded sector (gambling machines, digital signage).



                  • #24
                    Did they ever consider asking for permission to use it?



                    • #25
                      Originally posted by curaga View Post
                      Yes, VIA is still fairly strong in the embedded sector (gambling machines, digital signage).
                      Maybe, but I really wonder why. VIA's products simply cannot compete in terms of power consumption, features or performance. The last time I heard about VIA, they were trying to peddle a 25 W CPU as a "market-leading energy-efficient" solution, which was quite ridiculous. On top of that, VIA's hardware tends to be quite buggy, and software/driver support is pretty bad, even on Windows.

                      I guess most embedded hardware that still uses VIA solutions only does so because it was designed ages ago when VIA still had a small edge. I can't imagine anyone using VIA-based hardware for new developments.



                      • #26
                        Originally posted by brent View Post
                        Maybe, but I really wonder why. VIA's products simply cannot compete in terms of power consumption, features or performance. The last time I heard about VIA, they were trying to peddle a 25 W CPU as a "market-leading energy-efficient" solution, which was quite ridiculous. On top of that, VIA's hardware tends to be quite buggy, and software/driver support is pretty bad, even on Windows.

                        I guess most embedded hardware that still uses VIA solutions only does so because it was designed ages ago when VIA still had a small edge. I can't imagine anyone using VIA-based hardware for new developments.
                        Yes, inertia is a big reason for any entrenched industry.

                        However, they do still have some edge: as far as I know, no Atom is capable of 1 W max / 0.1 W idle. The lowest-powered Atom is around 3 W, IIRC. Their CPUs aren't really buggy, but the same can't be said for their graphics and, to some extent, their chipsets.



                        • #27
                          Originally posted by curaga View Post
                          Yes, inertia is a big reason for any entrenched industry.

                          However, they do still have some edge: as far as I know, no Atom is capable of 1 W max / 0.1 W idle. The lowest-powered Atom is around 3 W, IIRC. Their CPUs aren't really buggy, but the same can't be said for their graphics and, to some extent, their chipsets.
                          Well, VIA may have a 1 W TDP CPU, but it is extremely slow (a C7 @ 500 MHz) and still requires a two-die chipset to function. On the other hand, both Intel and AMD have SoCs (chipset fully integrated) with an overall TDP of < 5 W. Intel even has < 3 W TDP parts. I don't see an edge for VIA here at all. VIA-based designs are more complex (three dies on the PCB instead of one), will draw more power and perform worse.



                          • #28
                            Originally posted by GreatEmerald View Post
                            Eh? So you're saying that you think it'll fail due to it being too new? IIRC ASTC is part of the OpenGL spec, so drivers will have to support it if they want to claim OpenGL compliance. Not necessarily in hardware, but if you're supporting it in software, you might as well also have it accelerated. So yes, it's still not the fault of the developers that S3TC is preferred, but it will be in a few years. Or, you know, they will stop preferring it.
                            ASTC was announced with, IIRC, OpenGL 4.3, but it didn't become part of the core profile, nor did it in 4.4. It is still an extension, and it isn't implemented in any desktop driver either. So atm there is no ASTC; it's only available on paper. And because of its complexity, I assume it will stay like that for a while.
                            Nevertheless, as long as all drivers implement a texture compression format in software, it is worthless (like ETC). Software implementations can't be accelerated, because they send the decompressed bitmap to the GPU, increasing bandwidth a lot when accessing the texture (the whole point of texture compression on PCs is to reduce GPU<->VRAM bandwidth; reducing memory requirements doesn't matter with >512 MB of VRAM). So the only thing that can be accelerated is the decompression before sending the texture to the GPU, but that's worthless and has no impact on the _final_ performance (it just reduces a lag when uploading the texture).
                            Last edited by -jK-; 08-15-2013, 02:39 PM.
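To make the bandwidth argument concrete, here is a hedged C sketch (mine, not -jK-'s; it assumes DXT5/BC3, its 4:1 ratio versus RGBA8, and a current GL 1.3+ context) of the upload path that hardware-supported compression enables:

```c
/* Upload a DXT5-compressed texture. When the driver supports the
 * format in hardware, these blocks stay compressed in VRAM and are
 * decoded in the texture units; a software fallback would instead
 * expand them 4:1 on the CPU before upload. */
#include <GL/gl.h>

#define GL_COMPRESSED_RGBA_S3TC_DXT5_EXT 0x83F3

GLuint upload_dxt5(const unsigned char *blocks, int w, int h)
{
    /* DXT5: one 16-byte block per 4x4 texel tile, i.e. 4:1 vs RGBA8. */
    GLsizei size = ((w + 3) / 4) * ((h + 3) / 4) * 16;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           w, h, 0, size, blocks);
    /* e.g. 1024x1024: 1 MiB compressed vs 4 MiB raw RGBA8 in VRAM,
     * and each texture fetch reads proportionally less memory. */
    return tex;
}
```

Whether the blocks actually stay compressed in VRAM, rather than being expanded by the driver, is exactly the point being argued above.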



                            • #29
                              Originally posted by -jK- View Post
                              ASTC was announced with, IIRC, OpenGL 4.3, but it didn't become part of the core profile, nor did it in 4.4. It is still an extension, and it isn't implemented in any desktop driver either. So atm there is no ASTC; it's only available on paper. And because of its complexity, I assume it will stay like that for a while.
                              Nevertheless, as long as all drivers implement a texture compression format in software, it is worthless (like ETC). Software implementations can't be accelerated, because they send the decompressed bitmap to the GPU, increasing bandwidth a lot when accessing the texture (the whole point of texture compression on PCs is to reduce GPU<->VRAM bandwidth; reducing memory requirements doesn't matter with >512 MB of VRAM). So the only thing that can be accelerated is the decompression before sending the texture to the GPU, but that's worthless and has no impact on the _final_ performance (it just reduces a lag when uploading the texture).
                              It's not yet part of it? Hmm, well, that can be a problem indeed, then.
                              As for the acceleration part, I meant: "if the hardware has no such capability, do it in software; otherwise do it completely in hardware; since it's much faster in hardware and we already have a software implementation written, let's add hardware support for it in our new graphics cards".
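The fallback GreatEmerald describes might look something like the following C sketch. This is my illustration, not code from the thread: the extension query assumes a legacy/compatibility GL context where glGetString(GL_EXTENSIONS) is valid, and decode_astc_rgba8() is a hypothetical CPU decoder, not a real library call.

```c
#include <string.h>
#include <stdlib.h>
#include <GL/gl.h>

#define GL_COMPRESSED_RGBA_ASTC_4x4_KHR 0x93B0

/* Hypothetical CPU ASTC decoder -- not a real library function. */
void decode_astc_rgba8(const void *blocks, int w, int h, void *rgba_out);

/* Assumes a texture object is already bound to GL_TEXTURE_2D. */
void upload_astc_or_fallback(const void *blocks, GLsizei size, int w, int h)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    if (ext && strstr(ext, "GL_KHR_texture_compression_astc_ldr")) {
        /* Hardware path: blocks stay compressed in VRAM. */
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_ASTC_4x4_KHR,
                               w, h, 0, size, blocks);
    } else {
        /* Software path: expand on the CPU and upload raw RGBA8 --
         * the case -jK- argues defeats the point of compression. */
        void *rgba = malloc((size_t)w * (size_t)h * 4);
        if (!rgba)
            return;
        decode_astc_rgba8(blocks, w, h, rgba);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        free(rgba);
    }
}
```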



                              • #30
                                Originally posted by brent View Post
                                Well, VIA may have a 1W TDP CPU, but it is extremely slow (C7 @ 500 MHz), and still requires a two-die chipset to function. On the other hand, both Intel and AMD have SoCs (chipset fully integrated) with an overall TDP of < 5 W. Intel even has < 3 W TDP parts. I don't see an edge for VIA here at all. VIA-based designs are more complex (three dice on the PCB instead of one), will chug more power and perform worse.
                                Yep, the 4.5 W AMD G-T16R is one of the most current options for low-draw x86.

