S3TC Is Still Problematic For Mesa Developers, Users

  • #21
    Originally posted by Kivada
    Uh, if you need to go THAT big, why not go projection or get a few 100" screens? If you actually had to render something on screens of a decent resolution, instead of just playing back video or static images, then even a GTX Titan wouldn't have anywhere near enough grunt to avoid ending up a choppy mess.
    Getting one large screen isn't the only use case for such cards. If you want to display simple things at different locations, this is a perfect way to do it.


    • #22
      Originally posted by droste
      Getting one large screen isn't the only use case for such cards. If you want to display simple things at different locations, this is a perfect way to do it.
      As opposed to lining out to coax and using a splitter and a spool of RG-6 cable to run it to a bunch of cheap TVs? Or converting to video over Cat5/5e/6 and back again?

      If you don't already have the screens, it's cheaper than VGA/DVI/HDMI/DisplayPort cabling if you need to run it more than the few feet of cable the screens came with.

      If you don't already have some of the pieces on hand, there are a lot of ways to skin this cat.


      • #23
        Yes, VIA is still fairly strong in the embedded sector (gambling machines, digital signage).


        • #24
          Did they ever consider asking for permission to use it?


          • #25
            Originally posted by curaga
            Yes, VIA is still fairly strong in the embedded sector (gambling machines, digital signage).
            Maybe, but I really wonder why. VIA's products simply cannot compete in terms of power consumption, features, or performance. The last time I heard about VIA, they were trying to peddle a 25 W CPU as a "market-leading energy efficient" solution, which was quite ridiculous. On top of that, VIA's hardware tends to be quite buggy, and software/driver support is pretty bad, even on Windows.

            I guess most embedded hardware that still uses VIA solutions only does so because it was designed ages ago, when VIA still had a small edge. I can't imagine anyone using VIA-based hardware for new developments.


            • #26
              Originally posted by brent
              Maybe, but I really wonder why. VIA's products simply cannot compete in terms of power consumption, features, or performance. The last time I heard about VIA, they were trying to peddle a 25 W CPU as a "market-leading energy efficient" solution, which was quite ridiculous. On top of that, VIA's hardware tends to be quite buggy, and software/driver support is pretty bad, even on Windows.

              I guess most embedded hardware that still uses VIA solutions only does so because it was designed ages ago, when VIA still had a small edge. I can't imagine anyone using VIA-based hardware for new developments.
              Yes, inertia is a big factor in any entrenched industry.

              However, they do still have some edge: as far as I know, no Atom is capable of 1 W max / 0.1 W idle; the lowest-powered Atom is around 3 W, IIRC. Their CPUs aren't really buggy, but the same can't be said for their graphics and, to some extent, their chipsets.


              • #27
                Originally posted by curaga
                Yes, inertia is a big factor in any entrenched industry.

                However, they do still have some edge: as far as I know, no Atom is capable of 1 W max / 0.1 W idle; the lowest-powered Atom is around 3 W, IIRC. Their CPUs aren't really buggy, but the same can't be said for their graphics and, to some extent, their chipsets.
                Well, VIA may have a 1 W TDP CPU, but it is extremely slow (a C7 @ 500 MHz) and still requires a two-die chipset to function. On the other hand, both Intel and AMD have SoCs (chipset fully integrated) with an overall TDP of < 5 W; Intel even has < 3 W TDP parts. I don't see an edge for VIA here at all. VIA-based designs are more complex (three dies on the PCB instead of one), will draw more power, and will perform worse.


                • #28
                  Originally posted by GreatEmerald
                  Eh? So you're saying that you think it'll fail because it's too new? IIRC, ASTC is part of the OpenGL spec, so drivers will have to support it if they want to claim OpenGL compliance. Not necessarily in hardware, but if you're supporting it in software, you might as well also have it accelerated. So yes, it's still not the fault of the developers that S3TC is preferred, but it will be in a few years. Or, you know, they will stop preferring it.
                  ASTC was announced alongside OpenGL 4.3 (IIRC), but it didn't become part of the core profile, nor did it in 4.4. It is still an extension, and it isn't implemented in any desktop driver. So at the moment there is no ASTC; it exists only on paper. And because of its complexity, I assume it will stay that way for a while.
                  Nevertheless, as long as drivers implement a texture compression format in software, it is worthless (like ETC). A software implementation can't be accelerated, because it sends the decompressed bitmap to the GPU, which greatly increases bandwidth when accessing the texture. The whole point of texture compression (on PCs) is to reduce GPU<->VRAM bandwidth; reducing memory requirements doesn't matter with >512 MB of VRAM. So the only thing that can be accelerated is the decompression before sending the texture to the GPU, but that's worthless and has no impact on the _final_ performance (it just reduces a lag when uploading the texture).
                  Last edited by -jK-; 15 August 2013, 02:39 PM.
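
                  To make the distinction concrete, here is a minimal C sketch of the two upload paths, under the assumption of a desktop GL context with GLEW initialised; the dxt1_data buffer and the decode_dxt1_to_rgba() helper are hypothetical placeholders, not real library calls:

                  Code:
                  /* Sketch of the two upload paths; assumes glewInit() has run.
                   * decode_dxt1_to_rgba() is a hypothetical CPU decoder. */
                  #include <stdlib.h>
                  #include <GL/glew.h>

                  extern unsigned char *decode_dxt1_to_rgba(const void *blocks,
                                                            int w, int h);

                  GLuint upload_dxt1(const void *dxt1_data, int width, int height,
                                     int image_size)
                  {
                      GLuint tex;
                      glGenTextures(1, &tex);
                      glBindTexture(GL_TEXTURE_2D, tex);

                      if (GLEW_EXT_texture_compression_s3tc) {
                          /* Native path: the DXT1 blocks stay compressed in VRAM and
                           * the sampler decodes them on the fly, so each texel costs
                           * 4 bits of bandwidth instead of 32. */
                          glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                                 GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                                                 width, height, 0,
                                                 image_size, dxt1_data);
                      } else {
                          /* Software path: decode on the CPU, upload plain RGBA. The
                           * texture then sits uncompressed in VRAM, so the GPU<->VRAM
                           * bandwidth saving is gone -- the case called worthless
                           * above. */
                          unsigned char *rgba = decode_dxt1_to_rgba(dxt1_data,
                                                                    width, height);
                          glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                                       GL_RGBA, GL_UNSIGNED_BYTE, rgba);
                          free(rgba);
                      }
                      return tex;
                  }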


                  • #29
                    Originally posted by -jK-
                    ASTC was announced alongside OpenGL 4.3 (IIRC), but it didn't become part of the core profile, nor did it in 4.4. It is still an extension, and it isn't implemented in any desktop driver. So at the moment there is no ASTC; it exists only on paper. And because of its complexity, I assume it will stay that way for a while.
                    Nevertheless, as long as drivers implement a texture compression format in software, it is worthless (like ETC). A software implementation can't be accelerated, because it sends the decompressed bitmap to the GPU, which greatly increases bandwidth when accessing the texture. The whole point of texture compression (on PCs) is to reduce GPU<->VRAM bandwidth; reducing memory requirements doesn't matter with >512 MB of VRAM. So the only thing that can be accelerated is the decompression before sending the texture to the GPU, but that's worthless and has no impact on the _final_ performance (it just reduces a lag when uploading the texture).
                    It's not yet part of it? Hmm, well, that can indeed be a problem, then.
                    As for the acceleration part, I meant: "if the hardware has no such capability, do it in software; else do it completely in hardware; since it's much faster in hardware and we already have a software implementation written, let's add hardware support for it in our new graphics cards".
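
                    A minimal sketch of that runtime check, assuming a GL 3.0+ context; note that the extension string only shows that the driver exposes ASTC, not whether decoding happens in dedicated hardware, which is the objection above:

                    Code:
                    #include <string.h>
                    #include <GL/glew.h>

                    /* Returns 1 if the driver advertises LDR ASTC support
                     * (GL_KHR_texture_compression_astc_ldr). */
                    int has_astc_ldr(void)
                    {
                        GLint i, n = 0;
                        glGetIntegerv(GL_NUM_EXTENSIONS, &n);
                        for (i = 0; i < n; i++) {
                            const char *ext =
                                (const char *) glGetStringi(GL_EXTENSIONS, i);
                            if (ext && strcmp(ext,
                                    "GL_KHR_texture_compression_astc_ldr") == 0)
                                return 1; /* exposed, possibly only in software */
                        }
                        return 0;
                    }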


                    • #30
                      Originally posted by brent
                      Well, VIA may have a 1 W TDP CPU, but it is extremely slow (a C7 @ 500 MHz) and still requires a two-die chipset to function. On the other hand, both Intel and AMD have SoCs (chipset fully integrated) with an overall TDP of < 5 W; Intel even has < 3 W TDP parts. I don't see an edge for VIA here at all. VIA-based designs are more complex (three dies on the PCB instead of one), will draw more power, and will perform worse.
                      Yep, the 4.5 W AMD G-T16R is one of the most current options for low-draw x86.
