VESA Announces DSC 1.2 Compression Standard

  • VESA Announces DSC 1.2 Compression Standard

    Phoronix: VESA Announces DSC 1.2 Compression Standard

    The Video Electronics Standards Association announced Display Stream Compression 1.2 today as the newest DSC standard...


  • #2
    Well - it is NOT lossless as INCORRECTLY stated in the article.
    It is "visually lossless" hence lossy compression. It's only that the typical uses won't notice (or so they claim).
    (here is the proof: http://www.vesa.org/faqs/ )

    Edit: I see the article is fixed. Great!
    I personally would write "someone made up a devious way of disguising LOSSY compression by using the term 'visually lossless'"; that would spice things up a bit.
    Last edited by waxhead; 27 January 2016, 04:38 PM.

    http://www.dirtcellar.net



    • #3
      From what I understand from [1], that evaluation was done looking at a 24-inch monitor from 45 cm.

      I study the effect of image compression on diagnostic imaging as part of my Ph.D., and I would be terribly annoyed if my display pipeline added another layer of artefacts... and so would radiologists. This seems like a terrible idea.


      [1] https://www.researchgate.net/publication/277725449_A_new_standard_method_of_subjective_assessment_of_barely_visible_image_artifacts_and_a_new_public_database_Subjective_analysis_of_image_quality



      • #4
        Originally posted by jpambrun View Post
        I study the effect of image compression on diagnostic imaging
        Something tells me this isn't designed for you or your needs. There's a reason it's clearly described as a way to make things cheaper, and a reason high-end display gear is so expensive.



        • #5
          Originally posted by jpambrun View Post
          From what I understand from [1], that evaluation was done looking at a 24-inch monitor from 45 cm.

          I study the effect of image compression on diagnostic imaging as part of my Ph.D., and I would be terribly annoyed if my display pipeline added another layer of artefacts... and so would radiologists. This seems like a terrible idea.


          [1] https://www.researchgate.net/publication/277725449_A_new_standard_method_of_subjective_assessment_of_barely_visible_image_artifacts_and_a_new_public_database_Subjective_analysis_of_image_quality

          I too feel like this is a questionable step to take. At least with today's consumer display products, you can calibrate and get closer to correct. This extra step will ensure poorer quality.

          That said, it might be okay if their intra coding can eventually converge to the correct image after two frames or so.



          • #6
            Originally posted by microcode View Post

            That said, it might be okay if their intra coding can eventually converge to the correct image after two frames or so.
            You probably meant inter-frame coding (i.e. taking advantage of temporal redundancy). That is a very interesting thought. Indeed, that would work for my use case.
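
            Just to sketch the idea (a toy illustration, not anything DSC 1.2 actually specifies; the coarse quantizer below is a made-up stand-in for a real lossy coder): send the first frame lossy, then, while the image stays unchanged, send a progressively finer residual each refresh until the display holds the exact source.

            ```python
            import numpy as np

            def quantize(values, step):
                """Coarse quantization: a crude stand-in for a lossy coder."""
                return (values // step) * step

            def converge_static_frame(source, step=16, max_frames=6):
                """Hypothetical refinement scheme (NOT part of DSC): frame 1 is sent
                lossy; while the image stays unchanged, each refresh sends a residual
                coded with a finer step until the display holds the exact source."""
                src = source.astype(np.int32)
                displayed = quantize(src, step)                  # frame 1: lossy
                frames = 1
                while not np.array_equal(displayed, src) and frames < max_frames:
                    step = max(step // 2, 1)
                    residual = quantize(src - displayed, step)   # refinement residual
                    displayed = displayed + residual
                    frames += 1
                return displayed.astype(source.dtype), frames

            source = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
            final, n = converge_static_frame(source)
            print(np.array_equal(final, source), n)              # exact image after a few frames
            ```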



            • #7
              Someone care to explain how anything less than 4:4:4 can be used for HDR? Unless I'm mistaken, HDR will concern itself only with color differences, which seems to require MORE color bits than before.



              • #8
                Originally posted by liam View Post
                Someone care to explain how anything less than 4:4:4 can be used for HDR? Unless I'm mistaken, HDR will concern itself only with color differences, which seems to require MORE color bits than before.

                I'm not an expert on HDR color imaging, but I have a feeling that in a YUV colorspace most (if not all) of the added dynamic range is in the Y channel. Furthermore, the X:X:X notation refers to spatial down-sampling of the chroma channels, not color quantization.
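
                For what it's worth, here is a rough sketch of what the notation means (using the plain BT.601 conversion as a stand-in; a real HDR pipeline would use different matrices, transfer functions and bit depths): in 4:2:0 only the chroma planes lose spatial resolution, while the Y plane keeps full resolution and whatever bit depth you give it.

                ```python
                import numpy as np

                def rgb_to_ycbcr(rgb):
                    """BT.601 full-range RGB -> YCbCr, just to show where the detail lives;
                    HDR pipelines use other matrices and transfer functions."""
                    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
                    y  =  0.299 * r + 0.587 * g + 0.114 * b
                    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
                    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
                    return y, cb, cr

                def subsample_420(y, cb, cr):
                    """4:2:0: luma (Y) keeps full resolution, chroma (Cb/Cr) is spatially
                    down-sampled by 2x in each direction. Bit depth is untouched."""
                    return y, cb[::2, ::2], cr[::2, ::2]

                rgb = np.random.randint(0, 256, (8, 8, 3)).astype(np.float64)
                y, cb, cr = rgb_to_ycbcr(rgb)
                y2, cb2, cr2 = subsample_420(y, cb, cr)
                print(y2.shape, cb2.shape, cr2.shape)  # (8, 8) (4, 4) (4, 4): only chroma loses resolution
                ```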

