
Thread: AMD Catalyst 7.12 Linux Driver -- The Baby's In Surgery

  1. #331
    Join Date
    Jun 2007
    Posts
    406

    Default

    Quote Originally Posted by bridgman View Post
    Don't worry, I'm not asking the question
So you're saying that we'll someday be able to make use of UDV?! Will that day be defined like:

    Code:
    date day = getSystemDate();

  2. #332
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,572

    Default

    Right now, I'm just saying that we understand the need.

    re: your second question, I guess it depends on when you run the program.

  3. #333
    Join Date
    Jun 2007
    Posts
    406

    Default

    Quote Originally Posted by bridgman View Post
    re: your second question, I guess it depends on when you run the program.
that was the idea... I take it to mean that we won't get any precise indication of when it will eventually be supported; we'll only find out once it's already done.

  4. #334
    Join Date
    Jan 2008
    Posts
    16

    Default

    @Givemesugarr
which is, in your opinion, more demanding:
    - a full bitmap reload or
    - a partial refresh of the old frame?!
    The answer is very simple:
    It is much, much more demanding for a processor to compute a picture from differences against previous pictures. Reloading a full bitmap is the least demanding way THINKABLE. It's not possible to be less demanding!! (If the picture moves.)

But I don't know all the differences between the algorithms exactly (I only know the 4 stages of h.264 with UDV).

The only thing I know is that people all over the internet say that divx/xvid works fine on a standard computer. But many experts say that a normal computer is not enough for high-bitrate h.264. I know cryptography and coding theory, but I'm no expert on divx and xvid, so I can't really answer all the questions.

I don't understand what you're saying about hashcodes. Normally this should be asymmetric coding. This means you can verify the hashcode but you can't create it. Did you ever hear about asymmetric coding?
    Mathematical formulas allow you to verify the code WITHOUT it being possible to construct it, EVEN IF YOU KNOW THE VERIFYING PROCEDURE!!

Also, the encryption doesn't make the processor load grow exponentially. Only the key has to be encrypted very strongly. But once this key is set up, the rest is encoded conventionally (linear in the length).


    But all this is not the important thing.

    Do you really think that DRM will be broken by UDV!!

DRM is already broken in most cases.

    You don't need UDV to break anything.

You only punish the honest people who want to play BOUGHT DVDs on their computer. Only THOSE people need UDV. The crackers don't need it! They play it as xvid! (And they have Penryns!!!)

  5. #335
    Join Date
    Dec 2007
    Location
    /dev/hell
    Posts
    297

    Default

    Quote Originally Posted by eigerhar View Post
    @Givemesugarr
The answer is very simple:
    It is much, much more demanding for a processor to compute a picture from differences against previous pictures. Reloading a full bitmap is the least demanding way THINKABLE. It's not possible to be less demanding!! (If the picture moves.)
it depends on whether you change the picture completely or just some pixels from one frame to the next.

    in general it's less demanding to use the differences.
    And it's much more space efficient!

remember that the encoding is needed to fit the media and to avoid saturating the bandwidth when streaming.
    :-)

  6. #336
    Join Date
    Jan 2008
    Posts
    16

    Default

in general it's less demanding to use the differences.
    And it's much more space efficient!
    I think you're mixing up two things. You're talking about going from one picture to the next. But xvid/divx don't produce every picture from scratch either.

    That's not what we're talking about. That's why I added "(if the picture moves)".

    Also, divx/xvid wait for some movement before producing the next bitmap.

    But if you have only a little movement, it is not possible to be less demanding than producing a new bitmap. A difference necessarily requires a calculation, and that is necessarily more demanding for the CPU than producing a bitmap.
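    To make the two positions concrete, here is a toy Python sketch of the two update strategies being argued over. This is purely illustrative (a frame is modeled as a flat list of 8-bit pixel values) and does not resemble how any real codec works:

    ```python
    # Toy model of the two strategies in this argument: a frame is a flat
    # list of 8-bit pixel values; neither function resembles a real codec.

    def apply_full_bitmap(new_frame):
        """Full reload: a straight copy, no per-pixel arithmetic."""
        return list(new_frame)

    def apply_difference(prev_frame, delta):
        """Delta update: one addition (mod 256) per pixel on top of the copy."""
        return [(p + d) % 256 for p, d in zip(prev_frame, delta)]

    prev = [10, 20, 250]
    delta = [0, 5, 10]    # mostly-zero deltas compress well; that's the trade-off
    print(apply_difference(prev, delta))   # [10, 25, 4]
    ```

    The full reload avoids the per-pixel addition, which is eigerhar's point; the delta list is mostly zeros and therefore compresses far better, which is the counterpoint from the posts above.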

  7. #337
    Join Date
    Jun 2007
    Posts
    406

    Default

    Quote Originally Posted by eigerhar View Post
    @Givemesugarr


The answer is very simple:
    It is much, much more demanding for a processor to compute a picture from differences against previous pictures. Reloading a full bitmap is the least demanding way THINKABLE. It's not possible to be less demanding!! (If the picture moves.)

    But I don't know all the differences between the algorithms exactly (I only know the 4 stages of h.264 with UDV).
for the hw it's less demanding to use the temporal differences, since:
    1 - your system bus needs to transfer less stuff
    2 - your video card ram is less stressed, since some portions are refreshed fewer times than they would otherwise be.
    3 - i don't know how the video registers work, but i think they will be less stressed if they're more stable. it's as if you load some tables into them and use them read-only for a while, rather than reloading them at every frame. but as i've said, i don't know how video memory registers work and i can't say that my theory about them is correct.
    the only part that will surely be more stressed is the gpu, but nowadays standard gpus are able to compute a lot of stuff and are quite good.
    this is true for hw decoding. for sw decoding things should work the same way, but there the system cpu has to do not only the video decoding but also:
    1 - run the idle and kernel processes
    2 - run the services
    3 - run the xorg interface and the programs that are in startup
    4 - run the session programs
    5 - run the video application that lets you view the video
    6 - run the codecs
    7 - use the codecs to extract the hw-decodable part and see if it can be decoded in hw mode
    8 - do all the decoding itself, thus filling the instruction queue.
    9 - pass the output back to the codec

    these steps also apply to sw divx/xvid, and it's true that sw decoding of h264 is more demanding than xvid, but think of another aspect:
    - h264 comes at resolutions > 640x480 (dvd format)
    - xvid/divx comes at resolutions < 640x480, and when it comes at larger resolutions the compression is less aggressive and the file size increases, so the system bus is stressed less.
    as i've said, to compare the 2 formats you need to take an h264-encoded video and transcode it to 2-pass xvid/divx at the same resolution and fps. you'll see a great difference in file size, and when you try playing it you won't see a great difference in cpu load.
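    The file-size side of that comparison is simple arithmetic: size follows directly from average bitrate and duration. A back-of-the-envelope sketch, where the bitrates are purely illustrative assumptions and not measurements of real encodes:

    ```python
    # Back-of-the-envelope file-size comparison at equal resolution/fps.
    # The bitrates below are assumed for illustration, not measured.

    def file_size_mb(bitrate_kbps: float, minutes: float) -> float:
        """Stream size in MB at a constant average bitrate."""
        return bitrate_kbps * 1000 * minutes * 60 / 8 / 1_000_000

    movie_minutes = 120
    h264_kbps = 1500   # assumed average bitrate for h264 at 1280x720
    xvid_kbps = 4000   # assumed bitrate for 2-pass xvid at the same res/fps

    print(f"h264: {file_size_mb(h264_kbps, movie_minutes):.0f} MB")   # 1350 MB
    print(f"xvid: {file_size_mb(xvid_kbps, movie_minutes):.0f} MB")   # 3600 MB
    ```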

    Quote Originally Posted by eigerhar View Post

The only thing I know is that people all over the internet say that divx/xvid works fine on a standard computer. But many experts say that a normal computer is not enough for high-bitrate h.264. I know cryptography and coding theory, but I'm no expert on divx and xvid, so I can't really answer all the questions.
well, as i told you, h264 vs xvid/divx at low resolution with sw decoding is more demanding, but it works on an average pc: a turion64 x2, 1gb of ram and 128 of igp video ram. both standards play at about the same speed and stress. i tried a high-res 1280x720 xvid and a 1280x720 h264, and the xvid is more memory-consuming than the h264 counterpart. and keep in mind that the h264 decoder isn't yet as finely tuned as the divx one.

    Quote Originally Posted by eigerhar View Post
I don't understand what you're saying about hashcodes. Normally this should be asymmetric coding. This means you can verify the hashcode but you can't create it. Did you ever hear about asymmetric coding?
    Mathematical formulas allow you to verify the code WITHOUT it being possible to construct it, EVEN IF YOU KNOW THE VERIFYING PROCEDURE!!
hmm... maybe you didn't understand the example:
    hashcodes are one-way and cannot be generated backward, i.e. starting from the hashcode and going back to the message from which it was generated. hashcodes are validated through the following mechanism: you know exactly how to create them and you have the same original message. you create the hashcode by applying the known method to the original message and compare the 2 hashcodes: the received one and the locally created one. drm has roughly the same mechanism. so if you know how it works, you're able to create the "hashcodes" for any content you like and validate it as genuine, even if it was duplicated.
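    That recompute-and-compare mechanism can be sketched with an unkeyed hash (SHA-256 here, chosen just for illustration). The sketch makes givemesugarr's point: if verification is a bare recompute-and-compare, anyone who knows the procedure can stamp any content as "genuine". A keyed or asymmetric scheme, where verifying does not let you create, is exactly what this sketch lacks:

    ```python
    import hashlib

    def make_tag(content: bytes) -> str:
        """The 'known method': an unkeyed hash of the content."""
        return hashlib.sha256(content).hexdigest()

    def validate(content: bytes, received_tag: str) -> bool:
        """Recompute locally and compare with the received tag."""
        return make_tag(content) == received_tag

    original = b"some licensed video payload"
    assert validate(original, make_tag(original))      # genuine content passes

    duplicate = bytes(original)                        # a bit-exact copy...
    assert validate(duplicate, make_tag(duplicate))    # ...validates just as well
    ```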

    Quote Originally Posted by eigerhar View Post
Also, the encryption doesn't make the processor load grow exponentially. Only the key has to be encrypted very strongly. But once this key is set up, the rest is encoded conventionally (linear in the length).
with drm, keys don't count, since drm doesn't encrypt the data but just validates some strings and tells whether you have the rights to use it. so the concept of encrypting keys isn't applicable to drm. there's no need to encrypt the key when all you need to do is validate the content. you could even leave the validation in plain text, as long as nobody knows how the validation process works.

    Quote Originally Posted by eigerhar View Post
But all this is not the important thing.

    Do you really think that DRM will be broken by UDV!!

    DRM is already broken in most cases.

    You don't need UDV to break anything.

    You only punish the honest people who want to play BOUGHT DVDs on their computer. Only THOSE people need UDV. The crackers don't need it! They play it as xvid! (And they have Penryns!!!)
well, on this part i can agree with you. but this is true at the sw level, not the hw level. why?! because at the hw level it's veryyyyyyyyy hard to get a board, unpack it, remove all the protective material, map the circuits and then test it. let's say that doing it with a new-generation video board would be stupid and would probably take forever.
    to break hw drm you could go 2 ways:
    - hw testing -> not applicable
    - driver reverse engineering -> applicable if there is some driver that supports it, and in that case you could be thrown in prison for breaking the law, since reverse engineering is usually prohibited.

I think you're mixing up two things. You're talking about going from one picture to the next. But xvid/divx don't produce every picture from scratch either.

    That's not what we're talking about. That's why I added "(if the picture moves)".

    Also, divx/xvid wait for some movement before producing the next bitmap.

    But if you have only a little movement, it is not possible to be less demanding than producing a new bitmap. A difference necessarily requires a calculation, and that is necessarily more demanding for the CPU than producing a bitmap.
well, h264 is always temporally/spatially dependent. with h264 you can't talk about stationary pictures anymore. the only independent frames are the I frames; P and B frames aren't, and losing them wouldn't be much of a failure. this is the main strength of the standard. xvid used only I and P frames; B frames are an additional frame type that helps reduce file size without reducing quality. and it's not true that xvid/divx waits for movement, just as it's not true for h264: the I frames are hard-coded at fixed positions. the main difference lies in the quantization of these frames and in how the quantization matrix is chosen.
for a video card it's more demanding to draw a full 1280x800 screen (my laptop's) than to refresh only a 360x360-pixel polygon. in the first case the video registers need to be written for the whole screen; in the second, only the ones corresponding to the refreshed region need to be written, while the others only need to be read. now, which is faster: the first case, where you write and read all the registers, or the second, where you write only some of them and read the rest?! and since the internal bus of the video card isn't infinite, the second solution is less demanding for the overall system.
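    The arithmetic behind that comparison, using the numbers from the post (real register traffic depends on the hardware; this is just the raw pixel-count ratio):

    ```python
    # Pixels touched per update, using the 1280x800 vs 360x360 figures above.
    full_writes = 1280 * 800      # every pixel rewritten for a full-screen redraw
    partial_writes = 360 * 360    # only the refreshed polygon rewritten

    print(full_writes, partial_writes)                            # 1024000 129600
    print(f"{partial_writes / full_writes:.1%} of a full redraw") # 12.7%
    ```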
now, i've found a simple guide on h264 for all kinds of readers: http://www.indigovision.com/whitepapers_h264video.php
    try giving it a read and see how it works. the standard seems quite difficult, but with the average hw of these days it's simple to implement. obviously the sw approach is harder, but for high resolutions like full-hd there should be no match with other standards, except uncompressed vob/mpeg2, which means going around with 100gb per movie.

  8. #338
    Join Date
    Oct 2007
    Location
    Roanoke, VA
    Posts
    228

    Default

    I think this discussion would be more appropriate on the Doom9 forums at http://forum.doom9.org/

    Can we return to a driver discussion rather than a codec discussion? A discussion of the relative merits of the various codecs and video specifications (and their technical details) is not very relevant to this thread.

  9. #339
    Join Date
    Sep 2007
    Posts
    128

    Default

    Thought I'd just like to add to givemesugarr's post...

The Xvid (and DivX) codec has support for B-frames as well, not just the I and P frame types. Depending on the profile you choose, B-frames may or may not be available to you though. Should they be (say you're using Unrestricted), you can (if it so fancies you) disable B-frame support in the encoder (the vfw interface for instance, if you do your stuff in vdubmod), and sometimes we do, as it messes up the tools we use for some odd reason or another -_-; The difference, however, is that the H.264 specification allows for more reference frames (ie. more flexibility) than earlier ones. Then we can get into the whole macroblock thing that's different, and the feature I love the most, the in-loop deblocking filter.

eigerhar: While it's true that it'll take less time to decode, say... only I frames (full image, reference frame, however you want to call it), you lose compressibility. If you want speed over compression, then there are other options available, even including lossless codecs (the ones we're discussing right now are lossy).
