Support for kernel 2.6.29?

  • #21
    a) I don't use compiz - it is a broken POS, and always has been.
    b) I don't use Lenny. I don't use Debian, and I never will.
    c) I don't use stone-old software, so no X server 1.4.x but something more recent: 1.6.1.
    d) Even with earlier xorg-server versions I never saw that mysterious 'tearing'.
    e) I use tvtime, which does not work without Xv - and it works well.

    Comment


    • #22
      Video tearing is weird; some people see it easily and it really bugs them, while other people (like most developers) don't see it at all. I was checking out the open drivers that shipped with Jaunty while watching Big Buck Bunny - I thought it was really smooth, but one of the guys from our multimedia team came over and started pointing out tears every few seconds. I never saw a single one.

      Comment


      • #23
        My eyes are trained to see those errors; I saw them with VDPAU plus compositing enabled right away, too. But compared to fglrx, the open-source ATI driver is smooth. Nvidia is definitely the best driver for video, but the ATI open-source driver is not bad.

        Comment


        • #24
          Hm, I am not too distortion-resistant - I can't stand 75 Hz on a CRT, for example. But video looks good on my desktop...

          Comment


          • #25
            Originally posted by Kano View Post
            Well, 2.6.29 support can be patched - with a small patch when your distro provides some extra files, or a very large one otherwise. But 2.6.30 seems to be really problematic. When you get it to compile, it uses two symbols which are no longer in the kernel; one could possibly be patched, but the other is in the binary part, so no go. I do not understand why, when ATI only provides drivers for the latest cards, they are not able to try a new kernel. Ubuntu even has a collection of every mainline kernel incl. rc, so they would not even need to compile it on their own. Nvidia somehow manages for their current cards; for the others I am still waiting for official 2.6.30 support, but 2.6.29 is there for every card.
            I think you are talking about "pci_enable_msi".
            I just made a small placeholder function like this:
            #undef pci_enable_msi
            int pci_enable_msi(struct pci_dev *pdev)
            {
                    int pci_out;
                    pci_out = pci_enable_msi_block(pdev, 1);
                    return pci_out;
            }
            and added it somewhere in the fglrx module.
            I don't guarantee that it will actually work, because the last time I checked this I was on 2.6.30-rc2, and I have since deleted my patch.

            Comment


            • #26
              Interesting approach - can you create a better 2.6.29 patch too, without new includes?

              Comment


              • #27
                Originally posted by sobkas View Post
                I think you are talking about "pci_enable_msi".
                I just made a small placeholder function like this:
                #undef pci_enable_msi
                int pci_enable_msi(struct pci_dev *pdev)
                {
                        int pci_out;
                        pci_out = pci_enable_msi_block(pdev, 1);
                        return pci_out;
                }
                and added it somewhere in the fglrx module.
                I don't guarantee that it will actually work, because the last time I checked this I was on 2.6.30-rc2, and I have since deleted my patch.
                Hmm, I don't really see why the '#define pci_enable_msi(pdev) pci_enable_msi_block(pdev, 1)' from the kernel sources should work any differently from the code you gave. That is, as far as I can see, your code is the same as

                #undef pci_enable_msi
                int pci_enable_msi(struct pci_dev *pdev)
                {
                        return pci_enable_msi_block(pdev, 1);
                }

                which should be the same as the macro, no?
                P.S. The macro was checked against linux-2.6.30-rc5.
                Edit: By the way, make sure you have CONFIG_PCI_MSI enabled in the kernel or none of this will work. IIRC Catalyst warned about it at one point.
                Last edited by nanonyme; 05-09-2009, 03:18 PM.

                Comment


                • #28
                  Originally posted by nanonyme View Post
                  Hmm, I don't really see why the '#define pci_enable_msi(pdev) pci_enable_msi_block(pdev, 1)' from the kernel sources should work any differently from the code you gave. That is, as far as I can see, your code is the same as

                  #undef pci_enable_msi
                  int pci_enable_msi(struct pci_dev *pdev)
                  {
                          return pci_enable_msi_block(pdev, 1);
                  }

                  which should be the same as the macro, no?
                  P.S. The macro was checked against linux-2.6.30-rc5.
                  Edit: By the way, make sure you have CONFIG_PCI_MSI enabled in the kernel or none of this will work. IIRC Catalyst warned about it at one point.
                  A macro is just a macro: it's an instruction for the preprocessor to replace one piece of text with another, and the preprocessor only works on source code. It was never meant to touch binaries.

                  With fglrx the main problem is that pci_enable_msi is used by the binary-only part of the module:
                  nm libfglrx_ip.a.GCC4 | grep pci_enable_msi
                  U pci_enable_msi
                  There must be a real function called pci_enable_msi or the module won't link.

                  In short, the macro is provided to keep the API backward compatible while changing the ABI.
                  Essentially, the preprocessor just replaces the text "pci_enable_msi(pdev)" with "pci_enable_msi_block(pdev, 1)".

                  Comment


                  • #29
                    Originally posted by sobkas View Post
                    A macro is just a macro: it's an instruction for the preprocessor to replace one piece of text with another, and the preprocessor only works on source code. It was never meant to touch binaries.

                    With fglrx the main problem is that pci_enable_msi is used by the binary-only part of the module:
                    nm libfglrx_ip.a.GCC4 | grep pci_enable_msi
                    U pci_enable_msi
                    There must be a real function called pci_enable_msi or the module won't link.

                    In short, the macro is provided to keep the API backward compatible while changing the ABI.
                    Essentially, the preprocessor just replaces the text "pci_enable_msi(pdev)" with "pci_enable_msi_block(pdev, 1)".
                    Point taken. You still don't need the integer, though, right?

                    Comment


                    • #30
                      Originally posted by nanonyme View Post
                      Point taken. You still don't need the integer, though, right?
                      What integer?
                      You mean "int pci_out;"?
                      Not really, I just like this way of writing functions (debugging and all).

                      Comment
