Intel's IPU6 Webcam Linux Driver Still A Mess, But Some Patches To Help


  • Intel's IPU6 Webcam Linux Driver Still A Mess, But Some Patches To Help

    Phoronix: Intel's IPU6 Webcam Linux Driver Still A Mess, But Some Patches To Help

    While for years Intel has been very well regarded -- and rightfully so -- for their open-source Linux hardware support, occasionally there are exceptions. One such exception currently is Intel's IPU6 drivers for their MIPI cameras found on many newer Alder Lake laptops and presumably upcoming Raptor Lake laptops too. The IPU6 drivers remain outside of the Linux kernel and will likely stay that way for some time...

    https://www.phoronix.com/news/Intel-...I-Mess-Patches

  • #2
    Can Dell, HP, and Lenovo do something about this?
    Dell has its XPS series that runs Linux, and HP has its EliteBook series that runs Linux.

    Intel is generally great at open source, how come they have this webcam driver mess on Linux?

    • #3
      Originally posted by uid313 View Post
      Intel is generally great at open source, how come they have this webcam driver mess on Linux?
      They moved auto-focus, debayering (including the AI enhancements), auto-exposure, and similar features from the firmware to userspace, and nobody expected this. That's why everything sucks. And I think it will suck forever, because of lawyers.
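
      Auto-exposure is a good example of what "moved to userspace" means here: instead of the camera firmware adjusting exposure on its own, a host-side loop has to read frame brightness statistics and push a new exposure value every frame. Below is a minimal, hypothetical sketch; the function name, gain, and limits are illustrative and not Intel's actual algorithm:

```python
def next_exposure(exposure_us, frame, target=0.45, gain=0.5,
                  min_us=100, max_us=33000):
    """One step of a proportional auto-exposure loop.

    frame: iterable of pixel luma values normalised to 0.0-1.0.
    Nudges the exposure time toward the target mean brightness
    and clamps it to the sensor's supported range.
    """
    pixels = list(frame)
    mean = sum(pixels) / len(pixels)
    # Proportional controller: scale exposure by the brightness error.
    new = exposure_us * (1.0 + gain * (target - mean) / target)
    return max(min_us, min(max_us, int(new)))
```

      A real userspace stack would read the mean from the ISP's statistics buffers and write the result back to the sensor via V4L2 controls, but the control loop that used to live in firmware looks roughly like this.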

      • #4
        Originally posted by patrakov View Post

        They moved auto-focus, debayering (including the AI enhancements), auto-exposure, and similar features from the firmware to userspace, and nobody expected this. That's why everything sucks. And I think it will suck forever, because of lawyers.
        In this case the law is a tool to enforce power - the power to deny knowledge of the IPU secret sauce to their competitors. I'm not saying I agree with the decision (and I shouldn't even have to state that, but some people are going to make assumptions). I'm merely saying I understand the decision even though it's one doomed to fail. Their competitors already know how Intel is doing what they're doing with the IPUs.

        It's not because of lawyers. It's that lawyers are the front line troops to enforce a bad decision by the executives. There is a big difference between the two statements. You're blaming the wrong people.

        Edit to add: It's also a dumb decision by Intel's management. It means that people who have the expertise to improve or customize the userspace image processing algorithms can't as easily do so - which defeats the entire point of a hardware Image Processing Unit. It's supposed to allow new capabilities on the fly rather than in post-processing, not deny them.
        Last edited by stormcrow; 27 November 2022, 09:54 AM.

        • #5
          But it makes sense after all: you have this fat main CPU and GPU with a lot of compute power to spare during video conferencing.
          Why waste chip space on the camera controller die if you can just move the compute from the camera die to the main CPU / GPU?

          Furthermore, you can implement your debayering algorithm in a known x86 environment with all the tools and debugging, instead of having it run inside embedded firmware on the camera.
          Furthermore, software might want to use the raw un-debayered image from the camera down the road to do some AI magic with better light sensitivity and such.
          From the perspective of a platform engineer this all makes sense:
          - lower cost in hardware
          - less hassle developing software / firmware
          - more freedom to implement more elaborate techniques in the future (not for the existing platform, but probably for a new one with this system already in place)

          This might even turn out to be positive for Linux, as there is a chance that these cameras might have better images on Linux than on Windows, since we are free to implement or update whatever debayering algorithm we want and improve it in terms of performance, quality, and power consumption in the future.
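
          To make the debayering point concrete, here is a minimal, hypothetical sketch of the simplest possible demosaic (nearest-neighbour reconstruction over an RGGB mosaic). Real pipelines use far more elaborate interpolation, which is exactly the flexibility being argued for:

```python
def demosaic_nearest(raw):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic.

    raw: a list of rows (even height and width) of sensor samples;
    even rows hold R,G samples and odd rows hold G,B samples.
    Returns a same-size image of (r, g, b) tuples, filling each
    2x2 cell from its four samples.
    """
    h, w = len(raw), len(raw[0])
    out = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) // 2  # average the two greens
            b = raw[y + 1][x + 1]
            px = (r, g, b)
            out[y][x] = out[y][x + 1] = out[y + 1][x] = out[y + 1][x + 1] = px
    return out
```

          Because this runs as ordinary host code, swapping in a better interpolation is just a software update, which is the freedom described above.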

          • #6
            Originally posted by Spacefish View Post
            But it makes sense after all: you have this fat main CPU and GPU with a lot of compute power to spare during video conferencing.
            Why waste chip space on the camera controller die if you can just move the compute from the camera die to the main CPU / GPU?

            Furthermore, you can implement your debayering algorithm in a known x86 environment with all the tools and debugging, instead of having it run inside embedded firmware on the camera.
            Furthermore, software might want to use the raw un-debayered image from the camera down the road to do some AI magic with better light sensitivity and such.
            From the perspective of a platform engineer this all makes sense:
            - lower cost in hardware
            - less hassle developing software / firmware
            - more freedom to implement more elaborate techniques in the future (not for the existing platform, but probably for a new one with this system already in place)

            This might even turn out to be positive for Linux, as there is a chance that these cameras might have better images on Linux than on Windows, since we are free to implement or update whatever debayering algorithm we want and improve it in terms of performance, quality, and power consumption in the future.
            Exactly. The issue here, I believe, comes from the kernel patches required so far. But once the kernel patches are mainlined, having everything in userspace would mean it is trivially easy for distros to ship this and for users to swap these drivers for open-source ones if they so desire.

            • #7
              Originally posted by Spacefish View Post
              But it makes sense after all: you have this fat main CPU and GPU with a lot of compute power to spare during video conferencing.
              Why waste chip space on the camera controller die if you can just move the compute from the camera die to the main CPU / GPU?

              Furthermore, you can implement your debayering algorithm in a known x86 environment with all the tools and debugging, instead of having it run inside embedded firmware on the camera.
              Furthermore, software might want to use the raw un-debayered image from the camera down the road to do some AI magic with better light sensitivity and such.
              From the perspective of a platform engineer this all makes sense:
              - lower cost in hardware
              - less hassle developing software / firmware
              - more freedom to implement more elaborate techniques in the future (not for the existing platform, but probably for a new one with this system already in place)

              This might even turn out to be positive for Linux, as there is a chance that these cameras might have better images on Linux than on Windows, since we are free to implement or update whatever debayering algorithm we want and improve it in terms of performance, quality, and power consumption in the future.
              You have mentioned only the good things from a hardware manufacturer's point of view, but here are some points as a user:
              • you cannot change OS and/or computer architecture if the manufacturer has not provided their blobs for that particular OS and architecture
              • if that blob contains bugs, you'll have an unreliable system instead of an unreliable device
              • you'll never be sure about the existence of security issues.

              • #8
                Originally posted by phoronix View Post
                The patches make it so that the mainline int3472 driver code will work with the sensor drivers bundled with the out-of-tree IPU6 driver code. The goal is to simply make it easier for Intel's out-of-tree driver to work with standard mainline distribution kernels.
                Interesting. I remember the discussion around changes that broke out-of-tree ZFS, back then the sentiment was quite different:
                Out-of-tree modules are basically treated like they don't exist.
                https://www.phoronix.com/news/Linus-...o-To-ZFS-Linux

                But it looks like this is preparation for eventually merging new drivers or something, so it might be different.

                • #9
                  Originally posted by gosh000 View Post

                  You have mentioned only the good things from a hardware manufacturer's point of view, but here are some points as a user:
                  • you cannot change OS and/or computer architecture if the manufacturer has not provided their blobs for that particular OS and architecture
                  • if that blob contains bugs, you'll have an unreliable system instead of an unreliable device
                  • you'll never be sure about the existence of security issues.
                  All points that were already addressed. Let me also point out that for the typical user, none of your points really matter. Only a few people are skilled enough (and have the time and patience) to write the drivers, and even fewer are skilled enough to audit them for security issues or fix any discovered bugs, regardless of openness. Merely having a nominally open driver doesn't create a silver bullet so that suddenly anyone and everyone can magically use it. It just lowers the bar slightly.

                  Also: the vast majority of Intel's desktop/laptop users are on Windows anyway, so they couldn't give a crap about a Linux driver. An appeal from "the users" is a pretty weak argument. There are plenty of technical/social arguments to be made, but the "user point of view" is by far the weakest. The most salient point is that skilled hobbyists and professionals who care to do so could help Intel develop their technology further if Intel opened the entire stack. Intel is a hardware company, less so a software company. Having a world-class imaging stack on top of their IPUs is a feather in their cap. One of Apple's talking points with their M-series MacBook line is that the imaging sensor has less noise than traditional laptop sensors due to *vague hand waving* AI noise reduction. Intel is missing a big score here by failing to essentially crowdsource the expertise on their own IPU stack.

                  • #10
                    Originally posted by gosh000 View Post

                    You have mentioned only the good things from a hardware manufacturer's point of view, but here are some points as a user:
                    • you cannot change OS and/or computer architecture if the manufacturer has not provided their blobs for that particular OS and architecture
                    • if that blob contains bugs, you'll have an unreliable system instead of an unreliable device
                    • you'll never be sure about the existence of security issues.
                    Compared to hard-coding the algorithm in hardware, where you cannot upgrade it regardless of security issues, or in firmware, which might affect the entire system if a security CVE is discovered, a userspace driver is so much better.

                    Of course, if they can open source it, it's even better.
