The Ongoing Open-Source Work To Enable Webcam Support On Recent Intel Laptops


  • The Ongoing Open-Source Work To Enable Webcam Support On Recent Intel Laptops

    Phoronix: The Ongoing Open-Source Work To Enable Webcam Support On Recent Intel Laptops

    Webcam support on recent generations of Intel laptops has tended to be a mess due to the Intel IPU6 requiring an out-of-tree kernel driver and a proprietary user-space component. But fortunately, thanks to the work of Linaro and Red Hat on a "SoftISP" implementation within libcamera, it's becoming possible to leverage these recent MIPI-based webcams on an open-source software stack...


  • #2
    It's sad this is on the CPU side, but it looks like the only way.
    It would be good if Intel released all the firmware and algorithms to the libcamera team.

    • #3
      Originally posted by EliasOfWaffle View Post
      It's sad this is on the CPU side, but it looks like the only way.
      It would be good if Intel released all the firmware and algorithms to the libcamera team.
      If Intel hasn't done it by now, it's because there is some third-party IP they are using or something; Intel is normally pretty good about open-source support, even if it requires a firmware blob (I'm remembering the GMA500, for example).

      The last page of the slide set clearly states that future iterations will work towards GPU support via OpenGL, and later OpenCL and/or Vulkan; in fact, they have a list of features and note which will run on the GPU (most of them).

      • #4
        My company bought a new Intel laptop for me to use a couple of months ago. The maker offered an option of removing the built-in webcam and microphone, which I took. USB webcams/mics have been working so well that I see no reason to have an always-on camera and voice recorder.

        • #5
          Originally posted by EliasOfWaffle View Post
          It's sad this is on the CPU side, but it looks like the only way.
          It would be good if Intel released all the firmware and algorithms to the libcamera team.
          As I've said before, and as Intel's own docs show, the hardware component for processing the data from the camera is on the CPU. The Image Processing Unit is part of the CPU itself. It's a co-processor, just like an NPU on some systems, an iGPU, or a math or vector unit. It takes the raw data from the camera, performs its cleanup, then pushes that to the program requesting video streams. Having a dedicated coprocessor for automatic video preprocessing is efficient, and it's already becoming the standard way of managing video going forward. The problem isn't the idea, and it's not being done to save a penny on camera circuits. The problem is that it's being done in black boxes, excluding users from controlling that aspect of their computing experience, while any competitor so inclined isn't going to be stopped by having to reverse engineer the blobs.

          The effort to get this working is being moved to the GPU because GPUs are well documented, while the Intel IPU (and others' equivalents) is currently a black box. No one outside of Intel, and whoever they contracted with, has access to the documentation necessary to utilize the IPU coprocessor without reverse engineering that's likely not worth the time and effort, given that GPUs can perform the same task, if potentially not as efficiently.

          • #6
            Originally posted by EliasOfWaffle View Post
            It's sad this is on the CPU side, but it looks like the only way.
            It would be good if Intel released all the firmware and algorithms to the libcamera team.
            Let's compare this with the typical pipeline used by a UVC-compliant camera, both internal and external:
            1. The camera chipset reads the raw image data from the sensor (typically over CSI), in (typically) RGGB Bayer format
            2. It runs the Debayer algorithm
            3. optionally:
              • brightness/exposure control feedback
              • noise reduction
              • flicker reduction
              • ...
            4. Image is YUV converted and subsampled
            5. Image is encoded as H.264/H.265/MJPEG stream (uncompressed only works for very low resolution/low FPS)
            6. Image is transferred via USB
            7. Image is received by the host USB controller, DMAed to some memory space
            8. Compressed stream is decoded
            Using the host chipset to receive the CSI stream from the sensor gets rid of the stream encoding/decoding, i.e. less power spent and CPU cycles saved on the host side (though the UVC path may already use the GPU/video engine for decoding).

            Debayer and all the optional image processing are now on the host side, so this is probably less efficient when implemented on the CPU. But when implemented on the GPU with shaders, this can be as efficient as doing it in the camera chipset (ballpark figure). It also allows new and/or better algorithms.
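
            For reference on step 2, debayering is just interpolating full RGB values from the single-colour Bayer samples. Here's a minimal nearest-neighbour sketch in Python/NumPy, my own illustration rather than libcamera's SoftISP code (which uses higher-quality interpolation and, per the slides, is slated to move to GPU shaders):

                import numpy as np

                def debayer_rggb_nearest(raw):
                    """Nearest-neighbour debayer of an RGGB Bayer mosaic.

                    raw: 2-D array (H x W) of raw sensor values, with H and W even.
                    Returns an (H/2, W/2, 3) RGB image, one output pixel per 2x2 tile.
                    """
                    r  = raw[0::2, 0::2]                  # top-left sample of each 2x2 tile (red)
                    g1 = raw[0::2, 1::2]                  # top-right (green)
                    g2 = raw[1::2, 0::2]                  # bottom-left (green)
                    b  = raw[1::2, 1::2]                  # bottom-right (blue)
                    g = (g1.astype(np.uint32) + g2) // 2  # average the two green samples
                    return np.dstack([r, g, b]).astype(raw.dtype)

                # Synthetic 4x4 RGGB frame just to show the call
                frame = np.array([[ 10, 200,  12, 210],
                                  [190,  30, 180,  35],
                                  [ 11, 205,  13, 215],
                                  [195,  32, 185,  37]], dtype=np.uint16)
                print(debayer_rggb_nearest(frame))

            Real pipelines interpolate at full resolution with bilinear or edge-aware filters and add the auto-exposure, white balance and noise reduction from step 3, which is exactly the work the SoftISP has to take over on the host.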

            • #7
              Kudos to the devs on this one. Hans, you're a hero, as it seems Intel pulled an Apple, straying from perfectly standardized USB for webcams with no good reason. I was shocked to see this on a recent 12th gen Dell XPS.

              • #8
                IMO, this is totally a "penny pinching" move, and not a "more efficient" thing. I'm pretty sure laptop manufacturers just wanted to be able to utilize the same ultra cheap and nasty MIPI "front" cameras all of the $99 Android phones have. With my boring 720p USB UVC webcam and Firefox streaming a Google Meet, my CPU sits at about 1% core usage, with an effective core clock of ~550 MHz.

                • #9
                  Shame Intel didn't manage to sort out this whole IPU6 mess sooner; it was the deciding factor that made me get an AMD laptop instead of an Intel one (with an Arc dGPU) last year, since I consider the webcam part of "basic" functionality. Otherwise they'd have been tied, since my main goal with the new laptop was just to escape the pain of NVIDIA switchable graphics and their blasted proprietary drivers constantly holding me back from trying out stuff like Wayland or whatever.

                  • #10
                    Originally posted by AmericanLocomotive View Post
                    effective core clock of ~550 MHz.
                    HWiNFO has some confusing ways of measuring clock frequencies (and none of them are the correct way, IMO). "Effective clock" is cycles per wall-clock second, using the APERF MSR only. What that measures is real clock frequency times CPU utilization. Most, or maybe all, Intel CPUs from the last decade have a minimum clock frequency of 800 MHz. A 550 MHz effective clock could be 70% utilization at 800 MHz, or it could be 10% at 5.5 GHz. The latter would be extremely inefficient.

                    HWiFO's "instant" clock frequency reads the frequency multiplier at a single time during the execution of HWiFO's sampling code. This is mostly bogus unless the sampling loop happens to interrupt your real workload, which only reliably happens at 100% CPU utilization. So it's only useful for measuring thermal/power-limited clock frequency, and "effective clock" should give the same answer.

                    What Intel recommends, and what turbostat's Bzy_MHz column shows, is (APERF_n - APERF_n-1) / (MPERF_n - MPERF_n-1) * base_clock. That gives the real clock frequency when the CPU was awake, averaged over the time between samples.
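
                    A toy calculation with made-up numbers (not real MSR reads; the 2.4 GHz base clock and the 10%-busy-at-5.5-GHz scenario are assumptions) shows why the two readings diverge:

                        # Hypothetical APERF/MPERF deltas over a 1-second sample window
                        BASE_MHZ = 2400                 # assumed max non-turbo ("base") frequency
                        INTERVAL_S = 1.0                # wall-clock time between samples

                        # Scenario: core awake 10% of the interval, at 5.5 GHz while awake
                        awake_fraction = 0.10
                        mhz_while_awake = 5500

                        delta_aperf = mhz_while_awake * 1e6 * awake_fraction * INTERVAL_S  # actual cycles executed
                        delta_mperf = BASE_MHZ * 1e6 * awake_fraction * INTERVAL_S         # base-rate cycles counted only while in C0

                        effective_mhz = delta_aperf / 1e6 / INTERVAL_S         # HWiNFO-style "effective clock"
                        bzy_mhz = delta_aperf / delta_mperf * BASE_MHZ         # turbostat-style Bzy_MHz

                        print(f"effective clock: {effective_mhz:.0f} MHz")     # 550 MHz
                        print(f"Bzy_MHz:         {bzy_mhz:.0f} MHz")           # 5500 MHz

                    Both readings come from the same counters; Bzy_MHz just divides out the idle time, so it recovers the 5.5 GHz the core actually ran at while awake.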
