The Ongoing Open-Source Work To Enable Webcam Support On Recent Intel Laptops


  • AmericanLocomotive
    replied
    I'm well aware of what "effective" clocks are. I am pointing out that streaming UVC camera video uses precious little processor time.

    Let's use my Steam Deck, since it's the most mobile-oriented device I have and has good total-system power reporting:
    - Google Meet instant meeting running with camera off: 7.8 W
    - Google Meet instant meeting running with camera on: 8.3 W

    So that's 500 mW extra to:
    - Power up the USB camera and run all of its electronics
    - Stream compressed video content from the camera to the Steam Deck
    - Decompress the video and display it locally on the Steam Deck
    - Re-encode the video stream and send it back to Google

    Let's do another test to eliminate browser overhead and the whole streaming/re-encoding aspect:
    - Steam Deck sitting with VLC running, but idle: 5.3 W
    - Steam Deck viewing the UVC camera with VLC: 5.5 W

    We're talking 200 mW here. It's a small amount of power: about a 15-minute total loss of battery life across the nominal 8 hours the Steam Deck's battery lasts browsing the web.
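    As a rough back-of-the-envelope check (a sketch assuming the Steam Deck's roughly 40 Wh pack and the 8-hour browsing figure above, not a new measurement):

    # Back-of-the-envelope: what an extra 200 mW costs out of a nominal 8 h runtime,
    # assuming a ~40 Wh battery (so 8 h of browsing implies ~5 W average draw).
    battery_wh = 40.0
    browse_w = battery_wh / 8.0      # ~5 W baseline while browsing
    with_cam_w = browse_w + 0.2      # add the measured ~200 mW camera delta
    loss_min = (8.0 - battery_wh / with_cam_w) * 60
    print(f"extra 200 mW costs about {loss_min:.0f} minutes of an 8 h runtime")  # ~18 min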

    I'm sure MIPI cameras shave a few mW by eliminating USB overhead and doing the camera processing on the CPU, which is almost certainly on a more advanced semiconductor node than the camera electronics. That matters when you have a tiny phone battery, but it matters less and less with the 40+ Wh batteries that laptops have.

    Using MIPI cameras in laptops is just a cost savings thing.



  • yump
    replied
    Originally posted by AmericanLocomotive View Post
    effective core clock of ~550 MHz.
    HWiNFO has some confusing ways of measuring clock frequencies (and none of them are the correct way, IMO). "Effective clock" is cycles per wall-clock second, using the APERF MSR only. What that measures is real clock frequency * CPU utilization. Most, maybe all, Intel CPUs from the last decade have a minimum clock frequency of 800 MHz, so a 550 MHz effective clock could be 70% utilization at 800 MHz, or it could be 10% at 5.5 GHz. The latter would be extremely inefficient.

    HWiNFO's "instant" clock frequency reads the frequency multiplier at a single point during the execution of HWiNFO's sampling code. This is mostly bogus unless the sampling loop happens to interrupt your real workload, which only reliably happens at 100% CPU utilization. So it's only useful for measuring thermal/power-limited clock frequency, and "effective clock" should give the same answer.

    What Intel recommends, and what turbostat's Bzy_MHz column shows, is (APERF_n - APERF_{n-1}) / (MPERF_n - MPERF_{n-1}) * base_clock. That gives the real clock frequency while the CPU was awake, averaged over the time between samples.
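    For anyone who wants to reproduce that number by hand, here is a minimal Python sketch of the same calculation (my own illustration, not turbostat's code; assumes root, the msr kernel module loaded, and a placeholder 2400 MHz base clock):

    # Sketch of the turbostat-style Bzy_MHz calculation from raw MSR reads.
    import struct
    import time

    IA32_MPERF = 0xE7   # counts at the base frequency while the core is in C0
    IA32_APERF = 0xE8   # counts at the actual frequency while the core is in C0
    BASE_MHZ = 2400     # placeholder; use your CPU's real base frequency

    def read_msr(cpu: int, reg: int) -> int:
        with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
            f.seek(reg)
            return struct.unpack("<Q", f.read(8))[0]

    a0, m0 = read_msr(0, IA32_APERF), read_msr(0, IA32_MPERF)
    time.sleep(1.0)
    a1, m1 = read_msr(0, IA32_APERF), read_msr(0, IA32_MPERF)

    # Average frequency while the core was awake (C0), per the formula above.
    bzy_mhz = (a1 - a0) / (m1 - m0) * BASE_MHZ
    print(f"Bzy_MHz ~ {bzy_mhz:.0f}")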



  • X_m7
    replied
    Shame Intel didn't manage to sort out this whole IPU6 mess sooner; it was the deciding factor that made me get an AMD laptop instead of an Intel one (with an Arc dGPU) last year, since I consider the webcam part of "basic" functionality. Otherwise the two would have been tied, since my main goal with the new laptop was just to escape the pain of NVIDIA switchable graphics and their blasted proprietary drivers constantly holding me back from trying out stuff like Wayland or whatever.



  • AmericanLocomotive
    replied
    IMO, this is totally a "penny-pinching" move, not a "more efficient" thing. I'm pretty sure laptop manufacturers just wanted to be able to use the same ultra-cheap-and-nasty MIPI "front" cameras all of the $99 Android phones have. With my boring 720p USB UVC webcam and Firefox streaming a Google Meet, my CPU sits at about 1% core usage, with an effective core clock of ~550 MHz.



  • boeroboy
    replied
    Kudos to the devs on this one. Hans, you're a hero. It seems Intel pulled an Apple, straying from perfectly standardized USB for webcams with no good reason. I was shocked to see this on a recent 12th-gen Dell XPS.



  • StefanBruens
    replied
    Originally posted by EliasOfWaffle View Post
    It's sad this is on the CPU side, but it looks like the only way.
    It would be good if Intel released all the firmware and algorithms to the libcamera team.
    Let's compare this with the typical pipeline used by a UVC-compliant camera, whether internal or external:
    1. The camera chipset reads the raw image data from the sensor (typically over CSI), in (typically) RGGB Bayer format
    2. It runs a debayer algorithm
    3. optionally:
      • brightness/exposure control feedback
      • noise reduction
      • flicker reduction
      • ...
    4. Image is YUV converted and subsampled
    5. Image is encoded as H.264/H.265/MJPEG stream (uncompressed only works for very low resolution/low FPS)
    6. Image is transferred via USB
    7. Image is received by the host USB controller, DMAed to some memory space
    8. Compressed stream is decoded

    Using the host chipset to receive the CSI stream from the sensor gets rid of the stream encoding/decoding, i.e. less power spent and CPU cycles saved on the host side (though it may use the GPU/video engine for decoding).

    Debayering and all the optional image processing are now on the host side, so this is probably less efficient when implemented on the CPU. But when implemented on the GPU with shaders, it can be about as efficient as doing it in the camera chipset (ballpark figure). It also allows new and/or better algorithms.
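    To make the debayer step concrete, here is a toy NumPy sketch of step 2 for an RGGB mosaic (nearest-neighbour 2x2 binning of my own; nothing like a real ISP or the libcamera pipeline):

    # Toy RGGB debayer: collapse each 2x2 Bayer quad into one RGB pixel.
    # Real pipelines interpolate to full resolution and apply the optional steps (3.) too.
    import numpy as np

    def debayer_rggb(raw):
        """raw: 2-D array holding an RGGB mosaic; returns an (H/2, W/2, 3) RGB image."""
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b  = raw[1::2, 1::2]
        g = ((g1.astype(np.uint32) + g2) // 2).astype(raw.dtype)  # average the two greens
        return np.dstack([r, g, b])

    frame = np.random.randint(0, 1024, (2160, 3840), dtype=np.uint16)  # fake 10-bit sensor frame
    print(debayer_rggb(frame).shape)   # -> (1080, 1920, 3)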



  • stormcrow
    replied
    Originally posted by EliasOfWaffle View Post
    It's sad this is on the CPU side, but it looks like the only way.
    It would be good if Intel released all the firmware and algorithms to the libcamera team.
    As I've said before, and as Intel's own docs show, the hardware component for processing the data from the camera is on the CPU. The Image Processing Unit is part of the CPU itself; it's a co-processor, just like an NPU on some systems, an iGPU, or a math or vector unit. It takes the raw data from the camera, performs its clean-up, then pushes the result to the program requesting the video stream. Having a dedicated co-processor for automatic video preprocessing is efficient, and it is already becoming the standard way of handling video going forward. The problem isn't the idea, and it's not being done to save a penny on camera circuits. The problem is that it's being done in black boxes, excluding users from controlling that aspect of their computing experience, while any competitor so inclined isn't going to be stopped by having to reverse-engineer the blobs.

    The effort to get this working is being moved to the GPU because GPUs are well documented, while the Intel IPU (and others' equivalents) is currently a black box. No one outside of Intel, and whoever they contracted with, has access to the documentation needed to use the IPU co-processor without reverse engineering that's likely not worth the time and effort, given that GPUs can perform the same task, if perhaps not as efficiently.



  • andyprough
    replied
    My company bought a new Intel laptop for me to use a couple of months ago. The maker offered an option to remove the built-in webcam and microphone, which I took. USB webcams and mics have been working so well that I see no reason to have an always-on camera and voice recorder.



  • panikal
    replied
    Originally posted by EliasOfWaffle View Post
    It's sad this is on the CPU side, but it looks like the only way.
    It would be good if Intel released all the firmware and algorithms to the libcamera team.
    If Intel hasn't done it by now, it's because there's some third-party IP they're using or something; Intel is normally pretty good about open-source support, even if it requires a firmware blob (I'm remembering the GMA500, for example).

    The last page of the slide set clearly states that future iterations will work towards GPU support via OpenGL, later OpenCL and/or Vulkan; in fact, it has a list of features and notes which of them will run on the GPU (most of them).



  • EliasOfWaffle
    replied
    It's sad this is on the CPU side, but it looks like the only way.
    It would be good if Intel released all the firmware and algorithms to the libcamera team.

