I'm well aware of what "effective" clocks are. I am pointing out that streaming UVC camera video uses precious little processor time.
Let's use my Steam Deck, since it's the most mobile-oriented device I have and it has good total-system power reporting:
- Google Meet instant meeting running with camera off: 7.8 W
- Google Meet instant meeting running with camera on: 8.3 W
So that's 500 mW extra to:
- Power up the USB camera and run all of its electronics
- Stream compressed video content from the camera to the Steamdeck
- Decompress the video and display it locally on the Steamdeck
- Re-encode the video stream and send it back to Google
Let's do another test to eliminate browser overhead and the whole streaming/re-encoding aspect:
- Steam Deck sitting with VLC running, but idle: 5.3 W
- Steam Deck viewing the UVC camera with VLC: 5.5 W
We're talking 200 mW here. It's a small amount of power: roughly a 15-minute loss of battery life across the nominal 8 hours the Steam Deck battery lasts browsing the web.
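A quick Python sanity check on that figure, assuming a 40 Wh pack and an 8-hour baseline runtime (both assumptions for the sake of the arithmetic, not measurements):

battery_wh = 40.0                              # assumed pack capacity
baseline_hours = 8.0                           # assumed runtime while web browsing

baseline_watts = battery_wh / baseline_hours   # ~5.0 W average draw
camera_watts = baseline_watts + 0.2            # +200 mW measured for the UVC camera
camera_hours = battery_wh / camera_watts       # ~7.7 h

print(f"runtime with camera: {camera_hours:.2f} h")
print(f"battery life lost:   {(baseline_hours - camera_hours) * 60:.0f} min")   # ~18 min, same ballpark as the ~15-minute figure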
I'm sure MIPI cameras shave a few mW by eliminating USB overhead and moving the camera processing onto the CPU, which is almost certainly on a more advanced semiconductor node than the camera electronics. That's important when you have a tiny phone battery, but it matters less and less when you have the 40+ Wh batteries that laptops do.
Using MIPI cameras in laptops is just a cost savings thing.
The Ongoing Open-Source Work To Enable Webcam Support On Recent Intel Laptops
-
Originally posted by AmericanLocomotive: "effective core clock of ~550 MHz"
HWiNFO's "instant" clock frequency reads the frequency multiplier at a single point in time during the execution of HWiNFO's sampling code. That's mostly bogus unless the sampling loop happens to interrupt your real workload, which only reliably happens at 100% CPU utilization. So it's only useful for measuring the thermal/power-limited clock frequency, and "effective clock" should give the same answer there anyway.
What Intel recommends, and what turbostat's Bzy_MHz column shows, is (APERF_n - APERF_(n-1)) / (MPERF_n - MPERF_(n-1)) * base_clock. That gives the real clock frequency while the CPU was awake, averaged over the time between samples.
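If anyone wants to check that without turbostat, here is a minimal sketch in Python that samples the IA32_MPERF (0xE7) and IA32_APERF (0xE8) MSRs through the Linux msr driver. It needs root and "modprobe msr", and the base-clock constant is an assumption you'd replace with your own CPU's value:

import struct, time

IA32_MPERF = 0xE7   # counts at the base (TSC) frequency while the core is in C0
IA32_APERF = 0xE8   # counts at the actual delivered frequency while the core is in C0
BASE_MHZ = 2400     # assumption: substitute your CPU's base clock

def read_msr(cpu, reg):
    # Requires root and the msr kernel module (modprobe msr).
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

def busy_mhz(cpu=0, interval=1.0):
    a0, m0 = read_msr(cpu, IA32_APERF), read_msr(cpu, IA32_MPERF)
    time.sleep(interval)
    a1, m1 = read_msr(cpu, IA32_APERF), read_msr(cpu, IA32_MPERF)
    # Average clock over the interval, counting only the time the core was awake.
    return (a1 - a0) / (m1 - m0) * BASE_MHZ

print(f"CPU0 busy clock: {busy_mhz():.0f} MHz")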
-
Shame Intel didn't manage to sort out this whole IPU6 mess sooner. It was the deciding factor that made me go get an AMD laptop instead of an Intel one (with an Arc dGPU) last year, since I consider the webcam part of "basic" functionality. Otherwise the two would have been tied, because my main goal with the new laptop was just to escape the pain of NVIDIA switchable graphics and their blasted proprietary drivers constantly holding me back from trying out stuff like Wayland or whatever.
- Likes 1
-
IMO, this is totally a "penny pinching" move, and not a "more efficient" thing. I'm pretty sure laptop manufacturers just wanted to be able to utilize the same ultra cheap and nasty MIPI "front" cameras all of the $99 Android phones have. With my boring 720p USB UVC webcam and Firefox streaming a Google Meet, my CPU sits at about 1% core usage, with an effective core clock of ~550 MHz.
- Likes 3
-
Kudos to the devs on this one. Hans, you're a hero. It seems Intel pulled an Apple, straying from perfectly standardized USB webcams for no good reason. I was shocked to see this on a recent 12th-gen Dell XPS.
- Likes 2
-
Originally posted by EliasOfWaffle: "It's sad that this is on the CPU side, but it looks like the only way. It would be good if Intel handed all the firmware and algorithms over to the libcamera team."
- The camera chipset reads the raw image data from the sensor (typically over CSI), usually in RGGB Bayer format
- It runs the debayer algorithm
- Optionally:
  - brightness/exposure control feedback
  - noise reduction
  - flicker reduction
  - ...
- The image is converted to YUV and chroma-subsampled
- The image is encoded as an H.264/H.265/MJPEG stream (uncompressed only works at very low resolutions/frame rates)
- The stream is transferred over USB
- The stream is received by the host USB controller and DMA'd into memory
- The compressed stream is decoded on the host
With a MIPI camera, debayering and all the optional image processing are now on the host side, so this is probably less efficient when implemented on the CPU. But when implemented on the GPU with shaders, it can be roughly as efficient (ballpark) as doing it in the camera chipset. It also allows new and/or better algorithms.
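To make the debayer and YUV steps concrete, here is a minimal Python/NumPy sketch of a naive 2x2-binning RGGB debayer followed by a BT.601 RGB-to-YUV conversion. This is just an illustration of the arithmetic involved, not what libcamera or the IPU6 stack actually does; a GPU shader would run essentially the same math per pixel.

import numpy as np

def debayer_rggb_2x2(raw):
    # Naive 2x2-binning debayer for an RGGB Bayer mosaic.
    # raw: (H, W) array of sensor values, H and W even.
    # Returns an (H/2, W/2, 3) RGB image at half resolution.
    r  = raw[0::2, 0::2]            # top-left of each 2x2 cell
    g1 = raw[0::2, 1::2]            # top-right
    g2 = raw[1::2, 0::2]            # bottom-left
    b  = raw[1::2, 1::2]            # bottom-right
    g  = (g1 + g2) / 2.0            # average the two green samples
    return np.stack([r, g, b], axis=-1)

def rgb_to_yuv_bt601(rgb):
    # BT.601 RGB -> YUV; a real pipeline would also subsample U/V (e.g. 4:2:0).
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    return rgb @ m.T

# Tiny synthetic example: a 4x4 Bayer frame of random 10-bit sensor values.
raw = np.random.randint(0, 1024, (4, 4)).astype(np.float64)
yuv = rgb_to_yuv_bt601(debayer_rggb_2x2(raw))
print(yuv.shape)                     # (2, 2, 3)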
- Likes 2
-
Originally posted by EliasOfWaffle: "It's sad that this is on the CPU side, but it looks like the only way. It would be good if Intel handed all the firmware and algorithms over to the libcamera team."
The effort to get this working is being moved to the GPU because GPUs are well documented, while the Intel IPU (and other vendors' equivalents) is currently a black box. No one outside of Intel, and whoever they contracted with, has access to the documentation necessary to use the IPU coprocessor without reverse engineering, which is likely not worth the time and effort given that GPUs can perform the same task, if potentially not as efficiently.
- Likes 2
-
My company bought a new Intel laptop for me to use a couple of months ago. The maker offered the option of removing the built-in webcam and microphone, which I took. USB webcams/mics have been working so well that I see no reason to have an always-on camera and voice recorder built in.
- Likes 3
-
Originally posted by EliasOfWaffle: "It's sad that this is on the CPU side, but it looks like the only way. It would be good if Intel handed all the firmware and algorithms over to the libcamera team."
The last page of the slide set clearly states that future iterations will work towards GPU support via OpenGL, and later OpenCL and/or Vulkan. In fact, they have a list of features and indicate which of them will run on the GPU (most of them).
- Likes 1
-
It's sad that this is on the CPU side, but it looks like the only way.
It would be good if Intel handed all the firmware and algorithms over to the libcamera team.