AMD Packs In More AMDGPU Features For Linux 4.15
Last edited by shmerl; 09 October 2017, 05:07 PM.
Originally posted by shmerl:
I understood that something is blocking its usage on the hardware / firmware side of things (I've read something about waiting for firmware to enable something and so on).

There have sometimes been licensing issues in the past (e.g. MS pays license fees per copy of Windows and complies with license requirements by being closed source), but I haven't heard anything like that recently.
Originally posted by shmerl:
I mean the part that's in the hardware + firmware. That should be OS agnostic. The software part of that hybrid approach should be OS specific, sure. I understood that something is blocking its usage on the hardware / firmware side of things (I've read something about waiting for firmware to enable something and so on). Maybe that's incorrect and it's just a missing software implementation? If that's the case, that's better, I suppose.
Originally posted by Brisse:
I know it says so on Wikipedia ( https://en.wikipedia.org/wiki/Unifie..._Decoder#UVD_6 ) but I'm pretty certain it's wrong. It doesn't have VP9 hardware decoding. They cheated a bit and implemented GPGPU/CPU-based hybrid decoding for it, but that's on Windows only, and I wouldn't really call it proper hardware decoding.

Oh, well, that would be cheating, indeed, because then it would be done in SW on the CPU/shaders. :/
But then, at least, it should be possible to write something like this as SW for "Linux" / userland too, and hook it up to VDPAU/VAAPI/OpenMAX/... so programs might be able to use it, even if parts of it require some more CPU power. Half-accelerated is still better than none.

Originally posted by shmerl:
Why is it on Windows only? It should be OS agnostic. What prevents it from being used on Linux?

Originally posted by agd5f:
H.265 encode is implemented as part of the UVD engine rather than VCE. The two engines are very similar. On Raven, the two separate engines (UVD and VCE) are replaced by a single engine (VCN).

It's really neat to have so many devs here so you get some insights.
Originally posted by Adarion:
Oh, well, that would be cheating, indeed, because then it would be done in SW on the CPU/shaders. :/
But then, at least, it should be possible to write something like this as SW for "Linux" / userland too, and hook it up to VDPAU/VAAPI/OpenMAX/... so programs might be able to use it, even if parts of it require some more CPU power. Half-accelerated is still better than none.
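In player terms, the fallback idea being discussed — use the fixed-function block when it supports the codec, otherwise fall back to a CPU/shader hybrid path — is just a dispatch decision. A minimal sketch (the profile set and function names here are purely illustrative, not real VA-API/VDPAU calls):

```python
# Hypothetical sketch of a hybrid decode dispatcher: codecs the UVD block
# handles go to the hardware path; everything else (e.g. VP9 on UVD 6.x)
# falls back to a software or GPGPU-assisted path.

HW_DECODE_PROFILES = {"h264", "h265", "mpeg2", "vc1"}  # assumed hw support, for illustration

def pick_decode_path(codec: str) -> str:
    """Return which decode path a player would use for this codec."""
    if codec in HW_DECODE_PROFILES:
        return "hardware"          # full fixed-function decode
    return "hybrid/software"       # e.g. entropy decode on CPU, filtering on shaders

for codec in ("h264", "vp9"):
    print(codec, "->", pick_decode_path(codec))
```

The point of routing this through VDPAU/VA-API, as suggested above, is that applications would not need to know which path was taken.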
Originally posted by Brisse:
I know it says so on Wikipedia ( https://en.wikipedia.org/wiki/Unifie..._Decoder#UVD_6 ) but I'm pretty certain it's wrong. It doesn't have VP9 hardware decoding. They cheated a bit and implemented GPGPU/CPU-based hybrid decoding for it, but that's on Windows only, and I wouldn't really call it proper hardware decoding.

What about Khronos Group's OpenMAX?
Originally posted by timofonic:
I wonder why there's a need to have "hardware decoding/encoding" in the current form. Don't they really have a binary blob running on specialized hardware? What about making that hardware fully programmable, so projects like FFmpeg can be built for it instead of relying on what they provide in their GPUs?

Video decode/encode hw is fixed function. It's designed to do specific operations for a specific codec. If you change the codec, the hw no longer fits. If you make a generic solution you end up with a CPU or GPU, neither of which by itself is a good fit for video. Even on GPUs there are some parts of the 3D pipeline that do not map well to shaders, so they are implemented as fixed function blocks. Some new graphics API features require newer fixed function blocks, so it's not completely programmable.
You could build a similar mixed programmable and fixed function pipeline for video, but then you are still limited by the fixed function blocks, and having a big pipeline would use more power and die area to support a relatively small number of states (as compared to graphics).

Originally posted by timofonic:
What about Khronos Group's OpenMAX?

It's a media framework similar to VDPAU or VA-API.
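One way to see agd5f's point that the hardware "no longer fits" when the codec changes: even the trivial first step of recognizing a bitstream is format specific, and the real entropy-coding and prediction stages diverge far more. A toy sketch (the IVF "DKIF" magic and the H.264/H.265 Annex B start codes are real; everything else is simplified):

```python
# Miniature illustration of codec/container-specific bitstream handling.
# A fixed-function decoder is built around one such syntax; feed it a
# different codec and nothing past this first byte-level step lines up.

def sniff(buf: bytes) -> str:
    if buf.startswith(b"DKIF"):
        return "ivf"       # IVF container, commonly wrapping VP8/VP9
    if buf.startswith((b"\x00\x00\x00\x01", b"\x00\x00\x01")):
        return "annexb"    # H.264/H.265 Annex B start code prefix
    return "unknown"

print(sniff(b"DKIF" + b"\x00" * 28))       # -> ivf
print(sniff(b"\x00\x00\x00\x01\x67"))      # -> annexb
```

The actual decode stages (CABAC vs. VP9's arithmetic coder, different transform and loop-filter designs, etc.) are where the fixed-function mismatch really bites; the sniffer above is only the surface of it.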
Originally posted by agd5f:
Video decode/encode hw is fixed function. It's designed to do specific operations for a specific codec. If you change the codec, the hw no longer fits. If you make a generic solution you end up with a CPU or GPU, neither of which by itself is a good fit for video. Even on GPUs there are some parts of the 3D pipeline that do not map well to shaders, so they are implemented as fixed function blocks. Some new graphics API features require newer fixed function blocks, so it's not completely programmable.
You could build a similar mixed programmable and fixed function pipeline for video, but then you are still limited by the fixed function blocks, and having a big pipeline would use more power and die area to support a relatively small number of states (as compared to graphics).

Originally posted by agd5f:
It's a media framework similar to VDPAU or VA-API.

Yes, but OpenMAX is a standard; VDPAU, VA-API and cuvid/nvdecode aren't. So what about ditching that mess, using OpenMAX, and participating in its development to make it better for desktop systems too?
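For context on the "ditch that mess" argument, a rough summary of the APIs mentioned in the thread — only OpenMAX IL is a Khronos standard; the others are vendor- or platform-specific interfaces a player has to target separately (a simplification, not an exhaustive or authoritative table):

```python
# Rough map of the decode-API landscape discussed above; "standard" here
# means a cross-vendor Khronos specification, not de-facto adoption.
DECODE_APIS = {
    "VDPAU":       {"standard": False, "origin": "NVIDIA (Unix/X11)"},
    "VA-API":      {"standard": False, "origin": "Intel (Linux)"},
    "NVDEC/CUVID": {"standard": False, "origin": "NVIDIA (CUDA)"},
    "OpenMAX IL":  {"standard": True,  "origin": "Khronos Group"},
}

standards = [name for name, info in DECODE_APIS.items() if info["standard"]]
print(standards)  # -> ['OpenMAX IL']
```

In practice the non-standard APIs dominate on the Linux desktop, which is the tension timofonic is pointing at.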
Originally posted by timofonic:
Yes, but OpenMAX is a standard; VDPAU, VA-API and cuvid/nvdecode aren't. So what about ditching that mess, using OpenMAX, and participating in its development to make it better for desktop systems too?