AMD Releases Open-Source UVD Video Support
-
Originally posted by Veerappan
Umm, doesn't RS880 have UVD2? http://en.wikipedia.org/wiki/Unified_Video_Decoder
-
Originally posted by droidhacker
That's absurd. The open source drivers are very close to the blobs. You're way behind the times.
Also, there's no such thing as "twenty times slower". "times" means multiply. You can't multiply "slowness".
-
Originally posted by Veerappan
I think that the previous poster is referring to the fact that the Llano (and maybe Trinity) APUs default to a low power state in the VBIOS, whereas the AMD desktop cards default to a high power state. In order to get full performance out of Llano/Trinity, you need to set the power profile to high, or set the power method to dynpm.
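For anyone who wants to flip those knobs, here is a minimal sketch that sets them through the radeon sysfs interface. It is not from this thread: it assumes the pre-DPM radeon KMS files (power_method / power_profile) and that the APU is card0, and it needs root.

#!/usr/bin/env python3
# Minimal sketch (assumption: pre-DPM radeon KMS sysfs interface, APU is card0).
# Forces the "high" static profile, or switches to dynamic reclocking (dynpm).
import sys
from pathlib import Path

CARD = Path("/sys/class/drm/card0/device")  # adjust the card index as needed

def set_power(method="profile", profile="high"):
    """Select the power method; when using static profiles, also pick one."""
    (CARD / "power_method").write_text(method + "\n")
    if method == "profile":
        (CARD / "power_profile").write_text(profile + "\n")

if __name__ == "__main__":
    # Usage: set_power.py [profile high|mid|low] or set_power.py dynpm
    args = sys.argv[1:]
    set_power(*args) if args else set_power()

The same thing from a root shell is just writing "high" to /sys/class/drm/card0/device/power_profile after setting power_method to "profile".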
-
Originally posted by brent
No, that's not it. Setting the profile or enabling dynpm will do nothing, because the driver simply does not allow higher clocks. There is no way to get the full clock working on mobile APUs without hacks.
-
Originally posted by bridgman
AFAIK there are enough differences that the code which works on other UVD2 implementations doesn't work yet on the UVD in the IGPs. I believe Christian posted about this earlier in the thread.
Hopefully he figures the differences out. I've got a Radeon 6850, A6-3500, HD4200, and an HD3200 in various systems at home. The HD3200 is in a file-server, but the rest of them could end up doing video decoding duties at any time in the future, and the A6-3500 spends an average of 2 hours a night playing back recorded HD TV episodes.
-
Originally posted by artivision
I'm trying hard to understand what you're saying.
1) Rasterizers inside GPU drivers are unified (as vendors say). They can execute shaders and draw graphics from multiple shader languages, with a simple front end plus a compiler back-end so that a compiler can target the GPU.
2) When I say SSE4.2 or AVX, I mean at least 6-instruction-wide processors with 7-9.5 Dhrystone DMIPS/MHz single-thread.
3) Are you a programmer? Have you ever tried to compile GLSL source to GLSL bytecode and then to GLSL machine code? It takes 2-15 minutes for simple shader programs, most of it in the first half. Now add the HLSL-bytecode-to-GLSL-source step and there you have it. The problem isn't the dead corners. The only possibility here is to write sub-extensions for OpenGL extensions that compile the D3D cases: something like a sub-compiler that targets the open and closed GLSL compilers inside the GPU driver, and this sub-compiler would be LLVM-friendly.
4) MS has already lost a court fight over the HLSL implementation. We only ask that MS D3D (via Wine) can see the GPU directly, without translations.
2.) Well, my point is that no game uses SSE4.2/AVX these days (maybe Unreal 4, but I'm not sure yet), and single-thread performance is very relative. Most games bottleneck on the bandwidth/GPU side rather than the CPU, and in the others the CPU does affect the maximum framerate, but normally the FPS is high enough not to care. So these days the CPU point is moot for most games, unless you want to break a benchmark record or play with multiple monitors in 3D (neither of which is properly supported in Linux yet).
3.) I am, and those timings are insanely high; you probably have a huge time eater in your code unrelated to the HLSL-to-GLSL step (see the sketch after this list for a quick way to measure driver-side compile time).
4.) Again, my point is that Wine's performance is close enough, and Wine is already exploring an option to use an external HLSL compiler, especially for DX10/11: http://wiki.winehq.org/HLSLCompiler. But the current Wine implementation for DX9 handles very taxing games like Crysis 2 at very high settings, fluidly enough for me not to care.
5.) Handling the GPU directly is probably not a good idea at all.
Here you can see how Wine shaders work: http://wiki.winehq.org/DirectX-Shaders
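To put a number on the compile-time question above, here is a rough sketch (not from the thread) that times how long the driver takes to compile a trivial GLSL fragment shader. It assumes PyOpenGL and freeglut are installed; any other way of getting a current GL context works just as well.

#!/usr/bin/env python3
# Rough sketch: measure driver-side GLSL compile time for a trivial shader.
# Assumptions: PyOpenGL + freeglut are available and can create a GL context.
import time
from OpenGL.GL import (glCreateShader, glShaderSource, glCompileShader,
                       glGetShaderiv, GL_FRAGMENT_SHADER, GL_COMPILE_STATUS)
from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGBA

FRAG = """
#version 120
void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }
"""

def main():
    glutInit()                          # GLUT is only used to get a GL context
    glutInitDisplayMode(GLUT_RGBA)
    glutCreateWindow(b"glsl-compile-timer")

    start = time.perf_counter()
    shader = glCreateShader(GL_FRAGMENT_SHADER)
    glShaderSource(shader, FRAG)
    glCompileShader(shader)
    ok = glGetShaderiv(shader, GL_COMPILE_STATUS)
    elapsed = time.perf_counter() - start

    print("compile ok=%s in %.2f ms" % (bool(ok), elapsed * 1000))

if __name__ == "__main__":
    main()

This gives a concrete way to check whether the 2-15 minute figure really comes from the driver-side compile or from something else in the pipeline.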