AMD's UVD2-based XvBA Finally Does Something On Linux
-
Originally posted by mugginz: Is it 3D Vision you're talking about, or is there some other functionality not supported under Linux?
-
Originally posted by Qaridarium: "I want FULL functioning of the card"
You can only get that with a full open-source spec for your hardware, because there are many functions in the card that you can't use with a closed-source driver: the driver does not give you full access to the hardware!
So, the radeon driver can now run everything? Does that apply to the HD 4000 and Evergreen cards? Blender and other 3D computer graphics software?!? Raster graphics editors (e.g. GIMP)? Vector graphics editors (e.g. Inkscape)? All of these I can use with an ATI card and the FOSS radeon driver???!?
I can use AutoCAD via Wine with the same ATI card with the radeon driver?!?
I can use GoogleEarth with it?
I can use Cinelerra, Kino and Kdenlive with the radeon driver?!?
I can use any video player such as Kaffeine, MPlayer, KMplayer, VLC, Totem and it can play all media including DVD, HD DVD and Blu-ray?!?
It can play both OpenGL and xv now?!?
One can do all this with the FOSS radeon driver?!? Why didn't anyone tell me this? Is this true, Kano?!? Why is anyone even arguing then?
-
Originally posted by Hans: EDIT:
@mugginz
Ohh I forgot.
A lot of the posts you hear about how bad fglrx is come from nvidia users who don't have a clue. They haven't tried fglrx and are only posting out of some kind of fanboyism, or because they want to contribute to the AMD/ATI bashing.
Not that AMD/ATI fanboys are any better.
-
Q dreams of a FOSS-only world, but that must be in a parallel universe, not this world. When somebody dislikes MPEG-2 acceleration, which is used on all DVDs, and H.264, which is even used by lots of Blu-rays (many use VC-1 and some even MPEG-2) and by YouTube, then this is not normal. He cannot even understand/interpret a VDPAU test with an Atom CPU correctly. His whole explanation that a CPU can decode better than a GPU relies on this review:
The problem is that this review did not use those high-end CPUs he always talks about, nor is the test in any way relevant. Of course the CPU needed less power to decode that test file. Exactly 296 milliwatts more than VDPAU! That's of course a reason to say that CPU decoding is better.
Sadly, the article did not mention the bitrate of the example file used. That's something I have always criticized; older tests had the same problem.
The problem is that the tested video is just H.264 level 4.0, similar to what YouTube uses, with LESS than 8 Mbps. You can verify that with MediaInfo.
With that stupid example Q wants to look extra clever, but he isn't. In reality the problematic videos have a bitrate that is up to 6 times higher at peak, and about a factor of 4 higher on average, when you test Blu-ray content. It is very unlikely that he tested, or that Phoronix used, anything like that. A simple Atom could not even decode that with multithreaded MPlayer, and the benchmark would never show more than 50% CPU usage because the MPlayer used was single-threaded and the Atom has HT. Of course Q knew all that in advance, and my explanation of the power benchmark result is not correct, right?
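The bitrate argument above can be sanity-checked with a little arithmetic. The sketch below uses only figures from the post (a sub-8 Mbps test clip, Blu-ray peaks up to 6x and averages around 4x higher); the 2 GiB / 10-minute file in the usage example is purely hypothetical, not a file from the review.

```python
# Rough arithmetic for the bitrates discussed above. All scale factors
# come from the post itself; the example file at the bottom is made up.

def scaled_bitrates(base_mbps: float, peak_factor: float, avg_factor: float):
    """Return (peak, average) bitrates in Mbps for the harder content."""
    return base_mbps * peak_factor, base_mbps * avg_factor

def avg_bitrate_mbps(size_bytes: int, duration_s: float) -> float:
    """Average container bitrate from file size and duration,
    a quick cross-check of what MediaInfo reports."""
    return size_bytes * 8 / duration_s / 1e6

peak, avg = scaled_bitrates(8.0, 6.0, 4.0)
print(f"Blu-ray-like content: ~{peak:.0f} Mbps peak, ~{avg:.0f} Mbps average")

# Hypothetical example: a 2 GiB clip lasting 10 minutes
print(f"{avg_bitrate_mbps(2 * 1024**3, 600):.1f} Mbps average")
```

So a sub-8 Mbps clip scaled by those factors lands around 48 Mbps peak and 32 Mbps average, which is the gap the post is complaining about.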
-
Originally posted by Panix: ATI cards run on less heat and power, at least this new generation.
ATI modules are definitely not a viable solution for STBs, where good video support is a must. However, they are better than NVIDIA modules for the compute market (OpenCL + DP support). That said, DP support is currently half-implemented on the ATI side, so it's not really very useful in practice for now. That's the reality for sub-100W MXM modules. For desktop PCI-E, ATI cards are indeed more power-efficient than Fermi, for example. This is pretty obvious. ;-)
-
Originally posted by Kano: http://www.phoronix.com/scan.php?pag...u_mobile&num=1
The problem is that this review did not use those high-end CPUs he always talks about, nor is the test in any way relevant. Of course the CPU needed less power to decode that test file. Exactly 296 milliwatts more than VDPAU! That's of course a reason to say that CPU decoding is better.
- The CPU frequency was not monitored. The CPU used was an Atom 330, which does not feature EIST, so there is no frequency scaling. Had there been frequency scaling, you would have noticed that while the GPU was decoding, the CPU would run at 800 MHz, whereas for SW decoding the CPU would be at its full 1.6 GHz. The 800 MHz / 1.6 GHz difference would probably account for a 1 W difference in favor of the GPU decoding solution.
- IIRC, the H.264 1080p test file is "Grey.ts". Not something really difficult to decode. I also use Grey for some of my tests, but more to expose some deinterlacing problems to some people. That clip is not representative of real-world H.264 clips using all the features the standard allows. It was representative for exposing bugs (in MPlayer's SW decoding path years ago, in HW decoding drivers nowadays for a certain vendor).
BTW, I have just noticed I have a 23 Mbps MPEG-2 clip that my Athlon 64 X2 @ 2.7 GHz cannot handle... This means I will have to handle the IDCT code path, which is awful. As is XvMC / MC or IDCT anyway.
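The frequency-scaling point above lends itself to a quick back-of-the-envelope calculation. The sketch below uses only the two numbers quoted in the thread (the 296 mW measured difference and the roughly 1 W estimated for the 800 MHz vs 1.6 GHz clock difference); the "adjusted gap" is this post's own rough estimate, not a measured result.

```python
# Back-of-the-envelope for the power argument above. Software decoding
# drew 0.296 W more than VDPAU in the review; on a CPU with frequency
# scaling (which the Atom 330 lacks), the GPU path would gain roughly
# another 1 W because the CPU could idle at its lower clock.

measured_gap_w = 0.296   # SW decode minus VDPAU, as measured
scaling_bonus_w = 1.0    # rough estimate for 800 MHz vs 1.6 GHz idle clock

adjusted_gap_w = measured_gap_w + scaling_bonus_w
print(f"Estimated gap with frequency scaling: "
      f"~{adjusted_gap_w:.1f} W in favor of GPU decoding")
```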
-
xvba-video 0.6.11
A new version of xvba-video, the XvBA backend to VA-API, is now available at splitted-desktop.com.
Version 0.6.11 - 18.Apr.2010
* Fix VA context destruction
* Fix rendering of empty surfaces
* Fix vaGetImage() in I420 format
* Fix vaCreateConfig() to validate profile & entrypoint
The first fix was for XBMC's decoder soft-reset mechanism. However, the fix is incomplete and requires more changes; the driver requires changes too to get it right and sane. The current approach is error-prone. A workaround is conceivable, but it would be very fragile and rely on undocumented behavior, so the driver change is needed.
The second fix, I don't quite remember what it was for, as it was done a month ago. It was either for Gnash or for Lightspark; both are VA-API-enabled Flash players. It could happen that nobody decoded into a surface or used vaPutImage() on it, so it was left uninitialized and eventually crashed.
The third fix is for VLC. XvBA does not support I420 natively, so I need to hack around it and expose the right I420 layout through a YV12 format internally. This is not optimal for some other applications (GStreamer), but it is the only solution for VLC.
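The I420/YV12 trick described above works because both are planar 4:2:0 layouts that differ only in the order of their chroma planes (I420 stores U then V, YV12 stores V then U). A minimal generic sketch of such a plane swap, not xvba-video's actual code:

```python
# Generic illustration of the I420 <-> YV12 relationship: same luma
# plane, same quarter-size chroma planes, only the chroma order differs.

def split_planes(frame: bytes, width: int, height: int):
    """Split a planar 4:2:0 frame into (luma, chroma1, chroma2)."""
    y_size = width * height
    c_size = y_size // 4          # each chroma plane is quarter-size
    y = frame[:y_size]
    c1 = frame[y_size:y_size + c_size]
    c2 = frame[y_size + c_size:y_size + 2 * c_size]
    return y, c1, c2

def i420_to_yv12(frame: bytes, width: int, height: int) -> bytes:
    """Swap the chroma planes; the same function converts both ways."""
    y, u, v = split_planes(frame, width, height)
    return y + v + u
```

Since the conversion is its own inverse, applying it twice returns the original frame, which is what makes it cheap to hide one layout behind the other.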
The fourth fix was most visible with XBMC before yesterday's commits. The program never really checked that the VA profile was actually supported prior to using it. On the other hand, xvba-video did not fully check VA profiles & entrypoints in all functions taking them.
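The fourth fix is essentially input validation: rejecting a (profile, entrypoint) pair before a config is created with it. A generic sketch of that kind of check, using hypothetical names rather than the real VA-API identifiers or xvba-video's code:

```python
# Sketch of validating a (profile, entrypoint) pair up front, the kind
# of check vaCreateConfig() gained in this release. The table below is
# illustrative only, not xvba-video's actual capability list.

SUPPORTED = {
    "MPEG2Main":   {"VLD"},
    "H264High":    {"VLD"},
    "VC1Advanced": {"VLD"},
}

def create_config(profile: str, entrypoint: str) -> str:
    """Reject unsupported pairs instead of failing later in the driver."""
    if profile not in SUPPORTED:
        raise ValueError(f"unsupported profile: {profile}")
    if entrypoint not in SUPPORTED[profile]:
        raise ValueError(f"unsupported entrypoint for {profile}: {entrypoint}")
    return f"config({profile}/{entrypoint})"
```

Failing fast here gives the application a clean error instead of the undefined behavior XBMC was hitting.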
This should be the last driver of the 0.6.x series. 0.7.x will focus on optimizations and will remove some workarounds for issues that are normally fixed nowadays but would have prevented the planned changes from being efficient. The current dev tree now requires fglrx >= 8.69.2. I will probably increase the requirement as I see fit, probably to 8.72 or even 8.73 in the end.
-
Originally posted by Panix: What about all the users who have used both cards, and those that ACTUALLY have ATI cards and have problems with fglrx? If the ATI cards could at least match what the Nvidia cards can do, there wouldn't be a problem. Actually, a lot of users would drop their Nvidia cards and switch to ATI now. ATI cards run on less heat and power, at least this new generation. Lots of users would sell their cards and switch if there were a good reason. But if the support is still subpar and the development unbearably slow, then there will be little reason to. I have read a lot of users who either have ATI cards or have used both comment about their experiences with the ATI cards, on specific distro forums too. How can you conclude that it's all 'nvidia fanboys' making the complaints? It sounds like that's both inaccurate and you didn't do your homework.
I have both types of cards, and I also have problems with my nvidia driver. Can you specify the problems you're having with fglrx?
-
Originally posted by mugginz: I'm still not totally against going with an ATI card for the HTPC. It would give me a bit of a personal window into what to expect if I were to buy the higher-performance card for the desktop. I so want to use Eyefinity. I have four 24-inch screens, of which I can only use two at the moment. nVidia provides awesome support for a two-card system, but when you use multiple cards you lose compositing and take a performance hit. While there is a workaround for the loss of compositing with Xinerama enabled, it's not one I'd consider ideal by any means. ATI is the only vendor that produces a high-performance solution for more than two screens under Linux at present.
If I do get the ATI card (against my better judgment) and there are issues, I will likely be very vocal about it indeed.
The OS install was done on an AMD 3200+ on an Asus 939 motherboard with an nVidia 6600. I put the hard drive into the new HTPC with the new 8400GS on a Gigabyte 775 mobo and there was nothing to do; it all just worked. Something Windows doesn't do gracefully at all. One thing I wanted to do was at least go with an AMD CPU and mobo, but I couldn't source one with three PCI slots in a microATX form factor.