Does anyone know when OpenSource ATI GPUs power options are fixed?


  • Asariati
    replied
    Originally posted by dogsleg View Post
    Well, yes. Hey, AMD, I am your (AMD) user, and I'm actually ready to pay some extra money for better user-facing features on Linux. I don't need OpenCL; I need performance, OpenGL, and power management.
    This would be a nice experiment. However, judging by how bounty-based projects have gone on BSD, they usually don't succeed.



  • dogsleg
    replied
    Originally posted by crazycheese View Post
    If a quality Linux driver is not in the budget of your driver department, why don't you search for external sources to IMPROVE the situation in this segment?
    Ok, the segment is small.
    Ok, the number of open-source hackers is small.
    But how big is the USERBASE that is ready to BUY or PAY for your driver? You have not done any research here.
    Well, yes. Hey, AMD, I am your (AMD) user, and I'm actually ready to pay some extra money for better user-facing features on Linux. I don't need OpenCL; I need performance, OpenGL, and power management.



  • curaga
    replied
    Tom, can the VLIW packetizer take full advantage of the four or five concurrent instructions on AMD cards? Or is it limited to just one (cpu-focused?)?



  • tstellar
    replied
    Originally posted by Drago View Post
    Tom, thank you for the clarification. From some old discussions here I got the feeling that LLVM is not suitable for graphics; what changed?
    Is this LLVM->VLIW packetizer somewhere in your commits? Since LLVM IR is an intermediate representation, aren't additional optimization steps needed for the VLIW peculiarities of the concrete chip? Hence an LLVM->VLIW compiler optimizer.

    Vadim Girlin is optimizing current r600g shader compiler in this repo:
    mesa 3D graphics library (my experimental wip branches) - GitHub - VadimGirlin/mesa at r600_shader_opt


    Your comments greatly appreciated!
    I'm not sure what the arguments against using LLVM for graphics were in the past, but in its current state I think LLVM is suitable for graphics shaders.

    The LLVM->VLIW packetizer is a compiler pass that is included in the LLVM libraries. It does not work with LLVM IR, but rather with MachineInstr objects that represent the actual hardware instructions for a target. Each target is responsible for lowering LLVM IR to MachineInstr objects, and this is done prior to running the VLIW packetizer.

    The packetizer works by analyzing the instruction dependencies in a program and determining which instructions can be packetized together without changing the logic of the program. Before adding a new instruction to a packet, it "asks" the target whether there are any target-specific constraints that would prevent the instruction from being added. This is where the VLIW peculiarities of the R600 hardware would be handled.

    Vadim's optimizations will still work even with the LLVM backend, because they operate on struct r600_bytecode objects, which is what the backend currently generates.
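    As a rough illustration of the dependence-driven grouping described above, here is a toy sketch in Python (this is not LLVM's actual C++ pass; the instruction format, the slot count, and the `target_ok` hook are all made up for the example):

    ```python
    def packetize(insts, slots=5, target_ok=lambda packet, inst: True):
        """Greedy VLIW packetizer sketch.

        insts: list of (dest, srcs) tuples in program order.
        Starts a new packet when the current one is full, when the next
        instruction reads a value defined earlier in the same packet, or
        when the target hook vetoes the addition (mimicking how the real
        pass "asks" the target about target-specific constraints).
        """
        packets, current, defined = [], [], set()
        for dest, srcs in insts:
            inst = (dest, srcs)
            if current and (len(current) == slots
                            or any(s in defined for s in srcs)
                            or not target_ok(current, inst)):
                packets.append(current)
                current, defined = [], set()
            current.append(inst)
            defined.add(dest)
        if current:
            packets.append(current)
        return packets

    # a and b are independent and issue together; c reads a and b, so it
    # starts a new packet; d reads c, so it gets a packet of its own.
    prog = [("a", ()), ("b", ()), ("c", ("a", "b")), ("d", ("c",))]
    print(packetize(prog))
    ```

    The real pass also has to model structural hazards (e.g. which ALU slot an instruction may occupy on VLIW-4/VLIW-5 hardware), which is exactly the kind of constraint the target hook would encode.
    
    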



  • darkbasic
    replied
    Huh? I didn't know about the packetizer; this thread is starting to get quite interesting...



  • bridgman
    replied
    Looks like the packetizer was added quite recently (within the last 4 months, judging by the header file). I don't think Tom added it, but he will know better.



  • Drago
    replied
    Originally posted by tstellar View Post
    The r600 LLVM backend can be used for graphics too, and it comes fairly close to matching the current r600 compiler in terms of the number of piglit tests passed. It is just missing support for some of the texture instructions and a few other things. It should produce much better code than the current r600g compiler once it makes use of the LLVM VLIW packetizer, which is something I'm hoping will be done as part of the open-source compute effort. Adding support for the texture instructions shouldn't be too difficult a task. It is not a priority for me at the moment, but someone else from the community could easily do it if they were interested.
    Tom, thank you for the clarification. From some old discussions here I got the feeling that LLVM is not suitable for graphics; what changed?
    Is this LLVM->VLIW packetizer somewhere in your commits? Since LLVM IR is an intermediate representation, aren't additional optimization steps needed for the VLIW peculiarities of the concrete chip? Hence an LLVM->VLIW compiler optimizer.

    Vadim Girlin is optimizing current r600g shader compiler in this repo:
    mesa 3D graphics library (my experimental wip branches) - GitHub - VadimGirlin/mesa at r600_shader_opt


    Your comments greatly appreciated!



  • watercraft
    replied
    I had similar issues with power management for a Radeon 6950 on Linux. Basically:
    1. With a single monitor, power management would work when setting low/medium/high (I don't use dynpm).
    2. With dual monitors, power management kept the same clock speeds for low/medium/high (I don't use dynpm).
    After adding the flag drm.debug=0x02 to the kernel command line I was able to see the issue: the power tables in the video BIOS were not configured correctly to match the radeon kernel module.
    Code:
    [    0.917689] [drm:radeon_pm_print_states], 4 Power State(s)
    [    0.917690] [drm:radeon_pm_print_states], State 0: Default
    [    0.917691] [drm:radeon_pm_print_states],    Default
    [    0.917692] [drm:radeon_pm_print_states],    16 PCIE Lanes
    [    0.917693] [drm:radeon_pm_print_states],    3 Clock Mode(s)
    [    0.917694] [drm:radeon_pm_print_states],            0 e: 800000     m: 1250000      v: 1175 No display only
    [    0.917696] [drm:radeon_pm_print_states],            1 e: 800000     m: 1250000      v: 1175
    [    0.917698] [drm:radeon_pm_print_states],            2 e: 800000     m: 1250000      v: 1175
    [    0.917700] [drm:radeon_pm_print_states], State 1: Performance
    [    0.917701] [drm:radeon_pm_print_states],    16 PCIE Lanes
    [    0.917702] [drm:radeon_pm_print_states],    3 Clock Mode(s)
    [    0.917703] [drm:radeon_pm_print_states],            0 e: 250000     m: 150000       v: 900  No display only
    [    0.917705] [drm:radeon_pm_print_states],            1 e: 500000     m: 1250000      v: 1000
    [    0.917706] [drm:radeon_pm_print_states],            2 e: 870000     m: 1250000      v: 1175
    [    0.917708] [drm:radeon_pm_print_states], State 2: Default
    [    0.917709] [drm:radeon_pm_print_states],    16 PCIE Lanes
    [    0.917710] [drm:radeon_pm_print_states],    3 Clock Mode(s)
    [    0.917711] [drm:radeon_pm_print_states],            0 e: 500000     m: 1250000      v: 1000 No display only
    [    0.917712] [drm:radeon_pm_print_states],            1 e: 500000     m: 1250000      v: 1000
    [    0.917714] [drm:radeon_pm_print_states],            2 e: 870000     m: 1250000      v: 1175
    [    0.917715] [drm:radeon_pm_print_states], State 3: Default
    [    0.917716] [drm:radeon_pm_print_states],    16 PCIE Lanes
    [    0.917717] [drm:radeon_pm_print_states],    3 Clock Mode(s)
    [    0.917719] [drm:radeon_pm_print_states],            0 e: 500000     m: 1250000      v: 1000 No display only
    [    0.917720] [drm:radeon_pm_print_states],            1 e: 500000     m: 1250000      v: 1000
    [    0.917721] [drm:radeon_pm_print_states],            2 e: 870000     m: 1250000      v: 1175
    With a single monitor, State 1: Performance was being used, which allowed for the most effective power management. With dual monitors, however, State 0: Default was being used, because the driver's search algorithm couldn't find a power state suitable for dual monitors. To get around this I hacked the driver to always use State 1: Performance, which allows power management to work with dual monitors. How the power tables are set up, which varies across cards from different generations and manufacturers, seems to dictate the level of success with power management.

    When using Windows, the AMD/ATI driver actually uses State 2: Default for power management, which doesn't make much sense to me, but their driver likely uses different search algorithms for selecting power states. A userspace tool to manually select the power states would be a good idea, with the caveat that it may cause display issues.



  • tstellar
    replied
    Originally posted by Drago View Post
    Yes, I am watching Tom's patches, but I believe his work is more focused on compute for r600g, and on compute/graphics for radeonsi.
    I'm hoping for better graphics performance on r600g.
    The r600 LLVM backend can be used for graphics too, and it comes fairly close to matching the current r600 compiler in terms of the number of piglit tests passed. It is just missing support for some of the texture instructions and a few other things. It should produce much better code than the current r600g compiler once it makes use of the LLVM VLIW packetizer, which is something I'm hoping will be done as part of the open-source compute effort. Adding support for the texture instructions shouldn't be too difficult a task. It is not a priority for me at the moment, but someone else from the community could easily do it if they were interested.



  • schnelle
    replied
    Originally posted by grege View Post
    One thing also needed is an open-source version of the Catalyst Control Center for the free driver. Changing power settings using cat and echo is so last century. Something imaginative, like a Radeon Control Center.

    +1

    We really need a more user-friendly way to change power settings until dynpm becomes a reality.
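    For reference, the "cat and echo" workflow being talked about is the radeon KMS sysfs interface. A minimal sketch (the card index is an assumption about your setup, and writing these files requires root):

    ```shell
    #!/bin/sh
    # Sketch: select a static radeon power profile via sysfs (run as root).
    # The card index (card0) varies per machine; valid profiles are
    # default | auto | low | mid | high.
    set_radeon_profile() {
        dev=$1
        profile=$2
        if [ -w "$dev/power_method" ] && [ -w "$dev/power_profile" ]; then
            echo profile > "$dev/power_method"     # static profiles, not dynpm
            echo "$profile" > "$dev/power_profile"
        else
            echo "no writable radeon pm interface at $dev" >&2
            return 1
        fi
    }

    # Typical use (as root):
    #   set_radeon_profile /sys/class/drm/card0/device low
    ```

    A graphical control center would presumably just be a friendlier front end over these same two files.
    
    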
