Windows 10 Radeon Software vs. AMDGPU On Ubuntu Linux


  • Adarion
    replied
    My sincere congratulations to the AMD-ATI team! This is impressive!
    AMDGPU-PRO is on par with Windows 10 most of the time, or even in the lead, and the free stack also put in a very good showing.
    Compare that to the situation ~8 years ago. Night and day.



  • BenPope
    replied
    Originally posted by bridgman View Post
    I was about to say "the preview driver is not tested on 16.04" but then I noticed Michael had successfully run it on 16.04 for this article

    I'm using Ubuntu 15.10, but I don't think that makes a difference. I just built this kernel: https://cgit.freedesktop.org/~agd5f/linux/tree/?h=drm-next-4.7-wip-polaris, rebooted, and then reinstalled the beta driver. While installing the driver, its kernel module failed to build, but everything seems to work fine with the amdgpu driver already in the kernel: http://openbenchmarking.org/result/1604191-HA-BPPADOKAN42.

    I'm guessing that as long as the kernel branch has the DAL stuff (and its API isn't changed significantly), the kernel amdgpu driver should work with the rest of the AMDGPU-PRO stack, right?



  • bridgman
    replied
    Originally posted by stiiixy View Post
    IGP is integrated, and DGP is discrete. I see this used on Arch and a couple of other sites. I found it a nice, simple way to remember the difference, since only the first letter changes.

    Anyway, can anyone explain to me, in lay terms, how I make use of (or force full-time, if I have to) the DGP on my laptop instead of the IGP? Apparently the kernel is supposed to be 'intelligent' about this, but that doesn't appear to be happening in my one whole day of testing. Also, I'm on Mesa with a Radeon 7670 on Arch (xorg is at 1.18, radeon at 7.7 or whatever comes after it, all standard Arch packages), so no AMDGPU for me (missed it by one generation!). Apparently using radeon.modeset=1 doesn't do anything for this issue.
    It's a userspace switch, not something in the kernel. There are a few guides around depending on your distro (you mentioned Arch; is that what you are using?). Here's an example for Ubuntu:
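
    On a Mesa stack that userspace switch is typically the DRI_PRIME environment variable: running a program as DRI_PRIME=1 <program> asks Mesa's PRIME offloading to render on the discrete GPU. As a minimal sketch only (assuming Mesa PRIME support is present and the discrete GPU is the secondary provider), an application can also request this itself by exporting the variable before any GL initialization:

    Code:
    /* Hedged sketch: request the discrete GPU via Mesa's DRI_PRIME offloading.
     * Assumes a Mesa stack with PRIME render offload; the usual approach is
     * simply `DRI_PRIME=1 ./program` from the shell instead. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Must happen before the GL library loads its DRI driver,
         * i.e. before the first GLX/EGL/SDL call the program makes. */
        if (setenv("DRI_PRIME", "1", 1) != 0) {
            perror("setenv");
            return 1;
        }

        printf("DRI_PRIME=%s (discrete GPU requested)\n", getenv("DRI_PRIME"));

        /* ... create the GLX/EGL context and render as usual ... */
        return 0;
    }

    Checking which GPU actually answers is easiest with DRI_PRIME=1 glxinfo | grep "OpenGL renderer".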



  • bug77
    replied
    Originally posted by duby229 View Post

    I thought only the kernel ran in long mode, which then context-switched to compatibility mode for 32-bit code in userspace? If so, that 32-bit code doesn't have access to the additional registers or extensions. However, if you're right that long mode supports 2-, 4- or 8-byte pointers, then there is absolutely no reason to build a Linux system that is totally 64-bit, since such code would already have access to all the additional registers and extensions of long mode.

    EDIT: It should be pretty easy to write a tool to determine how much address space an app needs and then compile it appropriately.
    A compute-intensive application would not necessarily need a ton of RAM, but it may still be faster with access to the additional registers. Still, your trick would do for many apps.

    Originally posted by duby229 View Post

    I think it's a drm feature called switcheroo. I've personally avoided laptops with both integrated and discrete graphics due to the number of complaints I've read about it.

    Here is an Arch guide with all the info.
    https://wiki.archlinux.org/index.php/hybrid_graphics
    Ha, I'm struggling to get my work laptop to use the discrete card right now. For some reason lspci says the Nvidia card is using nouveau even though I manually removed nouveau from the system. So yeah, cue one more complaint about hybrid graphics.



  • duby229
    replied
    Originally posted by stiiixy View Post

    IGP is integrated, and DGP is discrete. I see this used on Arch and a couple of other sites. I found it a nice, simple way to remember the difference, since only the first letter changes.

    Anyway, can anyone explain to me, in lay terms, how I make use of (or force full-time, if I have to) the DGP on my laptop instead of the IGP? Apparently the kernel is supposed to be 'intelligent' about this, but that doesn't appear to be happening in my one whole day of testing. Also, I'm on Mesa with a Radeon 7670 on Arch (xorg is at 1.18, radeon at 7.7 or whatever comes after it, all standard Arch packages), so no AMDGPU for me (missed it by one generation!). Apparently using radeon.modeset=1 doesn't do anything for this issue.
    I think it's a drm feature called switcheroo. I've personally avoided laptops with both integrated and discrete graphics due to the number of complaints I've read about it.

    Here is an Arch guide with all the info:
    https://wiki.archlinux.org/index.php/hybrid_graphics



  • duby229
    replied
    Originally posted by CrystalGamma View Post

    You don't have to choose: there is x32, which is a calling convention running in long mode (64-bit) but with 4-byte pointers.
    I thought only the kernel ran in long mode, which then context-switched to compatibility mode for 32-bit code in userspace? If so, that 32-bit code doesn't have access to the additional registers or extensions. However, if you're right that long mode supports 2-, 4- or 8-byte pointers, then there is absolutely no reason to build a Linux system that is totally 64-bit, since such code would already have access to all the additional registers and extensions of long mode.

    EDIT: It should be pretty easy to write a tool to determine how much address space an app needs and then compile it appropriately.
    Last edited by duby229; 19 April 2016, 09:53 AM.
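
    For anyone curious, the pointer-size difference is easy to see with a tiny test program built for the three x86 ABIs. This is only an illustrative sketch; the -mx32 build works only if the toolchain and kernel have x32 support enabled (e.g. CONFIG_X86_X32). Both the -m64 and -mx32 builds run in long mode and therefore keep the extra general-purpose registers r8-r15, while -m32 code runs in compatibility mode and does not.

    Code:
    /* Pointer/word sizes under the three x86 ABIs.
     * Build (assuming an x32-capable toolchain and kernel):
     *   gcc -m64  abi.c -o abi64    # long mode, 8-byte pointers, 16 GPRs
     *   gcc -mx32 abi.c -o abix32   # long mode, 4-byte pointers, 16 GPRs
     *   gcc -m32  abi.c -o abi32    # compatibility mode, 4-byte pointers, 8 GPRs
     */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
    #if defined(__x86_64__) && defined(__ILP32__)
        puts("ABI: x32 (64-bit long mode, 32-bit pointers)");
    #elif defined(__x86_64__)
        puts("ABI: x86-64 (64-bit long mode, 64-bit pointers)");
    #elif defined(__i386__)
        puts("ABI: i386 (32-bit compatibility mode)");
    #endif
        return 0;
    }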



  • rabcor
    replied
    Double post.



  • rabcor
    replied
    I am impressed... Maybe I'll go AMD next. (Although the scores look less impressive when you compare them to Nvidia's performance; there is also a certain lack of on-par samples. The Fury generally performs about like a 980 and the Fury X like a 980 Ti, and I imagine the 285 would sit somewhere around a 960, although here it seemed to perform noticeably worse than even a 950.)

    But at least this tells me that AMD are indeed doing their best here, and that is enough for me. The only way to go now is up.

    Originally posted by Passso View Post

    Please read the Steam hardware survey statistics first. Then write.

    The vast majority of people do use IGPs. It's the vast majority of PC gamers that don't.

    Originally posted by atomsymbol

    In my opinion, if an application fits (when running) into a 32-bit address space, and does not benefit from 64-bit integer arithmetic, there is nothing wrong in compiling it for a 32-bit target. For example, small "streaming" Linux programs such as 'cat', 'tr', 'tail', or small programs such as 'cron' and 'sleep', fall into this category.

    There isn't, but Steam is a slow-as-fuck application, and it's easy to point the finger at shitty 32-bit code rather than at the fact that the code must obviously be shitty to begin with. Volvo have been lazy about this for years and years on end.
    Last edited by rabcor; 19 April 2016, 08:27 AM.



  • theriddick
    replied
    Talos has deeper frame-rate analysis data. It would be nice if a curve chart were possible showing how consistent the frame rate was. I know that some games are shockers: they will tell you they get high FPS, yet the stutter is terrible (War Thunder on the AMD drivers, for example; XCOM 2 is another).
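
    As a rough sketch of the kind of consistency summary that would help (nothing the Phoronix Test Suite actually ships; the input format here is just an assumed log with one frame time in milliseconds per line), something like this turns raw frame times into average and 99th-percentile numbers:

    Code:
    /* Sketch: summarize frame-time consistency from a raw log.
     * Input: one frame time in milliseconds per line on stdin (assumed format).
     * Output: average FPS plus the 99th-percentile frame time,
     * which is a reasonable proxy for stutter. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        double *ms = NULL, v, sum = 0.0;
        size_t n = 0, cap = 0;

        while (scanf("%lf", &v) == 1) {
            if (n == cap) {                       /* grow the buffer as needed */
                cap = cap ? cap * 2 : 1024;
                ms = realloc(ms, cap * sizeof *ms);
                if (!ms) return 1;
            }
            ms[n++] = v;
            sum += v;
        }
        if (n == 0) return 1;

        qsort(ms, n, sizeof *ms, cmp_double);     /* sort frame times ascending */

        double avg_ms = sum / (double)n;
        double p99_ms = ms[(size_t)(0.99 * (double)(n - 1))];
        printf("frames: %zu\n", n);
        printf("average: %.1f fps (%.2f ms/frame)\n", 1000.0 / avg_ms, avg_ms);
        printf("99th percentile frame time: %.2f ms (~%.1f fps)\n",
               p99_ms, 1000.0 / p99_ms);

        free(ms);
        return 0;
    }

    Plotting the sorted frame times (or the per-frame times over the run) as a curve would show exactly the kind of consistency picture described above.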



  • Passso
    replied
    Originally posted by liam View Post

    Most people, the vast majority, just use the IGP.
    Please read the Steam hardware survey statistics first. Then write.

