Wine-Staging 1.7.36 Has Threadpool, CUDA 7, NVENC

  • Kemosabe
    replied
    My results with wine/wine-csmt on an AMD A10-7850K, tested with Catalyst and radeonsi:
    • Mafia II: Playable at low-medium settings with some minor glitches, but after about 30 minutes a VRAM leak causes a crash (2 GB VRAM).
    • Dead Space: Completely unplayable even on the lowest settings. The menu is almost unusable due to lag. Some glitches.
    • Dark Souls: Only playable with CSMT; not always fluid, but okay.
    • Dark Souls II: Only works with the latest CSMT. Stock Wine is unable to draw the menu/characters, and performance is very bad even on low.


    Now with gallium-nine/radeonsi:
    • Mafia II: Good performance maxed out. No memory leak, no glitches.
    • Dead Space: Good performance maxed out. No glitches.
    • Dark Souls: Perfectly fluid even with motion blur.
    • Dark Souls II: Maxed out, no issues, fluid.


    Sorry, but Wine's OpenGL translation layer is as broken as eON, and as broken as early versions of Valve's Source ports. The last one only does a partial translation.
    Conclusion: Perhaps performance is fine for nVidia users, but they still see glitches and other severe issues. Make Wine work well for Mesa users at least; the benefits are far too obvious to ignore.

  • Ansla
    replied
    It's irrelevant whether it gets implemented using Gallium or not; from Wine's point of view it will still be just another .so to link against/dlopen.
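    For what it's worth, the loading side really is that mundane. A minimal, purely illustrative sketch; the library name and entry point below are made-up placeholders, not the real d3dadapter9 interface:
    Code:
    /* Hypothetical example: loading a D3D9 provider shipped as a plain .so.
     * "libsome_d3d9_provider.so" and "create_d3d9_adapter" are invented names
     * used only to show the dlopen/dlsym pattern. Build with -ldl. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libsome_d3d9_provider.so", RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Resolve whatever entry point the provider exports. */
        void *entry = dlsym(handle, "create_d3d9_adapter");
        if (!entry)
            fprintf(stderr, "dlsym failed: %s\n", dlerror());

        dlclose(handle);
        return 0;
    }
    Whether that .so happens to be built on Gallium internally is the provider's business, not Wine's.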

  • duby229
    replied
    Originally posted by Ansla View Post
    That is not really a fair comparison. First of all, Gallium Nine is not only Linux-specific, it's also AMD-specific, while the API it implements (D3D9) is GPU-agnostic on Windows.

    On the other hand, CUDA is not Linux-specific; it's implemented by the nVidia driver on all platforms, and it is just as nVidia-specific on Windows.

    Now, if AMD released the specs for Mantle and also provided an implementation at least for Linux, if not also OS X, it would be in exactly the same position as CUDA and should be treated the same way by the Wine developers.
    Yeah, I doubt that very much. If a Mantle state tracker does get implemented, it will have to be Gallium. Once AMDGPU is done and the hardware is released, AMD will be relying on Gallium even more heavily.

    Wine devs either need to pull their heads out of the sand, or push them in even further.

  • Ansla
    replied
    Originally posted by jrch2k8 View Post
    I'm starting to dislike their policy of rejecting Gallium integration because it is "Linux specific" when they are basically integrating every nVidia-only, Linux-specific feature that is compatible with nothing else in the world into Wine.
    That is not really a fair comparison. First of all, Gallium Nine is not only Linux-specific, it's also AMD-specific, while the API it implements (D3D9) is GPU-agnostic on Windows.

    On the other hand, CUDA is not Linux-specific; it's implemented by the nVidia driver on all platforms, and it is just as nVidia-specific on Windows.

    Now, if AMD released the specs for Mantle and also provided an implementation at least for Linux, if not also OS X, it would be in exactly the same position as CUDA and should be treated the same way by the Wine developers.

  • Ouroboros
    replied
    Just thought I'd let others know here in case anyone actually uses my wine-staging copr repo:

    I forgot to fix a copy/paste mistake in the SPEC file's version string before releasing Wine 1.7.36. I removed the build with the broken version string, but users who upgraded to 1.7.36*-1 will still have to manually 'downgrade' (really upgrade) to a newer build in order to detect future updates. I apologize for any inconvenience.

    Code:
    # yum downgrade wine-*
    The above command should properly downgrade you to the correct build and re-enable automatic upgrades, though I didn't personally test it. Last time I attempted dnf downgrade with wildcards I had issues, so please use yum.
    If the above command fails, you can just uninstall and reinstall:
    Code:
    # yum remove wine-*
    # yum install wine
    Also, on the very slim chance that there's anyone waiting on libclc or LLVM 3.6 in my mesa copr that I mentioned in the last wine thread:
    libclc git requires LLVM >=3.6

    LLVM 3.6 requires bleeding-edge OCaml packages, rebuilds, and one package that isn't even packaged by Fedora.
    Although I built most of those packages, I decided that maintaining them would be too much of a burden, and pulling the required packages into my mesa copr would be a mess.
    I'm going to attempt building LLVM without the OCaml bindings (which I've already done in my OBS testing repo), create a test copr with LLVM 3.6 and mesa, and then add LLVM 3.6 to my main mesa copr if there aren't any issues. Don't expect anything LLVM-related anytime soon, though. In case you were wondering, it appears nothing relies on llvm-ocaml at the moment, according to Arch's repository: https://www.archlinux.org/packages/e...64/llvm-ocaml/

    Apologies for the wall of text/hijack.

  • GreatEmerald
    replied
    Originally posted by slackner View Post
    I am from the Wine Staging team and I would like to comment on that. First of all, "Wine Staging" is not identical to "Wine". We have different maintainers, and we also use different rules to decide what can go in and what cannot. Upstream Wine rejected the inclusion of Gallium 3D (and also didn't seem to have any interest in including CUDA support in the past, for example), but Wine Staging will happily accept a cleaned-up patchset to add this feature. As long as it is useful for at least a couple of people and is actively maintained, we have no problem adding it to our version, even if it will probably never go upstream.

    The main reason Gallium Nine has not been added yet is that the corresponding Wine code is still very "hacky". I know this argument is difficult to understand from a user perspective, especially when it already works pretty well. But for our Wine Staging tree it is also very important that the code quality is good enough that, in theory, other unrelated people can understand and improve it. It doesn't make sense to add it when, for example, the interface between Wine and Mesa could basically change at any time. For interested people, the bug report on wine-staging.com contains more information about the remaining TODOs: https://bugs.wine-staging.com/show_bug.cgi?id=40#c3

    Regards,
    Sebastian
    OK, now that's newsworthy. Good to know, and good to see that progress is being made.

  • duby229
    replied
    There exists a native D3D9 implementation. If Wine devs don't want to call Wine an emulator, then they need to implement native support. There is no doubt at all that Nine support is -FAR- better than Wine's D3D9-to-OpenGL translation.

  • duby229
    replied
    Originally posted by Commander View Post
    I like how people complain that Wine is nVidia-specific while the Wine devs try to implement CSMT, which works for all drivers, instead of relying on __GL_THREADED_OPTIMIZATIONS from the nVidia blob driver to do pretty much the same thing, as far as I understand it.
    The flaw in CSMT is that it -is- the cause of huge CPU overhead. Radeon users can just use Nine and not worry about any of it. The only ones benefiting from CSMT are nVidia users anyway. It would be better for them to drop CSMT in favor of doing things the right way.

    Look, the truth is that CSMT burns a ton of CPU time that is totally unnecessary (a rough sketch of the command-stream pattern is at the end of this post). If Wine devs want to say that Wine is not an emulator, then they really -need- to stop doing stupid shit like that.

    EDIT: They would have better luck lobbying nVidia to support nouveau than they are having writing gigantically complex hacks. If a hack isn't simple, it isn't worth it.
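    For anyone who hasn't followed the CSMT work: the basic idea is that the game's thread only records commands into a queue, and a separate thread replays them against the driver. A minimal, purely illustrative sketch of that pattern follows; this is not Wine's code, the queue, opcodes and names are invented, and the ring has no overflow handling:
    Code:
    /* Toy command-stream pattern: a producer records packets, a worker thread
     * replays them. Illustrative only; the real CSMT is far more involved.
     * Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define QUEUE_SIZE 64

    struct command { int opcode; int arg; };    /* stand-in for a real packet */

    static struct command queue[QUEUE_SIZE];
    static int head, tail;                      /* consumer / producer cursors */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    /* Application thread: record a command instead of calling the driver. */
    static void submit(int opcode, int arg)
    {
        pthread_mutex_lock(&lock);
        queue[tail % QUEUE_SIZE] = (struct command){ opcode, arg };
        tail++;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* Command-stream thread: drain the queue and issue the real calls. */
    static void *cs_thread(void *unused)
    {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail)
                pthread_cond_wait(&nonempty, &lock);
            struct command cmd = queue[head % QUEUE_SIZE];
            head++;
            pthread_mutex_unlock(&lock);

            /* A real implementation would call into OpenGL here. */
            printf("executing opcode %d (arg %d)\n", cmd.opcode, cmd.arg);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, cs_thread, NULL);
        for (int i = 0; i < 8; i++)
            submit(i, i * 2);
        pthread_join(worker, NULL);   /* worker never exits in this toy */
        return 0;
    }
    All of that locking, signalling and copying on every call is exactly the per-call bookkeeping being complained about above.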
    Last edited by duby229; 10 February 2015, 09:42 AM.

  • magika
    replied
    Originally posted by nanonyme View Post
    I thought the conclusion was that CSMT had fundamental problems and will not be merged, and that a replacement will instead be designed by the Wine devs. That sounds like a far-future fantasy to me rather than something that will help people today.

    On another matter, does Wine make extensive use of DSA in its DX-to-GL translation?
    The fundamental problem is that CSMT was written against D3D9 (as D3D currently is in Wine); they decided to refactor Wine and CSMT against DX10/11 instead of merging the D3D9 version.

  • nanonyme
    replied
    Originally posted by Commander View Post
    I like how people complain that Wine is nVidia-specific while the Wine devs try to implement CSMT, which works for all drivers, instead of relying on __GL_THREADED_OPTIMIZATIONS from the nVidia blob driver to do pretty much the same thing, as far as I understand it.
    I thought the conclusion was that CSMT had fundamental problems and will not be merged, and that a replacement will instead be designed by the Wine devs. That sounds like a far-future fantasy to me rather than something that will help people today.

    On another matter, does Wine make extensive use of DSA in its DX-to-GL translation?
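    By DSA I mean direct state access: editing GL objects by name instead of having to bind them first, which is exactly the bind/save/restore churn a D3D-to-GL layer otherwise does constantly. A purely illustrative fragment of the difference, not Wine code, assuming a current GL context that exposes ARB_direct_state_access; the texture and size parameters are placeholders:
    Code:
    /* Illustrative only: the same texture update done bind-to-edit vs. DSA. */
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <GL/glx.h>

    /* Classic path: must disturb and then restore the caller's texture binding. */
    static void upload_classic(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
    {
        GLint prev;
        glGetIntegerv(GL_TEXTURE_BINDING_2D, &prev);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glBindTexture(GL_TEXTURE_2D, (GLuint)prev);
    }

    /* DSA path (GL 4.5 / ARB_direct_state_access): the object is named directly. */
    static void upload_dsa(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
    {
        PFNGLTEXTURESUBIMAGE2DPROC pglTextureSubImage2D =
            (PFNGLTEXTURESUBIMAGE2DPROC)glXGetProcAddress(
                (const GLubyte *)"glTextureSubImage2D");
        if (pglTextureSubImage2D)
            pglTextureSubImage2D(tex, 0, 0, 0, w, h,
                                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
    The classic path has to save and restore whatever the application had bound; the DSA path just names the object, which is why it matters for a translation layer.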
