RadeonSI Quietly Landed A Shader Cache As A Last Feature For Mesa 11.2
Originally posted by geearf View Post
You could use a tmp drive and sync it back to the HDD on shutdown/restart. (Of course that demands enough RAM, even with a compressed FS...)
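A minimal sketch of that sync-back idea, using plain directories to stand in for the tmpfs mount (all paths here are made up; in real use the RAM side would be an actual tmpfs, e.g. `mount -t tmpfs -o size=2G tmpfs <dir>`):

```shell
# RAM_DIR stands in for the RAM-backed tmpfs holding the cache;
# DISK_DIR stands in for the persistent copy on the HDD.
RAM_DIR=$(mktemp -d)
DISK_DIR=$(mktemp -d)

# The application writes its cache into the fast RAM-backed directory.
echo "compiled-shader" > "$RAM_DIR/shader.bin"

# On shutdown: copy the cache contents from RAM back to disk.
cp -a "$RAM_DIR/." "$DISK_DIR/"

# Simulate the reboot: the tmpfs is gone, then remounted empty.
rm -rf "$RAM_DIR"
mkdir "$RAM_DIR"

# On the next boot: restore the cache into the fresh tmpfs.
cp -a "$DISK_DIR/." "$RAM_DIR/"
```

The round trip preserves the cache across the "reboot" while all reads and writes during the session hit RAM only.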
Originally posted by smitty3268 View Post
The OSS drivers will almost certainly keep around an environment variable letting you disable the cache whenever you want, for debugging purposes if nothing else.
There's one already for this radeonsi caching.
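For reference, the disk-backed shader cache that Mesa later grew is indeed controlled by environment variables; a sketch (the variable name changed across Mesa releases, so check the documentation for your version — exporting both covers either case):

```shell
# Disable Mesa's on-disk shader cache for this shell's children,
# e.g. for debugging or for benchmarking cold-compile performance.
# Older Mesa releases read MESA_GLSL_CACHE_DISABLE; newer ones
# read MESA_SHADER_CACHE_DISABLE.
export MESA_GLSL_CACHE_DISABLE=true
export MESA_SHADER_CACHE_DISABLE=true

# Then launch the application as usual from this shell, e.g.:
# glxgears
```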
Originally posted by atomsymbol
Do you mean latency on a rotational disk, not SSD? I have my home directory (and consequently the shader caches, such as ~/.AMD/GLCache) on an SSD.
Since the moment I bought an SSD, latency isn't an issue in any application I am running - the CPU speed is now the limiting factor.
Or can latency of the shader cache be an issue even on an SSD?
Hopefully 3D XPoint will remedy this somewhat.
Originally posted by duby229 View Post
That's pretty much the only way to get playable framerates on Wine with The Old Republic online. It uses a disk cache that needs to be on a tmp drive (which raises the minimum RAM requirement from 6 GB to 12 GB).
As the other poster said, it'd be great to be able to configure the path of the cache (once it gets pushed to a partition), maybe on a per-process basis.
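Mesa did eventually allow relocating the on-disk cache via an environment variable, which gives exactly this kind of per-process control; a sketch (the cache path is an illustrative example, and as above the variable name differs between older and newer Mesa releases):

```shell
# Point Mesa's on-disk shader cache at a custom location for just
# this process tree, e.g. a directory on a tmpfs or a fast SSD.
CACHE_DIR=/tmp/game-shader-cache    # example path
mkdir -p "$CACHE_DIR"

# Older Mesa reads MESA_GLSL_CACHE_DIR, newer reads MESA_SHADER_CACHE_DIR.
export MESA_GLSL_CACHE_DIR="$CACHE_DIR"
export MESA_SHADER_CACHE_DIR="$CACHE_DIR"

# Then launch the game from this shell, e.g.:
# wine game.exe
```

Because the variables only affect processes launched from this shell, each game can get its own cache location.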
Originally posted by duby229 View Post
Cool, that means benchmarking options. That is one of the best things the OSS devs offer imo. It means we can all see how the performance turns out.
Instead of an environment variable, a per-application setting and a per-graphics-driver setting would be better for dealing with such cache behaviour (plus a way to determine and set which GPUs are used in benchmarks). That would be easier for benchmarking applications to work with.
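Mesa does already ship a per-application settings mechanism, driconf (`~/.drirc`). A hedged sketch of what such an override looks like — the application name and executable are illustrative, and the option shown (`mesa_glthread`) is just a known existing option, not a cache switch:

```xml
<driconf>
  <device>
    <!-- Settings applied only when this executable runs. -->
    <application name="Some Game" executable="game.exe">
      <option name="mesa_glthread" value="true"/>
    </application>
  </device>
</driconf>
```

A per-application cache setting, if added, would presumably slot into the same file.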
Originally posted by geearf View Post
12 GB starts to get quite steep (I have 16 GB, but I feel newer games will probably ask for even more).