Radeon r600g and WebGL

Originally posted by curaga View Post:
Speaking of WebGL...
Well, at the very least, blacklisting RadeonSI for WebGL by default would be more than reasonable for now. You see, RadeonSI can be crashed by remote data. A well-known example is http://www.blend4web.com/en/demo/farm/ (the Farm demo at blend4web.com): it triggers an LLVM error, and the LLVM library is nasty enough to kill the whole browser when it hits that error. Why LLVM behaves this way is unclear, but whatever the reason, it enables what is called a remote denial-of-service attack: a server with particular content can kill the whole browser at will.
And by the way, it would be nice if AMD (and other driver developers) got the idea that it pays to be a bit more security-minded. As we can see, driver code can face calls from external, untrusted code, which raises the security and stability requirements considerably. A driver should be ready to face bogus data, rather unfriendly usage, and outright attempts to exploit it.
Originally posted by 0xBADCODE View Post: Well, at very least, blacklisting RadeonSI for webGL "by default" would be more than reasonable for now. […]
Blacklist every driver for now.
Originally posted by brosis View Post: https://bugzilla.kernel.org/show_bug.cgi?id=60533
Blacklist every driver for now.
Originally posted by 0xBADCODE View Post: I do not see how this is related to buggy drivers. Swap is inherently slow: the whole idea of emulating fast RAM with a slow HDD inevitably yields very slow "RAM", which has huge potential to make programs unresponsive. For that reason I just got enough RAM and do not use swap at all. If some program goes mad, the OOM killer deals with it quickly. My system is always fast and responsive, because you simply can't make it swap. And in any case, a system slowdown and an app crash are completely different kinds of bugs.
You open a web page and your system immediately stops responding, because the graphics driver allocates gigabytes of data in RAM and VRAM, which in turn makes swap usage explode (that is what actually takes the system down, or, on Chrome, gets the browser killed). The OOM killer does not kill the process, because the allocation is done via the graphics driver. You can experiment yourself: the exploit is in the thread. Just close all important data and be ready for a reboot. And it's just JS/HTML.
Last edited by brosis; 27 June 2014, 02:10 PM.
Originally posted by brosis View Post: Take another peek at the bug I posted. You open a WWW page and immediately your system stops responding, […]
If you do not have swap, memory exhaustion just triggers the OOM killer quickly, and it kills the offending processes (and, if you are unlucky, some others) in a matter of seconds. It is also possible to control which processes the OOM killer "prefers" and which it "never" touches.

Then there is the fact that programs can take considerable resources to process data, and malicious data or code can abuse this, which is also a known issue. The problem is multi-layered. Ideally, the browser itself would be the first line of defence, limiting the resources that processed content and untrusted code in its VM can consume; since it runs a VM, it is reasonable to expect the browser to do some resource policing. Unfortunately it is far from perfect. Browsers have taken some steps and will warn you if a script takes too long to execute, but that is far from bullet-proof.

The second line of defence is the OS, which can limit resources in a generic way. A program simply fails after hitting its limits, so this caps what your system can process, but it ensures that interactivity is retained and resources stay available to other programs even in the worst case. Not very user-friendly, perhaps, but it exists. It is unclear why browsers do not use these OS services if they cannot be bothered to manage resource usage sanely on their own; configured correctly, such limits can be quite bullet-proof.

And it is not just browsers that can use huge amounts of memory. Try loading a 1000000 x 1000000 pixel picture into a graphics editor or viewer and you will soon see it choke and hit swap like hell. It would also take forever and a half to load such a picture into RAM unless you have enough physical RAM to hold it (and it is unlikely you have the roughly 3 TiB needed).

P.S. Sorry, but the test page failed to load, so I did not get the full picture of what it does. I suspect it somehow fools the browser into drawing a huge image or something?
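The "second line of defence" above, OS-level resource limits, can be sketched with `setrlimit`. This is a minimal, hypothetical illustration (the 1 GiB cap and the allocation sizes are made up for the demo), not how any browser actually does it:

```python
# Minimal sketch of OS-enforced memory limits (Linux/Unix only).
# The 1 GiB cap and allocation sizes below are illustrative, not advice.
import resource

CAP = 1024 * 1024 * 1024  # 1 GiB address-space cap for this process

# RLIMIT_AS caps total virtual address space; allocations beyond it fail
# immediately instead of dragging the whole machine into swap.
resource.setrlimit(resource.RLIMIT_AS, (CAP, CAP))

def try_alloc(n_bytes):
    """Attempt one allocation and report whether the OS limit stopped it."""
    try:
        buf = bytearray(n_bytes)  # zero-filled, so memory is really requested
        del buf
        return "allocated %d bytes" % n_bytes
    except MemoryError:
        return "MemoryError: %d bytes exceeds the cap" % n_bytes

print(try_alloc(64 * 1024 * 1024))        # 64 MiB: well under the cap
print(try_alloc(8 * 1024 * 1024 * 1024))  # 8 GiB: fails fast, no swap storm

# The arithmetic from the post: a 1000000 x 1000000-pixel image at
# 4 bytes per pixel needs about 3.6 TiB of RAM.
tib = (1_000_000 ** 2 * 4) / 2 ** 40
print("image would need ~%.1f TiB" % tib)
```

A browser could apply the same idea per content process; the point is only that the OS, unlike the page, cannot be talked out of the limit.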
Originally posted by brosis View Post: because a graphic driver allocates gigabytes of data in RAM and VRAM […] You can experiment yourself, exploit is in the thread […]
However, I admit the bug I mentioned is not related to resource attacks at all. It is a purely technical bug in LLVM that makes it choke during code generation in some cases, and LLVM is then "smart" enough to terminate the whole process on the error. Needless to say, that is a rather strange way to handle errors, and as it turns out it can be provoked remotely. This obviously means that code quality, stability, and reliability are currently well below the point where it is safe to expose this to untrusted web content.
Sure, new features are good. But they should not serve as gateways to pwn a system or play really nasty pranks. So maybe you are even right that the whole browser feature should be seriously reworked or blocked until a good resolution is found.
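The LLVM behaviour described above, a library responding to an internal error by terminating the whole process, is easy to reproduce in miniature. The sketch below uses `os.abort()` as a hypothetical stand-in for a native library's fatal-error handler; the point is that no exception handler in the host application ever gets a chance to run:

```python
# Demonstrates why a library that abort()s on error kills its host process
# (as the post says LLVM can do to the browser). The "library" runs in a
# child process so the crash can be observed safely.
import subprocess
import sys

child_code = r"""
import os
try:
    os.abort()          # stand-in for a native fatal-error handler
except BaseException:
    print("caught")     # never reached: abort() bypasses exception handling
"""

proc = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
)
print("child exit code:", proc.returncode)  # non-zero: killed by SIGABRT
print("child output:", repr(proc.stdout))   # empty: the handler never ran
```

An error-returning API, by contrast, would let the caller blacklist the offending shader or fall back to software rendering instead of dying.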
Originally posted by 0xBADCODE View Post: If you do not have swap, excessive memory outage would just quickly trigger OOM killer and it would backstab offending processes (and some others if you're unlucky) in matter of seconds. […]
It's attached to the bug. It's pure HTML/JS, and the offender is PartyHard, which essentially allocates huge surfaces (of VRAM).
Usually, when I open a huge file in GIMP, I get a warning or can cancel the load (much like the well-known Firefox warning when a script takes too long). I know exactly the use case you mentioned: in the past I worked on a 16k x 16k photo on a notebook with 512 MB, and when I returned from the hardware store with an extra 4 GB of SODIMM it was still performing a simple transformation. That is why I set swappiness to 10 and use BFQ. In this case, however, one can't do anything: the kernel is too busy allocating RAM, and once it hits swap (within seconds) the system is simply uncontrollable.
My message was more generic: WebGL with r600g is problematic, but at least not as problematic as simple known exploits like this one. Userland still has too much power over hardware resources, and simple exploits there lead to system-wide problems.
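For reference, the tuning mentioned above can be applied like this. This is a hedged sketch only: the values are the poster's own preferences, `sda` is an assumed device name, and the BFQ scheduler must actually be available in your kernel:

```shell
# Bias the kernel away from swapping (the default is usually 60).
sysctl -w vm.swappiness=10

# Use the BFQ I/O scheduler for the disk (assumes /dev/sda and BFQ support).
echo bfq > /sys/block/sda/queue/scheduler

# Verify the settings took effect.
cat /proc/sys/vm/swappiness
cat /sys/block/sda/queue/scheduler
```

Both commands require root, and neither setting persists across reboots unless added to sysctl.conf or a udev rule.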