Ubuntu Is Going After A New Linux Kernel API

  • #31
    Originally posted by jrch2k8 View Post
    I do know this; my point is that you need a really large number of concurrent applications, or very bad coding, to starve the memory.
    Not necessarily... if the memory is there, it's arguably bad coding to *not* use it. No point in having all that memory if nothing is using it, so if you've got data that can usefully be kept in memory instead, you might as well.

    That's the idea of this proposed API, right? An app might not need 16GB of memory, but if it can find a use for it, it's better than leaving it unused. Then if something else wants memory, the kernel can force the first process to give up the space it doesn't actually need - the current alternative being for the kernel to get an out-of-memory error and just kill one of the processes to free some up.
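    For illustration, here is a minimal C sketch of that usage pattern. The MADV_MARK_VOLATILE / MADV_UNMARK_VOLATILE flags and the "purged" result are hypothetical placeholders for whatever interface the proposal ends up defining (they are not part of today's madvise()), so those calls are left as comments; the point is only the pattern: mark a regenerable cache as reclaimable while it sits idle, unmark it before touching it again, and rebuild it if the kernel took the pages back.

    /* Sketch only: the volatile-range calls in comments are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define CACHE_SIZE (64 * 1024 * 1024)   /* 64 MB of regenerable cache */

    static void *cache;

    static void regenerate_cache(void)
    {
        /* Stand-in for an expensive computation we'd rather not repeat. */
        memset(cache, 0xAB, CACHE_SIZE);
    }

    int main(void)
    {
        cache = mmap(NULL, CACHE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (cache == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        regenerate_cache();

        /* Idle: tell the kernel it may reclaim these pages under pressure.
         * madvise(cache, CACHE_SIZE, MADV_MARK_VOLATILE);            (hypothetical) */

        /* ... later, when the cache is needed again ... */

        /* Unmark the range and ask whether it was purged in the meantime.
         * purged = madvise(cache, CACHE_SIZE, MADV_UNMARK_VOLATILE); (hypothetical) */
        int purged = 0;
        if (purged)
            regenerate_cache();   /* cheaper for the system than an OOM kill */

        munmap(cache, CACHE_SIZE);
        return 0;
    }

    The advantage over the OOM killer is that the process itself knows exactly which pages are expendable, so the kernel can reclaim those instead of guessing which victim to kill.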



    • #32
      Originally posted by Delgarde View Post
      Not necessarily... if the memory is there, it's arguably bad coding to *not* use it. No point in having all that memory if nothing is using it, so if you've got data that can usefully be kept in memory instead, you might as well.

      That's the idea of this proposed API, right? An app might not need 16GB of memory, but if it can find a use for it, it's better than leaving it unused. Then if something else wants memory, the kernel can force the first process to give up the space it doesn't actually need - the current alternative being for the kernel to get an out-of-memory error and just kill one of the processes to free some up.
      Well, in Linux all extra memory is used for caching and other speedups, so it's not like Linux lets the RAM go to waste.

      But in your theoretical case you would actually prefer to force the kernel cache off and not use the RAM, because you gain more by saving that power: on ARM, everything that's active hurts the battery.



      • #33
        Originally posted by Delgarde View Post
        Not necessarily... if the memory is there, it's arguably bad coding to *not* use it. No point in having all that memory if nothing is using it, so if you've got data that can usefully be kept in memory instead, you might as well.

        That's the idea of this proposed API, right? An app might not need 16GB of memory, but if it can find a use for it, it's better than leaving it unused. Then if something else wants memory, the kernel can force the first process to give up the space it doesn't actually need - the current alternative being for the kernel to get an out-of-memory error and just kill one of the processes to free some up.
        I just want to point out 1 thing though, if you have 512MB of RAM being used for cache that means you need to move that much from storage at some point at least once. If you do this too aggressively then storage bandwidth becomes the bottleneck. Another problem is the amount of time needed to move data out of cache for one application to make room for data that a second application needs to load. It's storage bandwidth that suffers the most on too aggressive caching.
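        To put rough numbers on that (the bandwidth figures below are ballpark assumptions, just for scale), refilling 512 MB of evicted cache means roughly:

            512 MB / ~500 MB/s (decent SATA SSD)      ≈ 1 second of storage traffic
            512 MB / ~100 MB/s (eMMC or laptop HDD)   ≈ 5 seconds of storage traffic

        so evicting and refilling the cache too aggressively really does turn into whole seconds spent on I/O instead of useful work.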



        • #34
          Originally posted by duby229 View Post
          I just want to point out 1 thing though, if you have 512MB of RAM being used for cache that means you need to move that much from storage at some point at least once. If you do this too aggressively then storage bandwidth becomes the bottleneck. Another problem is the amount of time needed to move data out of cache for one application to make room for data that a second application needs to load. It's storage bandwidth that suffers the most on too aggressive caching.
          Everyone on mobile is already flash-based, and everything on desktop is moving TO flash. SATA 3 is what, 8GB/s? I get that not everything is on flash and SATA 3 NOW, but moving forward... ya think storage bandwidth will really be an issue?
          All opinions are my own, not those of my employer, if you know who they are.



          • #35
            Originally posted by Ericg View Post
            Everyone on mobile is already flash-based, and everything on desktop is moving TO flash. SATA 3 is what, 8GB/s? I get that not everything is on flash and SATA 3 NOW, but moving forward... ya think storage bandwidth will really be an issue?
            SATA 3 is 6 Gbit/s (roughly 600 MB/s), far from 8 GB/s, which is about the transfer rate of DDR3-1066 RAM.
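            For reference, the conversion (SATA 3's 6 Gbit/s line rate uses 8b/10b encoding, so usable throughput is less than a naive divide-by-eight):

                6 Gbit/s x 8/10 (8b/10b overhead) = 4.8 Gbit/s ≈ 600 MB/s
                DDR3-1066: 1066 MT/s x 8 bytes    ≈ 8.5 GB/s

            so RAM is still more than an order of magnitude faster than the SATA 3 link.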



            • #36
              Originally posted by Delgarde View Post
              Because it might not be on disk... it might be info that can be re-calculated if needed, but which can be kept in memory for improved performance. E.g. a browser might be keeping some rendering/layout structures for tabs that aren't currently visible, to save time if the user switches to them. Easily regenerated if the cache has to be cleared, but keeping it saves a second or two when it's needed.
              Why not write the data to a file then? You could even use the new tmpfile facility (forgot the name) where the file is not visible outside of your process (and gets cleaned up as soon as the process exits). I don't think the kernel will actually write the file if it doesn't have to.

              But yeah ... there are cases where this could be useful on low memory devices...
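              The facility being half-remembered here is presumably the O_TMPFILE flag that open() gained in Linux 3.11 (that identification is my assumption; it isn't named in the post). A minimal sketch of how it behaves:

              /* Unnamed temp file via O_TMPFILE (Linux >= 3.11, filesystem support
               * required).  The file has no directory entry, so no other process
               * can open it, and it vanishes when the last descriptor is closed. */
              #define _GNU_SOURCE
              #include <fcntl.h>
              #include <stdio.h>
              #include <unistd.h>

              int main(void)
              {
                  int fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);
                  if (fd < 0) {
                      perror("open(O_TMPFILE)");
                      return 1;
                  }

                  const char data[] = "regenerable cache contents";
                  if (write(fd, data, sizeof data) < 0)
                      perror("write");

                  /* The pages sit in the page cache; on a tmpfs-backed /tmp they
                   * never hit storage at all, and on a disk-backed /tmp writeback
                   * only happens if the file lives long enough for the flusher to
                   * run or the kernel needs the memory for something else. */
                  close(fd);
                  return 0;
              }

              Whether this beats simply keeping the data in anonymous memory is exactly the objection raised in the next post: you still pay for serializing your structures into a flat file.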



              • #37
                Originally posted by duby229 View Post
                I just want to point out 1 thing though, if you have 512MB of RAM being used for cache that means you need to move that much from storage at some point at least once. If you do this too aggressively then storage bandwidth becomes the bottleneck. Another problem is the amount of time needed to move data out of cache for one application to make room for data that a second application needs to load. It's storage bandwidth that suffers the most on too aggressive caching.
                Sure, but that's up to the application. I'm assuming that the major use case would be to accumulate data in memory to avoid having to reload/recalculate it the second time it's used, rather than for pre-loading data that hasn't been required but might be.



                • #38
                  Originally posted by jrch2k8 View Post
                  I doubt you'll find a game that demanding running on ARM for a while; it's not like Battlefield 4 will run on an Ubuntu phone anytime soon.
                  You'd be surprised how awfully coded some Facebook games are. I don't know how they manage to achieve that, but they run far slower than Prototype did on a 2008 computer (the same year Prototype was released), and it's not my connection, BTW. And these are the dumb games, like CityVille and other such shitty games. They also use quite a bit of memory. I wish I could blame Flash, because if that were the source of the problem, HTML5 might solve it. It's not even Linux causing the problem, because my parents play those games on Windows 7.

                  Originally posted by sarmad View Post
                  I fail to understand why Linux doesn't let the user make the decision in low memory situations. How hard is it to just pause everything and display a dialog to the user letting him pick the application to kill/swap out/etc? This would be far more effective, and far easier to implement, than what Canonical is proposing.
                  That's about as user-friendly as asking them to ssh into a machine: most normal people don't even know what that means.

                  Originally posted by r1348 View Post
                  Nope, if they work with upstream, this would be a very useful and welcome feature.
                  123.

                  Sounds like a good idea to me, and if some desktop apps adhere to such an API I might get a performance boost on my old 1GiB RAM Unichrome box.

                  Originally posted by fscan View Post
                  Why not write the data to a file then? You could even use the new tmpfile facility (forgot the name) where the file is not visible outside of your process (and gets cleaned up as soon as the process exits). I don't think the kernel will actually write the file if it doesn't have to.

                  But yeah ... there are cases where this could be useful on low memory devices...
                  Sometimes regenerating data is faster than reading it from disk; the read call alone takes as much as 10,000 cycles IIRC, and if the data isn't a big block, it's mostly a waste.
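                  Rough numbers to back that up (the clock speed and latencies are assumptions, just for scale, on a ~3 GHz core):

                      syscall overhead:              ~1,000 to 10,000 cycles
                      flash random read, ~100 us:    ~300,000 cycles
                      spinning disk seek, ~10 ms:    ~30,000,000 cycles

                  so anything you can recompute in well under a few hundred thousand cycles is usually cheaper to regenerate than to re-read from storage.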



                  • #39
                    Originally posted by fscan View Post
                    Why not write the data to a file then? You could even use the new tmpfile facility (forgot the name) where the file is not visible outside of your process (and gets cleaned up as soon as the process exits). I don't think the kernel will actually write the file if it doesn't have to.
                    Why would you want to write it to a file, if it's just a bunch of data you're keeping in memory? It's not like you want it persisted... it's just state that you can re-create, but which is cheaper to leave in memory. And serializing a few GB of complex structures (some of which may be unpersistable) is a hell of a lot of complexity, compared to the ability to just flag that block of memory as reclaimable and being careful about how you access it.

                    I'm really not seeing why people seem so resistant to this proposal... it seems like a really good idea, not to mention already available in garbage-collected languages (the GC taking the role of the kernel).



                    • #40
                      Originally posted by Delgarde View Post
                      Why would you want to write it to a file, if it's just a bunch of data you're keeping in memory? It's not like you want it persisted... it's just state that you can re-create, but which is cheaper to leave in memory. And serializing a few GB of complex structures (some of which may be unpersistable) is a hell of a lot of complexity, compared to the ability to just flag that block of memory as reclaimable and being careful about how you access it.

                      I'm really not seeing why people seem so resistant to this proposal... it seems like a really good idea, not to mention already available in garbage-collected languages (the GC taking the role of the kernel).
                      Garbage collectors are an aberration of nature: they normally waste more memory than they save, but they allow all sorts of laziness in the name of "ease of use". Don't believe me? Write two programs, one in C/C++ and the other in .NET/Java/JS, that fill lots of RAM and then free it; the result will show you why something like this is not very welcome, since the kernel is not the place to solve laziness.
                      Last edited by jrch2k8; 29 August 2013, 10:18 PM.

