Ubuntu Is Going After A New Linux Kernel API


  • #61
    Development

    Would this make application development more difficult, if you have to write code to mark data as revocable and then regenerate it whenever the kernel reclaims it?

    Phones have 2 GB RAM, and the upcoming phones will have 3 GB RAM.
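
    To make the question concrete, here is a rough sketch of the application-side pattern such an API implies, independent of whatever the final kernel interface ends up looking like; every name below is hypothetical illustration, not the proposed API:

```c
/* Hypothetical sketch of "revocable" application data: each buffer the system
 * may take back carries a callback that can rebuild its contents.
 * None of these names come from the proposed kernel API. */
#include <stdlib.h>
#include <string.h>

struct revocable {
    void  *data;                                   /* NULL once revoked            */
    size_t size;
    void (*regenerate)(void *data, size_t size);   /* rebuilds the contents        */
};

/* What the kernel/runtime would conceptually do under memory pressure. */
static void revoke(struct revocable *r)
{
    free(r->data);
    r->data = NULL;
}

/* What the application calls every time it needs the buffer. */
static void *acquire(struct revocable *r)
{
    if (!r->data) {                                /* revoked: pay regeneration    */
        r->data = malloc(r->size);
        if (!r->data)
            return NULL;
        r->regenerate(r->data, r->size);
    }
    return r->data;
}

/* Example of "expensive but rebuildable" content: a decoded thumbnail, say. */
static void rebuild_thumbnail(void *data, size_t size)
{
    memset(data, 0x7f, size);
}

int main(void)
{
    struct revocable thumb = { NULL, 1 << 20, rebuild_thumbnail };

    acquire(&thumb);              /* first use: generated                          */
    revoke(&thumb);               /* low-memory event: contents thrown away        */
    acquire(&thumb);              /* next use: regenerated transparently           */

    free(thumb.data);
    return 0;
}
```

    So yes, it adds some work: every revocable buffer needs a regeneration path that is cheap enough to be worth it, and anything that cannot be rebuilt simply stays non-revocable.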



    • #62
      Originally posted by rvdboom View Post
      Make it 100 times faster (i.e. 600MB/s). 6MB/s would make it half as fast as Gigabit Ethernet. :-)
      I guess you meant 6Gbits/s.
      Yeah, that somehow slipped.



      • #63
        Originally posted by sarmad View Post
        I fail to understand why Linux doesn't let the user make the decision in low memory situations. How hard is it to just pause everything and display a dialog to the user letting him pick the application to kill/swap out/etc? This would be far more effective, and far easier to implement, than what Canonical is proposing.
        This is exactly the reason why Linux fails so badly when it comes to the end user. What iOS has understood is that the user doesn't know what he wants, so don't give him that many choices; it only confuses him. And yes, I know, I am praising Apple on Phoronix, kill me.

        Ubuntu is lacking a proper HIG, but the guys at elementary seem to have one: http://elementaryos.org/docs/human-i...-configuration

        Originally posted by Delgarde View Post
        Currently, the only options are to prevent processes from using that memory at all, or to kill them when memory runs out. This is a middle ground, letting them use memory if it's available, but avoiding the need for a hard kill when it's not.
        This is exactly what I'm doing in the games I've designed: stretching the memory limits. (See the sketch at the end of this post.)

        Originally posted by jayrulez View Post
        Just pointing it out: QML is also a garbage collected language.
        The whole point of Qt/QML is that C++ does the heavy lifting while QML handles the flexible UI manipulation; the UI layer is where you need that flexibility, and it is also the place where having it garbage collected is acceptable.
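
        Regarding both points above (letting the desktop react in low-memory situations, and using extra memory only while it is available), the kernel already exposes a userspace hook in this direction: the cgroup-v1 memory controller's pressure notifications, registered through memory.pressure_level and cgroup.event_control. A minimal sketch follows; the mount path /sys/fs/cgroup/memory and the "medium" level are assumptions about the local setup, and this is not Canonical's proposed API itself:

```c
/* Listening for kernel memory-pressure notifications via the cgroup-v1 memory
 * controller (memory.pressure_level + cgroup.event_control).  The mount path
 * and the "medium" level are assumptions about the local setup. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    const char *cg = "/sys/fs/cgroup/memory";        /* assumed v1 memcg mount */
    char path[256], req[64];

    int efd = eventfd(0, 0);
    snprintf(path, sizeof path, "%s/memory.pressure_level", cg);
    int pfd = open(path, O_RDONLY);
    snprintf(path, sizeof path, "%s/cgroup.event_control", cg);
    int cfd = open(path, O_WRONLY);
    if (efd < 0 || pfd < 0 || cfd < 0) { perror("setup"); return 1; }

    /* Ask the kernel to signal the eventfd when this cgroup reaches "medium"
     * memory pressure. */
    snprintf(req, sizeof req, "%d %d medium", efd, pfd);
    if (write(cfd, req, strlen(req)) < 0) { perror("register"); return 1; }

    for (;;) {
        uint64_t count;
        if (read(efd, &count, sizeof count) != sizeof count)  /* blocks here */
            break;
        /* React: notify the user, or ask applications to drop optional caches. */
        printf("memory pressure event (%llu)\n", (unsigned long long)count);
    }
    return 0;
}
```

        A session daemon listening like this could show the dialog sarmad asks for, or ask applications to drop optional caches before the OOM killer has to step in.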



        • #64
          Finally the time has come for Canonical to have their own kernel fork. Have fun, trolls! :P



          • #65
            Originally posted by jrch2k8 View Post
            the cause will probably be crappy apps, for example:

            case 1: map images or other datasets bigger than the ram itself
            So if the user wants to play Battlefield 4 with graphics maxed out on a computer with 512 MB of RAM, is it the fault of the software and not of the user when memory fills up? There is something that often comes with software; it is called "minimum hardware requirements".
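
            That said, the quoted case (mapping a dataset bigger than RAM) doesn't have to exhaust memory by itself: with a read-only mmap() the kernel faults pages in on demand and evicts cold ones under pressure, so only the working set needs to fit. A minimal sketch, with the file name just an example:

```c
/* Reading a file that may be larger than RAM via mmap(): the kernel pages the
 * data in on demand and drops cold pages under memory pressure, so the whole
 * dataset never has to be resident at once.  File name is just an example. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "huge_dataset.bin";
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *addr = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }
    madvise(addr, st.st_size, MADV_SEQUENTIAL);      /* hint: streaming access */

    /* Stream over the data; residency is entirely the kernel's problem. */
    const unsigned char *data = addr;
    unsigned long long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += data[i];
    printf("checksum: %llu over %lld bytes\n", sum, (long long)st.st_size);

    munmap(addr, st.st_size);
    close(fd);
    return 0;
}
```

            Whether such an app is "crappy" then depends on its access pattern, not on the raw size of the data.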



            • #66
              Originally posted by GreatEmerald View Post
              Properly optimised garbage collectors can be better at memory management than typical manual memory management, much like compilers are typically better at their jobs than trying to do manual optimisations. Sure, you can code everything in Assembly and hand-optimise everything to get something more efficient than a compiler could do, but you'd be spending the time in which you could have coded several more applications for that small gain. So GC isn't about being lazy, it's about coding efficiency.
              I didn't mean that every use of GC comes out of laziness, but in most cases it does. To use GC properly you probably still need to know how memory works, yet most GC users pick such languages precisely because they don't want to take the time to understand memory management. That said, I think we are both talking more about "feelings" on the subject than actual facts. Do you have any proof that GC can be better at memory management? I don't have any proof of the opposite either, but I'm not trying to convince anyone; I simply don't use GC languages, out of personal preference, because I feel more comfortable having an idea of what is happening with the memory.



              • #67
                Originally posted by mrugiero View Post
                I didn't mean that every use of GC comes out of laziness, but in most cases it does. To use GC properly you probably still need to know how memory works, yet most GC users pick such languages precisely because they don't want to take the time to understand memory management. That said, I think we are both talking more about "feelings" on the subject than actual facts. Do you have any proof that GC can be better at memory management? I don't have any proof of the opposite either, but I'm not trying to convince anyone; I simply don't use GC languages, out of personal preference, because I feel more comfortable having an idea of what is happening with the memory.
                It depends on what mrugiero thinks "better" means. Everything comes at a cost. GC comes at the cost of extra memory usage, CPU cycles spent tracking objects, and more CPU cycles when it finally frees the memory. mrugiero, I recommend you read this: http://sealedabstract.com/rants/why-...apps-are-slow/
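
                For contrast, this is roughly what the manual-management side of that trade-off looks like: with explicit ownership, transient memory goes back to the allocator the moment it is no longer needed, so the peak footprint stays close to the live set, while a tracing GC defers that work and needs extra heap headroom to stay fast. A small illustrative sketch (the scenario is made up):

```c
/* Transient scratch memory is freed the moment the frame is done: no tracking
 * structures, no deferred collection, and the heap never has to hold both the
 * scratch buffer and whatever a collector has not yet reclaimed. */
#include <stdlib.h>
#include <string.h>

static void render_frame(size_t scratch_size)
{
    unsigned char *scratch = malloc(scratch_size);   /* per-frame working memory */
    if (!scratch)
        return;

    memset(scratch, 0, scratch_size);                /* ... do the per-frame work ... */

    free(scratch);                                   /* reclaimed immediately */
}

int main(void)
{
    for (int frame = 0; frame < 60; frame++)
        render_frame(1 << 20);                       /* 1 MiB of scratch per frame */
    return 0;
}
```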



                • #68
                  Originally posted by TheOne View Post
                  Finally the time has come for Canonical to have their own kernel fork. Have fun, trolls! :P
                  Adding one new API is not creating a fork.



                  • #69
                    Originally posted by TheBlackCat View Post
                    Adding one new API is not creating a fork.
                    Especially when the idea starts with "check with upstream".



                    • #70
                      Originally posted by matzipan View Post
                      It depends on what mrugiero thinks "better" means. Everything comes at a cost. GC comes at the cost of extra memory usage, CPU cycles spent tracking objects, and more CPU cycles when it finally frees the memory. mrugiero, I recommend you read this: http://sealedabstract.com/rants/why-...apps-are-slow/
                      Very good article, by the way.

                      I would add a famous phrase we use in my country, "the lazy work twice": the time you save by not learning how memory works will come back to bite you later, twice, because you will end up having to learn at least the basics of memory management and how the GC of your chosen language works anyway, and to comb specialized forums only to find out you have to touch a lot of your code again to reach acceptable performance. It would have been faster to just learn memory management and use a native language from the beginning.

                      Of course, GC'd languages have their proper uses; the point is to use the right tool for the job.
