Writing Ubuntu Phone Apps Seems Fairly Easy

  • #41
    Originally posted by caligula View Post
    Please educate yourself about dynamic dispatch, vtables and profile guided optimization before making such claims. JIT can dynamically recompile dynamic dispatch into direct static dispatch, then inline further. This isn't possible without JIT.
    Originally posted by pal666 View Post
    no, it is not. this bullshit can be perpetuated only by people who had never run any real optimizing compiler and don't know that heavy optimization is very resource hungry. the fact is, jit makes for slow memory hogs. even google understood it and switched to aot

    The truth is somewhere between these two statements. There are things JIT compilers can do that AOT compilers can't, but JITs lack the time to perform aggressive optimization because they have to fight against startup time. AOT compilers, on the other hand, have the time for aggressive optimization, but they are limited in how far they can go: they have to produce generic binaries, and they can't react to the actual runtime profile. The fastest binaries would actually come from a pipeline like this:

    Dev compiles to a device-independent IL -> the install step performs aggressive, device-specific optimization while still leaving enough information in place for the JIT to act on -> the JIT performs PGO and other runtime optimizations while the application runs.
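
    To make the dynamic-dispatch point concrete, here's a minimal C++ sketch (my own illustration, not from either poster; all names are hypothetical) of the kind of call an AOT compiler shipping a generic binary has to leave as an indirect vtable call, but which a JIT can turn into a direct, inlinable call once it observes that only one receiver type ever shows up:

    Code:
    #include <cstdio>

    struct Codec {
        virtual int decode(int x) const = 0;
        virtual ~Codec() = default;
    };

    struct FastCodec : Codec {
        int decode(int x) const override { return x * 2; }
    };

    // At compile time 'c' could be any Codec, so an AOT compiler keeps
    // the indirect call through the vtable (unless whole-program analysis
    // or offline PGO lets it speculate).
    int run(const Codec& c, int x) {
        return c.decode(x);  // dynamic dispatch
    }

    int main() {
        FastCodec fc;
        // A JIT that only ever observes FastCodec here can recompile run()
        // with a direct call, inline decode(), and fold the multiply.
        std::printf("%d\n", run(fc, 21));
        return 0;
    }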

    Additionally, the more toolable a language is, the higher its optimization potential, because the compiler can know more about the code itself and, as a result, make better decisions. This is where things like Clang for C++ and Roslyn for C# become especially interesting.
    Last edited by Luke_Wolf; 24 February 2015, 06:02 PM.

    • #42
      Originally posted by caligula View Post
      Take a look at Julia language: http://julialang.org/
      For benchmarks also LuaJIT or ASM.js. LuaJIT has small community and poor developers and it still beats many static compilers.
      Julia is surely quite fast for what it does, yet the compilers of the AOT-compiled languages in that comparison, namely gfortran and Go, are not known for producing fast code. We have a lot of Fortran users at our physics institute, and *nobody* uses the GNU Fortran compiler, because it produces really slow code. Everybody uses the non-commercial version of Intel's Fortran compiler.

      • #43
        Originally posted by JS987 View Post
        There will be never JIT with near zero memory and CPU usage. RAM is twice as expensive as 2 years ago.
        I don't agree with the claim that RAM is getting more expensive: you can get an 8 GB stick of DDR3 for $80 CAD, which is awesome. Adjusting for inflation, I'd actually say the faster DDR3 is cheaper than DDR2 was at about the same age.

        Originally posted by JS987 View Post
        CPU caches are less efficient with higher memory usage.
        This is why we have much larger caches; L2/L3 caches are now bigger than a floppy disk. That said, if my memory serves me right, the more data you need to process and keep track of, the more latency it introduces, period. It's not much, though, and given the amount of data we need to push through, it's faster to accept a little extra latency for higher overall throughput than to have lower latency but lower throughput.

        TL;DR: We really don't notice the latency differences between DDR2, DDR3, and DDR4. We do notice the throughput (or whatever you want to call the really big number that I call speed).

        Now, the more data we have flowing through, the more complicated it gets to keep everything responsive. It's easier to coordinate 3 people to work efficiently than 30, and easier for 30 than for 300, and so on. That said, I'm pretty sure most of this is handled automatically now, either by an on-board memory controller or by a compiler working its magic, or both.
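
        If you want to see the latency-versus-throughput point for yourself, here's a rough C++ sketch (my own, not from the thread; the array size is an assumption meant to spill well past L3 on a typical desktop). Streaming through the array with independent loads is throughput-bound, while chasing a chain of dependent pointers over the same data is latency-bound and comes out far slower:

        Code:
        #include <algorithm>
        #include <chrono>
        #include <cstdio>
        #include <numeric>
        #include <random>
        #include <vector>

        int main() {
            const size_t n = 1 << 24;  // 16M entries * 8 bytes = 128 MiB, well past L3
            std::vector<size_t> next(n);

            // Lay the indices out as one big random cycle so that every load
            // in the chase below depends on the result of the previous one.
            std::vector<size_t> order(n);
            std::iota(order.begin(), order.end(), 0);
            std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
            for (size_t i = 0; i < n; ++i)
                next[order[i]] = order[(i + 1) % n];

            // Independent loads: the prefetcher can stream them, throughput-bound.
            auto t0 = std::chrono::steady_clock::now();
            size_t sum = 0;
            for (size_t v : next) sum += v;
            auto t1 = std::chrono::steady_clock::now();

            // Dependent loads: each one waits for the last, latency-bound.
            size_t p = 0;
            for (size_t i = 0; i < n; ++i) p = next[p];
            auto t2 = std::chrono::steady_clock::now();

            using ms = std::chrono::duration<double, std::milli>;
            std::printf("stream: %8.1f ms (sum=%zu)\n", ms(t1 - t0).count(), sum);
            std::printf("chase:  %8.1f ms (end=%zu)\n", ms(t2 - t1).count(), p);
            return 0;
        }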

        I wonder what Intel does to get their memory latency to less than half of AMD's.

        In the end, there will always be a place for fast proof-of-concept languages as well as for screaming-fast, low-memory languages. A good example is the Daala encoder/decoder written in JavaScript and run from the browser: a great proof of concept, although not terribly fast. Taking the time to rewrite it in properly optimized C (or whatever they're using) will show a big performance increase. AFAIK they've only just started to really optimize their code, since they're still developing the codec.

        • #44
          Originally posted by profoundWHALE View Post
          [...]
          You gotta give Intel credit: they are masters at designing cache cells.

          RAM speed is a compromise between latency and bandwidth.

          Think about bandwidth from the perspective of time slices, or hertz: 1 hertz equals 1 cycle per second, so higher frequency means smaller time slices. Timing and latency settings can be relaxed to reach higher frequencies, and therefore more bandwidth.

          Latency is kinda like how long it takes to "get to" the bandwidth, while bandwidth is like how many data lanes the bus has and what frequency it operates at.
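
          As a back-of-the-envelope illustration of that compromise (the module numbers are my assumption, a common DDR3-1600 CL11 stick used purely for illustration): latency in wall-clock time is the CAS cycle count divided by the I/O clock, while peak bandwidth is the transfer rate times the 64-bit bus width.

          Code:
          #include <cstdio>

          int main() {
              const double transfers_per_sec = 1600e6;             // DDR3-1600: 1600 MT/s
              const double io_clock_hz = transfers_per_sec / 2.0;  // DDR: 2 transfers per clock
              const int cas_cycles = 11;                           // CL11 timing

              // Same CL at a higher clock = less wall-clock latency,
              // even though the cycle count on the sticker looks bigger.
              double cas_ns = cas_cycles / io_clock_hz * 1e9;      // -> 13.75 ns
              double peak_gb_s = transfers_per_sec * 8.0 / 1e9;    // 64-bit bus = 8 bytes -> 12.8 GB/s

              std::printf("CAS latency: %.2f ns, peak bandwidth: %.1f GB/s\n",
                          cas_ns, peak_gb_s);
              return 0;
          }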
          Last edited by duby229; 25 February 2015, 07:20 PM.

          • #45
            Originally posted by duby229 View Post
            [...]
            Intel are truly masters of hardware.

            As far as latency/bandwidth in RAM is concerned, I consider myself knowledgeable; the problem is that I keep expecting someone much more knowledgeable to show up and yell at me. As an example, I'm 99% sure that with GDDR the designers said, "Screw latency! We need bandwidth!" so that a large amount of data can flow through it, even though it's not terribly responsive in comparison. It's like the difference between a semi pulling a large trailer from point A to point B and a single-engine plane carrying a little bit very quickly, or like the old comparison between torque and horsepower: torque is the amount of work the engine can do, while horsepower is how fast the work gets done.

            • #46
              Originally posted by profoundWHALE View Post
              I don't agree with the claim that RAM is getting more expensive: you can get an 8 GB stick of DDR3 for $80 CAD, which is awesome. Adjusting for inflation, I'd actually say the faster DDR3 is cheaper than DDR2 was at about the same age.
              I bought DDR3 for half of the current price. With the current trend in RAM prices, there won't be a netbook or desktop PC with 128 GB of RAM (as caligula predicted) that costs no more than today's 8 GB systems.
