
Thread: Is Assembly Still Relevant To Most Linux Software?

  1. #101
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,283

    Default

    128-bit variables are already supported via compiler extensions. Not 128-bit pointers though.

  2. #102
    Join Date
    Jan 2007
    Posts
    418

    Default

    Quote Originally Posted by frign View Post
I think there is a strong misconception on your side here: just because an integer has the same size as the address width of the operating system in use (namely, 64 bits) does not mean these datatypes are faster than smaller ones (8, 16, 32).
It is the other way around: the smaller the datatypes, the fewer resources are needed to handle them. There are exceptions for 16-bit integers on some old architectures, but overall we see a consistent speedup in all cases where the integer size has been limited.
I guess this requires some hard evidence: compare various CPUs and architectures and see what happens. Write some code for an 8-bit AVR (still relevant today), a 16-bit MSP430 (also still relevant today), a 32-bit 'old' CPU (a Pentium M comes to mind) and a modern 64-bit CPU in both 32-bit and 64-bit modes, and see what happens.

I'm not convinced that using anything other than the native width is more efficient (FOR CALCULATION!). Yes, there is a cost/benefit analysis in other regards that are also very important, something the compiler can't even know about. The data has to move in and out of the CPU over a bus. If the bus is narrower than the native size of the CPU, some magic will need to happen. If it stays internal to the CPU (registers), then the native size is most efficient in any case.

Using bigger ints, however, has the disadvantage of using more memory, and that can potentially slow things down again (more data to copy into memory, etc.).
Even in the case of the slower 16-bit integers, you can employ the fast integer types and benefit from well-thought-out type sets for specific architectures, ruling out potential slowdowns.

The compiler is a smart guy, but it is not a magician: it never takes risks, and it is no artificial intelligence. There may be constant improvements in this sector, but limiting the integer size requires knowing that the integer will _never_ overflow.
How would a compiler predict that when it has to optimise a stdin parser?
I do fully agree with you here, and we should use u8, u16 etc. as fits. And yes, the PROGRAMMER needs to know that his data fits into a certain variable. I never said the compiler should check for 'valid overflowing'; it's a dumb technique, but it does get used. Anyway, I only wanted to say that the compiler can be made smart enough to automatically replace u8 (or uint8, or whatever) with whatever it sees fit to be the fastest or least memory-hungry option (based on a switch, if you wish). I as a developer know it will fit into a u8. The compiler can optimize it to whatever it wants.

PS: I don't think we will see 128 bit soon, because 64-bit addresses can span virtual memory of at most 16 exbibytes, which is roughly 17 billion gibibytes.
But maybe I only sound like Bill Gates, who allegedly stated something similar in 1981:
We will see 128-bit data types soon. You know why? They already exist. First result for uint128 on Google? http://msdn.microsoft.com/en-us/library/cc230384.aspx They even use a valid example: storing an IPv6 address. Yes, you could store it in a struct, but that's beside the point. Actually, a 128-bit int could potentially be better optimized, so I can see it making sense in routers. (I know it's defined as two 64-bit values for now, but the datatype is there.)

But there's actually more valid usage for it today: http://lxr.free-electrons.com/source.../b128ops.h#L54
Yes, it's in the kernel source already. Right now the only sensible use is in cryptographic routines, and it makes sense: if your encryption routine works on 128-bit values, why do magic with structs and unions to work around a shortcoming?

  3. #103
    Join Date
    Oct 2012
    Location
    Cologne, Germany
    Posts
    308

    Angry To clear it up once and for all!

    Quote Originally Posted by oliver View Post
I guess this requires some hard evidence: compare various CPUs and architectures and see what happens. Write some code for an 8-bit AVR (still relevant today), a 16-bit MSP430 (also still relevant today), a 32-bit 'old' CPU (a Pentium M comes to mind) and a modern 64-bit CPU in both 32-bit and 64-bit modes, and see what happens.

I'm not convinced that using anything other than the native width is more efficient (FOR CALCULATION!). Yes, there is a cost/benefit analysis in other regards that are also very important, something the compiler can't even know about. The data has to move in and out of the CPU over a bus. If the bus is narrower than the native size of the CPU, some magic will need to happen. If it stays internal to the CPU (registers), then the native size is most efficient in any case.

Using bigger ints, however, has the disadvantage of using more memory, and that can potentially slow things down again (more data to copy into memory, etc.).

I do fully agree with you here, and we should use u8, u16 etc. as fits. And yes, the PROGRAMMER needs to know that his data fits into a certain variable. I never said the compiler should check for 'valid overflowing'; it's a dumb technique, but it does get used. Anyway, I only wanted to say that the compiler can be made smart enough to automatically replace u8 (or uint8, or whatever) with whatever it sees fit to be the fastest or least memory-hungry option (based on a switch, if you wish). I as a developer know it will fit into a u8. The compiler can optimize it to whatever it wants.


We will see 128-bit data types soon. You know why? They already exist. First result for uint128 on Google? http://msdn.microsoft.com/en-us/library/cc230384.aspx They even use a valid example: storing an IPv6 address. Yes, you could store it in a struct, but that's beside the point. Actually, a 128-bit int could potentially be better optimized, so I can see it making sense in routers. (I know it's defined as two 64-bit values for now, but the datatype is there.)

But there's actually more valid usage for it today: http://lxr.free-electrons.com/source.../b128ops.h#L54
Yes, it's in the kernel source already. Right now the only sensible use is in cryptographic routines, and it makes sense: if your encryption routine works on 128-bit values, why do magic with structs and unions to work around a shortcoming?
I don't know if I can take you seriously any more. You still might not get the concept, I think.

Of course there are native 128-bit variables; why shouldn't there be? There are even native 128-bit variables available on 32-bit processors! Gosh!

So, how does this work, you might ask. I'll give you the answer, so you don't tell this BS to anyone else in the future:

Let's take the normal x86 architecture, because it is simple to explain:
There are specific registers to store integers of specific sizes. I'll list them for your convenience:

1. 8 bit --> registers AL, AH, ...
2. 16 bit --> registers AX, ...
3. 32 bit --> registers EAX, ...
4. 64 bit --> registers MM0, ... (the MMX registers)
5. 128 bit --> registers XMM0, ... (the SSE registers)


As you can see, both big and small integers are natively integrated into the CPU, even though we can only have 32-bit addresses. Using large integers eats up more RAM, granted, but it is not faster to use them in any way. Put it this way: storing every integer as a 64-bit integer on 64-bit systems brings you no benefits, since there are specific registers for all sizes!
It brings benefits to use smaller integers, because they are native to the CPU!
Even better, 8-bit AVR and 16-bit MSP430 don't mean we peak at 8 bits. I honestly have never worked with these processors, but I am sure they support 8-bit integers. Scaling those up to greater widths depends on the architecture, but you are not forcibly locked to the specified maximum address space.

Also, using the C99 integer types will bring you the flexibility you expected: there are types for integers of _at least_ n bits (int_leastn_t), the fastest type of at least n bits (int_fastn_t), and exact n-bit sizes (int8_t). stdint.h already handles that for you, and you don't even need special compilers to optimise in this regard.

In this regard, storing certain datatypes in those native registers is not a big hurdle! Need an unsigned integer of at least 128 bits? If your implementation provides it, just use uint_least128_t and you are fine (stdint.h only guarantees the 8-, 16-, 32- and 64-bit variants).

You are pointing out issues which do not exist! Integer sizes do not magically scale up or down depending on which architecture you are on; they depend on the specific implementation for the architecture you are using.

    I hope this was clear enough already!

  4. #104
    Join Date
    Oct 2012
    Location
    Cologne, Germany
    Posts
    308

    Thumbs up Yes

    Quote Originally Posted by curaga View Post
    128-bit variables are already supported via compiler extensions. Not 128-bit pointers though.
128-bit variables: they are even supported by the processor itself, so you don't need compiler extensions. You just need to address the right register and store the variable in it; if your compiler doesn't already implement this behaviour properly and requires extensions, I would recommend you switch to another that does!

128-bit pointers: they don't make sense. Why?
Quote Originally Posted by My-bloody-self!
If you wrote a book with 100 pages, how much sense would it make to set up a table of contents for 1000 pages?
I hope you get my point: if your processor can only handle addresses 64 bits long, 128-bit addresses (--> 128-bit pointers) don't factually bloody make sense.

  5. #105
    Join Date
    May 2012
    Posts
    658

    Default

8/16-bit calculations are faster than 32/64, ofc
they can be loaded faster, aligned more easily and even packed when needed

for example you can load 64 bits from memory and treat them as an 8-bit value
then shift the whole register and treat it as the next 8-bit number
and so on, eight times in all

also worth noting, as was mentioned I think, is that you need to take care how you structure your struct
as the compiler will align the values in it to 8/16 bytes (maybe even 64-byte cachelines)
thus having... well, holes

the sizes of the data types can depend not only on the compiler but also on the OS and the alignment of the Alpha Centauri stars with their planet
so sizeof is good to make sure

    Quote Originally Posted by ciplogic View Post
    ...
damn, now I have to write a function

XMM shuffles are like sudoku, so I'll write one tomorrow when my head clears up

and yeah, I was thinking of just the raw brute-force multiplication, as it is a bit hard for a compiler since shuffling it to fit nicely needs a bit of planning
like just a function compute(pointer, pointer, how_many)

also, interpreting the title "Is Assembly Still Relevant To Most Linux Software?" is, to be honest, not that easy
if you count the 1% gained (guessing) from assembly in shared libraries, then it is relevant
if you count things written directly in a program, then probably not that much (overall), except in the kernel and such low-level things

and again, assembly is not really to be used when not needed
and it's not as hard as everybody says
    Last edited by gens; 04-09-2013 at 03:25 PM.

  6. #106
    Join Date
    Oct 2012
    Location
    Cologne, Germany
    Posts
    308

    Smile What can I say?

    Quote Originally Posted by gens View Post
8/16-bit calculations are faster than 32/64, ofc
they can be loaded faster, aligned more easily and even packed when needed

for example you can load 64 bits from memory and treat them as an 8-bit value
then shift the whole register and treat it as the next 8-bit number
and so on, eight times in all

also worth noting, as was mentioned I think, is that you need to take care how you structure your struct
as the compiler will align the values in it to 8/16 bytes (maybe even 64-byte cachelines)
thus having... well, holes

the sizes of the data types can depend not only on the compiler but also on the OS and the alignment of the Alpha Centauri stars with their planet
so sizeof is good to make sure



damn, now I have to write a function

XMM shuffles are like sudoku, so I'll write one tomorrow when my head clears up

and yeah, I was thinking of just the raw brute-force multiplication, as it is a bit hard for a compiler since shuffling it to fit nicely needs a bit of planning

also, interpreting the title "Is Assembly Still Relevant To Most Linux Software?" is, to be honest, not that easy
if you count the 1% gained (guessing) from assembly in shared libraries, then it is relevant
if you count things written directly in a program, then probably not that much (overall), except in the kernel and such low-level things

and again, assembly is not really to be used when not needed
    Quote Originally Posted by Myself
    Gens: A rock of knowledge in a sea of ignorance
    I hope everyone got it now ^^

  7. #107
    Join Date
    May 2012
    Posts
    658

    Default

it's probably all in the optimization manuals
and the C standard book, the 700-page one (don't worry, I just read about CHAR/INT sizes in it)

honestly, high-level optimizations like program structure and algorithms bring more performance than low-level ones
but they are cumulative with the low-level ones
(actually writing clean code, as you say)

  8. #108
    Join Date
    Sep 2012
    Posts
    780

    Default

    Quote Originally Posted by frign View Post
I think there is a strong misconception on your side here: just because an integer has the same size as the address width of the operating system in use (namely, 64 bits) does not mean these datatypes are faster than smaller ones (8, 16, 32).

It is the other way around: the smaller the datatypes, the fewer resources are needed to handle them. There are exceptions for 16-bit integers on some old architectures, but overall we see a consistent speedup in all cases where the integer size has been limited.
Even in the case of the slower 16-bit integers, you can employ the fast integer types and benefit from well-thought-out type sets for specific architectures, ruling out potential slowdowns.
That's precisely not the case in the very article you quoted: when reducing the integer size, the instructions for his 32-bit ARM CPU are slower.

    Quote Originally Posted by frign View Post
The compiler is a smart guy, but it is not a magician: it never takes risks, and it is no artificial intelligence. There may be constant improvements in this sector, but limiting the integer size requires knowing that the integer will _never_ overflow.
Well, in C, an int is at least 16 bits and can be anything above that. This means the programmer:
- will not put values requiring more than 16 bits in an int
- will not rely on any "overflowing" behaviour
That leaves plenty of space for the compiler. Specifically, it could do the same thing as stdint.h, but with flexibility on the same platform.

  9. #109
    Join Date
    Mar 2011
    Posts
    349

    Default

    Quote Originally Posted by gens View Post
    and again, assembly is not really to be used when not needed
and it's not as hard as everybody says
IDK who says assembly is that hard; it's hard to do well, though. It's tedious to read, especially when the coder is very good at optimizing.

I accidentally deleted that part about 8 bit being faster than 16 and so on. I may be remembering something else, but I'm pretty sure I saw tests dispelling just that: something about a CPU performing best at its native integer length. So a 32-bit CPU performs
add eax,1 faster than add al,1.

I've got to get an IDE hard disk reader to resurrect some old code. All this talk is making me nostalgic.

  10. #110
    Join Date
    Oct 2012
    Location
    Cologne, Germany
    Posts
    308

    Question Proof?

    Quote Originally Posted by erendorn View Post
    When reducing the integer size, the instructions for his 32bit arm cpu are slower.
Can you give me proof or a specific example? Judging from the years I've worked on low-level C programming, smaller integers are faster than the standard ones, and the compiler just can't know the range of the very integers it has to work with. It is simply impossible.

The only compiler I know of that is capable of this is a standardised Ada compiler like GNAT (GCC), provided you work out the ranges in your code properly; plain GCC cannot, because the C language itself doesn't allow it!
