Debian Guts Support For Old MIPS CPUs


  • #31
    Originally posted by xen0n View Post

    IMO it's not a matter of endianness, but rather proper documentation, at which Homo sapiens is not known to be good...
Bro... I get an import error when I try to import "documentation" into my JavaScript project. Is this "documentation" thing something I need to write myself, or is there a module in the NexGenNeoSpiderWeb3.0Js framework that can handle it for me?



    • #32
      Originally posted by cybertraveler View Post
Is it bad practice to create new application-layer network protocols that use LE?
No one said that.
If you create an application protocol that uses LE, it will still use network byte order underneath to communicate..



      • #33
        Originally posted by tuxd3v View Post
        That is all Wrong..
        Network Format is BigEndian..
        Why? Why is Network Order big endian?

        Originally posted by tuxd3v View Post
You can try to paint it as you prefer, to suit your ego or religion (..but that is YET another thing..)
        If you have to call it a matter of religion, then you're acknowledging your position isn't logically defensible.

        Originally posted by tuxd3v View Post
        This subject is like a religious war, never ending..
Your refusal to accept defeat doesn't mean the war didn't end. Even years after WWII, there were some Japanese who thought the war was still on. Tragically, this even led to some deaths in Brazil, at the hands of Japanese nationalists who considered the end of the war to be "fake news".

        And, last I checked, LE won. How many new ISAs are BE?

        Originally posted by tuxd3v View Post
Numbers in BE are correctly printed, in hex, as it is a natural notation for humans.
ie:
0x0123456789

I am using Arabic numerals, like we use in Christian society, from left to right..
A number in BE is like that!

By contrast, writing a number in LE is like in Arabic alphabetic cultures... from right to left.
Taking the same example, LE is like this:
8967452301

This is the reason why it is more efficient to convert from binary to decimal in BE than in LE..
It's also a lot more efficient to know if a number is positive or negative in BE..
        I'm not sure I follow. Please explain in the form of code. In C, if you know it. Otherwise, pick the most C-like language you do know.



        • #34
          Originally posted by tuxd3v View Post
You are talking about L4-L7 layers maybe (application protocols? or at least some of them..). I was talking about packing data, serialising it to the network; remember, disks now rely on the network, in big clusters.. you are serialising all the time..
The higher layers constitute the packet payloads, which is the vast majority of the data. That stuff is opaque to the lower layers and not subject to their endianness. The stuff you're talking about is just lower-layer packet headers: a very small minority of the actual bytes.

          Originally posted by tuxd3v View Post
Inside each, it's the same or almost the same, with the advantage that BE finds faster whether the number is positive or negative (for obvious reasons..).
          You seem to be unaware that data is not read in from memory in byte-wise fashion. It's read into the chip in cacheline-sized chunks, and then the CPU core loads and stores entire word-length registers at a time. So, whether the sign is contained in the byte at the lowest or highest address is immaterial.
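
To make that concrete, here's a minimal sketch (my illustration, not anything from the thread): decode the same value from a BE buffer and from an LE buffer, and the sign test is one compare on the full register either way:

Code:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The same value (-100) serialised in both byte orders. */
    uint8_t be_bytes[4] = { 0xFF, 0xFF, 0xFF, 0x9C };
    uint8_t le_bytes[4] = { 0x9C, 0xFF, 0xFF, 0xFF };

    /* Decode each buffer portably, with shifts rather than pointer casts. */
    int32_t from_be = (int32_t)((uint32_t)be_bytes[0] << 24 |
                                (uint32_t)be_bytes[1] << 16 |
                                (uint32_t)be_bytes[2] << 8  |
                                (uint32_t)be_bytes[3]);
    int32_t from_le = (int32_t)((uint32_t)le_bytes[3] << 24 |
                                (uint32_t)le_bytes[2] << 16 |
                                (uint32_t)le_bytes[1] << 8  |
                                (uint32_t)le_bytes[0]);

    /* Once the value is in a register, the sign test is a single
       compare on the whole word, whatever the order in memory was. */
    printf("from BE: %d, negative? %d\n", from_be, from_be < 0);
    printf("from LE: %d, negative? %d\n", from_le, from_le < 0);
    return 0;
}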



          • #35
            Originally posted by torsionbar28 View Post
            Yeah, "network protocols". LOL. You mean like TCP/IP? Nobody is using that, are they?? No need to mention that obscure protocol by name, am I right?
            Not only.

However, most network hardware offers the ability to offload some amount of this work, especially the high-end, faster hardware. It's not really my area, but I think the practical downsides of using LE in the datacenter were mitigated long ago, to the point where they're a non-issue.



            • #36
              Originally posted by cybertraveler View Post
Is it bad practice to create new application-layer network protocols that use LE?
As d4ddi0 mentioned, most people naively use their native endianness. These days, that happens to be LE. Back when computer networking was being developed, it was BE. That's why a lot of the older protocols are BE, while more recent data formats tend to be LE.

              Originally posted by xen0n View Post
              IMO it's not a matter of endianness, but rather proper documentation, at which Homo sapiens is not known to be good...
              If you want other people to use your network protocol, you need to document it.

              Ever hear of RFCs?



              • #37
                Originally posted by coder View Post
                Why? Why is Network Order big endian?
Because the network was defined to be BigEndian, and so TCP/IP is BigEndian.
It was defined that way as a better way to serialise data..

Here is RFC 1700; it defines network transmission protocols..

                Originally posted by coder View Post
                And, last I checked, LE won. How many new ISAs are BE?
                LE won what??
                Can you specify what LE won??

                Originally posted by coder View Post
                I'm not sure I follow. Please explain in the form of code. In C, if you know it. Otherwise, pick the most C-like language you do know.
                BE: 0x0123456789

I believe you can see the MSB at the base address, right?
So you only need to fetch it to know whether it's a positive number or not, right?? In LE you need to fetch the whole number to know..

                I found a nice article for you..
                Read this

                There are byte swapping libraries which are included with most C/C++ libraries. The most commonly used routines are htons() and ntohs() used for network byte order conversions. The host to Big/Little Endian routines (htobe16()/be16toh(), etc) are more complete as they handle swaps of 2, 4 and 8 bytes. These routines are platform independent and know that a swap is only required on Little Endian systems. No swapping is applied to the data when run on a Big Endian host computer as the data is already in "network byte order".
It goes to great lengths to explain the busywork LE archs need to do to operate on the network..
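
For reference, here is a minimal sketch (mine, not from the article) of those routines in use; on a BE host the conversions compile away to nothing, while on an LE host they byte-swap:

Code:
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>  /* htons(), htonl(), ntohs(), ntohl() */

int main(void)
{
    uint16_t port = 8080;
    uint32_t addr = 0xC0A80001; /* 192.168.0.1 */

    /* Host order -> network (big-endian) order before sending. */
    uint16_t wire_port = htons(port);
    uint32_t wire_addr = htonl(addr);

    /* Network order -> host order after receiving. */
    printf("port round-trip: %u\n", ntohs(wire_port));
    printf("addr round-trip: 0x%08X\n", ntohl(wire_addr));
    return 0;
}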

Anyway, TCP/IP has been there for some decades now, and it's BigEndian.
Don't get mad at me, I am not the guy implementing LE..
Instead of questioning me, you need to ask Intel, AMD, VIA, etc. why they decided to try to force their standard on the system at the time..

Now you are wasting lots of energy just to communicate on the network...
And I am only speaking about you (as an end user); now imagine SAN systems hosting disks, and such...
The tremendous amount of power needed for that..
                Last edited by tuxd3v; 25 August 2019, 12:03 PM. Reason: typos,typos..god..



                • #38
                  Originally posted by cybertraveler View Post
Is it bad practice to create new application-layer network protocols that use LE?
If you rely on the network, the amount of work will triple for you..
Since you are packing in LE, you then need to do exactly that for BE too, including checksumming and so on..
So you need to unpack (LE to host-format data), and repack again (host-format data to BE)..
1. Pack host data in LE format
2. Unpack LE back to host-format data (reversing step 1)
3. Pack data to BE to send..
                  It would be three times the work..
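
For what it's worth, a minimal sketch (my own illustration, with hypothetical helper names) of what each "pack" step above amounts to, i.e. a byte-order-explicit write that behaves the same on any host:

Code:
#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value into a buffer in little-endian order. */
static void put_le32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v);
    buf[1] = (uint8_t)(v >> 8);
    buf[2] = (uint8_t)(v >> 16);
    buf[3] = (uint8_t)(v >> 24);
}

/* Write a 32-bit value into a buffer in big-endian (network) order. */
static void put_be32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)(v);
}

int main(void)
{
    uint8_t le[4], be[4];
    put_le32(le, 0x01234567);
    put_be32(be, 0x01234567);
    printf("LE: %02X %02X %02X %02X\n", le[0], le[1], le[2], le[3]);
    printf("BE: %02X %02X %02X %02X\n", be[0], be[1], be[2], be[3]);
    return 0;
}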



                  • #39
                    Originally posted by tuxd3v View Post
                    This subject is like a religious war, never ending..
                    Each architecture has its pros and cons..
Yes, it'll never end.
Each architecture has its pros and cons, and so do big and little endian. However, there are more advantages to using little endian than big endian. In fact, what you're arguing about network serialisation is pretty much nonsense nowadays, since network adapters can already put the bytes in the correct order without help from the CPU.
If you're saying big endian allows faster sign checking, then it's the same as saying little endian allows faster parity checking, since by reading just the first byte we know right away whether the number is odd or even. And little endian is far more suitable for arithmetic, which is generally done from the least significant part (except for some operations, like division).
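
A minimal sketch of that parity point (my own illustration; it assumes a little-endian host): the byte at the lowest address is the least significant byte, so its low bit alone answers odd-or-even:

Code:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t x = 1000001; /* an odd number */
    uint8_t first_byte;

    /* On a little-endian host, the byte at the lowest address is the
       least significant byte, so it alone determines parity. */
    memcpy(&first_byte, &x, 1);
    printf("low byte says odd:   %u\n", first_byte & 1u);
    printf("full value says odd: %u\n", x & 1u);
    return 0;
}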

                    In fact the designers of RISC-V architecture also said that:
                    We chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and so we leave open the possibility of non-standard big-endian or bi-endian systems.



                    • #40
                      Originally posted by tuxd3v View Post
Because the network was defined to be BigEndian, and so TCP/IP is BigEndian.
It was defined that way as a better way to serialise data..
                      I understand your claim. Now, what's your source on that?

                      Originally posted by tuxd3v View Post
Here is RFC 1700; it defines network transmission protocols..
                      No, it's the Assigned Numbers RFC. And what's your point?

                      Originally posted by tuxd3v View Post
                      LE won what??
                      Can you specify what LE won??
                      You called it a never-ending religious war.

LE won this. As evidence, I cite the lack of new uArchs that are BE, and the fact that all the bi-endian uArchs are being run in LE mode, with support continually being dropped for BE.

                      LE won the war, whether you choose to accept it or not. If BE had such huge advantages as you claim, then it wouldn't have lost.

                      Originally posted by tuxd3v View Post
                      BE: 0x0123456789

I believe you can see the MSB at the base address, right?
So you only need to fetch it to know whether it's a positive number or not, right?? In LE you need to fetch the whole number to know..
1. This is not code. I asked for code that demonstrated an efficiency advantage in binary -> decimal conversion, as you claimed (see the sketch below).
2. You do not understand how modern CPUs work. They don't fetch 32-bit ints one byte at a time. The whole thing is read in a single cycle. Modern computers have no primary datapath narrower than 32 bits, and most datapaths are 64 bits or wider.
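
For the record, a minimal sketch (mine, not anything tuxd3v posted) of binary-to-decimal conversion: it is pure arithmetic on the register value, so the byte order the value had in memory never enters into it:

Code:
#include <stdint.h>
#include <stdio.h>

/* Convert a value to decimal by repeated division. Only arithmetic on
   the full register value is involved, so the conversion is identical
   on BE and LE hosts: memory byte order plays no part. */
static void print_decimal(uint64_t v)
{
    char digits[20]; /* enough for 2^64 - 1 */
    int  n = 0;

    do {
        digits[n++] = (char)('0' + (int)(v % 10));
        v /= 10;
    } while (v != 0);

    while (n-- > 0)
        putchar(digits[n]);
    putchar('\n');
}

int main(void)
{
    print_decimal(0x0123456789ULL); /* the example value from above */
    return 0;
}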

                      Originally posted by tuxd3v View Post
                      I found a nice article for you..
                      Read this
I understand BE perfectly well. I actually coded on BE systems for a time, both embedded (networking) and old Macs.

                      Originally posted by tuxd3v View Post
                      Don't get mad with me, I am not the guy implementing LE..
                      It's because you're being obtuse and failing to back up your claims.

                      Originally posted by tuxd3v View Post
Now you are wasting lots of energy just to communicate on the network...
And I am only speaking about you (as an end user); now imagine SAN systems hosting disks, and such...
The tremendous amount of power needed for that..
                      I prefer not to repeat myself, but it seems you missed these points on the first go-around:
                      1. The overheads involved in running storage traffic over a TCP stack are not dominated by endian conversions.
                      2. Some amount of TCP/IP is offloaded by the NIC, anyhow.
                      3. The application defines the endianness of the payload - not the lower-layer protocols. If a file format is LE, that data doesn't get byte-swapped by the network stack.

