IMO it's not a matter of endianness, but rather proper documentation, at which Homo sapiens is not known to be good...
Bro... I get an import error when I try to import "documentation" into my javascript project. Is this "documentation" thing something I need to write myself, or is there a module in the NexGenNeoSpiderWeb3.0Js framework that can handle it for me?
This subject is like a religious war, never ending..
Your refusal to accept defeat doesn't mean the war didn't end. Even years after WWII, there were some Japanese who thought the war was still on. Tragically, this even led to some deaths in Brazil, at the hands of Japanese nationalists who considered the end of the war to be "fake news".
And, last I checked, LE won. How many new ISAs are BE?
Numbers in BE are correctly printed, in hex, as that is the natural notation for humans.
i.e.:
0x0123456789
I am using Arabic numerals, as we use in Western society, written from left to right..
A number in BE is written like that!
In contrast,
a number in LE is written like in Arabic alphabetic cultures... from right to left.
Taking the same example, LE is like this:
0x8967452301
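A minimal C sketch of what I mean (my example, assuming a 64-bit host; it just dumps the bytes of the value in address order):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t n = 0x0123456789;  /* the example value above */
    unsigned char *p = (unsigned char *)&n;

    /* Print the bytes in address order: a little-endian host prints
       89 67 45 23 01 00 00 00, a big-endian host 00 00 00 01 23 45 67 89. */
    for (size_t i = 0; i < sizeof n; i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}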
This is why it is more efficient to convert from binary to decimal in BE than in LE..
It's also a lot more efficient to tell if a number is positive or negative in BE..
I'm not sure I follow. Please explain in the form of code. In C, if you know it. Otherwise, pick the most C-like language you do know.
You are talking about the L4-L7 layers maybe (application protocols? or at least some of them..); I was talking about packing data, serialising it to the network. Remember, disks now rely on the network, in big clusters.. you are serialising all the time..
The higher layers constitute the packet payloads, which is the vast majority of the data. That stuff is opaque to the lower layers and not subject to its endian-ness. The stuff you're talking about is just lower-layer packet headers. So, a very small minority of the actual bytes.
Inside each, it's the same or almost the same, with the advantage to BE of finding out faster whether the number is positive or negative (for obvious reasons..).
You seem to be unaware that data is not read in from memory in byte-wise fashion. It's read into the chip in cacheline-sized chunks, and then the CPU core loads and stores entire word-length registers at a time. So, whether the sign is contained in the byte at the lowest or highest address is immaterial.
Yeah, "network protocols". LOL. You mean like TCP/IP? Nobody is using that, are they?? No need to mention that obscure protocol by name, am I right?
Not only.
However, most network hardware offers the ability to offload some amount of this work - especially the high-end, faster hardware. It's not really my area, but I think the practical downsides of using LE in the datacenter have long-ago been mitigated to the point where they're a non-issue.
Is it bad practice to create new application-layer, network protocols that use LE?
As d4ddi0 mentioned, most people naively use their native endian-ness. These days, that happens to be LE. Back when computer networking was being developed, it was BE. That's why a lot of the older protocols are BE, but more recent data formats tend to be LE.
I'm not sure I follow. Please explain in the form of code. In C, if you know it. Otherwise, pick the most C-like language you do know.
BE: 0x0123456789
I believe that you can see the MSB is at the base address, right?
So you only need to fetch it to know if it's a positive number or not, right?? In LE you need to fetch the whole number to know..
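A minimal sketch of that idea in C (hypothetical helper, assuming a big-endian layout):

/* Hypothetical helper, assuming a big-endian layout: for a signed
   integer, the sign bit lives in the byte at the lowest address, so
   fetching that one byte is enough to test the sign. */
static int is_negative_be(const unsigned char *base)
{
    return (base[0] & 0x80) != 0;
}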
There are byte swapping routines included with most C/C++ libraries. The most commonly used are htons() and ntohs(), used for network byte order conversions. The host to Big/Little Endian routines (htobe16()/be16toh(), etc.) are more complete, as they handle swaps of 2, 4 and 8 bytes. These routines are platform independent and know that a swap is only required on Little Endian systems. No swapping is applied to the data when run on a Big Endian host computer, as the data is already in "network byte order".
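A short usage sketch (assuming a Linux host, where htobe64()/be64toh() live in <endian.h>):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htons(), ntohs() */
#include <endian.h>      /* htobe64(), be64toh(); Linux-specific header */

int main(void)
{
    uint16_t port = 8080;
    uint16_t wire_port = htons(port);   /* host -> network (big endian) */

    uint64_t id = 0x0123456789ULL;
    uint64_t wire_id = htobe64(id);     /* host -> big endian, 8 bytes */

    /* Round-trip back to host order; both are no-ops on a BE host. */
    printf("%u %llx\n", (unsigned)ntohs(wire_port),
           (unsigned long long)be64toh(wire_id));
    return 0;
}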
It goes a long way to explain the busywork LE archs need to do to operate on the network..
Anyway, TCP/IP has been there for some decades now, and it's big-endian.
Don't get mad at me, I am not the guy implementing LE..
Instead of questioning me, you need to put the question to Intel, AMD, VIA, etc.. why they decided to try to force their standard on the system at the time..
Now you are wasting lots of energy just to communicate on the network...
And I am only speaking about you (as an end user); now imagine SAN systems hosting disks, and such...
The tremendous amount of power needed for that..
Is it bad practice to create new application-layer, network protocols that use LE?
If you rely on the network, the amount of work will triple for you..
Since you are packing in LE, you then need to do exactly the same for BE, including checksumming and so on..
So you need to unpack (LE to host-format data) and repack again (host-format data to BE); see the sketch after the steps below:
1. Pack host data in LE format
2. Unpack LE to host-format data (again, like in 1.)
3. Repack host-format data to BE
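A rough sketch of those steps in C (hypothetical 32-bit field, using the Linux <endian.h> helpers):

#include <stdint.h>
#include <endian.h>   /* htole32(), le32toh(), htobe32(); Linux-specific */

/* Hypothetical 32-bit field serialised by an LE protocol that must be
   re-emitted in BE network byte order: */
uint32_t le_field_to_be(uint32_t host_value)
{
    uint32_t wire_le = htole32(host_value); /* 1. pack host -> LE   */
    uint32_t host2   = le32toh(wire_le);    /* 2. unpack LE -> host */
    return htobe32(host2);                  /* 3. repack host -> BE */
}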
This subject is like a religious war, never ending..
Each architecture has its pros and cons..
yes, it'll never end
Each architecture has its pros and cons, and so do big and little endian. However, there are more advantages to using little endian than big endian. In fact, what you're arguing about network serialization is pretty much nonsense nowadays, since the network adapters can already put the bytes in the correct order without help from the CPU.
If you're saying big endian allows faster sign checking, then it's the same as saying little endian allows faster parity checking, since by reading just the first byte we'll know whether the number is odd or even right away. And little endian is far more suitable for arithmetic, which is generally done from the least significant part (except some operations, like division).
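A minimal sketch of that symmetry in C (hypothetical helper, assuming a little-endian layout):

/* Hypothetical helper, assuming a little-endian layout: the least
   significant byte sits at the lowest address, so fetching that one
   byte is enough to test whether the number is odd or even. */
static int is_odd_le(const unsigned char *base)
{
    return base[0] & 1;
}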
In fact, the designers of the RISC-V architecture also said that:
We chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and so we leave open the possibility of non-standard big-endian or bi-endian systems.
LE won, this. As evidence, I cite the lack of new uArch's that are BE, and that all the bi-endian uArch's are being run in LE mode, with support continually being dropped for BE.
LE won the war, whether you choose to accept it or not. If BE had such huge advantages as you claim, then it wouldn't have lost.
I believe that you can see the MSB is at the base address, right?
So you only need to fetch it to know if it's a positive number or not, right?? In LE you need to fetch the whole number to know..
This is not code. I asked for code that demonstrated an efficiency advantage in binary -> decimal conversion, as you claimed.
You do not understand how modern CPUs work. They don't fetch 32-bit ints one-byte-at-a-time. The whole thing is read in a single cycle. Modern computers have no primary datapath narrower than 32-bits, and most datapaths are 64-bits or wider.
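A minimal counter-sketch in C (hypothetical helper; the memcpy compiles to a single word-sized load on modern compilers):

#include <stdint.h>
#include <string.h>

/* The CPU loads the whole 32-bit word in one access, so the sign test
   costs one load plus one compare on both LE and BE hosts, regardless
   of which byte holds the MSB. */
static int is_negative(const void *base)
{
    int32_t v;
    memcpy(&v, base, sizeof v);  /* one word-sized load */
    return v < 0;                /* single compare, endian-agnostic */
}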
Now you are wasting lots of energy just to communicate on the network...
And I am only speaking about you (as an end user); now imagine SAN systems hosting disks, and such...
The tremendous amount of power needed for that..
I prefer not to repeat myself, but it seems you missed these points on the first go-around:
The overheads involved in running storage traffic over a TCP stack are not dominated by endian conversions.
Some amount of TCP/IP is offloaded by the NIC, anyhow.
The application defines the endianness of the payload - not the lower-layer protocols. If a file format is LE, that data doesn't get byte-swapped by the network stack.