Debian Guts Support For Old MIPS CPUs
Originally posted by tuxd3v: That is all wrong..
Network format is big-endian..
Originally posted by tuxd3v: You can try to paint it as you prefer, to suit your ego, or religion( ..but that is YET another thing.. )
Originally posted by tuxd3v: This subject is like a religious war, never ending..
And, last I checked, LE won. How many new ISAs are BE?
Originally posted by tuxd3v: Numbers in BE are correctly printed, in hex, as it is a natural notation for humans.
i.e.:
0x0123456789
I am using Arabic numerals, like we use in Christian society, from left to right..
A number in BE is like that!
In contrast, writing a number in LE is like writing in Arabic alphabetic cultures... from right to left.
Taking the same example, LE is like this:
8967452301
This is the reason why it is more efficient to convert from binary to decimal in BE than in LE..
It's also a lot more efficient to know whether a number is positive or negative in BE..
Originally posted by tuxd3v: You are talking about L4-L7 layers maybe( application protocols? or at least some of them.. ), I was talking about packing data, serialising it to the network; remember, disks now rely on the network, in big clusters.. you are serialising all the time..
Originally posted by tuxd3v: Inside each, it's the same or almost the same, with advantages to BE in finding faster whether the number is positive or negative( for obvious reasons.. ).
Originally posted by torsionbar28: Yeah, "network protocols". LOL. You mean like TCP/IP? Nobody is using that, are they?? No need to mention that obscure protocol by name, am I right?
However, most network hardware offers the ability to offload some amount of this work - especially the high-end, faster hardware. It's not really my area, but I think the practical downsides of using LE in the datacenter have long ago been mitigated to the point where they're a non-issue.
Originally posted by cybertraveler: Is it bad practice to create new application-layer network protocols that use LE?
Originally posted by xen0n: IMO it's not a matter of endianness, but rather of proper documentation, at which Homo sapiens is not known to be good...
Ever hear of RFCs?
Originally posted by coder: Why? Why is Network Order big endian?
It was defined that way, as a better way to serialise data..
Here is the RFC, RFC 1700; it defines network transmission protocols..
Originally posted by coder: And, last I checked, LE won. How many new ISAs are BE?
Can you specify what LE won??
Originally posted by coder: I'm not sure I follow. Please explain in the form of code. In C, if you know it. Otherwise, pick the most C-like language you do know.
I believe that you can see the MSB at the base address, right?
So you only need to fetch it to know if it's a positive number or not, right?? In LE you need to fetch the whole number to know..
I found a nice article for you..
Read this
There are byte swapping libraries which are included with most C/C++ libraries. The most commonly used routines are htons() and ntohs() used for network byte order conversions. The host to Big/Little Endian routines (htobe16()/be16toh(), etc) are more complete as they handle swaps of 2, 4 and 8 bytes. These routines are platform independent and know that a swap is only required on Little Endian systems. No swapping is applied to the data when run on a Big Endian host computer as the data is already in "network byte order".
Anyway, TCP/IP has been there for some decades now, and it's BigEndian.
Don't get mad at me, I am not the guy implementing LE..
Instead of questioning me, you need to put the question to Intel, AMD, VIA, etc.. as to why they decided to try to force their standard on the system at the time..
Now you are wasting lots of energy, only to communicate on the network...
And I am only speaking about you( as a final user ); now imagine SAN systems hosting disks, and such...
The tremendous amount of power needed for that..
Originally posted by cybertraveler: Is it bad practice to create new application-layer network protocols that use LE?
Since you are packing in LE, then you need to do exactly that for BE, including checksumming and so on..
So you need to unpack( LE to host format data ), and repack again( host format data to BE )..
1. Pack host data in LE format
2. Unpack LE to host format data( again, like before in 1. )
3. Pack data to BE to send..
Originally posted by tuxd3v: This subject is like a religious war, never ending..
Each architecture has its pros and cons..
Each architecture has its pros and cons, and so do big and little endian. However, there are more advantages to using little endian than big endian. In fact, what you're arguing about network serialization is pretty much a non-issue nowadays, since network adapters can already put the bytes in the correct order without help from the CPU.
If you're saying big endian allows faster sign checking, then it's the same as saying little endian allows faster parity checking, since by reading just the first byte we know right away whether the number is odd or even. And little endian is far more suitable for arithmetic, which is generally carried out from the least significant part (except for some operations like division).
In fact, the designers of the RISC-V architecture also said:
We chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and so we leave open the possibility of non-standard big-endian or bi-endian systems.
Originally posted by tuxd3v: Because the network was defined to be BigEndian, and so TCP/IP is BigEndian.
It was defined that way, as a better way to serialise data..
Originally posted by tuxd3v: LE won what??
Can you specify what LE won??
LE won this. As evidence, I cite the lack of new uarchs that are BE, and the fact that all the bi-endian uarchs are being run in LE mode, with support continually being dropped for BE.
LE won the war, whether you choose to accept it or not. If BE had such huge advantages as you claim, then it wouldn't have lost.
Originally posted by tuxd3v: BE: 0x0123456789
I believe that you can see the MSB at the base address, right?
So you only need to fetch it to know if it's a positive number or not, right?? In LE you need to fetch the whole number to know..
- This is not code. I asked for code that demonstrated an efficiency advantage in binary -> decimal conversion, as you claimed.
- You do not understand how modern CPUs work. They don't fetch 32-bit ints one byte at a time. The whole thing is read in a single cycle. Modern computers have no primary datapath narrower than 32 bits, and most datapaths are 64 bits or wider.
I understand BE perfectly well. I have actually coded on BE systems, for a time, both embedded (networking) and old Macs.
Originally posted by tuxd3v: Don't get mad at me, I am not the guy implementing LE..
Originally posted by tuxd3v: Now you are wasting lots of energy, only to communicate on the network...
And I am only speaking about you( as a final user ); now imagine SAN systems hosting disks, and such...
The tremendous amount of power needed for that..
- The overheads involved in running storage traffic over a TCP stack are not dominated by endian conversions.
- Some amount of TCP/IP is offloaded by the NIC, anyhow.
- The application defines the endianness of the payload, not the lower-layer protocols. If a file format is LE, that data doesn't get byte-swapped by the network stack.