Another Sizable Performance Optimization To Benefit Network Code With Linux 5.17
-
Originally posted by tuxd3v: Yes, looking back now at the changes the world has gone through, and at the big achievement of that student in Finland at the time... how it changed the world.
-
Originally posted by tuxd3v: But if you are sending a binary file native to your system over the network, and you are on a little-endian machine, the data needs to be converted to network byte order. You don't even know this process is happening; it is done in the background. It's the operating system that does it.
Higher level protocols designed to work in a mixed-endian environment need to decide how to do it, but that's independent of the choice TCP/IP has made. HTTP <= 1.1, for instance, is text based so doesn't have any endianness issues. And if you transfer a binary file (say, a JPEG) over HTTP, just like TCP/IP doesn't care about the packet payload, HTTP doesn't care about the content of the binary file, it can just be blasted as-is over the wire. Now, said binary file format might specify the endianness, but that's independent of HTTP and independent of TCP/IP.
-
Originally posted by jacob: Actually they both have their benefits. On old big-endian CPUs like the m68000, where the data bus was only 16 bits wide and loading a 32-bit word from memory took twice as long as a 16-bit word, you could optimise a binary search tree by first loading only the first two bytes of each node, which were the high bytes, and only loading the rest if those high bytes were equal to those of your key.
-
Originally posted by jabl: No, like asgavar (and yours truly in an earlier comment in this thread) pointed out, TCP/IP endianness only applies to the packet headers. TCP/IP doesn't give a shit about the data payload, it's just an opaque bag of bits.
Higher level protocols designed to work in a mixed-endian environment need to decide how to do it, but that's independent of the choice TCP/IP has made. HTTP <= 1.1, for instance, is text based so doesn't have any endianness issues. And if you transfer a binary file (say, a JPEG) over HTTP, just like TCP/IP doesn't care about the packet payload, HTTP doesn't care about the content of the binary file, it can just be blasted as-is over the wire. Now, said binary file format might specify the endianness, but that's independent of HTTP and independent of TCP/IP.
JPEG, like I said, is already in network byte order (big endian), so no change is needed for it. A program on a little-endian machine opening a JPEG already knows it has to convert the file's values when reading, and the same when writing.
ASCII text files are not changed, or at least they shouldn't be, since char arrays are indexed the same way on little-endian and big-endian machines; the content of the char arrays is effectively in big-endian order already, matching our natural direction of writing and reading (for non-Arabic languages).
The UTF-16 format is sometimes different: sometimes the file provides a header in its first two bytes, the byte order mark (BOM).
-
Originally posted by F.Ultra: Looks like the NIC supports it, so a driver issue I guess.
The driver Qualcomm released for Linux supported some offload features, but when the maintainers looked at it, they saw that the driver was not built according to Linux standards, so they rewrote it, this time with very simple functionality, taking out the "most precious" things. So right now it's a POS driver.
Last edited by tuxd3v; 25 November 2021, 04:33 PM.
-
Originally posted by tuxd3v: Yeah, it probably supports it in hardware.
-
Originally posted by sinepgib: Indeed. You have a very limited number of instructions you can run on the CPU to achieve those throughputs. Swapping is an extra instruction, and not only that, it's an extra instruction that necessarily introduces a data dependency, which in turn means it takes extra space in your reorder buffer, slowing down your pipeline. The only operation you get for free is the one you don't do.
Last edited by microcode; 26 November 2021, 01:04 AM.
-
Originally posted by tuxd3v: No, it definitely is NOT, and you can see that on a 100Gbps adapter we are at ~65Gbps with MTU 1500, or more than 90Gbps with jumbo frames, I believe, using 2 cores (it is in the previous article Michael wrote about networking). But now just imagine if you want to saturate 400Gbps.