There are also certain things that would require more operations in little endian, but code tends to be written to follow the logical path the programmer had in mind. For example, testing whether a number is positive or negative: big endian keeps the sign bit in the first byte.
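A minimal sketch of that point in C, assuming a 32-bit two's-complement integer (the function names are mine, purely for illustration). On a big-endian machine the sign bit sits in the byte at the lowest address; on little endian it sits in the byte at the highest address, so a byte-level sign test has to look in a different place. In normal code you would of course just write x < 0 and let the compiler handle it.

[CODE]
#include <stdint.h>
#include <stdio.h>

/* Byte-level sign test. Each function is correct only on the
 * endianness named in it; they exist purely to show where the
 * sign bit lives in memory. */
int is_negative_be(const int32_t *x)
{
    const unsigned char *p = (const unsigned char *)x;
    return (p[0] & 0x80) != 0;   /* big endian: sign bit in first byte */
}

int is_negative_le(const int32_t *x)
{
    const unsigned char *p = (const unsigned char *)x;
    return (p[3] & 0x80) != 0;   /* little endian: sign bit in last byte */
}

int main(void)
{
    int32_t v = -42;
    /* Only the variant matching the host's endianness reports 1 here. */
    printf("BE-style check: %d, LE-style check: %d\n",
           is_negative_be(&v), is_negative_le(&v));
    return 0;
}
[/CODE]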
Serialization/deserialization is the typical "elephant in the room" case where you need to explicitly swap bytes.
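A minimal sketch of that in C, using the standard POSIX htonl()/ntohl() helpers (the put_u32/get_u32 wrapper names are mine). Network byte order is big endian, so on a little-endian host htonl() actually swaps the bytes, while on a big-endian host it is a no-op:

[CODE]
#include <arpa/inet.h>  /* htonl()/ntohl(): host <-> network byte order */
#include <stdint.h>
#include <string.h>

/* Write a 32-bit value into a wire buffer in network (big-endian) order. */
void put_u32(uint8_t *buf, uint32_t host_value)
{
    uint32_t wire = htonl(host_value);   /* swap on LE hosts, no-op on BE */
    memcpy(buf, &wire, sizeof wire);
}

/* Read a 32-bit network-order value back into host order. */
uint32_t get_u32(const uint8_t *buf)
{
    uint32_t wire;
    memcpy(&wire, buf, sizeof wire);
    return ntohl(wire);                  /* swap back on LE hosts */
}
[/CODE]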
It's the way the algorithm is implemented that gives the advantage to big or little endian, and the majority of code on GNU/Linux was written with little endian in mind. Yes, endianness may only give marginal gains, but marginal upon marginal, over thousands and thousands of iterations or lines of code, it makes a difference.
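One hedged illustration of the kind of little-endian assumption that quietly accumulates in code (the union and names here are mine, not from any particular project): on little endian, the low-order half of a wider integer sits at the same address, so code that views memory through a narrower type silently depends on the byte order.

[CODE]
#include <stdint.h>

/* Union-based type punning (well defined in C99 and later). */
union u32_view {
    uint32_t whole;
    uint16_t halves[2];   /* halves[0] is the LOW half only on little endian */
};

uint16_t low_half(uint32_t x)
{
    union u32_view v = { .whole = x };
    /* For x = 0x12345678 this returns 0x5678 on LE but 0x1234 on BE;
     * code written and tested only on LE never notices the assumption. */
    return v.halves[0];
}
[/CODE]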
Sometimes it is easier to follow one route because you know in advance that it favours little endian when the compiler generates the assembly, or even, when writing assembly by hand, to pick some operations instead of others.
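As a concrete sketch of that: the classic shift-and-OR idiom for decoding a little-endian 32-bit value from a byte buffer. Mainstream compilers such as GCC and Clang typically recognize the pattern and emit a single 32-bit load on little-endian targets, while big-endian targets need a load plus a byte swap (or the shifts themselves). The function name is mine.

[CODE]
#include <stdint.h>

/* Portable little-endian 32-bit decode. On LE targets this usually
 * compiles down to one plain load; on BE targets it cannot. */
uint32_t load_le32(const uint8_t *p)
{
    return (uint32_t)p[0]
         | (uint32_t)p[1] << 8
         | (uint32_t)p[2] << 16
         | (uint32_t)p[3] << 24;
}
[/CODE]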
Well, now that we know the path CPUs and operating systems took, it's easy to say that big endian was the wrong choice for network byte order, but at the time..
If today's consumer hardware were mixed, some systems would suffer from one thing and others from another.
If personal computers were big endian today while PCIe and the rest had been designed little endian, it would be a mess either way..
But I suspect that in that BE scenario, PCIe would have been designed big endian (to take advantage of the CPUs' endianness).
In any case, the ideal solution would have been to pick one endianness from the beginning and never change it, in my opinion.