Intel Is Trying To Support The x32 ABI For LLVM/Clang


  • #41
    @erendorn
    I have seen that; I even replied to his post. Whether the gcc.c compile benchmark qualifies as a real-world comparison is debatable (I tend to say no). But the bigger problem is that it is not an apples-to-apples comparison, because the outputs of the three compile runs differ: one produces 32-bit x86 code, the second 64-bit x86_64 code, and the third 32-bit x32 code. Different compiler optimization techniques may be used for the different architectures, and the linker also has to perform different tasks.

    I tried to do some video-encode benchmarks but ran into the problem that ffmpeg must disable its assembly optimizations for x32, so the comparison would only be fair if I artificially hampered x86_64 performance by disabling its asm as well (which no real-world user would do).



    • #42
      Originally posted by chithanh
      @erendorn
      I have seen that; I even replied to his post. Whether the gcc.c compile benchmark qualifies as a real-world comparison is debatable (I tend to say no). But the bigger problem is that it is not an apples-to-apples comparison, because the outputs of the three compile runs differ: one produces 32-bit x86 code, the second 64-bit x86_64 code, and the third 32-bit x32 code. Different compiler optimization techniques may be used for the different architectures, and the linker also has to perform different tasks.
      Why would that be?
      He did not give the details of his runs, but a compiler can target an architecture other than the one it runs on, so there is no reason the outputs would have to differ. He most likely set the same target architecture for all three runs.

