R600 Gallium3D LLVM Shader Compiler Hooked Up

  • #41
    Originally posted by tstellar View Post
    What does llvm-config --cxxflags output?
    Never mind, I did it. LLVM needs the `-frtti` compiler flag in CMAKE_CXX_FLAGS_[RELEASE/DEBUG/etc.].
    Tom, maybe you could mention this in the commit messages and/or a blog post.
    Interestingly, textures are visible. Doom3 starts, but entering the playable area segfaults, complaining about a KILL AMDIL instruction. I will try to investigate.
    Dumping the shaders for glxgears doesn't ring any bells yet.
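    For anyone else hitting this, a minimal sketch of how the flag can be passed to a CMake build of LLVM (the source path is a placeholder, and CMAKE_CXX_FLAGS is applied on top of the per-configuration flags):
    Code:
    # Sketch only: append -frtti to the C++ flags of an LLVM CMake build.
    # /path/to/llvm is a placeholder for the LLVM source checkout.
    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-frtti" /path/to/llvm
    make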



    • #42
      KILL should be the GLSL discard command.
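      For illustration, a minimal (hypothetical) fragment shader where the conditional discard is what the state tracker turns into a kill opcode when the shader is translated to TGSI:
      Code:
      // Hypothetical example: the discard statement below is what ends up
      // as a kill opcode (KIL/KILP) in the TGSI the driver receives.
      uniform sampler2D tex;
      varying vec2 uv;

      void main()
      {
          vec4 color = texture2D(tex, uv);
          if (color.a < 0.5)
              discard;
          gl_FragColor = color;
      }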



      • #43
        Originally posted by madbiologist View Post
        How new was the Mesa you compiled? This commit looks like it might fix this issue:

        http://cgit.freedesktop.org/mesa/mes...9750e194759d89
        Unfortunately not. I got the same error in Doom3:
        Code:
        LLVM ERROR: Cannot select: target intrinsic %llvm.AMDGPU.kill
        doom3.x86_64: /home/drago/doom3-dheng.git/neo/idlib/Heap.h:821: void idDynamicBlockAlloc<type, baseBlockSize, minBlockSize>::FreeInternal(idDynamicBlock<type>*) [with type = unsigned char, int baseBlockSize = 16384, int minBlockSize = 256]: Assertion `block->node == __null' failed.
        Stack dump:
        0.      Running pass 'Function Pass Manager' on module 'tgsi'.
        1.      Running pass 'AMDIL DAG->DAG Pattern Instruction Selection' on function '@mai
        I think it is generated in the SelectionDAG pass, before the Machine Code Generation pass.
        Maybe graphics shaders use KILL in a different way than OpenCL does. (The AMDIL code drop is compute oriented, for now.)



        • #44
          I can't find a KILL instruction definition in R600Instructions.td or in AMDILNodes.td.
          I guess KILL is not meant to be used in OpenCL compute.



          • #45
            Changing this:
            Code:
            bld_base->op_actions[TGSI_OPCODE_KIL].intr_name = "llvm.AMDGPU.kill";
            into this:
            Code:
            bld_base->op_actions[TGSI_OPCODE_KIL].intr_name = "llvm.AMDGPU.kilp";
            The game is playable, but with model artifacts. I believe they are not related to this change.
            From this definition in R600Instructions.td, I conclude that the above patch is legit:
            Code:
            def KILP : Pat <
              (int_AMDGPU_kilp),
              (MASK_WRITE (KILLGT (f32 ONE), (f32 ZERO)))
            >;
            Probably the same as KILL, but with a small overhead.
            Last edited by Drago; 28 April 2012, 10:27 AM.



            • #46
              Originally posted by Drago View Post
              Never mind, I did it. LLVM needs the `-frtti` compiler flag in CMAKE_CXX_FLAGS_[RELEASE/DEBUG/etc.].
              Tom, maybe you could mention this in the commit messages and/or a blog post.
              Interestingly, textures are visible. Doom3 starts, but entering the playable area segfaults, complaining about a KILL AMDIL instruction. I will try to investigate.
              Dumping the shaders for glxgears doesn't ring any bells yet.
              I usually configure LLVM with autoconf instead of CMake, and the -fno-rtti flag is added to the CXXFLAGS by default, so maybe there is some difference between CMake and configure. It seems like the real problem is that, in the absence of the -frtti and -fno-rtti flags, LLVM builds without RTTI but Mesa builds with RTTI. I think the best solution might be to always use -fno-rtti in Mesa.
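              Until something like that lands, a possible workaround (just a sketch; substitute whatever configure options you normally use) is to build Mesa without RTTI so it matches the default LLVM build:
              Code:
              # Sketch only: build Mesa with -fno-rtti so it matches an LLVM
              # that was built with its default (no RTTI) settings.
              CXXFLAGS="-fno-rtti" ./configure <your usual options>
              make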



              • #47
                Originally posted by Drago View Post
                Changing this:
                Code:
                bld_base->op_actions[TGSI_OPCODE_KIL].intr_name = "llvm.AMDGPU.kill";
                into this:
                Code:
                bld_base->op_actions[TGSI_OPCODE_KIL].intr_name = "llvm.AMDGPU.kilp";
                The game is playable, but with model artifacts. I believe they are not related to this change.
                From this definition in R600Instructions.td, I conclude that the above patch is legit:
                Code:
                def KILP : Pat <
                  (int_AMDGPU_kilp),
                  (MASK_WRITE (KILLGT (f32 ONE), (f32 ZERO)))
                >;
                Probably the same as KILL, but with a small overhead.
                KILP and KIL are different instructions. KILP unconditionally kills a pixel, while KIL conditionally kills the pixel based on the value of src0. The intrinsic llvm.AMDGPU.kill is correct for TGSI_OPCODE_KIL. The reason you are seeing the "cannot select" error is that there is no pattern like the one above for int_AMDGPU_kill.

                In order to figure out what pattern to use, you need to look at the definition of KIL in the TGSI docs (src/gallium/docs/source/tgsi.rst) and then look in the ISA doc for your card (there is a list of ISA docs here: http://www.x.org/wiki/RadeonFeature) to find the r600 hardware instruction that KIL should be lowered to.
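                Just to illustrate the shape such a pattern could take, here is an untested sketch that mirrors the KILP pattern above; the register class and operand order are assumptions and would need to be checked against the KILLGT definition:
                Code:
                // Untested sketch: select llvm.AMDGPU.kill by killing the pixel
                // when src0 is negative, i.e. when 0.0 > src0, reusing KILLGT the
                // way the KILP pattern does. R600_Reg32 and the operand order are
                // assumptions, not verified against the backend.
                def KIL : Pat <
                  (int_AMDGPU_kill R600_Reg32:$src0),
                  (MASK_WRITE (KILLGT (f32 ZERO), (f32 R600_Reg32:$src0)))
                >;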

