
Thread: Radeon Gallium3D Still Long Shot From Catalyst

  1. #31

    Default

    I'm sure that if bridgman weren't in the way all the time, r600g would be in a much better state. I wish this kid would just go away...
    Last edited by idontlikebridgman; 03-25-2012 at 02:56 PM.

  2. #32
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by bridgman View Post
    That's like putting my pickup truck in my pocket so it's always there when I need it
    This only sounds funny because you use analogies like "pickup" and "pocket".

    If you drop the analogies and just talk about bits, the PLACE of the bits doesn't matter, because they are on the system anyway!

    In other words, you are just using rhetorical tricks to avoid admitting my argument.


    Quote Originally Posted by bridgman View Post
    The open source userspace stack is >10x the size of the kernel driver,
    Invalid argument, because the bits are there anyway! It's just a question of how the bits are managed.
    After this "fix" the Linux kernel source is 10x larger; really, who cares? No one cares!
    The size of the kernel source doesn't matter!
    The bits are on the system anyway!
    You save ZERO!

    Quote Originally Posted by bridgman View Post
    and the proprietary userspace stack is >50x and approaching the size of the entire Linux kernel.
    Invalid argument, because no one CARES! The garbage Catalyst can never be in the Linux kernel anyway, so it's senseless to argue about it.



    Quote Originally Posted by bridgman View Post
    I don't think there would be a lot of joy among the kernel developers if we tried to move all that into kernel space.
    Why? I wouldn't bet on that! Just ask Linus Torvalds, for example!

    If he says no, then it's no. But your speculation is just FUD!

    Quote Originally Posted by bridgman View Post
    The multimedia API framework is quite different between Android and X/Wayland. They may converge over time but right now they are very different.
    LOL... APIs can always be translated with a compatibility layer.

    You can also work on this over the long run.

    Over the long term, this is the best way to fix the Linux fragmentation.
    Last edited by Qaridarium; 03-25-2012 at 03:20 PM.

  3. #33
    Join Date
    Nov 2009
    Location
    Italy
    Posts
    970

    Default

    I figured out how to make radeon better than Catalyst: every time Qaridarium writes some bullshit, instead of wasting your time replying you should write a couple of lines of code. At that pace I'm pretty sure radeon will double Catalyst's performance in no more than a month.
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)

  4. #34
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,514

    Default

    That is a *great* idea!

  5. #35
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by bridgman View Post
    That is a *great* idea!
    Sure, but your argument about the kernel source size is still bullshit.

    20 years ago the kernel source was so much smaller; maybe we should use a time machine to go back in time to make sure the kernel source stays small...

    This is just bullshit!

    And hey, don't write code, it makes the kernel source bigger, and we all know that's "bad" LOL

    Bridgman's bullshit logic at work...

  6. #36
    Join Date
    Jan 2008
    Posts
    299

    Default

    Quote Originally Posted by Qaridarium View Post
    Sure, but your argument about the kernel source size is still bullshit.

    20 years ago the kernel source was so much smaller; maybe we should use a time machine to go back in time to make sure the kernel source stays small...

    This is just bullshit!

    And hey, don't write code, it makes the kernel source bigger, and we all know that's "bad" LOL

    Bridgman's bullshit logic at work...
    Size isn't the only factor to consider in something like this.

    I think what you're suggesting is to move the entire 3D stack into the kernel. That would include the GLSL compiler and a lot of other core Mesa components. The GLSL compiler, for instance, is C++, which is a no-go for the kernel. Then there's the fact that you can't do floating-point operations in the kernel, which are obviously quite important for OpenGL.

    Also there's the issue of security -- moving thousands of lines of code into kernel space...
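
    To make the floating-point problem concrete, here is a minimal sketch of what any in-kernel FP use has to look like on x86. kernel_fpu_begin()/kernel_fpu_end() are the real x86 kernel API for this; the scale_value() helper is made up for illustration.

    Code:
    #include <linux/types.h>
    #include <asm/i387.h>   /* kernel_fpu_begin()/kernel_fpu_end() on x86 */

    /*
     * Hypothetical helper: scale a value by 1.5 in kernel space.
     * The FPU state is not saved on kernel entry, so every FP use
     * must be bracketed explicitly, and the bracketed region must
     * not sleep or be preempted.
     */
    static u32 scale_value(u32 in)
    {
            u32 out;

            kernel_fpu_begin();            /* saves user FPU state, disables preemption */
            out = (u32)((float)in * 1.5f); /* the only place FP is allowed */
            kernel_fpu_end();              /* restores user FPU state */

            return out;
    }

    Wrapping every OpenGL calculation like that would be both slow and fragile, which is why the kernel documentation recommends fixed-point arithmetic instead.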

  7. #37
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by mattst88 View Post
    Size isn't the only factor to consider in something like this.

    I think what you're suggesting is to move the entire 3D stack into the kernel. That would include the GLSL compiler and a lot of other core Mesa components. The GLSL compiler, for instance, is C++, which is a no-go for the kernel. Then there's the fact that you can't do floating-point operations in the kernel, which are obviously quite important for OpenGL.

    Also there's the issue of security -- moving thousands of lines of code into kernel space...
    "The GLSL compiler, for instance, is C++ which is a no-go for the kernel."

    LOL there is no real GLSL compiler. you can rewrite this sad piece .
    the status for the GLSL compilers for the r600 are "Sad" and yes maybe we will never get a good VLIW compiler. this means its a death horse.

    "Then there's that you can't do floating-point operations in the kernel, which are obviously quite important for OpenGL."

    you miss an important step in your argument why should this be a no go?

    or do i have to ask linus ? LOL

  8. #38
    Join Date
    Jan 2008
    Posts
    299

    Default

    There's a serious inability on your part to participate in a reasonable discussion. I probably shouldn't bother responding, but I'll do it only once more given the lack of meaningful response.

    Quote Originally Posted by Qaridarium View Post
    "The GLSL compiler, for instance, is C++ which is a no-go for the kernel."

    LOL there is no real GLSL compiler. you can rewrite this sad piece .
    the status for the GLSL compilers for the r600 are "Sad" and yes maybe we will never get a good VLIW compiler. this means its a death horse.
    The GLSL compiler, you know, the one that Intel wrote? http://cgit.freedesktop.org/mesa/mesa/tree/src/glsl

    It's C++.

    You're thinking of the piece of r600g that translates some IR into hardware instructions. That's not the GLSL compiler.

    Quote Originally Posted by Qaridarium View Post
    "Then there's that you can't do floating-point operations in the kernel, which are obviously quite important for OpenGL."

    you miss an important step in your argument why should this be a no go?

    or do i have to ask linus ? LOL
    I can't tell if that's a particularly sad attempt to dodge a very real difficulty of moving user space code into the kernel, or if you actually don't understand.

    I'll assume it's the latter and waste a bit more of my time explaining it to you.

    From the kernel documentation (http://git.kernel.org/?p=linux/kerne...g.tmpl;hb=HEAD)

    Code:
    No floating point or MMX
          The FPU context is not saved; even in user context the FPU
          state probably won't correspond with the current process:
          you would mess with some user process' FPU state.  If you
          really want to do this, you would have to explicitly
          save/restore the full FPU state (and avoid context
          switches).  It is generally a bad idea; use fixed point
          arithmetic first.
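
    And "use fixed point arithmetic" means something like this minimal sketch: a 16.16 format where the fix16 name and helpers are made up for illustration.

    Code:
    #include <stdint.h>

    /* 16.16 fixed point: the real value times 65536, in an int32_t */
    typedef int32_t fix16;

    #define FIX16_ONE (1 << 16)

    static inline fix16 fix16_from_int(int32_t a)
    {
            return a << 16;
    }

    static inline fix16 fix16_mul(fix16 a, fix16 b)
    {
            /* widen to 64 bits so the intermediate product can't overflow */
            return (fix16)(((int64_t)a * b) >> 16);
    }

    /*
     * Example: 2.5 * 3 == 7.5 entirely in integer registers, no FPU:
     * fix16_mul(FIX16_ONE * 5 / 2, fix16_from_int(3)) == FIX16_ONE * 15 / 2
     */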

  9. #39

    Default

    Quote Originally Posted by bridgman View Post
    for the next generation we are starting sufficiently early that we should be able to hide both development and review in the pre-launch window where it doesn't impact users or community developers
    You mean the top GPU in the 8xxx series? Or something beyond that?

  10. #40
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    Quote Originally Posted by mattst88 View Post
    There's a serious inability on your part to participate in a reasonable discussion. I probably shouldn't bother responding, but I'll do it only once more given the lack of meaningful response.
    'Serious inability'? If my inability matters more to you than the topic, then your mental fascism is not my problem.

    Quote Originally Posted by mattst88 View Post
    The GLSL compiler, you know, the one that Intel wrote? http://cgit.freedesktop.org/mesa/mesa/tree/src/glsl
    It's C++.
    I don't care about Intel shit in an AMD topic.
    The solution is easy: just throw the Intel shit in the garbage.
    We need an AMD/VLIW compiler! Not an Intel JOKE.
    The Intel compiler is not VLIW! You only get 1/5 of the speed on AMD hardware; this is just a FUD attack from Intel on AMD!



    Quote Originally Posted by mattst88 View Post
    I can't tell if that's a particularly sad attempt to dodge a very real difficulty of moving user space code into the kernel, or if you actually don't understand.

    I'll assume it's the latter and waste a bit more of my time explaining it to you.

    From the kernel documentation (http://git.kernel.org/?p=linux/kerne...g.tmpl;hb=HEAD)

    Code:
    No floating point or MMX
          The FPU context is not saved; even in user context the FPU
          state probably won't correspond with the current process:
          you would mess with some user process' FPU state.  If you
          really want to do this, you would have to explicitly
          save/restore the full FPU state (and avoid context
          switches).  It is generally a bad idea; use fixed point
          arithmetic first.
    It's just the wrong way to "think", because you can calculate EVERYTHING without floating point; that's why you can run floating-point emulation!

    Now what? More FUD from your side?
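
    (For what it's worth, floating-point emulation is a real technique; the kernel's math-emu does exactly this for CPUs without an FPU. The basic building block is unpacking IEEE-754 bits with integer operations, roughly like the sketch below, where the struct and helper names are made up for illustration.)

    Code:
    #include <stdint.h>

    /*
     * Unpack an IEEE-754 single-precision float using only integer
     * operations: the first step of any software FP emulation.
     */
    struct soft_float {
            uint32_t sign;      /* 1 bit */
            int32_t  exponent;  /* 8 bits, bias of 127 removed */
            uint32_t mantissa;  /* 23 bits, implicit leading 1 not added */
    };

    static struct soft_float soft_float_unpack(uint32_t bits)
    {
            struct soft_float f;

            f.sign     = bits >> 31;
            f.exponent = (int32_t)((bits >> 23) & 0xff) - 127;
            f.mantissa = bits & 0x7fffff;
            return f;
    }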
