
NVIDIA, Mentor Graphics May Harm GCC


  • bridgman
    replied
    Originally posted by Eisnefaust View Post
    As I said, if AMD actually supported open-source development the way its PR loudspeakers paint it to the outside world, today's picture would be different.
    ??? I hear comments like this from time to time, but I don't think anyone has actually been able to provide an example of these alleged "PR loudspeakers". Can you give me an example, please?

    Thanks,
    JB

    Leave a comment:


  • log0
    replied
    Originally posted by GreatEmerald View Post
    I agree that it's not all that useful and that it might be a maintenance burden the GCC maintainers do not want to take on. But I don't see what this has to do with software freedom.

    Again, no, it's not a requirement. It's separate from GCC. It's not like installing GCC would now pull in proprietary software; it's rather that the proprietary software supports the output of GCC. GCC makes the output, and the proprietary software just so happens to be able to use it as input. But that's of no concern to GCC.
    More like GCC making output for proprietary software (I don't see what else PTX would be useful for). To make it more extreme, you could see it as a way of adding proprietary plugins to GCC (with PTX being the interface here), which kinda goes against what GNU stands for.
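    To make that concrete, here is a rough sketch of how the flow would look (assuming a GCC build configured with the nvptx offload target; the flag spellings follow GCC's offloading documentation and may vary by version):

        /* saxpy.c - plain OpenACC; note there is nothing NVIDIA-specific in the source */
        #include <stdio.h>

        int main(void)
        {
            float x[1024], y[1024];
            for (int i = 0; i < 1024; i++) { x[i] = i; y[i] = 1.0f; }

            /* GCC lowers this region to PTX for the nvptx offload target */
            #pragma acc parallel loop copyin(x) copy(y)
            for (int i = 0; i < 1024; i++)
                y[i] = 2.0f * x[i] + y[i];

            printf("y[10] = %f\n", y[10]);
            return 0;
        }

        gcc -fopenacc -foffload=nvptx-none saxpy.c -o saxpy

    The PTX that GCC emits only becomes machine code when NVIDIA's proprietary driver JIT-compiles it at run time; that driver is the "plugin" in the analogy above.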

  • GreatEmerald
    replied
    Originally posted by log0 View Post
    With current GCC targets (real ISAs) you can have a fully open-source stack/OS. This virtual ISA (PTX) target constrains you to platforms supported by NVIDIA's drivers (and maybe even to certain driver versions, at NVIDIA's discretion). Which makes it kinda less useful for GNU, I guess.
    I agree that it's not all that useful and that it might be a maintenance burden the GCC maintainers do not want to take on. But I don't see what this has to do with software freedom.

    Originally posted by tarceri View Post
    Yeah, that's fine, but this is creating functionality in GCC that can only be used with a proprietary part. There is a difference between choosing to use proprietary libraries in your app and compiling it with GCC, and building functionality into GCC which requires them. I personally think it's a bad trend which I wouldn't want to see.
    Again, no, it's not a requirement. It's separate from GCC. It's not like installing GCC would now pull in proprietary software; it's rather that the proprietary software supports the output of GCC. GCC makes the output, and the proprietary software just so happens to be able to use it as input. But that's of no concern to GCC.

  • Temar
    replied
    Originally posted by blackiwid View Post
    I read no real argument. Maybe Michael does overdo it or mixes too much personal opinion into articles; so what? It's his right, there's no law against such stuff. It's his site, deal with it or don't use this site.
    So basically your only problem is me criticising Michael's article? LOL, better deal with it. If Michael didn't want any feedback, he could simply close this forum.

  • tarceri
    replied
    Originally posted by GreatEmerald View Post
    And a proprietary application compiled with GCC can have a dependency on external proprietary libraries (probably even the same CUDA libraries), without which the program won't run. Again, it's of no concern to GCC.
    Yeah, that's fine, but this is creating functionality in GCC that can only be used with a proprietary part. There is a difference between choosing to use proprietary libraries in your app and compiling it with GCC, and building functionality into GCC which requires them. I personally think it's a bad trend which I wouldn't want to see.
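    For comparison, the "choosing to use proprietary libraries in your app" case looks roughly like this - a minimal sketch, assuming the CUDA toolkit is installed under /usr/local/cuda (the paths are an assumption; adjust for your system):

        /* devcount.c - ordinary C compiled by GCC, but linked against NVIDIA's
           proprietary CUDA runtime; GCC itself needs no special support here */
        #include <stdio.h>
        #include <cuda_runtime.h>   /* header shipped with the proprietary toolkit */

        int main(void)
        {
            int n = 0;
            if (cudaGetDeviceCount(&n) != cudaSuccess) {
                fprintf(stderr, "no CUDA runtime/driver available\n");
                return 1;
            }
            printf("%d CUDA device(s) found\n", n);
            return 0;
        }

        gcc devcount.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart -o devcount

    Here the proprietary dependency lives entirely in the application; building it into GCC itself is the part I object to.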

  • blackiwid
    replied
    Originally posted by Temar View Post
    lol, nice try. Did you already run out of arguments?

    Could my failure to understand your irony simply be a language barrier problem?
    Then maybe you should not post on English sites if you don't understand the answers. Maybe you didn't understand the article either?
    I even put "retards" in quotes once; in other languages, too, that's a clear sign that I didn't really mean it.

    As for arguments, you bring none, only that you think what NVIDIA does is OK while others find it not OK; just leave it at that...

    I read no real argument. Maybe Michael does overdo it or mixes too much personal opinion into articles; so what? It's his right, there's no law against such stuff. It's his site, deal with it or don't use this site.
    Maybe I mixed abuse from other people posting comments here into my answer; it's hard to read six pages full of bullshit and then answer 20 people on just their own concerns. So if I got it wrong and blamed him for name-calling, like the word "nazi" that was thrown around, when somebody else did that, sorry for that.

  • log0
    replied
    Originally posted by GreatEmerald View Post
    And a proprietary application compiled with GCC can have a dependency on external proprietary libraries (probably even the same CUDA libraries), without which the program won't run. Again, it's of no concern to GCC.
    With current GCC targets (real ISAs) you can have a fully open-source stack/OS. This virtual ISA (PTX) target constrains you to platforms supported by NVIDIA's drivers (and maybe even to certain driver versions, at NVIDIA's discretion). Which makes it kinda less useful for GNU, I guess.

  • Temar
    replied
    Originally posted by blackiwid View Post
    OK, that proves it: you are a kid under 10 years old, because children like you don't understand irony. As if I would really have called them retards; that's irony, ROFL. But OK, it's not your fault; a kid not understanding irony is normal, not his fault. Maybe your parents should protect you from yourself, posting such stuff on random websites with fanatics on them. (That's irony too; I'll try to mark it explicitly from now on, as long as you keep writing here, kiddo.)
    lol, nice try. Did you already run out of arguments?

    Could my failure to understand your irony simply be a language barrier problem?

  • Eisnefaust
    replied
    Do not glorify AMD

    Well, AMD might offer some documentation of their ISA, but compared to the speed at which development of the real open-source UNIX-like systems, mainly the *BSD lines, is moving, there is a lot to reconsider, and a lot to be unraveled as PR from AMD. OPEN in the strict sense means support for all platforms, and that is not just the pool of Linux derivatives out there.

    At first sight, I agree with the statement on the compiler source that Larabel comments on. On second view, what is wrong with OpenACC in GCC? Look at LLVM: LLVM has hardware backends, each generating code for a specific target. If OpenACC is a concept that initially has only a single target, that is a starting point. I have no idea how restrictive implementing the OpenACC standard in GCC (or even LLVM/Clang) would be, or whether its use would really be restricted to the nVidia PTX backend; I doubt the design would be that poisonously nVidia-only. As I said, if AMD actually supported open-source development the way its PR loudspeakers paint it to the outside world, today's picture would be different; there wouldn't be only one guy developing code for Linux only (Mesa and X11 seem to be claimed by the Linux community now ...), and the same goes for Intel and their confusing KMS policy.
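    For illustration, LLVM's retargeting looks roughly like this (a sketch, assuming a Clang/LLVM build with the NVPTX backend enabled; the target triple is taken from LLVM's documentation and may differ between versions):

        /* kernel.c - trivial, freestanding code, so it can be aimed at a GPU backend too */
        void scale(float *v, int n, float s)
        {
            for (int i = 0; i < n; i++)
                v[i] *= s;
        }

        clang -S kernel.c -o kernel-host.s                             # default host backend
        clang -S --target=nvptx64-nvidia-cuda kernel.c -o kernel.ptx   # NVPTX backend emits PTX

    The frontend and the source stay the same; only the backend changes, which is why a PTX target in GCC need not poison the rest of the compiler.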

    We all "want to make money", isn't that the so called anglo-american credo? It is natural, that nVidia tries to secure their claim. At the end it is the customers choice how the development will find its course. If customers WANT a more open standard, they have the choice to avoid all non-standards. But in fact, most customers are reeducated stupid consumer mules.

    Another example, a bit away from the main subject, but with a similar message: AMD and nVidia cut double-precision facilities from their consumer GPUs. Consumers have to buy expensive professional GPUs if they want scientific number-crunching power, but most of those GPGPU devices, especially from nVidia, have limited video capabilities - so it is wise to buy another energy-consuming GPU for the very same computer (if it is a workstation). I never understood why I should spend 600 or even 900 EUR on a gaming card with all the neat double-precision capabilities gone or not supported. High-end in price but low-end in really useful capabilities? As a customer, I can (and do) reject those devices and stay with cheap GPUs, although I'd spend more money if I could use them for purposes other than only video display.

    And at this very moment, nVidia is the only choice if someone uses one of the *BSD UNIX systems, since nVidia offers high-quality drivers (with CUDA ripped out). This decision was made when AMD didn't offer BSD blobs for their crap and the open-source drivers crashed the HD49XX, HD46XX and HD47XX GPUs, of which we had a lot at the department at that time, nearly every day.

    Stop bashing nVidia.

  • blackiwid
    replied
    Originally posted by Temar View Post
    I don't see any anger in my posts, nor did I call anyone names. You are the one talking about "retards" and you see anger where there is none. I simply have a different opinion which you obviously don't like.
    OK, that proves it: you are a kid under 10 years old, because children like you don't understand irony. As if I would really have called them retards; that's irony, ROFL. But OK, it's not your fault; a kid not understanding irony is normal, not his fault. Maybe your parents should protect you from yourself, posting such stuff on random websites with fanatics on them. (That's irony too; I'll try to mark it explicitly from now on, as long as you keep writing here, kiddo.)
