NVIDIA Announces Open-Source CV-CUDA Project


  • Developer12
    replied
    Originally posted by coder View Post
    Not at all. Dali is very low-level and mostly focuses on the primitives needed for feeding a standard inferencing or training pipeline.
    Emphasis on "new iteration."

    Originally posted by coder View Post
    EVGA is relevant to gamers, most of whom don't understand or care about shit like this.
    Any publicity that isn't EVGA publicity is good publicity. Every word written on Phoronix about this is one that isn't written about EVGA, or about the fact that their PCIe 3.0 adapters are firestarters.



  • pal666
    replied
    Originally posted by coder View Post
    Huh?? OpenCV already has both a CUDA and OpenCL backend (largely maintained by Intel, I think). What would forking it supposedly accomplish?
    that's not me wanting to fork novideo libraries to add open backends



  • pal666
    replied
    Originally posted by darkdragon-001 View Post
    OpenCV already has a CUDA module: https://docs.opencv.org/3.4/d2/dbc/cuda_intro.html

    It would probably be much more helpful to upstream their efforts there. As it is, it's just marketing bullshit...
    just a minor clarification: in my comment the space between "open" and "cv" was intended, i.e. "some open-source library for computer vision"



  • pal666
    replied
    Originally posted by sarmad View Post
    Not sure what you mean by "you"; I'm not affiliated with nVidia. If you are asking why nVidia didn't fork OpenCV instead of building a new tool, then that's a good question and I don't have an answer to it. I'm merely saying that nVidia building a new library and making it open source is better than them building the same library but keeping it closed source.
    no, i was asking you who was talking about forking this great novideo library. why did you have to wait for novideo library when you already had open source libraries?
    Last edited by pal666; 24 September 2022, 08:16 AM.



  • coder
    replied
    Originally posted by Developer12 View Post
    It sounds like a new iteration of their dali dataloader.
    Not at all. Dali is very low-level and mostly focuses on the primitives needed for feeding a standard inferencing or training pipeline.


    Originally posted by Developer12 View Post
    No doubt they're trotting this out now, so far before actual release, to distract people from EVGA leaving.
    WTF? That makes zero sense. This is targeted mostly at cloud apps, where EVGA is nonexistent. EVGA is relevant to gamers, most of whom don't understand or care about shit like this.

    Nvidia is a big company and they have lots of people working on various software projects. They're doing projects like this fairly often, but you probably don't happen to pay much attention to that. You're just trying to correlate the last few things you heard about them in the news, which is a pretty poor heuristic for understanding the world. Especially considering this site has very uneven coverage of Nvidia and their activities.

    Originally posted by Developer12 View Post
    I wonder if they've even implemented anything yet or if it's just a "quick! think of something!" idea.
    You could apply for early access and find out.

    https://developer.nvidia.com/cv-cuda...f22_prsy_en-us



  • coder
    replied
    Originally posted by pal666 View Post
    what was stopping you from forking opencv?
    Huh?? OpenCV already has both a CUDA and OpenCL backend (largely maintained by Intel, I think). What would forking it supposedly accomplish?



  • coder
    replied
    Originally posted by zexelon View Post
    It's probably not a trap per se... but the fine print is that it's a stripped-down and probably gutted version of CUDA targeted only at computer vision... and while computer vision is awesome and all, it's pretty much run of the mill, with OpenCV having been around for, what, a decade plus now?
    The article clearly says it's a library built atop CUDA. And the distinction vs. OpenCV is that (based on their descriptions), these sound like higher-level operations than what OpenCV contains.

    "CV-CUDA accelerates AI special effects such as relighting, reposing, blurring backgrounds and super resolution."

    I presume their "10x performance improvement" is probably relative to doing the same things via OpenCV, because it wasn't really architected to prioritize performance -- especially on GPUs.
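    To illustrate the kind of "higher-level" operation being described: background blurring is basically a blur pass composited against the original image through a foreground mask. A minimal CPU sketch in plain NumPy (this is purely illustrative — these function names are made up and have nothing to do with CV-CUDA's actual API; the library would run equivalent kernels on the GPU):

    ```python
    import numpy as np

    def box_blur_1d(a, k):
        """Moving average of width k along the last axis, edge-padded."""
        pad = k // 2
        p = np.pad(a, [(0, 0)] * (a.ndim - 1) + [(pad, pad)], mode="edge")
        c = np.cumsum(p, axis=-1, dtype=np.float64)
        c = np.concatenate([np.zeros_like(c[..., :1]), c], axis=-1)
        return (c[..., k:] - c[..., :-k]) / k  # windowed sums via cumsum差

    def box_blur(img, k=7):
        """Separable box blur over the two spatial axes of an HxWxC image."""
        out = img.astype(np.float64)
        for axis in (0, 1):
            out = np.moveaxis(box_blur_1d(np.moveaxis(out, axis, -1), k), -1, axis)
        return out

    def blur_background(img, fg_mask, k=7):
        """Keep foreground pixels, replace the background with a blurred copy."""
        blurred = box_blur(img, k)
        m = fg_mask[..., None].astype(np.float64)  # broadcast mask over channels
        return (m * img + (1.0 - m) * blurred).astype(img.dtype)
    ```

    The point of putting an op like this in one library call is that the blur, the mask composite, and the memory traffic between them all stay on the GPU, which is presumably where the claimed speedup over a CPU-oriented pipeline comes from.
    
    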

    Originally posted by zexelon View Post
    This is more of a straight up marketing ploy.
    I don't believe so, but I think it is intended to entice people to add (yet another) CUDA dependency to their cloud apps.



  • tildearrow
    replied
    Originally posted by sarmad View Post

    Still useful. Since it's open source, someone can fork it and make it work with AMD/Intel GPUs while keeping the same API, so that existing user apps continue to work. The important thing is the user-facing API.
    ...and who's gonna do that?



  • Developer12
    replied
    It sounds like a new iteration of their dali dataloader.

    No doubt they're trotting this out now, so far before actual release, to distract people from EVGA leaving.
    I wonder if they've even implemented anything yet or if it's just a "quick! think of something!" idea.



  • darkdragon-001
    replied
    Originally posted by pal666 View Post
    i've got impression it's not "stripped down open cuda", but "open cv library, targeting proprietary novideo-only cuda"
    OpenCV already has a CUDA module: https://docs.opencv.org/3.4/d2/dbc/cuda_intro.html

    It would probably be much more helpful to upstream their efforts there. As it is, it's just marketing bullshit...

