
Intel's New Brand For High Performance Discrete Graphics: Arc

  • smitty3268
    replied
    Originally posted by coder View Post
    Even 90% has got to be worth some serious $$$. AMD's FSR sure isn't 90%.
    It sure seems to be for 4K resolutions; it only falls apart at lower resolutions.



  • coder
    replied
    Originally posted by arQon View Post
    My original comment was "I bet you can get 90% of the way there via ...".
    Even 90% has got to be worth some serious $$$. AMD's FSR sure isn't 90%.

    Originally posted by arQon View Post
    By the end of the thread, you appeared to have morphed that into "I bet you can have something exactly equivalent to DLSS via ...",
    First of all, I said nothing of the sort until after you accused me of losing track of the thread. Secondly, I didn't say "exactly equivalent"; I said "comparable" (which is more like "approximately equivalent" - 90% would probably count).

    As for where the thread was going, you seemed to be casting doubt on whether DLSS actually used deep learning, which I was responding to.

    Originally posted by arQon View Post
    That's all.
    Yeah? I wouldn't bet on it.



  • arQon
    replied
    Originally posted by coder View Post
    Why do you say that? If you're going to throw shade
    Huh? Wasn't intended in anything like that tone - sorry if it came off that way.

    My original comment was "I bet you can get 90% of the way there via ...". By the end of the thread, you appeared to have morphed that into "I bet you can have something exactly equivalent to DLSS via ...", which obviously is both a bit ridiculous and very different from what I actually said. That's all.



  • coder
    replied
    Originally posted by arQon View Post
    I think you've lost track of the thread at some point.
    Why do you say that? If you're going to throw shade, at least be specific about your allegations.

    Originally posted by arQon View Post
    "Uses the Tensor cores" isn't really a compelling argument,
    If you can build an upsampling filter with comparable quality to DLSS 2, without using deep learning, I'm sure there's a lot of money in it for you.

    Originally posted by arQon View Post
    if running code on them automatically makes something count as "AI" then I guess that explains why we're drowning in a sea of BS PR lately.
    If you understand exactly what they are, then it should be fairly clear that they can't be used extensively for much that isn't deep learning.



  • arQon
    replied
    Originally posted by coder View Post
    In neither case can you get by with a LUT. People have profiled it and observed that it indeed does use the Tensor cores. As compared with a LUT, a deep learning model gives you better detection and handling of corner cases that tend to stymie conventional scalers.
    I think you've lost track of the thread at some point. But in fairness, I'm the one making the claim, so until I can find the time to back it up with code (which will be months from now at best) I'll have to bow out.

    "Uses the Tensor cores" isn't really a compelling argument, but if running code on them automatically makes something count as "AI" then I guess that explains why we're drowning in a sea of BS PR lately.



  • coder
    replied
    Originally posted by arQon View Post
    Totally fair: I'm not blaming you for the BS at all - I'm just, yknow, REALLY tired of seeing a tiny subset of 25-year-old ML called "AI" *and* attached to apparently anything that runs on electricity at all. And plenty of stuff that doesn't even clear THAT low a bar...
    These days, "AI" equates to inferencing with deep neural networks. Hence, AMD's FSR doesn't qualify, while DLSS does.

    I'm sure we all know "AI" covers a much broader set of fundamental techniques.

    Originally posted by arQon View Post
    Unless DLSS only works after spending 2000 hours rendering and resampling 4K frames from each specific game, you're implicitly looking at what's really just a LUT indexed by dRGB/dA/dz. (A sort of "CAS on steroids", basically).
    DLSS 1.0 relied on models that were game-specific and in fact did require a significant training time. DLSS 2.0 seems to build upon TAA and uses a generic model, AFAIK.

    In neither case can you get by with a LUT. People have profiled it and observed that it indeed does use the Tensor cores. As compared with a LUT, a deep learning model gives you better detection and handling of corner cases that tend to stymie conventional scalers.
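
    (For reference, a minimal sketch of the TAA-style accumulation that DLSS 2.0 reportedly builds on, emulated in NumPy. The fixed blend factor and the simple neighbourhood clamp stand in for the part DLSS learns - how aggressively to keep or reject history - so every constant here is an illustrative assumption, not DLSS internals.)

```python
import numpy as np

# Hypothetical sketch only: fixed-weight TAA accumulation, with the learned
# history weighting of DLSS replaced by a constant blend factor. The history
# buffer is assumed to already be reprojected with motion vectors (not shown).

def taa_accumulate(current, history, blend=0.1):
    """current, history: HxWx3 float arrays in [0, 1]."""
    # Neighbourhood clamp: constrain history to the local min/max of the
    # current frame so moving or disoccluded regions don't ghost.
    pads = np.pad(current, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = current.shape[:2]
    neigh = np.stack([pads[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    clamped = np.clip(history, neigh.min(axis=0), neigh.max(axis=0))
    # Exponential accumulation: the effective supersampling comes from folding
    # many jittered frames into the history buffer over time.
    return blend * current + (1.0 - blend) * clamped

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame, history = rng.random((2, 180, 320, 3))
    print(taa_accumulate(frame, history).shape)   # (180, 320, 3)
```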



  • arQon
    replied
    Originally posted by coder View Post
    I'm just trying to use consistent terminology. I don't want to get in a semantic debate over "AI", but I take it to mean their upsampling is closer to DLSS than FSR.
    Totally fair: I'm not blaming you for the BS at all - I'm just, yknow, REALLY tired of seeing a tiny subset of 25-year-old ML called "AI" *and* attached to apparently anything that runs on electricity at all. And plenty of stuff that doesn't even clear THAT low a bar...

    >> (In fact, I expect you could reproduce about 90% of DLSS with a 3D texture LUT and a pretty simple shader, if you tried
    > I guess we're going to have to agree to disagree on this.

    Really? I mean, yeah, I didn't actually spec it out, but it "feels about right" to me. Unless DLSS only works after spending 2000 hours rendering and resampling 4K frames from each specific game, you're implicitly looking at what's really just a LUT indexed by dRGB/dA/dz. (A sort of "CAS on steroids", basically).
    I mean, you're right that it probably needs either 1D LUTs for A and z, or a 2D LUT for both if there's a nontrivial relationship between them - I was working on the assumption that you'd collapse the RGB to luma (Y) and use that with A+z, but that does have corner cases - but other than that, I still think it's viable. You can do remarkable things remarkably easily, sometimes. (FXAA is the poster child for that, though Oculus's "un-distortion" is also up there).

    > However, there's no debating the sheer gulf in "AI compute" power between Nvidia's Tensor-enabled GPUs and equivalent RDNA 2.

    I'm not sure why that's relevant here, but I'll take your word for it.
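
    (For concreteness, a minimal sketch of the difference-indexed LUT idea floated above, in NumPy: a cheap 2x upscale followed by an unsharp-mask pass whose per-pixel weight is fetched from a 3D LUT indexed by local luma/alpha/depth differences. The LUT contents, bin count and indexing are invented for illustration and make no claim about matching DLSS quality.)

```python
import numpy as np

LUT_BINS = 32
# Toy 3D LUT: sharpen more where luma contrast is high, back off near
# alpha/depth discontinuities (i.e. object edges). Contents entirely made up.
i, j, k = np.meshgrid(*(np.linspace(0.0, 1.0, LUT_BINS),) * 3, indexing="ij")
weight_lut = np.clip(i * (1.0 - 0.5 * j) * (1.0 - 0.5 * k), 0.0, 1.0)

def upscale2x(img):
    """Cheap 2x upscale (stand-in for the GPU's bilinear fetch)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def local_range(x):
    """Max minus min over a 3x3 neighbourhood -- the 'd' (difference) term."""
    pads = np.pad(x, 1, mode="edge")
    h, w = x.shape
    neigh = np.stack([pads[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return neigh.max(axis=0) - neigh.min(axis=0)

def lut_upscale(rgb, alpha, depth):
    """rgb: HxWx3, alpha/depth: HxW, all floats in [0, 1]; returns 2x output."""
    up, up_a, up_z = upscale2x(rgb), upscale2x(alpha), upscale2x(depth)
    # Collapse RGB to luma (Y) and index the LUT by local differences dY/dA/dz.
    luma = up @ np.array([0.2126, 0.7152, 0.0722])
    to_idx = lambda d: np.minimum((d * (LUT_BINS - 1)).astype(int), LUT_BINS - 1)
    w = weight_lut[to_idx(local_range(luma)),
                   to_idx(local_range(up_a)),
                   to_idx(local_range(up_z))]
    # "CAS on steroids": unsharp-mask the upscaled image, scaled per pixel
    # by the looked-up weight.
    blur = (up + np.roll(up, 1, 0) + np.roll(up, -1, 0)
               + np.roll(up, 1, 1) + np.roll(up, -1, 1)) / 5.0
    return np.clip(up + w[..., None] * (up - blur), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rgb = rng.random((180, 320, 3))
    alpha = np.ones((180, 320))
    depth = np.tile(np.linspace(0.0, 1.0, 320), (180, 1))
    print(lut_upscale(rgb, alpha, depth).shape)   # (360, 640, 3)
```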



  • uid0
    replied
    Originally posted by kylew77 View Post
    A guy can dream: 17-inch ThinkPad workstation, integrated Intel Arc GPU, 5.0 GHz+ 8-core+ CPU, running OpenBSD 7.something flawlessly on an Optane SSD!
    While you are dreaming, don't forget to ask for a decent 3:2 or 16:10 screen and an Ethernet port!
    Actually, better not: it will suck to wake up...
    Last edited by uid0; 22 August 2021, 11:56 PM. Reason: clarity



  • coder
    replied
    Originally posted by arQon View Post
    The "AI" in use in that particular reference is 100% buzzword bs and 0% actual AI.
    I'm just trying to use consistent terminology. I don't want to get in a semantic debate over "AI", but I take it to mean their upsampling is closer to DLSS than FSR.

    Originally posted by arQon View Post
    whether or not there are Tensor equivalents on the card is not meaningfully related to the presence or absence of this feature.
    It actually is, because RDNA 2 GPUs lack the compute power to do something like DLSS. Nvidia can only do that by virtue of their Tensor cores.

    Originally posted by arQon View Post
    > And RDNA 2 is reliant on some packed-math instructions, somewhat akin to SSE.

    Which is also all that's needed for this (and for that matter, nearly everything "AI" that gets offloaded these days).
    (In fact, I expect you could reproduce about 90% of DLSS with a 3D texture LUT and a pretty simple shader, if you tried - but then it wouldn't fill the Buzzword Bingo card for marketing...)
    I guess we're going to have to agree to disagree on this. However, there's no debating the sheer gulf in "AI compute" power between Nvidia's Tensor-enabled GPUs and equivalent RDNA 2.

    Nvidia first added packed math in Pascal. Volta & Turing's Tensor cores were a step-change above and beyond that.
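
    (To make the "step change" concrete: packed FP16 advances two scalar multiply-accumulates per lane per instruction, while a Volta/Turing-style tensor op advances an entire 4x4 FP16 matrix multiply-accumulate with FP32 accumulation - 64 MACs in one operation. The NumPy emulation below only shows the shape of the work; real throughput also depends on unit counts, clocks and scheduling.)

```python
import numpy as np

def packed_fp16_fma(a, b, c):
    """Packed math: two independent FP16 multiply-adds per 32-bit lane."""
    a, b, c = (x.astype(np.float16) for x in (a, b, c))
    return a * b + c                                  # 2 MACs per lane

def tensor_style_mma(A, B, C):
    """Tensor-core-style op: a small matrix multiply-accumulate in one go,
    FP16 inputs with FP32 accumulation -- 4*4*4 = 64 MACs per operation."""
    return A.astype(np.float32) @ B.astype(np.float32) + C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One packed "instruction" across a 32-lane wave: 32 lanes * 2 values each.
    a, b, c = (rng.random((32, 2)).astype(np.float16) for _ in range(3))
    print(packed_fp16_fma(a, b, c).shape)             # (32, 2)
    # One tensor-style op: an entire 4x4 tile of a matmul -- the inner loop of
    # deep-learning inference -- advanced in a single operation.
    A, B = (rng.random((4, 4)).astype(np.float16) for _ in range(2))
    print(tensor_style_mma(A, B, np.zeros((4, 4), np.float32)).shape)  # (4, 4)
```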



  • coder
    replied
    Originally posted by torsionbar28 View Post
    No way Intel is going to allow their GPU to deliver higher benchmark results on an AMD CPU.
    With Ryzen CPUs quickly gaining popularity, that will mean a lot of low-scoring benchmarks floating around when this thing launches. And when it comes to light that the reason is to sabotage performance on AMD CPUs, there will be even more negative publicity.

    It's also a weak move. It's an implicit admission that Intel can only sell CPUs through cheap tricks. And it would only work if Intel is able to build decidedly superior GPUs, which is so far looking questionable. Otherwise, it would do more to hurt GPU sales than help CPU sales. So, I call BS on that.

    Originally posted by torsionbar28 View Post
    If history is any indication, they will somehow de-optimize the code path when an AMD platform is detected.
    Even in Intel's own software, that's not uniformly true. Check out the OpenVINO benchmarks in PTS. In many of them, the latest AMD CPUs outperform competing Intel models.

