Intel's New Brand For High Performance Discrete Graphics: Arc
-
Originally posted by arQon: My original comment was "I bet you can get 90% of the way there via ...".
Originally posted by arQon: By the end of the thread, you appeared to have morphed that into "I bet you can have something exactly equivalent to DLSS via ...",
As for where the thread was going, you seemed to be casting doubt on whether DLSS actually used deep learning, which I was responding to.
Originally posted by arQon: That's all.
-
Originally posted by coder: Why do you say that? If you're going to throw shade
My original comment was "I bet you can get 90% of the way there via ...". By the end of the thread, you appeared to have morphed that into "I bet you can have something exactly equivalent to DLSS via ...", which obviously is both a bit ridiculous and very different from what I actually said. That's all.
-
Originally posted by arQon: I think you've lost track of the thread at some point.
Originally posted by arQon: "Uses the Tensor cores" isn't really a compelling argument,
Originally posted by arQon: if running code on them automatically makes something count as "AI" then I guess that explains why we're drowning in a sea of BS PR lately.
-
Originally posted by coder: In neither case can you get by with a LUT. People have profiled it and observed that it indeed does use the Tensor cores. As compared with a LUT, a deep learning model gives you better detection and handling of corner cases that tend to stymie conventional scalers.
"Uses the Tensor cores" isn't really a compelling argument, but if running code on them automatically makes something count as "AI" then I guess that explains why we're drowning in a sea of BS PR lately.
-
Originally posted by arQon: Totally fair: I'm not blaming you for the BS at all - I'm just, y'know, REALLY tired of seeing a tiny subset of 25-year-old ML called "AI" *and* attached to apparently anything that runs on electricity at all. And plenty of stuff that doesn't even clear THAT low a bar...
I'm sure we all know "AI" covers a much broader set of fundamental techniques.
Originally posted by arQon: Unless DLSS only works after spending 2000 hours rendering and resampling 4K frames from each specific game, you're implicitly looking at what's really just a LUT indexed by dRGB/dA/dz. (A sort of "CAS on steroids", basically).
In neither case can you get by with a LUT. People have profiled it and observed that it indeed does use the Tensor cores. As compared with a LUT, a deep learning model gives you better detection and handling of corner cases that tend to stymie conventional scalers.
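To make that contrast concrete, here is a toy NumPy sketch (the tone curve and 3x3 kernel below are hand-made placeholders, not anything DLSS actually learned) of why a pointwise LUT and even a tiny learned filter behave differently: the LUT sees only each pixel's own value, while a convolution sees a neighbourhood, which is exactly where corner cases like thin geometry and text live.
```python
# Toy contrast between the two approaches under discussion: a pointwise LUT can only
# map each pixel's own (quantized) value to an output, while even a tiny learned
# convolution sees a neighbourhood and can treat fences, text, thin edges, etc.
# differently. The kernel and curve below are illustrative placeholders.
import numpy as np

def lut_pass(img, lut):
    """Pointwise: output depends only on each pixel's own value."""
    idx = np.clip((img * (len(lut) - 1)).astype(np.int32), 0, len(lut) - 1)
    return lut[idx]

def conv_pass(img, kernel):
    """Learned-filter stand-in: output depends on a 3x3 neighbourhood."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

img = np.random.rand(64, 64).astype(np.float32)
lut = np.linspace(0.0, 1.0, 256, dtype=np.float32) ** 0.9  # arbitrary tone curve
kernel = np.array([[0, -0.25, 0],
                   [-0.25, 2.0, -0.25],
                   [0, -0.25, 0]], np.float32)              # mild sharpen kernel

a = lut_pass(img, lut)      # cannot distinguish a lone bright pixel from a bright edge
b = conv_pass(img, kernel)  # responds to the surrounding structure as well
```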
-
Originally posted by coder: I'm just trying to use consistent terminology. I don't want to get into a semantic debate over "AI", but I take it to mean their upsampling is closer to DLSS than FSR.
>> (In fact, I expect you could reproduce about 90% of DLSS with a 3D texture LUT and a pretty simple shader, if you tried
> I guess we're going to have to agree to disagree on this.
Really? I mean, yeah, I didn't actually spec it out, but it "feels about right" to me. Unless DLSS only works after spending 2000 hours rendering and resampling 4K frames from each specific game, you're implicitly looking at what's really just a LUT indexed by dRGB/dA/dz. (A sort of "CAS on steroids", basically).
I mean, you're right that it probably needs either 1D LUTs for A and z, or a 2D one for both if there's a nontrivial relationship between them - I was working on the assumption that you'd collapse the RGB to luma (Y) and use that with A+z, but that does have corner cases - but other than that, I still think it's viable. You can do remarkable things remarkably easily, sometimes. (FXAA is the poster child for that, though Oculus's "un-distortion" is also up there).
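For what it's worth, here is a rough NumPy sketch of the kind of delta-indexed LUT pass being described; the bin counts, table contents, and blend formula are invented purely for illustration and are not anything DLSS or CAS actually uses.
```python
# Rough sketch of the "LUT indexed by dRGB/dA/dz" idea, in NumPy for readability.
# The LUT contents, bin counts, and blend formula are illustrative placeholders.
import numpy as np

BINS = 32                                                   # resolution of each LUT axis
lut = np.random.rand(BINS, BINS, BINS).astype(np.float32)   # stand-in for a tuned table

def quantize(delta, lo, hi):
    """Map a per-pixel delta into a LUT bin index."""
    t = np.clip((delta - lo) / (hi - lo), 0.0, 1.0)
    return np.minimum((t * (BINS - 1)).astype(np.int32), BINS - 1)

def lut_sharpen(luma, alpha, depth):
    """Per-pixel weight looked up from local deltas (a 'CAS on steroids' style pass)."""
    # Simple horizontal neighbour deltas; a real shader would use a full neighbourhood.
    d_luma  = np.abs(np.diff(luma,  axis=1, append=luma[:, -1:]))
    d_alpha = np.abs(np.diff(alpha, axis=1, append=alpha[:, -1:]))
    d_depth = np.abs(np.diff(depth, axis=1, append=depth[:, -1:]))

    idx = (quantize(d_luma, 0, 1), quantize(d_alpha, 0, 1), quantize(d_depth, 0, 50))
    weight = lut[idx]                                  # one table fetch per pixel at runtime
    blurred = (luma + np.roll(luma, 1, axis=1)) * 0.5
    return luma + weight * (luma - blurred)            # unsharp-mask blend driven by the LUT
```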
> However, there's no debating the sheer gulf in "AI compute" power between Nvidia's Tensor-enabled GPUs and equivalent RDNA 2.
I'm not sure why that's relevant here, but I'll take your word for it.
-
Originally posted by kylew77: A guy can dream: a 17-inch ThinkPad workstation, integrated Intel Arc GPU, 5.0 GHz+ 8-core+ CPU, running OpenBSD 7.something flawlessly on an Optane SSD!
Actually, better not: it will suck to wake up...
-
Originally posted by arQon: The "AI" in use in that particular reference is 100% buzzword BS and 0% actual AI.
Originally posted by arQon: whether or not there are Tensor equivalents on the card is not meaningfully related to the presence or absence of this feature.
Originally posted by arQon: > And RDNA 2 is reliant on some packed-math instructions, somewhat akin to SSE.
Which is also all that's needed for this (and for that matter, nearly everything "AI" that gets offloaded these days).
(In fact, I expect you could reproduce about 90% of DLSS with a 3D texture LUT and a pretty simple shader, if you tried - but then it wouldn't fill the Buzzword Bingo card for marketing...)
Nvidia first added packed math in Pascal. Volta & Turing's Tensor cores were a step-change above and beyond that.
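As a back-of-envelope illustration of that step change (the per-instruction figures are the commonly cited ones for Pascal FP16x2 and Volta's 4x4x4 tensor op; the code itself is just arithmetic, not a performance claim):
```python
# Packed FP16 ("SSE-like") math retires a couple of FMAs per instruction, while a
# tensor-core-style MMA retires a whole small matrix multiply-accumulate at once.
import numpy as np

def packed_fma(a2, b2, c2):
    """FP16x2-style packed math: 2 fused multiply-adds per 'instruction'."""
    return a2 * b2 + c2                                   # elementwise on a pair of lanes

def tensor_mma(A, B, C):
    """Tensor-core-style op: D = A @ B + C on a 4x4 tile (64 FMAs per 'instruction')."""
    return A.astype(np.float32) @ B.astype(np.float32) + C

A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)
D = tensor_mma(A, B, C)

fmas_per_packed_op = 2           # two FP16 lanes per packed instruction
fmas_per_mma_op    = 4 * 4 * 4   # 64 multiply-adds folded into one matrix op
print(fmas_per_mma_op / fmas_per_packed_op)  # ~32x more math per issued operation
```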
-
Originally posted by torsionbar28: no way Intel is going to allow their GPU to deliver higher benchmark results on an AMD CPU.
It's also a weak move: an implicit admission that Intel can only sell CPUs through cheap tricks. And it would only work if Intel is able to build decidedly superior GPUs, which so far looks questionable. Otherwise, it would do more to hurt GPU sales than help CPU sales. So, I call BS on that.
Originally posted by torsionbar28: If history is any indication, they will somehow de-optimize the code path when an AMD platform is detected.