
The GeForce RTX 2080 Ti Arrives For Linux Benchmarking


  • coder
    replied
Originally posted by OMTDesign
    I wonder how well these cards perform in AI and machine learning applications. With the addition of all those extra RTX and Tensor cores I am hoping to see Titan V level performance.
RT cores won't help with AI.

    Otherwise, yes. The specs say that Tensor performance should be close to Titan V at training, and well beyond anything at inference.
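For a sense of why fp16 holds up at inference but accumulation still wants fp32 (the tensor cores multiply in fp16 and accumulate in fp32), here's a quick CPU-side NumPy sketch of mine (my own illustration, not an NVIDIA example):

```python
import numpy as np

# fp16 *storage* loses little: quantizing an input costs roughly 2^-11
# relative error, which inference tolerates fine.
x = np.float32(0.3333333)
print(np.float16(x))            # prints a value within ~1e-4 of 1/3

# fp16 *accumulation* is the real hazard: above 2048 the spacing between
# representable fp16 values is 2, so adding 1.0 no longer changes the sum.
s = np.float16(0)
for _ in range(4096):
    s = np.float16(s + np.float16(1))
print(s)                        # stalls at 2048, half the true total
```

That stall is exactly what fp32 accumulation avoids, which is why mixed precision trains where pure fp16 diverges.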



  • coder
    replied
Originally posted by pegasus
    If you ignore miners, the largest audience for these cards is the AI crowd. They can afford to pay for them much more than gamers, but it's still cheaper and more cost-effective for them to go for gaming cards instead of server class Teslas.
Agreed. Caffe2 is probably a good test vehicle: it supports fp16 and integer arithmetic, and is otherwise rather well supported (including on other hardware).

    You'll find an example commandline in this (fixed) issue:

    https://github.com/caffe2/caffe2/issues/1408

Originally posted by pegasus
    Tesla T4 is supposedly very good at it at 75W only and if this 2080ti shows same numbers, gamers will whine again about availability
The Tesla T4 is basically a cut-down RTX 2080 without display outputs. They use the same TU104 GPU, but the T4 has fewer cores enabled and lower clocks, in order to fit a 75 W power envelope. The T4 also uses ECC RAM (and twice as much of it), and has a server-friendly design. So they really aren't terribly comparable.

    Nvidia did the same thing with the Tesla P4, which used the same GP104 GPU as the GTX 1080.



  • coder
    replied
Originally posted by Brisse
Apparently they are designed similarly to something Apple would do. In short: they are not made to be taken apart.
    https://youtu.be/w9FtXZGQzfM?t=6m8s
That's rather an overstatement. If you watch the first video, he had little trouble removing the cooler from the board, which is the main thing people do (e.g. to install a waterblock).

The hard part was dismantling the cooler itself. Sure, that might be an issue if you wanted to replace a fan motor, but I'm not sure how common that is. Maybe for crypto miners, but they will probably eschew the FE models in favor of cheaper AIB partner boards.



  • vsteel
    replied
I am sure this will be part of the test, but I would love to see "Rise of the Tomb Raider", "Mad Max", and "Deus Ex" at 2K and 4K with graphics settings maxed, and could you please include a 980 (Ti)? I currently have a 980 and want to see what kind of upgrade I would be getting. I also have a 1080x1200 monitor, and if I upgrade the graphics card I will upgrade the monitor as well, so I would like to see whether I am better off going with a 2K or a 4K monitor.



  • boxie
    replied
The leaked numbers I have seen suggest it is twice as fast as my Vega 64. If you could confirm that by including some Vega numbers, that would be awesome.



  • shaklee3
    replied
Michael, I'd like to see tensor core SGEMM benchmarks for varying matrix sizes if you can. AnandTech posted similar results for the V100.
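The shape of such a benchmark is simple: time C = A·B at each size and convert to GFLOP/s. A CPU-side NumPy sketch of what I mean (names are mine; on the card you'd push the same loop through cuBLAS via CuPy or PyTorch, with fp16 inputs to engage the tensor cores):

```python
import time
import numpy as np

def time_sgemm(n, repeats=3):
    """Time an n x n single-precision matrix multiply; return GFLOP/s."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b                       # the GEMM under test
        best = min(best, time.perf_counter() - start)
    flops = 2.0 * n ** 3            # n^3 multiply-adds per GEMM
    return flops / best / 1e9

for n in (256, 512, 1024, 2048):
    print(f"{n:5d}: {time_sgemm(n):8.1f} GFLOP/s")
```

Sweeping sizes matters because tensor core utilization depends heavily on how well the matrix tiles fit the hardware, which is what the AnandTech V100 numbers showed.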





  • Michael
    replied
Originally posted by xorbe
    I thought the embargo was until the 19th, now it's the 20th you say? Ah well.
It's the 19th? That's news to me; I was assuming Thursday, since that's the ship date, but I've had minimal communication out of NVIDIA for this launch... I am still waiting on the driver, which I'm told will be out tomorrow.



  • xorbe
    replied
    I thought the embargo was until the 19th, now it's the 20th you say? Ah well.



  • juno
    replied
    Originally posted by tildearrow

    They may use this to be able to lie in the future (say, RTX 3080 with claims on higher amount of RT cores but actually same as 2080, but better performance because NVIDIA cheats on the driver)...

    I hate when we don't have control over our hardware.
It's not like you can take the cooler apart and count the number of RT cores (or anything else, really). Maybe, depending on how the RT hardware is integrated into the GPU, you could see something if you ground away layers of the die and looked at it under a microscope. But you can access the GPU anyway by just taking the cooler off (without disassembling it completely). I don't think your speculation makes sense.

Originally posted by Xaero_Vincent
    $1200 was the entire cost of my gaming desktop I built years ago. Miners have sure driven the cost of GPUs up.
There are more reasons than miners. Nvidia has a history of raising prices with each generation, going back many years now, and they have no competition on the very high-end products. But probably the worst part is that people (consumers) are willingly paying those prices. Mining with GPUs stopped being economical here a long time ago due to high energy prices, yet card prices are still clearly above MSRP. They dropped a bit just recently, and guess what's happening? People are selling and buying used parts for more money than the original prices of new products, which include full warranty, often better cooling, free games, and the fact that they're new. That's just ridiculous.
    Last edited by juno; 18 September 2018, 08:29 PM.

