NVIDIA's Latest Open-Source Project Is Their NVDLA Deep Learning Compiler
Originally posted by ElectricPrism:
Can someone explain why some fancy pants name "X Deep Learning Compiler" is any different than a fucking regular compiler.
All these "trendy" words for dumb people are annoying me "The Cloud" (A fucking server computer) "Deep Learning" (If, Then, Else + Scammery, Algorithms, and a Database)
I feel like I'm in some fancy store where they're trying to sell me fucking holy soap supposedly hand crafted by the 12 Apostles, or fucking "moon rocks" disguising something ordinary and extraordinary.
99% Con, 1% Product
A cloud is a server topology whereby there are many servers all providing a service. Users of that service are not concerned with which specific, addressable server instance (virtual or physical server) their data or application is hosted on. The user simply accesses "the cloud" and an appropriate server within the cloud is transparently selected to work for that user.
So, for example:
- Instead of going to ftp://box44.example.com to store and access their data, the user uses the Dropbox application and does not know or care which specific server hosts their data. Their data is "in the cloud", so to speak.
- Instead of paying for a physical server to host a game server on, you use a matchmaking service built into the game. The matchmaking service is a type of cloud computing because the user does not know or care which physical server will host their game. Their game is hosted "in the cloud".
So "the cloud" can be thought of as ignorance of which specific server you are using. Some of the common benefits of cloud computing are:
- sharing resources between many users for efficiency
- redundancy
- ease of use (from a user perspective)
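To make that concrete, here's a minimal sketch of the idea (hostnames and the `store_file` helper are made up for illustration): the user calls one service entry point, and a dispatcher transparently picks one of many interchangeable servers behind it.

```python
import random

# Hypothetical pool of interchangeable backend servers.
SERVERS = ["box12.example.com", "box44.example.com", "box71.example.com"]

def store_file(user, data):
    """The caller never sees which server was chosen -- that's 'the cloud'."""
    server = random.choice(SERVERS)  # real systems use load/health-aware routing
    # upload(server, user, data)     # placeholder for the actual transfer
    return "stored"                  # the caller learns only that it succeeded

print(store_file("alice", b"holiday photos"))
```

The point is the return value: the user gets "stored", never a server name, so the operator is free to add, remove, or swap servers behind the scenes.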
Originally posted by ElectricPrism:
Can someone explain why some fancy pants name "X Deep Learning Compiler" is any different than a fucking regular compiler.
Quoting from https://devblogs.nvidia.com/nvdla/:
The compiler is a key component of the NVDLA software stack. It generates optimized execution graphs which map the tasks defined in the layers of pre-trained neural network models onto the various execution units in NVDLA. It reduces data movement as much as possible while maximizing utilization of the computational hardware.
...
Compiler optimizations such as layer fusion and pipeline scheduling work well for larger NVDLA designs, providing up to a 3x performance benefit across a wide range of neural network architectures. This optimization flexibility is key to achieving power efficiency across both large network models like ResNet-50 and small network models like MobileNet.
For smaller NVDLA designs, compiler optimizations such as memory tiling are critical for power efficiency. Memory tiling enables a design to balance on-chip buffer usage between weight and activation data, and so minimizes off-chip memory traffic and power consumption.
It's similar to what Google is doing, with MLIR:
https://medium.com/tensorflow/mlir-a...k-beba999ed18d
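To show why that's more than a "regular compiler" pass, here's a toy sketch of layer fusion (this is not NVDLA's actual compiler, just the concept): adjacent layers are merged so the intermediate result stays on-chip instead of round-tripping through off-chip memory.

```python
# Hedged sketch of layer fusion: collapse each (conv, relu) pair in a
# network's layer list into a single fused node, so the relu is applied
# as the conv output is produced, with no intermediate write-back.
def fuse_layers(graph):
    fused, i = [], 0
    while i < len(graph):
        if i + 1 < len(graph) and graph[i] == "conv" and graph[i + 1] == "relu":
            fused.append("conv+relu")  # one pass through the execution unit
            i += 2
        else:
            fused.append(graph[i])
            i += 1
    return fused

net = ["conv", "relu", "conv", "relu", "pool"]
print(fuse_layers(net))  # ['conv+relu', 'conv+relu', 'pool']
```

A regular compiler optimizes scalar code for a CPU; a deep learning compiler does this kind of graph-level rewriting to map whole layers onto fixed-function hardware units and minimize data movement.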
Originally posted by ElectricPrism:
All these "trendy" words for dumb people are annoying me "The Cloud" (A fucking server computer)

Last edited by coder; 14 September 2019, 10:36 PM.
Originally posted by cybertraveler:
So "the cloud" can be thought of as ignorance of which specific server you are using. Some of the common benefits of cloud computing are:
- sharing resources between many users for efficiency
- redundancy
- ease of use (from a user perspective)
There are probably better examples, but the point is that by having a scalable platform, you can more easily build scalable applications out of reusable, scalable building blocks. If each service had its own homegrown scale-out method, it would be a mess, and far less practical, stable, and secure.