Apple Announces A New 3D API, OpenGL Competitor: Metal
-
Originally posted by _SXX_: Apple isn't dumb. They're not going to drop OpenGL support anytime soon. This API is needed for iOS vendor lock-in.
-
Apple is playing bunker ball games
Apple is becoming an even more bunkerish company as time passes. Their development environment (Xcode + Objective-C + the iOS/OS X SDKs, and now Swift) is OS X only, which means it is essentially a Mac-hardware-only environment. Using Linux for day-to-day tasks, I can easily install Windows + Visual Studio to get going with .NET and all the other micro$oft development fantasies, but with Apple it's a no-go without a Mac. The fact that I have to pay $4k for a Mac Pro workstation (which I'll seldom use outside development) just to get into the iOS devangelism camp is hilarious. Even Microsoft creates ways for developers to collaborate online with tools like Visual Studio Online.
And now they have their laboratory 3D API. Another fragmentation in the industry, which will create its own zealot community of know-it-all, it's-the-best developers, while Android sits at 80% of the smartphone market. Meh...
-
Originally posted by dungeon: Benchmarks please: Direct3D vs Metal vs Mantle vs OpenGL vs ...
There is a spectrum in hardware graphics acceleration right now. At one end, you can program graphics cards directly, assuming you have open programming documentation for them. AMD and Intel provide this, so a game engine like Unreal could skip Mantle entirely and emit ISA-level code. I'd be interested in seeing a compiler targeting the radeonSI / i915 instruction sets, because you could honestly do that right now.
The level above that is device-specific APIs already deployed, like Mantle, and like what Metal will be.
The next level above that is DirectX, which is proprietary and platform-limited, and thus inherently crippled.
And above that are the open standards, which abstract away the differences between hardware and are slow to adopt change; since it's only the Khronos APIs up there right now, they are all slow as shit and have terrible implementations, because they are harder to support than a C++ lexer / parser.
I think the real future of graphics is a proper programming language for modern compute-class graphics hardware - the C of GPUs. The shader languages we have now are more like bash scripts than native code, and they split a whole slew of routines between the C API and the compiled scripts. You would have the same design-by-committee process as OpenGL or C++, and would do the base-language-plus-vendor-extensions thing like HTML (and OpenGL).
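For the sake of illustration, here is roughly what that split looks like today: the GLSL source is just a string handed to the C API at runtime, and the driver compiles it behind your back. A minimal sketch, assuming a current OpenGL context and a loader like glad already set up:
[code]
// The GPU "program" is a string; everything around it is separate gl* calls.
#include <glad/glad.h>
#include <cstdio>

const char* kFragSrc =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint compileFragmentShader() {
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &kFragSrc, nullptr); // hand the "script" to the driver
    glCompileShader(shader);                       // driver compiles it at runtime

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}
[/code]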
But you would not design it around drawing - that would be an abstraction library on top. All you need in the base language is architecture-agnostic SIMD routines: a way to pass in a vector of operations to compute, with the means to adapt them to the very deep pipelines modern GPUs use. The language syntax, more than specific object types, would hopefully be able to discretize where and when certain hot paths are used - i.e., batch processing on the vector ALUs, branchy code on the scalar units.
Point is, we should have the same degree of programmatic control over our GPUs as we do over our CPUs, because in the last few years compute shaders and HSA have brought the two to the doorstep of equivalent complexity. It does require a completely different mindset to program for, but I can imagine a generic programming language that, based on compiler flags, would compile operations wrapped in, say, [parallel-gpu], [parallel-simd], and [parallel-threads] blocks under one syntax - the first down to GPU ASM (you would have to specify the target architecture, just like with a CPU compiler, but the object files would always be discrete, so you could bundle the binaries for all kinds of different GPUs with one CPU binary, or vice versa?). Or hell, just do HTML-style embedding, so you can take whatever this theoretical GPU C is and have a gpu () block, like the current asm () block, to include GPU code.
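The closest thing to that single-syntax model you can actually compile is probably SYCL, where the device code is just a C++ lambda living in the same source file and the toolchain decides which device binaries to emit. A minimal sketch, assuming a SYCL 2020 implementation such as DPC++ or AdaptiveCpp (the queue, buffer, and parallel_for names are SYCL's, not part of any hypothetical language above):
[code]
// Single-source model: the lambda passed to parallel_for is effectively the
// "gpu () block"; the same compiler builds it for whatever device is selected.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // picks a default device, a GPU if one is available
    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float> C(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only);
            // Device code, written inline in ordinary host C++.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];
            });
        });
    }   // buffers go out of scope here and copy results back to the vectors
    return c[0] == 3.0f ? 0 : 1;
}
[/code]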
Because really, that is the model we should have - it should not take a complex, overhead-ridden API like OpenCL to add, say, a billion numbers to one another and store the results back into a gigabyte of memory. It should be as simple as an async call in C++ or a goroutine.
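To make the contrast concrete, here is what that "add a huge pile of numbers" job looks like with nothing but std::async from C++11 - a minimal CPU-side sketch (the chunking and the ~1 GiB buffer size are arbitrary choices for illustration), not a GPU offload, which is exactly the gap being complained about:
[code]
// Add two large arrays element-wise and store the result, using plain
// std::async tasks instead of an OpenCL context/queue/kernel/buffer dance.
#include <algorithm>
#include <future>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = std::size_t(1) << 28;   // ~268M floats, ~1 GiB per buffer
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    const std::size_t workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::future<void>> tasks;
    for (std::size_t w = 0; w < workers; ++w) {
        const std::size_t lo = n * w / workers;
        const std::size_t hi = n * (w + 1) / workers;
        tasks.push_back(std::async(std::launch::async, [&, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i) c[i] = a[i] + b[i];
        }));
    }
    for (auto& t : tasks) t.wait();               // join all chunks
    return c[0] == 3.0f ? 0 : 1;
}
[/code]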
-
Originally posted by johnc: Taking this to the obvious next conclusion... wouldn't this compel Intel, Qualcomm and NVIDIA to come up with their own "API" to make sure that their hardware is being utilized to the "maximum"? And start paying devs to use it?
But more likely it's to get developers locked into iOS. Too many are jumping ship for Android. If we're lucky, they'll do more damage to themselves than any good.