
Apple Announces A New 3D API, OpenGL Competitor: Metal


• #11
  Originally posted by rikkinho View Post
  After Steve Jobs died, Apple lost their mind: first it was their maps software, now this? What next? Will they put AMD CPUs in their Mac laptops? lol
  No, they're inventing their own wannabe C++-killer programming language. Join the queue, Apple. There are a lot of killers in front of you.



• #12
  Originally posted by _SXX_ View Post
  Apple aren't dumb. They're not going to drop OpenGL support anytime soon. This API is needed for iOS vendor lock-in.
  Nobody is going to bother supporting iOS if it doesn't support GLES. Android __dominates__ the mobile market, especially for game sales.



• #13
  Anyway, we finally know why there have been so many anti-OpenGL stories appearing out of nowhere recently.



• #14
  Apple is playing bunker ball games

  Apple is becoming an even more bunkerish company as time passes. Their development environment (Xcode + Objective-C + the iOS/OS X SDKs, and now Swift) is OS X only, which means it is essentially a Mac-hardware-only environment. Using Linux for day-to-day tasks, I can easily install Windows + Visual Studio to get going with .NET and all the other Micro$oft development fantasies, but with Apple it's a no-go without a Mac. The fact that I would have to pay $4k for a Mac Pro workstation (which I'd seldom use outside development) just to get into the iOS devangelism camp is hilarious. Even Microsoft creates ways for developers to collaborate online with tools like Visual Studio Online.

  And now they have their own laboratory 3D API: more fragmentation in the industry, which will create its own zealot community of know-it-all, it's-da-best developers while Android holds 80% of the smartphone market. Meh...



• #15
  Originally posted by dungeon View Post
  Benchmarks please: Direct3D vs Metal vs Mantle vs OpenGL vs ...
  Benchmarks don't matter. Well, they do, except when they don't.

  There is a spectrum in hardware video acceleration right now. At one end, you can program graphics cards directly, assuming you have open programming documentation for them. AMD and Intel provide this, so a game engine like Unreal could skip Mantle entirely and emit ASM-level code itself. Or I'd be interested in seeing a compiler targeting the radeonSI / i915 ASMs, because you could honestly do that right now.

  The level above that is device-specific APIs already deployed, like Mantle, and like what Metal will be.

  The next level above that is DirectX, which is proprietary and platform-limited, and thus inherently crippled.

  And above that are the open standards that abstract away the differences between hardware but are slow to adopt change; since the Khronos APIs are the only ones in that space right now, they are all slow as shit and have terrible implementations, because they are harder to support than a C++ lexer / parser.

  I think the real future of graphics is a programming language for modern compute-class graphics hardware - the C of GPUs. The shader languages we have now are more like bash scripts than native code, and they split a whole slew of routines between the C API and the compiled scripts. You would have the same design-by-committee process as with OpenGL or C++, and would do the base-language-plus-vendor-extensions thing like HTML (and OpenGL).

  But you would not design it around drawing - that would be an abstraction library on top. All you need in the base language is architecture-agnostic SIMD routines: a way to pass a vector of operations to compute, with the means to map them onto the very deep pipelines of modern GPUs. The language syntax, more than specific object types, would hopefully make it possible to specify where and when certain hot paths run - i.e., batch processing on the vector ALUs, branchy code on the scalar units.
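  In today's terms, the closest existing thing to those architecture-agnostic batch routines is probably a SIMT kernel with a grid-stride loop: the kernel assumes nothing about how wide or deep the hardware is, so the same code scales across GPUs. A minimal sketch in CUDA (the saxpy operation and element count are purely illustrative):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Grid-stride loop: the kernel makes no assumption about how many
    // threads the hardware runs at once; each thread strides through the
    // whole batch, so the same code adapts to any GPU width or depth.
    __global__ void saxpy(size_t n, float a, const float *x, float *y) {
        for (size_t i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n;
             i += (size_t)blockDim.x * gridDim.x) {
            y[i] = a * x[i] + y[i];  // pure batch math on the vector ALUs
        }
    }

    int main() {
        const size_t n = 1 << 20;  // 1M elements, illustrative
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (size_t i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // The launch geometry is a tuning knob, not a correctness
        // requirement: any grid size gives the same result here.
        saxpy<<<256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 5.0

        cudaFree(x);
        cudaFree(y);
        return 0;
    }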

  Point is, we should have the same degree of programmatic control over our GPUs as we have over our CPUs, because in the last few years compute shaders and HSA have brought the two to the doorstep of equivalent complexity. It does require a completely different mindset to program for, but I can imagine a generic programming language where, based on compiler flags, operations wrapped in, say, [parallel-gpu], [parallel-simd], and [parallel-threads] blocks all compile under one syntax, with the first emitted as GPU ASM (you would have to specify the target architecture, just like with a CPU compiler, but the object files would always be discrete, so you could bundle the binaries for all kinds of different GPUs with one CPU binary, or vice versa). Or hell, just do HTML-style embedding, so you can take whatever this theoretical GPU C is and have a gpu () block, like the current asm () block, to include GPU code.

  Because really, that is the model we should have. It should not take complex, overhead-ridden APIs like OpenCL to add, say, a billion numbers to one another and store the results back into a gigabyte of memory. It should be as simple as an async call in C++ or a goroutine.
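
  For the record, CUDA with unified memory already gets surprisingly close to that model: a kernel launch is asynchronous with respect to the host, roughly like firing off a goroutine, and the synchronize call is the "join". (Incidentally, nvcc can already bundle compiled code for several GPU architectures into one "fat" binary alongside the host code, close to the bundling described above.) A hedged sketch - the buffers are sized at 1 GiB each, so it assumes a system with room for about 3 GiB of managed memory:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Elementwise add: out[i] = a[i] + b[i].
    __global__ void add(const float *a, const float *b, float *out, size_t n) {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = a[i] + b[i];
    }

    int main() {
        // 1 GiB of floats per buffer (~268M elements); a full billion
        // floats would be ~4 GB per buffer, same code either way.
        const size_t n = (1ull << 30) / sizeof(float);
        float *a, *b, *out;
        cudaMallocManaged(&a,   n * sizeof(float));  // unified memory:
        cudaMallocManaged(&b,   n * sizeof(float));  // no explicit
        cudaMallocManaged(&out, n * sizeof(float));  // host<->device copies
        for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // The launch returns immediately; the GPU works asynchronously,
        // roughly like a goroutine that we join below.
        add<<<(unsigned)((n + 255) / 256), 256>>>(a, b, out, n);
        cudaDeviceSynchronize();                     // the "join"

        printf("out[0] = %f\n", out[0]);             // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }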



• #16
  I believe it will ultimately be irrelevant, but out of all this talk about graphics APIs we'll at least have a decent state space being explored.



• #17
  I remember the same API wars from about 15 years ago, and in the end nothing really happens; just some of the companies disappear or go bankrupt.



• #18
  Originally posted by johnc View Post
  Taking this to the obvious next conclusion... wouldn't this compel Intel, Qualcomm and NVIDIA to come up with their own "API" to make sure that their hardware is being utilized to the "maximum"? And start paying devs to use it?
  It's not in any of those companies' interests to pursue their own API - Intel especially, since graphics aren't their thing. Apple, on the other hand, has a lot of good reasons to make their own API, and frankly, they could pull it off. Unless OpenGL 5 is such a blunder that even Linux developers begin to use Direct3D on Linux.

  But more likely it's to get developers locked into iOS; too many are jumping ship for Android. If we're lucky, they'll do more damage to themselves than any good.



• #19
  Originally posted by zanny View Post
  Because really, that is the model we should have. It should not take complex, overhead-ridden APIs like OpenCL to add, say, a billion numbers to one another and store the results back into a gigabyte of memory. It should be as simple as an async call in C++ or a goroutine.
  I'm more familiar with CUDA than OpenCL, so maybe I'm wrong here, but isn't that basically the way OpenCL works? CUDA seems to be something very similar to what you're proposing: a kind of GPU-specific form of kernels with very little (programmatic) overhead.
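
  For what it's worth, the Thrust library that ships with the CUDA toolkit shrinks the host-side ceremony to nearly the single-call model zanny asked for. A minimal sketch (the element count is purely illustrative):

    #include <cstdio>
    #include <thrust/device_vector.h>
    #include <thrust/functional.h>
    #include <thrust/transform.h>

    int main() {
        const size_t n = 1 << 24;                 // illustrative size
        thrust::device_vector<float> a(n, 1.0f);  // allocates on the GPU
        thrust::device_vector<float> b(n, 2.0f);  // and fills the values
        thrust::device_vector<float> c(n);

        // Elementwise c = a + b in one call; the kernel is generated
        // and launched behind the scenes.
        thrust::transform(a.begin(), a.end(), b.begin(), c.begin(),
                          thrust::plus<float>());

        printf("c[0] = %f\n", (float)c[0]);       // expect 3.0
        return 0;
    }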


