
OS X "El Capitan" Aims To Offer Better Performance, Metal Graphics


  • #21
    Originally posted by profoundWHALE View Post

    I honestly see Silverlight lasting longer than Flash. In the few things that use it (that I've used), it has been very smooth and responsive. The other reason for it sticking around is because Microsoft.
    Silverlight on Linux has been given rather short shrift, I'd say; it has been more solid than Flash has been.



    • #22
      Originally posted by gamerk2 View Post

      Glide died because it was tied to 3dfx hardware, and 3dfx simply got obliterated by NVIDIA in the market. As far as design goes, Glide was extraordinarily powerful. Heck, N64 emulators still have a Glide plugin to this day, since some features of the N64 are near impossible to emulate using D3D/OGL, but are easily done using Glide. [Then again, the N64 is just WEIRD from a HW perspective].
      Yeah, Glide was great for emulation; let's see how Vulkan helps.

      Anyway, AMD won't be dead any time soon; still, they killed off Mantle in favor of Vulkan:

      Obi Wan said it best: 'If you strike me down, I'll become far more powerful than you'll ever imagine.' That's happened with AMD's API Mantle which died at 1.0 but has risen as the new Vulkan API.


      Even if Nvidia came out with yet another API, it wouldn't be worth it long term.

      About Metal on OSX: some devs might use it for some stuff, but I don't see much uptake, especially since Macs use Intel and/or Nvidia hardware, and it would be stupid for Apple to try to block Vulkan support in some shady way.

      Also, quoting: "Khronos is hoping to extend compatibility back a few hardware generations, which means you'll potentially notice a performance increase even on your old hardware once the API is officially released and introduced in new games.

      "We are setting a design goal. We have a very specific goal," says Trevett. "Any hardware capable of supporting OpenGL ES 3.1 will be capable of supporting Vulkan. That basically means any GPU that can do compute shaders."

      On the PC side, that equates to OpenGL 4.3, released in August of 2012. OpenGL 4.3 support extended back to the Nvidia GeForce 400 series and the ATI Radeon HD 5000 series, a.k.a. basically any GPU purchased after late 2009/early 2010. Judging by the Steam Hardware Survey, those specs also encompass quite a huge amount of the PC gaming community."



      • #23
        Originally posted by OneTimeShot View Post

        No. Just no. iOS only has 20% or so market share, Mac OSX has 10%.

        Microsoft had 98% market share, they controlled *everything*.

        It's not about market share, but about controlling your hardware. I assemble my PCs myself and maybe get some CD with whatever OS thrown in, NOT the other way round.



        • #24
          Originally posted by johnc View Post
          ooof. I figured Metal would be coming to the desktop eventually. Which means games too, of course.

          It seems like we went from "everybody should use OpenGL to target all the platforms in one shot" to "every platform will have its own API".
          Gee. If ONLY those poor folks at Khronos had been warned that this would happen...
          Oh right --- they WERE warned. OVER AND OVER AND fscking OVER again. And by the time they were finally willing to deliver something, it was too little too late.
          Enjoy your backward compatibility with software from the '90s, guys --- it'll be oh so relevant in 2025.



          • #25
            Originally posted by Ancurio View Post
            Ohh, that's very interesting. I am really curious how they got Metal, an API explicitly designed for shared-memory GPU systems, to work on desktops. Unless they actually adapted (and thus changed) the buffer API, this smells like a big bucket of hackery.
            The OSX API allows you to tag buffers in various ways, so that you can either manually control where data is held (in a single copy) for maximum performance, or let the OS automatically handle the copying and coherency for you.

            A very rough description of this is given here:

            but that documentation seems to be early and is missing a bunch of settings that can be used to optimize this, which were described in the WWDC talk on the subject.

            My guess is that, long term, Apple sees separate memory for GPUs as an obsolete idea, so it's not going to put much effort into making this crazy elegant and beautiful. HSA is obviously the right way to do things, and at some point either the dGPU vendors are going to figure out a way to work with Intel so everybody gets this functionality, or Apple is going to lose interest in dGPUs (because Intel will be good enough; or because it can get one of AMD or nV to get its act together and somehow share this coherency state, and screw the other company that didn't get on board fast enough).
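            The buffer tagging described above corresponds to Metal's resource storage modes. A minimal sketch in Swift (assuming macOS with a Metal-capable GPU; buffer sizes are illustrative, and the post notes the WWDC talk covered additional settings beyond these documented modes):

            ```swift
            import Metal

            // Grab the default GPU device; returns nil on machines without Metal.
            guard let device = MTLCreateSystemDefaultDevice() else {
                fatalError("Metal is not supported on this machine")
            }

            let length = 1024 * MemoryLayout<Float>.stride

            // .storageModeShared: one copy visible to both CPU and GPU; the
            // driver keeps it coherent for you (the "automatic" path above).
            let sharedBuffer = device.makeBuffer(length: length,
                                                 options: .storageModeShared)

            // .storageModeManaged (macOS only): separate CPU and GPU copies;
            // you flag CPU-side writes yourself before the GPU reads them.
            let managedBuffer = device.makeBuffer(length: length,
                                                  options: .storageModeManaged)
            managedBuffer?.contents().storeBytes(of: Float(1.0), as: Float.self)
            managedBuffer?.didModifyRange(0..<MemoryLayout<Float>.stride)

            // .storageModePrivate: GPU-only memory, fastest for data the CPU
            // never touches; filled via a blit from a shared/managed buffer.
            let privateBuffer = device.makeBuffer(length: length,
                                                  options: .storageModePrivate)
            ```

            On a shared-memory (integrated-GPU) system, `.storageModeShared` costs nothing extra; on a dGPU, `.storageModeManaged` is where the copying-and-coherency machinery the post describes kicks in.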

