Sony's PlayStation 4 Is Running Modified FreeBSD 9


  • #91
    Originally posted by M1kkko View Post
    One could also argue that there's something wrong with the companies themselves, when they don't see the benefits of the open-source development and business model. The BSD license is more liberal than the GNU GPL anyhow, in that it lets companies choose whether or not they want to contribute.
    That's why I prefer the GPL license over BSD: it forces companies to give their consumers freedom.

    Comment


    • #92
      Originally posted by elanthis View Post
      Some key points.

      * GL's object handles are super error-prone. All objects are identified by an int, so it's easy to lose track of what to pass in to what. Furthermore, drivers need a level of indirection between client-side API objects and handles, since you can't stuff a 64-bit pointer into a 32-bit int, and in some cases there are rules about IDs being generated sequentially anyway, which you can't satisfy with a pointer cast to an int.
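
      A minimal sketch of that failure mode, with illustrative names; both wrong calls below compile cleanly, because every handle is the same C type:

      Code:
          /* Two unrelated objects, one C type: the compiler cannot tell them apart. */
          GLuint tex = 0, vbo = 0;
          glGenTextures(1, &tex);
          glGenBuffers(1, &vbo);

          glBindTexture(GL_TEXTURE_2D, vbo);  /* wrong handle: compiles fine; at runtime it
                                                 errors or silently creates a new texture,
                                                 depending on profile */
          glBindBuffer(GL_ARRAY_BUFFER, tex); /* same mistake in the other direction */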

      * The GL latch/state system is confusing and poorly specified. Some GL|ES vendors implement it differently, all within spec. The latch system is where you don't just modify an object directly; you bind an object with one API call and then modify the bound object with a second. In some cases, what happens when you unbind objects varies wildly, like VAOs on different hardware vendors. Larger game engines need significant workarounds to deal with the variety of devices and vendors.
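
      To illustrate the bind-to-edit pattern described above (a fragment, not a full program): glBufferData never names the object it modifies.

      Code:
          GLuint vbo;
          glGenBuffers(1, &vbo);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);                        /* step 1: latch the object */
          glBufferData(GL_ARRAY_BUFFER, 1024, NULL, GL_STATIC_DRAW); /* step 2: edit whatever is latched */
          /* If anything in between rebinds GL_ARRAY_BUFFER (a helper, a callback),
             step 2 silently edits the wrong object. */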

      * The GL API was originally released in 1991 for rasterization acceleration hardware. It has been extended - poorly - to keep up with modern hardware, usually years late, and way behind D3D. Many parts of the 3.0 GL API were designed around what people "thought" hardware would be, and they guessed wrong. Many of those assumptions were not worked around in the API until 4.3, and some still remain. For back-compat reasons, those assumptions are still present, it's usually harder to do things the "right" way than the wrong way, and inexperienced users are constantly doing things wrong because outdated tutorials and the highly redundant GL API lead them astray. D3D, on the other hand, rather sensibly just releases a new version of the API when things have changed enough to warrant it, as well as to fix old mistakes.

      * GL tries to hold your hand more often than D3D. This results in some "GL is easier to use" comments from beginners, but actually squeezing performance out of GL is a complicated process compared to D3D. You have to use obtuse, redundant APIs in non-obvious ways to get the best performance. Using GL|ES or GL Core Profile is about as much work as D3D11, but most novices don't deal with those.

      * GL Core Profile / Compatibility Profile was a misguided attempt to solve the crufty API problem. Of course, the only thing they did was disable old redundant APIs without replacing some of the existing dual-usage APIs that could stand a refresh. Given the lack of a test suite, most drivers were (and still are) buggy as hell in Core Profile mode. This has resulted in everyone everywhere still writing Compatibility Profile code, except on OSX, where you can't get GL 3+ without enabling Core Profile. This causes a lot more portability problems than necessary. API cleanups have been abandoned by Khronos because they think their idiotic attempt at "profiles" is the only way to do API cleanups.

      * Desktop GL drivers are buggy and incomplete. Both NVIDIA and AMD have yet to deliver a bug-free, fully compatible GL 4.3 implementation almost a full year after release. The drivers are barely used. Khronos provides no test suite. Games using GL on the desktop typically have much higher support costs than D3D games, due almost entirely to the GL drivers. This is not super relevant to the API itself, of course, but it is a huge deal to developers shipping actual games - it's hard to justify a GL version of a game when targeting the PC.

      * The GL tools and debugging facilities are under-featured. Finding out why any particular GL call fails is an exercise in maximum frustration. Tools have gotten better in recent years, but the D3D tools have improved even further. Part of this is an API issue; the GL error reporting facilities are weak, and the API is prone to far more errors in the first place.
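
      For illustration, the classic core facility is a global error flag polled after the fact; it tells you that something failed, not which call failed or why (ARB_debug_output/KHR_debug improve on this where supported):

      Code:
          GLenum err;
          while ((err = glGetError()) != GL_NO_ERROR) {
              /* All you get is an enum like GL_INVALID_OPERATION, with no
                 indication of which preceding call raised it. */
              fprintf(stderr, "GL error: 0x%04x\n", err);  /* fprintf from <stdio.h> */
          }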

      * Parts of the API are overloaded and awkward. glVertexAttribPointer takes a void* as its final argument... but in all modern GL usage, you're actually passing an integer there. Likewise, glTexImage2D and friends have redundant, pointless parameters that still must be set just right (and differently from what most existing documentation states) to work properly.
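
      To illustrate the glVertexAttribPointer overload (Vertex is a made-up type here): with a buffer bound, the final "pointer" parameter is really a byte offset smuggled through a const void*.

      Code:
          #include <stddef.h>   /* offsetof */

          typedef struct { float pos[3]; float uv[2]; } Vertex;   /* hypothetical layout */

          glBindBuffer(GL_ARRAY_BUFFER, vbo);
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                (const void *)offsetof(Vertex, pos)); /* an integer, cast to a pointer */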

      * GLSL does not reflect how the shader pipeline actually works until GLSL 4.30... which, again, you can't actually use in real life. Even then, assuming a console "does it right," the syntax in GLSL for doing things properly is incredibly cumbersome compared to doing it wrong, since back-compat was kept to please some mythical CAD folks who currently use 2.x and might want to move to 4.3.

      * Despite years of clamoring from some of the biggest GL proponents, and even NVIDIA, Khronos rejects merging many API improvements, like "direct state access," into the core API.
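
      A sketch of the difference, using the EXT_direct_state_access extension that has been kept out of core:

      Code:
          /* Core API: bind-to-edit, clobbering the texture binding just to set one parameter. */
          glBindTexture(GL_TEXTURE_2D, tex);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

          /* EXT_direct_state_access: the object is named explicitly; no binding is touched. */
          glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);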

      * Extensions are a huge pain in the ass. You can't just write a GL app. You have to write a slightly-different-for-each-hardware-vendor GL path in your code to make things work well. Consoles require this anyway, of course, but in the desktop space it matters.
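
      For example, on Windows everything past GL 1.1 has to be fetched at runtime, per context, and any entry point may legitimately come back NULL on a given driver (a sketch, assuming <GL/glext.h> for the typedef):

      Code:
          PFNGLMAPBUFFERRANGEPROC pglMapBufferRange =
              (PFNGLMAPBUFFERRANGEPROC)wglGetProcAddress("glMapBufferRange");
          if (pglMapBufferRange == NULL) {
              /* Not available: fall back to glMapBuffer or a vendor-specific path. */
          }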

      * EGL is not really a thing except for GL|ES. Many of the features provided by DirectX's DXGI are unsupported without EGL, and even with EGL some bits are missing.

      * Did I mention what a terrible error-prone pain in the ass using the API is?

      * Hardware features that you should rely on are treated as "potential optimizations" in the GL API. This is especially harmful to newer users. It's still common here in 2013 to see people asking whether they should use vertex buffers or not.
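
      Hence beginners keep writing immediate-mode code like the sketch below, because the compatibility API still accepts it, even though it bears no resemblance to how the GPU wants its data:

      Code:
          /* Legacy immediate mode: one call per vertex, every frame. */
          glBegin(GL_TRIANGLES);
          glVertex3f(0.0f, 0.0f, 0.0f);
          glVertex3f(1.0f, 0.0f, 0.0f);
          glVertex3f(0.0f, 1.0f, 0.0f);
          glEnd();
          /* A vertex buffer uploads the data once and draws with a single call thereafter. */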

      * Some GL interfaces do not reflect how the hardware works, even in 4.3. Vertex Array Objects are kinda sorta like ID3D11InputLayout, except they also specify buffer bindings, which are dealt with separately on hardware. Likewise, FrameBuffer Objects do not reflect how render targets are bound on the hardware. It's difficult to minimize state changes, and you have to hope that the driver does the right thing (completely unacceptable on consoles, where you really, really need direct control).
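
      A sketch of the VAO mismatch: the GL_ARRAY_BUFFER binding itself is not VAO state, yet the buffer bound at the moment glVertexAttribPointer is called gets latched into the VAO per attribute.

      Code:
          glBindVertexArray(vao);
          glBindBuffer(GL_ARRAY_BUFFER, vbo);   /* NOT captured by the VAO as such... */
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
          /* ...but that call just recorded vbo into the VAO for attribute 0.
             No hardware works this way; the driver has to untangle it. */
          glEnableVertexAttribArray(0);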

      * GL misses some features found in hardware, like DXT texture compression, in the name of patents. That's great for Mesa and FOSS, but shitty for people who need to make actual products that actually use the expensive graphics hardware fully. It's kind of a wash, as D3D is missing some texture modes and such that GL has (not ones you really need, but still; I've heard noises from a developer at Microsoft that they want to fix this in a future update).

      * GL does not support any kind of multi-threaded object management or command list generation. The old Display List functionality is deprecated and not equivalent in any case (if you record a draw call with a VBO into a display list, the VBO contents are copied, rather than merely the VBO handle being recorded). With some vendor-specific workarounds you can do object management on multiple cores, but this also creates a full rendering context (bad!) and again doesn't handle command list generation.

      * GL's global states, mentioned previously, don't just make the API more complex; they also cause frequent bugs and don't reflect hardware. Real hardware has state blocks. D3D mirrors these. Granted, D3D11 is a little out of date in this regard, as its concept of state blocks doesn't match modern hardware 100%, but that will most likely be corrected in D3D12. It will probably never be dealt with in GL, ever.

      * GL binds its output surface/rendering context to its concept of a device handle. Recreating a rendering surface implies regenerating the whole context.

      * GL is managed-resources only... sometimes. Don't want a buffer you upload kept in both GPU and CPU space? Up to the driver, not you. Necessary because...

      * There is no proper "surface lost" or "device lost" signal for GL. Implementing proper display adapter switching or efficient hardware resets on GL is impossible without some new can't-be-integrated-properly extension that doesn't yet exist. This is part of why the previous point exists. Of course, GPU-only objects will still be lost.

      * GL's API is designed for the days before compositing. On the desktop, D3D10+ assumes compositing. On a console, you can assume you might have overlays but not that you will be forced to share a single framebuffer with other processes. GL still maintains the concept of "pixel ownership" and this whole legacy cruft imposes some API nastiness that could be done away with.

      * GL is slow to update. Microsoft works with hardware vendors during the definition of new hardware and APIs, acting as a liaison between the development community's requests for D3D features and the hardware vendors. Microsoft ensures that its API supports the common functionality in hardware and helps push vendors towards progress. GL languishes in a pit until well after a D3D update is released, and then vendors scramble to define new extensions and combinations of old vendor-specific ones to get a half-assed spec out the door. It usually takes multiple versions to "get it right." GL 3.x was the worst; it's gotten better, but not enough. GL 3.0 was released years after D3D 10 and didn't hit "almost" feature parity until GL 3.3, even later. Likewise, it took until 4.3 to catch up _mostly_ to D3D 11, minus a raft of features still missing. If you're releasing a new cutting-edge platform, using GL as a basis means your API will be years behind your hardware.

      * Numerous other small API warts, missing features, and a disconnect with hardware.

      I used to be a pro-GL bigot too, back before I had any clue what the hell I was talking about. When I first moved to the games industry and was forced from Linux to Windows, I came kicking and screaming. I still recall some of the long debates I had trying to argue pro-GL merits to a variety of developers. After having every single point refuted, the only two advantages I could keep for GL were "some platforms require it, and it also works on Windows, so it's 'more portable'" and "extensions give you early access to some vendor-specific features." Not much.

      What a long post of FUD and lies.

      Comment


      • #93
        Originally posted by nukem View Post
        That's why I prefer the GPL license over BSD: it forces companies to give their consumers freedom.
        You are assuming that those companies adhere to the license. I guarantee you there is a ton of GPL-licensed code being used out there that is not adhering to the GPL. Without disassembling every blob out there, you really can't claim that the license is forcing them to comply. Those very same companies that are using BSD-licensed code and are "evil" can very well be using GPL-licensed code as well and ignoring the license. Until someone pulls all the blobs apart and audits them, the GPL really offers a false sense of security. The GPL only works when someone is caught violating it and the code owner decides to do something about it.
        Last edited by deanjo; 23 June 2013, 11:33 PM.

        Comment


        • #94
          Originally posted by Rallos Zek View Post
          What a long post of FUD and lies.

          Then reply with specific examples of how he's wrong, because I just read through every single point he made, and they are all things I've heard before from developers themselves. Otherwise you look like a troll.
          All opinions are my own, not those of my employer, if you know who they are.

          Comment


          • #95
            Very cool. Congrats go to The FreeBSD project.

            Comment


            • #96
              +1. As long as I can run PS4 titles on Ubuntu, I'm happy with that.

              Comment


              • #97
                Originally posted by Ibidem View Post
                For anyone who is curious, a couple links that may clarify some things:
                http://www.softwarefreedom.org/resou...aboration.html
                Note that many tend to view the "in-file" approach as questionable at best; the current ath5k policy is to avoid it.
                That was a nice read. I was just wondering about that.

                Originally posted by Ericg View Post
                No... the BSD license won't get you into more legal trouble... the BSD license boils down to "DO WHATEVER YOU WANT WITH THIS! I DON'T CARE!" The BSD is actually the ultimate free license, because it says just that: "Do whatever you want." There are ZERO strings attached.
                No, the "ultimate free" license is Public Domain (CC0). That way you don't even need to preserve the copyright notice (since there is no copyright).

                Comment


                • #98
                  Originally posted by mike4 View Post
                  +1. As long as I can run PS4 titles on Ubuntu, I'm happy with that.
                  Well, you're going to need a modern upper-midrange machine or better (preferably 8 cores and a GPU > 2 TFLOPS), and you need to figure out just what changes Sony made to their distribution of FreeBSD, including whatever display server they're using (probably not X.Org but their own thing), port those over to Ubuntu, and install the FreeBSD variants of various libraries. Then you might be able to run them...

                  Comment


                  • #99
                    Originally posted by Luke_Wolf View Post
                    AMD64 has been around for the past 10 years, and everything past 2006 (coinciding with Windows Vista), other than Intel Atoms, which are utterly irrelevant to this conversation, has been running a 64-bit processor with a 64-bit OS. So why would people care about translating AMD64->x86 for this? Particularly when the hardware required to match the PS4 is a modern upper-midrange PC. To put it very bluntly, this isn't going to be running on a Pentium 4 with an 8800GTX; that would be your end of the line for 32-bit.
                    But AFAIK you can't execute x86_64 code in x86 mode; you need to enter "long mode." So you would have to run the emulator under an x86_64 OS; otherwise you'd have to translate the instructions to x86, as I said. In that case you'd have to deal with the problem of x86 software that isn't fully compatible with 64-bit OSes, like most advanced emulators, or a lot of trouble with 32-bit libraries in parallel with the 64-bit versions. That's why I still use 32-bit with PAE; only 7z (mx=9 mmt=6; mx=8 works fine) and Firefox sometimes get near or above the 2 GB limit, and with PAE I can still use my full 8 GB of RAM.

                    Comment


                    • #100
                      Originally posted by GreatEmerald View Post
                      the "ultimate free" license is Public Domain (CC0). That way you don't even need to preserve the copyright notice (since there is no copyright).
                      It's worth noting that Public Domain doesn't exist everywhere: some countries don't allow an author to waive copyright. For such countries, I think the WTFPL [page includes an expletive, so possibly NSFW] licence is probably the closest thing :-)

                      Comment
