Intel X.Org Driver Gets Hand-Tuning For SSE4, AVX2

  • Intel X.Org Driver Gets Hand-Tuning For SSE4, AVX2

    Phoronix: Intel X.Org Driver Gets Hand-Tuning For SSE4, AVX2

    Chris Wilson at Intel has begun hand-tuning his SNA acceleration architecture within the Intel X.Org driver in order to take advantage of modern CPU instruction set extensions...

    http://www.phoronix.com/vr.php?view=MTMxMjU
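
    For readers wondering what "hand-tuning for SSE4/AVX2" tends to look like in practice, here is a minimal C sketch of runtime dispatch between instruction-set-specific code paths. It is not the actual SNA code; the blt_copy_* names are made up for illustration, and only the GCC __builtin_cpu_* builtins are real.

    Code:
    /* Hypothetical sketch: pick a hand-tuned variant once at start-up,
     * then call it through a function pointer on every hot path. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Stand-ins for hand-tuned variants of the same operation. */
    static void blt_copy_generic(uint8_t *dst, const uint8_t *src, size_t n)
    {
        memcpy(dst, src, n);    /* portable fallback */
    }
    static void blt_copy_sse4(uint8_t *dst, const uint8_t *src, size_t n)
    {
        memcpy(dst, src, n);    /* imagine SSE4.1 streaming loads here */
    }
    static void blt_copy_avx2(uint8_t *dst, const uint8_t *src, size_t n)
    {
        memcpy(dst, src, n);    /* imagine 32-byte AVX2 moves here */
    }

    /* Chosen once at init, used everywhere afterwards. */
    static void (*blt_copy)(uint8_t *, const uint8_t *, size_t) = blt_copy_generic;

    static void blt_copy_init(void)
    {
        __builtin_cpu_init();                      /* GCC >= 4.8 */
        if (__builtin_cpu_supports("avx2"))
            blt_copy = blt_copy_avx2;
        else if (__builtin_cpu_supports("sse4.1"))
            blt_copy = blt_copy_sse4;
    }

    The point of the pattern is that the fast paths stay optional: on older CPUs the generic fallback still runs unchanged.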

  • #2
    Damn, guys, just design a freaking discrete GPU, please! :/

    • #3
      Originally posted by brosis View Post
      Damn, guys, just design a freaking discrete GPU, please! :/
      IMHO Intel has managed to get their hands on an outstanding developer in Chris Wilson.
      I'm checking the DDX git progress frequently and it's really interesting.

      • #4
        I'm so excited by this. Finally someone is hand-tuning code for the latest instruction sets.
        Chris Wilson is almost single-handedly giving Intel a good name in the open source world.

        • #5
          Originally posted by brosis View Post
          Damn, guys, just design a freaking discrete GPU, please! :/
          What for? That would be a not-so-good discrete GPU, and it would involve know-how Intel does not have (e.g. all that bus traffic over PCIe).

          With integrated GPUs, Intel can use their superb fab prowess too.

          • #6
            Originally posted by brosis View Post
            Damn, guys, just design a freaking discrete GPU, please! :/
            They did that, and it sucked so bad it got canceled.

            • #7
              Originally posted by smitty3268 View Post
              They did that, and it sucked so bad it got canceled.
              No, it didn't get cancelled because it "sucked", as in performing badly. It performed pretty well and it scaled very well.
              It also was not a discrete GPU, but an APU. If Intel makes an open-source-driven GPU that works at the same hardware-to-software ratio, and it costs less than $500, I'm in.

              • #8
                Originally posted by brosis View Post
                No, it didn't get cancelled because it "sucked", as in performing badly. It performed pretty well and it scaled very well.
                It also was not a discrete GPU, but an APU. If Intel makes an open-source-driven GPU that works at the same hardware-to-software ratio, and it costs less than $500, I'm in.
                Every news article at the time claimed it was because Intel realized their part could not compete against the discrete GPUs that AMD and NVidia were putting out.

                Why else would they cancel it?

                Edit: talking about Larrabee.
                Last edited by smitty3268; 02-26-2013, 01:51 PM.

                • #9
                  Are we walking about i740 or Larrabee? Both fit really.

                  • #10
                    *talking

                    Editing still broken.

                    • #11
                      Yes, talking about Larrabee, of course.

                      My understanding is that "top managers suddenly came in and killed the project".
                      I.e., it went the same way as the Elop speech, but without calling their own product "sh!t".

                      Because what Elop called that was NOT "sh!t", but in fact(!) was good.

                      ... or you could put it this way: managers killed it, claiming it _was_ "sh!t",
                      but Intel's own tests showed it performed and scaled very well. I have no idea what Intel's managers were thinking.

                      Edit: The real motivations I can think of were two ("or" or "and"):
                      * a bribe from Nvidia (why not?)
                      * antitrust concerns

                      Edit2:
                      By now any such bribe would already be split and spent (if there ever was one),
                      and every manufacturer (AMD, Nvidia) has access to its own CPU in some form.

                      So why not?
                      Last edited by brosis; 02-26-2013, 02:38 PM.

                      • #12
                        I do wonder whether writing code with a view to making it easy to autovectorise would be a better use of time (see the sketch at the end of this post): http://locklessinc.com/articles/vectorize/

                        Originally posted by brosis View Post
                        Damn, guys, just design a freaking discrete GPU, please! :/
                        Xeon Phi could probably run llvmpipe pretty fast.
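
                        As a rough illustration of what "autovectorizer-friendly" means (a hypothetical example, not taken from the driver): a plain counted loop over restrict-qualified pointers, with no aliasing and no cross-iteration dependences, is the kind of code GCC's -O3 will usually vectorize on its own.

                        Code:
                        /* Hypothetical example of code the compiler can vectorize by
                         * itself: simple counted loop, restrict pointers (no aliasing),
                         * no cross-iteration dependences.  Build with e.g. gcc -O3 -mavx2. */
                        #include <stddef.h>
                        #include <stdint.h>

                        void saturated_add_u8(uint8_t *restrict dst,
                                              const uint8_t *restrict a,
                                              const uint8_t *restrict b,
                                              size_t n)
                        {
                            for (size_t i = 0; i < n; i++) {
                                unsigned sum = (unsigned)a[i] + b[i];
                                dst[i] = sum > 255 ? 255 : (uint8_t)sum;
                            }
                        }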

                        • #13
                          Originally posted by brosis View Post
                          Damn, guys, just design a freaking discrete GPU, please! :/

                          I'd rather have the pretty good performance and lower power needs of the current iGPU, at least as an option.
                          What I'm really curious about is the next-gen graphics core in either Broadwell or Skylake. Are they going to try to move even closer to mid-range discrete cards, or will they keep the same relative performance and try to make it more power efficient?

                          • #14
                            Originally posted by brosis View Post
                            Yes, talking about Larrabee, of course.

                            My understanding is that "top managers suddenly came in and killed the project".
                            I.e., it went the same way as the Elop speech, but without calling their own product "sh!t".

                            Because what Elop called that was NOT "sh!t", but in fact(!) was good.

                            ... or you could put it this way: managers killed it, claiming it _was_ "sh!t",
                            but Intel's own tests showed it performed and scaled very well. I have no idea what Intel's managers were thinking.

                            Edit: The real motivations I can think of were two ("or" or "and"):
                            * a bribe from Nvidia (why not?)
                            * antitrust concerns

                            Edit2:
                            By now any such bribe would already be split and spent (if there ever was one),
                            and every manufacturer (AMD, Nvidia) has access to its own CPU in some form.

                            So why not?
                            1) Nvidia doesn't have the money to bribe Intel (Nvidia is a smaller company than Red Hat and, at an absolute minimum, they'd have to cover the vast development costs of Larrabee); 2) this shouldn't apply, since Intel would be moving into a domain it currently does not dominate.

                            • #15
                              Originally posted by liam View Post
                              1) Nvidia doesn't have the money to bribe Intel (Nvidia is a smaller company than Red Hat and, at an absolute minimum, they'd have to cover the vast development costs of Larrabee); 2) this shouldn't apply, since Intel would be moving into a domain it currently does not dominate.
                              1. Hint: bribes by definition are private gifts between two "entities".
                              2. Hint: Intel has already faced antitrust scrutiny many times, so why not. Back then it was crystal clear: AMD, TI, Cyrix, Intel - CPUs; Nvidia, ATI, Matrox (plus a small bunch) - GPUs.
                              After AMD took over ATI, and with Nvidia now heading in the ARM direction (and Cyrix gone), there is nothing preventing Intel from attacking the discrete market with nice open-source solutions that WORK.
