
Intel Core i7 and X58 experience?


  • #21
    I would not expect boot problems, but new boards/laptops often don't work with ALSA right away. Especially for laptops this is annoying.



    • #22
      Originally posted by joffe View Post

      How people continue to recommend Windows and Mac over this for anything but games baffles me.

      It says 'alsamixer' here though - does Ubuntu 8.10 use pulseaudio?

      Very cool indeed.

      I notice something in the picture: the CPU History shows up to 8 CPUs. I thought only the Core i7 965 (Extreme Edition) was supposed to have Hyper-Threading support? Those 4 additional logical cores should not be present in the other Core i7 variants.
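
      For what it's worth, you can check what the kernel actually sees without the graph; /proc/cpuinfo is enough (a quick sketch, nothing i7-specific):
      Code:
      # number of logical CPUs the kernel brought up (8 = 4 cores with HT)
      grep -c '^processor' /proc/cpuinfo

      # siblings vs. cpu cores: siblings greater than cpu cores means HT is on
      grep -m1 'siblings' /proc/cpuinfo
      grep -m1 'cpu cores' /proc/cpuinfo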

      As for the sound, I guess Ubuntu 8.10 defaults to ALSA. Anyhow, we now know that sound works right out of the box, thanks to you.



      • #23
        Originally posted by mahuyar View Post
        Very cool indeed.

        I notice something in the picture: the CPU History shows up to 8 CPUs. I thought only the Core i7 965 (Extreme Edition) was supposed to have Hyper-Threading support? Those 4 additional logical cores should not be present in the other Core i7 variants.

        As for the sound, I guess Ubuntu 8.10 defaults to ALSA. Anyhow, we now know that sound works right out of the box, thanks to you.
        Yes, they should; all Nehalem variants have Hyper-Threading, and all the Nehalem variants still to come will have it as well (even the low-end dual cores). It seems you've been misinformed, probably by someone who bought the eXpensive Edition and wanted to justify it somehow.



        • #24
          Guys, I just caught up with this thread. I compiled Gentoo ~arch on my
          Core i7 920/Asus P6T/Patriot 1600 6GB this past week. Sound
          ('hda-intel') works perfectly fine, and the network ('sky2') worked
          perfectly from my boot disc. I have not hit any show-stopper problems
          yet, but there are some issues that need to be addressed.

          1) GCC 4.3.2 thinks this is a Nocona chip, so when I compiled I used
          '-march=native -msse4' (see the sketch after this list). I also have
          'native' set in the kernel.

          2) The linux-2.6.28-x kernels do not have a Core i7 processor-family
          option, so I set it to Core 2.

          3) There is no W83667HG sensor module, and let me tell you, this chip
          runs WAY HOT. So I am forcing the w83627ehf module with 'modprobe
          w83627ehf force_id=0x8860' (also sketched below) and fine-tuning the
          sensors3.conf file to compensate. I did enable the coretemp module in
          the kernel, but sensors-detect of course doesn't see such a sensor, so
          it won't pick it up. However, if you modprobe coretemp it does work and
          appears perfectly accurate on all EIGHT cores... LOL. I spoke to an
          lm_sensors dev today; he updated sensors-detect to see the sensor, and
          now he is working on a module.
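
          For reference, points 1 and 3 in config form. This is just a sketch of
          the idea, with the usual Gentoo file locations assumed (they may
          differ on other setups):
          Code:
          # /etc/make.conf -- point 1: gcc 4.3.2 misdetects the chip, so spell out SSE4
          # (only the flags mentioned above; your other usual flags go here too)
          CFLAGS="-march=native -msse4"
          CXXFLAGS="${CFLAGS}"

          # point 3: persist the forced sensor driver instead of modprobing by hand,
          # e.g. in /etc/modprobe.d/w83627ehf.conf:
          options w83627ehf force_id=0x8860

          # coretemp works once loaded manually; sensors-detect just can't find it yet
          modprobe coretemp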

          Needless to say, this thing hauls arse. For comparison, it used to take
          me an hour and 19 minutes to compile GCC on my AMD X2 4800@2700MHz.
          Using the same everything but hardware, it took 32 minutes with my Core
          i7 920@3.6GHz. I have been able to get 4.2GHz stable, but temps climbed
          into the high 80s/low 90s, so I backed off and changed my OC strategy:
          a lower multiplier with a higher Bclk, which lets me use lower CPU
          volts. Currently at 3.6GHz I am using 1.21875 CPU volts with 1.20000
          QPI volts, idling at 34C and reaching the low 70s C at 100% load, which
          by all accounts is perfectly fine on these chips. They run hot, but
          they can also take it. The temps are a little more under control with
          water cooling now; otherwise I would be idling at about 48-50C.

          If anyone has any questions/recommendations or would like me to try
          something just let me know. I am more then happy to do some testing
          within reason or to the extent of my abilities as a linux n00b.

          P.S. I forgot to mention that, as far as I know/have read, there isn't
          any support for QuickPath yet. I am sure once there is, benchmarks will
          improve drastically.
          Last edited by Jupiter; 06 December 2008, 10:35 PM.



          • #25
            Originally posted by Jupiter View Post
            2) The linux-2.6.28-x kernels do not have a Core i7 processor-family
            option, so I set it to Core 2.
            I was also wondering about the same thing. I took a look at their git summaries; Core i7 was not mentioned at all.

            BTW, you mentioned you were on water cooling and achieved 3.6 GHz stable. How much do you think I could squeeze out of it from the stock air without compromising the stability?



            • #26
              Originally posted by mahuyar View Post
              BTW, you mentioned you were on water cooling and achieved 3.6 GHz stable. How much do you think I could squeeze out of it from the stock air without compromising the stability?
              Well, in my hot room, stable means keeping my temps down under 100%
              load on all four cores. While running on air I was able to run
              stable at 3.2GHz.



              • #27
                Some Phoronix Test Suite universe test.
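
                (For anyone who wants to run the same thing: assuming a stock
                Phoronix Test Suite install, the universe suite is launched
                roughly like this.)
                Code:
                # install and run the whole 'universe' suite -- this takes many hours
                phoronix-test-suite benchmark universe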



                • #28
                  Core i7-920 memory system results.

                  Originally posted by mahuyar View Post
                  Now that they are out, please post your experience with the Intel Core i7 CPU and X58 chipset. I am wondering if anyone has installed Linux (any distro, in general) on this new platform.

                  Excellent memory system; sad to say, most of the memory benchmarks I've seen are horrible. Folks seem to mostly post results from single-threaded benchmarks, which seems insane: you have 8 threads, 3 levels of cache, and 3 memory buses... so much potential.

                  In any case, the industry-standard STREAM benchmark:
                  Code:
                  Array size = 6000000, Offset = 0
                  Number of Threads requested = 4
                  Function     Rate (MB/s)   Avg time   Min time   Max time
                  Copy:        22409.4604    0.0045     0.0043     0.0047
                  Scale:       22305.1841    0.0045     0.0043     0.0046
                  Add:         21566.8551    0.0070     0.0067     0.0073
                  Triad:       21673.6562    0.0069     0.0066     0.0073

                  That's with zero cache hits, zero cheating code, no special assembly magic, just simple C code. It looks substantially higher than what I've seen elsewhere, and I can assure you that some real-world codes actually behave like this. This is approximately 2.5x the numbers I've seen from previous Intels, and it even matches some of the newest DUAL-socket AMD systems.
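
                  If anyone wants to reproduce this, STREAM is a single C file;
                  roughly this build matches the run above (the
                  -DSTREAM_ARRAY_SIZE switch assumes a recent stream.c; older
                  copies hardcode the size or use -DN instead):
                  Code:
                  # stream.c from the STREAM site, built with OpenMP
                  gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=6000000 stream.c -o stream
                  OMP_NUM_THREADS=4 ./stream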

                  Of course, if you really want to understand the memory hierarchy and its inherent parallelism, I'd suggest:


                  Note the -d2 graphs (2 DIMMs), the -1066 graphs (DDR3-1066), and the -1333 graphs (DDR3-1333). I threw in some Q6600 numbers for comparison; it's definitely quite an upgrade.

                  So for CPU-limited stuff I wouldn't expect much difference, something on the order of 10% at the same clock. But for anything that is memory-latency, parallelism (more than one request in flight), or bandwidth intensive, the Core i7 is a substantial upgrade.

                  If folks are interested, maybe we can get Phoronix to adopt some more reasonable memory benchmarks that don't leave half or more of the potential hidden.



                  • #29
                    Originally posted by Jupiter View Post

                    Needless to say, this thing hauls arse. For comparison, it used to take
                    me an hour and 19 minutes to compile GCC on my AMD X2 4800@2700MHz.
                    Using the same everything but hardware, it took 32 minutes with my Core
                    i7 920@3.6GHz.
                    I'm running a non-overclocked 920, 1333MHz memory, pretty much stock. Not sure how Gentoo builds gcc, but your time sounds really slow. Are you maybe just using 1 of your 8 CPUs?

                    Here's what I get when I do a build of gcc-4.3.2 under Ubuntu:
                    Code:
                    libtool: link: creating gc-analyze
                    make[3]: Leaving directory `/export/bill/obj/x86_64-unknown-linux-gnu/libjava'
                    make[2]: Leaving directory `/export/bill/obj/x86_64-unknown-linux-gnu/libjava'
                    make[1]: Leaving directory `/export/bill/obj'

                    real 5m43.471s
                    user 28m47.884s
                    sys 2m26.437s

                    So that's 28 minutes of CPU time and under 6 minutes wallclock. Even if I used only 1 (non-overclocked) core, I'd expect it to be around 28 minutes.
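
                    For reference, that came from a plain parallel build, nothing
                    exotic; a sketch of the invocation (paths are examples, not my
                    exact tree):
                    Code:
                    # build gcc-4.3.2 in a separate object dir, 8 jobs at once
                    mkdir obj && cd obj
                    ../gcc-4.3.2/configure
                    time make -j8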


                    Originally posted by Jupiter View Post


                    P.S. I forgot to mention that, as far as I know/have read, there isn't
                    any support for QuickPath yet. I am sure once there is, benchmarks will
                    improve drastically.
                    QuickPath is just a fast point-to-point link between the CPU and the chipset; no OS support is needed. If you run something I/O intensive, like a 16-disk RAID controller (or two) or a fast video card, you are less likely to hit a bottleneck the way you would on a shared FSB or I/O bus. You can now talk to the video card/RAID card without pushing everything over the FSB (and vice versa), so under intensive loads you should have more consistent performance. In general it isn't something you would
                    particularly notice in a single-socket system.



                    • #30
                      Originally posted by BillBroadley View Post
                      I'm running a non-overclocked 920, 1333MHz memory, pretty much stock. Not sure how Gentoo builds gcc, but your time sounds really slow. Are you maybe just using 1 of your 8 CPUs?

                      Here's what I get when I do a build of gcc-4.3.2 under Ubuntu:
                      Code:
                      libtool: link: creating gc-analyze
                      make[3]: Leaving directory `/export/bill/obj/x86_64-unknown-linux-gnu/libjava'
                      make[2]: Leaving directory `/export/bill/obj/x86_64-unknown-linux-gnu/libjava'
                      make[1]: Leaving directory `/export/bill/obj'

                      real 5m43.471s
                      user 28m47.884s
                      sys 2m26.437s

                      So that's 28 minutes of CPU time and under 6 minutes wallclock. Even if I used only 1 (non-overclocked) core, I'd expect it to be around 28 minutes.
                      After making some adjustments in the kernel and make.conf,
                      here is what I get now. Temps stayed under 65C and the load
                      appeared to be spread evenly.
                      Code:
                      vger ~ # genlop -t sys-devel/gcc
                       * sys-devel/gcc
                      
                           Sun Dec  7 19:30:36 2008 >>> sys-devel/gcc-4.3.2
                             merge time: 9 minutes and 44 seconds.
                      
                           Fri Dec 12 10:43:29 2008 >>> sys-devel/gcc-4.3.2
                             merge time: 10 minutes and 18 seconds.
                      
                           Fri Dec 12 11:02:52 2008 >>> sys-devel/gcc-4.3.2
                             merge time: 9 minutes and 35 seconds.
                      
                           Fri Dec 12 11:34:25 2008 >>> sys-devel/gcc-4.3.2
                             merge time: 10 minutes and 41 seconds.
                      
                           Fri Dec 12 11:48:52 2008 >>> sys-devel/gcc-4.3.2
                             merge time: 9 minutes and 32 seconds.
                      
                           Fri Dec 12 12:00:01 2008 >>> sys-devel/gcc-4.3.2
                             merge time: 9 minutes and 45 seconds.
                      
                      vger ~ #
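
                      The make.conf side of it is basically the parallel-build
                      setting; a minimal sketch, assuming the standard Gentoo
                      MAKEOPTS variable:
                      Code:
                      # /etc/make.conf -- one make job per logical CPU
                      MAKEOPTS="-j8"
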
                      Originally posted by BillBroadley View Post
                      QuickPath is just a fast point-to-point link between the CPU and the chipset; no OS support is needed. If you run something I/O intensive, like a 16-disk RAID controller (or two) or a fast video card, you are less likely to hit a bottleneck the way you would on a shared FSB or I/O bus. You can now talk to the video card/RAID card without pushing everything over the FSB (and vice versa), so under intensive loads you should have more consistent performance. In general it isn't something you would
                      particularly notice in a single-socket system.
                      I understand now, and I think I see some difference in some
                      PTS results.

