ECS NVIDIA GeForce GT 240 512MB

Michael Larabel

Published on 22 January 2010
Written by Michael Larabel
Page 3 of 9

The ECS GeForce GT 240 was installed in a system with an Intel Core i7 920 processor overclocked to 3.60GHz, an ASRock X58 SuperComputer motherboard, 3GB of DDR3 system memory, and a 320GB Seagate ST3320620AS SATA HDD. On the software side was Ubuntu 9.10 (x86_64) with the Linux 2.6.31 kernel, X Server 1.6.4, and the NVIDIA 195.30 beta display driver.

The GeForce GT 240 booted up just fine with the binary NVIDIA graphics driver and properly mode-set to 2560 x 1600. The PCI ID for this graphics card was 0x0ca3, while the PCI vendor ID was of course 0x10de. Compiz was working and OpenGL applications ran without issue. PowerMizer also had no problems dynamically changing the performance levels based upon load, but the first sign of trouble appeared when we tried using CoolBits to overclock the GPU. Whether attempting to change the 3D clock frequencies manually or using the auto-detect feature, the clocks would not budge from 550MHz for the core and 1700MHz for the GDDR5 memory; no other core/memory clock values would take. We could not overclock (or underclock) this ECS GeForce GT 240 graphics card under Linux, and all attempts to do so failed. This is either a bug within NVIDIA's binary Linux driver, an issue with the video BIOS on this ECS graphics card, or some combination thereof.
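For those wanting to reproduce the CoolBits experiment, the binary driver of this era exposed clock manipulation only after the Coolbits option was added to the Device section of xorg.conf and X was restarted. A minimal sketch follows; the Identifier string is an assumption and will vary per configuration:

```
# /etc/X11/xorg.conf -- hypothetical Device section; the
# Identifier below is an assumption and will differ per system.
Section "Device"
    Identifier "GeForce GT 240"
    Driver     "nvidia"
    # "1" enables CoolBits clock controls on drivers of this era
    Option     "Coolbits" "1"
EndSection
```

With that in place, the clocks could then be polled and set from nvidia-settings, e.g. querying GPUCurrentClockFreqs to read back the NV-CONTROL values or assigning GPU3DClockFreqs to apply new 3D clocks (attribute names as we recall them for the 190/195 driver series; `nvidia-settings -q all` lists what a given driver actually supports). On this card, applied values simply reverted to stock.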

Due to this overclocking issue and another set of problems to be mentioned in this article, the ECS GeForce GT 240 graphics card was then installed in an entirely different AMD-based system running the NVIDIA 190.53 stable Linux driver rather than the 195 series beta. With this driver, CoolBits claimed to work, and the optimal 3D clocks it found were 720MHz for the graphics processor and 2040MHz for the memory. These optimal clocks are rather high and equate to the GPU running 30% faster and the memory running 20% above its rated speed. The values seemed to apply fine, but when running the tests that followed, there was no difference in performance compared to the stock speeds. When polling the NV-CONTROL extension to see what the clocks were reading, they were back at their stock frequencies.
