Nearly Two Decades Later, ATI Radeon R300 Linux Driver Sees Occasional Improvement


  • AaronP
    replied
    One question I've had: Why is there no EGL support in the r300 series? Does the OpenGL 2.1 hardware not actually support it, or has it just not been coded?

  • andre30correia
    replied
    It's nice to see this, but these cards belong more in retro machines; a new OS uses a lot of resources, and the CPU and HDD in those machines are quite slow. I remember when I switched to an SSD many years ago: it was a completely different experience and the best upgrade for any machine. I have an old ATI 9600 and an Athlon, but the board burned out, and I have no idea where to get one these days to test this old hardware on a new OS like Xubuntu 32-bit, Puppy, or another 32-bit OS, since the CPU is only 32-bit.

  • kpedersen
    replied
    Originally posted by M@GOid View Post
    I, for one, would like to force developers to run under-powered machines on a daily basis, so that the brute-force approach wouldn't be used so frequently to compensate for bad code.
    In some ways, because open-source operating systems don't get immediate access to the latest hardware (they have to wait for reverse engineering and open drivers to catch up), they do, in theory, stay a bit more conservative. For example, I can run Linux fine on a Pentium 4, but Windows 10 is pretty much a no-go.

    What I also see with OpenBSD in particular is that the additional security focus can often slow it down, so the developers have to maintain performance in other ways. Perhaps this is partly why its base install, including its Xorg fork, has stayed very light and unbloated. Just comparing their Xenocara with stock Xorg (i.e. on FreeBSD or Linux) shows that it has fewer dependencies while offering the same (if not more) functionality. Fvwm and cwm are also great examples of balancing functionality against weight.

    It is not all perfect, however. For example, I am not impressed by the current state of desktop environments for Linux/BSD; most of the unworkable weight seems to come from them. I feel that older happens to be better here, and the design of many packaging systems is unfortunately unsuitable for maintaining previous revisions of software effectively (I have only seen this work well-ish on Solaris, to be honest).
    Last edited by kpedersen; 03 October 2021, 08:06 AM.

  • illwieckz
    replied
    One problem with this hardware is that there are issues with AGP, PCI, and PCIe graphics alike (see this thread from this year on the LKML).

    I have verified that:
    • AMD and Nvidia PCI graphics are broken on AMD platforms, verified on the K8, K10, and Piledriver architectures, including cards like the ATI Radeon HD 4350 (RV710, TeraScale, OpenGL 3.3, 512 MB VRAM, 1 HDMI + 1 VGA);
    • AMD and Nvidia PCIe graphics are broken on Intel platforms that also feature an AGP port, verified on the VIA PT880/VT82xx chipset, with lots of cards tested, including the AMD Radeon HD 6970 (Cayman, TeraScale 3, OpenGL 4.3, 2 GB VRAM, 2 DVI-I + 1 HDMI + 2 mini-DP, released 2010-12);
    • AMD AGP graphics are broken on AMD and Intel platforms, with lots of cards tested, including cards like the ATI Radeon HD 4670 AGP (RV730 XT, TeraScale, OpenGL 3.3, 1 GB VRAM, 1 HDMI + 1 DVI-I + 1 VGA).
    One big problem is that the AGP code was recently disabled by default, making AGP cards run through the PCI code path on the assumption that the PCI code worked; in fact, the PCI code turns out to have been broken for many years (I tested very old distros, and the PCI graphics code was already broken ages ago). One way to bring back a working AGP driver is to pass the radeon.agpmode=1 boot parameter to the Linux kernel (multiple values are available: 1, 2, 4, and 8, while -1 is the default that selects the broken PCI path). Unfortunately, I don't know of any workaround for AMD and Nvidia PCI cards on AMD hosts, nor for AMD and Nvidia PCIe cards on hosts that also feature an AGP port (even if unused).
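
    As a rough sketch (assuming a Debian/Ubuntu-style GRUB setup; file paths and variable names may differ on your distribution), making that parameter persistent could look like this:

        # /etc/default/grub -- force the radeon driver's AGP path instead of
        # the broken PCI fallback; 1/2/4/8 select the AGP transfer speed
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.agpmode=4"

    followed by regenerating the GRUB config (e.g. update-grub or grub-mkconfig -o /boot/grub/grub.cfg) and rebooting.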

    Otherwise, the Mesa drivers are very good. This news is also about Mesa becoming even better for those old cards… but what good is that if PCIe, PCI, and AGP don't work to begin with, once you hit the unlucky combination of hardware?

  • baka0815
    replied
    Nice work! But will it do ray tracing?

  • AaronP
    replied
    Huzzah for my still-used HP zv6000-series laptop with ATI Radeon Xpress 200M graphics (RS480, on the r300 driver)!

    Edit: Running Ubuntu MATE 21.10 beta currently.

  • david-nk
    replied
    Originally posted by M@GOid View Post
    I, for one, would like to force developers to run under-powered machines on a daily basis, so that the brute-force approach wouldn't be used so frequently to compensate for bad code.
    It wouldn't even be hard; for a long time I tested my applications with my CPU locked to its lowest power state (800 MHz).
    I wish some popular IDEs would do that by default while the application is being run, to make developers a little more performance-aware.
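
    A minimal sketch of that kind of lock on Linux, via the standard cpufreq sysfs interface (a hypothetical helper; it requires root, and assumes the usual cpufreq file layout):

        # pin_cpus_low.py -- cap every core at its minimum frequency
        import glob

        for policy in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
            with open(policy + "/cpuinfo_min_freq") as f:
                min_freq = f.read().strip()
            # capping scaling_max_freq at the hardware minimum locks the core there
            with open(policy + "/scaling_max_freq", "w") as f:
                f.write(min_freq)

    Writing cpuinfo_max_freq back (or just rebooting) undoes the cap.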

  • blackshard
    replied
    Originally posted by MadCatX View Post
    Virtual reality, machine learning algorithms, networks capable of interconnecting billions of devices across the planet (and beyond), advanced data compression technologies for real-time streaming of high-quality multimedia? You couldn't do any of that on an Amiga.
    None of those relate to operating systems or silicon innovation.

    Machine learning doesn't need anything special: if you have an ALU you can do it, even on an old Amiga or an Intel 8086. Nowadays silicon manufacturers are just adding specialized instructions to accelerate it, but you don't really need anything special; the field was developed academically back in the 70s.
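
    To make that concrete, here is a toy sketch: a perceptron learning AND needs nothing but multiplies, adds, and compares, operations any ALU of that era could handle:

        # perceptron.py -- learns the AND function using plain arithmetic only
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b, lr = [0.0, 0.0], 0.0, 0.1

        for _ in range(20):                      # a few epochs suffice here
            for (x1, x2), target in data:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1            # classic perceptron update rule
                w[1] += lr * err * x2
                b += lr * err

        print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])

    On a 68000 or an 8086 you would use fixed-point arithmetic instead of floats, but the loop is the same.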

    As for networks, TCP/IP was likewise developed in the 70s; the protocol (IPv4) never really changed that much, and IPv6 has yet to be widely adopted.

    Things got faster, but it has been years since we last saw game-changing, fundamental innovations in the field. The next really big thing is probably quantum computing.

  • Adarion
    replied
    Originally posted by M@GOid View Post
    New OSes get bloated all the time, Linux included. Here I was testing an ancient C-50 CPU, and on recent versions of Ubuntu (above 18), Firefox 92 took over 20 seconds to start, ON AN SSD!! Opening the plain Wikipedia homepage took over 8 seconds. I decided to go back and discovered that on 14.04, Firefox 66 starts in 5 seconds and opens the page in half the time.

    I, for one, would like to force developers to run under-powered machines on a daily basis, so that the brute-force approach wouldn't be used so frequently to compensate for bad code.

    Now that the rant is over, I'd like to congratulate the devs helping to keep old and useful ATI/AMD hardware alive. Nvidia cards don't enjoy the same driver longevity.
    Yes. Hardware doesn't magically become "slower"; it's the OS with all its patching, and application software that gets more bloated with more and more abstraction layers, scripting languages mistaken for actual programming languages, VMs, and ever more library dependencies.
    It's horrible bloat.
    Some tasks we did on the i486 are still much the same, but now you need a multi-GHz hexacore and GiBs of RAM.
    Especially with a lot of websites: just take some PrefBar equivalent (it was a Mozilla Suite/SeaMonkey/Firefox addon), switch off JavaScript, and reload a page. Whoa! Fast! You actually feel the power of the hardware.

    Simply save a webpage: say, 512 bytes of actual information, kibibytes of CSS, megabytes of unrelated (!) images, and 1-5 MiB of JavaScript code. No wonder everything is slow.

    On topic: hooray and kudos to my heroes. I still have an R100 in a thin client and hope it'll see some power-management love one day too. I guess I lack R300 models, but I still have a bunch of R600 chips and cards around. And I'm always happy when people still take care of old hardware. If it's still functional and isn't a hotplate (NetBurst...), it's still fine to use for its purpose.

  • skeevy420
    replied
    Originally posted by tildearrow View Post

    Exactly this. AmigaOS, aside from its "crashes are fatal" issue, was very capable, and that was on a humble 7 MHz 68000.
    On the other hand, Windows 10 takes a long time to boot from an HDD... on a 2.5 GHz Core i5.

    We used to have true innovations like accelerated rendering, memory protection, virtual memory, true color, 3D graphics... Now what?
    Less flashy, behind-the-scenes stuff: inline compression; codecs like Zstd-15 that take forever to compress but decompress nearly as fast as LZ4; the rise of the Terminator technologies, AI and ML; Rust; I'd add Plasma 5 to the list of bitchin' things that innovate; x264 is pretty sweet; high-quality, low-bitrate audio codecs that drive-thrus really need to learn about; actual glasses-free 3D screens that I wish would catch on; motion controls (like the Wii); camera recognition (both useful and scary); very low-latency wireless; AMD's 128-core CPUs making everyone else look like chumps; and, most of all (I guess we forgot about yesterday's report on Proton), a lot of work on emulation and other cross-platform technologies, so that we're about to enter the golden age of running what you want wherever you want to run it.
