Wine Project's April Fools' Gag With Merit: Leveraging AI For Faster Code Review

Written by Michael Larabel in WINE on 4 April 2024 at 11:25 AM EDT.
Earlier this week Wine developer Gabriel Ivăncescu of CodeWeavers laid out an intriguing proposal: leveraging AI to assist with the code review process for more punctual review and upstreaming of patches into Wine, the software that allows Windows games and apps to run on Linux and other platforms. While great in theory, at this stage the proposal amounted to just an April Fools' gag.

Gabriel's proposal was to leverage a large language model to help with the timely review of code. It's great in theory and technically possible: various AI code assistants already exist, and GitLab's structured merge-request process can be automated. A number of startups are already working on AI code review tools, though none has notable traction in the open-source world or has seen widespread adoption by open-source projects. The closest things at this stage are the various mailing list bots that check patches for compliance with kernel coding standards, carry out build testing, and run some Intel CI testing.
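For illustration, a minimal sketch of what such automation could look like: a merge-request title and diff are formatted into a review prompt and handed to a model, with the response posted back as a review comment. Everything here is hypothetical; `call_llm` is a stub standing in for a real model API, and the GitLab-side plumbing is reduced to string formatting rather than actual API calls.

```python
# Hypothetical sketch of an LLM-assisted merge-request reviewer.
# None of this reflects an actual Wine or CodeWeavers tool.

def build_review_prompt(mr_title: str, diff: str) -> str:
    """Format a merge request's title and diff into a code-review prompt."""
    return (
        "You are a code reviewer for an open-source project.\n"
        f"Merge request: {mr_title}\n"
        "Review the following diff and list concrete issues:\n\n"
        f"{diff}"
    )

def call_llm(prompt: str) -> str:
    """Stub for a real LLM API call (a hosted completion endpoint, say)."""
    # A real bot would send `prompt` to a model and return its reply.
    return "LGTM: no obvious issues found."

def review_merge_request(mr_title: str, diff: str) -> str:
    """Produce the review comment a bot could post back to the MR thread."""
    return call_llm(build_review_prompt(mr_title, diff))

if __name__ == "__main__":
    sample_diff = "-    return 0;\n+    return status;"
    print(review_merge_request("loader: Propagate the status code.", sample_diff))
```

In a real deployment the bot would be triggered by a GitLab webhook on new merge requests and would post the result as a comment via the API; the interesting (and unsolved) part is whether the model's review is reliable enough to act on, which is exactly where the gag lands.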

Making it more gag-like, the April 1st proposal called for the LLM to have "full authority over the entire review process, so we can focus on writing code", with "the goal of it becoming the ultimate (and only) maintainer for the project." We are not quite there yet, but there is no disputing the shortage of code review capacity and other resource constraints plaguing open-source projects... The message was sent just days after the big XZ security fiasco; an AI reviewer might have caught the always-failing sandbox check that was intentionally added there, though an AI bot would not necessarily fend off intentionally rogue developers, let alone rogue project maintainers.

The mailing list proposal went on to add other colorful commentary such as:
Due to the training data, it exhibits a bias of review styles by famous reviewers such as Linus Torvalds (from the Linux kernel), so expect a lot of productive rants. I also gave it the capability to close MRs if the code is simply unsalvageable, though obviously only when it gets authorization to do so. In my tests, 98.657% of the code I sent it was classified as "garbage" and "unsalvageable", proving its effectiveness.
...
Gone will be the days of waiting weeks to even get a response to your MR; now you'll just get bashed almost immediately and most likely even have your MR instantly closed "as a lost cause" if it stinks that much for the all-knowing LLM.

While an April Fools' gag this year, leveraging AI/LLMs could in the future help with code review for short-staffed and under-funded open-source projects, and in a multitude of related areas, to enhance their productivity. It will be interesting to see how AI impacts open-source projects in the years to come.
About The Author
Michael Larabel

Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter, LinkedIn, or contacted via MichaelLarabel.com.
