Open Source Project LLVM Says Yes to AI-Generated Code, But Not Without Conditions

The new "human in the loop" policy holds contributors accountable for reviewing and understanding all AI-assisted submissions.

Following the lead of other open source projects, LLVM has now adopted a new "human in the loop" AI policy that governs how AI tools may be used in contributions to the project.

With this in place, contributors can use whatever AI tools they like to help with their contributions, but they are fully accountable for what they submit. They also have to mention which tool they used, whether in the pull request description, the commit message, or wherever authorship is listed.
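
For illustration, such a disclosure can be as simple as a short note in the commit message. The example below is purely hypothetical (including the subject line) and is not a format mandated by the policy:

    [clang] Fix crash in template argument deduction

    The initial draft of this patch was generated with <name of AI tool>;
    I have reviewed and tested the final changes myself.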

Additionally, contributors must be able to answer questions about their work during review, and they should be confident that what they are submitting is worth a maintainer's time to review.

LLVM's new AI policy also clarifies that:

"Contributors are expected to be transparent and label contributions that contain substantial amounts of tool-generated content. Our policy on labelling is intended to facilitate reviews, and not to track which parts of LLVM are generated."

For the uninitiated, LLVM is a collection of compiler and toolchain components that serves as the foundation for many programming languages and development tools. It underpins major projects like Clang (the C/C++ compiler), Rust, and Swift, and is even used in Linux kernel development.

The Community Was Involved

As you would expect from any open source project worth its salt, the new policy was drafted with community feedback taken into account.

One of the earlier calls for this came from an LLVM community member who pointed out that there was a mismatch between LLVM's policy on handling AI-generated code, its code of conduct, and what was actually happening in practice.

This person cited a specific pull request that had attracted a lot of attention on Hacker News, in which a contributor had openly admitted to using AI without disclosing it in the pull request itself.

Reid Kleckner, an LLVM maintainer, took the lead in addressing these concerns. First, he posted a draft policy to gather community feedback. His initial proposal borrowed heavily from Fedora's AI policy and included specific limits, like restricting newcomers to 150 lines of non-test code.

A few months later, he was back after gathering extensive feedback from community meetings and forum discussions. Kleckner noted that he had moved away from the Fedora-based draft, with the new version focusing on making the requirements more explicit and actionable.

Instead of vague clauses like "owning the contribution," the updated policy spelled out clearly that contributors must review their work and be prepared to answer questions about it.

The updated AI Tool Use Policy is now live on LLVM's documentation website, complete with guidelines for handling violations and examples of acceptable AI-assisted contributions.

Via: Phoronix

About the author
Sourav Rudra

A nerd with a passion for open source software, custom PC builds, motorsports, and exploring the endless possibilities of this world.
