Diverse perspectives on AI from Rust contributors and maintainers
On Feb 6, the project began collecting perspectives on AI into a shared document. This document is a summary of those comments, authored by nikomatsakis on Feb 27 or so.
The goal of this document is to cover the full range of points made so that we can understand the landscape of opinion and the kinds of arguments on each side. For the most part I attempted to minimize summarization and to let people’s quotes speak for themselves. If you would like to read the original comments, you can find them here.
Be careful when characterizing this summary. The comments within do not represent “the Rust project’s view” but rather the views of the individuals who made them. The Rust project does not, at present, have a coherent view or position around the usage of AI tools; this document is one step towards hopefully forming one.
The discussion also does not cleanly distinguish between “AI use on rust-lang crates” and “AI use by Rust developers elsewhere”. Many quotes assume one or the other, or both, so care must be taken when interpreting them.
AI is a tool that one must learn to wield well
Those who get the best results from AI point out that it takes real engineering to get there. It is not a matter of “AI working well” or “AI not working well”, but a matter of making AI work well:
It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.
– TC
What’s more, the models are constantly improving:
Something that might not be obvious is how much things have changed over the last 2-3 months. At one time, it was hard to justify the use of models for serious work. But the state-of-the-art models are now too good to ignore.
– TC
This helps explain why people’s experiences with AI seem to be so different:
I had been struggling with some cognitive dissonance where I see people I deeply respect finding value in these tools while at the same time finding 99% of the value people claim from these tools to be all smoke and no substance and wondering whether that is the case with people like Niko. But from Jayan’s point I can see how inputs and the way these tools are used can still have an impact which could cause ppl like Niko to have better outcomes vs random people with no engineering background trying to use these tools.
– yaahc
Many people find value in AI for non-coding tasks
Much of the discussion around AI focuses on coding, which obscures the fact that many–though not all–people are using AI successfully for other kinds of tasks.
Searching and discovery
AIs can be helpful when navigating unfamiliar codebases or documentation:
I do find them valuable for research-y things. We have some internal AI tooling at Arm that makes searching our 10,000+ page architecture documentation much easier, and I find that exceptionally valuable - it makes it a lot easier for me to respond to issues upstream promptly.
The AIs are often wonderful for researchy things. For example, I’ve had great success with “well I’m here and I need a Span; where do I get one?” kinds of questions.
– scottmcm
Reviewing code and exploring ideas
Another related topic is using AIs to “rubberduck” or “brainstorm” or to explore ideas interactively:
I’ve had some success with using it for double checking what I did, making it ask questions, which – while dumb – made me explore the correct idea.
They can also be very useful for reviewing code:
[Despite my reservations about AI,] I would be interested in exploring LLMs for code review. Some Linux kernel folks apparently had good success on having LLM agents assist in review using very project-specific, carefully crafted prompts. Obviously this cannot replace human code review and approval, but if done well it could still help make reviewers more effective. It seems worth a try. However, we should be careful not to get into a situation where we have an unhealthy dependency on LLMs to keep the project running. I hear some of the open-weight models are getting fairly close to the big proprietary ones; using a self-hosted instance of those could alleviate some of the aforementioned concerns.
– RalfJung
Large-scale processing of semi-structured data
Particularly when working with semi-structured data, AI can make intractable tasks much easier to achieve. There were a number of examples in the doc; here is one from the FLS:
A long-standing issue in the FLS is that the glossary and the chapter contents are maintained separately. This causes maintenance slips where the definition in the glossary or the usage of the term (essentially still a definition) are subtly different, causing a bug in the FLS. Once the directive format was nailed down, the agent was able to progressively, bit by bit, migrate all the glossary into the chapters to have a single source of truth from which the glossary can be generated wholesale. Speaking personally, this is definitely the kind of thing it seems we would not have gotten to, since it’s such drudgery. Has not landed yet, being reworked to come in in phases that are easier to review.
Writing with AI however is tricky to do well
While AI can be helpful for non-coding tasks, it doesn’t work equally well across the board. Many people mentioned that AI writing in particular has lots of words and little information or structure:
[Describing an AI-generated project:] For the documentation, at the sentence level it was very good, at the paragraph level it was good, and at levels beyond that it was terrible. Bad structure, repetitive, no sense of order or flow. Just feels like a random collection of related things. This project had lots of README files and the issues were even worse when looking across them; there were incredible amounts of duplicated information.
Opinions on coding with AI are… varied
Experiences using AI for coding are all over the map. Some people found that it was not effective:
It takes more time for me to coerce AI tooling to produce the code I want, plus reviews and fixes, than to just write the code myself.
For implementing new features, I find them slower in wall time than implementing the feature myself.
But others found a lot of value:
If I had to pick one word for how I feel about using AI, it is empowered. This feeling, more than any other, convinces me that AI, for all its flaws (and there are many), is here to stay.
Suddenly it feels like I can take on just about any problem – it’s not that the AI will do the work for me. It’s that the AI will help me work through it and also tackle some of the drudgery. And certainly there are some areas (github actions, HTML/CSS) that would just stop me cold before, but which now are no issue – I can build out those things and bring them to life.
I’ve been using LLM agents since around mid-2025, and they’ve been genuinely useful in developing features, fixing bugs and analyzing data. I basically treat them as a tool (fancy auto-complete) and always review and polish the output before submitting PRs.
– Turbo87
AI is good for well-constrained tasks
Several people who don’t generally use AI for coding mentioned that it can be helpful for certain specific kinds of tasks:
I use agents (Claude Code) to automate boring/annoying stuff (refactorings, boilerplate code, generate REST API calls, etc.) or for understanding complex codebases and suggestions for how to do X.
– kobzol
From the pov of a code writer, I’ve enjoyed LLMs for writing proc macro code because that’s no fun and not too correctness-sensitive. They’ve become quite good at writing Rust code lately.
Leaning on AI can cause one to lose one’s connection to the code
Many commenters mentioned feeling that use of AI coding tools caused their coding skills to “atrophy” or diminish, or that it resulted in them losing a good grip on how the code works:
[It’s] really difficult to retain “deep impressions” or develop mental models of the codebase for code that I didn’t write myself.
Peter Naur wrote an essay called “Programming as Theory Building” that I like a lot. It argues that a program exists not just as source code, but also mental models in programmer’s brains, and that the mental models are as important, or even more important, than the source code. This is why programmers are not fungible. One programmer with a good mental model will be able to modify the program effectively; someone with a poor mental model won’t. Source code that has been abandoned by the original developers is in a degraded state; if someone takes it over they need to build up their own mental model, which may differ from the original author’s. Building and maintaining these mental models is hard work, and an enormous part of programming. So what does it mean to outsource all of that to an LLM? I can’t see it having a good outcome.
AI-generated code needs careful reviewing and that is hard to do well
Leaning harder on reviews to ensure quality is going to be difficult because reviewing is hard, particularly when things are wrong in subtle ways:
I don’t think “human reviews the resulting code thoroughly” works. Experimenting with the inline snippet in VSCode while working on the search graph got it to propose incorrect, but seemingly reasonable, comments. I even used some of them without realizing they are wrong. Unless LLMs get/are good enough (or the problem simple enough) that thorough review is unnecessary, I do not want to endorse using LLMs to generate code or to contribute to conversations.
– lcnr
Some commenters felt that AI changes not just the difficulty of review but the fundamental nature of what review is for:
Code reviews are not suited for catching minutia and are instead generally focused on reducing the bus factor by keeping other people abreast of changes, sharing culture and best practices, [and] limiting the effect of blindspots with more eyes — but minutia reviews is what AI needs and the AI-using contributor is no longer an “author” but a “reviewer”. Add on top of this that regular reviews can already be a draining rather than energizing activity for many, and switch to minutia reviews and either you’ll get disengaged, blind sign offs (LGTM) or burn out.
– epage
AI can help experts move faster, but can make it harder for new folk to become experts
Several commenters raised concerns about the effect of AI on learning and teaching. If newcomers lean on AI too much too early, they may never build the deep understanding that would make them effective contributors:
In other words, LLMs can be great tools in the hands of experts, but using them too much too early can prevent a person from even becoming an expert.
– RalfJung
Others pointed to research suggesting this concern is well-founded:
The science again and again points to either it being net negative in time spent, or to learning capabilities being hindered, all while participants believe they were faster or learned well respectively.
– oli-obk
One commenter framed this in terms of choice architecture:
If we view it from the choice architecture angle, allowing or enabling AI use risks making low effort & low engagement the most convenient. This is the consistent theme we see in education and professional contexts: if a task is not interesting then we “fix” it by letting an AI do it.
The ethics of AI usage
AI data was scraped from the internet at large
Many comments had to do with the moral and ethical dimensions of AI. For many, the provenance of the data that LLMs are trained on is a fundamental concern — not just a legal question, but a moral one:
LLMs are trained on stolen data. It seems to me that, given the amount of data needed to train an LLM, it would not be possible to train one comparable to current models on licensed data.
AI can be expensive to access and can concentrate power
Others cited the fact that access to AI is expensive and hence not spread evenly across developers:
It is also a new kind of closed garden where the best available LLMs are owned by a few companies which keep increasing their prices while reducing the free “trial”, increasing yet again the gap between those who can afford it and the others.
And the fact that models concentrate power on a small set of companies, with potential for manipulation:
Furthermore, due to the costs of creating such models, there are only a few companies offering them, concentrating a lot of power and control of people’s programming behavior in a few hands. They are centralized, proprietary services which leave the vendor in full control over who uses them when and where to do what – the exact opposite of FOSS, of empowering individuals. All this means we should be very hesitant to give the impression that we are endorsing LLMs and the way they are currently built.
– RalfJung
AI can propagate bias and reinforce the ills of the society that produced it
Several people commented on how AI in other parts of society has negative effects:
Many companies use résumé-scanning tools that try to guess at a person’s competence at a given job, and I’ve found that at least one of these is directly following the usual LLM pitfalls we’ve found in recent research: it’s very sensitive to unrelated parameters like names (how convenient, that names are a great way to discriminate!) and does not seem to indicate any substantial understanding of the reference material.
– Clar Fon
AIs consume a lot of power
Many commenters raised concerns about AI power usage and the effect that has had on efforts to slow or halt climate change:
The energy demands result in increasing rather than reducing the emission of greenhouse gasses into the atmosphere. Coal plants that were slated to be closed are being kept alive. Large tech companies walking back their de-carbonisation commitments. This is absolutely unjustifiable.
Literally, the scope is beyond just keeping fossil fuel usage at its current level. It’s expanding fossil fuel usage to the point of starting wars to drill oil in places where it hasn’t even been drilled yet. [This isn’t just reducing efforts to slow climate change] but a desire to actively accelerate climate change instead.
– Clar Fon
The legality of AI usage
Separately from the moral and ethical concerns, the legal landscape around AI is complicated and rapidly evolving. In the US, there are a number of relevant lawsuits wending their way through the courts attempting to adjudicate the limits of fair use around AI training. There have been a number of agreements between publishing companies and AI manufacturers, many of them private. The EU’s AI Act is beginning to impose transparency requirements on training data provenance.
There is also the question of copyright over AI-generated output. For open-source projects that depend on being able to assert their license terms, this is a significant unresolved risk:
Unsolved copyright issues with no solution in sight. The current thinking is to assume good faith and hope for the best. For a FOSS project, this is a huge Sword of Damocles.
– apiraino
I’m no lawyer, but from what I could learn so far, the more an OSS project embraces AI contributions, the greater the risks of not being able to claim its license terms (like attributions). Eventually, every contribution a project cannot prove was delivered by a human is a potential issue.
AI and open source
A number of people talked about the fact that agents make it much too easy to construct “plausible looking” (but wrong) PRs and that their tendency to hallucinate can give contributors artificial confidence, making them think they understand the codebase much more than they do:
An official “stamp of approval” can often be the missing impetus that enables many people, who previously might not have pumped out LLM slop as contributions, to do so with less guilt. This of course doesn’t represent all people, but it represents a (somewhat) growing majority of people. This subset of developers has heavy overlap with another class of LLM-using developers, namely those who’re particularly great exhibitors of the Dunning-Kruger Effect. AI for these users is akin to steroids for their Dunning-Kruger Effect. It boosts confidence, but impacts the user’s competence. This is all not to say that using an LLM will make you incompetent; there are a lot of developers, experienced ones, that utilise LLMs to improve their workflow. The problem doesn’t come from LLMs themselves, but from how they’re used.
Contributors proxying reviewer comments to LLM is frustrating
One specific callout was how contributors will sometimes just forward comments to the LLM rather than responding from their own knowledge:
- A few contributors even act as a proxy between the reviewer and the LLM, copy their reviewer’s question, reply with LLM-generated response. For the love of god, please.
- I want to emphasize this is incredibly frustrating. This is the top contributing factor to potential burn outs for me.
Poor quality PRs are increasing in both number and frequency
The most obvious problem is the high number of low-quality PRs, and the difficulty of detecting them:
- I have no idea how to solve the “sure, you quickly made something plausible-looking, but it’s actually subtly wrong and now you’re wasting everyone’s time” problem. We get way too many PRs “fixing” things that don’t actually solve it, and those seem to largely be from AI and wouldn’t have existed without it.
- Said otherwise, I continue to think that the greatest threat to the project is its lack of review bandwidth, and LLM is only making that worse, with no realistic prospect for it to make it better. (If the LLM could actually detect the real problems it could avoid them in the first place.)
– scottmcm
Beyond quality, the sheer volume of AI-generated contributions is accelerating:
Especially recently with the advent of stuff like OpenClaw and MCP and stuff, the sheer volume of fully AI-generated slop is becoming a real drain on review/moderation capacity.
Codebases are more than code
Several commenters made the point that the problem with AI-generated contributions goes deeper than code quality:
After I wrote this, this bizarre event happened, which got me thinking more. I think “Judge the code, not the coder” is an argument we’ll hear a lot in the coming days. There are a number of reasons why I think it is a poor argument.
- An open source project is more than just a codebase.
- There is a community of people around it.
- These people have a shared commitment to the project
- These people have a shared understanding of what the program does, and why. (This ties in with the Naur essay I mentioned above.)
- Drive-by LLM contributions do not contribute to these non-code aspects. They arguably even undermine them, even if the contributions are technically valid.
- For example, an LLM that fixes an E-Easy issue steals a human’s learning opportunity.
What we collectively build, beyond the code artifacts that the compiler+tools are, is a group of people who come back, who learn, who share their understanding, who align their tastes, who take input from the community, etc etc. Merging an LLM-generated PR feeds only the “we have code that works” part of the Project; it’s not participating in all the other feedback cycles that make the project alive.
Erosion of trust and inability to detect effort
AI-authored contributions break the implicit contract that used to exist, where contributors typically had to invest significant effort to prepare a “reasonable looking” PR. The result is an erosion of trust between contributor and maintainer/reviewer:
My main concern is that LLMs break nearly all of our current ways to detect effort. This causes us to incorrectly allocate review and mentoring capacity.
– lcnr
Triaging LLM-generated bug reports is difficult
A lot of the discussion around AI contribution has focused on code, but AI influences the open-source process in other ways. AI-generated issue descriptions and writing is a particular area that people find frustrating:
I honestly suggest blanket-banning any kind of AI-assisted tooling for bug reports. I don’t mind grammar mistakes, broken English, or even the reporter using their native language. That’s easy to work with. However, I absolutely despise bug reports that are of the LLM slop flavor – a whole bunch of text but somehow doesn’t contain actually needed information for reproducing. Even worse, some of these reporters include completely useless/wrong analysis that maintainers doing triage have to waste time looking in case there’s something real in there. Triage takes maintainer time too, please do not underestimate this.
– Jieyou Xu (emphasis theirs)
Discussions with contributors can become hostile
And some folks mentioned that commenters who use AI can quickly become hostile when asked about it:
I have other thoughts about this, but my immediate one is that the last issue reporter I encountered that overtly used LLMs was rather hostile (see rust#151868). I told the reporter that their issue report missed key reproduction information, because it not only referenced tools we do not directly support (bazel), it also omitted the basic inputs those tools demand or a description of how they got them (surely bazel build, but with what MODULE.bazel?). They instead included LLM-generated “analysis” of compiler build outputs, which were so “summarized” it was impossible to work back to what they were trying to describe. To top it off, the report also pinged people who were completely unrelated to the issue for no obvious reason. [..]
This is not particularly unique. LLM-driven issues and pull requests are often backed by defensiveness around how they got their information, whether it is code or an error report. Simple questions that should have simple answers are evaded so completely it requires an inquisition in response. It is exhausting to run them down on the facts, even when the initial report gives enough that you have them dead to rights.
– Jubilee Young (emphasis mine)
The charged atmosphere is making conversation difficult
The intensity of feeling around AI is making it difficult to have open, productive conversations. On one side, embracing AI would alienate some contributors and users:
There will be a cost to accepting AI in terms of existing and potential contributors lost. As a mod I have been asked why I don’t stop AI-positive framings as CoC violations, considering the incredible harm AI use and rollout have done. I see a lot of negative reporting of projects going all in on AI, and at most some praise on posts about rejecting AI fully. Even deep in the comments on the “reject AI” posts I see nothing about [people] stopping [contributing] because of it.
On the flip side, the heat around this topic means we may not be getting the full picture of how people are actually using AI. Some people are reluctant to speak openly about their experiences:
How we present ourselves impacts what people will tell us, and we have some responsibility for that. I’ve gotten a number of messages from people expressing reluctance to talk about their use of AI publicly on Zulip.
I see value in AI, obviously, but the thing I care more about is Rust being a place that is focused on helping people succeed at building foundational software, especially people who didn’t feel they could do that before.
To make things even more complex, some folks have reached out privately to say that because of top-down pressure at their employer, they are reluctant to speak negatively about AI.
So what should we do about all this?
Given all of this information, what is the right course of action for the Rust project? It is obvious that we have to do something about the “slop PR” problem and the growing burden that low-quality, AI-generated contributions are placing on maintainers.
Some projects have chosen to ban the use of AI when preparing PRs, and there are some in the doc who advocate for this approach. Others point out that such a ban would be difficult to enforce and that it is not going to make people stop using AI, or even stop them from using AI with Rust. This includes people who otherwise expressed strong opposition to AI:
In general, the Rust project cannot really do anything about the global state of AI. The cat is out of the bag and it won’t go back unless the bubble pops and the entire industry discovers that AI is not sustainable. Several people here have expressed opinions highly in opposition to AI for ethical reasons. These are all valid and yet for the reason of inevitability, we cannot bury our heads in the sand and hope it just vanishes. For this reason I think it’s necessary to think and talk about AI on all levels of the project.
Short of an outright ban, people suggested a number of steps that might help lighten the load on reviewers and help ensure that, if AI is used, it is used responsibly.
Universal policy
The compiler team already has an established policy around reviews that aims to make it easy for reviewers to quickly reject PRs that appear to be “extractive”. We could revisit this policy and make it more well known and universal (perhaps after some modification and discussion).
I suggest such an AI contribution policy cover:
- Contributors are responsible and held accountable for their contributions. Just submitting AI-generated content without reviewing it themselves is absolutely unacceptable.
- The contributor must understand their contributed changes. They need to be able to answer questions from the reviewer about the changes.
- Contributors must responsibly disclose if a substantial portion of their contribution is AI-generated.
- Reviewers are empowered to decline reviewing or interacting with contributions (including proposals and comments) that are primarily AI-generated.
- Submitting slop results in an immediate ban.
- Piping reviewer/maintainer questions into an LLM then posting the LLM’s response verbatim is an immediate ban.
Disclosure and accountability
Several commenters mentioned requiring disclosure and making sure that contributors are aware of their responsibility.
So, as a first step, I think we should ask contributors to explicitly acknowledge as part of preparing a PR that they have either authored or reviewed the entire PR (including PR description!) themselves and are able to answer questions about it on their own. Maybe we can also give guidance on how to write a good PR description (to counter the effect of LLMs writing extremely verbose descriptions with low information density). People can of course lie about this, but I expect many people just want to help and even for the rest, this helps set a clear line.
– RalfJung
The goal is to address the phenomenon where reviewers find themselves “indirectly” working with an LLM:
Interacting with people who just pipe LLM output into PRs without understanding or reviewing it, or even communicating with me via an “LLM proxy” is incredibly annoying.
– kobzol
Encourage people to write in their native language if they are not comfortable with English.
The use of AI in writing and communication was a particular source of frustration. However, there remains a valid use case to enable people who don’t speak English to communicate with the project. One way around this is to establish that people can write in their native languages if they prefer, and to have the project either use translation on our side or else try to find people who speak that language to assist:
Honestly, I’d rather the reporter just write in their non-English native language because at least that reflects the reporter’s actual sentiment, and you can cross-compare translations (i.e. you have access to the “original”). Especially e.g. Mandarin, because maintainers might actually be fluent in the native language! Whereas if someone just posts LLM-translated content, who knows what the original version was?
Encourage AI companies to invest in Rust maintenance.
It won’t magically solve anything, but certainly the fact that highly valued companies are creating products that increase the workload on volunteer reviewers is not sustainable. As we pursue new opportunities like paid maintainer funds, we may be able to get direct support from AI companies:
Furthermore, it’s not outside the realm of possibility that as a project we could attract maintainer support funding from AI companies (who use Rust quite a bit, as I understand it). Despite the valid objections many have to these technologies, and especially the companies behind them, I would want us to take their money to support our maintainers.
It’s not from this document, but I’m reminded of a quote from the Python Developer in Residence we interviewed as part of the RFMF program:
Making this a paid position changes the sort of psychological thinking about chore a lot. I wouldn’t even imagine doing this regularly as a contributor for a longer time just as a volunteer, because it’s like ‘is this how I’m spending my free time? this sucks’. But if it’s your job it’s like ‘yeah, this is my task for the day’, better than doing something internal to a company.
– Python Developer in Residence (not talking specifically about AI)
Sponsored access to AI tooling for contributors and maintainers
We could give project contributors and maintainers sponsored access to AI tools, much as we provide access to beefy remote desktops:
Making these tools widely accessible to Project members, and then telling the stories of how these tools can be used carefully and productively — in a way that lets us raise (not lower) our standards, and eases (not adds to) the load on maintainers — is what’s likely to get us there.
– TC
Reputation programs
Rather than an outright ban, other open-source projects are exploring web-of-trust endorsements or other techniques to discourage new contributors from opening low-quality PRs without demonstrating a higher level of commitment:
“Team members using AI for Rust work seems ok, we have established trust already. But unfortunately I think we need to raise the bar for new contributions to be obviously free of AI.”
– oli-obk
“Being able to filter PRs by project members or people who already contributed N times, so that reviewers can decide which kinds of PRs they want to review.”
“I anticipate this involving anti-spam filters much like email, or explicit web-of-trust-style endorsements.”
Fight fire with fire
Several people suggested that AI itself could be useful in attempting to identify and work with poor quality PRs or to help with issue triage:
We should invest resources in helping reviewers in every way possible. Some random ideas: [..]
- Have AI do a first review of the PR to spot some issues automatically, so that the reviewer can save time.
- Have AI do a first triage of the issues (to be confirmed by a human).
- Ask the author of the PR:
- if/how they used AI to write the code (as Turbo87 suggested)
- if they want to become a team member long-term or if they just want to make a one-time contribution.
One person mentioned direct experience with AI reviews and offered to help:
AI slop &amp; “AI agents” as github users submitting issues &amp; PRs are the worst. However, in the Matter repo I’ve found that LLM PR summaries + reviews are quite helpful. I have heard from colleagues that Rust repo reviewer time is quite precious at the moment, and an LLM doing first-passes + summaries could be helpful in lightening the load for reviewers. It could also help with pushing back on PRs from AI. Here’s an example in our repo, #367. If setting this up for the rust repo (at first simply as opt-in with /gemini review) is something people would be interested in, I’m happy to help.
– gmarcosb
Meta-observations and closing thoughts
My explicit goal in writing this document is to summarize what people said and try to give us all some common ground to use in further discussions. To that end, I thought it would be useful to call out some of the tensions that we will have to navigate as we do so.
Common ground
Despite the sharp disagreements, there is a lot of common ground:
- Maintainers are overburdened and that has to be addressed. The strain that low-quality, AI-generated contributions are placing on reviewers and moderators is recognized across the entire spectrum, from the most enthusiastic AI users to the most opposed.
- Naive use of AI generates crappy code and should be discouraged. Even AI proponents agree that it takes effort to learn to use AI well and that simply pointing an agent at a codebase and asking it to “do X” will result in a low-quality PR.
- Contributors must understand and stand behind their contributions. Nobody argues that it is acceptable to submit work you don’t understand or can’t defend.
- Reviewers should be empowered to reject low-quality work without elaborate justification. The compiler team’s existing policy is a good start in this direction. (I might go further and say that we should endeavor to identify and close low-quality PRs automatically, either via web-of-trust or spam filters or automated scans, that sort of thing. – nikomatsakis, leveraging his editorial privilege)
- AI-generated writing in issues and PR descriptions is particularly harmful. Even people who are positive about AI for code generation find AI-generated prose in project communication frustrating and wasteful of reviewer time.
- Effort once signaled commitment, and that signal is now broken. AI has made it easy to produce plausible-looking contributions without the understanding that effort once implied. This is a problem that needs to be addressed regardless of one’s broader views on AI.
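Several of the ideas above (filtering by project membership, by contribution count, or by web-of-trust endorsements) could be combined into a simple triage gate. The following is a minimal sketch only; the function name, data shapes, and threshold are hypothetical illustrations, not an actual rust-lang policy or tool:

```python
def needs_manual_triage(author, trusted, endorsements, merged_count, n=3):
    """Hypothetical gate: decide whether a PR warrants extra triage
    before a human review is scheduled.

    trusted:      set of established project members
    endorsements: newcomer -> set of trusted members vouching for them
    merged_count: author -> number of previously merged PRs
    n:            illustrative "contributed N times" threshold
    """
    if author in trusted:
        return False  # established members skip triage
    if merged_count.get(author, 0) >= n:
        return False  # repeat contributors skip triage
    if trusted & endorsements.get(author, set()):
        return False  # vouched for by at least one trusted member
    return True       # everyone else gets a first-pass triage
```

Under this sketch, a first-time contributor with no endorsements is routed to triage (whether by a human or, as suggested above, an AI first pass), while anyone vouched for by a trusted member goes straight to review.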
Core tensions
Deep integration vs rejection on moral grounds
On one end of the spectrum, some would like to see the Rust project enthusiastically embrace AI:
[I’m not proposing it because I know people have concerns,] but what I would like is for the Rust project to enthusiastically embrace AI as a first-class way of using Rust (not the only way) while acknowledging its flaws. Basically to make a statement that we are going to work hard to ensure our tools support AI agents well, build tooling that works closely with agents, design workflows that incorporate AI, and work to address efficiency/power-usage/inequity/accessibility/open-source concerns.
On the other end, some feel that any kind of “compromise” position on AI amounts to being an accomplice in immoral actions:
Offering a “live and let live” stance towards AI grants it a moral neutrality that it should not have. In this way, supporting developers who are users of AI is endorsing it. It implies that the human cost of AI is acceptable. I find that disgusting.
Neither position leaves much room for compromise: deep integration is incompatible with treating the technology as morally wrong, and treating it as morally wrong leaves no room for participating in spaces that endorse the technology by using it.
A middle ground would be to allow individuals to make their own choices about AI usage while not endorsing either direction on behalf of the Project:
It’s clear that many of us feel strongly about the ethics and morality of these technologies. However, I don’t think it’s appropriate for the project to take a stance on them - ultimately these are personal decisions where reasonable people will find themselves differing - I think this is important for the overall health of the project. I don’t intend for that to be dismissive of the clearly impassioned and considered conclusions that my fellow project members have come to. I think this is a healthy disposition for us to have on many issues - the Rust project ought to be a big tent, bringing together people from different countries, cultures, backgrounds, experiences to build an amazing programming language - and that will necessarily involve a toleration of differences on issues such as these. There’s always an invisible cost in taking these stances in the contributors who decided Rust wasn’t for them. None of that is to say that we shouldn’t be clear and definitive in talking about the negative impact of these technologies on the project and our maintainers, we should; or that we shouldn’t respect the wishes of each individual maintainer as to whether they’d like to engage with AI, we should.
“Supporting” AI vs “endorsing” AI
Building on the previous point, “supporting” AI users in ways that will make their AI tools work better (e.g., adding AGENTS.md) can help to improve the quality of PRs that we receive, but for many that would feel like endorsement or like taking away resources from humans:
If there is desire for some AI tooling or documentation around Rust I don’t want to see it gobble up resources from our project. We have more than enough human processable documentation to improve, and I keep getting told the AI can read that, too. Yes the resources may not exist if not meant for some AI stuff, but the work will consume resources from our project, even if just by taking up space in the discussions.
– oli-obk
As a meta-note, I (nikomatsakis) believe it is possible to set up docs in a way that benefits both, basically by having human-targeted docs and then AI signposts that point agents to the right place. But it takes effort to get things set up correctly. And this raises the tension: is accepting contributions from people who have put in that effort to “help” AI tantamount to “endorsing” it?
“Opting out from AI” vs “AI can help”
It seems natural that we should permit people who dislike the use of AI to opt out of using it themselves or of interacting with it in overt ways (including “by proxy” with people who simply feed their comments to an LLM). At the same time, several people cited the power of AI in categorizing and dealing with “unstructured data” such as comments and discussion, or in translating between languages. Is there room for the project to deploy AI itself in any form to help, and how can that be done in a way that still respects people’s right to “opt out”?