This is a capture of the hackmd contents. For more context on this document, see the Zulip discussion here
Rust Project Perspectives on use of AI
This is a place to gather up stories and data that will be used to try and create a doc that summarizes the perspectives of Rust project members about AI.
Feel free to add your perspective here or to link to it in a gist or a blog or a hackmd or something. Do not edit others’ contributions.
To add to the doc, create a section with your github username and include the following information:
- Relationship to the Rust project:
- If a team member, of which teams?
- If a contributor, RFC author, etc, to which projects?
- If you have a ‘serious intent’ to start contributing, say more about that.
- If none of those, this isn’t the doc for you. =)
- Whatever else you want to share
If you want to leave thoughts anonymously, reach out to either @nikomatsakis or @Boxy on Zulip. HackMD records who wrote which bits of text, so you can’t just edit the document to leave things anonymously :)
nikomatsakis
- Co-lead, Rust language design team
- Project Director
I’ve written a lot of blog posts on AI (e.g., bbs1, bbs2, and bbs3), but I want to briefly share my perspective from a more emotional level. If I had to pick one word for how I feel about using AI, it is empowered. This feeling, more than any other, convinces me that AI, for all its flaws (and there are many), is here to stay.
Suddenly it feels like I can take on just about any problem – it’s not that the AI will do the work for me. It’s that the AI will help me work through it and also tackle some of the drudgery. And certainly there are some areas (GitHub Actions, HTML/CSS) that would just stop me cold before, but which now are no issue – I can build those things out and bring them to life.
Don’t get me wrong, I get plenty frustrated with AI coding. I’ve had multiple times where I wind up throwing out the code it wrote and starting over, or just taking the wheel for a time to lay things out just the way I want them. But that’s been true for my own coding and for code that others write too. And often I wouldn’t have been able to do the rewrite without the AI’s first draft to work from (e.g., the time I built an electron app).
For things like the Rust Vision Doc or the Rust Project Goals work, AI is a godsend. It used to be that processing English text required some human to manually work through things, but that is no longer true. And yes, the writing can be bad; it’s again something where you do want to work with the AI, review it, help lay out the guardrails, and then you can let it go.
I’m pretty intrigued by the possibilities of AI tools for making more capable programs and scripts. The tool retcon that I made is a good example: it uses a combination of Rust and an agent to rewrite your git history in a sensible fashion.
This is not to say AI doesn’t have problems. It has huge ones, power usage chief among them, but also the way irresponsible usage is creating slop PRs (not to mention the way it’s being inserted in places it doesn’t belong, and the ways it will be abused by governments).
Those problems are real, but I don’t believe that refusing to engage is going to be an effective strategy here. AI is simply too useful and powerful to go away: it enables entire classes of things (like retcon) that were basically impossible before. The only way out is through.
To me, this means that we as the greater Rust community have to work to address those problems (not just us, obviously, but we have specific things to offer). For power usage, this is one of the goals of projects like symposium.dev, to build more efficient agents that deploy things like local and smaller models (I’d also, rather orthogonally, like to see governments getting serious about carbon taxes and capping consumption – I believe people will optimize, but they need a better incentive).
For slop PRs, I believe there are lots of steps we can take to address it, but it will take effort and engagement, not referring contributors to moderators.
The good news is that if we can leverage AI well, it can help us address a LOT of really hard, intractable problems. Some of them are technical, like being able to convert programs into Rust from other languages, or maintain cross-language libraries more effectively. But a lot of them are more social in nature, like helping people to learn Rust, or helping to bring new contributors in that were previously excluded by language barrier or by the “activation energy” to understand our (let’s face it) baroque codebase.
TC
- Lang, council, Reference, edition, etc.
TC said:
As we all reflect on this, something that might not be obvious is how much things have changed over the last 2-3 months. At one time, it was hard to justify the use of models for serious work. They made so many mistakes (and so confidently!) that, for someone concerned about correctness (and style), they felt counterproductive in most cases. It took more time to find and fix all of the errors than it would have to do the task oneself, and there was always the risk of being fooled by the confidence and missing something.
But the state-of-the-art models are now too good to ignore.
It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.
But we’re now back in the familiar world of “garbage in, garbage out” (rather than “anything in, garbage out”). In that way, it’s now come to feel like any other tool. One must still verify (and generally iterate on) the results. But those results are now worth the time to do that.
This rate of change and the relative inaccessibility of these top models are two reasons, I think, that it’s been difficult for us, as a project, to develop strong shared understandings and intuitions about model use. One has to see it to really believe it. And unless one is using the state-of-the-art models (which are expensive) and has put in the work, over the last 2-3 months, to use these well, one is likely operating on outdated information. Yet it’s understandable that most people will not have done this.
What do we do about that? One bright spot is that everyone in the Project has free access to GitHub Copilot Pro (sign up here: https://github.com/github-copilot/signup). That’s a great start. But we need more. For serious work on Rust, we’ll need to provide people (much) higher limits. We need to be engaging with the companies in this space. We need them to support the Project with these tools.
For my part, I think that’s going to be the solution. Discussion can only go so far until we’re all operating on the same information and experience. Making these tools widely accessible to Project members, and then telling the stories (cc @T-content) of how these tools can be used carefully and productively — in a way that lets us raise (not lower) our standards, and eases (not adds to) the load on maintainers — is what’s likely to get us there.
Postscript, having now read the thoughts of others in this document:
Effect of ubiquity: Some time ago, it occurred to me that, surprisingly, once everyone has access to these models, the value of posting summary comments (AI assisted or otherwise) probably drops. With these tools, we can all just create our own summaries on demand. These are always up-to-date, targeted for us, and focused on what we want. Plus, once an agent loads the context needed to write a summary, one can poke at that context — one can ask questions about it, get citations, use it to direct other work, etc. In that way, such personalized summaries beat any static summary generated by someone else.
In a similar way, it’s not helpful to maintainers for others with insufficient understanding and who put in insufficient work to use the tools to make contributions when the maintainers could just use the tools themselves.
Much of the frustration people are experiencing, I believe, turns on this.
Contributor affirmation: Maybe the above points toward an actionable solution. We need contributors to understand our expectations. Probably many contributors who are frustrating us today don’t mean to do so. Whether or not someone used an AI tool, we expect contributors to have self-reviewed a PR, to have a detailed personal theory about why it’s correct, and to own (in both a legal and accountability sense) every line of code submitted. Perhaps we take this for granted — we probably think it doesn’t need to be said. But maybe saying this in our PR templates would help. A step further would be asking contributors to explicitly affirm this.
Education: More broadly, these tools are quite new. Everyone is still figuring things out — both us and our contributors. We may need to increase our focus on educating our community to get more of what we want.
Documentation: The models put a huge premium on documentation. If it’s not written down, they don’t know it, and they won’t do it correctly. Ironically, though, what we need to write down to help the models matches what we’ve long needed to write down anyway. All teams are bandwidth constrained. The better that those who aren’t on the team can understand precisely what the team wants and will accept, the better for everyone. It often takes people coming into the Project a long time to learn these subtleties (but then they take them for granted as the rest of us do). In a sense, the models provide a way to test whether we’ve written down enough. In any case, writing down more helps us get more of what we want from the people in the Project, whether or not they’re using a model.
Verification: We can all agree, I think, that software verification is an unalloyed good. With generative AI, though, software verification becomes a superpower. If the models have a way to tell whether or not they have the right answer, that’s when they really impress you. Fortunately, the models can do software verification. They perform well at encoding questions in a way that can be checked by a SAT solver and at writing proofs that can be verified by tools such as Agda and Lean (as with anything else, one must verify the encoding of the problem statement is correct). This is a particularly enticing area to explore for value.
Review: In my experience, with engineering applied to how they are used, the models are surprisingly strong and careful reviewers. Even before the surge due to AI, load on reviewers has been a pain point for us. While it seems counterintuitive to fight fire with fire, that may be what we have to do. At least, it seems worth putting in some work to see how we can get value there.
Problems will be solved: Without question, the situation surrounding these models today is imperfect. But we can look back at previous disruptive technologies of a similar scale to understand that this is not unusual – e.g., the ’90s bubble and the “internet revolution” had more than their share of flaws and excesses. When there is real value, as there was with the internet and as there is with generative AI, many problems will be solved with time. Hardware will improve and use less power. Costs will come down, and until then, we can work to make the tools available to our contributors without cost. Open source, open weight models will continue to improve. Legal precedents will be set, and training data that needs to be licensed will be. As with the internet, social media, and other things that bring both good and bad, we will all learn how to make our peace and find a healthy balance. Since these models aren’t going away, it may be worth our time to start the process of finding this balance today – working to understand the good as well as the bad that’s brought by this technology – in spite of the problems.
gmarcosb (Google)
- Not currently a contributor
- Looking to contribute to async/await (Filed #152141, offering to help dingxiangfei2009 who’s a colleague)
- Have experience with LLM in Matter repo
AI slop & “AI agents” as github users submitting issues & PRs are the worst.
However, in the Matter repo I’ve found that LLM PR summaries + reviews are quite helpful.
I have heard from colleagues that Rust repo reviewer time is quite precious at the moment, and an LLM doing first passes + summaries could help lighten the load for reviewers. It could also help with pushing back on AI-generated PRs.
Here’s an example in our repo, #367.
If setting this up for the rust repo (at first simply as opt-in with /gemini review) is something people would be interested in, I’m happy to help.
(I’m not tied to Gemini; if people would prefer Copilot instead, I can help with that too.)
TC: We’ve been talking about setting up AI agents in CI for Reference work. See the recent infra thread. We’re not tied at all to any one provider, and we’d be thrilled if Google could support the work in this way. In terms of building trust and experience with these tools in the Project, it’d likely be best anyway to start off with a smaller repository (such as the Reference) rather than with rust-lang/rust.
scottmcm
- lang & compiler team; libs reviewer.
TL/DR: AI for summarization tasks is great; AI for generation just makes me think we need a modern version of Sturgeon’s law: “99.99% of everything is slop”.
Good:
- The AIs are often wonderful for researchy things. For example, I’ve had great success with “well I’m here and I need a Span; where do I get one?” kinds of questions. I can get to the right place in docs way faster using the AI when I’m doing something that’s outside of the parts I know really well. When the AI can direct you to a reliable source (like a rustdoc for the compiler) faster than looking yourself, it’s great.
- For repetitive things, like search-and-replace-but-way-smarter, I of course have no issues with that. It’s often less predictable than I wish, but that’ll get better eventually.
- I would love to use them to decrease maintenance burden in places where we don’t know what best to suggest anyway. For simple and direct fixes we should of course continue giving the specific one, but if we could move more of the “this isn’t often right anyway” ones to having an AI look at more context than we could reasonably code in rustc, that might be great.
Bad:
- I have no idea how to solve the “sure, you quickly made something plausible-looking, but it’s actually subtly wrong and now you’re wasting everyone’s time” problem. We get way too many PRs “fixing” things that don’t actually solve the problem, and those seem to largely come from AI and wouldn’t have existed without it.
- Said otherwise, I continue to think that the greatest threat to the project is its lack of review bandwidth, and LLMs are only making that worse, with no realistic prospect of making it better. (If the LLM could actually detect the real problems, it could avoid them in the first place.)
- I worry about the AI being an excuse to not do things properly. Microsoft, of late, has done “our output window spam in VS is impossible to look at as a human, but rather than trying to make it better we’ll just tell you to get the AI to interpret it” and it seems like they’re not offering basic refactors in VS (like “remove this unused parameter”) in favour of saying “make copilot do it”. Similarly, like how “design patterns” are often considered evidence of missing language features, I worry about the AI being an excuse for leaving things bad.
- Similarly, I too often see “oh I just made the AI write all the tests” code, which always seems to end up being “My RSI and bleeding fingers have hopefully appeased the testing gods and atoned for my previous omissions”-style tests rather than good ones, and end up looking exhaustive by overwhelming volume while actually missing the one interesting part and not being something that anyone ever wants to update later.
- I continue to worry about the “slop tipping point”. The AI is amazing at small things, but the more it’s the only thing the more out of hand things can get. I foresee a huge consulting boom in a few years when people find out they have no idea how their software works and they need something the AI can’t do.
- In the rust project itself we have a ton of code that can’t depend on type-based correctness the way lots of other code does. It’s incredibly easy to write something that’s not UB but nevertheless causes catastrophic miscompilations in mir optimizations, in mir lowering, in codegen, etc. Until we have something like Alive2 that could help to check the work in these places, I don’t trust that the AI can handle them since it doesn’t have the normal feedback mechanisms that make agent-mode often so good.
- Said otherwise, if companies – the same ones selling AI agents! – really thought that AI could understand nuanced safety-critical invariants, they’d keep writing C++, just with AI checks, and not bother with Rust.
kobzol
- Leadership Council/compiler/infra/… team member
I use agents (Claude Code) to automate boring/annoying stuff (refactorings, boilerplate code, generating REST API calls, etc.) or for understanding complex codebases and getting suggestions for how to do X. It works well when guided by a “senior” developer. On the other hand, interacting with people who just pipe LLM output into PRs without understanding or reviewing it, or who even communicate with me via an “LLM proxy”, is incredibly annoying.
I think that agents are becoming too good to ignore, and sooner or later almost everyone will be using them, whether they want to or not. I’m not particularly happy about that for various reasons, but it is what it is, and there’s no point in hiding our heads in the sand about it – both in the sense of figuring out how to operate in a world where both Project members and external people use LLMs, and in introducing clear rules that will allow reviewers and mods to quickly reject extractive LLM usage, which currently costs them a lot of time and effort.
Cyborus04
- Previously contributed a little to std ([T]::as_flattened).
- Looking to contribute to cargo to improve the user experience with alternate registries.
I am wholly against any use of LLMs.
I will not be arguing about the technical side of LLMs. I have not used them, nor kept up to date with them, so I do not know the quality of their output. I do not care how good LLM-generated code is. It could be more productive and proficient than the entire combined output of every human programmer, but it doesn’t matter to me. My stance against AI is an ethical one.
- LLMs are trained on stolen data. It seems to me that, given the amount of data needed to train an LLM, it would not be possible to train one comparable to current models on licensed data.
- LLM datacenters consume water in comparable amounts to that of a medium-sized town.
- LLM datacenters use so much power that it is undoing progress towards net-zero carbon. In theory they could run on renewables, but they currently do not, and AI power usage is growing faster than renewable production is.
- LLM datacenters are disproportionately located near already disadvantaged communities. They pollute the air and water there with a complete disregard for the health of the local population.
- Content moderation is traumatizing work that is put on workers in other countries who can be paid less and exploited more.
- LLM companies seem almost lustful over the idea of replacing human programmers with AI models. They are developing them with the hope of ridding themselves of the expense of workers. LLMs can’t unionize, after all.
- LLM usage encourages reliance on the companies that provide them, and discourages critical thinking and proper understanding of the code. Learning to code by using AI does not transfer to coding without one, locking you into that service and further cementing technocratic control.
Generative AI is an inhuman, fascist project that harms everyone and everything that it touches. Those creating it do not care of the human cost it incurs. It is a tool for control. Obscuring the cost of it is part of that; the easier it is to use it without having to think about that cost, the harder it will be to convince someone to not use it.
Offering a “live and let live” stance towards AI grants it a moral neutrality that it should not have. In this way, supporting developers who are users of AI is endorsing it. It implies that the human cost of AI is acceptable. I find that disgusting.
LLMs are not inevitable; “AI usage isn’t going away” is only true if we let it be. It’s as common as it is because companies have pushed it relentlessly. That is no reason to embrace it. The Rust project is big, so its stance on this matter will carry substantial weight. I argue there is a moral imperative to take a stance against AI and to have a part in making sure it does not become the status quo. We have the power to determine that future; we can’t afford to cede it to companies.
Sources:
- The Hidden Cost of AI: How Data Centers Are Straining Water, Power, and Communities
- The Health Divide: The AI data center boom will harm the health of communities that can least afford it
- ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Guillaume Gomez
Member of a few teams
LLMs are trained on stolen data and allow others’ work to be exploited without compensation. They are also a new kind of walled garden, where the best available LLMs are owned by a few companies that keep increasing their prices while shrinking the free “trial”, widening yet again the gap between those who can afford them and those who cannot.
The number of contributions generated by AI keeps increasing, but the quality of such PRs is much lower, and let’s not even mention the trust I put in them. Did the person who sent the PR actually double-check everything that was generated? Am I wasting my time reviewing something that was not double-checked by the person who sent it? All these questions drain my time and energy.
Nicholas Nethercote
T-compiler
I like coding. I don’t want to become an LLM shepherd.
A common refrain (seen above!) is “the LLMs have gotten so good”. Just this past week (February 2026) I have been reviewing a lot of recently-generated Rust code and documentation produced by the latest models. It was not a good experience.
For the documentation, at the sentence level it was very good, at the paragraph level it was good, and at levels beyond that it was terrible. Bad structure, repetitive, no sense of order or flow. Just feels like a random collection of related things. This project had lots of README files and the issues were even worse when looking across them; there were incredible amounts of duplicated information.
The code was very “uncanny valley”. Again, on the micro scale it was reasonable, but it got very weird beyond that. So many style inconsistencies (not just formatting), weird orderings, strange things that made me think “huh?”. It wasn’t even close to being code I would give r+ to, and I would have been embarrassed to put code like that in a PR of my own. And this was a codebase I wasn’t familiar with, so I was only doing a shallow review. Who knows what deeper issues might be found by an actual expert. I wouldn’t trust it at all.
Another common refrain is that LLMs can be useful, as long as you micromanage them and have enough experience to identify when they go wrong. Which is great if you are a senior developer with iron discipline. But that’s not many people, and human nature is to lose vigilance over time.
Another common refrain: “The LLM is amazing! I feel like a god, like I can build anything! (I can feel my coding skills atrophying in real time.)”
I think we are heading towards a world where there are codebases that are 90%+ human-produced, codebases that are 90%+ LLM-produced, and very little in between. Because the incentives are such that once you start accepting significant amounts of LLM output, you’ll end up accepting more and more over time. I don’t think the LLM-heavy codebases are going to be good.
Peter Naur wrote an essay called “Programming as Theory Building” that I like a lot. It argues that a program exists not just as source code, but also as mental models in programmers’ brains, and that the mental models are as important as, or even more important than, the source code. This is why programmers are not fungible. One programmer with a good mental model will be able to modify the program effectively; someone with a poor mental model won’t. Source code that has been abandoned by the original developers is in a degraded state; if someone takes it over they need to build up their own mental model, which may differ from the original author’s. Building and maintaining these mental models is hard work, and an enormous part of programming. So what does it mean to outsource all of that to an LLM? I can’t see it having a good outcome.
If people find LLMs useful for rubber-ducking purposes, fine. But I think the more that direct LLM input goes into Rust, the worse Rust will become.
Update: After I wrote this, this bizarre event happened, which got me thinking more. I think “Judge the code, not the coder” is an argument we’ll hear a lot in the coming days. There are a number of reasons why I think it is a poor argument.
- An open source project is more than just a codebase.
- There is a community of people around it.
- These people have a shared commitment to the project
- These people have a shared understanding of what the program does, and why. (This ties in with the Naur essay I mentioned above.)
- Drive-by LLM contributions do not contribute to these non-code aspects. They arguably even undermine them, even if the contributions are technically valid.
- For example, an LLM that fixes an E-Easy issue steals a human’s learning opportunity.
Finally, I reject the argument that “LLMs are here to stay, so there is no point resisting them”. In the broader world, maybe. But this is the Rust project. We, the people are the Rust project. We abso-fucking-lutely are able to decide how it works and what is acceptable.
Jubilee Young
T-compiler, T-libs-contrib, etc.
I have other thoughts about this, but my immediate one is that the last issue reporter I encountered that overtly used LLMs was rather hostile (see rust#151868). I told the reporter that their issue report missed key reproduction information, because it not only referenced tools we do not directly support (bazel), it also omitted the basic inputs those tools demand or a description of how they got them (surely bazel build, but with what MODULE.bazel?). They instead included LLM-generated “analysis” of compiler build outputs, which were so “summarized” it was impossible to work back to what they were trying to describe. To top it off, the report also pinged people who were completely unrelated to the issue for no obvious reason.
This was already rather stunning. Even terse reports usually say, “Oh, I ran cargo build and got this error” and include a relevant line of actual compiler output. Reporters almost always try to describe the problem such that another human can at least infer enough to replicate it. I have performed “psychic debugging” off less, yet this more was not enough, like food without nutrients, water without hydration, or just data without information.
After I and a few others commented on the issue, the reporter berated us for not using an LLM to reproduce their issue. They claimed this could have been done with a mere 30 seconds of our time. Not theirs, of course.
When a moderator responded, the issue reporter attempted to avoid the moderator’s moderation actions by opening a new issue (rust#152150). In that, they continued their churlish behavior, except worse, augmented by gloating that they had fixed the problem in their fork of rustc (no mentions, of course, of regression tests). This, of course, necessitated a moderator taking actions they could not avoid.
This is not particularly unique. LLM-driven issues and pull requests are often backed by defensiveness around how they got their information, whether it is code or an error report. Simple questions that should have simple answers are evaded so completely it requires an inquisition in response. It is exhausting to run them down on the facts, even when the initial report gives enough that you have them dead to rights.
This is apparently what we cannot ignore? Tools that encourage entitled, hostile, and anti-cooperative behavior? Well then, I indeed agree that we cannot ignore them, but I am not sure that is because we benefit from them. I despair, if this tendency is what we supposedly cannot resist, for I started participating because I saw in this group a desire to do better, to be kind and to care, and know not from where I would get the energy to struggle to do the same if others do not.
Jieyou Xu
T-compiler, T-mods-venue, etc.
The usual ethical and legal considerations aside, I want to provide some feedback as:
- A user of AI tooling
- A project maintainer/reviewer
As a user
For AI tooling, the only solid use case I have is searching (and esp. having Gemini and the like actually include the references). And even there, I only use them to find sources/links to read myself, as I find these tools hallucinate too much. For “trivial” topics, the accuracy is okay, but it immediately falls apart in deeper/more specific domains that you have in-depth knowledge about. I don’t use coding agents or AI-assisted completions etc., because I find it more exhausting and mentally taxing to have to act as a reviewer for code I didn’t write. Overall, it takes more time for me to coerce AI tooling into producing the code I want, plus reviewing and fixing it, than it does to just write the code myself. In other words, for me, it’s an overall slowdown. A secondary reason is that it’s really difficult to retain “deep impressions” or develop mental models of the codebase for code that I didn’t write myself.
As a reviewer/maintainer
Now, as a reviewer/maintainer, my experience with AI-generated contributions has been nothing but negative and frustrating.
The thing is, as a reviewer I shouldn’t be able to tell that those who responsibly use AI tooling actually used AI tooling, because these contributors would have manually reviewed and fixed problems in the content themselves.
The problem is that AI tooling (especially LLMs) substantially lowers the bar to generating plausible-looking contributions that, if you waste a few minutes of your life scrutinizing them, turn out to be complete slop. Let’s break this down into kinds of contributions:
- Issues (bug reports). I honestly suggest blanket-banning any kind of AI-assisted tooling for bug reports. I don’t mind grammar mistakes, broken English, or even the reporter using their native language. That’s easy to work with. However, I absolutely despise bug reports of the LLM slop flavor – a whole bunch of text that somehow doesn’t contain the information actually needed for reproduction. Even worse, some of these reporters include completely useless/wrong analysis that maintainers doing triage have to waste time looking through in case there’s something real in there. Triage takes maintainer time too; please do not underestimate this.
- I sometimes hear the argument “what about LLM-assisted translations?” Honestly, I’d rather the reporter just write in their non-English native language, because at least that reflects the reporter’s actual sentiment, and you can cross-compare translations (i.e., you have access to the “original”). Especially with, e.g., Mandarin, maintainers might actually be fluent in the native language! Whereas if someone just posts LLM-translated content, who knows what the original version was?
- Pull requests. This is even worse. As a reviewer (and venue moderator), I’ve seen too many PRs that never-seen-before accounts just vibe-code. When you ask the PR author questions or ask them to explain their changes, they can’t come up with a coherent explanation.
- There are some PRs that have mostly-reasonable changes, but the PR description is LLM-generated: verbose and often also wrong in many places. Again, I also suggest blanket-banning AI-generated PR descriptions; I’d rather the PR author write in their native language than have to wade through entire text walls only to find they’re basically wrong or even outright misleading. As a hilarious example, there have been quite a few PRs with whole walls of text for mere typo fixes. Please. Honestly, I’d rather they not write any PR description at all.
- It’s perfectly okay for contributors (esp. newer contributors) to be unsure about their changes. That’s fine. What’s not fine is generating changes the PR author is not sure about, not disclosing it, only for the reviewer to find out it’s plain wrong. I am absolutely not willing to serve as a reviewer for the LLM. Human contributors can grow and learn and might one day become maintainers themselves, LLM contributions are simply extractive changes.
- A few contributors even act as a proxy between the reviewer and the LLM, copying the reviewer’s questions and replying with LLM-generated responses. For the love of god, please.
- I want to emphasize that this is incredibly frustrating. This is the top contributing factor to potential burnout for me.
- I have had bad experiences with e.g. Copilot review tools. I tried a few times, and the only feedback they were able to provide was typos. Look, we have `typos` in CI for that purpose, without needing to burn so much energy.
- Proposals and conversations. I really, really hate reading proposals and conversations that are generated by AI because they always seem to have too much fluff and too little content.
- A special note: in the past, it took way more effort for a human to write than to read. Generative AI has substantially tipped the scales. LLMs make it all too easy to generate text, but it’s a real slog trying to review or read the generated output.
- Like code contributions, the bandwidth bottleneck is not in the generation or synthesis, it’s in the review.
- I want to hear what the human contributor thinks, not the LLM. If I wanted to hear what the LLM thinks, I could just ask the LLMs myself.
As for my opinion on allowing Claude or Copilot or other coding agents to generate e.g. rust-lang/rust PRs: I think that’s plainly unacceptable, and I find it disrespectful as a reviewer.
I also suggest that we, as a project, should define at the bare minimum some AI policies that help prevent maintainers/reviewers from burning out from having to deal with the DDoS of slop.
- Please consider especially the bandwidth imbalance: there are only so many maintainers and only so much review bandwidth, but there’s practically infinite LLM bandwidth out there.
- The cost is measured in humans. Once your maintainers and reviewers burn out, you’re not getting them back.
- Please keep in mind not only newer and potential contributors, but also your maintainers. Please make it not only easy to contribute, but also easy to maintain.
We have a compiler team policy Policy: Empower reviewers to reject burdensome PRs #893 that I co-authored precisely because the sheer quantity of such slop makes it not feasible to “explain nicely why we are closing your PR”. This policy allows compiler reviewers to close a PR that they find burdensome without having to provide elaborate justification. AFAIK, sometimes library maintainers also cite this for convenience.
For prior art, I particularly like:
- LLVM AI tool use policy, for its framing of extractive contributions and its golden rule: “Our golden rule is that a contribution should be worth more to the project than the time it takes to review it.”
I suggest such an AI contribution policy cover:
- Contributors are responsible and held accountable for their contributions. Just submitting AI-generated content without reviewing it themselves is absolutely unacceptable.
- The contributor must understand their contributed changes. They need to be able to answer questions from the reviewer about the changes.
- Contributors must responsibly disclose if a substantial portion of their contribution is AI-generated.
- Reviewers are empowered to decline reviewing or interacting with contributions (including proposals and comments) that are primarily AI-generated.
- Submitting slop results in an immediate ban.
- Piping reviewer/maintainer questions into an LLM then posting the LLM’s response verbatim is an immediate ban.
(We should of course make these policies upfront and obvious, the “proxying reviewer questions to an LLM” really grinds my gears.)
Otherwise, I’d rather blanket-ban AI tooling / AI-assisted contributions than enable or encourage even more slop with an overly weak AI policy, even if that catches out responsible uses of AI tooling.
As a side note, I don’t want access to any “advanced AI models” for my own contributions. I need less noise in the contributions that I review.
UPDATE(2026-02-20): Especially recently with the advent of stuff like OpenClaw and MCP and stuff, the sheer volume of fully AI-generated slop is becoming a real drain on review/moderation capacity.
Jayan Sunil
T-Triage
note: I’m not a member of any major team, but I have a strong interest in contributing to the Rust project, and I’m actively trying to do so.
Putting aside all of the (usually well founded) ethical, legal and environmental concerns about AI, here are some views of mine related to this topic.
Preface to (what I think is) the problem
There is no doubt that AI models are becoming, day by day, more capable of writing code. However, as @TC has said before me, they require careful instructions to actually produce decent code. As a project, allowing any sort of AI usage has severe implications. An official “stamp of approval” can often be the missing impetus that enables many people, who previously might not have pumped out LLM slop as contributions, to do so with less guilt. This of course doesn’t represent all people, but it represents a (somewhat) growing majority of people. This subset of developers has heavy overlap with another class of LLM-using developers, namely those who are particularly great exhibitors of the Dunning-Kruger Effect. AI for these users is akin to steroids for their Dunning-Kruger Effect: it boosts confidence while undermining the user’s competence. This is all not to say that using an LLM will make you incompetent; there are a lot of developers, experienced ones, who utilise LLMs to improve their workflow. The problem doesn’t come from LLMs themselves, but from how they’re used.
In an ideal world, where every LLM user reviewed the output well before submitting a PR or filing an issue, we’d be hard-pressed to spot the differences. But reality is quite different. We notice AI usage the most when it’s done wrong. When a developer blindly sends a PR with AI output, and responds to feedback and suggestions by piping them through the LLM, it becomes apparent that they rely on the machine instead of utilising it. This does indeed form a sort of survivorship bias, but there’s an important caveat. In this scenario, it is often only the ones that “survive” (i.e. get spotted) that pose a problem. This phenomenon occurs when the user develops a parasitic dependence upon the LLM, which handicaps them.
The impact on reviewers and maintainers
- Issues: LLMs allow users to pump out bug reports, which can be incredibly frustrating for the people triaging and responding to these issues. They present a wall of text, which often has no meaning, but is simply there because the contributor couldn’t be bothered to scrutinize it for even a minute. A distinction that we should make somewhere, like in the issue template itself, is that English is not a barrier to entry. I’d say that most maintainers would prefer to read broken English, or even decipher another language, than to read LLM-generated slop. A blanket ban on AI-generated issues (as suggested above by @memtransmute and others) would be (IMO) ideal.
- PRs: AI-generated code that hasn’t been reviewed by the prompter is incredibly frustrating. When the contributors themselves haven’t read their code properly, they’re unable to respond to feedback or questions properly, which often cyclically leads to them asking the LLM to interpret the feedback. The awfully cheery tone of LLMs, and their profound ability to write paragraphs of nothing, can burn out maintainers, which is pretty devastating to an open source project.
Case studies
- `curl` recently stopped their HackerOne bug bounty programme because “A bug bounty gives people too strong incentives to find and make up ‘problems’ in bad faith that cause overload and abuse.” Stenberg had the following to say in his post:
“The main goal with shutting down the bounty is to remove the incentive for people to submit crap and non-well researched reports to us. AI generated or not. The current torrent of submissions put a high load on the curl security team and this is an attempt to reduce the noise.”
- `tailwindcss` had to lay off 75% of their engineers due to the “Brutal Impact of AI”.
- The `zig` programming language has completely moved off of GitHub because of, among other things, GH’s aggressive AI push and stuffing Copilot down everyone’s throats.
What other projects have done
- https://rfd.shared.oxide.computer/rfd/0576#_llm_shaming
- https://llvm.org/docs/AIToolPolicy.html#extractive-contributions
PS: https://bytesauna.com/post/dunning-kruger is a pretty nice article about the Dunning-Kruger part.
Note: To my knowledge, no other language has first-class LLM tooling. That is to say, the existing tools are neither maintained by nor officially affiliated with the language developers.
dianne
T-compiler, T-lang-advisors
My primary objection to AI usage is on ethical grounds. As others have stated above, LLMs as they exist today are built on exploitation (of labor, disadvantaged communities, and the environment, at least). This isn’t unique to AI, of course, but given the opportunity, I think it’s worth not supporting AI proliferation. I wouldn’t expect the Project to take a public stance against AI, but I believe that official support for and/or usage of AI tools at the Project level would be a tacit endorsement. Regardless of whether AI usage is explicitly encouraged for Rust users, officially supporting usage of AI tools means publicly approving of AI tools.
I’ll also echo the observations that generative AI makes it very easy to waste maintainers’ time and energy, especially when would-be contributors are more confident in LLM output than in their own ability/judgment/understanding.
Nadrieril
T-compiler, T-lang-advisors
I concur with a lot of what has been said above. I’ll add that I don’t know what the future holds but today LLMs are not good citizens. We have to check every word they say for lies and misleads, they have no regard for future maintainability or code complexity (or even correctness often), they are incapable of humility when faced with something too complex for them, they of course have no staying power, suck at deep understanding, and anything they learn dies with them. This is simply not the kind of entity I want to collaborate with, even if it sometimes produces usable code.
Also, from limited experience playing with LLMs, “LLM generates the code and then the human reviews it thoroughly” is a sweet lie. Reviewing thoroughly is both more exhausting and a lot less fun than writing the same code (unless it’s boilerplate), and for LLMs made 10x more exhausting by the fact that I can’t trust anything. I have sincere doubts that anyone would do this every time.
All of that was said from the pov of a compiler maintainer. From the pov of a code writer, I’ve enjoyed LLMs for writing proc macro code because that’s no fun and not too correctness-sensitive. They’ve become quite good at writing Rust code lately. I do see the appeal, tho I feel my brain become lazier on any codebase where I’ve used them a bit.
I have very mixed feelings about this whole topic: how casually it lies, a distaste for a bunch of the culture in the vibe-coding world, a distaste for the worldwide circlejerk pouring trillions into this technology, an excitement for the insane potential we’ve started to experience, a distaste for the distortion/disregard of human values that this amplifies, etc. etc.
lcnr
T-types, T-compiler
I agree with Nadrieril and Jieyou Xu. My main concern is that LLMs break nearly all of our current ways to detect effort. This causes us to incorrectly allocate review and mentoring capacity.
I don’t mind people using them to write automation for one-off tasks. This is still somewhat risky. E.g. I recently tried to use an LLM to generate the initial CSV file when triaging a recent crater run, and the way it parsed inputs missed some of the crates from the report. I personally do not feel able to deal with the fact that I cannot trust them.
I don’t think “human reviews the resulting code thoroughly” works. Experimenting with the inline snippet in VSCode while working on the search graph got it to propose incorrect, but seemingly reasonable, comments. I even used some of them without realizing they were wrong. Unless LLMs get/are good enough (or the problem is simple enough) that thorough review is unnecessary, I do not want to endorse using LLMs to generate code or to contribute to conversations.
Using LLMs to get started with new areas of the codebase and to get an initial understanding of how some tool or component works is fine imo. I do worry that a lot of ways of using them cause you to lose (or never practice) a skill that’s valuable long-term; I like the perspective of https://morphenius.substack.com/p/tools-that-enrich-us here.
Also, I very much dislike LLMs due to vibes and the behavior of companies and people pushing for them. I think there are non-technical social reasons to distance ourselves from AI.
oli-obk
T-compiler, T-types, T-moderation
Using LLMs seems fine for doing small things like
- getting over the activation energy needed to start something
- “how do I do X again?”, where X is something like “create a ’static lifetime”. This may be context-sensitive on what you need to do, and if the alternative is grepping for other sites doing the same thing, you may as well automate that grepping; whether it’s worth the money, the CPU time, and the risk of being wrong, I’m fine leaving to the individual
- fuzzy finding tests for specific cases. we just haven’t gotten to a nice organization of tests yet. so I understand, even if it removes pressure to improve things
but, and this is a BUT, generating code that you don’t wanna write because it feels boilerplate-y means
- you proliferate the problem instead of becoming or supporting someone that fixes it
- you don’t know what was generated
- you don’t think about the code as you write it, so you don’t realize that there may be a derive for it, or that you should destructure structs exhaustively to run code on all fields, or that there is a cost to this at all (you didn’t feel the cost, after all)
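To illustrate the exhaustive-destructuring point above, here is a minimal sketch (the `Config` struct and `summarize` function are hypothetical examples, not from any actual project code): by naming every field in the pattern, adding a field to the struct later turns the function into a compile error until the new field is handled, so nothing is silently skipped.

```rust
struct Config {
    threads: usize,
    verbose: bool,
}

// Exhaustive destructuring: no `..` catch-all, so if a field is ever
// added to `Config`, this pattern stops compiling until the new field
// is explicitly listed and handled here.
fn summarize(c: &Config) -> String {
    let Config { threads, verbose } = c;
    format!("threads={threads} verbose={verbose}")
}

fn main() {
    let c = Config { threads: 4, verbose: false };
    println!("{}", summarize(&c)); // prints "threads=4 verbose=false"
}
```

Had this been generated with a `..` rest pattern or ad-hoc field access, the code would keep compiling unchanged when a field is added later, which is exactly the kind of cost you never feel if you didn’t write the code yourself.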
The Rust project provides a solid base for other projects to build on. We need to ensure we don’t erode that base. And imo any major inclusion and support of AI will lead to that irrevocably.
If there is a desire for some AI tooling or documentation around Rust, I don’t want to see it gobble up resources from our project. We have more than enough human-processable documentation to improve, and I keep getting told the AI can read that, too. Yes, those resources may not exist if they weren’t meant for AI work, but the work will still consume resources from our project, even if just by taking up space in the discussions.
Imo we should ban any and all discussions on AI tools or projects. Unless a feature can be useful without AI it should not be considered or discussed. These can live out of tree.
Team members using AI for Rust work seems ok, we have established trust already. But unfortunately I think we need to raise the bar for new contributions to be obviously free of AI. This is necessary imo for 3 reasons:
- reviewer sanity
- learning ability of new contributors (the AI doesn’t understand more by you using it)
- trust building (AI can emulate competence for quite a while)
All in all I treat AI like I treat genetically modified food:
- companies with a lot of money try to shove it into everyone’s hands and will absolutely harass people that aren’t even using it
- It’s not being introduced in a scientific way, so all arguments for it are subjective individual experiences (lots of science actually points the other way: https://youtu.be/tbDDYKRFjhk?si=AM5DPcJGeg_3ignp, https://agilepainrelief.com/blog/ai-generated-code-quality-problems/?utm_source=mastodon&utm_campaign=archive-reshare, https://www.youtube.com/watch?v=7pqF90rstZQ, https://social.coop/@cwebber/114013728608044306)
- Any long term effects seem to be put aside for various reasons like “it’s inevitable” or “the model will get better” instead of the concerns being addressed
Anyway, I’d love to hear from more contributors about their successful uses of AI on rustc specifically. We’ve had some, but so far they seem to fit my idea of what they are useful for (I still don’t wanna use them; I don’t enjoy using them, and I learn via chaotically diving into new code and reading it myself)
And wrt alienating new contributors: while my personal guess is that we alienate more people by supporting AI, various anecdotes about younger folk make me feel that guess is right
- https://mathstodon.xyz/@jonmsterling/115963602695257750
- https://wandering.shop/@susankayequinn/115885437418160578
Jana
T-compiler
There’s many people above who I agree with, like nadri, jubilee, jieyou, and lcnr. But I’m going to quote specifically Guillaume’s answer (almost ironically at this point; with attribution) to this to make sure it’s heard once more, and because I couldn’t have said it better:
LLMs are trained on stolen data and allow to exploit others work without retributions. It is also a new kind of closed gardens where the best available LLMs are owned by few companies which keep increasing their prices while reducing the free “trial”, increasing yet again the gap these who can afford it, and the others.
Any debate from this point seems completely off-topic.
Zalathar
- T-compiler, T-bootstrap
AI services are performing a de facto DDoS attack on Rust and other open source projects, by empowering their users to waste the time, energy, and goodwill of maintainers with unprecedented ease.
It is now very common to see superficially plausible PRs that, on closer inspection, turn out to have been submitted by people who have no real understanding of the changes being made, and show no indication of any curiosity to dig deeper.
In most cases, PR submitters conceal this context from reviewers. Whether intentional or not, that deception has tremendous real costs for the reviewers on the other side. It also has real costs for prospective contributors who are trying to do the right thing, because reviewers cannot sustainably continue to assume good faith.
RalfJung
- t-opsem, miri, t-lang-advisors, wg-const-eval
I will focus on the risks I perceive coming from LLM agents; there is no doubt that they also provide a chance to boost developer productivity but the companies selling them are already doing more than enough to talk about those aspects of LLMs. ;)
My biggest concern is that LLMs fundamentally shift the usual contract around code review: one reason we invest so much time into code review is to train new contributors, some of whom will hopefully stick around and become project members and reviewers themselves. If someone submits an LLM-generated PR they barely understand and then simply forwards all reviewer questions to the agent and the answers back to the reviewer, then effectively the reviewer is just doing agentic coding themselves, but with a terrible UI. I consider this a failure mode we should work hard to avoid. It should be very clear that contributors are expected to understand the code they submit1 and answer questions about it themselves. It should also be clear that dropping a huge feature in a multi-thousand-line PR is unacceptable; such PRs need to be split up into independently reviewable chunks (this is not new, but LLMs make it much easier for more people to write multi-thousand-line PRs without first understanding the usual developer practices).
In other words, LLMs can be great tools in the hands of experts, but using them too much too early can prevent a person from even becoming an expert.
So, as a first step, I think we should ask contributors to explicitly acknowledge, as part of preparing a PR, that they have either authored or reviewed the entire PR (including the PR description!) themselves and are able to answer questions about it on their own. Maybe we can also give guidance on how to write a good PR description (to counter the effect of LLMs writing extremely verbose descriptions with low information density). People can of course lie about this, but I expect many people just want to help, and even for the rest, this helps set a clear line.
There is also a concern of equity and power dynamics: LLM agents are expensive; embracing them means embracing a world where the bar for entry to programming and open-source contributions becomes even higher. Furthermore, due to the costs of creating such models, there are only a few companies offering them, concentrating a lot of power and control over people’s programming behavior in a few hands. They are centralized, proprietary services which leave the vendor in full control over who uses them when and where to do what – the exact opposite of FOSS, of empowering individuals. All this means we should be very hesitant to give the impression that we are endorsing LLMs and the way they are currently built.
That said, I would be interested in exploring LLMs for code review. Some Linux kernel folks apparently had good success on having LLM agents assist in review using very project-specific, carefully crafted prompts. Obviously this cannot replace human code review and approval, but if done well it could still help make reviewers more effective. It seems worth a try. However, we should be careful not to get into a situation where we have an unhealthy dependency on LLMs to keep the project running. I hear some of the open-weight models are getting fairly close to the big proprietary ones; using a self-hosted instance of those could alleviate some of the aforementioned concerns.
tomassedovic
- Program manager
- goals and content team member, point of contact for Rust for Linux
- participate (taking minutes, following up, scheduling meetings, facilitating conversation) in cargo, libs-api, lang, style, cpython, spec, FLS and council
My perspective:
This is a normal (in the sense it’s nothing magical or out of this world) technology. It’s a tool. It can do useful and good things, it can do harmful things. I genuinely don’t know which use is more prevalent.
I’m a hobbyist gamedev in the roguelike space and, among other things, this is clearly also a procedural generation tech and therefore inherently interesting and something I’d love to dig into and play around with.
To date, I’ve used it about 7 times: three times towards the end of 2023 to try the chatbot with a few things.
And then four times in 2025 specifically related to my work here: I’ve used it on top of my spell-checker to get suggestions for making my blog posts closer to the “Project style” – which TC had been doing manually before (it was him who experimented with this, found that it caught the same kind of things he would and suggested I gave it a go). For this case, it works really well, finding issues with my blog posts and reducing TC’s review and fixing time.
(I write the blogs myself and only use the LLM before submitting the PR as the last check)
But for me personally, what the technology does and doesn’t work for, whether and where and how it can be helpful is essentially irrelevant in the face of the way it’s being developed, run and used in the real world.
The energy demands result in increasing rather than reducing the emission of greenhouse gasses into the atmosphere. Coal plants that were slated to be closed are being kept alive. Large tech companies walking back their de-carbonisation commitments. This is absolutely unjustifiable.
In my opinion, this is a fatal flaw and everything else comes as an incredibly distant second. If this technology cannot be developed and operated without making an existing global calamity worse, it should be stopped until that’s resolved.
But I have other issues, for example:
- The tech being used as an excuse to fire large swaths of people – not because they are being replaced by the generative AI, but because this can shift the blame from the execs doing the firing to a new technology and “market forces”
- The financials appear to be on shaky ground – the hype surrounding it, the growing bubble, the amount of money put into it, how a huge amount of the world’s finances is tied up in a handful of AI companies, and, if they falter, the impact this will have on the world’s economies
- People being pressured to adopt these tools with their AI use being evaluated as part of their performance – not the quality of work, but whether and how they’re using these
- The training data for these models has been obtained through piracy on an unprecedented scale – with consequences being nothing but a tiny fee the companies paid after being sued
- The incessant indiscriminate scraping that’s affecting the viability of running websites
- spam/slop at an inhuman scale
None of this is denying that when people find uses for it, these can be really valuable!
I don’t have any fundamental issues with the tech itself (when its strengths and weaknesses are understood by its users), but the way it’s being intentionally inflicted on the society by a handful of powerful people who are incredibly suspect makes me feel it not worth using to improve my projects.
I think the same calculation holds when applied to the whole society.
I don’t blame people using it – so long as they own (not in the copyright sense as the output is in public domain, but in the sense of putting their reputation and expertise behind) whatever they produce and bring to Rust, that’s fine by me.
I’m not against Rust integrating with the tech or taking funding. But this should be considered thoroughly, and not just on the basis of “it works really well now!” and “it’s not going away”.
As of right now, the technology is inextricably intertwined with a whole lot of serious planet-wide issues. It’s not an unalloyed good and opposing it doesn’t make one an out-of-touch crank (likewise though, I don’t think anyone using it should have a target painted on their back).
Turbo87
- crates.io team
I’ve been using LLM agents since around mid-2025, and they’ve been genuinely useful in developing features, fixing bugs and analyzing data. I basically treat them as a tool (fancy auto-complete) and always review and polish the output before submitting PRs.
I see AI as a multiplier. If you produced high-quality work before, you can now produce even higher quality in the same time. If you produced garbage, you can now produce garbage at scale. How to deal with the latter is certainly an unsolved problem, but it shouldn’t prevent us from benefiting from the former.
I think banning AI contributions in general won’t work in practice since people would simply not disclose their usage. I’d prefer some kind of AI disclosure policy instead, similar to what e.g. renovatebot uses (no AI / minimal assistance / substantive assistance / other). This lets reviewers decide for themselves whether to apply extra scrutiny or ignore the PR.
The PR review workload also varies quite a bit across teams, so maybe we even need different policies for different parts of the project?
apiraino
member of: T-compiler, T-surveys, T-mods
I agree with many of the above talking points against endorsing LLMs and my list will mostly repeat them - to underline my support.
I will group my concerns by topic. Everyone is free to take into account the group(s) they think are relevant to the discussion.
TL;DR Even excluding the non-technical concerns, this tech looks to be still immature, and I would rather wait another - say - 8/12 months for this bubble to pop and then see what good is left to endorse. Publicly endorsing this technology today would create a ripple effect, amplifying the second-order effects on the Rust project.
Technical concerns impacting the Rust project
- I don’t review PRs but I see how much time is spent on separating the wheat from the chaff. “Is this AI-slop? Or is this contributor just unfamiliar with our project?”. LLMs open the door to doubts and additional review churn when a PR “looks sus”.
- As a (human) moderator, I have to fend off people abusing these tools while upholding the Code of Conduct, being courteous, and helping the Rust project stay a welcoming place. This means that every moderation action must be well thought out. This takes a lot of time.
- We are a FOSS project. I argue that it’s not our place to endorse a proprietary SaaS that works on a per-request basis; see this tweet from when Claude went 500 the other day.
- “We can’t have nice things”: as much as I would love tech to help2, there will always be a number of people using it for their own (legit or malicious) gain employing the least amount of effort and completely disregarding second-order effects. We are on the other end of this mass of ever-increasing low-effort users. Some projects need to go the nuclear way and close the doors to contributors.
Other technical/FOSS concerns
- LLMs are not a democratic piece of tech: the cost of developing these tools is only for deep pockets, with the result of creating monopolies. “FOSS” LLMs are subpar and the gap will only increase.
- Besides the obvious noose around the neck of people using them, LLMs put at a disadvantage those who don’t (or cannot afford to) use them. Students need to pirate Adobe Photoshop to learn a tool that the market requires and then pay for it when they have an income. LLMs cannot be “pirated” (AFAIK). This is against our ethos as a FOSS project.
- The way that some AI companies are unscrupulously scraping content without any consideration of bandwidth costs for the hosting server (I own one)
- “It’s here to stay” and “use it or be left behind”, as arguments from companies having a vested interest in selling their LLM tools, put undue pressure on FOSS developers, especially newcomers.
Legal concerns
- Unsolved copyright issues with no solution in sight. The current thinking is to assume good faith and hope for the best. For a FOSS project, this is a huge Sword of Damocles.
- Corollary: I argue that writing FOSS software relying on this kind of proprietary technology is not really writing FOSS (this is not the same as using IntelliJ or Sublime Text).
Ethical concerns
that in a previous discussion were dismissed as not relevant but I believe they should be part of the conversation:
- Environmental sustainability. Training these LLMs seems to be expensive in terms of hardware and electricity bills. There is an ongoing arms race to get hold of big sources of electricity.
- Exploiting underpaid workforce to clean up datasets
- Companies selling AI-tools fueling “fear of missing out” and “losing jobs to AIs”
- Dubious market practices from OpenAI (caused a worldwide RAM shortage with ripple effects on the market)
Tshepang Mbambo
member-of: rustc-dev-guide, fls
If we exclude those whose meals depend on using this tech and have no other reasonable alternatives, using this tech implies that you value convenience over all the (well-documented) harms, and my most generous interpretation is that you believe it’s worth it… the benefits are larger than the harms, and not just for you, but also for the world in general. Am not convinced.
I mean, it’s not helpful that we have very little funding for better documentation, migration tooling, better tools, more and better standards and compliance and interoperability, better search engines, and whatever else can reduce the effort and pain we have to go through in order to build useful things (as the software industry). Imagine how far we would go if a single percent of the money going into building data centers and related things was used for those good things, including less harmful AI tech (see https://www.dair-institute.org). If you are reading this, you likely lack the power to change society at that level, of course, but maybe it does help to stop giving legitimacy to horrible things that are workarounds for the failures of the tech industry.
I really hope the users and promoters are convinced it’s worth the harms, as compared to just not caring enough about the harms. I also hope I am wrong, because if not… well, let’s just say things look bleak from these eyes.
blyxyas
Member of: T-clippy
I’m actively against the use of AI because it’s trained with stolen data, it poisons our water, and even with its limited use it has gigantic energy requirements (mostly covered by fossil/coal-based energy sources).
Apart from the absolute facts (such as the legal problems that AI incurs), we could also talk about its impact on our critical thinking and how it’s causing problems in other FOSS projects due to issue reporters relying on AI instead of technical knowledge.
Apart from that, personally I’ve used all models currently available extensively, and even the more complex ones such as Claude Opus 4.5 don’t match the necessary skills of a programmer, while consuming vast amounts of energy and sounding very confident. Even further, I’ve used the Copilot Pro IDE extension for about 3 months, and actively noted my programming skills deteriorating in real time. After those 3 months, I couldn’t review as fast as I previously could, nor implement new features or fix bugs with the same mental clarity.
I’m of the opinion that AI reviews are not to be trusted, and AI-generated code shouldn’t even be taken into account.
waffle
Member of: T-compiler
TL;DR: AI/LLMs have measurably made my life harder and worse, while not contributing anything in exchange.
LLMs remove the barrier to contribution that was once present. At first thought that might sound nice – “more people can now contribute to Rust!” – but in actuality it is the opposite of nice. Without the barrier, open source projects are subject to much more spam and low-effort contributions. LLMs throw off the balance between the effort of the change author and reviewer –
…
davidtwco
Member of: t-compiler (co-lead), project directors
First and foremost, it is clear that project members are dealing with the significant negative externalities of AI. This burden is not shared evenly; it is disproportionately affecting our moderators and reviewers - already underfunded, overworked and at risk of burnout. Everything that Nicholas, Jubilee, Jieyou, Nadri, lcnr, Oli, Zalathar and Ralf have said above does an excellent job of communicating the very real human cost that we’re being forced to bear due to the release of these technologies.
It isn’t just our maintainers; another negative externality of AI is the dramatic cost increases in hosting releases and crates.
AI is changing - has changed - the social contract for open source. We can no longer default to trust; the effort in making a contribution isn’t the useful signal it once was. We’ll need to adapt to this new reality, change our expectations of contributors, and build new tooling to support our reviewers and moderators. I anticipate this involving anti-spam filters much like email, or explicit web-of-trust-style endorsements.
Nevertheless, the genie cannot be put back in the bottle: billions of dollars have been invested in these technologies and we’re stuck with them. They’re not going to go away. Fortunately, not everyone who uses these technologies is doing so irresponsibly, as demonstrated by our colleagues in the project who report success in using AI. As the work of the vision team has found and as Niko has said, our users are using AI to learn and work with Rust, and I find the argument that we should meet these users where they are and help them succeed with Rust, regardless of their choice of tools, compelling. I don’t believe that this amounts to an endorsement of these tools, merely an acknowledgement of the world we now find ourselves in (however fortunate or unfortunate, depending on your perspective).
It’s clear that many of us feel strongly about the ethics and morality of these technologies. However, I don’t think it’s appropriate for the project to take a stance on them - ultimately these are personal decisions where reasonable people will find themselves differing - I think this is important for the overall health of the project. I don’t intend for that to be dismissive of the clearly impassioned and considered conclusions that my fellow project members have come to. I think this is a healthy disposition for us to have on many issues - the Rust project ought to be a big tent, bringing together people from different countries, cultures, backgrounds, experiences to build an amazing programming language - and that will necessarily involve a toleration of differences on issues such as these. There’s always an invisible cost to taking these stances: the contributors who decided Rust wasn’t for them. None of that is to say that we shouldn’t be clear and definitive in talking about the negative impact of these technologies on the project and our maintainers, we should; or that we shouldn’t respect the wishes of each individual maintainer as to whether they’d like to engage with AI, we should.
Furthermore, it’s not outside the realm of possibility that as a project we could attract maintainer support funding from AI companies (who use Rust quite a bit, as I understand it). Despite the valid objections many have to these technologies, and especially the companies behind them, I would want us to take their money to support our maintainers.
Personally, I don’t use AI to generate code; like Nicholas above, I don’t want to be an LLM shepherd (at least not entirely) - something about interacting with these models to code is inherently frustrating to me. But like Scott above, I do find them valuable for research-y things. We have some internal AI tooling at Arm that makes searching our 10,000+ page architecture documentation much easier, and I find that exceptionally valuable - it makes it a lot easier for me to respond to issues upstream promptly. I share some of the ethical and moral concerns of my colleagues in this thread, and disagree with others. I don’t think our being an open source project has a significant bearing - positive or negative - on whether we ought to support users using proprietary tools or platforms, or whether we ourselves ought to leverage them (other considerations are more important).
Our priority has to be supporting our maintainers and finding solutions to the new challenges we face; discussions around the potential opportunities of AI are premature while these remain unresolved.
Pete LeVasseur
- t-content, t-fls, vision doc
I’ll admit that I was a skeptic for a long time of how effective these LLMs could be at generating something workable. I’d seen someone I know back in 2024 spamming a codebase at work by typing some comments and then letting Copilot rip. He was roundly mocked.
On the other hand, it’s hard to overstate just how much of a “you have to see it to believe it” experience there is with new models and ways of interacting with them through the Human Machine Interface (HMI) of harnesses.
I find that using these frontier models + harnesses allows me to more effectively contribute to open-source and the Project than without them:
- I find that they are helpful for pulling together a series of resources from the Reference, from the FLS, from other bits of the ecosystem like the Unsafe Code Guidelines, to give me a “first best pass” at what changes should be made to the FLS upon a release of the stable compiler for a given item on the release notes. I can then review the cited sources and make sure it is coherent. (For context on the FLS: adoption announcement, Rust Foundation announcement; I’m championing the 2025H2 Project Goal to keep the FLS up to date.)
- A long-standing issue in the FLS is that the glossary and the chapter contents are maintained separately. This causes maintenance slips where the definition in the glossary and the usage of the term (essentially still a definition) are subtly different, causing a bug in the FLS. Once the directive format was nailed down, the agent was able to progressively, bit by bit, migrate all the glossary into the chapters to have a single source of truth from which the glossary can be generated wholesale. Speaking personally, this is definitely the kind of thing we would not have gotten to, since it’s such drudgery. It has not landed yet; it is being reworked to come in phases that are easier to review.
- The Safety-Critical Rust Consortium (GitHub, arewesafetycriticalyet.org) is making coding guidelines (rendered) with the initial mandate of “they work for safety-critical”. But we’d also like to write down some of the best practices that are not captured elsewhere. (See also: What does it take to ship Rust in safety-critical? on the Rust Blog.)
- Problem: this is a large undertaking and reviews were being borne by a small subset of the Coding Guidelines Subcommittee Producers.
- Solution: Have an LLM build a bot that pulls from our canonical listing of Producers and pings them for review.
- Problem: driveby contributors are not super-likely to know or want to learn reStructuredText.
- Solution: Have an LLM write an action that fires when someone creates a coding guideline issue and creates the rST from the Markdown in the issue.
- Eclipse uProtocol is an open source project in the Eclipse Software Defined Vehicle Working Group. I worked on it when I was at General Motors (notably up-rust, up-streamer-rust, and up-transport-vsomeip-rust), but they appear to have backed away from it for now. I was unable to find the time to contribute meaningfully to it for the last year or so. I can now update some of the modules and get them shaped up for better maintenance and features by using these agents. (example, example)
- In t-content we wanted to have a nice presentation of an interview we had done. We got feedback that having a transcript is nice. I used Whisper + diarization to separate the two speakers. I then used an LLM to create some transcript tools which allowed me to sort, reorder, and delete the text. Once I had that rough transcript, I cleaned up around 1/3 of it manually. I then had an LLM clean up the remaining 2/3 of the interview transcript, giving it the full original and the 1/3 that I’d done. (Published result: Interview with Jan David Nose on the Rust Blog.)
- In t-content I found that in our first outing to RustConf 2025 I managed to improperly wear the lav mic, which made for scratchy sounds that were really, really hard to remove manually. I also managed to wear the lav mic wrong in more ways than one, as I had it pointed towards the other speaker. Using Auphonic, an ML/LLM-based tool, I was able to eliminate the mic bleed and the scratchy lav mic issue and end up with clean audio, salvaging the interview.
marcoieni
member of T-infra.
AI increases my productivity
I use AI to help me brainstorm, troubleshoot, write code, review PRs and write documentation. I don’t merge code that I don’t review myself.
I’ve been using AI since GitHub Copilot was released. However, since Codex 5.2 and Opus 4.5, I noticed the quality of these tools improved significantly, and my productivity did as well.
E.g. if I want to write Infrastructure as Code, I can describe what I want to achieve in natural language, and the AI will write the terraform code for me. Then my job is reviewing this code, making sure it’s correct and secure.
Environmental impact
I’m conscious of the environmental impact of these tools, and I try to use them responsibly, in the same way I use my AC, robot vacuum, or car.
My role consists mainly of making the Rust Project more sustainable and improving the developer experience of the Rust community. I feel like what I achieve thanks to these tools justifies the waste of resources caused by me using them.
For example, sometimes I use these tools to optimize the usage of our cloud resources, having a positive impact on the environment overall.
At the same time, I respect people who don’t use AI tools for ethical reasons.
Human review is the bottleneck
AI tools lowered the cost of writing code and text in general, which resulted in more PRs and issues being created (with varying quality). This increased the amount of human review needed.
We should invest resources in helping reviewers in every way possible. Some random ideas:
- Being able to filter PRs by project members or people who already contributed N times, so that reviewers can decide which kinds of PRs they want to review.
- Have AI do a first review of the PR to spot some issues automatically, so that the reviewer can save time.
- Have AI do a first triage of the issues (to be confirmed by a human).
- Ask the author of the PR:
- if/how they used AI to write the code (as Turbo87 suggested)
- if they want to become a team member long-term or if they just want to make a one-time contribution.
Conclusion
AI made parts of our job harder, so as a member of the infra team, I’m interested in how AI can simplify our job as well, improving the productivity of Rust Project members who are ok with using AI (hopefully improving their work-life balance as well).
Ben Kimock (saethlin)
Member of: t-miri, t-opsem, t-compiler
I think the current AI bubble is driving a social dynamic where the technology is oversold by those with power to those with less. The end result of economic bubbles is usually wealth transfer to the rich, so it is very troubling for a language intended to empower developers to sign on to this dynamic.
When used with sufficient care, a developer using AI is indistinguishable from a developer who catches on to new topics very quickly. Or a developer who is very skilled at large refactorings. In my experience, most AI use is not done with sufficient care, though I have seen it yield great results on occasion. I am not just referring to my experience with AI in Rust, but also my (more extensive) experiences with AI at my day job. In practice, I see current state-of-the-art AI tools often used as a bad idea amplifier. They can be used with little understanding to create an output (a program, a summary, a diagram) that looks like it is much better than it is.
The major LLM-based tools have improved over the past months/year, but still require an incredible amount of hand-holding. I have tried to use them. I have tried very hard to make them work, in consultation with serious AI enthusiasts. For generating a demo I find them slower than making a Figma mock-up. For refactoring, I find them significantly worse than stock-standard IDE features. For implementing new features, I find them slower in wall time than implementing the feature myself. For debugging, I’ve found they are rarely better than a rubber duck (Claude in particular seems to like guessing that single-threaded code has race conditions). The only way I can imagine being productive with them is running many sessions at once and constantly switching between them.
It does not seem like the legal and social problems associated with the current AI boom are being addressed, and in the current environment I do not see any reason they would be. The ethics in this space are so amazingly bad, and the technology is not that good even when it is made free (well, it’s free for the moment). Nearly every piece of software is trying to trick me or outright force me to interact with an LLM, which is truly perverse behavior for a technology with such a high backend cost. Is this really the space Rust should be associated with?
Adam Harvey (LawnGnome)
Team: crates.io
Rather than rewriting what others have already written better than I would have, I’ll note that I basically agree with the IP concerns noted by Guillaume Gomez (among others), and the externality concerns articulated well by David Wood.
One concrete negative externality I’ve observed is that the velocity of crate publication has increased significantly in the last 6-9 months. I can’t prove that this is caused by people publishing crates that contain wholly- or substantially-LLM-generated code, but it’s hard to ignore the correlation with the increasing availability and use of LLMs for code generation.
Personally, while I’ve been willing to lightly use LLMs for summarisation and rubber duck purposes, I am unwilling to use them for generative purposes while their knowledge is trained on IP of unclear provenance and their output may infringe on the rights of others. I can’t imagine that changing in the current paradigm. I also agree with Nicholas Nethercote that, on a purely personal level, I have little interest in becoming an “LLM shepherd”, to steal his phrase.
Benno Lossin (BennoLossin)
- Rust contributor for field projections, in-place init
- Rust-for-Linux core team member
Every time I’ve used AI for something serious that I work on, I have been disappointed. Until about 2-3 months ago, the produced output was utterly useless at first glance. Now it has improved quite a lot, but sadly in the wrong direction: instead of looking bad at first glance, it requires significant effort to dig through and spot the subtle mistakes. This is worse than before; instead of just being able to ignore all the output, I now feel like it actively tries to deceive me.
I’ve had some success with using it for double checking what I did, making it ask questions, which – while dumb – made me explore the correct idea. It also works great with searching (so asking for a link with the actual information) or asking about “how do I do XYZ in the rust compiler”. I’ve also used it for writing simple scripts, where everything can be easily double checked. I won’t trust it with anything important.
Personally I have huge privacy concerns with uploading any private information, so I’ve also only tested with open source work. I did try local models, which are significantly worse.
In general, the Rust project cannot really do anything about the global state of AI. The cat is out of the bag and it won’t go back unless the bubble pops and the entire industry discovers that AI is not sustainable. Several people here have expressed opinions highly in opposition to AI for ethical reasons. These are all valid and yet for the reason of inevitability, we cannot bury our heads in the sand and hope it just vanishes. For this reason I think it’s necessary to think and talk about AI on all levels of the project.
In my work as a Rust-for-Linux team member and reviewing code on the Linux kernel mailing list, I’ve seen some AI contributions. They’ve been generally bad. However, I have also seen some of my colleagues utilize AI for writing changelogs, analyzing error messages or debugging test outputs. So figuring out if there is something to be gained by using the tools seems like a good idea.
That being said, AI must not be forced upon users in any way, so any UI elements (in the web and on the command line) should be opt-in by some configuration option. I wouldn’t want to wake up to an “Explain with AI” button on docs.rs. Making Rust more usable by/with AI is fine, as long as it doesn’t reduce the effort in making Rust better for humans. Writing better docs for humans that happen to also be great for AI is good. Overall the Rust project should be wary of spending too many resources on AI. Supporting people should be the focus, while AI is just another tool that deserves documentation and quality of life. If those people are using AI and need support, then that is a different story. One area that might be very valuable might be standardization of tools or protocols.
I personally wouldn’t approve of the Rust project adopting an AI positive stance. At the same time, I also wouldn’t approve of an AI negative stance. I think it should strive for neutrality and leave the exact contributions to the people making up the project. To put it as Linus Torvalds said “It is just a tool”.
Lastly, I don’t think we need to strive for unity in this regard. It’s fine for the community to be split about tools; programmers have had editor wars, argued about which operating system or distribution is better, and also which programming language is “best”. AI is just another entry in that list.
Ubiratan Soares (ubiratansoares)
- Rust Foundation | T-infra
(Gen)AI is a technology achievement full of contradictions, and personally, I still have mixed feelings regarding it. I find it useful in some places, useless in other places. I struggle with ethical concerns as well. I want to understand what the tech can do, but I want to respect human work too. Some days it feels really hard.
That being said, I’d like to bring attention to one topic that seems important: copyright claims.
According to US law, AI-generated work is not eligible for copyright protection. This is the default in other countries and jurisdictions as well:
- https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf
This legal fact has profound implications for open source. From the document:
The AI Guidance states that authors may claim copyright protection only “for their own contributions” to such works, and they must identify and disclaim AI-generated parts of the works when applying to register their copyright.
I’m no lawyer, but from what I could learn so far, the more an OSS project embraces AI contributions, the greater the risk of not being able to enforce its license terms (like attribution). Eventually, every contribution a project cannot prove was delivered by a human is a potential issue.
Unlike SQLite, Rust is not licensed as “public domain”. Different licensing terms have different consequences. Right now, I’d say that accepting AI-driven contributions puts us in a gray area. We need to ask ourselves whether this is a place we want to be.
I believe open source needs new licenses designed with this new reality in mind. Eventually the broad OSS community will figure out a path ahead. We are not there yet.
Clar Fon (clarfonthey/ltdk)
- Mostly contributor, not on any teams
- Been paying attention to the project since before 1.0
I have a very nuanced take on Machine Learning on a larger level which I’m not going to dump here, because I think that it’s a very personal take, but not necessarily the correct one. But it feels abundantly clear for LLMs and all of the technology that spawned out of them that they are a toxic technology to support.
Here are just some of the gifts the “AI” industry has brought us:
- An active DDoS campaign against most of the web, fueled by incompetence and malice. Companies are using malicious software installed on random users’ devices to ensure that they have residential IP addresses, so they cannot be blocked. They also are not scraping information in a respectful way: many sites, like Wikipedia and OpenStreetMap, offer daily dumps of data that can be downloaded easily, but these bots are not using daily dumps, and are instead hammering API endpoints and bringing entire sites down. In addition to the IP address nonsense, they also do not respect robots.txt or other notices and explicitly change user agents when noticed to evade countermeasures. They do not respect anyone hosting a website.
- Have started at least one war and almost started a genuine World War III. The US occupation of Venezuela is explicitly stated as being for oil, and one thing that AI companies have not stopped mentioning is their hunger for power. Satya Nadella constantly brags about how they have data centres full of hardware not being used because they don’t have any way to power it. The near-war on NATO for Greenland is also very explicitly about data centre space and power, and the fact that the industry is willing to even consider a third world war over this technology is a reason to completely discount it.
- Have fully stripped environmental protections in favour of Power Now. In the US especially, because of how much these companies are desperate for power, they’ve tried to bring coal, natural gas, and other dirty sources of power back full force just for short-term gains. These have devastating effects on the climate and environment and I know that for this point, I hopefully don’t need to litigate why it’s bad.
- Have pre-reserved a large portion of the computer hardware industry specifically for building data centres, at the expense of everyone else. In particular, there has been a massive push for high-bandwidth memory and stacked memory, both of which have substantially lower yields for diminishing returns. To me, “wasting silicon” isn’t really a thing (I mean, it’s dirt, effectively), but the real issue is that they’re taking up a large portion of the hardware manufacturing capacity of the entire world and making the ability to own any computer difficult for the rest of us.
And I’m separating this last one out because it personally affects me: have fully embraced discriminatory hiring practices. A lot of LLM tools are explicitly biased because they’re trained on biased input, and for most tech companies, this is seen as a benefit of the technology. I personally have a blog post detailing my experience with this, but for the short version (featuring some extra stuff not mentioned there):
- Many companies use résumé-scanning tools that try to guess at a person’s competence at a given job, and I’ve found that at least one of these is directly following the usual LLM pitfalls we’ve found in recent research: it’s very sensitive to unrelated parameters like names (how convenient, that names are a great way to discriminate!) and does not seem to indicate any substantial understanding of the reference material. In my case, I have a version of my CV that lists my full job history similar to a site like LinkedIn or Indeed and a version that has been stripped down to a single page, and according to scanners, the person with the longer CV has less experience than the person with the shorter one, even though they’re both me. Companies are relying on this for pre-vetting candidates, meaning they don’t even talk to you if you don’t pass this test.
- For more targeted scanning tools, as mentioned in the blog post, I have direct evidence that these tools are so bad, so biased, or both that they can’t even scan for keywords in job experience at all. The example I mention is specifically for a position in Rust, many of the keywords are directly listed on my CV verbatim, and the person said that the conclusion was that I didn’t have the requisite experience.
Like, I agree that other reasons, like the training on stolen data and the absolutely abysmal conditions inflicted on the people processing the data, should be reason enough not to use the software, but I figure that I should highlight the direct and ongoing damage that supporting this technology is doing right now. This is not the usual “we live in a society” kind of nonsense: this technology is creating unprecedented amounts of suffering at an exponentially increasing pace, and supporting it directly contributes to that.
Rust claims to be about empowering developers, and I have seen this term explicitly used when describing AI as well. I do not think it could be further from the truth. This technology empowers a few and absolutely destroys several more, and we should not ever support it.
yaahc
Just to start, I strongly agree with everything Nick Nethercote said as well as cyborus’s points on the ethics of AI. I’m not going to reiterate those points.
I also appreciate Jayan’s point on the Dunning-Kruger effect and how these LLMs don’t necessarily have uniformly bad outcomes or intrinsically make people incompetent. I had been struggling with some cognitive dissonance, where I see people I deeply respect finding value in these tools while at the same time finding 99% of the value people claim from these tools to be all smoke and no substance, and wondering whether that is the case with people like Niko. But from Jayan’s point I can see how inputs and the way these tools are used can still have an impact, which could cause people like Niko to have better outcomes vs random people with no engineering background trying to use these tools.
Personally, I’ve generally tried to avoid using AI as much as possible and have been opting to focus on listening to the experiences of engineers who I personally know, trust, and respect to see when AI gets to the point where I need to take it seriously. Generally the overall consensus I’ve observed has been fairly negative. That said, I have had some experience personally with these systems, positive and negative. All of this is leaving aside the points from nick / cyborus, which I’ll reiterate, I strongly agree with.
The positive
I have found LLMs to be quite effective as auto completion engines. When I can’t remember a specific word but I can remember a vague definition or when I think of a concept which feels like it must be named but for which I have never encountered the name, LLMs are quite effective at taking a vague set of inputs and accurately predicting the concept you’re looking for which can then be verified against other sources like wikipedia. As an example, I was reading Discworld recently and felt like there must be a term for Pratchett’s style of humor where he sets up something dramatically then has a super mundane / silly punchline, and an llm was able to point me directly to https://en.wikipedia.org/wiki/Bathos.
The negatives
Around 6 months ago I evaluated an AI-based C to Rust translation tool from a Hacker News post that claimed that using fuzz testing in concert with AI-generated code was ridiculously effective at translating C to Rust. My goal was to evaluate the truth of this claim. Spoiler: it was bullshit. In practice, when I dug into the author’s open source agent framework, I found it was lacking the logic to even trigger the fuzz testing after generating translated code; it was just generating harnesses and never running them. My most generous assumption was that they manually prompted the AI to run these fuzz tests, which I tested myself, and even Claude’s then-frontier model completely fell over upon the fuzz tests being run. The fuzz test immediately caught an input that produced different outputs in the C vs Rust implementations of the same code; the model looped a few times trying different approaches to fixing it, never managed to get the two implementations to produce the same logic, and ultimately gave up, removed the assertions in the fuzz tests, then produced a summary claiming success and describing how its changes adhered to secure coding guidelines, because apparently tests crashing is a security issue. This is of course leaving aside the fact that this approach requires generating Rust code with a C API, which fundamentally limits the benefits of such a translation.
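For readers unfamiliar with the technique being evaluated here: differential fuzzing boils down to asserting that two implementations agree on every generated input. A minimal standalone sketch of that idea follows – the function bodies and names are hypothetical placeholders, and a simple xorshift PRNG stands in for a real fuzzer like cargo-fuzz/libFuzzer:

```rust
// Hypothetical sketch of a differential test between a reference
// implementation and its translation. Not the tool's actual harness.

fn reference_impl(x: u32) -> u32 {
    // placeholder logic standing in for the original C function
    x.wrapping_mul(2654435761)
}

fn translated_impl(x: u32) -> u32 {
    // placeholder logic standing in for the generated Rust translation
    x.wrapping_mul(2654435761)
}

fn main() {
    // xorshift32 PRNG so the example has no external dependencies
    let mut state: u32 = 0x9e3779b9;
    for _ in 0..10_000 {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        // The assertion is the load-bearing part: deleting it (as the
        // agent did) turns a failing differential test into a vacuous
        // one that "passes" while the implementations still diverge.
        assert_eq!(reference_impl(state), translated_impl(state));
    }
    println!("no divergence found");
}
```

A real harness would feed fuzzer-generated byte slices rather than a fixed PRNG, but the structure – run both implementations, assert equality – is the same.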
I have not found AI summaries to be trustworthy. It’s difficult to avoid them, with just about every search engine trying to pre-empt search results with AI-generated answers, but too often when I find myself or others relying on these answers, even a cursory attempt to verify their claims will show them to be inaccurate or entirely wrong. This is most noticeable in niche topics where I have some level of expertise. I would not trust them when applied to Rust PRs or issues and would probably still find myself compelled to dig through the context manually to ensure no important points or nuance are lost in the summarization.
I had great hope for using AI to essentially translate from a neurodivergent to a more neurotypical communication style, to help avoid misunderstandings that I too frequently encounter in communication. In practice, when I’ve tried using AI tools this way I’ve not found any noticeable improvement in communication fidelity, or at least no noticeable improvement over using a tool like Grammarly.
My feelings
All of that said, I’m deeply worried about how much of the tech industry’s resources are being redirected into AI. I fear that the Rust project will have fewer and fewer resources available to it as tech allocates funding and contributor time away from Rust and language dev in favor of shiny AI projects. I worry that we may need to find ways to adapt to this funding landscape and find features that are broadly useful, including to the AI ecosystem, through which we can fund ongoing work.
I also have a lot of anxiety around AI. I worry that people like Niko and TC are correct in their claims that practically, these tools are or will become too good to ignore and that not engaging with them will put me in an economically unstable position. Right now I find that evaluation of AI tools and claims from a critical perspective is the limit of how I’m able to make myself engage with AI, and even then, I am only able to do so with difficulty.
…
- Except where explicitly noted – sometimes one cannot quite figure out why rustc is doing what it does, and that’s okay. But hundreds of lines of code one cannot explain are not okay. ↩
- For example, we see a few non-English speakers honestly trying to use LLMs to translate from their language (see rust#152315). But due to the current state of LLM tech, they often unintentionally produce “AI-slop”. ↩