Back in February, Ivan Bochkarev (@ibochkarev) submitted around 74 PRs over the course of about eight days. I want to be upfront about how that landed: honestly, it hit me hard. Not just as a review backlog problem. I’d come out of the 3.2 release with a real sense of where MODX was heading. I had a direction I’d been working toward, and I wanted to get some genuine momentum behind it. The sudden volume felt like it cut across all of that in a way I wasn’t prepared for.
I want to be equally clear about what Ivan did: it was good-faith work. A real effort to improve the project. The problem wasn’t intent. The problem was that our process had no way to handle that kind of pattern, and it took me a while—and an inspiring idea from Mike Schell (@netprophet)—to realize we needed to change our perspective.
We stopped and actually thought about it, rather than just reacting. That re-evaluation led me somewhere I didn’t expect: taking a serious look at AI-assisted development, both as a workflow and in terms of what it means for an open source project like MODX. I’m still working through that. This post isn’t a conclusion dressed up as a question. I’m asking genuinely, as a maintainer trying to adapt to a rapidly changing software engineering landscape and to think clearly about what it means for this project.
I’m writing this before anything is decided, because I want to hear from you first.
Process gaps worth talking about
The first one is about bulk contributions. When a lot of PRs come in at once without prior coordination, it creates real pressure on review capacity—and that capacity is not always visible from the outside. Right now there’s nothing in our contributor docs that says “if you’re planning to submit a dozen PRs, let’s coordinate first.” That’s on us, not on contributors. A contributor who wants to do a big sweep of improvements has no way of knowing that sequencing matters or that a quick heads-up would help.
As a solution, I’m considering some kind of threshold. Something reasonable, like five or more PRs within a two-week window, which would trigger a simple ask: open a forum thread or a GitHub Discussion before submitting, so we can talk through scope and timing. Not a gate. Not approval required. Just a coordination point so the work lands somewhere it can actually be processed with the proper attention. Contributors benefit too: they get a chance to prioritize the most valuable PRs before writing any code, and they gain some confidence that the work will be reviewed.
So to the community: does that threshold feel right? Is two weeks the right window? Is a forum thread the right mechanism, or would a GitHub Discussion work better? I’m leaning toward adopting GitHub Discussions for this.
The second gap is about AI-assisted development. This one’s easier to state: I now use AI tooling in my own workflow, which is a shocking turnaround for anyone who knew my opinion of AI before the past month. A lot of contributors are adopting AI-assisted workflows, and this will quickly become the norm. I’m not interested in policing that or making any kind of moral or quality judgment about it. But I think it makes sense to ask contributors to note in their PR description if AI tooling wrote or substantially generated the code. Not as a flag or disclaimer; simply as context. It helps me understand where human judgment was applied, give appropriate feedback, and give appropriate credit. Every PR gets evaluated on the same criteria regardless of how it was written.
The edge cases are what I’m less sure about. AI-assisted refactoring feels different from AI-generated logic. Where do you draw the line? It is not an easy question to answer.
Is that reasonable? What edge cases am I not considering?
Finally, let’s think about project velocity. AI-assisted development is here whether we like it or not. How can we best adapt to that? I am working on ideas. For instance, I want to adopt a policy of taking reasonable contributions that need a few adjustments and simply fixing them: pushing directly to the PR branch and asking the original author for their review. They still get credit, but we increase velocity and keep the project moving forward rather than stuck in the past.
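To make the mechanics of that concrete, here is a minimal sketch of the workflow in git, simulated entirely with throwaway local repositories standing in for GitHub remotes. All paths, branch names, and commit messages below are hypothetical; on GitHub, the same final push targets the contributor’s fork and works only when they leave “Allow edits by maintainers” enabled on the PR.

```shell
set -e
tmp=$(mktemp -d)

# The contributor's fork, with their proposed PR branch (hypothetical names).
git init -q "$tmp/fork"
git -C "$tmp/fork" -c user.email=c@example.com -c user.name=contributor \
    commit -q --allow-empty -m "initial"
git -C "$tmp/fork" checkout -q -b fix/typo
git -C "$tmp/fork" -c user.email=c@example.com -c user.name=contributor \
    commit -q --allow-empty -m "proposed fix"

# The maintainer fetches the PR branch into a local working copy.
git init -q "$tmp/maintainer"
git -C "$tmp/maintainer" fetch -q "$tmp/fork" fix/typo:pr-branch
git -C "$tmp/maintainer" checkout -q pr-branch

# ...make the small adjustments, then commit on top of the contributor's work.
git -C "$tmp/maintainer" -c user.email=m@example.com -c user.name=maintainer \
    commit -q --allow-empty -m "review adjustments"

# Push the adjusted branch back to the contributor's branch, updating the PR
# while leaving their authorship on the original commits untouched.
git -C "$tmp/fork" checkout -q --detach  # only needed because our "fork" is a local non-bare repo
git -C "$tmp/maintainer" push -q "$tmp/fork" pr-branch:fix/typo
```

The key property is that the maintainer’s commit lands on top of the contributor’s commits, so the PR history still shows who wrote what.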
Is this something you, as a contributor, would be amenable to, regardless of whether you used AI assistance or not?
I am also exploring policies that other open source projects already facing these questions are adopting, and I welcome everyone’s input on these topics. It is uncertain territory, but I would rather engage with it directly than simply ignore it, at least for the time being. It feels like this is an inflection point.
What I’m not doing here
I’m not announcing policy. Nothing is decided. Ivan Bochkarev’s PRs are still actively being worked through and future policy will not apply retroactively to anything already submitted. But I want to get a good sense of where those of you with a stake in this project are at.
If you’ve submitted bulk PRs or used AI tooling in your contributions, your perspective on this is especially useful: you know where the friction actually lives in a way I am only beginning to understand.
My goal is to commit to an AI policy in the next few weeks, but only after hearing from the community first. I look forward to hearing from anyone who wants to share their perspective.