Thinking through contribution processes and seeking input

Back in February, Ivan Bochkarev (@ibochkarev) submitted around 74 PRs over the course of about eight days. I want to be upfront about how that landed: honestly, it hit me hard. Not just as a review backlog problem. I’d come out of the 3.2 release with a real sense of where MODX was heading. I had a direction I’d been working toward, and I wanted to get some genuine momentum behind it. The sudden volume felt like it cut across all of that in a way I wasn’t prepared for.

I want to be equally clear about what Ivan did: it was good-faith work. A real effort to improve the project. The problem wasn’t intent. The problem was that our process had no way to handle that kind of pattern, and it took me a while—and an inspiring idea from Mike Schell (@netprophet)—to realize we needed to change our perspective.

We stopped and actually thought about it, rather than just reacting. That re-evaluation led me somewhere I didn’t expect: taking a serious look at AI-assisted development, both as a workflow and in terms of what it means for an open source project like MODX. I’m still working through that. This post isn’t a conclusion dressed up as a question. I’m genuinely asking, as a maintainer trying to adapt to a rapidly changing software engineering landscape and to think clearly about what it means for this project.

I’m writing this before anything is decided, because I want to hear from you first.

Process gaps worth talking about

The first one is about bulk contributions. When a lot of PRs come in at once without prior coordination, it creates real pressure on review capacity—and that capacity is not always visible from the outside. Right now there’s nothing in our contributor docs that says “if you’re planning to submit a dozen PRs, let’s coordinate first.” That’s on us, not on contributors. A contributor who wants to do a big sweep of improvements has no way of knowing that sequencing matters or that a quick heads-up would help.

As a solution, I am considering some kind of threshold. Something reasonable, like five or more PRs within a two-week window, which would trigger a simple ask: open a forum thread or a GitHub Discussion before submitting, so we can talk through scope and timing. Not a gate. Not approval required. Just a coordination point so the work lands somewhere it can actually be processed with the proper attention. Contributors benefit as well: they get a chance to prioritize the most valuable PRs before writing any code, and they gain some confidence that the work will actually be reviewed.

So to the community: does that threshold feel right? Is two weeks the right window? Is a forum thread the right mechanism, or would a GitHub Discussion work better? I’m leaning toward adopting GitHub Discussions for this.

The second gap is about AI-assisted development. This one’s easier to state: I now use AI tooling in my own workflow, a shocking turnaround for anyone who knew my opinion of AI before the past month. A lot of contributors are adopting AI-assisted workflows, and this will quickly become the norm. I’m not interested in policing that or making any kind of moral or quality judgment about it. But I think it makes sense to ask contributors to note in their PR description if AI tooling wrote or substantially generated the code. Not as a flag or disclaimer. Simply as context. It helps me understand where human judgment was applied, give appropriate feedback, and assign credit appropriately. Every PR gets evaluated on the same criteria regardless of how it was written.

The edge cases are what I’m less sure about. AI-assisted refactoring feels different from AI-generated logic. Where do you draw the line? It is not an easy question to answer.

Is that reasonable? What edge cases am I not considering?

Finally, let’s think about project velocity. AI-assisted development is here whether we like it or not. How can we best adapt to that? I am working on ideas. For instance, I want to work on adopting a policy of taking reasonable contributions that need a few adjustments and simply fixing them, pushing to the PR branch, and asking the original author for their review. They still get credit, but we increase velocity and keep the project moving forward, rather than stuck in the past.

Is this something you, as a contributor, would be amenable to, regardless of whether you used AI assistance?

I am exploring other policies that open source projects already dealing with these problems are adopting. I welcome everyone’s input on these topics. It is uncertain territory, but I would rather engage with it directly than simply ignore it, at least for the time being. It feels like this is an inflection point.

What I’m not doing here

I’m not announcing policy. Nothing is decided. Ivan Bochkarev’s PRs are still actively being worked through and future policy will not apply retroactively to anything already submitted. But I want to get a good sense of where those of you with a stake in this project are at.

If you’ve submitted bulk PRs or used AI tooling in your contributions, your perspective on this is especially useful. You will know where friction actually lives in a way I am barely scratching the surface of at this point.

My goal is to commit to an AI policy in the next few weeks, but only after hearing from the community first. I look forward to hearing from anyone who wants to share their perspective.

10 Likes

Having been burned by top-down decisions in other contexts (not MODX), I really appreciate that you’re asking.

I have no firm opinion on AI at this point. I’m tempted to suggest discouraging AI use until we see what kinds of economies, and disasters, occur in other AI-assisted coding projects, although there may be enough information available now; I don’t really know where to look for it.

In any event, I think providing information about whether and how AI was used in developing a pull request is critical. It might be a good idea to suggest categories of AI use to check off in a PR.

For pre-PR discussions, I’d prefer the Forums because, at least in my case, I would be more aware of the discussions, since I look at every new post in the Forums, though I confess that I don’t know what notification of PR discussions on GitHub would look like. I also suspect that discussions in the Forums would be more easily found in a Google search. I could be wrong.

1 Like

Just a short answer from my side, as a (very) long-term user, but sadly not a contributor besides bug reports now and then.

Basing additions to MODX on a tool that is built on stolen code would drive me away. “Everyone does it” and “it is fast” are not good answers to intellectual property theft, which is what nearly all public/big LLM companies are basing their value on. You want others to honor the MODX licensing terms, I assume; I certainly expect this from commercial companies, too.

So this is a sad “development” for me …

@goetz_3rz

Thanks. Your post made me think.

It reminded me of something R. Buckminster Fuller pointed out in his book Critical Path. He said that the most valuable commodity on Earth has always been information (he called it “know-how”), and we have all reaped huge benefits from inventions like the lever, the inclined plane, the wheel, the screw, etc. Further inventions like water power, steam power, and electricity extended the power of those earlier inventions, and collectively they are a source of immense wealth.

His principal point was that every person on earth is a descendant of at least one of those original inventors, and that a vast number of us are being cheated out of our inheritance. The titans of AI are capable of compounding that injustice by many orders of magnitude.