
If you feel like every software company on the planet has spent the last two years racing to slap AI onto its products, you’re not alone. Community platforms have begun offering chatbots that can generate engagement, automated responses designed to feel human, and features that promise to do users' thinking for them.
I've been building community software for over a decade, first at Stack Overflow and then at Discourse, and I've watched this AI tsunami hit the industry with a mixture of excitement and frustration.
I've come to believe that the best uses of AI in community software are the ones you barely notice, while the worst are the ones that make you feel clever for implementing them.
But too many people in community are getting this backwards.
In online communities, trust takes years to build and minutes to destroy. This asymmetry should inform every AI decision you make, but most companies treat it as an afterthought.
From the beginning of our AI work at Discourse, we established one hard rule: AI must never masquerade as a human. Every piece of AI-generated content gets labeled. We tell users which language model produced the content. When our AI assistant helps with something, there's no ambiguity.
Why does this matter? Communities are built on human connection. People join forums and discussion spaces to interact with others and learn from their experiences. The moment you introduce unlabeled AI into that equation, you're gambling with the social contract that holds everything together.
"Once someone discovers that the thoughtful reply they've been engaging with came from a bot, they start questioning every interaction they've ever had on your platform."
A year ago, I was worried about the viability of certain aspects of our hosting business. Community after community was complaining that Akismet wasn't catching obvious spam. The stuff getting through was clearly garbage to any human who looked at it, but the traditional filtering systems couldn't keep up. We introduced AI-powered spam scanning using language models, and the problem went away.
Even low-cost models like Gemini 2.5 Flash can catch the vast majority of spam. But the interesting part is the steerability. With Akismet, you could mark items as good or bad, but you couldn't provide it with nuanced instructions. With LLM-based spam detection, you can tell it exactly what your community considers spam. Maybe your forum focuses on automotive repair, and any discussion of cryptocurrency is automatically suspicious. You can specify that. Maybe you run a cooking community, and links to non-food products are red flags. You can encode that preference directly. The system adapts to what your community cares about, rather than applying generic rules that may or may not match your context.
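The steerability described above can be sketched in a few lines. This is a minimal illustration, not Discourse's actual implementation: the function names (`build_spam_prompt`, `parse_verdict`) and the prompt wording are assumptions, and the model call itself is left out.

```python
# Sketch of steerable, LLM-based spam detection. The model call is stubbed
# out; prompt construction and verdict parsing are the point. All names
# here are illustrative, not any platform's real internals.

def build_spam_prompt(community_rules: list[str], post: str) -> str:
    """Embed community-specific definitions of spam directly in the prompt."""
    rules = "\n".join(f"- {r}" for r in community_rules)
    return (
        "You are a spam filter for an online forum.\n"
        "In addition to generic spam, this community also treats "
        "the following as spam:\n"
        f"{rules}\n\n"
        "Reply with exactly SPAM or NOT_SPAM.\n\n"
        f"Post:\n{post}"
    )

def parse_verdict(model_reply: str) -> bool:
    """True if the model judged the post to be spam."""
    return model_reply.strip().upper().startswith("SPAM")

# Example: an automotive-repair forum that treats crypto talk as spam.
prompt = build_spam_prompt(
    ["Any discussion of cryptocurrency", "Links to gambling sites"],
    "Check out this new coin, guaranteed 100x returns!",
)
```

The prompt would be sent to an inexpensive model; `parse_verdict` reads the reply, and flagged posts land in a human review queue rather than being deleted outright.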
This is AI doing what AI should do in community software: handling tedious work with more nuance than traditional automation could manage, while keeping humans in the decision-making seat. The AI flags things, and humans decide what to do with them. No one is replaced, and everyone's time is used more effectively.
People are terrible at tagging. I'm terrible at tagging. You're probably terrible at tagging, too. It's tedious work that interrupts the flow of whatever you're actually trying to accomplish, and most community members simply don't do it.
AI is good at tagging.
The Cursor forum has been experimenting with automated tagging, and the results have been impressive. Every new post gets analyzed by an LLM. The system maintains a list of valid tags and assigns confidence scores to each potential match. If the confidence passes a threshold, the tags get applied automatically. Users don't have to think through their categorization. Moderators don't have to chase people down. The forum's organizational structure maintains itself.
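The confidence-threshold mechanism described above is simple to express in code. A minimal sketch, with an assumed tag list and threshold; the real forum's internals aren't public here, so every name is illustrative:

```python
# Sketch of confidence-thresholded auto-tagging: the LLM scores candidate
# tags, and only known tags above the threshold are applied automatically.

VALID_TAGS = {"bug", "feature-request", "how-to", "performance"}
CONFIDENCE_THRESHOLD = 0.8  # assumed value for illustration

def tags_to_apply(scored_tags: dict[str, float]) -> list[str]:
    """Keep only valid tags whose model confidence clears the threshold."""
    return sorted(
        tag for tag, score in scored_tags.items()
        if tag in VALID_TAGS and score >= CONFIDENCE_THRESHOLD
    )

# Suppose the LLM returned these scores for a new post:
scores = {"bug": 0.93, "performance": 0.81, "how-to": 0.42, "billing": 0.9}
print(tags_to_apply(scores))  # ['bug', 'performance']
```

Note that "billing" is dropped despite its high score because it isn't in the valid-tag list — the model can only choose from tags the community has already defined.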
It’s the same with topic splitting. Discussion topics wander. A conversation focused on JavaScript debugging drifts into a debate about code editors, which somehow becomes an argument about operating systems, and now you have a 200-post topic where nobody can find anything useful.
"A moderator could theoretically read through the whole thing and manually split it into coherent subtopics, but who has time for that?"
An LLM can read the entire topic and suggest logical split points. "Posts 15 through 28 are focused on TypeScript specifically. Posts 45 through 63 are on IDE configuration. You might want separate topics for these." You're not removing human judgment. You're giving moderators a map instead of making them explore uncharted territory every time.
These applications share a pattern: using AI to amplify human intelligence rather than replace it. The technology handles the tedious, rule-based tasks, while humans handle judgment and community context.
The most important principle I've landed on after watching AI wash over the community software space: these tools work best as amplifiers for human judgment, not replacements. The AI flags potential spam; a human confirms or overrides the flag. The AI suggests where to split a conversation; a moderator makes the final call.
Community moderation is fundamentally about judgment calls that depend on context, history, and relationships that no model fully captures. The same post might be perfectly acceptable in one community and a bannable offense in another. The teams getting the most value from AI are treating it like an exceptionally capable junior employee: someone who can handle routine tasks and surface important information, but who needs supervision. The teams getting the least value are treating it like magic that will solve problems they haven't bothered to think through.
I build tools for communities because I believe in what communities can be: online spaces where people with shared interests connect meaningfully across distances and have substantive conversations beyond the frantic short-form content of social media. AI can help with that mission or undermine it. The technology is agnostic. The choices we make on deployment are what matter.
Everyone working with community software needs to understand AI; that part isn't optional anymore. But understanding is the starting point. The harder work is figuring out which capabilities to deploy, where, and why. Getting that right is what will separate the communities AI helps from the ones it buries in noise.