AI Coding Tools Nearly Double Output While Maintaining Code Quality
The Jellyfish engineering intelligence platform has released a comprehensive benchmark study showing that AI coding tools are significantly increasing software development output without degrading code quality. Drawing on data from over 700 companies, 200,000 engineers, and 20 million pull requests, the study finds that top adopters of AI coding tools generate nearly twice as many pull requests per week as non-adopters. The productivity surge is broad-based: 63% of companies now report using AI tools for most of their coding, and 64% say a majority of their code is generated with AI assistance.
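As a rough illustration, a per-engineer throughput comparison of this kind might be computed as sketched below. The record fields (prs_merged, engineers) and the toy cohort numbers are hypothetical, chosen only to mirror the reported roughly two-to-one gap; they are not drawn from the Jellyfish dataset or its methodology.

```python
# Hypothetical sketch: median merged PRs per engineer per week for a cohort.
# Field names and figures are illustrative assumptions, not Jellyfish data.
from statistics import median

def weekly_prs_per_engineer(records):
    """Return the median merged-PRs-per-engineer rate across weekly records."""
    rates = [r["prs_merged"] / r["engineers"] for r in records if r["engineers"]]
    return median(rates) if rates else 0.0

# Toy weekly records for two cohorts, scaled to echo the reported ~2x gap.
top_adopters = [{"prs_merged": 120, "engineers": 10},
                {"prs_merged": 250, "engineers": 22}]
non_adopters = [{"prs_merged": 60, "engineers": 10},
                {"prs_merged": 130, "engineers": 22}]

print(weekly_prs_per_engineer(top_adopters))  # ~11.7 PRs/engineer/week
print(weekly_prs_per_engineer(non_adopters))  # ~6.0 PRs/engineer/week
```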
Contrary to concerns about declining code quality, the study indicates that AI-assisted development maintains stable software standards. Revert rates, the share of merged code that must be rolled back after deployment, rise only modestly, from 0.61% at low-adoption companies to 0.65% at the highest adoption levels. This suggests that AI tools accelerate development without a proportional increase in defects serious enough to warrant a rollback. The stability of this quality metric points to AI coding assistants acting as reliable partners rather than a new source of maintenance overhead.
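A revert rate of this sort is simply the fraction of deployed changes later rolled back. The sketch below shows one way to compute it; the deployment record format is an assumption for illustration, not Jellyfish's published methodology.

```python
# Hypothetical sketch of a revert-rate calculation over deployment records.
# The "reverted" flag and record layout are assumptions, not Jellyfish's schema.
def revert_rate(deployments):
    """Percentage of deployed changes that were later rolled back."""
    if not deployments:
        return 0.0
    reverted = sum(1 for d in deployments if d.get("reverted", False))
    return 100.0 * reverted / len(deployments)

# Toy data mirroring the reported figures: 61 and 65 reverts per 10,000 deploys.
low_adoption = [{"reverted": i < 61} for i in range(10_000)]
high_adoption = [{"reverted": i < 65} for i in range(10_000)]

print(f"{revert_rate(low_adoption):.2f}%")   # 0.61%
print(f"{revert_rate(high_adoption):.2f}%")  # 0.65%
```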
The study also reveals patterns in AI tool adoption across development teams. GitHub Copilot, OpenAI’s Codex, and Cursor emerge as the most popular AI coding tools among engineers, many of whom report that these tools have fundamentally changed their workflow. “Last Fall was around the time I gave up fully writing my own code,” one engineer shared, noting they have not written or reviewed code unassisted since October 2025. The shift marks a deeper change in the software development paradigm: AI is moving from occasional assistant to primary work partner, raising hard questions about the future of programming expertise and developer roles in the industry.