The UK’s National Cyber Security Centre (NCSC) has issued a set of security guidelines for AI coding tools, emphasizing the need to prevent vulnerability propagation in AI-generated code. NCSC CEO Richard Horne presented these guidelines at the RSA Conference 2026, framing AI-assisted development—what some call “vibe coding”—as both an opportunity and a risk in software security. The guidelines represent a significant step toward establishing security standards for the rapidly growing field of AI-powered software development tools.
NCSC CTO David C outlined specific “commandments” for securing AI-assisted coding: integrating secure-by-default practices into AI models, taking a trust-but-verify approach backed by provable model provenance, using AI to audit all generated code, and enforcing deterministic guardrails on code behavior. These recommendations acknowledge that while AI could help eliminate the persistent problem of vulnerabilities in manually written software, it could also introduce new security challenges if not properly governed.
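The guidelines describe these practices at the policy level rather than prescribing an implementation. As one illustration of what a “deterministic guardrail” might look like in practice, the sketch below applies a static deny-list check to AI-generated Python before it is accepted into a codebase. The deny-list and function names here are hypothetical, not drawn from the NCSC guidance.

```python
import ast

# Hypothetical deny-list for illustration; the NCSC guidance does not
# prescribe specific rules or a specific enforcement mechanism.
DENIED_CALLS = {"eval", "exec", "compile", "__import__"}

def guardrail_check(source: str) -> list[str]:
    """Deterministically flag dangerous calls in generated code.

    Parses the source into an AST and walks it looking for direct
    calls to denied built-ins. Returns a list of violation messages;
    an empty list means the code passes this particular gate.
    """
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DENIED_CALLS:
                violations.append(
                    f"line {node.lineno}: call to {node.func.id}() is denied"
                )
    return violations

# Example: generated code that dynamically evaluates user input is rejected.
generated = "result = eval(user_input)"
print(guardrail_check(generated))
```

Because the check is a fixed static analysis rather than another model inference, its verdict is reproducible: the same input always yields the same result, which is the property that distinguishes a deterministic guardrail from AI-based auditing.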
The guidelines also suggest potential benefits beyond security, including AI’s ability to help organizations address technical debt by hardening legacy applications and potentially offering a migration path for companies hesitant to move to cloud platforms. This broad scope reflects a recognition that AI tools are becoming integral to the software development process and require new security paradigms, rather than being treated as temporary additions to existing workflows.