During the 2026 edition of QCon London, held at the Royal Festival Hall on 12 March, senior developer Hannah Foxwell delivered a keynote that drew attention to the growing role of artificial intelligence in software creation. Foxwell, a recognized thought leader in developer productivity, opened her presentation by acknowledging that the long‑awaited acceleration in software development has finally arrived. However, she emphasized that the industry remains uncertain about how to harness this newfound speed effectively.
Focus on Human Impact Rather Than Technical Detail
Rather than delving into the intricacies of agentic coding—where autonomous AI agents generate code based on high‑level specifications—Foxwell chose to concentrate on the broader implications for the workforce. She argued that the most pressing question is not how the technology works, but how it will reshape the roles of developers, project managers, and quality assurance professionals. By setting aside the technical mechanics, Foxwell aimed to spark a conversation about workforce adaptation and skill development.
Context: QCon London and the Rise of AI Agents
QCon is a global conference series that brings together software engineers, architects, and technology leaders to discuss emerging trends. The London event in 2026 attracted over 2,000 attendees from more than 30 countries, reflecting the conference’s reputation as a leading forum for industry dialogue. The theme of this year’s conference, “Future‑Proofing Software Development,” aligned closely with Foxwell’s focus on AI‑driven code generation.
AI agents that write code are built on large language models trained on vast code repositories. These agents can produce functional code snippets, automate repetitive tasks, and even suggest architectural improvements. While the technology promises significant productivity gains, it also raises questions about code quality, security, and the need for human oversight.
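The human oversight mentioned above can be made concrete as an automated gate between generation and review. The sketch below is illustrative only: `generate_snippet` is a hypothetical stand‑in for a real code‑generation agent, and the validation step uses nothing beyond the Python standard library.

```python
"""Minimal sketch of an oversight gate for AI-generated code.

Assumption: `generate_snippet` is a hypothetical placeholder for a real
agent call; the point is that generated code is compiled and smoke-tested
before it ever reaches a human reviewer.
"""


def generate_snippet() -> str:
    # Hypothetical: a real agent would call a language model here.
    return "def add(a, b):\n    return a + b\n"


def validate_snippet(source: str) -> dict:
    """Compile the generated source and run a behavioral smoke test.

    Generation alone is not acceptance: syntax errors and failing
    checks are rejected before human review begins.
    """
    namespace = {}
    try:
        compile(source, "<generated>", "exec")  # syntax gate
        exec(source, namespace)                 # load the definitions
    except SyntaxError as err:
        return {"ok": False, "reason": f"syntax error: {err}"}
    add = namespace.get("add")
    if add is None or add(2, 3) != 5:           # behavioral smoke test
        return {"ok": False, "reason": "smoke test failed"}
    return {"ok": True, "reason": "passed; ready for human review"}


result = validate_snippet(generate_snippet())
print(result["reason"])
```

In practice the smoke test would be the team's existing test suite, but the shape is the same: an automated filter first, human judgment second.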
Key Takeaways from Foxwell’s Presentation
1. Velocity Is Real, but Guidance Is Needed
Foxwell noted that the speed at which code can now be produced has surpassed previous benchmarks. Yet, she highlighted a lack of industry consensus on best practices for integrating AI agents into existing development pipelines. Without clear guidelines, teams risk inconsistent code quality and potential security vulnerabilities.
2. Human Roles Will Shift, Not Vanish
According to Foxwell, the introduction of AI agents will not eliminate the need for developers. Instead, it will shift responsibilities toward higher‑level design, testing, and maintenance tasks. She stressed that developers will need to build new competencies in AI model evaluation, prompt engineering, and ethical oversight.
3. Training and Upskilling Are Imperative
Foxwell called for structured training programs that equip teams with the skills to collaborate effectively with AI agents. She suggested that organizations invest in internal workshops, certification courses, and cross‑functional teams that include data scientists, security experts, and domain specialists.
Industry Reactions
Several attendees expressed concern about the potential for AI agents to produce code that is difficult to audit. A senior architect from a leading cloud services provider remarked that “while the speed is impressive, we must ensure that the generated code adheres to our security and compliance standards.” Another participant, a product manager from a fintech startup, highlighted the opportunity for rapid prototyping but cautioned that “the human element remains critical for validating business logic.”
In a panel discussion that followed Foxwell’s keynote, experts debated the balance between automation and human oversight. One panelist, a researcher in software engineering, emphasized the importance of establishing clear governance frameworks for AI‑generated code. Another panelist, a senior developer advocate, pointed out that many organizations are already experimenting with AI agents in small, controlled environments, and that scaling these experiments will require robust monitoring and feedback loops.
Implications for the Software Development Community
The presentation underscored a broader industry trend toward integrating AI into the software development lifecycle. As AI agents become more capable, organizations will need to address several key areas:
• Code quality assurance processes must evolve to include automated testing of AI‑generated code.
• Security teams will need to develop new tools for detecting vulnerabilities introduced by AI.
• Legal and compliance departments will have to consider intellectual property implications of code produced by machine learning models.
• Human resource strategies will shift toward roles that emphasize oversight, ethical considerations, and continuous learning.
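The first two bullets above — evolving quality assurance and new security tooling for AI‑generated code — can be sketched as a simple static pre‑merge screen. The deny‑list here is purely illustrative, assuming a review policy that flags calls such as `eval` or `os.system`; it is not a complete security audit.

```python
import ast

# Illustrative deny-list of call names a review policy might flag in
# AI-generated code before it reaches human review (assumption: the
# actual policy would be organization-specific and far broader).
RISKY_CALLS = {"eval", "exec", "system", "popen"}


def flag_risky_calls(source: str) -> list[str]:
    """Parse the source and return a finding for each flagged call."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare names (eval(...)) and attribute
            # access (os.system(...)).
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: {name}()")
    return findings


snippet = "import os\nos.system('rm -rf /tmp/x')\n"
print(flag_risky_calls(snippet))  # → ["line 2: system()"]
```

A screen like this catches only what it is told to look for, which is precisely why the panelists' call for governance frameworks and human oversight remains central.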
Looking Ahead
Foxwell concluded her talk by outlining a roadmap for organizations to adopt AI agents responsibly. She suggested a phased approach that begins with pilot projects, followed by incremental scaling as teams gain confidence in the technology. The conference organizers announced that QCon London will host a dedicated workshop series in the coming months to explore best practices for AI‑driven development.
As the software industry continues to grapple with the rapid evolution of AI tools, the insights shared at QCon London 2026 provide a timely reminder that technology alone does not solve all challenges. Human expertise, governance, and continuous learning remain essential components of a successful transition to AI‑augmented development.