Snippy Summary

Building the God Coding Agent

September 26, 2025 02:37
Latent Space

This YouTube video features a discussion with Quinn Slack and Thorsten Ball from Sourcegraph about their new product, AMP. They contrast AMP with their previous product, Cody, and explore the rapid evolution of AI coding assistants.

Main Points

  • Evolution from Cody to AMP: The transition from Cody to AMP was driven by the realization that the underlying technology and user expectations had fundamentally changed with the advent of more powerful AI models like Claude 3. Cody was designed for a different era of AI assistance, focused on RAG and chat interfaces, while AMP is built around a more advanced "coding agent" concept. [0:31-2:37]
  • The Coding Agent Concept: An agent is defined as a model, a system prompt, and tool prompts, with significant permissions to interact with the file system and editor. This allows for more sophisticated and autonomous actions. [1:33-2:06]
  • Resetting Expectations: Sourcegraph chose to create a new brand and product (AMP) to reset user expectations regarding capabilities, usage, and pricing, as the advanced features of AMP could not be retrofitted into the existing Cody product or pricing structure. [2:04-2:37, 3:05-3:38]
  • Rapid Iteration and Development: AMP is built with a philosophy of rapid iteration, shipping multiple times a day, which is enabled by a smaller, agile team and a different development approach compared to traditional enterprise software. [5:09-5:43, 11:23-11:56]
  • Targeting the Frontier: AMP is focused on attracting "power users" and developers at the "frontier" of AI adoption who want to leverage the latest capabilities, rather than trying to appeal to the broadest mainstream audience immediately. [5:41-6:15, 39:16-40:56]
  • User Interface Flexibility: The product initially launched with a VS Code extension for ease of distribution and feedback, but also developed a CLI client, recognizing its advantages for remote access and different workflows. The decision to support both highlights the evolving ways users interact with AI tools. [13:26-14:30]
  • The "Bitter Lesson" and Scaffolding: The AMP team embraces the idea that underlying models will continuously improve, making external tools and complex workflows obsolete. They focus on building a flexible "scaffolding" around models that can be easily replaced or removed as models advance. [33:05-33:36]
  • Non-Deterministic Nature of LLMs: Acknowledging that LLMs are non-deterministic, the team emphasizes the importance of managing user expectations and focusing on reliable workflows, rather than building overly complex, brittle systems that work only most of the time. [35:08-36:14]
  • Focus on Core Capabilities: AMP deliberately avoids features like "prompt enhancers" or deeply integrated MCP servers that can lead to increased token usage, complexity, and potential for misuse, prioritizing core agentic capabilities and speed. [40:52-43:27]
  • The Future of Coding Agents: The discussion touches on the evolving landscape of AI agents, including the role of sub-agents, testing, infrastructure as code, and the potential for agents to fundamentally change how software is built and consumed. [48:06-53:20]
  • Adapting to Change: A core tenet of AMP's development is the understanding that the AI landscape is constantly shifting. They prioritize building a codebase and team structure that can react quickly to new models and paradigms, rather than investing heavily in solutions that might become outdated. [7:45-8:18, 10:22-10:54]
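
The "coding agent" definition in the points above (a model, a system prompt, and tool prompts, plus permissions to act on the file system) can be sketched as a simple loop. This is an illustrative sketch, not Amp's actual implementation: the model call is stubbed out, and the tool names and message format are assumptions for the example.

```python
# Minimal sketch of "agent = model + system prompt + tools with permissions".
# In a real agent, `model` would call an LLM API; here it is a stub.
import json
from pathlib import Path

SYSTEM_PROMPT = "You are a coding agent. Use tools to read and edit files."

# Tools the agent is permitted to use: each acts on the file system
# (the "significant permissions" the summary mentions).
TOOLS = {
    "read_file": lambda path: Path(path).read_text(),
    "write_file": lambda path, content: Path(path).write_text(content),
    "list_dir": lambda path=".": [p.name for p in Path(path).iterdir()],
}

def run_agent(user_goal, model, max_steps=10):
    """Loop: send the conversation to the model, execute any tool call it
    returns, append the result, and repeat until the model reports done."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_goal}]
    for _ in range(max_steps):
        reply = model(messages)  # stubbed LLM call
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        if reply.get("done"):
            return reply.get("answer")
        result = TOOLS[reply["tool"]](*reply.get("args", []))
        messages.append({"role": "tool", "content": str(result)})
    return None

# Stub model that "decides" to list the directory, then finish.
def stub_model(messages):
    if messages[-1]["role"] == "user":
        return {"tool": "list_dir", "args": ["."]}
    return {"done": True, "answer": "Listed the working directory."}

print(run_agent("What files are here?", stub_model))
```

The point of the "scaffolding" discussion is visible even in this toy: the loop and tool registry are thin and replaceable, so a more capable model can take over work without the harness getting in the way.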

Key Takeaways

  • Rapid Technological Shifts: The AI landscape is evolving at an unprecedented pace, necessitating a flexible and iterative approach to product development. [7:15-7:48]
  • Focus on Power Users: Targeting early adopters and power users is a strategic advantage, as they can provide valuable feedback and push the boundaries of what's possible. [39:16-40:56]
  • Simplicity and Speed: Despite advanced capabilities, a focus on speed, efficiency, and minimizing complexity is crucial for effective AI agent development. [11:23-11:56, 40:52-42:25]
  • Trust and Iteration: Sourcegraph leverages existing customer trust to fund experimental initiatives like AMP, demonstrating the value of building strong relationships. [8:16-8:49]
  • The "Coding Agent God": The ultimate goal is to build the most capable coding agent, a continuous pursuit that drives the team's innovation. [3:05-3:38, 76:00-76:32]

Discussion Points on Product Development

  • CLI vs. VS Code Extension: Both interfaces have distinct advantages, and user adoption has shown a surprising balance between the two. [13:26-17:32]
  • Agent Orchestration: Managing multiple agents and understanding their collective actions is a significant challenge that is still being explored. [64:09-66:13]
  • Outer Loop Improvements: The "outer loop" of software development, such as PR reviews, needs to evolve to accommodate AI-generated code and workflows. [63:38-66:45]
  • Git and Version Control: The current version control systems might need to adapt to the new paradigms introduced by agents. [68:16-68:48]
  • User-Generated Content and Personalization: The trend towards user-generated content and personalized tools extends to enterprise software, with AI agents playing a key role. [69:17-71:21]

Hiring and Future Outlook

Sourcegraph is actively hiring engineers interested in agentic programming and the future of coding. They encourage feedback from users and are excited about building the future of coding agents. [81:14-81:59]