What Engineering Productivity Means Now: The DORA Lens
Exploring the groundbreaking findings of the 2025 DORA Report and how AI is reshaping engineering productivity, platform engineering, and software delivery performance
Topics Covered
Podcast Summary
In the sixth episode of the AI x DevOps podcast, host Rohit Raveendran sits down with Nathen Harvey, who leads DORA at Google Cloud, to dissect the findings of the 2025 DORA Report. This deep dive explores how AI is transforming engineering productivity and what it really takes to succeed in the modern software delivery landscape.
AI as an Amplifier, Not a Magic Wand
Discover why AI is categorized as an "amplifier" rather than a "magic wand." Learn how solid existing practices are essential for AI to truly yield results, and why organizations with poor foundations struggle to benefit from AI adoption.
The Platform Engineering Revolution
Understand why 90% of survey respondents have now adopted platform engineering and what this massive shift means for DevOps teams. Explore the practical implications of this transformation and how teams are navigating the change.
Navigating the J-Curve of Productivity
Learn about the J-Curve phenomenon—the initial performance dip during transformation—and practical strategies to navigate this challenging period to reach higher stability and efficiency on the other side.
AI-Centric Platform UX
Engage with the thought-provoking question: Should platforms be redesigned to serve AI agents as primary users? Explore the implications of AI-first design thinking for internal developer platforms.
Beyond Dashboards: Measuring What Matters
Move beyond static dashboards toward team reflection and experimentation to genuinely improve software delivery. Learn why metrics alone aren't enough and how to foster a culture of continuous improvement.
Essential listening for platform engineers, engineering leaders, DevOps practitioners, and anyone interested in understanding how DORA metrics and AI are reshaping software delivery performance.
What You'll Learn
• In-depth insights from industry experts
• Practical strategies you can implement today
• Real-world examples and case studies
• Interactive Q&A and community discussion
Hosts
Nathen Harvey
Rohit Raveendran
Special Guest: This session features expert insights from industry leaders outside of Facets.
Related Content
More Live Content
AI meets MLOps - Making sense of the mess
In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps, leading to extreme pipeline fragmentation.
Standardization is Key
Discover why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure. Learn how standardization is bringing order to the fragmented MLOps landscape.
Open Source Challenges
Understand the critical challenges maintainers face when receiving large amounts of untested, verbose, AI-generated code. Görkem shares insights on the impact of AI-generated pull requests on open-source projects.
LLM Economics and Strategy
Explore why running small, fine-tuned LLMs in-house can be cheaper and provide more predictable, consistent results than generic large providers. Get practical insights on when to build versus buy.
KitOps Solution
Learn how KitOps creates an abstraction that allows data scientists to focus on training while leveraging existing DevOps platforms for deployment. Discover how ModelKits are simplifying the AI/ML deployment pipeline.
Essential listening for platform engineers, DevOps practitioners, MLOps engineers, and anyone working at the intersection of AI and infrastructure. Tune in to understand the standardization movement reshaping the future of AI development.

AI x DevOps with Sanjeev Ganjihal - AWS Solutions Architect
Join Rohit Raveendran as he sits down with Sanjeev Ganjihal, Senior Container Specialist at AWS and one of the first 100 Kubernetes-certified professionals globally. This deep-dive conversation explores the transformative shift from traditional DevOps to AI-powered operations and what it means for the future of infrastructure management.
Evolution of DevOps and SRE
Explore Sanjeev's unique journey from being an early Kubernetes adopter in 2017 to becoming a specialist in AI/ML operations at AWS. Discover how the industry has evolved from manual operations to automated, intelligent infrastructure management and what this means for traditional SRE roles.
Multi-LLM Strategies in Practice
Get insider insights into Sanjeev's personal AI development toolkit, including how he uses Claude, Q Developer, and local models for different tasks. Learn practical multi-LLM routing strategies, code review workflows, and how to choose the right AI tool for specific infrastructure challenges.
Kubernetes Meets AI Infrastructure
Understand the unique challenges of running AI workloads on Kubernetes, from GPU resource management to model serving at scale. Sanjeev shares real-world experiences from supporting financial services customers and the patterns that work for high-performance computing environments.
The Future of AIOps
Dive into discussions about Model Context Protocol (MCP), autonomous agents, and the concept of "agentic AI" that will define 2025. Learn how these technologies are reshaping the relationship between humans and infrastructure, with the memorable analogy of "you are Krishna steering the chariot."
Security and Best Practices
Explore critical security considerations when implementing AI in DevOps workflows, including safe practices for model deployment, data handling, and maintaining compliance in enterprise environments.
Perfect for DevOps engineers, SREs, platform engineers, and technical leaders navigating the intersection of AI and infrastructure operations.

Kubernetes Agent for Natural Language Debugging
Discover how Facets' new Kubernetes Agent revolutionizes cluster management by enabling natural language debugging and secure troubleshooting. This episode showcases our AI-powered orchestrator that maintains proper guardrails and permissions while making Kubernetes operations conversational and intuitive.
Live Demonstrations & Key Features
Watch real-time troubleshooting as we diagnose a pod restart issue caused by missing sidecar files, identify and fix Redis deployment memory configuration problems, and demonstrate CPU usage analysis with Prometheus integration. See how the agent maintains security through user-scoped access controls while providing powerful debugging capabilities.
Technical Deep Dive
Explore the architecture behind Facets' Kubernetes Agent and how it orchestrates AI agents with secure infrastructure access. Learn about multi-tool integration supporting kubectl, Helm, and pod exec operations, plus natural language debugging that works with your existing permissions and kubeconfig setup.
Audience Q&A Highlights
Get answers to key questions about historical log analysis capabilities, chat history persistence and session management, integration possibilities with tools like Cursor and MCP, and comparisons with existing tools like ChatGPT and K9s. Plus, discover future plans for custom tool integration and blueprint generation.
Perfect For
DevOps engineers looking to streamline Kubernetes troubleshooting workflows, platform engineers interested in AI-powered infrastructure management, site reliability engineers seeking efficient debugging solutions, and development teams wanting to reduce time spent on cluster-related issues.
Related Articles
Runtime Behavioral Nudging for Large Language Model Agents
A comprehensive framework for dynamic behavioral guidance in LLM agents through invisible context injection. Learn how runtime nudging enables just-in-time behavioral guidance without modifying user input or restricting agent capabilities.
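The idea of "invisible context injection" can be sketched in a few lines: guidance is added to the message list the model sees, without touching the user-visible transcript. This is a minimal, hypothetical illustration; the names `NudgeRule` and `build_model_messages` are invented for this sketch and are not from the framework described in the article.

```python
# Minimal sketch of runtime behavioral nudging via context injection.
# All names here are illustrative, not taken from any specific framework.
from dataclasses import dataclass
from typing import Callable, Dict, List

Message = Dict[str, str]


@dataclass
class NudgeRule:
    """Fires when a condition on the conversation holds; carries guidance."""
    condition: Callable[[List[Message]], bool]
    guidance: str


def build_model_messages(transcript: List[Message],
                         rules: List[NudgeRule]) -> List[Message]:
    """Return the model's input: triggered nudges injected as system
    messages ahead of the transcript. The user-visible transcript is
    never modified."""
    injected = [
        {"role": "system", "content": rule.guidance}
        for rule in rules
        if rule.condition(transcript)
    ]
    return injected + list(transcript)


# Example: nudge the agent toward caution once a destructive verb appears.
rules = [
    NudgeRule(
        condition=lambda msgs: any("delete" in m["content"].lower()
                                   for m in msgs),
        guidance="Before any destructive action, confirm scope with the user.",
    )
]

transcript = [{"role": "user", "content": "Please delete the staging namespace"}]
model_input = build_model_messages(transcript, rules)
# transcript is unchanged; model_input carries one extra system message
```

Because the nudge lives only in the model input, the agent receives just-in-time guidance while the user's input and the visible conversation stay untouched, as the article describes.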
OCI...The Next Standard for AI Infrastructure?
Explore why OCI is emerging as the winning standard for AI/ML artifacts and how standardization is bringing order to the fragmented MLOps landscape with Görkem Ercan, CTO at Jozu
AI DevOps Reality: Field Report from the Enterprise Trenches
Understand the real-world impact of AI in DevOps with AWS Senior Container Specialist, Sanjeev Ganjihal