MCP Without the Hype: A Founders' Take
A candid co-founders' discussion on Model Context Protocol and its real-world implications beyond the hype
Topics Covered
Podcast Summary
Rohit and Anshul sit down for a candid co-founders' conversation about Model Context Protocol (MCP). They're not selling you anything here—just sharing their honest thoughts on what MCP actually does, where it might be useful, and why most of the buzz around it is probably premature.
What MCP Actually Is (And Isn't)
The co-founders walk through what Model Context Protocol really means in practice. They explain the problem it's trying to solve and why it's not the revolutionary breakthrough some people are claiming. Spoiler: it's more evolution than revolution, and that's probably fine.
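To make the "evolution, not revolution" point concrete, here is a minimal sketch of what MCP standardizes in practice: a server that exposes tools any compatible client can discover and call over a common protocol. This assumes the official Python SDK (the `mcp` package and its FastMCP helper); the `list_pods` tool and its call out to kubectl are illustrative examples, not something from the episode.

```python
# Minimal MCP server sketch, assuming the official Python SDK (`pip install mcp`).
# The list_pods tool below is a hypothetical example, not from the episode.
import subprocess

from mcp.server.fastmcp import FastMCP

# Name the server; MCP clients discover its tools over the protocol.
mcp = FastMCP("devops-tools")


@mcp.tool()
def list_pods(namespace: str = "default") -> str:
    """Return the pods in a Kubernetes namespace as plain text."""
    result = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace],
        capture_output=True,
        text=True,
        check=False,
    )
    return result.stdout or result.stderr


if __name__ == "__main__":
    # Serve over stdio so a local MCP client (an IDE or agent) can connect.
    mcp.run()
```

The protocol's contribution is the plumbing (tool discovery, schemas, transport), not new capability, which is why the hosts frame it as evolution rather than revolution.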
The Reality Check on DevOps Implementation
This isn't your typical "MCP will change everything" take. Rohit and Anshul discuss the actual challenges of integrating MCP with existing tools, the performance considerations most people aren't talking about, and why context management is still really hard—protocol or no protocol.
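To illustrate why context management stays hard either way, here is a hedged sketch of the kind of guardrail teams still write themselves: infrastructure tools can return enormous output, and something has to trim it before it goes back into the model's context. The character budget and function name below are assumptions made up for the example, not guidance from the episode.

```python
# A hypothetical guardrail around a tool result before it re-enters the model's
# context window. The 4,000-character budget is an arbitrary example value.
MAX_TOOL_OUTPUT_CHARS = 4000


def clamp_tool_output(output: str, limit: int = MAX_TOOL_OUTPUT_CHARS) -> str:
    """Keep the head and tail of an oversized tool result and note the cut.

    MCP standardizes how a tool gets called; it does not decide what happens
    when `kubectl logs` hands back megabytes of text. That part is still on
    the integrator, protocol or no protocol.
    """
    if len(output) <= limit:
        return output
    half = limit // 2
    omitted = len(output) - limit
    return f"{output[:half]}\n... [{omitted} characters omitted] ...\n{output[-half:]}"
```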
When to Use It (And When Not To)
The co-founders get practical about decision-making. They share their framework for when MCP might actually help your team versus when you're probably better off sticking with what works. It's refreshingly honest advice from people who've been in the trenches.
A no-nonsense conversation for anyone trying to figure out if MCP is worth their time and energy, without the usual vendor pitch or conference hype.
What You'll Learn
• What Model Context Protocol actually is, the problem it solves, and what it isn't
• The practical challenges of integrating MCP with existing DevOps tools and workflows
• A framework for deciding when MCP helps your team and when to stick with what works
• Interactive Q&A and community discussion
Hosts
Rohit Raveendran
Anshul Sao
Related Content
More Live Content
AI x DevOps with Sanjeev Ganjihal - AWS Solutions Architect
Join Rohit Raveendran as he sits down with Sanjeev Ganjihal, Senior Container Specialist at AWS and one of the first 100 Kubernetes certified professionals globally. This deep dive conversation explores the transformative shift from traditional DevOps to AI-powered operations and what it means for the future of infrastructure management.

### Evolution of DevOps and SRE
Explore Sanjeev's unique journey from being an early Kubernetes adopter in 2017 to becoming a specialist in AI/ML operations at AWS. Discover how the industry has evolved from manual operations to automated, intelligent infrastructure management and what this means for traditional SRE roles.

### Multi-LLM Strategies in Practice
Get insider insights into Sanjeev's personal AI development toolkit, including how he uses Claude, Q Developer, and local models for different tasks. Learn practical multi-LLM routing strategies, code review workflows, and how to choose the right AI tool for specific infrastructure challenges.

### Kubernetes Meets AI Infrastructure
Understand the unique challenges of running AI workloads on Kubernetes, from GPU resource management to model serving at scale. Sanjeev shares real-world experiences from supporting financial services customers and the patterns that work for high-performance computing environments.

### The Future of AIOps
Dive into discussions about Model Context Protocol (MCP), autonomous agents, and the concept of "agentic AI" that will define 2025. Learn how these technologies are reshaping the relationship between humans and infrastructure, with the memorable analogy of "you are Krishna steering the chariot."

### Security and Best Practices
Explore critical security considerations when implementing AI in DevOps workflows, including safe practices for model deployment, data handling, and maintaining compliance in enterprise environments.

Perfect for DevOps engineers, SREs, platform engineers, and technical leaders navigating the intersection of AI and infrastructure operations.

Kubernetes Agent for Natural Language Debugging
Discover how Facets' new Kubernetes Agent revolutionizes cluster management by enabling natural language debugging and secure troubleshooting. This episode showcases our AI-powered orchestrator that maintains proper guardrails and permissions while making Kubernetes operations conversational and intuitive.

### Live Demonstrations & Key Features
Watch real-time troubleshooting as we diagnose a pod restart issue caused by missing sidecar files, identify and fix Redis deployment memory configuration problems, and demonstrate CPU usage analysis with Prometheus integration. See how the agent maintains security through user-scoped access controls while providing powerful debugging capabilities.

### Technical Deep Dive
Explore the architecture behind Facets' Kubernetes Agent and how it orchestrates AI agents with secure infrastructure access. Learn about multi-tool integration supporting kubectl, Helm, and pod exec operations, plus natural language debugging that works with your existing permissions and kubeconfig setup.

### Audience Q&A Highlights
Get answers to key questions about historical log analysis capabilities, chat history persistence and session management, integration possibilities with tools like Cursor and MCP, and comparisons with existing tools like ChatGPT and K9s. Plus, discover future plans for custom tool integration and blueprint generation.

### Perfect For
DevOps Engineers looking to streamline Kubernetes troubleshooting workflows, Platform Engineers interested in AI-powered infrastructure management, Site Reliability Engineers seeking efficient debugging solutions, and Development Teams wanting to reduce time spent on cluster-related issues.

AI Security Reality Check
Nathan Hamiel, Head of Research at Kudelski Security, joins Rohit Raveendran for an essential reality check on AI security in DevOps environments. This candid conversation cuts through the hype to address real-world threats, vulnerabilities, and practical defense strategies that every team integrating AI into their infrastructure should understand.

### Real-World AI Security Threats
Explore the actual security landscape facing organizations adopting AI, from model poisoning and prompt injection attacks to data exfiltration risks. Nathan shares insights from Kudelski Security's research into emerging threat vectors and how attackers are targeting AI-powered systems in production environments.

### DevOps-Specific Vulnerabilities
Understand the unique security challenges that arise when AI meets DevOps workflows, including supply chain risks, model integrity issues, and the security implications of AI-generated infrastructure code. Learn how traditional security practices need to evolve for AI-augmented development pipelines.

### Practical Defense Strategies
Get actionable guidance on implementing robust security measures for AI in DevOps, including model validation techniques, secure prompt engineering practices, and monitoring strategies for AI-powered infrastructure operations. Discover how to balance innovation with security requirements.

### Industry Insights and Trends
Benefit from Nathan's perspective on the evolving threat landscape, emerging security standards for AI systems, and what organizations should prioritize when building security into their AI-driven DevOps practices.

### Key Takeaways for Teams
Learn how to assess AI security risks in your current environment, implement baseline security controls for AI systems, and build a security-first culture around AI adoption without stifling innovation.

Essential listening for security professionals, DevOps engineers, platform teams, and anyone responsible for safely integrating AI into production infrastructure and development workflows.
Related Articles
AI DevOps Reality: Field Report from the Enterprise Trenches
Understand the real-world impact of AI in DevOps with AWS Senior Container Specialist Sanjeev Ganjihal.
Unifying Your Toolchain: Introducing the Facets Orchestration Platform
Learn how an infrastructure orchestrator turns platform engineers into enablers and gives developers true self-service.
When AI Writes Code, Who Writes the Guardrails: Addressing AI Security Risks
Learn about the security risks when building AI-powered products, including prompt injection, common vulnerabilities, and architectural pitfalls.
Customer Stories
MPL Spinoff—GGX Transforms DevOps with Facets' Platform Engineering Solution
Good Game Exchange (GGX) achieved a 3X boost in Ops efficiency, expedited its cloud migration to GCP by 75%, and streamlined developer workflows by leveraging Facets' platform engineering approach.
8X Deployment Frequency with Zero-Downtime Cloud Migration
How MPL's SRE team achieved zero-downtime AWS to GCP migration, 8X faster releases, and self-service for 150+ developers using Facets