Master Safe & Scalable Deployments
Master canary releases, GitOps automation, and zero-touch AKS upgrades to implement safe, scalable deployment strategies
Topics Covered
Office Hours Summary
Canary Releases, GitOps Automation & Zero-Touch AKS Upgrades
What You'll Learn
• In-depth insights from industry experts
• Practical strategies you can implement today
• Real-world examples and case studies
• Interactive Q&A and community discussion
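As a taste of the first topic above, here is a minimal sketch of what a declarative canary release can look like. It uses Argo Rollouts, one common open-source tool for progressive delivery on Kubernetes; the session itself may demonstrate a different mechanism, and all names and images below are hypothetical.

```yaml
# Sketch of a canary strategy with Argo Rollouts (argoproj.io).
# Traffic shifts to the new version in steps, with pauses to observe metrics.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-app                # hypothetical application name
spec:
  replicas: 5
  strategy:
    canary:
      steps:
        - setWeight: 20         # route ~20% of traffic to the new version
        - pause: {duration: 5m} # hold and watch dashboards/alerts
        - setWeight: 50
        - pause: {duration: 5m}
        - setWeight: 100        # full rollout once the canary looks healthy
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: ghcr.io/example/demo-app:v2  # hypothetical image tag
          ports:
            - containerPort: 8080
```

In a GitOps workflow, a manifest like this lives in a Git repository, and a controller reconciles the cluster against it, so the canary progression itself is version-controlled and auditable.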
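The "zero-touch" part of AKS upgrades typically means pinning the cluster to an auto-upgrade channel so Azure applies Kubernetes and node-image updates for you. A minimal Terraform sketch, assuming the azurerm provider (names are hypothetical; the attribute is `automatic_upgrade_channel` in azurerm 4.x and was `automatic_channel_upgrade` in 3.x):

```hcl
# Sketch: enabling automatic ("zero-touch") upgrades on an AKS cluster.
# Channel values per Azure docs: "patch", "stable", "rapid", "node-image", "none".
resource "azurerm_kubernetes_cluster" "demo" {
  name                = "demo-aks"   # hypothetical
  location            = "eastus"
  resource_group_name = "demo-rg"    # hypothetical
  dns_prefix          = "demo-aks"

  automatic_upgrade_channel = "stable"    # control-plane/Kubernetes version upgrades
  node_os_upgrade_channel   = "NodeImage" # node OS image patching

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

Pairing an upgrade channel with a planned maintenance window keeps upgrades automatic while constraining when they can occur.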
Hosts

Vishnu K.V

Ishaan Kalra

Adi Unni
Related Content
More Live Content
AI x DevOps with Sanjeev Ganjihal - AWS Solutions Architect
Join Rohit Raveendran as he sits down with Sanjeev Ganjihal, Senior Container Specialist at AWS and one of the first 100 Kubernetes-certified professionals globally. This deep-dive conversation explores the transformative shift from traditional DevOps to AI-powered operations and what it means for the future of infrastructure management.

### Evolution of DevOps and SRE

Explore Sanjeev's unique journey from being an early Kubernetes adopter in 2017 to becoming a specialist in AI/ML operations at AWS. Discover how the industry has evolved from manual operations to automated, intelligent infrastructure management and what this means for traditional SRE roles.

### Multi-LLM Strategies in Practice

Get insider insights into Sanjeev's personal AI development toolkit, including how he uses Claude, Q Developer, and local models for different tasks. Learn practical multi-LLM routing strategies, code review workflows, and how to choose the right AI tool for specific infrastructure challenges.

### Kubernetes Meets AI Infrastructure

Understand the unique challenges of running AI workloads on Kubernetes, from GPU resource management to model serving at scale. Sanjeev shares real-world experiences from supporting financial services customers and the patterns that work for high-performance computing environments.

### The Future of AIOps

Dive into discussions about Model Context Protocol (MCP), autonomous agents, and the concept of "agentic AI" that will define 2025. Learn how these technologies are reshaping the relationship between humans and infrastructure, with the memorable analogy of "you are Krishna steering the chariot."

### Security and Best Practices

Explore critical security considerations when implementing AI in DevOps workflows, including safe practices for model deployment, data handling, and maintaining compliance in enterprise environments.
Perfect for DevOps engineers, SREs, platform engineers, and technical leaders navigating the intersection of AI and infrastructure operations.

Intro to Facets Intelligence!
Get your first look at Facets Intelligence, our revolutionary AI-powered platform that transforms how teams approach infrastructure management. This episode provides a comprehensive walkthrough of generating production-ready Terraform modules with built-in compliance and context-awareness using the Model Context Protocol (MCP).

### AI-Powered Infrastructure Generation

Watch live demonstrations of how Facets Intelligence understands your infrastructure context, automatically generates compliant Terraform modules, and integrates seamlessly with your existing workflows. See how AI can accelerate infrastructure provisioning while maintaining security and governance standards.

### Model Context Protocol Integration

Explore how MCP enables deep infrastructure awareness, allowing AI to understand your organization's specific requirements, compliance needs, and architectural patterns. Learn how this context-awareness ensures generated infrastructure follows your established best practices and security policies.

### Live Demo Highlights

Experience real-time infrastructure generation as we create production-ready modules, demonstrate compliance validation, and show how the AI adapts to different organizational contexts and requirements. See how teams can reduce infrastructure provisioning time from hours to minutes.

### Key Benefits & Use Cases

Discover how Facets Intelligence enables faster time-to-market for new services, reduces infrastructure drift and inconsistencies, ensures compliance across all generated resources, and empowers developers to self-serve infrastructure while maintaining guardrails.

### Perfect For

Platform engineers building self-service infrastructure capabilities, DevOps teams looking to accelerate provisioning workflows, compliance teams needing consistent governance, and organizations scaling their infrastructure operations with AI assistance.

Upcoming AI Features + New tools for Platform teams?
New modules, CLI tools, and platform innovations that work for you
Related Articles
AI DevOps Reality: Field Report from the Enterprise Trenches
Understand the real-world impact of AI in DevOps with AWS Senior Container Specialist, Sanjeev Ganjihal
Rethinking Kubernetes Fleet Management: From Central Control to Platform Engineering
Managing multiple Kubernetes clusters? Learn how platform engineering approaches solve fleet management challenges by balancing team autonomy with standardized automation.
The DevOps AI Balancing Act: Insights from the Trenches
From fragile YAMLs to LLM-powered workflows. ChatOps risks, Terraform with CDK, infra guardrails, shifting DevOps roles, and the future of AI-native platforms.