Office Hours #12

Intro to Facets Intelligence!

Discover Facets Intelligence: AI-powered Terraform module generation that uses the Model Context Protocol (MCP) for context-aware infrastructure management

June 12, 2025
38 mins

Topics Covered

AI, Contracts, Orchestration, Terraform, Platform Engineering

Office Hours Summary

Get your first look at Facets Intelligence, our revolutionary AI-powered platform that transforms how teams approach infrastructure management. This episode provides a comprehensive walkthrough of generating production-ready Terraform modules with built-in compliance and context-awareness using the Model Context Protocol (MCP).

AI-Powered Infrastructure Generation

Watch live demonstrations of how Facets Intelligence understands your infrastructure context, automatically generates compliant Terraform modules, and integrates seamlessly with your existing workflows. See how AI can accelerate infrastructure provisioning while maintaining security and governance standards.
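To make the idea concrete, here is a minimal sketch of what a compliance-aware module generator might emit: a Terraform resource block that always carries organization-mandated tags. The function name, the S3 example, and the tagging policy are illustrative assumptions for this post, not Facets' actual implementation.

```python
def render_s3_module(name, tags):
    """Render a minimal Terraform resource block with mandated tags.

    A real generator would pull `tags` from organizational policy so that
    every generated resource is compliant by construction.
    """
    lines = [f'resource "aws_s3_bucket" "{name}" {{']
    lines.append(f'  bucket = "{name}"')
    lines.append('  tags = {')
    for key, value in sorted(tags.items()):
        lines.append(f'    {key} = "{value}"')
    lines.append('  }')
    lines.append('}')
    return "\n".join(lines)

# Hypothetical usage: policy tags are injected, not chosen by the developer.
hcl = render_s3_module("audit-logs", {"owner": "platform", "env": "prod"})
print(hcl)
```

The point of the sketch is the shape of the guardrail: developers ask for a resource, and the generator, not the developer, decides the compliance-relevant fields.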

Model Context Protocol Integration

Explore how MCP enables deep infrastructure awareness, allowing AI to understand your organization's specific requirements, compliance needs, and architectural patterns. Learn how this context-awareness ensures generated infrastructure follows your established best practices and security policies.
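For context on the protocol itself: MCP clients and servers exchange JSON-RPC 2.0 messages, and a tool invocation uses the `tools/call` method with a tool name and arguments. The sketch below builds such a request; the tool name `get_org_policies` and its arguments are hypothetical examples, not a real Facets tool.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical: the AI asks an MCP server for the org's compliance policies
# before generating any infrastructure code.
request = mcp_tool_call(1, "get_org_policies", {"environment": "production"})
print(json.dumps(request, indent=2))
```

This request/response loop is what gives the AI its context-awareness: organizational requirements are fetched through well-defined tools rather than baked into the prompt.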

Live Demo Highlights

Experience real-time infrastructure generation as we create production-ready modules, demonstrate compliance validation, and show how the AI adapts to different organizational contexts and requirements. See how teams can reduce infrastructure provisioning time from hours to minutes.

Key Benefits & Use Cases

Discover how Facets Intelligence enables faster time-to-market for new services, reduces infrastructure drift and inconsistencies, ensures compliance across all generated resources, and empowers developers to self-serve infrastructure while maintaining guardrails.

Perfect For

Platform Engineers building self-service infrastructure capabilities, DevOps teams looking to accelerate provisioning workflows, compliance teams needing consistent governance, and organizations scaling their infrastructure operations with AI assistance.

What You'll Learn

• In-depth insights from industry experts

• Practical strategies you can implement today

• Real-world examples and case studies

• Interactive Q&A and community discussion


Hosts

Rohit Raveendran
Co-Founder & VP Engineering, Facets

Adi Unni
Product Manager, Facets

Related Content

More Live Content

AI meets MLOps - Making sense of the mess
Podcast

In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps, leading to extreme pipeline fragmentation.

Standardization is Key

Discover why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure. Learn how standardization is bringing order to the fragmented MLOps landscape.

Open Source Challenges

Understand the critical challenges maintainers face when receiving large amounts of untested, verbose, AI-generated code. Görkem shares insights on the impact of AI-generated pull requests on open-source projects.

LLM Economics and Strategy

Explore why running small, fine-tuned LLMs in-house can be cheaper and provide more predictable, consistent results than generic large providers. Get practical insights on when to build versus buy.

KitOps Solution

Learn how KitOps creates an abstraction that allows data scientists to focus on training while leveraging existing DevOps platforms for deployment. Discover how ModelKits are simplifying the AI/ML deployment pipeline.

Essential listening for platform engineers, DevOps practitioners, MLOps engineers, and anyone working at the intersection of AI and infrastructure. Tune in to understand the standardization movement reshaping the future of AI development.

Nov 11, 2025
71 mins
AI x DevOps with Sanjeev Ganjihal - AWS Solutions Architect
Podcast

Join Rohit Raveendran as he sits down with Sanjeev Ganjihal, Senior Container Specialist at AWS and one of the first 100 Kubernetes-certified professionals globally. This deep-dive conversation explores the transformative shift from traditional DevOps to AI-powered operations and what it means for the future of infrastructure management.

Evolution of DevOps and SRE

Explore Sanjeev's journey from being an early Kubernetes adopter in 2017 to becoming a specialist in AI/ML operations at AWS. Discover how the industry has evolved from manual operations to automated, intelligent infrastructure management and what this means for traditional SRE roles.

Multi-LLM Strategies in Practice

Get insider insights into Sanjeev's personal AI development toolkit, including how he uses Claude, Q Developer, and local models for different tasks. Learn practical multi-LLM routing strategies, code-review workflows, and how to choose the right AI tool for specific infrastructure challenges.

Kubernetes Meets AI Infrastructure

Understand the unique challenges of running AI workloads on Kubernetes, from GPU resource management to model serving at scale. Sanjeev shares real-world experiences from supporting financial services customers and the patterns that work for high-performance computing environments.

The Future of AIOps

Dive into discussions about Model Context Protocol (MCP), autonomous agents, and the concept of "agentic AI" that will define 2025. Learn how these technologies are reshaping the relationship between humans and infrastructure, with the memorable analogy of "you are Krishna steering the chariot."

Security and Best Practices

Explore critical security considerations when implementing AI in DevOps workflows, including safe practices for model deployment, data handling, and maintaining compliance in enterprise environments.

Perfect for DevOps engineers, SREs, platform engineers, and technical leaders navigating the intersection of AI and infrastructure operations.

Sep 8, 2025
1 h 6 mins
Kubernetes Agent for Natural Language Debugging
Office Hours

Discover how Facets' new Kubernetes Agent revolutionizes cluster management by enabling natural language debugging and secure troubleshooting. This episode showcases our AI-powered orchestrator that maintains proper guardrails and permissions while making Kubernetes operations conversational and intuitive.

Live Demonstrations & Key Features

Watch real-time troubleshooting as we diagnose a pod restart issue caused by missing sidecar files, identify and fix Redis deployment memory configuration problems, and demonstrate CPU usage analysis with Prometheus integration. See how the agent maintains security through user-scoped access controls while providing powerful debugging capabilities.

Technical Deep Dive

Explore the architecture behind Facets' Kubernetes Agent and how it orchestrates AI agents with secure infrastructure access. Learn about multi-tool integration supporting kubectl, Helm, and pod exec operations, plus natural language debugging that works with your existing permissions and kubeconfig setup.

Audience Q&A Highlights

Get answers to key questions about historical log analysis capabilities, chat history persistence and session management, integration possibilities with tools like Cursor and MCP, and comparisons with existing tools like ChatGPT and K9s. Plus, discover future plans for custom tool integration and blueprint generation.

Perfect For

DevOps Engineers looking to streamline Kubernetes troubleshooting workflows, Platform Engineers interested in AI-powered infrastructure management, Site Reliability Engineers seeking efficient debugging solutions, and Development Teams wanting to reduce time spent on cluster-related issues.

Aug 7, 2025
13 mins 48 sec
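The guardrailed natural-language-to-kubectl mapping described in the Kubernetes Agent episode can be pictured as an allowlist lookup: recognized debugging intents map to read-only commands, and anything unrecognized is refused. The intents and commands below are illustrative assumptions for this sketch, not the agent's real implementation.

```python
def kubectl_for_intent(intent, namespace):
    """Map a recognized debugging intent to a read-only kubectl argv.

    Unknown intents are rejected rather than guessed, which is the
    simplest form of guardrail for an AI-driven operator.
    """
    allowed = {
        "list_pods": ["kubectl", "get", "pods", "-n", namespace],
        "recent_events": ["kubectl", "get", "events", "-n", namespace,
                          "--sort-by=.lastTimestamp"],
        "top_cpu": ["kubectl", "top", "pods", "-n", namespace],
    }
    if intent not in allowed:
        raise ValueError(f"intent {intent!r} is not permitted")
    return allowed[intent]

# Hypothetical usage: "why is CPU high in prod?" resolves to the top_cpu intent.
print(kubectl_for_intent("top_cpu", "prod"))
```

Because the commands run under the user's own kubeconfig and RBAC permissions, the agent can never do more than the person asking the question could do themselves.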