
About us

Guild turns agents into shared production infrastructure, with a managed software center for trusted agent capabilities and an agent hub for discovering and sharing agents.

For Enterprises: AI, Trusted in Production
Autonomous software requires the same guardrails as any production system. Guild enforces centralized identity, least-privilege access, and immutable audit logging so enterprise governance extends to AI agents. Agents can act on code, tickets, and operational workflows without bypassing identity controls or becoming a black box.

For Developers: AI, Built Like Real Software
Guild gives developers the primitives they expect: typed interfaces, versioned releases, safe execution boundaries, and full execution traces, so agents behave like systems, not scripts. The Agent Hub is a public, GitHub-like platform for broad discovery and reuse of agents, letting developers build agents like real software and ship them as products.

One Platform. Any Model
Universal by design: Guild is neutral toward models, vendors, and frameworks, doesn't lock governance into a single stack, and works with Anthropic, OpenAI, Google, and open-source models. Companies can run agents via chat, APIs, webhooks, and schedules, and publish trusted capabilities to version, reuse, and improve, so teams don't start from zero. Access can be controlled centrally, and usage tracked by workspace, user, agent, and trigger.

Website
https://www.guild.ai
Industry
Software Development
Company size
2-10 employees
Headquarters
San Francisco
Type
Privately Held
Founded
2025
Specialties
AI, Developers, Engineering, Technology, and Enterprise

Updates

  • View organization page for Guild.ai

    "Charged but no order." Then another. Then another. 23 customers. Zero fulfillment. Revenue is bleeding and nobody knows why. Your on-call engineer would open Zendesk, then Jira, then New Relic, then GitHub. Context switch between four tools. Ping three people on Slack. Spend 45 minutes piecing together what happened. A Guild agent does it in 52 seconds. Jira, New Relic, GitHub. Root cause, blast radius, recovery plan. Delivered to the on-call engineer before they've opened their laptop. 8 agents. 4 integrations. One control plane. We're opening up a small number of design partner spots. If your team deals with incidents like this, we should talk. https://www.guild.ai/

  • Guild.ai reposted this

    Excited to share that I've joined Guild.ai. Everyone is building agents. Nobody's managing them. It's early days, but I think the team here has built something special to solve that. If you're thinking about how to take the leap and deploy agents somewhere other than your laptop, I'd love to chat. More soon. — While you're here, a lot of great people were recently laid off from Monte Carlo. If you're hiring or were one of those affected, please reach out. I'd love to connect you!

  • View organization page for Guild.ai

    Every engineering team is building agents. Nobody's managing them. Agents scattered across repos. Running on personal API keys. Accessing production systems with zero oversight. No versioning. No rollback. No audit trail. It's the early days of cloud computing all over again, and we know how that ends without a control plane. Build anywhere. Run on Guild. guild.ai

  • Thanks Theory Ventures for the great conversation 🩶 Looking forward to the next one, and if you're figuring out how to govern agents at scale, we should talk. 

    View organization page for Theory Ventures

    Can you scale from a handful of agents to hundreds? James Everingham, CEO of Guild.ai and former VP of Engineering at Meta Dev Infra, told us how they did it in our recent Office Hours. Their internal agent platform drives >50% of Meta's code! This conversation touched on some really interesting points about tool adoption, specialized agents, and governance across thousands of agents. We also heard about the “agentic intranet”, and why the enterprise software stack will soon resemble it. Watch:

  • AI agents are starting to operate inside production systems. And no one fully understands the systems those agents are touching.

    The Next Outage Won’t Be a Bug. It’ll Be an Agent. Just Ask Amazon.

    This week, reports claimed that an AI coding agent at Amazon tried to fix a configuration issue by deleting and recreating the environment it was operating in. The result was a major outage. Regardless of the details of this incident, what matters is what is happening in companies large and small: AI agents are starting to operate directly inside our production systems, and they can generate and execute changes far faster than humans can understand the systems those changes affect.

    For the last 50 years, software development relied on a simple assumption: the people approving changes understood the systems those changes affected. That assumption is breaking. Modern systems are already too complex for any one engineer to fully understand. Thousands of services, layers of infrastructure, complex dependencies. Now we are adding autonomous agents that can generate, refactor, and ship changes across those systems, and increasingly access and operate the tools and applications that run our infrastructure.

    And this is not just happening in code. Agents are expanding into workflows across the entire company: infrastructure, configuration, deployment systems, data pipelines, security policies, operational tooling. Over time, agents will touch almost every part of your production environment.

    For most of software's history, the hard part was writing code. Increasingly, the hard part is understanding the system, and what a change will actually do to it.

    This is exactly why we started building Guild. When humans or agents interact with complex systems, you need to understand the system those actions are touching, and you need control and guardrails around what those agents are actually allowed to do. You need to understand:
    - how the system actually works
    - what other services and systems depend on it
    - what behavior an action might trigger across the system
    - what infrastructure, workflows, or data it touches
    - what access and security boundaries it crosses
    - and what risk that interaction could introduce into production

    Guild builds a continuously evolving understanding of the system: its architecture, dependencies, infrastructure, behavior, and history. That understanding becomes the foundation for control and governance. So when an agent submits a change, touches infrastructure, or attempts to access a system or dataset, you can evaluate the impact, enforce guardrails, and decide what should actually be allowed to happen.

  • Guild.ai 🤝 LinearB Thanks for the great chat ❤️

    View organization page for LinearB

    The problem with AI in engineering isn't the tools. It's that every team is building in single-player mode when they need to go multiplayer. James Everingham discovered this firsthand at Meta, where he led Dev Infra for 40,000 engineers. Instead of mandating AI tool usage from the top down, his team put impossible challenges to the organization: eliminate code freezes entirely, create self-healing infrastructure, build conversational onboarding agents. Those learnings eventually led James to found Guild.ai, where he is now building the AI control plane for engineering teams. James calls 2026 "the year of the agent" because smart engineering leaders are already building the infrastructure to safely scale collaborative agent workflows before their teams hit the wall. Are you one of them? Also inside:
    - OpenClaw becomes one of the most starred projects on GitHub
    - Steve Yegge explores how to federate Gas Towns
    - Perplexity AI gives us Perplexity Computer
    - Geoffrey Huntley's math on the cost of software development in 2026
    - Scott Werner's artifact for exploring an agentic future that is rapidly arriving
    Listen to the full conversation in the newsletter 👇

  • Speed is killing AI startups. Not because they're moving too fast, but because they're not building the systems to sustain what they ship. James Everingham breaks down why the next real layer of AI infrastructure isn't another model. It's governance, auditability, and the systems that make agents safe enough to trust inside an enterprise. Full episode: https://lnkd.in/ePhdRMgi

