James Boyer typed a full prompt into his terminal before realizing Claude wasn't even running. 🤦♂️ That very relatable moment, and what it means about how fast this shift is rewiring us all, opens his latest guest post on Dev Interrupted. Yes, James is back again after last year's piece on coding vibes in the efficiency era hit hard, and this one hits even harder. What we respect most about James is how he shows up to this stuff. He didn't adopt AI because someone told him to. He went looking for reasons to dismiss it. Tested the edges. Poked at the seams. Couldn't find enough of them. That willingness to sit with something uncomfortable and let the evidence change your mind is the posture that matters most right now. Most people either evangelize from day one or dig in and wait it out. James did the harder thing: he experimented without assumptions, let the results accumulate, and then found the words to share it. Read the full story in this week's newsletter. 👇
LinearB
Software Development
Los Angeles, California · 13,662 followers
The AI Productivity Platform for Engineering Leaders
About us
LinearB is the AI Productivity Platform for Engineering Leaders. As AI accelerates code creation, DevEx and Platform teams must manage the downstream impact—review, testing, and release. LinearB provides real-time visibility and developer-first automation to help teams ship faster and improve developer experience. Learn more at https://linearb.io
- Website
- https://www.linearb.io
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- Los Angeles, California
- Type
- Privately held
- Founded
- 2018
- Specialties
Products
LinearB
Application Lifecycle Management (ALM) Software
The AI Productivity Platform for Engineering Leaders

AI has changed how software is built. Developers generate more code faster than ever—but delivery pipelines haven't kept pace. Bottlenecks in code review, testing, and release are now more exposed. LinearB helps DevEx, Platform, and AI enablement teams manage AI's downstream impact. We provide visibility, governance, and automation to ship AI-driven code at scale—without sacrificing quality or developer experience.

Our Platform:
- AI Code Reviews: Catch security risks, bugs, and compliance issues before merge—enforcing standards without slowing teams.
- Workflow Automation: Policy-driven automations eliminate manual PR routing, approvals, and bottlenecks.
- AI Impact Measurement: Track how AI tools affect velocity, quality, and team health with tool-agnostic metrics.
- Developer Experience: Identify friction with metrics, surveys, and AI-powered insights to optimize team performance.
Locations
- Primary: Los Angeles, California, US
- Tel Aviv, Tel Aviv 6023201, IL
Employees at LinearB
Updates
-
We gave AI agents root access to our machines and then pointed them at an internet that was never built for them. Matt Boyle, Head of Product, Design and Engineering at Ona (formerly Gitpod), makes the case that cloud development environments aren't just developer tooling anymore, they're kernel-level security boundaries that let autonomous systems do real work without letting them curl their way past your compliance controls. We go deep on where agent autonomy breaks, what compliance-heavy orgs actually need before they can trust it, and how the developer role is mutating faster than ever. The emerging pattern is the same everywhere: autonomy without containment doesn't scale. Full newsletter + episode inside, along with this week's tech news scoop:
- OpenAI is shutting down Sora to focus on enterprise
- Anthropic's new Auto mode for Claude Code
- Why faster coding doesn't always mean faster delivery
- Philip Su's POST model for software leaders
-
Your laptop wasn't designed for what's coming. For decades, development assumed one person, one machine, one thread of work. That model held because humans drove the loop. Agents break that assumption. In this week's expert article on Dev Interrupted, past guest and Warp CEO Zach Lloyd makes the case that as coding agents become persistent, long-running collaborators (read: not occasional helpers), the walls start to close in on personal machines: compute, security, observability, and coordination all strain under multi-agent workloads. The center of gravity for execution is shifting, and the toolchain hasn't caught up yet. Read the full piece for a glimpse of the near future, where the laptop becomes the control surface, not the execution environment.
-
AI-assisted PRs are 2.5x larger than human code and wait over 5x longer for review. Adoption is universal, but impact? That's another story. In the latest Dev Interrupted episode, Dan Lines and Ben Lloyd Pearson dig into our 2026 Engineering Benchmarks Report, the most comprehensive analysis yet of how AI is fundamentally reshaping software delivery across 8.1 million pull requests and 4,800 engineering teams. For engineering leaders, this isn't about whether to adopt AI, it's about building the foundations to make AI successful. Clear policies, reliable data quality, and context engineering are becoming the differentiators between teams that see real productivity gains and those stuck measuring the wrong metrics. That's because, as Dan and Ben discuss, AI amplifies both the good and the bad. Without the right organizational readiness, universal adoption means universal amplification of existing problems. Listen to the full discussion and check out the news round-up inside. 👇
-
Most metrics from AI tools tell you adoption happened, but can they tell you whether AI is helping you ship faster? Whether code quality held up? Whether throughput improved, or the bottleneck just shifted to review? We help you track AI activity across 50+ tools and correlate it directly to commits, PRs, and delivery outcomes, so you can measure how much code is AI-assisted and which teams are seeing real delivery impact. That clarity is how you close the AI productivity gap in engineering. To guide you, we're also introducing the APEX framework, built on four pillars: AI leverage, predictability, efficiency, and developer experience. It gives you an operating model for scaling AI productivity without sacrificing delivery confidence or burning out your engineering team. Read the full announcement: https://lnkd.in/gCy57SuW
-
You’re scaling AI across your engineering org. Throughput is up, but is delivery still predictable? Is code actually shipping, or just piling up in review? Are developers satisfied? The APEX framework can help you answer these questions by measuring AI productivity across four pillars:
- AI leverage: Is AI adoption actually increasing throughput, or just shifting where work happens?
- Predictability: Are your teams still delivering on commitments, or has speed introduced instability?
- Efficiency: Where are the bottlenecks between code written and code shipped?
- Developer experience: Are productivity gains stable, or are they coming at a human cost?
APEX connects the metrics that matter to help you measure and optimize AI engineering productivity. It establishes a recommended cadence that takes you beyond reporting, so you can course-correct as you scale. If you can prove AI improves throughput without eroding delivery confidence or developer health, you can scale faster and invest with confidence. Explore APEX for guidance on how to measure AI impact: https://lnkd.in/gZGtMae8
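To make the four pillars concrete, here is a minimal sketch of what one indicator per pillar might look like over pull-request data. The record fields, function name, and toy numbers are all hypothetical illustrations, not LinearB's actual schema or metrics.

```python
from statistics import mean, pstdev

# Hypothetical PR records; field names are illustrative only.
prs = [
    {"ai_assisted": True,  "cycle_time_h": 30, "review_wait_h": 20},
    {"ai_assisted": True,  "cycle_time_h": 42, "review_wait_h": 26},
    {"ai_assisted": False, "cycle_time_h": 24, "review_wait_h": 5},
    {"ai_assisted": False, "cycle_time_h": 20, "review_wait_h": 4},
]

def apex_snapshot(prs, survey_score):
    """Compute one toy indicator per APEX pillar."""
    ai = [p for p in prs if p["ai_assisted"]]
    cycle_times = [p["cycle_time_h"] for p in prs]
    return {
        # AI leverage: share of delivered work that is AI-assisted
        "ai_leverage": len(ai) / len(prs),
        # Predictability: spread of cycle times (lower = steadier delivery)
        "predictability_stdev_h": pstdev(cycle_times),
        # Efficiency: average time PRs sit waiting for review
        "avg_review_wait_h": mean(p["review_wait_h"] for p in prs),
        # Developer experience: survey signal (1-5 scale here)
        "dev_experience": survey_score,
    }

snapshot = apex_snapshot(prs, survey_score=3.8)
print(snapshot["ai_leverage"])       # 0.5
print(snapshot["avg_review_wait_h"]) # 13.75
```

Even in this toy form, the point of the framework shows through: half the PRs being AI-assisted means little on its own until it is read alongside review wait and cycle-time stability.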
-
That XKCD comic about critical infrastructure depending on "some random person in Nebraska"? Soon it'll be some agent on that person's laptop. In our latest Dev Interrupted episode, Chainguard CEO Dan Lorenc explains how autonomous agents are pushing deployment speeds to the absolute limit, but our security infrastructure and the economics of open source aren't ready for the consequences. First, Dan highlights strategies for teams to move from guardrails to "guide rails" with their agentic deploy speed. The teams extracting massive value are building rock-solid deployment pipelines that turn restrictive barriers into frictionless pathways. But this conversation isn't just about velocity. The software supply chain itself is fracturing, and we dig into why and what's next. Open source maintainers are drowning in AI-generated noise while attackers get decades ahead on the exponential curve. Some projects will ban AI contributions entirely. Others will embrace full agent maintenance with tools like EmeritOSS. Listen to the full conversation inside 🎧 Also featured in this week's news roundup:
- Tokens as compensation ( 🙀 ????)
- Harness engineering playbooks
- Meta's bet on AI agent social networks
-
International Women’s Day is a reminder to celebrate the women who lead, build, support, and inspire across every team and industry. We're grateful to work alongside so many thoughtful, driven, and talented women at LinearB. While the day is about recognition, it’s also about continuing the work, creating spaces where women have equal opportunities to grow, lead, and shape the future. A big thank you to the mentors, colleagues, and friends who uplift others and open doors for the next generation. Here’s to progress, partnership, and supporting one another every step of the way. #InternationalWomensDay #IWD2026 #WomenInTech #WomenSupportingWomen
-
The problem with AI in engineering isn't the tools. It's that every team is building in single-player mode when they need to go multiplayer. James Everingham discovered this firsthand at Meta, where he led Dev Infra for 40,000 engineers. Instead of mandating AI tool usage from the top down, his team put impossible challenges to the organization: eliminate code freezes entirely, create self-healing infrastructure, build conversational onboarding agents. The learnings eventually led James to found Guild.ai, where he is now building the AI control plane for engineering teams. James calls 2026 "the year of the agent" because smart engineering leaders are already building the infrastructure to safely scale collaborative agent workflows before their teams hit the wall. Are you one of them? Also inside:
- OpenClaw becomes one of the most starred projects on GitHub
- Steve Yegge explores how to federate Gas Towns
- Perplexity AI gives us Perplexity Computer
- Geoffrey Huntley's math on the cost of software development in 2026
- Scott Werner's artifact for exploring an agentic future that is rapidly arriving
Listen to the full conversation in the newsletter 👇
-
700 engineers, zero bureaucracy, one month: how monday.com hit AI escape velocity. The result? 40,000 apps created and a 33-year technical debt problem solved in five months. Behind those numbers was a foundational insight: you can't build transformational AI on shaky infrastructure. While most engineering teams are rushing to slap AI features onto existing products, monday.com VP of R&D Sergei Liakhovetsky took a different approach. Check out the full interview in the newsletter below to scoop monday.com's blueprint on building for both humans and machines simultaneously.