The best way to think about AI agents right now: they're only as useful as the tools they can reach. An agent with shell access but no way to talk to your infrastructure is just a fancy autocomplete. We just shipped a rebuilt Courier CLI that gives developers and AI agents full access to Courier from the terminal. Send messages across any channel, manage users and routing, check delivery status, the entire API. Whether you're adding notifications to a new app or managing an existing integration, everything is faster now. Your agent can handle Courier the same way it runs any other terminal command. Works with Cursor, Claude Code, or anywhere you have a shell. npm install -g @trycourier/cli, set your API key, go. Big shoutout to the folks at Stainless for making the launch of this new and improved CLI a piece of cake. Full walkthrough on the blog: https://lnkd.in/ejKne8fA
-
Agent ecosystem quote of the week: "An agent with shell access but no way to talk to your infrastructure is just a fancy autocomplete." Check out our friends at Courier, who just launched a slick new CLI for omnichannel customer messaging and engagement: npm install -g @trycourier/cli
-
⚙️ CraftBot V1.2.1 update: better multitasking, local LLM setup/support, and better error handling. Try it out now at: https://lnkd.in/gnVf_k6b
🌟 Here are the highlights:
➽ Added local LLM support.
➽ Added DeepSeek as an LLM provider.
➽ Added more external app integrations.
➽ Improved run_python action speed.
➽ Added task-cancel support for tasks waiting on a response.
➽ Added direct-reply support for agent messages, running tasks, and waiting tasks.
➽ Direct session routing now bypasses the LLM when replying to active tasks.
➽ Fixed a scheduled-task race condition that caused tasks not to trigger.
➽ Fixed message routing in multi-task scenarios.
➽ Improved memory handling by moving user messages into trigger payloads and using recent conversation context for memory queries.
➽ Replaced string-parsing logic with payload-based message passing.
➽ OAuth handlers now use non-blocking async calls.
➽ Closing a modal now cancels in-progress OAuth and frees the port immediately.
➽ Removed agent_logs.txt logging to reduce lag and long-run action delay.
➽ Added dynamic loading and unloading for chat.
➽ Fixed the status bar not updating correctly.
➽ Fixed chat-panel scrolling behavior.
➽ Fixed the MCPs and Skills card usage display in the dashboard.
➽ Fixed the MCP tool-call chunk-limit issue.
➽ Large attachments are now rejected before upload with a clear error.
➽ Added a starting system message for new chat sessions.
➽ Improved soft-onboarding interview prompts.
➽ Added timeout toast notifications.
➽ Included minor UI, prompt, and toggle-button fixes.
➽ Updated and optimized prompts.
-
Unpopular opinion: most developers don't need more AI tools. They need to learn how to USE the ones they already have.
Case in point: Claude Code. 99% of devs use it like a fancy autocomplete. The 1% who actually read the docs are doing things like:
✅ CLAUDE.md files that give Claude persistent memory across sessions
✅ Custom Skills (.md files) that auto-activate based on what you're working on
✅ Hooks that run security checks BEFORE Claude touches your files
✅ Permissions that block Bash:sudo entirely
✅ Agents scoped to specific parts of the codebase
This isn't prompt engineering. This is software architecture for AI.
The daily workflow that actually works:
→ cd project && claude
→ Plan Mode first (Shift + Tab x2)
→ Describe intent, not implementation
→ Auto Accept once you trust it
→ Start a new session per feature
Stop treating Claude Code like ChatGPT with a terminal. Start treating it like a junior dev you're onboarding properly.
Agree or disagree? 👇
#ClaudeCode #AIAgents #SoftwareEngineering #DeveloperTools #Anthropic
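The permission blocking mentioned above is set per project. Here is a minimal sketch of a `.claude/settings.json` deny rule, based on Claude Code's documented permission-rule format (double-check the exact pattern syntax against the current settings reference):

```json
{
  "permissions": {
    "deny": [
      "Bash(sudo:*)"
    ]
  }
}
```

With a rule like this in place, sudo commands are refused regardless of what the model decides mid-session, which is the point: the guardrail lives in config, not in the prompt.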
-
Been wanting to kick the tires on multi-agent systems for a while. Finally did it today using Claude Code + VS Code.
Here's what it does: I give it a keyword or theme. Three specialized AI agents run in sequence (a researcher, a strategist, and a brief writer), and out comes a production-ready content brief. Each agent has one job, one input, and one output.
What made it click for me was realizing how similar this is to building Claude skills. If you've written a CLAUDE.md or a custom agent instruction file, you already know the pattern. An agent is just a skill file that can be invoked as its own subprocess with its own context window.
A few things I'd tell someone just starting out: keep agents narrow, and save intermediate outputs to files. That's what makes the whole thing debuggable.
The next logical step is adding a reviewer loop: a fourth agent that reads the final output and either approves it or sends it back with specific feedback.
I have so many ideas now! Fun times!
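The pipeline shape described above (narrow agents, one input and one output each, intermediate files for debuggability) can be sketched in plain Python. The three stage functions here are hypothetical stand-ins for the actual subagent invocations:

```python
import json
from pathlib import Path

# Hypothetical stand-ins for the three subagents. Each stage has one job,
# takes one input, and produces one output.
def researcher(keyword: str) -> dict:
    return {"keyword": keyword, "findings": [f"note about {keyword}"]}

def strategist(research: dict) -> dict:
    return {"angle": f"why {research['keyword']} matters",
            "sources": research["findings"]}

def brief_writer(strategy: dict) -> str:
    return f"# Content brief\nAngle: {strategy['angle']}\n"

def run_pipeline(keyword: str, workdir: Path) -> str:
    # Every intermediate output is written to disk so a bad final brief
    # can be traced back to the stage that produced the bad input.
    workdir.mkdir(parents=True, exist_ok=True)
    research = researcher(keyword)
    (workdir / "1_research.json").write_text(json.dumps(research))
    strategy = strategist(research)
    (workdir / "2_strategy.json").write_text(json.dumps(strategy))
    brief = brief_writer(strategy)
    (workdir / "3_brief.md").write_text(brief)
    return brief
```

Because each stage persists its output, you can open 1_research.json or 2_strategy.json when the final brief looks off — the debuggability benefit the post calls out.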
-
Here is another way to improve the quality of your AI-generated code. You can use a prompt, such as my repo-analyze prompts from ProjectHephaestus [1], that grades your project and points out where it is deficient.
Most projects start in a bad state, so I use repo-analyze-quick to fix the low-hanging fruit until I get an A grade. Then I upgrade my analysis to repo-analyze, which adds more review categories and stricter analysis, and iterate until the repository earns an A again. Then, and only then, is repo-analyze-strict worth running. That prompt is very aggressive and covers a large number of topics: it isn't enough for the code to be tested; it also checks usability, security, safety, packaging, CI/CD, governance, etc. If you get an A there, your code base is doing pretty well.
Install the plugin via `claude plugin install HomericIntelligence/ProjectHephaestus`. You also get access to a lot of other skills, such as the swarming, memory-access, and other tools I use to develop. You can also independently install similar skills from David Bellamy's memory template [2], or add superpowers [3] to Claude from ✨ Jesse Vincent. All of these plugins will help you produce better code.
[1] https://lnkd.in/gjym3zfx
[2] https://lnkd.in/gD_YncPH
[3] https://lnkd.in/gwHxVVnq
-
Everyone keeps saying AI will make software developers obsolete. But honestly, I've never had this much work, excitement, or momentum building at this scale.
One of my recent explorations started with how much I love Medusa. Everything about how they design and structure their codebase clicks for me: the modular system, subscriptions, and workflow setup. It's clean and well thought out. But it's built specifically for e-commerce, not as a general-purpose backend, and that creates limitations.
So I started thinking: what if I extract the parts I love and build my own backend around them? Something modular, plug-and-play, and easy to use, for both humans and AI.
In the past, I would've looked at something like this and thought, "that's way too much work." I wouldn't have even attempted it. But this time I went for it. As I kept building, I ran into limitations with every ORM I tried. None of them worked the way I needed. So I said fuck it, and started building my own ORM too.
This whole project is way bigger than anything I would've taken on before. I learned that the hard way when I tried (and failed) to rebuild @lexicaljs from scratch in the past. But things are different now. The only real limits are your tokens and your willingness to learn, and both have gotten a lot more accessible.
The project is open source: https://lnkd.in/dtZp54fq
It's still very early, but the progress I've made in just a month is honestly mind-blowing, and it's pushing me to go even further.
-
Claude Code users are going to lose their minds over this. A dev just open-sourced the fastest production-ready multi-agent framework on GitHub. It beats LangGraph by 1,209x in agent instantiation speed and runs on 100+ models with a single pip install. It's called PraisonAI.
Here's what's inside:
→ 3.77 microseconds average agent startup time, making it the fastest AI agent framework benchmarked against OpenAI Agents SDK, Agno, PydanticAI, and LangGraph
→ Single-agent, multi-agent, parallel execution, routing, loops, and evaluator-optimizer patterns, all built in with clean Python code
→ A Deep Research Agent that connects to OpenAI and Gemini deep research APIs, streams results in real time, and returns structured citations automatically
→ Persistent memory across sessions with zero extra dependencies: short-term, long-term, entity, and episodic memory all working out of the box with a single parameter
→ MCP protocol support across stdio, WebSocket, SSE, and Streamable HTTP, so your agents can talk to any external tool or expose themselves as MCP servers for Claude, Cursor, or any other client
→ A 24/7 scheduler so agents can run on their own without you manually triggering anything
It supports every major provider in one framework: OpenAI, Anthropic, Gemini, Groq, DeepSeek, Mistral, Ollama, xAI, Perplexity, AWS Bedrock, Azure, and 90 more. You switch models by changing one line; the framework handles everything else.
And if you want zero code at all, the CLI does everything the Python SDK does: auto mode, interactive terminal, deep research, workflow execution, memory management, tool discovery, session handling, all from your terminal.
5.6K GitHub stars. 100% open source.
Ref: https://lnkd.in/gBFPVWvs
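The evaluator-optimizer pattern listed above is framework-independent. Here is a minimal sketch of the loop in plain Python; note this is the generic pattern, not PraisonAI's actual API, and both inner functions are toy stand-ins for LLM calls:

```python
# Generic evaluator-optimizer loop: an "optimizer" proposes a draft, an
# "evaluator" scores it, and the loop stops at a quality threshold or a
# round limit. The two stand-ins below simulate LLM behavior.
def optimize(draft: str, feedback: str) -> str:
    # Stand-in for an LLM call that revises the draft using the feedback.
    return draft + " (revised)"

def evaluate(draft: str) -> tuple[float, str]:
    # Stand-in for an LLM judge: score rises with each revision pass.
    score = min(1.0, 0.3 + 0.2 * draft.count("(revised)"))
    return score, "tighten the intro"

def evaluator_optimizer(draft: str, threshold: float = 0.9,
                        max_rounds: int = 10) -> str:
    # max_rounds caps cost; without it, a harsh evaluator loops forever.
    for _ in range(max_rounds):
        score, feedback = evaluate(draft)
        if score >= threshold:
            break
        draft = optimize(draft, feedback)
    return draft
```

The round cap matters in practice: an evaluator that never hands out a passing score will otherwise burn tokens indefinitely.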
-
So, I have been building software with AI agents for about five months now, and wanted to share some fun stuff that happened along the way :)
My all-time favorite is this one. I was sitting at my desk, casually checking the output of an agent working on a client project. The agent was logging its progress, and somewhere between "created project structure" and "initialized dependencies" I saw a line that said something like "had to set the repository to public." Followed by more status output. And I was like, repo public, aha, wait, WHAT?!
I jumped in, and sure enough, the agent had created a GitHub repository for a client's codebase and made it public. Turns out the agent decided the client would find it easier to read the requirements document if it was hosted somewhere accessible. And following that logic, with permissions wide enough to let it, the agent went to GitHub, created a repository, made it public, and deployed the requirements there. I fixed it manually and then spent the rest of the day building a sandbox so that agents can no longer touch anything in my GitHub account.
The thing about agents is, they really want to please you. Like, really. And they want everything to work right here, right now. If something breaks, they take it personally. And because of that, they will do absolutely anything to make things look like they work. Mock data instead of real integrations. A test that keeps failing? Just delete it. A gorgeous frontend that is not actually connected to any backend whatsoever. Hardcoded config values that happen to make the demo look perfect.
It is like having a really talented but incredibly lazy developer on your team. "I coded it kinda, it works sorta."
The future is indeed here. It looks like the present, but faster. 🙂
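The sandbox lesson generalizes: give agents a narrow gate instead of broad credentials. A toy sketch of the idea (the action names are hypothetical, and a real sandbox would also enforce this at the credential/API layer, not only in code):

```python
# Every tool call an agent makes passes through a gate that rejects
# anything not explicitly allowed. Deny-by-default, not deny-by-list.
ALLOWED_ACTIONS = {"read_file", "write_file", "run_tests"}

class ActionBlocked(Exception):
    """Raised when an agent requests an action outside the allowlist."""

def gate(action: str, **kwargs):
    if action not in ALLOWED_ACTIONS:
        raise ActionBlocked(f"agent tried disallowed action: {action}")
    # In a real system this would dispatch to the actual tool handler.
    return {"action": action, "args": kwargs, "status": "ok"}
```

With this shape, "create a public GitHub repo" fails loudly at the gate instead of silently succeeding because the token happened to have the scope.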
-
Stop treating your application config like an afterthought. 🛑
The "12-Factor App" made the .env file a standard, but relying on static, string-only variables is creating a massive bottleneck for modern R&D teams. We've all been there:
❌ A boolean set to "True" instead of "true" crashes the boot.
❌ An integer passed as a string triggers a runtime error.
❌ A simple change requires a full CI/CD redeploy.
At envbee, we're moving beyond the "String-Only Trap." Our latest deep dive explores the shift toward dynamic, typed configuration. Why it matters:
✅ Type safety: catch schema errors before your code even runs.
✅ Hot reloading: change feature flags or limits in real time without a restart.
✅ Efficiency: decouple your config from your deployment pipeline.
Don't let messy .env files drain your productivity. Read the full post here: https://lnkd.in/dGyYitym
#softwareengineering #backenddev #cto #techleadership #fullstack #envbee
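The string-only trap is easy to reproduce: environment variables are always strings, and naive casts misbehave (`bool("False")` is truthy). A minimal typed-loading sketch in Python, as a generic illustration of the idea rather than envbee's SDK:

```python
import os

def load_bool(name: str, default: bool = False) -> bool:
    # Validate boolean strings explicitly instead of calling bool(),
    # which treats EVERY non-empty string (including "False") as True.
    raw = os.environ.get(name)
    if raw is None:
        return default
    normalized = raw.strip().lower()
    if normalized in {"true", "1", "yes"}:
        return True
    if normalized in {"false", "0", "no"}:
        return False
    raise ValueError(f"{name}: expected a boolean, got {raw!r}")

def load_int(name: str, default: int = 0) -> int:
    # Fail at load time with a named variable, not deep in request code.
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        raise ValueError(f"{name}: expected an integer, got {raw!r}") from None
```

The payoff is where the failure surfaces: a bad value raises a clear error naming the variable at startup, instead of a mystery crash at runtime.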