New York, New York, United States
7K followers
500+ connections
Activity
-
Dhruv Singh shared this:
A few extensions of Karpathy’s autoresearch I recently came across: Autoreason, a Claude Skills optimizer, and an inference optimiser for Apple Silicon.

Autoreason - The core idea of autoresearch is 5-minute experimental loops focused on improving a single metric. That works great when your experiments have a clear score to optimize, but less so with subjective work like creative writing or marketing copy. Autoreason tackles subjective scoring by spinning up an LLM council that runs in a loop: the council writes, critiques, and merges the best parts into a new draft. A blind judge picks a winner between two versions. The loop continues until a document wins twice in a row.

Claude Skills optimizer - One challenge with Skills is inconsistency. Skills ride on top of an LLM and are prone to instruction drift. So how do you make them reliable? One approach, suggested by Ole Lehmann, is to create evals for your skills and iteratively refine them until they pass. Start by defining what “good” means using a binary checklist: Did the skill do X? Yes or no? Did the skill do Y? Yes or no? The agent then runs the skill, reviews the checklist, adjusts it slightly, and loops until results are consistent.

Auto-Inference-Optimiser - Point your coding agents at Manthan Gupta's repo and they’ll begin experimenting on KV cache, sampling, prefill, Metal memory, architecture, and compilation. What stands out is the evaluation harness. As Gupta puts it, “The evaluation is the real product”. Worth checking out his X post (linked in comments).

(Quick note if you’re experimenting with these loops: if you let LLM optimizers or coding agents run unsupervised, make sure it’s on supporting utilities and not your main product. They can get too good at optimizing a metric, leaving little room for further improvement.)

Some fun ideas to experiment with. Enjoy.

credits: Shann Holmberg, Ole Lehmann, Manthan Gupta
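The Autoreason stopping rule (council drafts, blind judge, stop once a document wins twice in a row) is small enough to sketch. Everything below is hypothetical scaffolding: `llm` and `judge` stand in for whatever model calls Autoreason actually makes.

```python
def council_round(draft, llm, critics=3):
    """One council pass: several critiques, merged into a new draft."""
    critiques = [llm(f"Critique this draft: {draft}") for _ in range(critics)]
    return llm("Merge the best parts into a revision: "
               + draft + " | " + " | ".join(critiques))

def run_council(seed, llm, judge, max_rounds=20):
    """Loop until one document wins two blind comparisons in a row."""
    champion, streak = seed, 0
    for _ in range(max_rounds):
        challenger = council_round(champion, llm)
        winner = judge(champion, challenger)   # blind A/B pick
        if winner == champion:
            streak += 1
            if streak == 2:                    # won twice in a row: done
                return champion
        else:
            champion, streak = challenger, 1   # new champion starts its streak
    return champion
```

With real model calls, `judge` would see the two versions unlabeled; the streak bookkeeping is the part the post describes.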
-
Dhruv Singh shared this:
The security incidents piling up over the past few weeks make one thing obvious: the bottleneck for agents is verification. Can you actually confirm what the agent did, whether it was safe, and whether it was compliant? Most teams can’t. Agent capabilities have scaled far faster than the systems to observe and audit them. We built HoneyHive to close this exact gap. Claude Code integration coming. More on this soon 👀
-
Dhruv Singh shared this:
An OpenClaw agent deleted over 200 emails from a Meta security leader's inbox before it was stopped. A context-compaction step dropped the instruction to "confirm before acting."

The error came down to the difference between the testing environment and the production environment. In small tests, with limited email volume, the compaction step wasn't triggered, so the safety instruction stayed in context. At real inbox scale, the framework summarized earlier turns to make room, and the constraint was compressed out of existence.

This story was published in The Tech Buzz last week. Meta got the attention, but some version of this is happening inside companies everywhere.

The takeaway is right: trust has to be earned in stages. But you can't promote an agent to the next authority tier on vibes. Trust requires data. You need to observe what the agent actually did at each stage, evaluate whether it should have, and build the evidence to decide when it's ready for more.

It’s best to start using background agents in read-only mode to see what value they can drive without creating risk. As you learn their strengths and weaknesses, it becomes easier to determine where it makes sense to grant them more permissions.

"What turns an agent into an accidental insider is not intelligence alone. It is authority without structure. And the companies that understand that earliest will build systems that are not just more powerful, but more survivable." Good line. Full story linked in comments.
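The failure mode is easy to reproduce. As a toy sketch (not OpenClaw's actual compaction code): a naive compactor that evicts the oldest turns to fit a token budget will silently drop a safety instruction once the history gets long enough, while short test runs never trigger it.

```python
def compact(messages, budget):
    """Naive context compaction: evict oldest turns until under budget.
    Real frameworks summarize rather than drop outright, but the effect
    is similar: early instructions get compressed away at scale."""
    msgs = list(messages)
    while sum(len(m["text"]) for m in msgs) > budget and len(msgs) > 1:
        msgs.pop(0)   # oldest first, even if it held "confirm before acting"
    return msgs

history = [{"role": "system", "text": "Confirm with the user before acting."}]
history += [{"role": "user", "text": f"process email {i}"} for i in range(50)]

small_test = compact(history, budget=5000)  # small test: rule survives
at_scale = compact(history, budget=300)     # inbox scale: rule evicted
```

Same code path in both runs; only the volume differs, which is exactly why small tests gave false confidence.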
-
Dhruv Singh shared this:
Karpathy built autoresearch for ML training, but the pattern is universal, and people figured that out immediately.

Autoresearch is simple: one file the agent edits, one metric it optimizes, and a 5-minute loop that runs while you sleep. The agent runs an experiment, measures the result, keeps what works, discards what doesn't, and goes again. The domain is the only variable.

Tobi Lütke ran it overnight on Shopify's Liquid codebase and woke up to 53% faster parse+render time and 61% fewer object allocations. Varun Mathur pointed it at quant finance, creating Autoquant: a distributed quant research lab with 135 autonomous agents backtesting trading strategies across 10 years of market data. A team from UNC created AutoResearchClaw: one message in, a full conference paper out. Andrew Jiang dropped the GitHub link into Claude Code with no custom setup, applied it to a classification problem, and got results in an hour.

There are thousands of other forks out there, pointing Autoresearch at hardware optimization, prompt optimization, SEO, security… The loop is identical across all of them: one editable asset, one scalar metric, and one time-boxed cycle. Anyone can run hundreds or thousands of cheap experiments and let the metric decide.
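The loop itself (one editable asset, one scalar metric, keep what works) fits in a dozen lines. A minimal sketch, where `mutate` stands in for the agent's edit step, which in practice is an LLM call:

```python
def autoresearch(asset, mutate, score, cycles=100):
    """Greedy autoresearch loop: run an experiment, measure the one
    metric, keep improvements, discard regressions, go again."""
    best, best_score = asset, score(asset)
    for _ in range(cycles):
        candidate = mutate(best)       # the agent edits the asset
        s = score(candidate)           # measure the single metric
        if s > best_score:             # keep what works...
            best, best_score = candidate, s
        # ...discard what doesn't, and go again
    return best, best_score
```

Swap parse time, object allocations, or validation loss in as `score` and the same loop covers every fork described in the post.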
-
Dhruv Singh shared this:
I had one of my favorite work days of the year on Commonwealth Bank’s Bangalore campus. Their GenAI platform team walked me through how agents are being wired into the bank’s core stack, plugged into customer journeys and internal engineering workflows on top of a data platform that already runs thousands of models and tens of millions of AI-driven decisions every day. They hold this stack to the same standard as any other piece of critical banking infra, with the same change management and incident response expectations.

We focused on how they get control and transparency. Every step in an agent workflow emits OpenTelemetry spans so teams can see how a decision moved through different services, and a single control plane manages routing and policy decisions. When behavior shifts in prod, they can replay what happened and correct it surgically instead of switching features off and backing away.

Walking out, I kept thinking about that control layer (routing, policies, telemetry in one place) as a big reason the rest of the bank can safely build on top of these capabilities. Without that kind of governance, it’s very hard to scale AI reliably in a regulated environment. This “oversight and observability” layer is exactly what we focus on at HoneyHive, so seeing another bank-scale implementation up close and comparing notes on how to keep AI both effective and accountable was a real highlight of the trip.

Big thanks again to Scott Shaw, Jon Embury, Deepika Goel, Raghavendra HR, Gaurav Kumar, Dhruba Baishya, and the whole GenAI platform team at Commonwealth Bank for hosting and for the candid convos.
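Each step emitting a span with a parent link is what makes that replay possible. In production this is the OpenTelemetry SDK's job; as a stdlib-only illustration of the pattern, each step records its name, parent, and duration so a decision's path through services can be reconstructed afterwards:

```python
import time
import contextvars

_current = contextvars.ContextVar("current_span", default=None)
SPANS = []   # a real setup exports these to a collector instead

class span:
    """Toy span: name, parent, duration. Mimics the shape of an
    OpenTelemetry span without the SDK."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        self.parent = _current.get()
        self._token = _current.set(self)
        self._t0 = time.perf_counter()
        return self
    def __exit__(self, *exc):
        _current.reset(self._token)
        SPANS.append({
            "name": self.name,
            "parent": self.parent.name if self.parent else None,
            "ms": (time.perf_counter() - self._t0) * 1000,
        })
        return False

# One agent decision, traced step by step:
with span("agent.decision"):
    with span("fetch_policy"):
        pass
    with span("llm.reason"):
        pass
```

The recorded parent links are what let a team walk backwards from an output to every service the decision touched.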
-
Dhruv Singh shared this:
A few weeks ago at the India AI Impact Summit, Sam Altman shared that India is the fastest-growing market for Codex globally. After spending time with engineering teams here over the past couple of weeks, that statistic makes perfect sense. The level of talent and curiosity is remarkable! I'm incredibly grateful for the generosity and thoughtfulness of the teams who shared their time and ideas during my stay. Looking forward to returning soon.
-
Dhruv Singh shared this:
We spent three months building the wrong AI products. In HoneyHive’s early days, Mohak and I tried an AI data analyst and an AI coding IDE, all different flavors of application-layer products. They ran into the same wall: in 2022, getting LLMs to behave was brutally hard. To stay sane, we hacked together simple evals to compare prompts and measure output quality. When we showed people what we were building, they mostly ignored the apps and wanted the tooling underneath. A demo with one of our early beta testers was the inflection point: “This is the useful part. You should productize this.”

We leaned into it. It started as simple prompt evals, then expanded into agent-level evaluation and, eventually, full distributed traces. Along the way, we’ve grown from just Mohak and me to a team of 9, working together largely over Slack and Zoom. Our first offsite was actually the first time the entire HoneyHive team had been in the same room together. In our first few sessions, one thing became immediately clear: we’re definitely not a “move fast and break things” company. We’re a “get it right” company. It reminds me of the saying "slow is smooth and smooth is fast." That’s what we’re shooting for.

And to get there, we need more cracked builders in the room. We’re building the observability stack for AI agents, used by leading startups and Fortune 500 companies to debug complex systems, evaluate output quality, and monitor failures in production. We have three open roles for exceptional engineers who want to help shape how AI gets built and shipped. Come help us make HoneyHive smooth, and then fast, for the engineers who depend on it. Details and application here: https://lnkd.in/eA3ExbWv
-
Dhruv Singh shared this:
João Moura of CrewAI surveyed 500 senior enterprise executives to find out what drives their AI platform decisions. The surprise: only 2% prioritize ROI. Security and governance top the list at 34%.

This is consistent with what I’m hearing in conversations with F500 companies. The value of AI is obvious enough at this point, and nobody needs a spreadsheet to justify it. They need confidence that they can see how their agents are making decisions, catch them when they drift, and shut things down before a bad output becomes a bad headline. Get that right and the ROI takes care of itself. Governance is how you build trust and get to scale. Stop fighting your compliance team; start building with them!

João (Joe) Moura
Report: https://lnkd.in/eb-KmeWv
-
Dhruv Singh shared this:
Karpathy made a research harness over the weekend and it went viral on X. The repo is 5 days old and already has 28k GitHub stars. I think what is getting people excited is the idea that this can be applied to literally anything.

Here’s how it works. The repo contains 3 files: prepare.py, train.py, and program.md.
prepare.py contains all the training data and runtime utilities. Never modified.
train.py contains the code for each training experiment. Only agents edit it.
program.md contains the agent instructions. Only humans edit it.

The magic that makes it all work is the 5-minute loop: 12 experiments an hour, 100 experiments while you sleep. At the time he posted, Karpathy’s harness had run 83 experiments and kept 15 improvements. Each improvement steadily chipped away at validation BPB. Very cool.

Karpathy built this to improve a model. But you can point the same loop at different targets. We're pointing it at agents. More soon.
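The outer loop is mostly bookkeeping. A sketch of the time-boxed cycle, where the command and metric hook are hypothetical stand-ins for however the real harness runs train.py and reads validation BPB:

```python
import subprocess

def run_harness(experiment_cmd, read_metric, budget_s=300, max_experiments=100):
    """Time-boxed experiment loop: each run gets a fixed budget (e.g. 5
    minutes); lower metric (like validation BPB) wins; runs that crash
    or overrun the budget are discarded."""
    best, kept = float("inf"), 0
    for _ in range(max_experiments):
        try:
            subprocess.run(experiment_cmd, timeout=budget_s, check=True)
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            continue                     # over budget or crashed: discard edit
        bpb = read_metric()
        if bpb < best:
            best, kept = bpb, kept + 1   # improvement: keep this train.py
        # else: roll train.py back to the last kept version (not shown)
    return best, kept
```

The hard timeout is what makes "12 experiments an hour" a guarantee rather than a hope: a hung experiment costs at most one cycle.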
-
Dhruv Singh liked this:
AI Council 2026 track host announcement! Help us welcome Dhruv Singh of HoneyHive as host and curator of the AI Engineering track. As cofounder and CTO of HoneyHive, Dhruv is in constant contact with enterprise AI teams, helping them evaluate and observe agentic systems in production. Before founding HoneyHive, Dhruv built frameworks for LLM developers on Microsoft's OpenAI Innovation team and won their Codex Innovation Challenge - the perfect person to shape this track. This year at AI Council, the AI Engineering track will cover the practical workflows and tools for evaluating, monitoring and improving AI systems in production. See you there! May 12–14 in SF -> https://lnkd.in/eEYHApCF
-
Dhruv Singh reacted to this:
Your first agent made it to production. Now you need to ship the next ten. This is where things quietly fall apart. The heroics that got agent #1 live — the PM manually reviewing outputs, the engineer who memorized every edge case, the eval script someone vibe-coded at 2am — none of that scales. At HoneyHive, we've seen this across hundreds of companies. The ones that actually scale agent deployments all share one trait: they stopped treating each agent as a unique engineering challenge and built a repeatable process around the lifecycle itself. We formalized what they do into a framework we're calling the Agent Development Lifecycle (ADLC). Full playbook on the blog — link in comments.
-
Dhruv Singh liked this:
AI isn’t the challenge anymore — scaling it safely, responsibly, and with real business impact is. In this co-authored article together with Ericson Chan, we explore what it truly takes to move beyond experimentation and unlock sustainable value from AI, grounded in strong data, governance, and people capabilities. Learn more about how to scale AI with confidence from our latest article: ➡️ https://lnkd.in/eApges2e #AI #ZurichCommercialInsurance
Scaling AI with confidence: the smart approach to unlocking business value
Experience & Education
-
HoneyHive AI
********** *** ***
-
*********
******** ******** *
-
****** ************** ********* ******
**** ******* ******
-
******** ********** ** *** **** ** *** ****
******** ** ******* * ** ******** *******
-
-
****** ********* ** *********** ******
********** ****** ********* ***********
-
Honors & Awards
-
International Science Olympiad Medalist
International Olympiad of Astronomy and Astrophysics
Represented India at IOAA.
Silver Medal and Best Team Award.
Explore more posts
-
Jamin Ball
Altimeter Capital • 18K followers
Awesome post by the Databricks team https://lnkd.in/gGQ9ptRe My summary: They trained a model called KARL that beats Claude 4.6 and GPT 5.2 on enterprise knowledge tasks (searching docs, cross-referencing info, answering questions over internal data), at ~33% lower cost and ~47% lower latency. The key insight: instead of throwing expensive frontier models at enterprise search, you can use reinforcement learning on synthetic data to train a smaller model that's faster, cheaper, AND better at the specific task. RL went beyond making the model more accurate. It learned to search more efficiently (fewer wasted queries, better knowing when to stop searching and commit to an answer). They're opening this RL pipeline to Databricks customers so they can build their own custom RL-optimized agents for high-volume workloads. I think we'll continue to see data platforms become agent platforms. Databricks' KARL paper is really an agent platform play. The pitch: you already store your enterprise data in the Lakehouse, now Databricks will train a custom RL agent that searches and reasons over it, tuned specifically for your highest-volume workloads (workloads = apps = agents). The business move is closing the loop: data storage → retrieval → custom agent training → serving, all on Databricks. They're turning "your data lives here" into "your agents live here too." Kudos Ali Ghodsi Reynold Xin Matei Zaharia
-
Kirk Borne, Ph.D.
https://www.dataleadershipgrou… • 99K followers
New release from Packt Publishing… “Building Business-Ready Generative #AI Systems — Build Human-Centered Generative AI Systems with Context-Aware Agents, Memory, and LLMs for the Enterprise” See it at https://amzn.to/3Jdcio5 Key Features: 🔵 Build an adaptive, context-aware AI controller with advanced memory strategies 🟢 Enhance GenAISys with multi-domain, multimodal reasoning capabilities and Chain of Thought (CoT) 🟠 Seamlessly integrate cutting-edge OpenAI and DeepSeek models as you see fit
-
Mensah Alkebu-Lan
Universal Equations Inc • 8K followers
Tipalti + NetSuite: AP Automation That Scales, Secures, and Simplifies Finance. What’s the biggest challenge your AP team faces today—and what tools are you using to solve it? From Boston’s biotech startups to Philly’s SaaS firms and DC’s nonprofits, finance teams across the Northeast Corridor are upgrading their Accounts Payable (AP) workflows to stay agile, compliant, and fraud-resistant. A typical AP workflow includes: - Invoice intake - PO matching - Approval routing - Payment execution - Reconciliation and audit. When managed manually, this process is slow, error-prone, and vulnerable to fraud—especially across multiple entities or international vendors. Tipalti automates the entire AP lifecycle. It uses AI and OCR to digitize invoices, match them to POs, route approvals, and execute global payments. It supports multi-entity operations and integrates seamlessly with NetSuite, syncing financial data in real time and eliminating manual entry. But automation isn’t just about speed—it’s also about security. Tipalti’s Detect® fraud prevention module uses machine learning to flag suspicious patterns—like duplicate invoices, shared payment details, and unusual vendor behavior. It blocks high-risk payees, tracks fraud attempts, and maintains detailed audit trails for compliance. Why is this critical? Because invoice fraud—where scammers submit fake or altered invoices to trick businesses into unauthorized payments—is one of the fastest-growing threats in finance. Tipalti helps stop fraud before it hits your books. The Tipalti + NetSuite integration delivers: ✅ Real-time financial sync ✅ Touchless invoice processing ✅ Supplier self-service portals ✅ Automated tax and regulatory compliance ✅ Up to 80% reduction in AP workload At Universal Equations, we help Northeast finance teams implement intelligent AP automation tailored to their growth goals. Whether you're scaling across NYC, Newark, Baltimore, or DC—we’re ready to help.
#AccountsPayable #Tipalti #NetSuiteIntegration #UniversalEquations
-
Alberto Surina
MedAxCap LLC. • 3K followers
OpenAI-backed Chai Discovery just raised a $130M Series B at a $1.3B valuation—and it’s a clean snapshot of where AI + biotech is heading. Chai is building foundation models tuned for drug discovery, essentially a “computer‑aided design suite for molecules.” Their latest model, Chai 2, is already showing materially better success rates in de novo antibody design—designing new antibodies from scratch, not just tweaking existing ones. That’s the kind of shift that can compress timelines and failure rates across the entire discovery stack. The round was led by General Catalyst and Oak HC/FT, with participation from Menlo Ventures, OpenAI, Dimension, Thrive, Neo, Yosemite, SV Angel and others—bringing total funding north of $225M. This isn’t “AI wrapper” money; this is a bet that model-native biotech becomes core infrastructure for pharma and advanced therapeutics. If platforms like Chai actually deliver CAD-for-molecules at scale, how do you see it reshaping who captures value in drug development—incumbent pharma, model platforms, or full-stack new entrants? https://lnkd.in/eGpjSh3E #AIinBiotech #DrugDiscoveryPlatforms #BioVC
-
Phyllian Kipchirchir
Charted Growth • 3K followers
Databricks and Perplexity co-founder Andy Konwinski is forming the Laude Institute, a new AI research institute, and backing it with $100 million of his own money. The Laude Institute will function less as a traditional research lab and more like a fund making investments structured as grants. It will focus on "Slingshots" (early-stage research) and "Moonshots" (long-horizon labs tackling species-level challenges). The institute's board includes AI luminaries like UC Berkeley's Dave Patterson, Google's Jeff Dean, and Meta's Joelle Pineau. The institute announced its first flagship grant of $3 million annually for five years to anchor the new AI Systems Lab at UC Berkeley. The new fund aims to catalyze work that doesn't just push the field of AI forward, but guides it toward more beneficial outcomes. Congratulations to Andy Konwinski on this incredible commitment to AI research. TechCrunch: https://lnkd.in/dy7x9zAT #AI #Research #AIforGood #Philanthropy #VenturePhilanthropy #Databricks #Perplexity #UCBerkeley
-
Brian Sage
Sage Digital • 1K followers
Another breakthrough in #AI was published earlier today by Sapient Intelligence, and the linked headline by VentureBeat says it all. https://lnkd.in/giYJtEb9 TL;DR: the new #HierarchicalReasoningModel (#HRM) reasons more like a human by quickly thinking about the best way to solve a problem and then thinking more slowly to solve it. This new method can be trained on an order of magnitude fewer training examples and can solve multi-step puzzles that simply befuddle #CoT LLMs like #ChatGPT and #Claude, ultimately requiring far fewer resources.
-
Mark Montgomery
KYield, Inc. • 16K followers
Microsoft's market cap declined far more in one day than all of the capital they have invested in LLMs and related infrastructure to date. Of course the market cap is in a bubble due to the LLM hype but for investors that bought yesterday it's still an 11% loss in one day. Those who have never experienced a market crash should be careful. It's very unpleasant, and can be catastrophic for those who are over-leveraged. https://lnkd.in/gKaV8hyy
-
Voicu Oprean
AROBS Group • 17K followers
A strong strategy is only effective when paired with flawless execution. 🎯 The Codingscape team (now part of AROBS Group) achieved this by implementing Airtable as a unified execution layer, replacing 95% of unstructured planning within six weeks. ⚡ Context matters. This project was for Roblox, an organization with 2,500 employees, over 100 million daily active users, and more than $4 billion in revenue. At this scale, aligning strategy with execution is complex. The solution: * A unified, AI-ready execution layer built on Airtable. * Full visibility from C-level initiatives to daily operations. * Teams continued using their existing tools, such as Jira and Asana, while leadership gained real-time, comprehensive visibility. The results: * Strategy is visible across all layers of the organization. * Clear alignment between frontline execution and organizational priorities. * Streamlined cross-team requests and enhanced capacity planning. Delivered 2 weeks ahead of schedule by a 5-person team. This is what senior execution looks like: speed, precision, and zero noise. Congratulations to the Codingscape team, led by Porter Haney. 👏 I am proud of the team and look forward to achieving more with Airtable. This is impact, done right. This is part of a broader AROBS strategy to help companies leverage AI and low-code / no-code tools such as Airtable to implement solutions faster, more efficiently, and with greater impact. More details about the project here: https://lnkd.in/dJVEnUfY #DigitalTransformation #EnterpriseScale #ProductLeadership #Airtable #Codingscape #AROBS
-
Kit Yu
33K followers
Gemini LLM becomes preferred AI developer platform: Gemini has the potential to emerge as a preferred AI developer platform given its tight integration across Google Cloud's data and workflow infrastructure. For enterprises building production-grade AI applications, this full-stack approach reduces complexity and lowers total cost of ownership versus assembling third-party models and infrastructure. Given Gemini LLM's competitive performance across multimodal reasoning and reliability at scale, enterprises could increasingly standardize on Gemini for internal agents, productivity tools, etc., driving stickier workloads and multi-year commitments.
-
Jon Hilton
LBMC • 5K followers
Databricks now lets you run OpenAI models (like GPT-5) directly on your enterprise data. Most AI tools or products force you to move your data into their product — adding cost, risk, and endless API hops. This changes that. Now, agents inside Databricks can securely work with large, complex business datasets — accelerating AI-driven processes and giving you faster answers to your most critical questions. At LBMC we remain committed to building AI readiness through data readiness in Databricks. #AI #Databricks #EnterpriseAI #Agents https://lnkd.in/eUYefPhR
-
Jing Xie
Stealth • 12K followers
Without IBM, I'm not sure if Apache Spark would've succeeded in the way it did. Even though Hadoop came out as the first mover ahead of Spark, Spark was able to become the dominant open source project with the backing of major enterprise leaders like IBM. IBM's contributions to code and marketing shifted Spark from an academic project out of Berkeley (one that came out 4 years behind Hadoop) into a trusted enterprise technology. I think it's going to be the same for AI memory. MemMachine (fka Intelligent Memory) is coming out into the open source and we're looking for the next IBM to work with. Don't miss out.
-
Raghavendra Pandey
Tapistro • 4K followers
Salesforce just admitted "autonomous" AI agents don't work. Last week, CIO.com reported Salesforce is adding deterministic scripting layers to Agentforce because LLM-based autonomy keeps failing in production. Their internal term for it? "Doom-prompting"—engineers stuck rewriting prompts endlessly because agents drift, hallucinate, and give different answers to identical queries. The platform pitched as "self-directed agents that resolve issues end-to-end" now needs workflow mapping, data modeling, and rule-based guardrails. Sound familiar? It's the same story as 11x earlier this year. Here's what nobody wants to say: You can't prompt-engineer your way out of a data architecture problem. The agents that actually work in production aren't "autonomous." They're: → Grounded in structured data with clear provenance → Orchestrated through explicit handoffs, not "let the LLM figure it out" → Feeding engagement signals back into the data model The winners in Agentic GTM won't have the smartest agents. They'll have the best data underneath.
-
Roberto Hortal
Wall Street English • 6K followers
Context engineering shifts LLMs from oracle to analyst—frame the task, curate tokens, and apply reusable patterns (RAG, tool calls, memory) to keep complex systems robust. A must-read for product and Agile teams building trustworthy AI-enabled workflows. Explore Chris Loy's take: https://buff.ly/Ca2GrrO #Product
-
Justin Borgman
Starburst • 14K followers
Starburst has always believed in providing customers with the freedom of choice. Nowhere is this more true than the emerging developments around interoperable compute among data platforms. Recently, I sat down with Snowflake’s Ryan C. Green to talk about the Open Semantic Interchange (OSI), an open, vendor-neutral standard for sharing semantic models across AI, BI, and analytics tools. OSI gives customers a common, open way to define and share business metrics. This helps them stay consistent across dashboards, notebooks, and machine learning models, no matter which tools they use. For Starburst, this furthers one of our core goals. Choice. Customers should be able to keep one trusted set of business definitions inside their Snowflake environment while still having the freedom to choose the tools that work best for them. OSI helps make that possible. It marks an important step in bringing Starburst’s engine to Snowflake-interoperable compute. The video from my chat with Ryan is coming soon 👀 In the meantime, you can get more details from our recent press release: https://lnkd.in/ev2XGB4u #OpenSemanticInterchange #OSI #Interoperability #AI #Data #Snowflake #Starburst
-
Alekh Jindal
Tursio • 5K followers
Oracle was the first database I worked on, back in 2006, at BT Group, alongside my pair-programming buddy, Ritesh Agarwal, and our tech lead, Aditya S. Along with SQL Server, Oracle forms the operational core for many enterprises, and for decades, people had to choose between them or migrate across them painstakingly. Interestingly, it's no longer about PL/SQL vs T-SQL; natural language is the new query language, allowing Tursio to support both SQL Server and Oracle! Language is no longer a barrier, and data can stay exactly where it is. #sqlserver #oracle #tursioai #fridaystory
-
Christie Mealo
IPG Health • 9K followers
Last night reminded me why I started Philly Data & AI. You put the right people in a room and something shifts. Practitioners, builders, people who are tired of talking about AI and ready to actually do something with it. That was last night. Philadelphia has no shortage of people leading this charge. Tim Dodd, Tempest Carter, and so many others in this community are doing the real grassroots work: convening people, driving cross-sector collaboration, and pushing toward something bigger than any one organization. This is what civic AI leadership actually looks like. Tonight is the follow-up. Tempest’s Tech Talk on pilots isn’t abstract theory. It’s the part most people skip: how do you take a promising idea and actually validate it before committing everything? That’s how we build responsibly and at scale. Together we are making Philadelphia a model for what ethical, community-driven AI innovation can look like. That doesn’t happen at the top. It happens in rooms like last night’s. I’m planning to be there tonight. Come find me if you are too. #PhillyAI #CivicTech #EthicalAI #AIInnovation #Philadelphia #PhillyTech #CommunityFirst
-
Eknauth Persaud
Ayoka - Made in USA services… • 3K followers
Databricks stock is not happening, as the custom database development company wants to stay private. Databricks sales topped a $4.8 billion revenue run-rate during the third quarter and is growing 55% year-over-year. With its latest VC capital infusion, the company plans major investments in AI.
-
Jochem H.
Aethir • 3K followers
Things are getting interesting with the official introduction of Axe Compute. To me, this is the prime example of why decentralized infrastructure matters and how it should be used to gain tangible business benefits for everyone in the value chain. "Axe Compute is adopting a decentralized approach to high-performance compute, leveraging Aethir’s globally distributed GPU network to secure the capacity needed for rapidly growing AI workloads. No more hyperscaler delays, centralized bottlenecks, or unpredictable access."
-
Holger Mueller
Constellation Research • 19K followers
.@Snowflake takes Snowflake Intelligence GA, launches developer tools, integrates with SAP BDC https://bit.ly/439lSiX Snowflake launched Snowflake Intelligence to general availability, outlined a set of new developer tools and forged a pact with @SAP so Snowflake AI Data Cloud and…