Activeloop

Software Development

Mountain View, California · 6,301 followers

Building Deeplake: the GPU-native, sandboxed Postgres for AI agents. → deeplake.ai

About us

Building Deeplake: the GPU-native, sandboxed Postgres for AI agents. Try out Deeplake today via deeplake.ai

Industry
Software Development
Company size
11-50 employees
Headquarters
Mountain View, California
Type
Privately Held
Founded
2018
Specialties
Data Science, AI, Artificial Intelligence, Data pipelines, Cloud computing, Machine Learning, Computer Vision, Generative AI, Vector Search, LLMs, and Large Language Models


Updates

  • The Claude Code leak just confirmed what you suspected: your agent's memory has hard limits. 200 lines. 25KB. That's the cap on MEMORY.md, the index that tells Claude Code which memories exist. Line 201? Silently dropped. No error thrown. Your agent sees a clean context and never knows it's missing memories. It gets worse. Only 5 memory files are loaded per turn. Only 4 memory types exist. Retrieval is based on filenames and descriptions, not embeddings. And the "autoDream" consolidation system? It prunes everything that doesn't fit, summarizes the rest, and throws away the originals. Deeplake removes the ceiling entirely. No line caps. No silent truncation. Persistent, versioned, queryable memory your agent can mount as a filesystem. Your agent's memory shouldn't have a ceiling it doesn't know about ➡️ https://www.deeplake.ai
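The failure mode described above, a hard line cap on the memory index with overflow dropped silently, can be sketched in a few lines. All names here are illustrative, not Claude Code's actual implementation:

```python
# Hypothetical sketch of a line-capped memory index. Entries past the
# cap are discarded with no error, so the agent never learns that
# memories are missing from the index it reads.

MAX_INDEX_LINES = 200  # the reported cap on the memory index file

def build_index(entries: list[str]) -> str:
    """Return the index the agent will actually see.

    Overflow entries are silently dropped -- no exception, no warning.
    """
    return "\n".join(entries[:MAX_INDEX_LINES])

entries = [f"memory-{i}: description of memory {i}" for i in range(250)]
index = build_index(entries)

print(len(index.splitlines()))   # 200 -- entries 201-250 are gone
print("memory-249" in index)     # False, and nothing reported it
```

The point of the sketch: from the agent's side, the truncated index is indistinguishable from a complete one.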

  • Your agent forgets. Deeplake fixes that. Default OpenClaw memory is a flat file with keyword search. It works until your agent needs to recall context from three sessions ago, coordinate with another agent, or store anything that isn’t text. ❌ Then it just…doesn’t. ✅ Deeplake gives your agent structured, persistent, queryable memory. ✅ Task completion jumps 43%. ✅ Not because of a better model, but because of better recall. A smarter agent starts with better memory, not a bigger model. deeplake.ai
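The flat-file-vs-structured contrast above can be made concrete with a minimal sketch (all names hypothetical, not Deeplake's or OpenClaw's API): keyword search over a flat log can only match literal strings, while structured records can answer field-level questions like "what was stored in session 3?".

```python
# Sketch: flat keyword search vs. a structured, queryable memory store.
from dataclasses import dataclass

@dataclass
class Memory:
    session: int
    agent: str
    kind: str       # e.g. "text", "embedding", "image_ref"
    content: str

# Flat-file approach: one string, literal keyword matching only.
flat_log = "fixed login bug\nuser prefers dark mode\n"

def flat_search(log: str, keyword: str) -> list[str]:
    return [line for line in log.splitlines() if keyword in line]

# Structured approach: records filterable on any field.
store = [
    Memory(1, "agent-a", "text", "fixed login bug"),
    Memory(1, "agent-b", "text", "user prefers dark mode"),
    Memory(3, "agent-a", "text", "deploy blocked by failing tests"),
]

def query(records: list[Memory], **filters) -> list[Memory]:
    return [m for m in records
            if all(getattr(m, k) == v for k, v in filters.items())]

print(flat_search(flat_log, "session"))              # [] -- a flat log has no notion of sessions
print([m.content for m in query(store, session=3)])  # ['deploy blocked by failing tests']
```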

  • Your OpenClaw agent is burning 10x more tokens than it needs to. Default OpenClaw memory loads full context on every retrieval: ~10,000 tokens per query, whether your agent needs it or not. ❌ That adds up fast. ✅ Deeplake replaces flat retrieval with tiered loading. ✅ Your agent reads a summary first (~100 tokens), drills into details only when needed. ✅ Result: up to 91% fewer tokens per session.* Same agent. Fraction of the cost. deeplake.ai
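The tiered-loading pattern described above can be sketched as follows. The token numbers come from the post; the class and method names are illustrative, not Deeplake's actual API:

```python
# Sketch of tiered memory loading: read a cheap summary first, pay for
# the full context only when the agent actually drills into it.

SUMMARY_TOKENS = 100     # approximate cost of the summary tier
FULL_TOKENS = 10_000     # approximate cost of a flat full-context load

class TieredMemory:
    def __init__(self, summary: str, detail: str):
        self._summary = summary
        self._detail = detail
        self.tokens_spent = 0

    def summary(self) -> str:
        self.tokens_spent += SUMMARY_TOKENS
        return self._summary

    def detail(self) -> str:
        self.tokens_spent += FULL_TOKENS
        return self._detail

mem = TieredMemory("user prefers Postgres; last deploy failed",
                   "...full session transcript...")

# Most turns only need the summary tier:
_ = mem.summary()
print(mem.tokens_spent)   # 100, vs 10,000 for a flat load every query
```

A flat loader pays the full ~10,000 tokens on every retrieval; the tiered version pays it only on the turns that actually call `detail()`.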

  • Both are Postgres. One was built for apps. The other was built for agents. Neon and Lakebase give you serverless Postgres with pgvector. Great for apps that need vector search. But when your agents need sandboxed environments, GPU-native storage, and a filesystem they can mount: you need a Postgres designed for that. ✅ Deeplake is sandboxed Postgres for AI agents. ✅ Same query language. ✅ Different architecture. Learn more here: deeplake.ai

  • An awesome article by our CTO Sasun Hambardzumyan on how we made Postgres serverless. Read it here: https://lnkd.in/grhZwc4T

    Every AI agent should get its own Postgres in under a second. The rules:
    - spin up per request
    - scale reads and writes
    - drop to zero when idle
    - no state tied to machines
    - keep Postgres, rethink the storage engine

    What we got:
    - ~14s cold start → ~1s
    - Databases in ~200ms
    - Stateless pods, infinite horizontal scale

    Postgres is the interface. Deeplake is storage. DuckDB is execution. The system looks simple in hindsight. But every piece breaks a default assumption. This might be the simplest way to run databases for agents: make compute ephemeral, make storage immutable, let scaling be automatic. Here’s how we built it. Link below.
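The lifecycle rules listed above (ephemeral per-request compute, no state tied to machines, scale to zero when idle) can be sketched with a context manager. Everything here is a toy stand-in, not the actual Deeplake/DuckDB implementation:

```python
# Sketch: per-request database compute backed by shared storage.
# The "pod" exists only for the duration of the request; all durable
# state lives in storage, so any future pod can pick it up.

from contextlib import contextmanager

SHARED_STORAGE: dict = {}   # toy stand-in for durable object storage

@contextmanager
def ephemeral_db(agent_id: str):
    # "Cold start": a fresh, stateless compute pod attaches to the
    # agent's storage snapshot. No state is tied to this machine.
    state = dict(SHARED_STORAGE.get(agent_id, {}))
    try:
        yield state
    finally:
        # Persist results back to storage, then drop compute to zero.
        SHARED_STORAGE[agent_id] = state

# One request spins up a database, writes, and disappears:
with ephemeral_db("agent-42") as db:
    db["last_task"] = "ingest dataset"

# A later request gets a brand-new pod but sees the same state:
with ephemeral_db("agent-42") as db:
    print(db["last_task"])   # ingest dataset
```

The design choice this illustrates: once compute holds no durable state, scaling to zero and horizontal scale-out both become trivial, because any pod is interchangeable with any other.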

  • Robots have been stuck reacting. Not understanding the real world. That’s the bottleneck in last-mile delivery. We combined the Deeplake GPU database with Intel Core Ultra to power real-time VLA (vision-language-action) perception. Result: 9x higher throughput. Robots that don’t just see, but act intelligently. Physical AI just crossed a threshold.

    Intel Business

    Pinkbot’s here to solve last-mile delivery, but the physical AI behind the brand’s fleet needed both power and data to deliver on its vision. Using #IntelCoreUltra Series 3 processors and its Deep Lake GPU database, Activeloop helped Pinkbot increase VLA throughput by 9x and enable robots to better assess their environment in real time. Learn more about Intel Core Ultra Series 3 at http://ms.spr.ly/6044QcFpr

  • Activeloop reposted this

    Last weekend, Deeplake (Activeloop) was proud to sponsor the Intelligence at the Frontier Hackathon at Frontier Tower in SF, organized by Funding the Commons. I also had the chance to serve as one of the judges for the Physical AI track. It was a great experience working with the teams, hearing their ideas, and helping them think through their approaches during the build process. The winning team X-G1 used Deeplake to manage their training data pipeline, enabling team collaboration, dataset ingestion, and fast tensor streaming for model fine-tuning. This allowed them to rapidly iterate during the 2-day hackathon while building a pipeline for the Unitree G1 humanoid robot, moving from teleoperated demonstrations to autonomous policy testing. Congrats to Team X-G1 for taking first place in both challenge tracks: 🤖 Physical AI & Robotics: Data at Scale 🤖 Physical AI & Robotics by NomadicML #physicalAI #deeplake

  • Activeloop reposted this

    Jensen was just talking at GTC about the storage, retrieval and interpretation of multimodal data: video, audio, images, documents. The entire keynote pointed at this problem. Activeloop just shipped the answer with Deeplake. Sandboxed, serverless Postgres that spins up with every agent and handles all of it natively. This is the infrastructure layer Jensen was describing.

  • Deeplake is now GPU-pilled. Excited to announce The GPU Database. Stay tuned!

    Jensen just announced the start of the GPU-accelerated database era at #GTC26. AI runs on GPUs. But your data still runs on CPUs. That mismatch is breaking the AI stack. For the last two months, we’ve been busy solving this problem. Excited to announce Deeplake becoming the GPU Database. Deeplake brings your database directly onto the GPU, eliminating the CPU <-> GPU bottleneck for AI workloads. The pendulum has swung. GPU-native queries are now 10× faster and an order of magnitude cheaper to run. Last week we even put up a 101 banner in San Francisco. And this is just the beginning. We’re planning a huge set of announcements starting this week. Stay tuned.

