"We need the open source community to come in and even tweak the architecture, help us define how these models should change." At #NVIDIAGTC, Ranjay Krishna, Ai2 Director of Multimodal and Embodied AI, discussed the prevalence of open source AI and the role of the developer community.
About us
Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.
- Website
- http://nvda.ws/2nfcPK3
- Industry
- Computer Hardware Manufacturing
- Company size
- 10,001+ employees
- Headquarters
- Santa Clara, CA
Updates
-
NVIDIA NemoClaw is an open source stack that adds privacy and security controls to OpenClaw. Learn how to get started with NemoClaw on DGX Spark, and why DGX Spark is an ideal platform for running your own autonomous agent locally.
DGX Spark Live: Getting Started with NVIDIA NemoClaw
-
In last week’s livestream we covered getting started with NVIDIA NemoClaw for building long-running agents, and the most common question by far was: "How do I actually control what my agent can do?" Before we can answer that, it helps to know which layer of the stack is doing what, because NemoClaw, OpenShell, and OpenClaw each contribute a distinct piece of the picture, and that distinction matters when you're configuring security. This session cuts through the confusion and goes hands-on with NVIDIA OpenShell's policy system as it operates inside a NemoClaw deployment.
What you'll learn:
- What each layer actually does: OpenClaw is the agent; OpenShell is the runtime that enforces sandbox boundaries (network, filesystem, process) out-of-process, so policies hold even if the model misbehaves; and NemoClaw is the distribution that wires them together with onboarding, inference routing, and the hardened blueprint that ships your policy YAML.
- How to read, write, and apply OpenShell network policies: walk through the deny-by-default model, how to allow specific hosts and API paths per binary, how unlisted destinations are surfaced to the operator in real time for approval, and how to hot-reload a policy mid-session without restarting the sandbox.
- How to configure filesystem and process restrictions: understand capability drops, the least-privilege Dockerfile, and blueprint digest verification, so you have a reproducible, auditable baseline and know exactly what your agent can and can't touch on the host.
Join us live, bring your questions about securing agents, and follow along as we secure an agent deployment together in real time.
Configure Policies & Access Controls for Autonomous Agents | Nemotron Labs
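To make the deny-by-default idea concrete, here is a minimal sketch of a per-binary host/path allowlist check in the spirit the session describes. The policy shape, names, and function are purely illustrative assumptions for this post, not OpenShell's or NemoClaw's actual format or API.

```python
# Illustrative only: a deny-by-default network policy check.
# The POLICY structure and is_allowed() helper are hypothetical,
# not the real OpenShell policy schema.

# Per-binary allowlists of (host, path-prefix) pairs.
POLICY = {
    "curl": [
        ("api.github.com", "/repos/"),
        ("pypi.org", "/simple/"),
    ],
}

def is_allowed(binary: str, host: str, path: str) -> bool:
    """Deny by default: a request passes only if its binary has an
    entry whose host matches exactly and whose path prefix matches."""
    for allowed_host, path_prefix in POLICY.get(binary, []):
        if host == allowed_host and path.startswith(path_prefix):
            return True
    # In the real system, unlisted destinations would be surfaced
    # to the operator for approval rather than silently dropped.
    return False

print(is_allowed("curl", "api.github.com", "/repos/nvidia/x"))  # True
print(is_allowed("curl", "evil.example.com", "/"))              # False
print(is_allowed("wget", "pypi.org", "/simple/"))               # False: binary not listed
```

Everything starts denied; each allowance is an explicit (binary, host, path-prefix) grant, which is what makes the resulting policy file auditable.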
-
🦞 Learn how to get started with NVIDIA NemoClaw on DGX Spark -- an ideal platform for running autonomous agents locally. NemoClaw is an open source stack that adds privacy and security controls to OpenClaw. 📅 Friday, April 3 at 11 a.m. Pacific 📌YouTube: https://nvda.ws/4s7GWQB 📌LinkedIn: https://nvda.ws/4svriyJ
-
Catch the high-energy GTC panel with top NVIDIA researchers, hosted by Károly Zsolnai-Fehér of Two Minute Papers, now available on YouTube. 📹 https://nvda.ws/4m9jHUS Hold on to your papers, fellow scholars! 🙌 They dive into the latest breakthroughs in AI, spotlight the most promising emerging technical trends, and candidly explore the biggest open challenges facing the field today.
Sanja Fidler | VP, AI Research
Yejin Choi | Sr. Research Director
Károly Zsolnai-Fehér | Researcher and Founder | Two Minute Papers
Yashraj Narang | Sr. Robotics Research Manager
Marco Pavone | Sr. Research Director
-
🙌 Congrats Google DeepMind, Google AI for Developers on the release of your Gemma 4 models!🎉 The new multimodal and multilingual models are built for fast, efficient, and secure AI across devices – and optimized to run locally on NVIDIA RTX, RTX PRO, DGX Spark, and Jetson. 👉 Prototype the 31B model and start experimenting for free on https://lnkd.in/gttfrsCb 🔗Check out the details to get started in our Technical Blog: https://lnkd.in/gC8iTd2m
Gemma 4 is here. 💻 We’ve built a new family of open models based on the same world-class research and tech as Gemini 3. “Open” means the model weights are yours to download, customize, and run on your own hardware.
⚖️ Four sizes: high-performance versions for workstations (31B Dense & 26B MoE) and highly optimized “Edge” versions (E4B & E2B) built specifically for mobile.
🧠 Advanced reasoning: capable of multi-step planning and deep logic with native vision and audio support.
🤖 Built for agents: native tool use lets you build autonomous systems that can actually do things, like search databases or trigger APIs.
🔒 Apache 2.0 license: complete flexibility to build, fine-tune, and deploy however you want.
Start building with Gemma 4 now in Google AI Studio. You can also download the model weights from Hugging Face, Kaggle, or Ollama. Find out more → https://goo.gle/4cb8LBE
-
ICYMI: We just dropped a step-by-step tutorial on installing the new NemoClaw open source reference stack on DGX Spark. You'll learn how NemoClaw, OpenShell, and OpenClaw fit together to provide a safer environment for executing autonomous agents. Watch 👉 https://nvda.ws/4sc9N6a
-
Fine-tuning multimodal AI just got a whole lot easier. With the latest release of NVIDIA TAO, developers can accelerate post-training for reasoning VLM and embedding models using fine-tuning microservices (FTMS) with built-in recipes.
New features:
⚡ NVIDIA Cosmos Reason VLM, fine-tunable with AutoML in just a few config tweaks
⚡ New multimodal embeddings, including Cosmos Embed1 (video/text) and NVIDIA RADIO-CLIP (image/text)
⚡ NVPanoptix3D for 3D panoptic reconstruction from RGB images, now available on Hugging Face with FTMS fine-tuning
🔗 https://nvda.ws/4cktgNe
-
It's been two weeks since we unveiled NemoClaw live at #NVIDIAGTC 🦞 Thanks to everyone who has downloaded and contributed. What are you building?