NVIDIA AI

Computer Hardware Manufacturing

Santa Clara, CA · 1,719,393 followers

About us

Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.

Website
http://nvda.ws/2nfcPK3
Industry
Computer Hardware Manufacturing
Company size
10,001+ employees
Headquarters
Santa Clara, CA

Updates

  • NVIDIA AI

    In last week’s livestream we covered getting started with NVIDIA NemoClaw for building long-running agents, and the most common question by far was: "How do I actually control what my agent can do?" Before we can answer that, it helps to know which layer of the stack does what, because NemoClaw, OpenShell, and OpenClaw each contribute a distinct piece of the picture, and that distinction matters when you're configuring security. This session cuts through the confusion and goes hands-on with NVIDIA OpenShell's policy system as it operates inside a NemoClaw deployment.

    What you'll learn:
    - What each layer actually does: OpenClaw is the agent; OpenShell is the runtime that enforces sandbox boundaries (network, filesystem, process) out-of-process, so policies hold even if the model misbehaves; and NemoClaw is the distribution that wires them together with onboarding, inference routing, and the hardened blueprint that ships your policy YAML.
    - How to read, write, and apply OpenShell network policies: walk through the deny-by-default model, how to allow specific hosts and API paths per binary, how unlisted destinations are surfaced to the operator in real time for approval, and how to hot-reload a policy mid-session without restarting the sandbox.
    - How to configure filesystem and process restrictions: understand capability drops, the least-privilege Dockerfile, and blueprint digest verification, so you have a reproducible, auditable baseline and know exactly what your agent can and can't touch on the host.

    Join us live, bring your questions about securing agents, and follow along as we walk through securing an agent deployment together in real time.
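    As a rough illustration only, a deny-by-default network policy along the lines described above might look like the sketch below. Every field name, host, and path here is a hypothetical placeholder, not OpenShell's actual schema; consult the shipped blueprint for the real format.

    ```yaml
    # Hypothetical sketch of a deny-by-default network policy.
    # Anything not listed is blocked and surfaced to the operator for approval.
    default: deny
    rules:
      - binary: /usr/local/bin/agent        # policies are scoped per binary
        allow:
          - host: api.example.com           # placeholder host
            paths:
              - /v1/completions             # only these API paths are reachable
      - binary: /usr/bin/git
        allow:
          - host: github.com
            paths:
              - /*
    on_unlisted: prompt_operator            # real-time approval flow
    ```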

    Configure Policies & Access Controls for Autonomous Agents | Nemotron Labs

  • NVIDIA AI

    Catch the high-energy GTC panel with top NVIDIA researchers, hosted by Károly Zsolnai-Fehér of Two Minute Papers, now available on YouTube. 📹 https://nvda.ws/4m9jHUS Hold on to your papers, fellow scholars! 🙌 They dive into the latest breakthroughs in AI, spotlight the most promising emerging technical trends, and candidly explore the biggest open challenges facing the field today.

    Panelists:
    - Sanja Fidler | VP, AI Research
    - Yejin Choi | Sr. Research Director
    - Károly Zsolnai-Fehér | Researcher and Founder, Two Minute Papers
    - Yashraj Narang | Sr. Robotics Research Manager
    - Marco Pavone | Sr. Research Director

  • NVIDIA AI

    🙌 Congrats to Google DeepMind and Google AI for Developers on the release of the Gemma 4 models! 🎉 The new multimodal and multilingual models are built for fast, efficient, and secure AI across devices, and are optimized to run locally on NVIDIA RTX, RTX PRO, DGX Spark, and Jetson. 👉 Prototype the 31B model and start experimenting for free at https://lnkd.in/gttfrsCb 🔗 Check out the details to get started in our Technical Blog: https://lnkd.in/gC8iTd2m

    Reshared from Google DeepMind:

    Gemma 4 is here. 💻 We’ve built a new family of open models based on the same world-class research and tech as Gemini 3. “Open” means the model weights are yours to download, customize, and run on your own hardware.
    ⚖️ Four sizes: high-performance versions for workstations (31B Dense & 26B MoE) and highly optimized “Edge” versions (E4B & E2B) built specifically for mobile.
    🧠 Advanced reasoning: capable of multi-step planning and deep logic, with native vision and audio support.
    🤖 Built for agents: native tool use lets you build autonomous systems that can actually do things, like search databases or trigger APIs.
    🔒 Apache 2.0 License: complete flexibility to build, fine-tune, and deploy however you want.
    Start building with Gemma 4 now in Google AI Studio. You can also download the model weights from Hugging Face, Kaggle, or Ollama. Find out more → https://goo.gle/4cb8LBE
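    The "native tool use" pattern mentioned above generally works like this: the model emits a structured tool call, the host program dispatches it to a registered function, and the result is fed back into the conversation. A minimal, model-free sketch of that dispatch loop is below; the registry, call format, and `search_database` tool are illustrative placeholders, not Gemma's actual API.

    ```python
    # Minimal sketch of an agent tool-dispatch loop. The model is stubbed out:
    # we hand-write the structured call a model would emit during decoding.
    from typing import Any, Callable, Dict

    TOOLS: Dict[str, Callable[..., Any]] = {}

    def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
        """Register a function so the model may invoke it by name."""
        TOOLS[fn.__name__] = fn
        return fn

    @tool
    def search_database(query: str) -> list:
        # Placeholder: a real tool would query an actual datastore.
        return [row for row in ["alpha", "beta", "gamma"] if query in row]

    def dispatch(call: Dict[str, Any]) -> Any:
        """Execute one model-emitted call like {'name': ..., 'arguments': {...}}."""
        fn = TOOLS[call["name"]]
        return fn(**call["arguments"])

    # In a real agent, this dict would be parsed from the model's output.
    result = dispatch({"name": "search_database", "arguments": {"query": "ga"}})
    print(result)
    ```

    The result would normally be serialized and appended to the model's context so it can reason over the tool's output before replying.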

  • NVIDIA AI

    Fine-tuning multimodal AI just got a whole lot easier. With the latest release of NVIDIA TAO, developers can accelerate post-training for reasoning VLM and embedding models using fine-tuning microservices (FTMS) with built-in recipes.
    New features:
    ⚡ NVIDIA Cosmos Reason VLM, fine-tunable with AutoML in just a few config tweaks
    ⚡ New multimodal embeddings, including Cosmos Embed1 (video/text) and NVIDIA RADIO-CLIP (image/text)
    ⚡ NVPanoptix3D for 3D panoptic reconstruction from RGB images, now available on Hugging Face with FTMS fine-tuning
    🔗 https://nvda.ws/4cktgNe
