NVIDIA Robotics


Computer Hardware Manufacturing

Santa Clara, California 478,936 followers

Inspiring visionaries and developers to create the next gen of AI-driven robots and explore the world of physical AI.

About us

The NVIDIA Robotics platform accelerates the development of AI-driven robots, streamlining processes from design and simulation to deployment. It enables key functions like navigation, mobility, grasping, and vision, supporting robotics across industries such as manufacturing, agriculture, logistics, and healthcare.

Website
https://www.nvidia.com/en-us/industries/robotics/
Industry
Computer Hardware Manufacturing
Company size
10,001+ employees
Headquarters
Santa Clara, California

Updates

  • NVIDIA Robotics reposted this

    View profile for Jim Fan

    The power of the Claw, in the palm of a robot hand. Agentic robotics is here! Today, we open-source CaP-X: vibe agents, alive in the physical world. They incarnate as robot arms and humanoids with a rich set of perception APIs and actuation APIs, and auto-synthesize skill libraries as they go. CaP-X is a strict superset of our old stack, because policies like VLAs are “just” API calls as well. It solves many tasks zero-shot that a learned policy would struggle with.

    And we are doing much more than vibing. CaP-X is our most systematic, scientific study of agentic robotics so far:

    - We build a comprehensive agentic toolkit: perception (SAM3 segmentation, Molmo pointing, depth, point cloud), control (IK solvers, grasp planner, navigation), and visualization (EEF, mask overlays) that works across different robots.
    - CaP-Gym: the LLM’s first *Physical Exam*! 187 manipulation tasks across RoboSuite, LIBERO-PRO, and BEHAVIOR. Tabletop, bimanual, and mobile manipulation. Sim and real. Can’t wait to see the gradients flow from CaP-Gym to the next wave of frontier LLM releases.
    - CaP-Bench: we benchmark 12 frontier LLMs/VLMs (Gemini, GPT, Opus, Qwen, DeepSeek, Kimi, and more) across 8 evaluation tiers, systematically varying API abstraction level, agentic harness, and visual grounding method. Lots of insights in our paper.
    - CaP-Agent0: a training-free agentic harness that matches or exceeds human expert code on 4 out of 7 tasks without task-specific tuning.
    - CaP-RL: if you get a gym, you get RL ;) A 7B OSS model jumps from 20% to 72% success after only 50 training iterations, and the synthesized programs transfer to real robots with a minimal sim-to-real gap.

    Three years ago, our team created Voyager, one of the earliest agentic AI systems to play and learn in Minecraft continuously. Its key ideas — skill libraries, self-reflection loops, and in-context planning — have since influenced many modern agentic designs. Today, the agent graduates from Minecraft and gets a real job.

    It’s April Fools’ Day, but this Claw is getting its hands dirty for real! As usual, we open-source everything under the MIT license.
    Project: https://capgym.github.io/
    Code: https://lnkd.in/gtw7VUUU
    Paper: https://lnkd.in/g96CW3Xs
    CaP-X is brought to you by NVIDIA, Berkeley, Stanford, and CMU! I'd like to thank my co-advisors and collaborators who poured their hearts into this work.
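    The skill-library idea the post traces back to Voyager (store synthesized programs, retrieve the most relevant ones as in-context examples for the next task) can be sketched in a few lines of plain Python. Everything below is a hypothetical illustration under my own names — it is not the CaP-X API, and the keyword-overlap retrieval stands in for the embedding similarity a real system would use:

    ```python
    # Minimal sketch of a Voyager-style skill library: an agent synthesizes
    # small programs ("skills"), stores them with a description, and retrieves
    # the most relevant ones as in-context examples for the next task.
    # All names here are hypothetical, not the CaP-X API.
    from dataclasses import dataclass, field

    @dataclass
    class Skill:
        name: str
        description: str
        code: str  # the synthesized program, kept as text for in-context reuse

    @dataclass
    class SkillLibrary:
        skills: list[Skill] = field(default_factory=list)

        def add(self, skill: Skill) -> None:
            self.skills.append(skill)

        def retrieve(self, task: str, k: int = 2) -> list[Skill]:
            # Toy relevance score: keyword overlap between the task and each
            # skill description. A real system would use embedding similarity.
            words = set(task.lower().split())
            scored = sorted(
                self.skills,
                key=lambda s: -len(words & set(s.description.lower().split())),
            )
            return scored[:k]

    lib = SkillLibrary()
    lib.add(Skill("pick_cube", "grasp a cube on the table", "def pick_cube(): ..."))
    lib.add(Skill("open_drawer", "pull open a drawer handle", "def open_drawer(): ..."))
    top = lib.retrieve("grasp the red cube", k=1)
    print(top[0].name)  # pick_cube
    ```

    The point of keeping `code` as text is that retrieved skills can be pasted straight into the planner's prompt, which is how in-context skill reuse works in Voyager-style designs.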

  • NVIDIA Robotics reposted this

    With 10,000x more compute, Intuitive's da Vinci 5 supports new analytics capabilities that draw on data, video, and kinematics. Powered by NVIDIA Blackwell, Isaac, and Omniverse, these real-time AI insights, combined with digital twin simulations, enable continuous learning in the operating room and help surgeons refine their practice. Read the article: https://lnkd.in/epFm4StQ Watch Intuitive Surgical's GTC Session: https://nvda.ws/48jLcFh

  • Exciting news for Jetson developers 🎉 Gemma 4 is now on Jetson. Google DeepMind and Google AI for Developers’ latest multimodal, multilingual models run across the full Jetson platform—from Orin Nano to Thor—bringing on-device AI to robotics, edge, and embedded systems. Cut latency, manage costs, and keep sensitive data secure. Check out the tutorial and download the container to get started: https://lnkd.in/gdzupWaZ

    View organization page for Google DeepMind


    Gemma 4 is here. 💻 We’ve built a new family of open models based on the same world-class research and tech as Gemini 3. “Open” means the model weights are yours to download, customize, and run on your own hardware.
    ⚖️ Four sizes: high-performance versions for workstations (31B Dense & 26B MoE) and highly optimized “Edge” versions (E4B & E2B) built specifically for mobile.
    🧠 Advanced reasoning: capable of multi-step planning and deep logic, with native vision and audio support.
    🤖 Built for agents: native tool use lets you build autonomous systems that can actually do things, like search databases or trigger APIs.
    🔒 Apache 2.0 license: complete flexibility to build, fine-tune, and deploy however you want.
    Start building with Gemma 4 now in Google AI Studio. You can also download the model weights from Hugging Face, Kaggle, or Ollama. Find out more → https://goo.gle/4cb8LBE
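    The "native tool use" bullet describes the standard agent loop: the model emits a structured tool call, the harness looks the tool up, executes it, and returns the result. A minimal, model-free sketch — the tool registry, the JSON call format, and the stubbed model are all my own illustrative assumptions, not Gemma's actual tool-calling schema:

    ```python
    # Minimal tool-use dispatch loop, sketched without a real model.
    # The "model" here is a stub that emits a structured tool call; a real
    # harness would get this JSON from the LLM's tool-calling output.
    import json

    # Hypothetical tool registry: name -> callable.
    TOOLS = {
        "search_db": lambda query: [r for r in ["orin", "thor"] if query in r],
        "add": lambda a, b: a + b,
    }

    def stub_model(prompt: str) -> str:
        # Stand-in for an LLM response: a JSON-encoded tool call.
        return json.dumps({"tool": "search_db", "args": {"query": "thor"}})

    def run_agent_step(prompt: str) -> object:
        call = json.loads(stub_model(prompt))
        tool = TOOLS[call["tool"]]   # look up the requested tool
        return tool(**call["args"])  # execute it and return the result

    result = run_agent_step("which Jetson boards match 'thor'?")
    print(result)  # ['thor']
    ```

    In a real harness the result would be appended to the conversation so the model can decide the next step; the dispatch pattern itself is the same.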

  • Congrats to Google DeepMind and Google AI for Developers teams on your launch of Gemma 4. Jetson developers can now run these new multimodal, multilingual models at the edge—from Jetson Orin Nano all the way up to Jetson Thor—to cut latency, manage costs, and keep sensitive data secure on device. Whether you're building for robotics, smart machines, or industrial automation, Gemma 4 brings frontier intelligence to the edge. See our technical blog for details: https://lnkd.in/gC8iTd2m

  • View organization page for NVIDIA Robotics


    Build a fully local physical AI pipeline on DGX Spark. 💡 Join the livestream on Wednesday, April 1 @ 11 AM PT with Clayton Littlejohn and Damien Fagnou. Learn how to:
    💻 Install & configure NemoClaw locally with Ollama
    🧩 Use a Nemotron agent to orchestrate NuRec for neural 3D reconstruction
    🌐 Visualize results in Omniverse via OpenUSD + Isaac Sim

    Build a Physical AI Pipeline on DGX Spark

  • The next frontier of AI is physical. 🤖 From simulation to synthetic data, developers are training robots to understand the real world with NVIDIA Cosmos, the Isaac simulation frameworks, and the open-source physics engine Newton. Watch the video to see how partners like FOXCONN HON HAI TECHNOLOGY, Hexagon Robotics, Humanoid, Noble Machines, and Skild AI are building the future. 🎥 https://nvda.ws/4tj3Ntp
