Apple Vision Pro can be used to control Nvidia-powered humanoid robots.

An Apple Vision Pro controlling a robot [Nvidia]
Apple Vision Pro can be used to control and monitor humanoid robots through a new set of Nvidia services aimed at developers working on humanoid robot projects.

The control system is one of the many challenges developers face when building humanoid robots, and Nvidia offers a variety of tools to help with robotic simulation, including some that assist with control. The suite of platforms and models, which Nvidia provides to major robot manufacturers and software developers, is meant to train a new breed of humanoid robots.

The NIM microservices and frameworks are intended for simulation and training, while the Nvidia OSMO orchestration service handles multi-stage robotics workloads, including AI-enabled teleoperation workflows. In these workflows, spatial computing devices and headsets like the Apple Vision Pro are used not only to view data but also to control hardware.

Jensen Huang, founder and CEO of Nvidia, said the next wave of AI is robotics, and that humanoid robots are one of its most exciting developments. "We are advancing the entire NVIDIA robot stack, allowing humanoid developers, companies and organizations to use the platforms and acceleration libraries that best suit their needs."

The NIM microservices use Nvidia's inference software to reduce deployment times. Two of these microservices were designed to assist developers with simulation workflows within the Nvidia Isaac Sim reference application.

The MimicGen NIM microservice is the one used to control hardware with the Apple Vision Pro or other spatial computing devices. It generates synthetic movement data for the robot based on recorded data, in effect translating the movements of the Apple Vision Pro wearer into movements for the robot.

Videos and images demonstrate that this goes beyond moving a camera with the headset's movement. The Apple Vision Pro's sensors are used to record hand gestures and movements, so a user can both watch the robot move and control its hands and arms.

While a humanoid robot could try to mimic gestures exactly, Nvidia's systems can instead infer what the user wants to do. Mimicking hand movements exactly could be dangerous, since the user has no tactile feedback on what the robot is holding. A simplified sketch of this intent-based retargeting idea appears below.

Another teleoperation workflow, also demonstrated at SIGGRAPH, allowed developers to create large quantities of motion and perception data from just a few remotely captured demonstrations.

An Apple Vision Pro was used to capture the hand movements of a human. These recordings were fed into the MimicGen NIM microservice and Nvidia Isaac Sim, which generated synthetic datasets. The developers were then able to train Nvidia's Project GR00T humanoid model on a combination of synthetic and real data, a process that should reduce the time and cost of creating training data. A sketch of that data-multiplication idea also appears below.

Alex Gu, CEO of robotics platform maker Fourier, said that developing humanoid machines is a complex process requiring enormous amounts of real-world data. "NVIDIA's new simulation tools and generative AI developer toolkits will help bootstrap our model development workflows."

The Nvidia Humanoid Robot Developer Program offers the microservices as well as access to models, the OSMO managed robotics service, and other frameworks. The company provides access to humanoid robot, software, and hardware manufacturers.
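To make the intent-inference idea concrete, here is a minimal, hypothetical Python sketch. It does not use Nvidia's or Apple's APIs: the `HandSample` and `IntentRetargeter` names are invented stand-ins, and the approach shown (smoothing wrist motion, clamping to a safe workspace, and treating a pinch as a discrete gripper command) is just one plausible way such retargeting could work.

```python
# Hypothetical sketch: mapping headset hand-tracking samples to robot
# end-effector commands by inferring intent rather than copying poses.
# All class and function names here are illustrative stand-ins, not
# part of Nvidia's NIM microservices or Apple's visionOS APIs.

from dataclasses import dataclass

import numpy as np


@dataclass
class HandSample:
    """One hand-tracking sample, as a visionOS app might stream it."""
    wrist_pos: np.ndarray   # (3,) wrist position in meters
    pinch_gap: float        # thumb-to-index distance in meters


class IntentRetargeter:
    """Turns raw hand samples into smoothed, clamped robot commands.

    Instead of mirroring every jitter of the hand (risky without
    tactile feedback), it low-pass filters the motion and maps the
    pinch gesture to a discrete gripper command.
    """

    def __init__(self, workspace_min, workspace_max,
                 smoothing=0.2, pinch_close_m=0.02):
        self.lo = np.asarray(workspace_min, dtype=float)
        self.hi = np.asarray(workspace_max, dtype=float)
        self.alpha = smoothing              # exponential smoothing factor
        self.pinch_close_m = pinch_close_m  # pinch gap meaning "grip"
        self._ee_target = (self.lo + self.hi) / 2.0  # start mid-workspace

    def step(self, sample: HandSample):
        # Smooth the wrist position so the robot tracks intent, not tremor.
        self._ee_target = ((1 - self.alpha) * self._ee_target
                           + self.alpha * sample.wrist_pos)
        # Clamp to the robot's safe workspace.
        self._ee_target = np.clip(self._ee_target, self.lo, self.hi)
        # Interpret a tight pinch as "close gripper": a discrete intent,
        # not a millimeter-accurate copy of the fingers.
        gripper_closed = sample.pinch_gap < self.pinch_close_m
        return self._ee_target.copy(), gripper_closed


if __name__ == "__main__":
    rt = IntentRetargeter(workspace_min=[-0.5, -0.5, 0.0],
                          workspace_max=[0.5, 0.5, 1.0])
    # A fake noisy sample standing in for a streamed headset reading.
    target, closed = rt.step(HandSample(
        wrist_pos=np.array([0.31, -0.12, 0.74]), pinch_gap=0.015))
    print(target, closed)  # smoothed target and "close gripper" intent
```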
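The data-multiplication step can be sketched the same way. The hypothetical snippet below illustrates the published MimicGen concept of replaying a few recorded demonstrations against randomized object placements to produce many synthetic ones; it does not call the actual MimicGen NIM microservice or Isaac Sim APIs, and `make_synthetic_demos` is an invented name.

```python
# Hypothetical sketch of the MimicGen-style idea: multiply a handful
# of recorded demonstrations into a large synthetic dataset by
# replaying each demo relative to randomized object placements.
# This illustrates the concept only, not Nvidia's microservice API.

import numpy as np

rng = np.random.default_rng(0)


def make_synthetic_demos(demo_ee_traj, demo_obj_pos,
                         n_variants=100, jitter_m=0.10):
    """Generate trajectory variants from one recorded demonstration.

    demo_ee_traj: (T, 3) recorded end-effector positions (e.g. derived
                  from Apple Vision Pro hand tracking).
    demo_obj_pos: (3,) position of the manipulated object in the demo.
    """
    # Express the demo relative to the object it manipulated.
    rel_traj = demo_ee_traj - demo_obj_pos

    synthetic = []
    for _ in range(n_variants):
        # Randomize where the object sits in the new (simulated) scene.
        new_obj_pos = demo_obj_pos + rng.uniform(-jitter_m, jitter_m, 3)
        # Replay the object-relative motion at the new object location.
        new_traj = rel_traj + new_obj_pos
        synthetic.append((new_obj_pos, new_traj))
    return synthetic


if __name__ == "__main__":
    # One toy 5-step demo reaching toward an object at (0.4, 0.0, 0.2).
    demo = np.linspace([0.0, 0.0, 0.5], [0.4, 0.0, 0.2], num=5)
    variants = make_synthetic_demos(demo, np.array([0.4, 0.0, 0.2]))
    print(len(variants), "synthetic demos from 1 recording")
```

In a real pipeline the randomization would happen inside a physics simulator so perception data (camera frames, depth) is regenerated along with the motion, which is the role Isaac Sim plays in the workflow described above.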

 
