Humanoid Robot Masters Arm Control With AI

A significant leap forward in humanoid robotics has been demonstrated, showcasing an AI-powered system that allows for precise and intuitive control of robotic arms. This development bypasses complex, low-level motor commands, opting instead for a more accessible, data-driven approach that could accelerate the integration of humanoid robots into everyday tasks.

Bridging the Gap in Robotic Dexterity

Controlling a robotic arm with the dexterity of a human hand has long been a hard problem. Traditionally, achieving precise movements required intricate programming of individual motors, accounting for how the joints interact to produce motion in Cartesian space. This process is not only time-consuming but also demands a high level of technical expertise.

The Unitree G1 humanoid robot, a platform gaining attention for its potential in research and development, faced this very hurdle. While its locomotion and navigation, supported by an onboard LiDAR unit, were progressing, controlling its arms and hands remained a bottleneck. The available SDK, while offering some high-level arm controls, primarily exposed joint-level commands. This meant that to move the arm’s end-effector (the hand) in a desired direction, such as “up” or “left” in Cartesian space, developers had to translate that abstract command into a series of precise adjustments across the arm’s seven motors.

This joint-level control is problematic because manipulating one motor affects the position and movement of others. The complexity multiplies with each joint, making it difficult to achieve smooth, predictable movements without extensive simulation, reinforcement learning (RL), or manual tuning. The developer highlights the arduous nature of RL, with its laborious reward-function tuning, and the persistent challenge of “sim-to-real” transfer, where models trained in simulation don’t perform as expected on the physical robot.
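To see why joint-space control is awkward, consider a toy two-link planar arm (a deliberate simplification of the G1’s seven-motor arm, not its actual kinematics): nudging any single joint shifts the hand along both axes at once, so even a plain “up” command requires coordinated changes across several joints. A minimal sketch:

```python
import math

def fk_2link(theta1, theta2, l1=0.3, l2=0.25):
    """Forward kinematics of a planar 2-link arm: joint angles -> hand (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# There is no single joint that means "up": nudging either joint
# moves the hand along both axes at once.
x0, y0 = fk_2link(0.4, 0.6)
x1, y1 = fk_2link(0.45, 0.6)   # shoulder nudged by 0.05 rad
x2, y2 = fk_2link(0.4, 0.65)   # elbow nudged by 0.05 rad
print(f"shoulder nudge moves hand by ({x1 - x0:+.3f}, {y1 - y0:+.3f})")
print(f"elbow nudge moves hand by    ({x2 - x0:+.3f}, {y2 - y0:+.3f})")
```

Inverting this mapping (Cartesian target in, joint angles out) is exactly the translation step the SDK left to developers; with seven coupled joints it is far harder than in this two-joint toy.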

A Novel Approach: Learning from Demonstration

Instead of resorting to traditional, complex AI training methods, the developers explored a simpler, more intuitive approach: learning from demonstration. The core idea was to simply “show” the robot how to perform desired actions.

The process involved manually moving the robot’s arm through a series of specific actions – up, down, left, right, forward, and backward. For each action, the corresponding joint positions were recorded. Remarkably, the system achieved good results with as few as 50 samples per action. This data, consisting of the command (e.g., “up”), the initial joint positions, and the target end joint positions, was then used to train a simple machine learning model.

The training utilized a basic Multi-Layer Perceptron (MLP) with two hidden layers, each containing 32 neurons, implemented using scikit-learn. The model takes the starting joint positions and the desired action command as input and outputs the target joint positions to achieve that action. The simplicity of this model, combined with the surprisingly effective demonstration data, allowed the robot to learn to control its arm with a level of dexterity previously requiring much more complex methods.
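Under the stated settings, training might look like the following sketch: a scikit-learn `MLPRegressor` with two hidden layers of 32 neurons, fed the start joints plus a one-hot action code and regressing the target joints. The synthetic data here (each action offsetting one joint) is a stand-in for the real demonstrations, not the actual recordings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

ACTIONS = ["up", "down", "left", "right", "forward", "backward"]
NUM_JOINTS = 7  # the G1 arm's seven motors, per the article

def encode(start_joints, action):
    """Concatenate joint angles with a one-hot action code."""
    onehot = [1.0 if a == action else 0.0 for a in ACTIONS]
    return np.concatenate([start_joints, onehot])

# Synthetic stand-in for the ~50 recorded demos per action: here each
# action simply offsets one joint, a mapping the small MLP can learn.
rng = np.random.default_rng(0)
X, y = [], []
for i, action in enumerate(ACTIONS):
    for _ in range(50):
        start = rng.uniform(-1.0, 1.0, NUM_JOINTS)
        end = start.copy()
        end[i] += 0.3
        X.append(encode(start, action))
        y.append(end)

# Two hidden layers of 32 neurons each, as described in the article.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(np.array(X), np.array(y))

# Predict target joint positions for an "up" command from a neutral pose.
target = model.predict(encode(np.zeros(NUM_JOINTS), "up").reshape(1, -1))[0]
print(target.round(2))
```

On the robot, the predicted vector would then be sent to the joint-level SDK commands, closing the loop from abstract command to motor positions.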

Beyond Arm Control: Other Improvements

Alongside the arm control breakthrough, several other enhancements were made to the Unitree G1 system:

  • GitHub Repository: A significant portion of the development files have been uploaded to GitHub, making them accessible to the growing community of G1 users, including those new to programming.
  • Navigation Filter: A filter was implemented to ignore sensor returns in the immediate vicinity of the LiDAR unit, preventing the robot from registering its own body parts as obstacles during navigation and thus improving the accuracy of its occupancy grid.
  • Occupancy Grid Accuracy: The occupancy grid, which represents the robot’s understanding of its environment, has been recalibrated for improved accuracy. This involved correcting for the upside-down mounting of the LiDAR unit and an additional flip introduced by the development environment.
  • Battery Percentage Display: A feature to display the robot’s battery percentage was added, addressing a surprising omission in the original SDK.
  • Increased Walking Speed: The robot can now walk faster by holding the Shift key, effectively doubling its previous speed.
  • Emergency Stop (E-Stop): An E-stop functionality was integrated, accessible via a remote control (pressing L1 + A), providing a critical safety feature to immediately halt the robot’s actions.
  • Waist Control and Balance: Developers found that taking over arm control did not compromise the robot’s balance systems, which continue to manage leg movements. This allows for independent control of the upper body while maintaining stable locomotion.
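The navigation filter described above amounts to discarding any sensor return closer than some radius to the LiDAR origin. A minimal sketch, where the cutoff value is an assumed placeholder rather than the figure actually used on the G1:

```python
import math

SELF_RADIUS_M = 0.4  # assumed cutoff, not the actual value used on the robot

def filter_self_hits(points, radius=SELF_RADIUS_M):
    """Drop LiDAR returns within `radius` of the sensor origin so the robot
    does not map its own body parts as obstacles in the occupancy grid."""
    return [p for p in points if math.dist(p, (0.0, 0.0, 0.0)) >= radius]

# Two near returns (the robot's own torso/arms) and two genuine obstacles.
scan = [(0.1, 0.05, -0.2), (1.5, 0.0, 0.1), (0.3, 0.1, 0.0), (2.0, -1.0, 0.2)]
kept = filter_self_hits(scan)
print(kept)  # only the two far returns survive
```

A fixed radius is the simplest possible self-filter; a fuller solution would mask the robot’s actual body geometry per pose, but the radius approach already stops the arms from appearing as phantom obstacles.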

Why This Matters

This development is crucial for several reasons:

  • Accessibility: By reducing the complexity of robotic arm control from low-level motor commands to a demonstration-based learning approach, it significantly lowers the barrier to entry for developers and researchers. This could democratize humanoid robot development.
  • Practical Applications: The ability to perform precise arm and hand movements is fundamental for robots to interact with the physical world. This breakthrough paves the way for humanoids to perform tasks like grasping objects, manipulating tools, and assisting with chores, moving them closer to practical applications in homes, warehouses, and other environments.
  • Rapid Prototyping: The efficiency of the learning-from-demonstration method, requiring minimal samples, allows for rapid prototyping and iteration of new robotic behaviors.
  • Safety: The inclusion of an E-stop and the ability to maintain balance while controlling the arms are vital steps towards ensuring the safe deployment of increasingly capable humanoid robots.

Future Outlook

The immediate next step for the developers is to leverage this newfound arm control to perform a practical task: retrieving a bottle of water. This demonstration aims to counter skepticism about the usefulness of humanoid robots, showcasing their potential to perform valuable chores. While the simulation environment did not perfectly replicate the robot’s real-world performance, the successful implementation on the physical Unitree G1 robot underscores the viability of this AI-driven approach. The team is confident that with these advancements, the Unitree G1 is now capable of grasping objects, a critical step towards fulfilling more complex tasks.


Source: Unitree G1 – Moving the arms/hands – Dev w/ G1 Humanoid P.3 (YouTube)

Written by

John Digweed
