OVEX TECH
Technology & AI

Humanoid Robot “Jeff” Masters LiDAR and SLAM

The Unitree G1 humanoid robot, affectionately nicknamed “Jeff,” is undergoing significant software advancements, particularly in its perception and navigation capabilities. Recent updates have integrated LiDAR (Light Detection and Ranging) and SLAM (Simultaneous Localization and Mapping) technologies, bringing the robot closer to autonomous operation. These developments, largely driven by extensive use of Large Language Models (LLMs) in the development process, showcase a rapidly evolving AI landscape where complex robotics are becoming more accessible.

Enhanced Perception with LiDAR and SLAM

At the core of these upgrades is the integration of a Livox Mid-360 LiDAR unit. Although the documentation suggests a specific mounting orientation, the G1’s unit is positioned upside down beneath its head, which required a coordinate-frame correction so that the incoming point cloud is interpreted correctly for SLAM. The LiDAR unit, functioning as a self-contained computer with its own IP address, communicates with the robot’s main Jetson board.
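The upside-down correction amounts to rotating each incoming point into the robot’s upright frame. A minimal sketch, assuming the sensor’s forward axis is aligned with the robot’s (the exact axis to flip depends on how the unit is actually mounted):

```python
import numpy as np

# 180-degree rotation about the x (forward) axis: flips y and z.
# Assumption: the sensor's forward axis matches the robot's; the
# axis to rotate about depends on the real mounting geometry.
R_FLIP = np.array([
    [1.0,  0.0,  0.0],
    [0.0, -1.0,  0.0],
    [0.0,  0.0, -1.0],
])

def correct_upside_down(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) LiDAR point cloud from the upside-down
    sensor frame into the robot's upright frame."""
    return points @ R_FLIP.T
```

Applying this once, before any SLAM processing, keeps the rest of the pipeline unaware that the sensor is inverted.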

The process of SLAM allows the robot to build a map of its environment while simultaneously tracking its own location within that map. For the G1, this involves using the LiDAR data, processed through a system called KISS-ICP (Keep It Simple, Stupid Iterative Closest Point), and visualized using Open3D. This approach, implemented largely in Python, enables the robot to generate a point cloud of its surroundings and create an occupancy grid, which represents free space, occupied space, and unknown areas. While this implementation is suitable for indoor environments, the developer notes that mapping larger areas like city blocks would require more sophisticated SLAM algorithms capable of loop closure to prevent drift.
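The point-cloud-to-occupancy-grid step can be sketched in a few lines of NumPy. This is an illustrative simplification of the idea, not the robot’s actual code: cells start unknown, cells along each beam are marked free, and the cell at each return is marked occupied. All parameter names and defaults are assumptions.

```python
import numpy as np

def occupancy_grid(points, resolution=0.05, size=200, z_min=0.1, z_max=1.5):
    """Build a crude 2D occupancy grid from an (N, 3) point cloud.

    Cell values: -1 = unknown, 0 = free, 1 = occupied. The sensor
    sits at the grid centre. Points within a height band count as
    obstacles; the floor and ceiling returns are dropped.
    """
    grid = np.full((size, size), -1, dtype=np.int8)
    centre = size // 2
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    for x, y in points[mask, :2]:
        gx = int(x / resolution) + centre
        gy = int(y / resolution) + centre
        if not (0 <= gx < size and 0 <= gy < size):
            continue
        # Mark cells along the beam as free (coarse ray trace).
        steps = max(abs(gx - centre), abs(gy - centre))
        for t in np.linspace(0.0, 1.0, steps + 1)[:-1]:
            fx = int(round(centre + t * (gx - centre)))
            fy = int(round(centre + t * (gy - centre)))
            grid[fx, fy] = 0
        grid[gx, gy] = 1  # the return itself is an obstacle
    return grid
```

A production system would accumulate such grids over many scans using the SLAM pose estimate, but the free/occupied/unknown distinction is the same.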

Addressing Navigation Challenges

The occupancy grid, derived from SLAM data, is a crucial step towards more intelligent navigation. It allows the robot to understand its environment in terms of traversable and non-traversable areas. The developer has begun experimenting with route planning based on this grid, though acknowledges that this feature requires further refinement. Initial attempts at pathfinding have shown promise, but the system sometimes struggles with complex environments or identifying obstructions correctly, leading to unexpected routes, such as an attempt to move through a wall.
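One way to avoid the through-the-wall routes is to treat unknown cells as untraversable when planning. A minimal breadth-first-search planner over such a grid, purely illustrative, might look like:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    Only cells equal to 0 (known free) are traversable, so the
    planner never routes through occupied (1) or unknown (-1)
    space. Returns a list of (row, col) cells from start to goal,
    or None if the goal is unreachable. Illustrative only.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

BFS gives shortest paths in steps on a uniform grid; a real planner would likely use A* with obstacle inflation so the robot keeps clearance from walls.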

A significant hurdle encountered during development was the integration of Robot Operating System (ROS). The developer reported spending considerable time attempting to configure ROS for LiDAR integration, finding the process more challenging than expected, even compared to previous experiences with setting up CUDA on Ubuntu. This led to the adoption of simpler, Python-based solutions like KISS ICP for a more streamlined development workflow.

Improved User Interaction and Boot Sequence

User interaction has also seen enhancements. The G1’s graphical user interface (GUI) now utilizes PyQt, featuring a more intuitive keyboard control system. This allows for press-and-release commands, making teleoperation smoother and easier compared to the previous press-and-hold with a separate stop command. This improved control is vital for tasks requiring precise movements and fine-tuning robot behavior during testing.
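The press-and-release scheme is essentially a small state machine: pressing a key activates a command, releasing it deactivates it, so no separate stop command is needed. A framework-agnostic sketch of that logic (in the actual GUI this would sit in PyQt’s keyPressEvent/keyReleaseEvent handlers; the key-to-command mapping here is illustrative):

```python
class KeyTeleop:
    """Press-and-release teleop key handling.

    Holding a key keeps its command active; releasing the key stops
    it, replacing the old press-and-hold-plus-stop-command scheme.
    """
    COMMANDS = {"w": "forward", "s": "backward",
                "a": "turn_left", "d": "turn_right"}

    def __init__(self):
        self.active = set()

    def on_press(self, key):
        cmd = self.COMMANDS.get(key)
        if cmd:
            self.active.add(cmd)
        return sorted(self.active)  # commands currently in effect

    def on_release(self, key):
        cmd = self.COMMANDS.get(key)
        self.active.discard(cmd)    # releasing the key stops the command
        return sorted(self.active)
```

One practical wrinkle in a real Qt implementation: auto-repeat key events must be filtered out, or a held key would appear to be released and re-pressed repeatedly.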

A critical software fix addresses the robot’s boot sequence. Previously, upon restarting, the G1 might enter a dormant state before attempting to stand, even if it was already balanced. This could lead to instability and potential falls. The updated logic now checks if the robot is already in a balanced standing position before initiating the full boot sequence, preventing unnecessary movements and improving startup reliability. This fix also resolved several other startup-related issues, contributing to a more robust user experience.
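The fix boils down to one guard at the top of the boot logic. A sketch under assumed names (`StubRobot` and its methods are hypothetical stand-ins; on the real robot the standing check would come from IMU tilt and joint states):

```python
class StubRobot:
    """Hypothetical stand-in for the robot interface, used only to
    illustrate the control flow of the boot fix."""
    def __init__(self, standing):
        self.standing = standing
        self.calls = []
    def is_standing(self):
        return self.standing
    def enter_damping(self):
        self.calls.append("damping")
    def stand_up(self):
        self.calls.append("stand_up")

def boot_sequence(robot):
    """Skip the stand-up routine when the robot is already balanced,
    instead of dropping into a dormant state and standing again."""
    if robot.is_standing():
        return "operational"     # already balanced: nothing to do
    robot.enter_damping()        # safe, compliant posture first
    robot.stand_up()             # then the normal stand-up routine
    return "operational"
```

The guard clause means a restart on an already-standing robot is a no-op, which is what prevents the destabilizing extra movements described above.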

Connectivity and Hardware Considerations

The G1 houses three onboard computers, requiring careful network configuration. The initial development extensively used a wired Ethernet connection due to complexities in establishing reliable wireless communication between the LiDAR unit, the Jetson board, and the main control unit. While attempts were made to transition to Wi-Fi for greater operational range, issues with consistent connectivity, particularly with the RGB camera, led to a temporary reversion to wired connections. Future work aims to resolve these networking challenges to enable untethered operation.

The robot’s camera system, comprising RGB and depth sensors, is positioned on the head. The head’s ability to tilt up and down is currently understood to be for manual camera positioning rather than addressable motor control. The default downward angle of the cameras is explained by the assumption that many common tasks, such as household chores or working on a desk, require looking down. However, the developer notes the potential utility of being able to tilt the cameras upwards for tasks like reaching shelves.

Why This Matters

The advancements in the Unitree G1’s software stack, particularly its successful integration of LiDAR and SLAM, represent a significant step forward in making sophisticated AI and robotics more accessible. The use of LLMs in development and the adoption of Python-based solutions highlight a trend towards democratizing complex technologies. For consumers and researchers, this means more capable and potentially more affordable humanoid robots are on the horizon. The ability for robots to perceive, map, and navigate their environment autonomously is fundamental for applications ranging from domestic assistance and logistics to scientific research and hazardous environment operations. While challenges remain, especially in robust wireless connectivity and advanced path planning, the progress demonstrated by “Jeff” indicates a promising future for practical humanoid robotics.


Source: Unitree G1 LiDAR, SLAM, navigation and control. Dev w/ G1 Humanoid P.2 (YouTube)


Written by John Digweed