Hang Liu 「刘航」

I am a first-year Master's student at UMich, focusing on Robot Learning and Legged Robot Control. I am currently working as a research assistant at the Tsinghua AI & Robot Lab and TSLab, advised by Houde Liu and Linqi Ye. I have also joined a startup, Zerith, where I am responsible for humanoid locomotion controller design.

My research interests lie in Robotics and Reinforcement Learning. I am trying to figure out how to build a truly intelligent humanoid that can learn from humans.

[Highlight]: I am looking for a PhD position starting in Fall 2025/2026. Contact me if you are interested in my work! I am also seeking Internship, Summer Research, and RA opportunities, especially in the area of Learning & Robotics.

| CV | Email | Github | Twitter | Bilibili |

News

  • [09/2024] One paper was accepted by CoRL 2024. See you in Munich!
  • [06/2024] Two papers were accepted by IROS 2024.
  • [05/2024] Participated in the IEEE ICRA Quadruped Robot Challenge.
  • [10/2023] Won the China National Scholarship (top 0.2%).

  Publications

Multi-Brain Collaborative Control for Quadruped Robots
Hang Liu*, Yi Cheng*, Rankun Li, Xiaowen Hu, Linqi Ye, Houde Liu
CoRL 2024

webpage | pdf | abstract |

In quadruped locomotion tasks, the Blind Policy and the Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, suitable for known and structured environments, but it lacks adaptability in complex or unknown environments. The Perceptive Policy uses visual sensors to obtain detailed environmental information, allowing it to adapt to complex terrains, but its effectiveness is limited under occluded conditions, especially when perception fails. Unlike the Blind Policy, the Perceptive Policy is not as robust under these conditions. To address these challenges, we propose a Multi-Brain collaborative system that incorporates the concepts of Multi-Agent Reinforcement Learning and introduces collaboration between the Blind Policy and the Perceptive Policy. By applying this multi-policy collaborative model to a quadruped robot, the robot can maintain stable locomotion even when the perceptual system is impaired or observational data is incomplete. Our simulations and real-world experiments demonstrate that this system significantly improves the robot's passability and robustness against perception failures in complex environments, validating the effectiveness of multi-policy collaboration in enhancing robotic motion performance.

@article{TBD
}

Quadruped robot traversing 3D complex environments
Yi Cheng*, Hang Liu*, Guoping Pan, Linqi Ye, Houde Liu, Bin Liang
IROS 2024 (Oral Pitch)

webpage | abstract | bibtex | arXiv | video |

Traversing 3-D complex environments has always been a significant challenge for legged locomotion. Existing methods typically rely on external sensors such as vision and lidar to preemptively react to obstacles by acquiring environmental information. However, in scenarios like nighttime or dense forests, external sensors often fail to function properly, necessitating robots to rely on proprioceptive sensors to perceive diverse obstacles in the environment and respond promptly. This task is undeniably challenging. Our research finds that methods based on collision detection can enhance a robot's perception of environmental obstacles. In this work, we propose an end-to-end learning-based quadruped robot motion controller that relies solely on proprioceptive sensing. This controller can accurately detect, localize, and agilely respond to collisions in unknown and complex 3D environments, thereby improving the robot's traversability in complex environments. We demonstrate in both simulation and real-world experiments that our method enables quadruped robots to successfully traverse challenging obstacles in various complex environments.

  @inproceedings{go2traverse,
    title={Quadruped robot traversing 3D complex environments},
    author={Yi Cheng and Hang Liu and Guoping Pan and Linqi Ye and Houde Liu and Bin Liang},
    booktitle={arXiv preprint arXiv:2404.18225},
    year={2024},
  }

Structural Optimization of Lightweight Bipedal Robot via SERL
Yi Cheng*, Chenxi Han*, Yuheng Min, Linqi Ye, Houde Liu, Hang Liu, Bin Liang
IROS 2024 (Oral Pitch)

webpage | pdf | abstract | bibtex | arXiv | code

Designing a bipedal robot is a complex and challenging task, especially when dealing with a multitude of structural parameters. Traditional design methods often rely on human intuition and experience. However, such approaches are time-consuming, labor-intensive, lack theoretical guidance, and struggle to obtain optimal design results within vast design spaces, thus failing to fully exploit the inherent performance potential of robots. In this context, this paper introduces the SERL (Structure Evolution Reinforcement Learning) algorithm, which combines reinforcement learning for locomotion tasks with evolutionary algorithms. The aim is to identify the optimal parameter combinations within a given multidimensional design space. Through the SERL algorithm, we successfully designed a bipedal robot named Wow Orin, where the optimal leg length is obtained through optimization based on body structure and motor torque. We have experimentally validated the effectiveness of the SERL algorithm, which is capable of optimizing the best structure within the specified design space and task conditions. Additionally, to assess the performance gap between our designed robot and the current state-of-the-art robots, we compared Wow Orin with the mainstream bipedal robots Cassie and Unitree H1. A series of experimental results demonstrate the outstanding energy efficiency and performance of Wow Orin, further validating the feasibility of applying the SERL algorithm to practical design.

  @inproceedings{TBD}

  Projects

"WOW" humanoid whole body control

Robust locomotion for our humanoid, which we named "WOW"

IEEE ICRA Quadruped Robot Challenge

Fourth place in Tele-Operation

Rope-driven Humanoid Sim2Real

The first rope-driven controller based on deep reinforcement learning

「RoboMaster」Navigation System for 3D LiDAR

Code | Blog |

Mapping, relocalization, and navigation system for 3D LiDAR

Simulation of 3D LiDAR in complex terrain environments

Code | Video |

Simulation platform for testing 3D laser-based navigation algorithms in challenging terrains

Hybrid jumping robot with both legs and wheels based on Webots

Code | Video |

A Lagrangian dynamics model was used to derive the state-space representation, and an LQR controller was designed to achieve robust control of the wheeled-legged robot
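The LQR step above can be sketched as follows. This is a minimal illustration, not the project's actual code: the system matrices here are a toy double integrator standing in for the linearized wheeled-leg dynamics, and the `lqr_gain` helper and all weights are hypothetical tuning choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and
    return the optimal state-feedback gain K for u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K

# Toy double-integrator stand-in for the linearized robot dynamics
# (illustrative matrices, not the real wheeled-legged model).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # penalize position error more than velocity
R = np.array([[0.1]])     # control-effort penalty

K = lqr_gain(A, B, Q, R)
# The closed-loop matrix A - B K should be stable:
# all eigenvalues in the open left half-plane.
eigs = np.linalg.eigvals(A - B @ K)
```

The same recipe applies to the real robot once the Lagrangian model is linearized about the upright equilibrium; only `A`, `B`, and the weights change.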

「RoboMaster」Multi-threaded autonomous targeting vision system

Code |

OpenCV-based image processing detects and recognizes targets, and a Kalman filter predicts their motion


Robot arm calligraphy

Utilizing a robotic arm to replicate the user's handwriting trajectory





Website template from here and here