Tianhao Zang

B.Eng. (Hons) Computer Science and Artificial Intelligence | Embodied AI Research Assistant

Current focus: visual navigation, embodied intelligence, sim-to-real robot learning, and cross-embodiment local planning.

Affiliation: East Lab, Eastern Institute of Technology (EIT), Ningbo, under Prof. Wei Zhang.

Academic path: University of Nottingham Ningbo campus (2024-2026), followed by the University of Nottingham Jubilee Campus in the UK (2026-2028), for the B.Eng. (Hons) in CSAI.

Current GPA: 4.0

Pronouns: he/him

Working languages: English and Mandarin Chinese.


Full CV (PDF)

Research and News

I am currently looking for long-term thesis collaboration opportunities and future doctoral supervision in embodied AI, visual robot navigation, and data-efficient reinforcement learning. My main goal is to design learning systems that can transfer robustly from simulation to real robots, while remaining practical for constrained sensing hardware.

At East Lab (EIT), I lead and co-develop multiple projects on visual local planning for mobile robots and quadrupeds. My work combines perception models, reinforcement learning, and deployment pipelines that are tested on real platforms including Unitree Go2 and Turtlebot2.

Before joining East Lab, I worked at the Computer Vision and Perception Lab (UNNC) under Prof. Jianfeng Ren, focusing on abstract visual reasoning, machine number reasoning, and compositional relation learning.

Current research tracks

  • RGB-to-LiDAR Navigation (lead): built a monocular RGB-to-point-cloud pipeline with Depth Anything to replace LiDAR, generating stable 20 Hz input for local planning.
  • FastDSAC (core developer): ported DCLP to NVIDIA Isaac Lab and built a GPU-parallel Blender-Isaac-PyTorch training pipeline, cutting training turnaround from roughly 2-3 days to roughly 8 hours.
  • AgniNav / Cross-Embodiment Planning (sole researcher): developing a unified depth and LaserScan policy network from RGB observations, deployed on both quadruped and wheeled robots.
  • Quiet Gait for Quadrupeds (in progress): introducing touchdown-velocity penalties into the RL reward design to reduce foot-strike noise while preserving locomotion stability.
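The core geometric step behind the RGB-to-point-cloud track above can be sketched as a pinhole back-projection: each pixel with a metric depth estimate (e.g. from Depth Anything) is lifted into a 3D camera-frame point. This is a minimal illustration, not the project's actual code; the intrinsics (fx, fy, cx, cy) and the function name are assumptions for the example.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H x W, metres) into an
    N x 3 camera-frame point cloud with a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth
    x = (u - cx) * z / fx  # horizontal offset in the camera frame
    y = (v - cy) * z / fy  # vertical offset in the camera frame
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid zero-depth pixels

# Toy example: a flat wall 2 m away seen by a 4x4 camera
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

In a real pipeline this conversion runs per frame (here, at 20 Hz) and the resulting points are typically cropped and downsampled before being fed to the local planner.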
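The quiet-gait idea above amounts to a reward-shaping term: penalize downward foot velocity at the instant a foot makes contact, so the policy learns softer touchdowns. The sketch below illustrates that shaping term only; the function name, the weight k, and the exact formulation are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def touchdown_velocity_penalty(foot_vz, contact_new, k=0.5):
    """Reward penalty for hard foot strikes.

    foot_vz: per-foot vertical velocities (m/s, negative = downward)
    contact_new: boolean mask of feet that just entered contact
    k: penalty weight (illustrative value)
    """
    # Downward speed at the moment of touchdown, zero for other feet
    impact = np.clip(-foot_vz, 0.0, None) * contact_new
    return -k * float(np.sum(impact))

# Two feet touch down: one hard (-1.2 m/s), one soft (-0.1 m/s)
r = touchdown_velocity_penalty(
    np.array([-1.2, -0.1, 0.3, 0.0]),
    np.array([True, True, False, False]),
)
# r == -0.65: the hard strike dominates the penalty
```

In practice such a term is added to the usual locomotion reward (velocity tracking, stability, energy), with k tuned so quiet contact does not come at the cost of gait stability.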

Publications and Manuscripts

I am actively building a publication profile across embodied intelligence, navigation, and visual reasoning. My publication status is summarized below for potential collaborators and supervisors.

  • CeRLP: A Cross-embodiment Robot Local Planning Framework for Visual Navigation. Submitted to IEEE Transactions on Robotics (T-RO).
    Contribution: lead researcher on RGB-to-point-cloud navigation pipeline and real-time local planning integration.
  • AgniNav: Towards Cross-Embodiment Local Planning for Embodied Robot Navigation. Submitted to IEEE Robotics and Automation Letters (RA-L).
    Contribution: sole researcher for multi-embodiment policy design and deployment on Unitree Go2 and Turtlebot2.
  • FastDSAC: Unlocking the Potential of Maximum Entropy RL in High-Dimensional Humanoid Control. Manuscript in preparation for submission.
    Contribution: core developer for Isaac Lab migration and high-throughput sim-to-real training.
  • DARR: A Dual-Branch Arithmetic Regression Reasoning Framework for Solving Machine Number Reasoning. Accepted at AAAI 2025.
    Contribution: participated in the few-shot, contrastive-learning-oriented reasoning pipeline.
  • Predictive Reasoning with Augmented Anomaly Contrastive Learning for Compositional Visual Relations. Published in IEEE Transactions on Multimedia (IEEE TMM).
    Contribution: participated in compositional visual relation modeling and evaluation.

Background

I am an undergraduate researcher in Computer Science and Artificial Intelligence with a strong focus on bridging algorithmic novelty and practical robotic deployment. My training spans deep learning, computer vision, and reinforcement learning, with hands-on experience in full-stack robot experimentation from modeling and simulation to on-robot validation.

Period | Institution / Lab | Role and focus
Sept. 2024 - 2026 | University of Nottingham Ningbo Campus | B.Eng. (Hons) in CSAI, current GPA 4.0.
Sept. 2026 - 2028 | University of Nottingham Jubilee Campus (UK) | B.Eng. (Hons) continuation in CSAI, planned advanced thesis trajectory in embodied AI.
Mar. 2024 - 2025 | Computer Vision and Perception Lab, UNNC | Research Assistant under Prof. Jianfeng Ren, working on visual reasoning and few-shot machine number reasoning.
Feb. 2025 - Present | East Lab, Eastern Institute of Technology (EIT), Ningbo | Research Assistant under Prof. Wei Zhang, focusing on cross-embodiment visual navigation and sim-to-real reinforcement learning.

I am particularly interested in doctoral projects where the central challenge is to connect perception, planning, and control under realistic hardware constraints. I value research that is both theoretically principled and experimentally grounded.

Teaching and Collaboration Style

While my primary role is research-oriented, I regularly contribute to peer mentoring in lab environments through reproducible experiment setup, debugging support, and onboarding documentation for simulation and deployment toolchains.

In collaborative projects, I typically provide:

  • Clear experiment tracking and implementation notes for ablation studies and benchmark comparisons.
  • Codebase support for RL training pipelines, environment integration, and deployment scripts.
  • Bilingual technical communication (English and Mandarin) for international and cross-institution teams.

For prospective supervisors and collaborators, I am looking for teams with strong interest in embodied intelligence, robotics foundations, and impactful real-world deployment.

Software, Skills, and Demonstrations

Programming Languages: Python, C++, C, Bash
Frameworks: PyTorch, TensorFlow
Computer Vision Toolkits: OpenCV, MMDetection, Depth Anything, YOLOv5, YOLOv8
Robotics and Simulation: ROS 2, NVIDIA Isaac Lab / Isaac Sim, Gazebo, Nav2, Unitree Go2 SDK, URDF and MJCF modeling, sim-to-real transfer
Algorithms: CNNs, Transformers, Contrastive Learning, Deep RL (DQN, PPO, TD3, SAC)
Platforms and Tools: Linux (Ubuntu), CUDA, Docker, Git

Robot navigation demonstrations

Unitree Go2 autonomous navigation

Turtlebot2 autonomous navigation

Contact

I welcome messages related to thesis collaboration, research internships, and PhD supervision opportunities in embodied AI and robot navigation.