Wenzhe Cai, Jiaqi Peng, Yuqiang Yang, Yujian Zhang, Meng Wei, Hanqing Wang, Yilun Chen, Tai Wang, Jiangmiao Pang

Shanghai AI Laboratory, Tsinghua University, Zhejiang University, The University of Hong Kong
- We release InternVLA-N1, the first end-to-end dual-system navigation model.
- We release InternNav, an all-in-one open-source toolbox for embodied navigation.
Navigation Diffusion Policy (NavDP) is an end-to-end, mapless navigation model that achieves cross-embodiment generalization without any real-world robot data. By combining a highly efficient simulation data generation pipeline with a strong model design, NavDP achieves real-time path planning and obstacle avoidance across various navigation tasks, including no-goal exploration, point-goal navigation, and image-goal navigation.
Please fill out this form to access the link to download the latest model checkpoint.
Please follow the instructions below to configure the environment for NavDP.
Step 0: Clone this repository
```bash
git clone https://github.com/InternRobotics/NavDP
cd NavDP/baselines/navdp/
```
Step 1: Create a conda environment and install the dependencies
```bash
conda create -n navdp python=3.10
conda activate navdp
pip install -r requirements.txt
```
Run the following line to start the NavDP server:
```bash
python navdp_server.py --port ${YOUR_PORT} --checkpoint ${SAVE_PTH_PATH}
```
Then, follow the subsequent tutorial to build the environment for IsaacSim and start the evaluation in simulation. By running our benchmark, you should be able to replicate the navigation examples below:
This repository is a high-fidelity platform for benchmarking visual navigation methods, built on IsaacSim and IsaacLab. With realistic physics simulation and realistic scene assets, this repository aims to provide a benchmark that minimizes the sim-to-real gap in navigation system-1 evaluation.
- Decoupled Framework between Navigation Approaches and Evaluation Process
The evaluation is accomplished by calling the navigation method's API via HTTP requests. By decoupling the implementation of the navigation model from the evaluation process, it becomes much easier for users to evaluate the performance of novel navigation methods.
- Fully Asynchronous Framework between Trajectory Planning and Following
We implement an MPC-based controller that constantly tracks the planned trajectory. With this asynchronous framework, the evaluation metrics depend on the navigation approach's decision frequency, which helps align the results with real-world navigation performance (see the sketch after this list).
- High-Quality Scene Assets for Evaluation
Our benchmark supports evaluation in diverse scene assets, including random cluttered environments, realistic home scenarios and commercial scenarios.
- Support for Image-Goal, Point-Goal and No-Goal Navigation Tasks
Our benchmark supports multiple navigation tasks, including no-goal exploration, point-goal navigation, and image-goal navigation.
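To make the asynchronous design concrete, here is a minimal Python sketch of the idea: a high-rate tracking loop keeps following the most recently planned trajectory while the (slower) navigation policy replans. The class and function names are hypothetical illustrations, not the benchmark's actual controller implementation.

```python
import threading
import time

class AsyncTrajectoryFollower:
    """Toy illustration of decoupled planning and following
    (hypothetical, not the benchmark's actual controller class)."""

    def __init__(self, control_hz=50.0):
        self.control_hz = control_hz     # tracking rate, independent of the planner
        self.latest_trajectory = None    # most recently planned waypoints
        self.lock = threading.Lock()

    def update_trajectory(self, waypoints):
        # Called by the planning thread whenever the policy finishes a step;
        # the planner's decision frequency determines how often this fires.
        with self.lock:
            self.latest_trajectory = waypoints

    def tracking_loop(self, send_velocity_command, stop_event):
        # Runs at a fixed high rate; an MPC-based controller would optimize
        # the command here, this sketch simply heads for the first waypoint.
        while not stop_event.is_set():
            with self.lock:
                trajectory = self.latest_trajectory
            if trajectory is not None:
                send_velocity_command(trajectory[0])
            time.sleep(1.0 / self.control_hz)
```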
- Overview
- Prepare Scene Asset
- Installation of Benchmark
- Installation of Baseline Library
- Running Baseline as Server
- Running Teleoperation
- Running Evaluation
- Citation
- License
- Acknowledgements
Please download the scene assets from InternScene-N1 at HuggingFace. The episode information can be accessed directly in this repo. After downloading, please organize the structure as follows:
```
assets/scenes/
├── SkyTexture/
│   ├── belfast_sunset_puresky_4k.hdr
│   ├── citrus_orchard_road_puresky_4k.hdr
│   └── ...
├── Materials/
│   ├── Carpet
│   │   ├── textures/
│   │   ├── Carpet_Woven.mdl
│   │   └── ...
│   └── ...
├── cluttered_easy/
│   ├── easy_0/
│   │   ├── cluttered-0.usd/
│   │   ├── imagegoal_start_goal_pairs.npy
│   │   └── pointgoal_start_goal_pairs.npy
│   └── ...
├── cluttered_hard/
│   ├── hard_0/
│   │   ├── cluttered-0.usd/
│   │   ├── imagegoal_start_goal_pairs.npy
│   │   └── pointgoal_start_goal_pairs.npy
│   └── ...
├── internscenes_commercial/
│   ├── models/
│   ├── Materials/
│   └── scenes_commercial/
│       ├── MV4AFHQKTKJZ2AABAAAAADQ8_usd/
│       │   ├── models/
│       │   ├── Materials/
│       │   ├── metadata.json
│       │   ├── start_result_navigation.usd
│       │   ├── imagegoal_start_goal_pairs.npy
│       │   └── pointgoal_start_goal_pairs.npy
│       └── ...
└── internscene_home/
    ├── models/
    ├── Materials/
    └── scenes_home/
        ├── MV4AFHQKTKJZ2AABAAAAADQ8_usd/
        │   ├── models/
        │   ├── Materials/
        │   ├── metadata.json
        │   ├── start_result_navigation.usd
        │   ├── imagegoal_start_goal_pairs.npy
        │   └── pointgoal_start_goal_pairs.npy
        └── ...
```
Category | Download Asset | Episodes |
---|---|---|
SkyTexture | Link | - |
Materials | Link | - |
Cluttered-Easy | Link | Episodes |
Cluttered-Hard | Link | Episodes |
InternScenes-Home | Link | Episodes |
InternScenes-Commercial | Link | Episodes |
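Once the assets are in place, a quick way to sanity-check the episode files is to load one of the `*_start_goal_pairs.npy` files. The exact schema inside these files is an assumption here; the sketch only inspects the array shape and a sample entry.

```python
import numpy as np

# Inspect one of the downloaded episode files (the internal schema is an
# assumption; adjust the path to wherever you placed the assets).
path = "assets/scenes/cluttered_easy/easy_0/pointgoal_start_goal_pairs.npy"
episodes = np.load(path, allow_pickle=True)
print("episodes:", getattr(episodes, "shape", None), episodes.dtype)
print("first start/goal pair:", episodes[0])
```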
Our framework is based on IsaacSim 4.2.0 and IsaacLab 1.2.0. You can follow the instructions below to configure the conda environment.
```bash
# create the environment
conda create -n isaaclab python=3.10
conda activate isaaclab
# install IsaacSim 4.2
pip install --upgrade pip
pip install isaacsim==4.2.0.2 isaacsim-extscache-physics==4.2.0.2 isaacsim-extscache-kit==4.2.0.2 isaacsim-extscache-kit-sdk==4.2.0.2 --extra-index-url https://pypi.nvidia.com
# check the isaacsim installation
isaacsim omni.isaac.sim.python.kit
# install IsaacLab 1.2.0
git clone https://github.com/isaac-sim/IsaacLab.git
cd IsaacLab/
git checkout tags/v1.2.0
# ignore the rsl-rl unavailable error
./isaaclab.sh -i
# check the isaaclab installation
./isaaclab.sh -p source/standalone/tutorials/00_sim/create_empty.py
```
After preparing the dependencies, please clone our project to get started.
```bash
# with the created environment following the previous steps
git clone https://github.com/InternRobotics/NavDP.git
cd NavDP
pip install -r requirements.txt
```
We collect the checkpoints for other navigation system-1 methods from their corresponding repositories and organize their code to support HTTP API calls for our benchmark. The links to the papers, GitHub code, and pre-trained checkpoints are listed in the table below. Some of the baselines require additional dependencies; we provide the installation details below.
Baseline | Paper | Repo | Checkpoint | Supported Tasks |
---|---|---|---|---|
DD-PPO | Arxiv | GitHub | Checkpoint | PointNav |
iPlanner | Arxiv | GitHub | Checkpoint | PointNav |
ViPlanner | Arxiv | GitHub | Checkpoint Mask2Former | PointNav |
GNM | Arxiv | GitHub | Checkpoint | ImageNav, NoGoal |
ViNT | Arxiv | GitHub | Checkpoint | ImageNav, NoGoal |
NoMad | Arxiv | GitHub | Checkpoint | ImageNav, NoGoal |
NavDP | Arxiv | GitHub | Checkpoint | PointNav, ImageNav, NoGoal |
To verify the performance of DD-PPO in a continuous action space, we interpolate the predicted discrete actions {Stop, Forward, TurnLeft, TurnRight} into a trajectory (see the sketch after the installation commands). To run DD-PPO in our benchmark, you need to install habitat-lab and habitat-baselines. As Habitat only supports Python <= 3.9, we recommend creating a new environment.
```bash
conda create -n habitat python=3.9 cmake=3.14.0
conda activate habitat
conda install habitat-sim withbullet -c conda-forge -c aihabitat
git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab
pip install -e habitat-baselines
```
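For illustration, below is a minimal sketch of how discrete actions could be rolled out into a continuous (x, y, yaw) trajectory. The step size and turn angle are assumed values for demonstration, not the benchmark's actual parameters.

```python
import numpy as np

# Assumed roll-out parameters, for demonstration only.
FORWARD_STEP = 0.25          # meters per Forward action (assumed)
TURN_ANGLE = np.deg2rad(15)  # radians per TurnLeft/TurnRight (assumed)

def actions_to_trajectory(actions, pose=(0.0, 0.0, 0.0)):
    """Roll out discrete actions into a dense (x, y, yaw) waypoint list."""
    x, y, yaw = pose
    trajectory = [(x, y, yaw)]
    for action in actions:
        if action == "Stop":
            break
        elif action == "Forward":
            x += FORWARD_STEP * np.cos(yaw)
            y += FORWARD_STEP * np.sin(yaw)
        elif action == "TurnLeft":
            yaw += TURN_ANGLE
        elif action == "TurnRight":
            yaw -= TURN_ANGLE
        trajectory.append((x, y, yaw))
    return np.array(trajectory)

waypoints = actions_to_trajectory(["Forward", "TurnLeft", "Forward", "Stop"])
```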
No additional dependencies are required if you have already configured the environment for running the benchmark.
For ViPlanner, you need to install mmcv and mmdet for Mask2Former. We recommend creating a new environment with torch 2.0.1 as the backend.
```bash
pip install torch==2.0.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install torchvision==0.15.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install mmcv==2.0.0 -f https://download.openmmlab.com/mmcv/dist/cu118/torch2.0/index.html
pip install mmengine mmdet
pip install git+https://github.com/cocodataset/panopticapi.git
```
To play with GNM, ViNT and NoMad, you need to install the following dependencies:
```bash
pip install efficientnet_pytorch==0.7.1
pip install diffusers==0.33.1
pip install git+https://github.com/real-stanford/diffusion_policy.git
```
Each pre-built baseline method contains a server.py file; simply run the server script, passing the server port and the checkpoint path. Taking NavDP as an example:
```bash
# please first download the checkpoint from the above link
cd baselines/navdp/
python navdp_server.py --port 8888 --checkpoint ./checkpoints/navdp_checkpoint.ckpt
```
Then the server will run in the background, waiting for RGB-D observations and generating the preferred navigation trajectories.
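As a rough illustration of the client side, the sketch below posts an RGB-D observation to a running server. The endpoint route and the request/response schema here are assumptions for demonstration; please consult navdp_server.py for the actual API.

```python
import numpy as np
import requests

# Hypothetical client call: the route name and payload keys below are
# assumptions, not the server's documented API.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # camera RGB frame
depth = np.zeros((480, 640), dtype=np.float32)   # aligned depth in meters

response = requests.post(
    "http://localhost:8888/predict_nogoal",      # hypothetical route
    json={"rgb": rgb.tolist(), "depth": depth.tolist()},
)
trajectory = response.json()["trajectory"]       # hypothetical response key
print(len(trajectory), "waypoints returned")
```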
To quickly get started or to debug your novel navigation approach, we provide a teleoperation script in which the robot moves according to your teleoperation commands while the predicted trajectory is output for visualization. With a running server, the teleoperation code can be started with a one-line command:
```bash
# if the running server supports the no-goal task
python teleop_nogoal_wheeled.py
# if the running server supports the point-goal task
python teleop_pointgoal_wheeled.py
# if the running server supports the image-goal task
python teleop_imagegoal_wheeled.py
```
Then you can use the 'w', 'a', 's', 'd' keys on the keyboard to control the linear and angular speed.
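Conceptually, the key handling boils down to a mapping from keys to (linear, angular) velocity commands, as in the hypothetical sketch below; the actual gains are defined in the teleoperation scripts.

```python
# Hypothetical key-to-command mapping; the real gains live in the
# teleop_*_wheeled.py scripts.
KEY_TO_COMMAND = {
    "w": (0.5, 0.0),   # forward (m/s)
    "s": (-0.5, 0.0),  # backward
    "a": (0.0, 0.5),   # turn left (rad/s)
    "d": (0.0, -0.5),  # turn right
}

def command_from_key(key: str) -> tuple:
    # unknown keys stop the robot
    return KEY_TO_COMMAND.get(key, (0.0, 0.0))
```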
With a running server, it is simple to start the evaluation as follows:
```bash
# if the running server supports the no-goal task
python eval_nogoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
# if the running server supports the point-goal task
python eval_pointgoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
# if the running server supports the image-goal task
python eval_imagegoal_wheeled.py --port {PORT} --scene_dir {ASSET_SCENE} --scene_index {INDEX} --scene_scale {SCALE}
```
Notes: Please set the port to match the server port, and always pass the absolute path for scene_dir. For InternScenes, please set scene_scale to 0.01; for cluttered scenes, set it to 1.0.
The open-sourced code is under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
If you find our work helpful, please cite:
```bibtex
@misc{navdp,
  title = {NavDP: Learning Sim-to-Real Navigation Diffusion Policy with Privileged Information Guidance},
  author = {Wenzhe Cai and Jiaqi Peng and Yuqiang Yang and Yujian Zhang and Meng Wei and Hanqing Wang and Yilun Chen and Tai Wang and Jiangmiao Pang},
  year = {2025},
  booktitle = {arXiv},
}
```
- InternUtopia (previously GRUtopia): The closed-loop evaluation and GRScenes-100 data in this framework rely on the InternUtopia framework.
- InternNav: All-in-one open-source toolbox for embodied navigation based on PyTorch, Habitat and Isaac Sim.
- Diffusion Policy: Diffusion policy implementation.
- DepthAnything: The foundation representation for RGB image observations.
- ViPlanner: ViPlanner implementation.
- iPlanner: iPlanner implementation.
- visualnav-transformer: NoMad, ViNT, GNM implementation.