| Overview | Experimental Results | Get Started | Inference & Demo | Training |

VITA-E can handle various complex interactive scenarios, including concurrent execution and near-real-time interruption.

VITA-E Demo Show! Here We Go!
- Success rate comparison of VITA-E and baseline models on two fundamental manipulation tasks.
- Key Interactive Performance (a conceptual sketch of the interruption handling follows the table below).
| Speech Interruption | Task Switching | Emergency Stop | Avg. voice response latency |
|---|---|---|---|
| 100% | 93.3% | 100% | 2.26s |
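The interruption behaviors measured above come from VITA-E's ability to keep listening while acting. As a purely conceptual illustration of that control pattern (the names below are hypothetical and do not come from the VITA-E codebase), an action loop can be preempted the moment a voice command arrives:

```python
import asyncio

# Purely illustrative "act while listening" pattern: an action loop that can be
# cancelled as soon as a voice command (e.g. an emergency stop) is detected.
# These names are hypothetical and are not part of the VITA-E codebase.

async def action_loop(task: str):
    try:
        while True:
            print(f"[arm] executing a step of: {task}")
            await asyncio.sleep(0.5)  # stand-in for executing one action chunk
    except asyncio.CancelledError:
        print("[arm] interrupted, stopping safely")
        raise

async def speech_loop(action: asyncio.Task):
    # Stand-in for VAD + speech understanding: a "stop" command arrives after ~2 s.
    await asyncio.sleep(2.0)
    print("[voice] heard 'stop' -> interrupting the current action")
    action.cancel()

async def main():
    action = asyncio.create_task(action_loop("pick up the cup"))
    await asyncio.gather(speech_loop(action), action, return_exceptions=True)

asyncio.run(main())
```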
Install the conda environment.

```bash
git clone https://github.com/VITA-MLLM/VITA-E
cd VITA-E
conda create -n vita_e python=3.10 -y
conda activate vita_e
pip install --upgrade pip
pip install -r vita_e_requirements.txt
pip install flash-attn --no-build-isolation
```
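Optionally, a quick sanity check (not part of the repository; it assumes the installation above succeeded and a CUDA GPU is visible) confirms that PyTorch and flash-attn import correctly:

```python
# Optional post-install sanity check (illustrative, not part of the repository).
import torch
import flash_attn

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)
if torch.cuda.is_available():
    print("GPU 0:", torch.cuda.get_device_name(0))
```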
Download the required model weights to a local path: VITA-E.

```bash
huggingface-cli download VITA-MLLM/VITA-E --local-dir checkpoints/VITA-E
```

Run the inference script.
```bash
python inference_vita_e.py \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot
```

You can interact with the VITA-E web demo using mocked robot state data to try out its features, with no need for a physical robot. (A total of 48 GB of GPU memory is required.)
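Since the web demo needs roughly 48 GB of GPU memory, you may want to check how much is available first; the small helper below is illustrative and not part of the repository:

```python
# Illustrative check of total GPU memory before launching the demo.
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB total")
```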
Prepare a VAD (Voice Activity Detection) module. Download silero_vad.onnx and silero_vad.jit, and place them in the ./demo/wakeup_and_vad/resource/ directory.
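To confirm the VAD files are in place and load without errors, a check like the following can help (illustrative only; it assumes onnxruntime is available in the environment, which may require a separate pip install):

```python
# Illustrative check that the Silero VAD files are in place and loadable.
import torch
import onnxruntime as ort  # may require: pip install onnxruntime

sess = ort.InferenceSession("demo/wakeup_and_vad/resource/silero_vad.onnx")
print("ONNX VAD inputs:", [inp.name for inp in sess.get_inputs()])

jit_vad = torch.jit.load("demo/wakeup_and_vad/resource/silero_vad.jit")
print("TorchScript VAD loaded:", type(jit_vad).__name__)
```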
Launch the web demo server.

```bash
python -m vita_e.server_vla_vita \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot \
    --ip 0.0.0.0 \
    --port 8081
```

Wait about three minutes for all modules to load completely, then open http://127.0.0.1:8081 on your server and enjoy it.
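Because loading takes a few minutes, you can poll the port until the server accepts connections; this standard-library snippet is optional and not part of the repository:

```python
# Optional: wait until the demo server starts accepting connections on port 8081.
import socket
import time

while True:
    try:
        with socket.create_connection(("127.0.0.1", 8081), timeout=2):
            print("Server is up at http://127.0.0.1:8081")
            break
    except OSError:
        print("Still loading, retrying in 10 s ...")
        time.sleep(10)
```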
Deploy the server script on your server.
```bash
python -m vita_e.server_vla_vita \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot \
    --ip 0.0.0.0 \
    --port 8081
```

Start the client script on the robot client.
```bash
cd demo
python vla_robot_client.py
```

Our VITA-E model is built upon the VITA-1.5 and Isaac-GR00T architectures. We leverage VITA-1.5 as the VLM component and integrate Isaac-GR00T's pre-trained diffusion action expert as the action model.
The training process involves two stages: first, we fine-tune the VLM component and integrate it into the Isaac-GR00T framework by replacing the original VLM; then, we perform end-to-end fine-tuning on the complete model using VLA data.
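The sketch below restates this composition and the two-stage recipe in code form; every name is a hypothetical placeholder and does not correspond to the actual VITA-E or Isaac-GR00T API.

```python
# Conceptual outline only: hypothetical names, not the real VITA-E / Isaac-GR00T code.

class VITAEModel:
    """A VITA-1.5-style VLM coupled with a GR00T-style diffusion action expert."""

    def __init__(self, vlm, action_expert):
        self.vlm = vlm                      # sees, hears, and speaks
        self.action_expert = action_expert  # pre-trained diffusion policy head that acts

    def predict_action(self, observation):
        # The VLM encodes multimodal input into a context embedding;
        # the action expert denoises an action chunk conditioned on it.
        context = self.vlm.encode(observation)
        return self.action_expert.sample_actions(context)


def train_two_stage(model, vlm_data, vla_data, finetune):
    """Stage 1: fine-tune the VLM and slot it into the GR00T framework in place
    of the original VLM. Stage 2: fine-tune the complete VLM + action-expert
    stack end to end on VLA data."""
    finetune(model.vlm, vlm_data)   # stage 1
    finetune(model, vla_data)       # stage 2
```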
Please refer to VITA-1.5 and Isaac-GR00T for more details.
If you find our work helpful for your research, please consider citing it.
```bibtex
@article{liu2025vitae,
  title={VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting},
  author={Liu, Xiaoyu and Fu, Chaoyou and Yan, Chi and Wu, Chu and Gao, Haihan and Zhang, Yi-Fan and Dong, Shaoqi and Qian, Cheng and Luo, Bin and Yang, Xiuyong and Li, Guanwu and Cai, Yusheng and Shen, Yunhang and Jiang, Deqiang and Cao, Haoyu and Sun, Xing and Shan, Caifeng and He, Ran},
  journal={arXiv preprint arXiv:2510.21817},
  year={2025}
}
```

Explore our related research:
- [VITA-1.5] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction
- [VITA-1.0] VITA: Towards Open-Source Interactive Omni Multimodal LLM
- [Awesome-MLLM] A Survey on Multimodal Large Language Models
- [MME] MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
- [Video-MME] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
VITA-E is built with reference to the following outstanding works: Isaac-GR00T and LeRobot. Thanks!

