
VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting

🌐 Project Page · 📖 Paper · 🤖 Model Weights · 🚀 Live Demo

| 🗺️ Overview | 📊 Experimental Results | ⚡ Get Started | 💻 Inference & Demo | 🔥 Training |


VITA-E can handle various complex interactive scenarios, including concurrent execution and near-real-time interruption.
📽 VITA-E Demo Show! Here We Go! 🔥

🗺️ VITA-E Overview

VITA-E Logo

We are excited to present VITA-E, which incorporates a series of advancements:

  1. Dual-Model Framework for Seamless Interaction. VITA-E introduces a groundbreaking dual-model core, where an "Active Model" executes tasks while a "Listening Model" stands ready for new commands.

  2. Innovative "Model-as-Controller" Paradigm. We pioneer a "model-as-controller" approach in which the Vision-Language Model is fine-tuned to generate special tokens that function as direct system-level commands, enabling precise, reliable, and immediate control over system actions (see the sketch after this list).

  3. Smooth Human-Computer Interaction. Through this mechanism, VITA-E supports smooth two-way voice interaction: it can reply while executing, accept voice interruptions during actions, and transition naturally between actions. VITA-E supports both English and Chinese.

  4. Strong Performance in Critical Interactive Scenarios. Tested on a physical humanoid robot, VITA-E demonstrates exceptional reliability and responsiveness, achieving a high success rate across multiple interactive and operational tasks, and it is compatible with a wide range of mainstream VLA models.
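To make the "model-as-controller" idea above concrete, here is a minimal, illustrative Python sketch of how special tokens emitted by the VLM could be parsed from the generation stream and dispatched as system-level commands. The token names and handler functions are hypothetical placeholders, not VITA-E's actual vocabulary or API.

```python
# Illustrative "model-as-controller" dispatcher: special tokens generated by the
# VLM are mapped to immediate system-level actions. All token names and handlers
# below are hypothetical placeholders, not VITA-E's actual implementation.
from typing import Callable, Dict, Iterable


class ControlDispatcher:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], None]] = {}

    def register(self, token: str, handler: Callable[[], None]) -> None:
        # Bind a control token to a system-level action.
        self._handlers[token] = handler

    def consume(self, token_stream: Iterable[str]) -> None:
        # Control tokens trigger their handlers; everything else is ordinary text/speech output.
        for token in token_stream:
            if token in self._handlers:
                self._handlers[token]()
            else:
                print(token, end="")


dispatcher = ControlDispatcher()
dispatcher.register("<EMERGENCY_STOP>", lambda: print("\n[system] halting arm motion"))
dispatcher.register("<SWITCH_TASK>", lambda: print("\n[system] handing control to the Listening Model"))

# Mock generation stream produced while the Active Model is executing a task.
dispatcher.consume(["Sure", ",", " stopping", " now", ".", "<EMERGENCY_STOP>"])
```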

📊 Experimental Results

  • Success rate comparison of VITA-E and baseline models on two fundamental manipulation tasks.

  • Key interactive performance:

| Speech Interruption | Task Switching | Emergency Stop | Avg. Voice Response Latency |
| :---: | :---: | :---: | :---: |
| 100% | 93.3% | 100% | 2.26 s |

⚡ Get Started

Install the conda environment.

```bash
git clone https://github.com/VITA-MLLM/VITA-E
cd VITA-E
conda create -n vita_e python=3.10 -y
conda activate vita_e
pip install --upgrade pip
pip install -r vita_e_requirements.txt
pip install flash-attn --no-build-isolation
```

Download the required model weights to the local path checkpoints/VITA-E.

```bash
huggingface-cli download VITA-MLLM/VITA-E --local-dir checkpoints/VITA-E
```
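If you prefer to fetch the weights from Python instead of the CLI, the equivalent call with huggingface_hub is shown below; treat this as an optional alternative rather than part of the official instructions.

```python
# Download the VITA-E checkpoint to checkpoints/VITA-E using huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="VITA-MLLM/VITA-E", local_dir="checkpoints/VITA-E")
```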

💻 Inference & Demo

πŸ“ Inference

Run the inference script.

```bash
python inference_vita_e.py \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot
```

πŸ“ Demo

Web Demo

You can interact with the VITA-E web demo using mocked robot state data to experience its features, with no need for a physical robot. (A total of 48 GB of GPU memory is required.)

Prepare a VAD (Voice Activity Detection) module. Download silero_vad.onnx and silero_vad.jit, and place both files in the ./demo/wakeup_and_vad/resource/ directory.
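As an optional sanity check (not part of the official setup), you can verify that the downloaded Silero VAD TorchScript model loads and returns a speech probability; the 512-sample chunk at 16 kHz follows Silero's documented frame size.

```python
# Optional check: score one silent 16 kHz frame with the downloaded Silero VAD model.
import torch

vad = torch.jit.load("demo/wakeup_and_vad/resource/silero_vad.jit")
vad.eval()

chunk = torch.zeros(512)                    # one ~32 ms frame of silence at 16 kHz
with torch.no_grad():
    speech_prob = vad(chunk, 16000).item()  # Silero VAD returns P(speech) for the frame
print(f"speech probability: {speech_prob:.3f}")  # should be near 0 for silence
```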

```bash
python -m vita_e.server_vla_vita \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot \
    --ip 0.0.0.0 \
    --port 8081
```

Wait about three minutes for all modules to load, then open http://127.0.0.1:8081 in a browser on your server and enjoy it.
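If the page does not load, a quick, optional way to confirm that the server is listening on the configured port (using only the Python standard library) is:

```python
# Optional: confirm the demo server responds on the port configured above.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:8081", timeout=10) as resp:
    print("demo server responded with HTTP", resp.status)
```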

Real Robot Demo

Deploy the server script on your server.

```bash
python -m vita_e.server_vla_vita \
    --model_path_vlm checkpoints/VITA-E/vita_vla_finetune \
    --model_path_policy checkpoints/VITA-E/vita_gr00t_robot \
    --ip 0.0.0.0 \
    --port 8081
```

Start the client script on the robot client.

```bash
cd demo
python vla_robot_client.py
```

🔥 Training

Our VITA-E model is built upon the VITA-1.5 and Isaac-GR00T architectures. We leverage VITA-1.5 as the VLM component and integrate Isaac-GR00T's pre-trained diffusion action expert as the action model.

The training process involves two stages: first, we fine-tune the VLM component and integrate it into the Isaac-GR00T framework by replacing the original VLM; then, we perform end-to-end fine-tuning on the complete model using VLA data.
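The schematic sketch below only illustrates the ordering of these two stages; every identifier in it is a hypothetical placeholder for illustration, not the actual VITA-E or Isaac-GR00T API.

```python
# Schematic outline of the two-stage recipe; all names are illustrative placeholders.

def stage1_finetune_vlm(vita_vlm, multimodal_data):
    """Stage 1: fine-tune the VITA-1.5 VLM component (including special control tokens)."""
    for batch in multimodal_data:
        loss = vita_vlm.compute_loss(batch)
        loss.backward()
        vita_vlm.step_optimizer()
    return vita_vlm


def stage2_end_to_end_finetune(vita_vlm, gr00t_policy, vla_data):
    """Stage 2: swap the fine-tuned VLM into Isaac-GR00T, then fine-tune end to end on VLA data."""
    vla_model = gr00t_policy.replace_backbone(vita_vlm)   # VLM + pre-trained diffusion action expert
    for batch in vla_data:
        loss = vla_model.compute_loss(batch)
        loss.backward()
        vla_model.step_optimizer()
    return vla_model
```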

Please refer to VITA-1.5 and Isaac-GR00T for more details.

✒️ Citation

If you find our work helpful for your research, please consider citing our work.

```bibtex
@article{liu2025vitae,
  title={VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting},
  author={Xiaoyu, Liu and Chaoyou, Fu and Chi, Yan and Chu, Wu and Haihan, Gao and Yi-Fan, Zhang and Shaoqi, Dong and Cheng, Qian and Bin, Luo and Xiuyong, Yang and Guanwu, Li and Yusheng, Cai and Yunhang, Shen and Deqiang, Jiang and Haoyu, Cao and Xing, Sun and Caifeng, Shan and Ran, He},
  journal={arXiv preprint arXiv:2510.21817},
  year={2025}
}
```

📜 More Research

Explore our related research:

πŸ‘ Acknowledgments

VITA-E is built with reference to the following outstanding works: Isaac-GR00T and Lerobot. Thanks!
