
📡 TransRAD

IEEE Transactions on Radar Systems Paper:
TransRAD: Retentive Vision Transformer for Enhanced Radar Object Detection



🧑‍🤝‍🧑 Contributors


📧 Contact


🎯 I. Abstract

Despite significant advancements in environment perception for autonomous driving and intelligent robotics, cameras and LiDARs remain notoriously unreliable in low-light conditions and adverse weather, which limits their effectiveness. Radar serves as a reliable, low-cost sensor that can effectively compensate for these limitations. However, radar-based object detection has been underexplored due to the inherent weaknesses of radar data, such as low resolution, high noise, and lack of visual information. In this paper, we present TransRAD, a novel 3D radar object detection model that addresses these challenges by leveraging the Retentive Vision Transformer (RMT) to more effectively learn features from information-dense radar Range-Azimuth-Doppler (RAD) data. Our approach uses the Retentive Manhattan Self-Attention (MaSA) mechanism provided by RMT to incorporate explicit spatial priors, thereby enabling more accurate alignment with the spatial saliency characteristics of radar targets in RAD data and achieving precise 3D radar detection across the Range, Azimuth, and Doppler dimensions. Furthermore, we propose Location-Aware NMS to effectively mitigate the common issue of duplicate bounding boxes in deep radar object detection. The experimental results demonstrate that TransRAD outperforms state-of-the-art methods in both 2D and 3D radar detection tasks, achieving higher accuracy, faster inference speed, and reduced computational complexity.
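The exact Location-Aware NMS algorithm from the paper is not spelled out in this README, so the sketch below only illustrates the general idea: in addition to the usual IoU test, greedy NMS can also suppress boxes whose centers lie very close to an already-kept box, which removes near-duplicate detections that overlap too little to be caught by IoU alone. The function name, thresholds, and `[x1, y1, x2, y2]` box format are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def location_aware_nms(boxes, scores, iou_thresh=0.5, dist_thresh=10.0):
    """Greedy NMS that suppresses a box when it either overlaps a kept box
    (IoU > iou_thresh) OR its center lies within dist_thresh of a kept
    box's center. Boxes are rows of [x1, y1, x2, y2]."""
    order = np.argsort(scores)[::-1]  # process highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Pairwise IoU between box i and the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        # Center distance between box i and the remaining boxes
        # (the "location-aware" suppression criterion).
        ci = (boxes[i, :2] + boxes[i, 2:]) / 2.0
        cr = (boxes[rest, :2] + boxes[rest, 2:]) / 2.0
        dist = np.linalg.norm(cr - ci, axis=1)
        # Keep only boxes that pass BOTH the overlap and proximity tests.
        order = rest[(iou <= iou_thresh) & (dist > dist_thresh)]
    return keep
```

With two nearly identical boxes and one distant box, the duplicate is suppressed while the distant detection survives, e.g. `location_aware_nms` on boxes `[[0,0,10,10], [1,1,11,11], [50,50,60,60]]` with scores `[0.9, 0.8, 0.7]` keeps indices `[0, 2]`.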

📊 II. Results

🚀 III. Train and Test


📂 1. Dataset Preparation

You need to use the RADDet radar dataset for training.

👉 Please refer to RADDet by ZhangAoCanada for preparing the dataset.
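As a minimal sketch of what working with this data can look like: RADDet distributes RAD frames as complex-valued NumPy arrays, and a common way to feed such a cube to a detection network is to convert it to log-power (dB). The helper name, the example path, and the `(256, 256, 64)` shape comment below are illustrative assumptions; follow the RADDet instructions for the authoritative format.

```python
import numpy as np

def rad_to_log_power(rad):
    """Convert a complex Range-Azimuth-Doppler cube to log-power (dB):
    10 * log10(|x|^2). A small epsilon avoids log(0) on empty bins."""
    power = np.abs(rad) ** 2
    return 10.0 * np.log10(power + 1e-12)

# Typical usage (path and shape are illustrative, per the RADDet format):
# rad = np.load("RAD/part1/000000.npy")   # complex array, e.g. (256, 256, 64)
# x = rad_to_log_power(rad)               # real-valued input for the network
```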


🏋️ 2. Train

Run the following script to train the TransRAD model:

python Train.py

💡 You may adjust the configuration to suit your needs.


✅ 3. Test

After training your model, run:

python Test.py

to get the testing results.
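For intuition about how 3D radar detections are typically scored, the sketch below computes an axis-aligned 3D IoU between boxes in Range-Azimuth-Doppler bin coordinates. This is a generic evaluation-style metric for illustration only; the exact metric computed by Test.py may differ.

```python
import numpy as np

def iou_3d(a, b):
    """Axis-aligned 3D IoU between boxes given as [x1, y1, z1, x2, y2, z2]
    (e.g. min/max corners in Range-Azimuth-Doppler bins)."""
    lo = np.maximum(a[:3], b[:3])           # intersection lower corner
    hi = np.minimum(a[3:], b[3:])           # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-9)
```

Identical boxes score 1.0 and disjoint boxes score 0.0; a prediction usually counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold (e.g. 0.5).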


🙏 IV. Acknowledgment

We sincerely acknowledge and appreciate the open-source repositories that provided valuable references for our work. Our implementation has been inspired by these works, and we extend our gratitude to their authors for their open-source contributions.

