This is the implementation of our RA-L work 'Real-World Multi-Object, Multi-Grasp Detection'. The detector takes an RGB-D image as input and predicts multiple grasp candidates for a single object or multiple objects, in a single shot. The original arXiv paper can be found here; the final version will be updated after the publication process.
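As background on the output format: following the paper, each grasp is an oriented rectangle in the image, and the orientation is predicted as a classification over quantized angle bins rather than regressed directly. The sketch below is illustrative only, not this repo's actual code; the bin count and helper names are assumptions:

```python
import numpy as np

# Illustrative sketch only -- the number of angle bins and the grasp
# tuple layout are assumptions, not this repo's actual API.
NUM_ANGLE_CLASSES = 19  # assumed quantization of [-90, 90) degrees

def class_to_angle(cls_idx, num_classes=NUM_ANGLE_CLASSES):
    """Map a predicted orientation class index to an angle in degrees."""
    bin_width = 180.0 / num_classes
    return -90.0 + (cls_idx + 0.5) * bin_width

def grasp_to_corners(cx, cy, w, h, theta_deg):
    """Convert a (center, size, angle) grasp rectangle to 4 corner points."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    # Axis-aligned corners around the origin, then rotate and translate.
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2,  h / 2], [-w / 2,  h / 2]])
    return corners @ R.T + np.array([cx, cy])
```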
If you find it helpful for your research, please consider citing:

```
@article{chu2018deep,
  title   = {Real-World Multiobject, Multigrasp Detection},
  author  = {F. Chu and R. Xu and P. A. Vela},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2018},
  volume  = {3},
  number  = {4},
  pages   = {3355-3362},
  doi     = {10.1109/LRA.2018.2852777},
  issn    = {2377-3766},
  month   = {Oct}
}
```

If you have any questions, please contact me at fujenchu[at]gatech[dot]edu
- Clone this repository

```bash
git clone https://github.com/ivalab/grasp_multiObject_multiGrasp.git
cd grasp_multiObject_multiGrasp
```

- Build Cython modules

```bash
cd lib
make clean
make
cd ..
```

- Install the Python COCO API

```bash
cd data
git clone https://github.com/pdollar/coco.git
cd coco/PythonAPI
make
cd ../../..
```

- Download pretrained models
  - trained model for grasp on dropbox drive
  - put it under `output/res50/train/default/`
- Run demo

```bash
./tools/demo_graspRGD.py --net res50 --dataset grasp
```

  You should see images with the predicted grasps pop up.
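The demo operates on RG-D images, where (following the paper) the depth channel is substituted for the blue channel of the RGB image. Below is a minimal sketch of that substitution, assuming a depth map already aligned to the color image; the file names and normalization are illustrative, not this repo's preprocessing code:

```python
import numpy as np
from PIL import Image

# Illustrative only: file names and normalization are assumptions,
# not the repo's actual preprocessing pipeline.
rgb = np.array(Image.open('rgb.png'))      # H x W x 3, uint8
depth = np.array(Image.open('depth.png'))  # H x W, raw depth values

# Scale depth to 0-255 so it can stand in for an 8-bit color channel.
d = depth.astype(np.float32)
d = 255.0 * (d - d.min()) / max(d.max() - d.min(), 1e-6)

rgd = rgb.copy()
rgd[:, :, 2] = d.astype(np.uint8)          # replace blue with depth
Image.fromarray(rgd).save('rgd.png')
```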
- Generate data

  1-1. Download the Cornell Dataset

  1-2. Run `dataPreprocessingTest_fasterrcnn_split.m` (please modify paths according to your structure)

  1-3. Follow the 'Format Your Dataset' section here to check that your data follows the VOC format (see the sketch after this list)
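For reference, a VOC-style dataset pairs each image with an XML annotation listing objects and their bounding boxes. A quick, hedged way to sanity-check the generated annotations is sketched below; the field names follow the VOC convention, but the `Annotations` directory name is an assumption about your layout:

```python
import os
import xml.etree.ElementTree as ET

# Assumed layout: Annotations/*.xml in VOC style; adjust to your structure.
for fname in sorted(os.listdir('Annotations')):
    if not fname.endswith('.xml'):
        continue
    root = ET.parse(os.path.join('Annotations', fname)).getroot()
    for obj in root.findall('object'):
        name = obj.find('name').text
        box = obj.find('bndbox')
        xmin, ymin, xmax, ymax = (int(float(box.find(k).text))
                                  for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        assert xmin < xmax and ymin < ymax, f'bad box in {fname}'
        print(fname, name, (xmin, ymin, xmax, ymax))
```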
- Train

```bash
./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50
```

  The arguments are, in order, the GPU id, the dataset name, and the backbone network.

Yes! please find it HERE
This repo borrows tons of code from
- tf-faster-rcnn by endernewton
