Create and activate the conda environment `giraffehd`:

```shell
conda env create -f environment.yml
conda activate giraffehd
```

Create an LMDB dataset:
```shell
python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE DATASET_PATH
```

This converts the images to JPEG and pre-resizes them.
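For example, a concrete invocation might look like the following (the dataset and output paths are illustrative placeholders, not paths shipped with the repository):

```shell
# Prepare a 256px LMDB from a folder of images using 8 worker processes.
python prepare_data.py --out data/lmdb_256 --n_worker 8 --size 256 data/images
```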
Train the model in a distributed setting:

```shell
python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train.py \
    --wandb --batch BATCH_SIZE --dataset DATASET --size SIZE --datasize DATASIZE LMDB_PATH
```

Evaluate a trained model:
```shell
python eval.py --ckpt CKPT --batch BATCH_SIZE --control_i CONTROL_I
```

Use `--control_i` to specify which feature to control:

- 0: fg_shape
- 1: fg_app
- 2: bg_shape
- 3: bg_app
- 4: camera rotation angle
- 5: elevation angle
- 7: scale
- 8: translation
- 9: rotation

Change L168-183 in `eval.py` to specify the interpolation interval if needed (the training intervals are used when none is specified). For example, set `--control_i` to 8 and

```python
args.translation_range_min = [0., 0., -0.1]
args.translation_range_max = [0., 0., 0.1]
```

to perform vertical object translation.
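To illustrate what such an interval means, evaluation sweeps the controlled factor linearly between its minimum and maximum range. The sketch below is a hypothetical stand-in (the function name and step count are not from `eval.py`) showing the vertical-translation sweep implied by the ranges above:

```python
# Hypothetical sketch, not code from eval.py: the controlled factor is
# interpolated linearly from range_min to range_max across n_steps frames.
def interpolation_steps(range_min, range_max, n_steps):
    """Linearly interpolate each component from range_min to range_max."""
    steps = []
    for i in range(n_steps):
        t = i / (n_steps - 1)  # t runs from 0.0 to 1.0
        steps.append([lo + t * (hi - lo) for lo, hi in zip(range_min, range_max)])
    return steps

# Vertical-translation sweep matching the ranges above:
# z moves from -0.1 to 0.1 while x and y stay fixed at 0.
steps = interpolation_steps([0., 0., -0.1], [0., 0., 0.1], 5)
print(steps[0])   # [0.0, 0.0, -0.1]
print(steps[-1])  # [0.0, 0.0, 0.1]
```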
Pretrained model checkpoints are available on Google Drive.
Thanks to the authors of giraffe and stylegan2-pytorch, on which this code builds.
This repository is released under the MIT license.
```bibtex
@inproceedings{xue2022giraffehd,
  author    = {Yang Xue and Yuheng Li and Krishna Kumar Singh and Yong Jae Lee},
  title     = {GIRAFFE HD: A High-Resolution 3D-aware Generative Model},
  booktitle = {CVPR},
  year      = {2022},
}
```