This repository was archived by the owner on Jun 22, 2022. It is now read-only.

Commit 1700427

minerva-ml committed: resolving conflicts
2 parents 51b3977 + fb308cd

File tree

9 files changed: +464 −287 lines

README.md

Lines changed: 56 additions & 26 deletions
@@ -10,59 +10,89 @@ We are building entirely open solution to this competition. Specifically:
 1. **Learning from the process** - updates about new ideas, code and experiments are the best way to learn data science. Our activity is especially useful for people who want to enter the competition, but lack appropriate experience.
 1. Encourage more Kagglers to start working on this competition.
 1. Deliver an open source solution with no strings attached. Code is available on our [GitHub repository :computer:](https://github.com/neptune-ml/open-solution-googleai-object-detection). This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code :smiley:
-1. We are opening our experiments as well: everybody can have **live preview** on our experiments, parameters, code, etc. Check: [Google-AI-Object-Detection-Challenge :chart_with_upwards_trend:](https://app.neptune.ml/neptune-ml/Google-AI-Object-Detection-Challenge).
+1. We are opening our experiments as well: everybody can have a **live preview** of our experiments, parameters, code, etc. Check: [Google-AI-Object-Detection-Challenge :chart_with_upwards_trend:](https://app.neptune.ml/neptune-ml/Google-AI-Object-Detection-Challenge) and the images below:
+
+| UNet training monitor :bar_chart: | Predicted bounding boxes :bar_chart: |
+|:---|:---|
+|[![unet-training-monitor](https://gist.githubusercontent.com/kamil-kaczmarek/b3b939797fb39752c45fdadfedba3ed9/raw/19272701575bca235473adaabb7b7c54b2416a54/gai-1.png)](https://app.neptune.ml/-/dashboard/experiment/f945da64-6dd3-459b-94c5-58bc6a83f590)|[![predicted-bounding-boxes](https://gist.githubusercontent.com/kamil-kaczmarek/b3b939797fb39752c45fdadfedba3ed9/raw/19272701575bca235473adaabb7b7c54b2416a54/gai-2.png)](https://app.neptune.ml/-/dashboard/experiment/c779468e-d3f7-44b8-a3a4-43a012315708)|
+
+## Disclaimer
+In this open source solution you will find references to [neptune.ml](https://neptune.ml). It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ml is not necessary to proceed with this solution. You may run it as a plain Python script :snake:.
+
+# How to start?
+## Learn about our solutions
+1. Check the [Kaggle forum](https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/62895) and participate in the discussions.
+1. Check our [Wiki pages :dolphin:](https://github.com/neptune-ml/open-solution-googleai-object-detection/wiki), where we describe our work. Below are links to specific solutions:
+
+| link to code | link to description |
+|:---:|:---:|
+|[solution-1](https://github.com/neptune-ml/open-solution-googleai-object-detection/tree/solution-1)|[palm-tree :palm_tree:](https://github.com/neptune-ml/open-solution-googleai-object-detection/wiki/RetinaNet-with-sampler)|
 
 ## Dataset for this competition
 This competition is special because it uses [Open Images Dataset V4](https://storage.googleapis.com/openimages/web/index.html), which is quite large: `>1.8M` images and `>0.5TB` :astonished: To make it more approachable, we are hosting the entire dataset in neptune's public directory :sunglasses:. **You can use this dataset in [neptune.ml](https://neptune.ml) with no additional setup :+1:.**
 
-## Learn more about our solutions
-[Kaggle discussion](https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion) is our primary way of communication, however, we are also documenting our work on the [Wiki pages :blue_book:](https://github.com/neptune-ml/open-solution-googleai-object-detection/wiki). Click on the dolphin to get started [:dolphin:](https://github.com/neptune-ml/open-solution-googleai-object-detection/wiki).
-
-## Disclaimer
-In this open source solution you will find references to the [neptune.ml](https://neptune.ml). It is free platform for community Users, which we use daily to keep track of our experiments. Please note that using neptune.ml is not necessary to proceed with this solution. You may run it as plain Python script :wink:.
+## Start experimenting with ready-to-use code
+You can jump-start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.
 
 ## Installation
 ### Fast Track
-1. Clone repository and install requirements (check _requirements.txt_)
-1. Register to the [neptune.ml](https://neptune.ml/login) _(if you wish to use it)_
-1. Run experiment:
+1. Clone the repository and install requirements (check _requirements.txt_):
+
+```bash
+pip3 install -r requirements.txt
+```
+
+2. Register at [neptune.ml](https://neptune.ml/login) _(if you wish to use it)_ and create your project, for example Google-AI-Object-Detection-Challenge.
+3. Train RetinaNet:
+
+:hamster:
+```bash
+neptune send --worker m-4p100 \
+--environment pytorch-0.3.1-gpu-py3 \
+--config configs/neptune.yaml \
+main.py train --pipeline_name retinanet
+```
 
 :trident:
 ```bash
-neptune run will appear here soon :)
+neptune run main.py train --pipeline_name retinanet
 ```
 
 :snake:
 ```bash
-python command will appear here soon :)
+python main.py -- train --pipeline_name retinanet
 ```
 
-### Step by step
-1. Clone this repository
-```bash
-git clone https://github.com/neptune-ml/open-solution-googleai-object-detection.git
+4. Evaluate/Predict RetinaNet:
+
+**Note**: in case of memory trouble, go to `neptune.yaml` and set `batch_size_inference: 1`
+
+:hamster:
+With the cloud environment you need to point the experiment directory to the one that you have just trained. Let's assume your experiment id was `GAI-14`. Go to `neptune.yaml` and change:
+
+```yaml
+experiment_dir: /output/experiment
+clone_experiment_dir_from: /input/GAI-14/output/experiment
 ```
-2. Install requirements in your Python3 environment
+
 ```bash
-pip3 install requirements.txt
+neptune send --worker m-4p100 \
+--environment pytorch-0.3.1-gpu-py3 \
+--config configs/neptune.yaml \
+--input /GAI-14 \
+main.py evaluate_predict --pipeline_name retinanet --chunk_size 100
 ```
-3. Register to the [neptune.ml](https://neptune.ml/login) _(if you wish to use it)_
-4. Update data directories in the [neptune.yaml](https://github.com/neptune-ml/open-solution-googleai-object-detection/blob/master/neptune.yaml) configuration file
-5. Run experiment:
 
 :trident:
 ```bash
-neptune login
-neptune run will appear here soon :)
+neptune run main.py evaluate_predict --pipeline_name retinanet --chunk_size 100
 ```
 
 :snake:
 ```bash
-python command will appear here soon :)
+python main.py -- evaluate_predict --pipeline_name retinanet --chunk_size 100
 ```
 
-6. collect submit from `experiment_directory` specified in the [neptune.yaml](https://github.com/neptune-ml/open-solution-googleai-object-detection/blob/master/neptune.yaml)
-
 ## Get involved
 You are welcome to contribute your code and ideas to this open solution. To get started:
 1. Check [competition project](https://github.com/neptune-ml/open-solution-googleai-object-detection/projects/1) on GitHub to see what we are working on right now.
@@ -72,6 +102,6 @@ You are welcome to contribute your code and ideas to this open solution. To get
 
 ## User support
 There are several ways to seek help:
-1. [Kaggle discussion](https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion) is our primary way of communication.
+1. [Kaggle discussion](https://www.kaggle.com/c/google-ai-open-images-object-detection-track/discussion/62895) is our primary way of communication.
 1. Read the project's [Wiki](https://github.com/neptune-ml/open-solution-googleai-object-detection/wiki), where we publish descriptions of the code, pipelines and supporting tools such as [neptune.ml](https://neptune.ml).
 1. Submit an [issue](https://github.com/neptune-ml/open-solution-googleai-object-detection/issues) directly in this repo.
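The `evaluate_predict` commands above pass `--chunk_size 100`, which suggests the test set is scored in fixed-size chunks to keep memory bounded. A minimal sketch of that pattern, assuming a generic `predict_fn` (the function names are illustrative, not the repository's actual API):

```python
def iter_chunks(items, chunk_size):
    """Yield successive slices of at most chunk_size items."""
    for start in range(0, len(items), chunk_size):
        yield items[start:start + chunk_size]


def predict_in_chunks(image_ids, predict_fn, chunk_size=100):
    """Run predict_fn over manageable chunks and concatenate the results."""
    results = []
    for chunk in iter_chunks(image_ids, chunk_size):
        results.extend(predict_fn(chunk))
    return results


# With 250 ids and chunk_size=100 the chunks have sizes 100, 100 and 50.
sizes = [len(c) for c in iter_chunks(list(range(250)), 100)]
print(sizes)  # [100, 100, 50]
```

The same idea is why `batch_size_inference` can be lowered independently of the chunking when memory runs out.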

configs/neptune.yaml

Lines changed: 19 additions & 19 deletions
@@ -1,14 +1,14 @@
-project: YOUR_PROJECT_NAME
+project: USERNAME/googleai-object-detection
 
-name: google AI object detection
+name: Google AI object detection
 tags: [solution-1]
 
 metric:
   channel: 'MAP'
   goal: maximize
 
 #Comment out if not in Cloud Environment
-#pip-requirements-file: requirements.txt
+pip-requirements-file: requirements.txt # Comment out if Local execution
 
 exclude:
 - .git
@@ -23,18 +23,18 @@ exclude:
 
 parameters:
   # Data Paths
-  train_imgs_dir: ''
-  test_imgs_dir: ''
-  annotations_filepath: ''
-  annotations_human_labels_filepath: ''
-  bbox_hierarchy_filepath: ''
-  valid_ids_filepath: ''
-  sample_submission: ''
-  experiment_dir: ''
-  class_mappings_filepath: ''
-  metadata_filepath: ''
+  train_imgs_dir: /public/datasets/open-images-dataset-v4/bounding-boxes/train
+  test_imgs_dir: /public/datasets/open-images-dataset-v4/bounding-boxes/test_challenge_2018
+  annotations_filepath: /public/challenges/google-ai-open-images-object-detection-track/annotations/challenge-2018-train-annotations-bbox.csv
+  annotations_human_labels_filepath: /public/challenges/google-ai-open-images-object-detection-track/annotations/challenge-2018-train-annotations-human-imagelabels.csv
+  bbox_hierarchy_filepath: /public/challenges/google-ai-open-images-object-detection-track/metadata/bbox_labels_500_hierarchy.json
+  class_mappings_filepath: /public/challenges/google-ai-open-images-object-detection-track/metadata/challenge-2018-class-descriptions-500.csv
+  valid_ids_filepath: /public/challenges/google-ai-open-images-object-detection-track/metadata/challenge-2018-image-ids-valset-od.csv
+  sample_submission: /public/challenges/google-ai-open-images-object-detection-track/sample_submission.csv
+  experiment_dir: /output/experiment
+  clone_experiment_dir_from: '' # When running eval, specify this, for example /input/GAI-14/output/experiment
 
-# Execution
+  # Execution
  clean_experiment_directory_before_training: 1
  num_workers: 4
  num_threads: 4
@@ -62,24 +62,24 @@ parameters:
 
  # Retina parameters (multi-output)
  encoder_depth: 50
-  num_classes: 500
+  num_classes: 10
  pretrained_encoder: 1
  pi: 0.01
  aspect_ratios: '[1/2., 1/1., 2/1.]'
  scale_ratios: '[1., pow(2,1/3.), pow(2,2/3.)]'
 
  # Training schedule
  epochs_nr: 100
-  batch_size_train: 1
-  batch_size_inference: 1
+  batch_size_train: 8
+  batch_size_inference: 8
  lr: 0.00001
  momentum: 0.9
  gamma: 1.0
  patience: 30
  lr_factor: 0.3
  lr_patience: 30
-  training_sample_size: 50000
-  validation_sample_size: 10000
+  training_sample_size: 10000
+  validation_sample_size: 1000
 
  # Regularization
  use_batch_norm: 1
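The `aspect_ratios` and `scale_ratios` values above are stored as Python-expression strings. Assuming the config loader evaluates them as Python (an assumption about the loader, not something this diff shows), their cross product gives the standard nine RetinaNet anchor shapes per feature-map location:

```python
# Parameter strings exactly as they appear in configs/neptune.yaml.
aspect_ratios_str = '[1/2., 1/1., 2/1.]'
scale_ratios_str = '[1., pow(2,1/3.), pow(2,2/3.)]'

# Assumption: the loader evaluates these strings as Python expressions.
aspect_ratios = eval(aspect_ratios_str)  # [0.5, 1.0, 2.0]
scale_ratios = eval(scale_ratios_str)    # [1.0, 2**(1/3), 2**(2/3)]

# One anchor shape per (aspect, scale) pair at every feature-map cell.
anchor_shapes = [(a, s) for a in aspect_ratios for s in scale_ratios]
print(len(anchor_shapes))  # 9
```

Three aspect ratios times three scales matches the 9-anchors-per-location setup of the RetinaNet paper.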

notebooks/dataset_exploration.ipynb

Lines changed: 2 additions & 2 deletions
@@ -11,8 +11,8 @@
 "import glob\n",
 "import pandas as pd\n",
 "\n",
-"DIRPATH = '/mnt/ml-team/open-images-v4/bounding-boxes'\n",
-"DIRPATH_COMPETITION = '/mnt/ml-team/minerva/open-solutions/googleai-object-detection/data'"
+"DIRPATH = '/PATH/TO/open-images-v4/bounding-boxes'\n",
+"DIRPATH_COMPETITION = '/PATH/TO/DATA'"
 ]
 },
 {
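The notebook reads the Open Images bounding-box CSVs with pandas; in Open Images V4 the `XMin`/`XMax`/`YMin`/`YMax` columns are normalized to `[0, 1]`, so drawing boxes on an image means scaling by its pixel size. A small sketch with a synthetic frame standing in for the real annotations file:

```python
import pandas as pd

# Synthetic stand-in for challenge-2018-train-annotations-bbox.csv;
# coordinates are normalized to [0, 1] as in Open Images V4.
annotations = pd.DataFrame({
    'ImageID': ['img1', 'img1'],
    'LabelName': ['/m/01g317', '/m/0199g'],
    'XMin': [0.10, 0.50], 'XMax': [0.30, 0.90],
    'YMin': [0.20, 0.40], 'YMax': [0.60, 0.80],
})


def to_pixels(df, width, height):
    """Convert normalized Open Images box coordinates to pixel coordinates."""
    out = df.copy()
    out[['XMin', 'XMax']] = df[['XMin', 'XMax']] * width
    out[['YMin', 'YMax']] = df[['YMin', 'YMax']] * height
    return out


boxes = to_pixels(annotations, width=1024, height=768)
first = boxes[['XMin', 'XMax', 'YMin', 'YMax']].iloc[0].tolist()
```

For a 1024x768 image the first box lands at roughly (102, 154) to (307, 461) in pixels.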

notebooks/submission_merge.ipynb

Lines changed: 0 additions & 175 deletions
This file was deleted.
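The deleted `submission_merge.ipynb` presumably stitched per-chunk prediction files into a single submission. A generic sketch of that idea with pandas (the `ImageId`/`PredictionString` column names and file layout are assumptions, not the notebook's recovered contents):

```python
import pandas as pd


def merge_submissions(chunks):
    """Concatenate per-chunk submission frames, keeping one row per image."""
    merged = pd.concat(chunks, ignore_index=True)
    return merged.drop_duplicates(subset='ImageId', keep='first')


chunk_a = pd.DataFrame({'ImageId': ['a', 'b'],
                        'PredictionString': ['0.9 0 0 1 1', '0.8 0 0 1 1']})
chunk_b = pd.DataFrame({'ImageId': ['b', 'c'],
                        'PredictionString': ['0.7 0 0 1 1', '0.6 0 0 1 1']})
submission = merge_submissions([chunk_a, chunk_b])
print(len(submission))  # 3
```

Deduplicating on the image id guards against overlapping chunks producing two rows for the same image.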

0 commit comments
