Commit 3754090

Author: Stefan
Merge: 9ae2389 d539ceb

File tree: 1 file changed, +15 -0 lines changed


README.md

Lines changed: 15 additions & 0 deletions
@@ -37,8 +37,23 @@ for epoch in range(epochs):

## Train on LIDC Dataset

One of the datasets used in the original paper is the [LIDC dataset](https://wiki.cancerimagingarchive.net). I've preprocessed this data and stored it in 5 .pickle files, which you can [download here](https://drive.google.com/drive/folders/1xKfKCQo8qa6SAr3u7qWNtQjIphIrvmd5?usp=sharing). After downloading the files, you can load the data as follows:

```
import torch
import numpy as np
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from load_LIDC_data import LIDC_IDRI

batch_size = 32    # example value; adjust to your hardware
test_split = 0.1   # example value; fraction of patches held out for testing

dataset = LIDC_IDRI(dataset_location='insert_path_here')

# Shuffle the patch indices and split them into train/test subsets
dataset_size = len(dataset)
indices = list(range(dataset_size))
split = int(np.floor(test_split * dataset_size))
np.random.shuffle(indices)
train_indices, test_indices = indices[split:], indices[:split]

# Build samplers and loaders for each subset
train_sampler = SubsetRandomSampler(train_indices)
test_sampler = SubsetRandomSampler(test_indices)
train_loader = DataLoader(dataset, batch_size=batch_size, sampler=train_sampler)
test_loader = DataLoader(dataset, batch_size=batch_size, sampler=test_sampler)
print("Number of training/test patches:", (len(train_indices), len(test_indices)))
```

Combining this with the training code snippet above, you can start training your own Probabilistic UNet.
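To illustrate how the sampler-based loaders plug into a training loop like the one above, here is a minimal, self-contained sketch. The dataset, `net` (a 1x1 convolution on random tensors), loss, and hyperparameters are hypothetical stand-ins, not the Probabilistic UNet or LIDC data from this repository:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.sampler import SubsetRandomSampler

# Hypothetical stand-in data: 20 random single-channel 8x8 "patches" with binary masks
images = torch.randn(20, 1, 8, 8)
masks = torch.randint(0, 2, (20, 1, 8, 8)).float()
dataset = TensorDataset(images, masks)

# Same sampler pattern as the README snippet, with a fixed 16/4 split
train_sampler = SubsetRandomSampler(list(range(16)))
train_loader = DataLoader(dataset, batch_size=4, sampler=train_sampler)

# Placeholder "network": a single 1x1 convolution
net = nn.Conv2d(1, 1, kernel_size=1)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(2):
    for patch, mask in train_loader:
        optimizer.zero_grad()
        loss = criterion(net(patch), mask)
        loss.backward()
        optimizer.step()

print("final batch loss:", loss.item())
```

In practice you would swap in the `LIDC_IDRI` dataset, the `train_loader` from the snippet above, and the Probabilistic UNet's own loss from the training section.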

0 commit comments