Commit 9baca57

Author: Stefan Knegt
Update README.md
1 parent: 3ad908b

README.md

Lines changed: 14 additions & 0 deletions
@@ -37,8 +37,22 @@ for epoch in range(epochs):
## Train on LIDC Dataset

One of the datasets used in the original paper is the [LIDC dataset](https://wiki.cancerimagingarchive.net). I've preprocessed this data and stored it in five .pickle files, which you can [download here](https://drive.google.com/drive/folders/1xKfKCQo8qa6SAr3u7qWNtQjIphIrvmd5?usp=sharing). After downloading the files, you can load the data as follows:

```
import torch
import numpy as np
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from load_LIDC_data import LIDC_IDRI

test_split = 0.2   # fraction of patches held out for testing (example value)
batch_size = 32    # example value

dataset = LIDC_IDRI(dataset_location='insert_path_here')
dataset_size = len(dataset)
indices = list(range(dataset_size))
split = int(np.floor(test_split * dataset_size))
np.random.shuffle(indices)
train_indices, test_indices = indices[split:], indices[:split]

# The two samplers draw from disjoint index sets, so the same dataset
# object backs both the training and the test loader.
train_sampler = SubsetRandomSampler(train_indices)
test_sampler = SubsetRandomSampler(test_indices)
train_loader = DataLoader(dataset, batch_size=batch_size, sampler=train_sampler)
test_loader = DataLoader(dataset, batch_size=batch_size, sampler=test_sampler)
print("Number of training/test patches:", (len(train_indices), len(test_indices)))
```
Combining this with the training code snippet above, you can start training your own Probabilistic UNet.
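For concreteness, here is a minimal sketch of what that combination could look like. Only `train_loader` comes from the snippet above; the module names, the `ProbabilisticUnet` constructor arguments, the `forward`/`elbo`/`l2_regularisation` calls, and all hyperparameter values are assumptions based on the training snippet referenced earlier and may need to be adapted to the actual code in this repository:

```
import torch
from probabilistic_unet import ProbabilisticUnet  # class name assumed from this repo
from utils import l2_regularisation               # helper name assumed from this repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Example configuration; tune these for your experiments.
net = ProbabilisticUnet(input_channels=1, num_classes=1, latent_dim=2, beta=10.0)
net.to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
epochs = 10

for epoch in range(epochs):
    for patch, mask, _ in train_loader:  # batch: image patch, mask, series id (assumed)
        patch, mask = patch.to(device), mask.to(device)
        mask = torch.unsqueeze(mask, 1)           # add a channel dimension
        net.forward(patch, mask, training=True)   # run prior, posterior and U-Net
        elbo = net.elbo(mask)                     # evidence lower bound
        reg_loss = (l2_regularisation(net.posterior) +
                    l2_regularisation(net.prior))
        loss = -elbo + 1e-5 * reg_loss            # negative ELBO plus a small L2 penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The loss here follows the usual Probabilistic U-Net objective: maximise the ELBO while lightly regularising the prior and posterior networks.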
