*Memos:
- My post explains Oxford 102 Flower.
- My post explains OxfordIIITPet().
- My post explains StanfordCars().
Flowers102() can use the Oxford 102 Flower dataset as shown below:
*Memos:
- The 1st argument is root (Required-Type: str or pathlib.Path). *An absolute or relative path is possible.
- The 2nd argument is split (Optional-Default: "train"-Type: str). *"train" (1,020 images), "val" (1,020 images) or "test" (6,149 images) can be set to it.
- The 3rd argument is transform (Optional-Default: None-Type: callable).
- The 4th argument is target_transform (Optional-Default: None-Type: callable).
- The 5th argument is download (Optional-Default: False-Type: bool): *Memos:
  - If it's True, the dataset is downloaded from the internet and extracted (unzipped) to root.
  - If it's True and the dataset is already downloaded, it's extracted.
  - If it's True and the dataset is already downloaded and extracted, nothing happens.
  - It should be False if the dataset is already downloaded and extracted because it's faster.
  - You can manually download and extract the dataset (102flowers.tgz with imagelabels.mat and setid.mat) from here to data/flowers-102/.
  - A minimal usage sketch combining download and the transform arguments follows this list.
- About the labels from the categories (classes) for the train and validation image indices: label 0 covers indices 0~9, 1 covers 10~19, 2 covers 20~29, 3 covers 30~39, 4 covers 40~49, 5 covers 50~59, 6 covers 60~69, 7 covers 70~79, 8 covers 80~89, 9 covers 90~99, etc.
- About the labels from the categories (classes) for the test image indices: label 0 covers indices 0~19, 1 covers 20~59, 2 covers 60~79, 3 covers 80~115, 4 covers 116~160, 5 covers 161~185, 6 covers 186~205, 7 covers 206~270, 8 covers 271~296, 9 covers 297~321, etc. (A quick check of this mapping is shown after the code below.)
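For example, download and the two transform arguments can be combined as in the minimal sketch below. The 224x224 Resize, the lambda target_transform and the batch size are illustrative choices, not anything required by Flowers102(); download=True is only useful on the first run.

```python
from torchvision.datasets import Flowers102
from torchvision.transforms import Compose, Resize, ToTensor
from torch.utils.data import DataLoader

# Illustrative transform: resize every image to 224x224 and convert it to a tensor.
my_transform = Compose([Resize(size=(224, 224)), ToTensor()])

# download=True is only needed on the first run; it downloads 102flowers.tgz,
# imagelabels.mat and setid.mat to data/flowers-102/ and extracts the archive.
train_data = Flowers102(
    root="data",
    split="train",
    transform=my_transform,
    target_transform=lambda lab: float(lab),  # e.g. turn the int label into a float
    download=True
)

im, lab = train_data[0]
print(im.shape, lab)  # torch.Size([3, 224, 224]) 0.0

# Because every image now has the same shape, the dataset can be batched as usual.
train_loader = DataLoader(dataset=train_data, batch_size=32, shuffle=True)
ims, labs = next(iter(train_loader))
print(ims.shape)  # torch.Size([32, 3, 224, 224])
```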
```python
from torchvision.datasets import Flowers102

train_data = Flowers102(
    root="data"
)

train_data = Flowers102(
    root="data",
    split="train",
    transform=None,
    target_transform=None,
    download=False
)

val_data = Flowers102(
    root="data",
    split="val"
)

test_data = Flowers102(
    root="data",
    split="test"
)

len(train_data), len(val_data), len(test_data)
# (1020, 1020, 6149)

train_data
# Dataset Flowers102
# Number of datapoints: 1020
# Root location: data
# split=train

train_data.root
# 'data'

train_data._split
# 'train'

print(train_data.transform)
# None

print(train_data.target_transform)
# None

train_data.download
# <bound method Flowers102.download of Dataset Flowers102
# Number of datapoints: 1020
# Root location: data
# split=train>

len(set(train_data._labels)), train_data._labels
# (102,
#  [0, 0, 0, ..., 1, ..., 2, ..., 3, ..., 4, ..., 5, ..., 6, ..., 101])

train_data[0]
# (<PIL.Image.Image image mode=RGB size=754x500>, 0)

train_data[1]
# (<PIL.Image.Image image mode=RGB size=624x500>, 0)

train_data[2]
# (<PIL.Image.Image image mode=RGB size=667x500>, 0)

train_data[10]
# (<PIL.Image.Image image mode=RGB size=500x682>, 1)

train_data[20]
# (<PIL.Image.Image image mode=RGB size=667x500>, 2)

val_data[0]
# (<PIL.Image.Image image mode=RGB size=606x500>, 0)

val_data[1]
# (<PIL.Image.Image image mode=RGB size=667x500>, 0)

val_data[2]
# (<PIL.Image.Image image mode=RGB size=500x628>, 0)

val_data[10]
# (<PIL.Image.Image image mode=RGB size=500x766>, 1)

val_data[20]
# (<PIL.Image.Image image mode=RGB size=624x500>, 2)

test_data[0]
# (<PIL.Image.Image image mode=RGB size=523x500>, 0)

test_data[1]
# (<PIL.Image.Image image mode=RGB size=666x500>, 0)

test_data[2]
# (<PIL.Image.Image image mode=RGB size=595x500>, 0)

test_data[20]
# (<PIL.Image.Image image mode=RGB size=500x578>, 1)

test_data[60]
# (<PIL.Image.Image image mode=RGB size=500x625>, 2)

import matplotlib.pyplot as plt

def show_images(data, ims, main_title=None):
    plt.figure(figsize=(12, 6))
    plt.suptitle(t=main_title, y=1.0, fontsize=14)
    for i, j in enumerate(iterable=ims, start=1):
        plt.subplot(2, 5, i)
        im, lab = data[j]
        plt.imshow(X=im)
        plt.title(label=lab)
    plt.tight_layout(h_pad=3.0)
    plt.show()

trainval_ims = (0, 1, 2, 10, 20, 30, 40, 50, 60, 70)
test_ims = (0, 1, 2, 20, 60, 80, 116, 161, 186, 206)

show_images(data=train_data, ims=trainval_ims, main_title="train_data")
show_images(data=val_data, ims=trainval_ims, main_title="val_data")
show_images(data=test_data, ims=test_ims, main_title="test_data")
```
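The label grouping described above can be double-checked with a short sketch like this one. It assumes the dataset files are already under data/flowers-102/, so download is left at its default False:

```python
from torchvision.datasets import Flowers102

train_data = Flowers102(root="data", split="train")
test_data = Flowers102(root="data", split="test")

# Train/val splits: each label covers 10 consecutive indices (0~9 -> 0, 10~19 -> 1, ...).
print([train_data[i][1] for i in (0, 9, 10, 19, 20)])
# [0, 0, 1, 1, 2]

# Test split: the label boundaries are irregular (0~19 -> 0, 20~59 -> 1, 60~79 -> 2, ...).
print([test_data[i][1] for i in (0, 19, 20, 59, 60)])
# [0, 0, 1, 1, 2]
```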