How to build a Generative Adversarial Network (GAN) using Keras in Python?

Building a Generative Adversarial Network (GAN) using Keras involves creating two distinct models: the generator and the discriminator. The generator creates new data instances, while the discriminator evaluates them. Together, the generator and the discriminator form the GAN architecture.
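
In the standard formulation, the two networks play a minimax game; writing the objective out makes the adversarial setup concrete:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

In practice (and in the code below), the generator is trained with the non-saturating variant: rather than minimizing log(1 - D(G(z))), it maximizes log D(G(z)), which is exactly what labeling generated images as "real" in the generator's training step achieves.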

Below is a step-by-step guide to building a basic GAN for generating images that resemble the MNIST dataset of handwritten digits.

Step 1: Import the necessary libraries

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Flatten, Reshape, LeakyReLU, BatchNormalization
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

Step 2: Load and preprocess the data

# Load the MNIST dataset
(X_train, _), (_, _) = mnist.load_data()

# Normalize the images to [-1, 1]
X_train = X_train / 127.5 - 1.
X_train = np.expand_dims(X_train, axis=3)
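
As an optional sanity check, you can confirm the shape and value range after preprocessing (MNIST ships 60,000 training images of 28x28 pixels):

# Optional sanity check: expect (60000, 28, 28, 1) and values in [-1, 1]
print(X_train.shape)
print(X_train.min(), X_train.max())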

Step 3: Define the generator and discriminator

def build_generator(z_dim):
    model = Sequential()
    model.add(Dense(128, input_dim=z_dim))
    model.add(LeakyReLU(alpha=0.01))
    model.add(BatchNormalization())
    model.add(Dense(28 * 28 * 1, activation='tanh'))
    model.add(Reshape((28, 28, 1)))
    return model

def build_discriminator(img_shape):
    model = Sequential()
    model.add(Flatten(input_shape=img_shape))
    model.add(Dense(128))
    model.add(LeakyReLU(alpha=0.01))
    model.add(Dense(1, activation='sigmoid'))
    return model
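
To sanity-check the two architectures in isolation, you can instantiate throwaway copies and print their layer summaries (optional; the models actually used for training are built in the next step):

# Optional: inspect the layer stacks before wiring the GAN together
build_generator(100).summary()
build_discriminator((28, 28, 1)).summary()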

Step 4: Define the GAN

def build_gan(generator, discriminator):
    model = Sequential()
    model.add(generator)
    model.add(discriminator)
    return model

# Set the dimensions of the noise vector
z_dim = 100

# Build and compile the discriminator
discriminator = build_discriminator(img_shape=(28, 28, 1))
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

# Build the generator
generator = build_generator(z_dim)

# Keep the discriminator's parameters constant during generator training
discriminator.trainable = False

# Build and compile the GAN with the fixed discriminator to train the generator
gan = build_gan(generator, discriminator)
gan.compile(loss='binary_crossentropy', optimizer=Adam())
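
A subtle Keras detail makes this work: the trainable flag is captured when a model is compiled. Because the discriminator was compiled before trainable was set to False, the standalone discriminator keeps learning, while the copy stacked inside gan is frozen. A quick optional check:

# Optional check: in the stacked model the discriminator's parameters are
# listed under "Non-trainable params", so only the generator is updated
# when gan.train_on_batch() is called.
gan.summary()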

Step 5: Define the training loop

losses = []
accuracies = []
iteration_checkpoints = []

def train(iterations, batch_size, sample_interval):
    # Load the dataset
    (X_train, _), (_, _) = mnist.load_data()

    # Rescale to [-1, 1]
    X_train = X_train / 127.5 - 1.0
    X_train = np.expand_dims(X_train, axis=3)

    # Labels for real and fake examples
    real = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))

    for iteration in range(iterations):
        # -------------------------
        #  Train the Discriminator
        # -------------------------

        # Get a random batch of real images
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]

        # Generate a batch of fake images
        z = np.random.normal(0, 1, (batch_size, z_dim))
        gen_imgs = generator.predict(z)

        # Train the discriminator
        d_loss_real = discriminator.train_on_batch(imgs, real)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
        d_loss, accuracy = 0.5 * np.add(d_loss_real, d_loss_fake)

        # ---------------------
        #  Train the Generator
        # ---------------------

        # Generate a batch of noise vectors
        z = np.random.normal(0, 1, (batch_size, z_dim))

        # Train the generator (to have the discriminator classify fake images as real)
        g_loss = gan.train_on_batch(z, real)

        if (iteration + 1) % sample_interval == 0:
            # Save losses and accuracies so they can be plotted after training
            losses.append((d_loss, g_loss))
            accuracies.append(100.0 * accuracy)
            iteration_checkpoints.append(iteration + 1)

            # Output training progress
            print(f"{iteration + 1} [D loss: {d_loss}, acc.: {100.0 * accuracy}%] [G loss: {g_loss}]")

            # Output a sample of generated images
            sample_images(generator, iteration + 1, z_dim)

def sample_images(generator, iteration, z_dim, image_grid_rows=4, image_grid_columns=4):
    # Sample random noise
    z = np.random.normal(0, 1, (image_grid_rows * image_grid_columns, z_dim))

    # Generate images from the random noise
    gen_imgs = generator.predict(z)

    # Rescale image pixel values to [0, 1]
    gen_imgs = 0.5 * gen_imgs + 0.5

    # Set up the image grid
    fig, axs = plt.subplots(image_grid_rows, image_grid_columns, figsize=(4, 4), sharey=True, sharex=True)

    cnt = 0
    for i in range(image_grid_rows):
        for j in range(image_grid_columns):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1

    plt.suptitle(f"Iteration: {iteration}")
    plt.show()

Step 6: Train the GAN

# Set hyperparameters
iterations = 20000
batch_size = 128
sample_interval = 1000

# Train the GAN for the specified number of iterations
train(iterations, batch_size, sample_interval)

This code will train the GAN by alternating between training the discriminator and the generator. Every sample_interval iterations, it will output the current losses and a sample of images generated by the GAN.
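
Because train() records losses, accuracies, and iteration_checkpoints, you can also plot the training curves once training finishes. A minimal sketch using the lists populated above:

# Plot discriminator and generator losses recorded during training
losses_arr = np.array(losses)  # shape: (num_checkpoints, 2)
plt.figure(figsize=(10, 4))
plt.plot(iteration_checkpoints, losses_arr[:, 0], label='Discriminator loss')
plt.plot(iteration_checkpoints, losses_arr[:, 1], label='Generator loss')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.legend()
plt.show()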

Keep in mind that training GANs can be tricky: they are sensitive to hyperparameter settings, model architecture, and training procedure. Expect to tune the learning process, and don't be surprised by common failure modes such as mode collapse, where the generator keeps producing the same few outputs, or outright failure to converge.
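
If training does prove unstable, one widely used stabilization trick is one-sided label smoothing: train the discriminator against soft "real" labels (for example 0.9) instead of hard 1.0. A minimal sketch of the change inside train() (the 0.9 is an illustrative value, not something the code above uses):

# One-sided label smoothing: soften only the "real" labels
real = np.ones((batch_size, 1)) * 0.9  # instead of hard 1.0
fake = np.zeros((batch_size, 1))       # fake labels stay at 0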

