Spectrum xAI is a comprehensive pipeline for explainable AI (xAI). It provides a suite of state-of-the-art explainability methods to analyze and interpret deep learning models. The pipeline supports evaluation metrics such as IoU, F1-score, AUC, and execution time, along with visualizations for a better understanding of model predictions.
-
Explainability Methods:
- Grad-CAM
- Gradient SHAP
- Integrated Gradients
- LIME
- Saliency Map
- Score-CAM
-
Evaluation Metrics:
- Intersection over Union (IoU)
- F1-Score
- Area Under the Curve (AUC)
- Execution Time
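The first two metrics can be computed directly from a predicted binary mask and the ground-truth mask. As a minimal sketch (using NumPy; function names here are illustrative, not the pipeline's actual API):

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

def f1_score(pred: np.ndarray, target: np.ndarray) -> float:
    """F1-score (Dice coefficient) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return float(2 * inter / denom) if denom else 1.0

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(iou_score(pred, gt))  # 0.5
print(f1_score(pred, gt))   # 0.666...
```

AUC is typically computed on the continuous heatmap against the ground-truth mask (e.g., with scikit-learn's `roc_auc_score`), and execution time by timing the explanation call.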
-
Supported Model:
- DenseNet (fully implemented)
-
Visualization:
- Heatmaps and thresholded masks for each explainability method.
- Ground truth comparison for qualitative evaluation.
-
Clone the repository:
git clone https://github.com/IsmailHatim/Spectrum
cd Spectrum
-
Install the required dependencies:
pip install -r requirements.txt
-
Ensure you have the DAGM dataset in the data/dataset/ directory. You can download it from the Kaggle challenge page.
To train a model (e.g., DenseNet121) on the first class of the DAGM dataset:
python run_training.py --model_name densenet121 --epochs 10 --batch_size 32 --learning_rate 0.0001 --plot
-
To run an explainability method (e.g., Grad-CAM) on a trained model and save figures:
python run_explanation.py --method gradcam --index 0 --threshold 0.5 --model_name densenet121 --conv_layer_index -2 --save
-
To run the whole evaluation using a specific method (e.g., Grad-CAM) on a trained model:
python run_evaluation.py --method gradcam --threshold 0.5 --model_name densenet121 --conv_layer_index -2
-
Data Loading:
- The pipeline uses the DAGM dataset, which is divided into training and testing splits for each class.
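Listing the files for one class and split can be sketched as follows. The directory layout shown here (Class<k>/<split>/*.png under data/dataset/) is an assumption for illustration; check the repository's actual dataset structure:

```python
from pathlib import Path

def list_split(root: str, class_id: int, split: str) -> list:
    """List image files for one DAGM class and split.

    Assumed (hypothetical) layout: <root>/Class<k>/<split>/*.png
    """
    split_dir = Path(root) / f"Class{class_id}" / split
    return sorted(split_dir.glob("*.png"))

# Usage (paths are illustrative):
# train_files = list_split("data/dataset", 1, "Train")
```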
-
Model Training:
- Train models such as DenseNet121 on the dataset.
- Save trained models for later use.
-
Explainability Methods:
- Apply explainability methods to generate heatmaps and thresholded masks for model predictions.
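For Grad-CAM, the heatmap and thresholded mask come from a weighted sum of a convolutional layer's activations, where each channel's weight is the spatial mean of the score's gradient with respect to that channel. A minimal NumPy sketch of this core computation (not the repository's implementation, which works on live model hooks):

```python
import numpy as np

def gradcam_heatmap(activations: np.ndarray, gradients: np.ndarray,
                    threshold: float = 0.5):
    """Grad-CAM core computation on a (C, H, W) feature map.

    activations: forward activations of a conv layer, shape (C, H, W).
    gradients:   d(class score)/d(activations), same shape.
    Returns (heatmap scaled to [0, 1], binary mask from `threshold`).
    """
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    mask = (cam >= threshold).astype(np.uint8)        # thresholded mask
    return cam, mask
```

In practice the heatmap is then upsampled to the input resolution before being overlaid on the image or compared against the ground-truth mask.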
-
Evaluation:
- Evaluate the explainability methods using IoU, F1-Score, AUC, and execution time.
-
Visualization:
- Visualize the input image, heatmap, thresholded mask, and ground truth for qualitative analysis.
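A side-by-side panel figure of this kind can be sketched with Matplotlib as below; the function name and default output path are illustrative, not the pipeline's API:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: figures are saved, not shown
import matplotlib.pyplot as plt

def plot_explanation(image, heatmap, mask, ground_truth,
                     out_path="explanation.png"):
    """Save side-by-side panels: input, heatmap, thresholded mask, ground truth."""
    panels = [(image, "Input", "gray"),
              (heatmap, "Heatmap", "jet"),
              (mask, "Thresholded mask", "gray"),
              (ground_truth, "Ground truth", "gray")]
    fig, axes = plt.subplots(1, 4, figsize=(12, 3))
    for ax, (img, title, cmap) in zip(axes, panels):
        ax.imshow(img, cmap=cmap)
        ax.set_title(title)
        ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
```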
-
Train a DenseNet121 model on Class 1 of the DAGM dataset:
python run_training.py --model_name densenet121 --epochs 10 --batch_size 32 --learning_rate 0.0001 --plot
-
Run Grad-CAM on the trained model:
python run_explanation.py --method gradcam --index 0 --threshold 0.5 --model_name densenet121 --conv_layer_index -2 --save
-
Run evaluation using Grad-CAM on the whole test set:
python run_evaluation.py --method gradcam --threshold 0.5 --model_name densenet121 --conv_layer_index -2
-
Visualize the results:
- Input image
- Grad-CAM heatmap
- Thresholded mask
- Ground truth
- Mean and Standard Deviation metrics
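The reported mean and standard deviation are simply aggregated over the per-image metric scores; as a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def summarize(scores):
    """Mean and population standard deviation of per-image metric scores."""
    arr = np.asarray(scores, dtype=float)
    return float(arr.mean()), float(arr.std())

mean_iou, std_iou = summarize([0.6, 0.8, 0.7])
print(f"IoU: {mean_iou:.3f} +/- {std_iou:.3f}")  # IoU: 0.700 +/- 0.082
```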
-
Explainability Methods:
- Extend the pipeline with additional explainability methods and other models.
-
Dataset Support:
- Generalize the pipeline to work with other datasets.
Contributions are welcome! Please fork the repository and submit a pull request.
