Source code/webpage/demos for the What-If Tool
Sample project using IBM's AI Fairness 360, an open source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
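For context on what such a sample project typically does, here is a minimal sketch of computing a group-fairness metric with AI Fairness 360; the toy DataFrame, column names, and group definitions are illustrative assumptions, not details from the repository.

```python
# Minimal sketch: measuring statistical parity with AI Fairness 360 (aif360).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical tabular data with a binary label and a binary protected attribute.
df = pd.DataFrame({
    "feature": [0.2, 0.8, 0.5, 0.9],
    "sex":     [0, 1, 0, 1],   # protected attribute (assumed encoding)
    "label":   [0, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Difference in favorable-outcome rates between unprivileged and privileged groups.
print("Statistical parity difference:", metric.statistical_parity_difference())
```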
"My journey through computer vision basics. This repo contains organized code snippets, exercises, and examples demonstrating core OpenCV functionalities, from reading images to implementing filters."
Fairness and bias detection library for Elixir AI/ML systems
Advanced Computer Vision Projects: YOLOv8 Real-time Tracking and YOLOv5 Object Detection
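A minimal sketch of YOLOv8 real-time tracking via the ultralytics package is shown below; the weights file and video source are assumptions, not details from the project.

```python
# Minimal sketch: run YOLOv8's built-in tracker over a video with ultralytics.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained detection model (assumed weights file)

# Track objects across frames; results carry boxes with persistent track IDs.
results = model.track(source="traffic.mp4", show=False, persist=True)

for r in results:
    if r.boxes.id is not None:
        print("Track IDs in frame:", r.boxes.id.tolist())
```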
Tools to assess fairness and mitigate unfairness in sociolinguistic auto-coding
Deep-learning approach for generating fair and accurate input representations for crime-rate estimation with continuous protected attributes and continuous targets.
Fairness analysis in ML using the ABLNI metric with the Pima Diabetes dataset - complete SDK with visualizations and reports