Interpret Alternatives
Similar projects and alternatives to interpret
- AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
- yggdrasil-decision-forests: A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
- shapash: User-friendly explainability and interpretability to develop reliable and transparent machine learning models.
- imodels: Interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
interpret discussion
interpret reviews and mentions
- [D] Alternatives to the shap explainability package
Maybe InterpretML? It's developed and maintained by Microsoft Research and consolidates a lot of different explainability methods.
- What Are the Most Important Statistical Ideas of the Past 50 Years?
You may also find Explainable Boosting Machines interesting: https://github.com/interpretml/interpret
They're a bit of a best of both worlds between linear models and random forests (generalized additive models fit with boosted decision trees).
Disclosure: I helped build this open source package
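
To make the EBM mentions above concrete, here is a minimal sketch of fitting one with interpret; the dataset choice and variable names are illustrative, not from the thread:

```python
# Minimal sketch: fitting an Explainable Boosting Machine with interpret.
# Assumes `pip install interpret scikit-learn`; dataset is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM is a generalized additive model whose per-feature shape functions
# are fit with gradient boosting, so it stays glassbox yet competitive.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)
print("test accuracy:", ebm.score(X_test, y_test))

# Global explanation: one learned shape function per feature (plus any
# pairwise interaction terms), viewable in the interpret dashboard.
show(ebm.explain_global())
```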
- [N] Google confirms DeepMind Health Streams project has been killed off
Microsoft's Explainable Boosting Machine (which is a Generalized Additive Model, not a Gradient Boosted Trees model) is a step in that direction: https://github.com/interpretml/interpret
- [Discussion] XGBoost is the way.
Also, I'd recommend everyone who works with xgboost give EBMs a try! They perform comparably (except in the case of extreme interactions) but are actually interpretable! https://github.com/interpretml/interpret/ Besides that, since at runtime they're practically a lookup table, they're very quick (at the cost of longer training time).
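
The lookup-table remark can be seen directly: a fitted EBM predicts with its intercept plus one learned score per term, and interpret's explain_local exposes those per-sample contributions. A sketch reusing ebm, X_test, and y_test from the fitting example above:

```python
# Sketch: inspecting the additive structure of the fitted EBM from above.
# Each prediction decomposes into intercept + per-term lookup scores, which
# is why inference is cheap even though training is comparatively slow.
from interpret import show

local = ebm.explain_local(X_test[:5], y_test[:5])  # per-sample term scores
show(local)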
- [D] Generalized Additive Models… with trees?
Open source code by Microsoft: https://github.com/interpretml/interpret (called EBM in this implementation).
- Machine Learning with Medical Data (unbalanced dataset)
If it's not an image, have a go at Microsoft's Explainable Boosting Machine: https://github.com/interpretml/interpret which is not a GBM but a GAM (Gradient Boosting Machine vs. Generalized Additive Model). This will also give you explanations via SHAP or LIME values.
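
On the SHAP/LIME point, interpret also bundles blackbox explainer wrappers alongside the glassbox EBM. A sketch of its LIME wrapper around an ordinary fitted model; the constructor signature has varied across interpret releases, so treat this as indicative, and note it assumes the optional lime dependency is installed:

```python
# Sketch: interpret's blackbox LimeTabular wrapper around any fitted model.
# Follows recent interpret docs; older releases took a predict_fn argument.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret.blackbox import LimeTabular
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
lime = LimeTabular(rf, X_train)  # wraps the blackbox model
show(lime.explain_local(X_test[:5], y_test[:5]))
```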
Stats
interpretml/interpret is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of interpret is C++.