Risk mitigation roadmaps
This page introduces the risk mitigation roadmaps, a set of guides that help you mitigate some of the most common AI risks.
Each roadmap outlines a technical risk and presents potential solutions, usually composed of two or more steps. The roadmaps are also accompanied by tutorials and examples in the form of Jupyter Notebooks.
We can think of AI risks as being divided into five areas:
- Efficacy: Risk that the system underperforms relative to its use-case.
- Robustness: Risk that the system fails in response to changes or attacks.
- Privacy: Risk that the system is sensitive to personal or critical data leakage.
- Bias: Risk that the system treats individuals or groups unfairly.
- Explainability: Risk that the system's behavior is not understandable to users and developers.
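To make one of these areas concrete, here is a minimal sketch of how the robustness risk above can be probed: train a model, then compare its accuracy on clean test inputs against inputs perturbed with Gaussian noise. The dataset, model, and noise scale are illustrative assumptions, not part of the roadmaps themselves.

```python
# Illustrative sketch (assumed setup, not a roadmap implementation):
# measure how much a model's accuracy drops when test inputs are perturbed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def robustness_gap(noise_scale=1.0, seed=0):
    # Synthetic classification task standing in for a real use-case.
    X, y = make_classification(n_samples=500, n_features=10, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Accuracy on unmodified test data.
    clean_acc = model.score(X_te, y_te)

    # Accuracy after a simple perturbation: additive Gaussian noise.
    rng = np.random.default_rng(seed)
    X_noisy = X_te + rng.normal(scale=noise_scale, size=X_te.shape)
    noisy_acc = model.score(X_noisy, y_te)
    return clean_acc, noisy_acc

clean_acc, noisy_acc = robustness_gap()
print(f"clean accuracy: {clean_acc:.2f}, noisy accuracy: {noisy_acc:.2f}")
```

A large gap between the two scores is one simple signal that the system may fail in response to changes in its inputs; the roadmaps and accompanying notebooks cover more systematic approaches.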