We introduce here the risk mitigation roadmaps: a set of guides to help you mitigate some of the most common AI risks. Each roadmap outlines a technical risk and presents potential solutions, usually composed of two or more steps. The roadmaps are accompanied by Jupyter notebooks available in this repository.
We can think of AI risks as divided into five areas: Efficacy, Robustness, Privacy, Bias and Explainability. For each of these verticals, we have created a guide explaining how to measure and mitigate the risk. We link the guides below:
- Efficacy: Risk that the system underperforms relative to its use-case.
  - Improving generalisation through model validation (see the validation sketch after this list)
- Robustness: Risk that the system fails in response to changes or attacks.
  - Adversarial training for robustness (see the adversarial training sketch below)
- Privacy: Risk that the system is sensitive to personal or critical data leakage.
- Bias: Risk that the system treats individuals or groups unfairly.
  - Measuring Bias and Discrimination (see the bias measurement sketch below)
  - Mitigating Bias and Discrimination
- Explainability: Risk that the system may not be understandable to users and developers.
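To make the efficacy roadmap concrete, here is a minimal sketch of estimating generalisation with k-fold cross-validation in scikit-learn. The `make_classification` dataset and `RandomForestClassifier` model are illustrative assumptions, not the setup used in the accompanying notebooks.

```python
# Minimal sketch: estimating generalisation with k-fold cross-validation.
# The dataset and model below are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# 5-fold cross-validation gives a less optimistic estimate of
# out-of-sample accuracy than a single train/test split.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Accuracy per fold: {scores}")
print(f"Mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

A large gap between training accuracy and the cross-validated mean is a typical signal that the system may underperform in its use-case.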
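For the robustness roadmap, the sketch below shows one common form of adversarial training: perturbing inputs with the fast gradient sign method (FGSM) and training on the perturbed batch. The toy PyTorch model, the synthetic data and the `epsilon` budget are all assumptions made for illustration.

```python
# Minimal sketch of FGSM-based adversarial training in PyTorch.
# Architecture, data and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary-classification data standing in for a real dataset.
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
epsilon = 0.1  # perturbation budget (assumed value)

for epoch in range(20):
    # 1. Craft adversarial examples with the fast gradient sign method.
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y)
    loss.backward()
    X_adv = (X_adv + epsilon * X_adv.grad.sign()).detach()

    # 2. Train on the perturbed inputs so the model learns to resist them.
    optimizer.zero_grad()
    adv_loss = loss_fn(model(X_adv), y)
    adv_loss.backward()
    optimizer.step()

print(f"Final adversarial-training loss: {adv_loss.item():.3f}")
```

Training only on the perturbed batch is one variant; mixing clean and adversarial examples in each step is another common choice.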
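For the bias roadmap, measurement can start with a simple group metric. The sketch below computes the disparate impact ratio over synthetic predictions; the protected attribute and the decision vector are assumptions, and a real analysis would use the data from the notebooks.

```python
# Minimal sketch: measuring group fairness with the disparate impact ratio.
# The predictions and protected attribute below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)  # model decisions (1 = favourable)
group = rng.integers(0, 2, size=1000)   # protected attribute (0 or 1)

rate_a = y_pred[group == 0].mean()  # favourable-outcome rate, group 0
rate_b = y_pred[group == 1].mean()  # favourable-outcome rate, group 1

# Disparate impact: ratio of favourable rates between groups. The common
# "80% rule" flags values below 0.8 as potentially discriminatory.
di = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group rates: {rate_a:.3f} vs {rate_b:.3f}, disparate impact: {di:.3f}")
```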