Documentation for improved explainability of Machine Learning models
Why does this matter?
Explainability is essential for building and maintaining trust across the whole ecosystem of stakeholders. It also matters from a regulatory perspective: for example, the EU General Data Protection Regulation (2016) grants users a right to an explanation, i.e. the right to know why a particular automated decision about them was made.
A good explainability framework will be able to explain how an AI system works, what drives its decisions, and whether the model can be trusted. We identify two main routes to improving the explainability of a system:
Documentation: e.g. providing clear, informative material for users and documenting how the dataset and the model were built and are used, so that they can be reproduced and understood.
Tools: e.g. techniques for extracting meaningful explanations from models, debugging tools, etc.
This roadmap will focus on the former option; you can find a roadmap about technical tools here, and more information about creating a good explainability framework here.
This roadmap
In this roadmap, we will focus on documentation techniques that can improve the explainability of your system. First, we will introduce dataset sheets, which help you create clear documentation for your dataset. Then, we will introduce model cards, which give you a template for documenting your machine learning or AI model.
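To make this concrete, below is a minimal, hypothetical sketch of what such documentation might look like when kept as structured data alongside the code. The class and field names (`DatasetSheet`, `ModelCard`, `to_markdown`) are illustrative assumptions, loosely inspired by the section headings used in the dataset-sheet and model-card literature, not a prescribed template:

```python
"""A minimal sketch of machine-readable documentation templates.

The field names below are illustrative only; adapt them to the
questions your own dataset sheets and model cards need to answer.
"""
from dataclasses import dataclass, field, asdict


@dataclass
class DatasetSheet:
    """Key questions a dataset sheet should answer."""
    name: str
    motivation: str          # Why was the dataset created?
    composition: str         # What does each instance represent?
    collection_process: str  # How was the data collected?
    intended_uses: str       # What tasks is the dataset suitable for?
    maintenance: str         # Who maintains it, and how is it updated?


@dataclass
class ModelCard:
    """Key questions a model card should answer."""
    name: str
    model_details: str   # Architecture, version, training setup
    intended_use: str    # In-scope and out-of-scope applications
    metrics: dict = field(default_factory=dict)  # e.g. accuracy per subgroup
    limitations: str = ""                        # Known failure modes, caveats


def to_markdown(card) -> str:
    """Render either dataclass above as a simple Markdown document."""
    lines = [f"# {card.name}"]
    for key, value in asdict(card).items():
        if key == "name":
            continue
        lines.append(f"\n## {key.replace('_', ' ').title()}\n{value}")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical example values, for illustration only.
    card = ModelCard(
        name="Loan-approval classifier v1.2",
        model_details="Gradient-boosted trees trained on 2015-2023 applications.",
        intended_use="Pre-screening only; final decisions require human review.",
        metrics={"accuracy": 0.91, "min_subgroup_accuracy": 0.84},
        limitations="Not validated for applicants outside the EU.",
    )
    print(to_markdown(card))
```

One advantage of keeping the documentation as structured data and rendering it to Markdown is that it can be versioned alongside the code, making it easier to keep the documentation in sync with the dataset or model it describes.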