Would you trust a machine if it could explain itself?

AI explainability and automated decision making

What makes a black-box society is the box's inability to explain itself. At least, that's one argument put forward by those who see explainability as a path to legitimizing automated decision making.

However, a larger question remains: if a machine could explain itself, would you believe it? Would you accept the explanation, or would you want more evidence …
