Article by Adrian Cartland

Explainable AI is all the rage at legal technology conferences at the moment. It is widely considered essential for algorithms that are used in law. Here is why I think that popular view is wrong, and why I generally dislike prediction algorithms anyway.
Machine learning is statistics

There is a popular joke circulating at the moment:

“When you’re fundraising, it’s AI.

When you’re hiring, it’s ML.

When you’re implementing, it’s logistic regression.”

Now, behind every joke there is at least an ounce of truth. But the punchline of this joke is only funny if one understands that there is a serious difference between deep learning and a simple statistical analysis. The mathematics we are more commonly familiar with (statistical regressions, a dependent variable, correlations between different factors) is what we typically think of as explainable, but it works quite differently from machine learning. For example, we might calculate a relationship between inflation and unemployment, or between education and lifetime earnings, and then draw conclusions based on those relationships. Under good scientific analysis these regressions will be repeatable, the assumptions made in calculating them will be explainable, and therefore we have the possibility of a high level of transparency if we make decisions based on them.
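To make that contrast concrete, here is a minimal sketch of how transparent a classical regression is. The education/earnings numbers below are made up purely for illustration, and the fit is ordinary least squares written out by hand so that every step behind the "explanation" is visible.

```python
# Hand-rolled ordinary least squares for a single predictor:
# the fitted slope *is* the explanation of the model's predictions.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical, synthetic data: years of education vs. annual earnings
# (in $1,000s). Illustrative numbers only, not real statistics.
education = [10, 12, 12, 14, 16, 16, 18, 20]
earnings = [30, 34, 36, 40, 48, 50, 56, 62]

slope, intercept = fit_line(education, earnings)

# The model is fully transparent and repeatable: a prediction is just
# intercept + slope * years, and the slope is a single, directly
# inspectable claim about the relationship in the data.
print(f"earnings ~ {intercept:.1f} + {slope:.2f} * years_of_education")
```

Contrast this with a deep neural network, where the equivalent of that slope is smeared across millions of weights and no single number carries the explanation.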

Creator of Ailira, the Artificial Intelligence that automates legal information and research, and Principal of Cartland Law.