Google tackles the black box problem with Explainable AI

There is a problem with artificial intelligence.

It can be remarkably good at churning through gigantic amounts of data to solve challenges that humans struggle with. But understanding how it reaches its decisions is often very difficult, if not impossible.

That means that when an AI model works, it is harder than it should be to refine it further, and when it exhibits odd behaviour, it can be hard to fix.

But at an event in London this week, Google’s cloud computing division pitched a new facility that it hopes will give it an edge over Microsoft and Amazon, which dominate the sector. Its name: Explainable AI.

To start with, it will give information about the performance and potential shortcomings of face- and object-detection models. But in time the firm intends to offer a wider set of insights to help make the “thinking” of AI algorithms less mysterious and therefore more trustworthy.

“Google is definitely the underdog behind Amazon Web Services and Microsoft Azure in terms of the cloud platform space, but for AI workloads I wouldn’t say that’s the case – particularly for retail clients,” commented Philip Carter of the consultancy IDC.

“There’s a bit of an arms race around AI… and in some ways Google could be seen to be ahead of the other players.”
