One of the biggest challenges in artificial intelligence is understanding how AI models are actually developed. Despite the advances in machine learning and deep learning, these technologies are still seen as black boxes; you don’t always know how a neural network processes annotated data during the development of an AI model.
It doesn’t take much for an AI model to become complex. When the model becomes too complex, it can be difficult to predict the output based on the input data sets. This too is a challenge that must be tackled, mainly because the inability to predict AI outputs means the level of trust in an AI model is significantly lower than it should be.
The problem – the decreasing level of trust in complex AI models – is more severe when you consider how AI is now used in decision-making processes. AI is responsible for generating insights that are later used to make business-critical decisions. When the AI model doesn’t perform correctly, a bad decision can be made, and the effect can be catastrophic.
The Concept of a GlassBox
One thing to understand about deep learning is its black-box nature. When data is fed through a deep learning model – especially during the development of an AI model – you cannot always understand how the data is processed other than by looking at the output. The real process of converting large data sets into a decision-making AI model isn’t always clear.
There is also always the risk of using imprecise data (the garbage-in, garbage-out problem). As more companies acquire data from data scrapers, it becomes increasingly difficult to make sure that the model is working properly.
Deloitte was the first to take a real step toward making the AI model development process clearer. Instead of relying on the black box of deep learning, Deloitte visualized how an AI model comes to a decision or a conclusion. This is the complete opposite of what major AI companies are doing; conventional AI companies tend to put a lot of trust in the black box.
GlassBox is the name of the tool (and the approach). Instead of accepting that the process is unknown, Deloitte’s GlassBox makes it transparent. Several evaluation methods are used to keep the process transparent – we will get to them in a bit. More importantly, Deloitte does this to make AI more credible and reliable.
Understanding AI
As a tool, GlassBox lets us understand how AI actually processes information. Several evaluation methods are used to make the process more transparent, and each of them is tied to a model family commonly used in the AI industry.
The first model family is neural networks, which are evaluated mostly with Garson’s method of relative node importance. When a data set is fed through a neural network, the contribution of each input parameter is derived from the network’s connection weights and visualized accordingly. The resulting interpretation does, however, remain somewhat dependent on the chosen hyper-parameters.
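To give a concrete sense of how this works, here is a minimal sketch of Garson’s algorithm for a single-hidden-layer network. The weight matrices, function name, and toy dimensions are illustrative assumptions, not part of GlassBox itself.

```python
import numpy as np

def garson_importance(w_input_hidden, w_hidden_output):
    """Relative input importance for a single-hidden-layer network,
    computed with Garson's algorithm from absolute connection weights."""
    # Contribution of each input to each hidden node, scaled by the
    # hidden node's absolute weight to the output.
    contrib = np.abs(w_input_hidden) * np.abs(w_hidden_output).reshape(1, -1)
    # Normalise per hidden node, then sum across hidden nodes per input.
    contrib /= contrib.sum(axis=0, keepdims=True)
    importance = contrib.sum(axis=1)
    # Express as a share of total importance (shares sum to 1).
    return importance / importance.sum()

# Toy example: 3 inputs, 4 hidden nodes, 1 output.
rng = np.random.default_rng(0)
w_ih = rng.normal(size=(3, 4))   # input-to-hidden weights
w_ho = rng.normal(size=(4,))     # hidden-to-output weights
print(garson_importance(w_ih, w_ho))
```

The resulting shares per input are what a relative node importance visualization would typically be built on.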
The second model family is tree-based methods. As the name suggests, variables are arranged in a tree and prioritized accordingly. When data sets are fed through this model family, a variable importance plot is used to interpret the model. This generally comes at the cost of somewhat lower accuracy.
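As an illustration of the general technique (not Deloitte’s specific implementation), the sketch below builds a variable importance plot for a tree-based model, using a public scikit-learn data set as a stand-in for real business data.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Fit a tree-based model on a public data set (placeholder for real data).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Variable importance plot: rank features by impurity-based importance.
importances = model.feature_importances_
order = importances.argsort()[::-1][:10]          # top 10 variables
plt.barh([X.columns[i] for i in order][::-1], importances[order][::-1])
plt.xlabel("Relative importance")
plt.title("Variable importance plot (tree-based model)")
plt.tight_layout()
plt.show()
```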
A discrimination-based method actively singles out parameters and data nodes to perform a deep analysis of specific features or functions. A specific set of variables becomes the center of attention when the analysis is performed, hence the discriminative nature of this model family.
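The description above is high-level; one way to read “putting a specific set of variables at the center of attention” is a targeted permutation test, sketched below. The chosen features and data set are placeholders, and this is only a stand-in for whatever discrimination-based analysis GlassBox actually performs.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical choice: focus the analysis on a specific set of variables.
features_of_interest = ["mean radius", "worst concavity"]

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for name in features_of_interest:
    X_shuffled = X_test.copy()
    # Break the link between this one variable and the target by shuffling it,
    # then measure how much the model's accuracy drops.
    X_shuffled[name] = rng.permutation(X_shuffled[name].values)
    drop = baseline - model.score(X_shuffled, y_test)
    print(f"{name}: accuracy drop {drop:.3f}")
```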
An instance-based method, on the other hand, relies on certain parameters to identify risk drivers and then takes a more detailed approach by analyzing neighboring data points. It may not be the best-performing model family, but it is one of the most accurate.
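A simple way to illustrate the neighbor-based part of this idea is to retrieve the historical cases most similar to a new record, for example with scikit-learn’s NearestNeighbors. The data set and the choice of five neighbors are assumptions made for the sketch.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Scale features so that "neighbor" is meaningful across different units.
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)

# Index the historical cases; to explain a new case, show its closest neighbors.
nn = NearestNeighbors(n_neighbors=5).fit(X_scaled[1:])   # historical cases
new_case = X_scaled[[0]]                                  # stand-in for an incoming record
distances, indices = nn.kneighbors(new_case)

for dist, idx in zip(distances[0], indices[0]):
    # idx indexes the historical cases, which start at row 1 of the data.
    print(f"neighbor #{idx}: distance {dist:.2f}, outcome {y.iloc[idx + 1]}")
```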
Last but certainly not least, we have the generative method, which relies on its ability to detect class imbalance and perform course corrections while analyzing data sets. The generative method checks two things: whether the parameters hold and whether they are realistic.
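What such a check could look like in practice is sketched below: a deliberately simple generative model (independent Gaussians per feature) is used to rebalance the minority class, followed by a basic realism test. Both the model and the test are assumptions for illustration, not Deloitte’s actual method.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

# Class imbalance check: identify the minority class and how short it falls.
classes, counts = np.unique(y, return_counts=True)
minority = classes[counts.argmin()]
X_min = X[y == minority]
print(f"class counts: {dict(zip(classes, counts))}")

# Fit a simple generative model (independent Gaussians) to the minority class
# and draw synthetic samples to rebalance the data set.
rng = np.random.default_rng(0)
n_needed = counts.max() - counts.min()
synthetic = rng.normal(loc=X_min.mean(axis=0),
                       scale=X_min.std(axis=0),
                       size=(n_needed, X.shape[1]))

# Realism check: flag synthetic rows that fall outside the observed range.
in_range = ((synthetic >= X_min.min(axis=0)) &
            (synthetic <= X_min.max(axis=0))).all(axis=1)
print(f"{in_range.sum()} of {n_needed} synthetic samples look realistic")
```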
With the insight GlassBox provides into these processes, the reliability and accuracy of AI models can be elevated to a whole new level. More importantly, AI developers and business users can now influence the learning process of their AI core more actively, allowing a suitable AI model to be developed over a shorter period of time.
The Future
GlassBox comes with a lot of benefits, but the biggest of them all is that it delivers better, more accurate, and more reliable AI models. Since developers and users can actively take part in the learning process of an AI, accuracy and trustworthiness become attainable objectives.
That doesn’t mean there aren’t any other unknown factors in the development of an AI model. Deep learning itself is still – pretty much – a black box; we don’t know the content of that black box and don’t always have complete control over the process. It is still necessary to provide good, well-annotated data in order to produce a capable AI model.