The core of our technology is a variational autoencoder that generates the synthetic data used in the assessment process and selects the most relevant information for model owners to evaluate.
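For illustration only, here is a minimal sketch of this kind of component, written in PyTorch. The architecture, layer sizes, and the `generate_synthetic` helper are illustrative assumptions, not our production code.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal variational autoencoder: encoder -> (mu, logvar) -> decoder."""
    def __init__(self, input_dim: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, input_dim)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def generate_synthetic(model: VAE, n: int, latent_dim: int = 8) -> torch.Tensor:
    """Sample synthetic records by decoding draws from the latent prior."""
    with torch.no_grad():
        z = torch.randn(n, latent_dim)
        return model.decoder(z)
```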
Data scientists can use our Model Assessment to quantify data drift and to explain outliers present in the model training and assessment datasets. This highlights how ready a model is to operate in a production environment.
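One simple way to quantify drift and flag outliers with such a model is sketched below: compare the latent codes of training and incoming data with a per-dimension Kolmogorov-Smirnov statistic, and mark records with unusually high reconstruction error. These particular statistics and function names are assumptions for the sake of the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_score(train_latents: np.ndarray, new_latents: np.ndarray) -> float:
    """Average per-dimension Kolmogorov-Smirnov statistic between two
    sets of latent codes; higher values indicate stronger drift."""
    stats = [
        ks_2samp(train_latents[:, d], new_latents[:, d]).statistic
        for d in range(train_latents.shape[1])
    ]
    return float(np.mean(stats))

def outlier_mask(train_errors: np.ndarray, new_errors: np.ndarray,
                 quantile: float = 0.99) -> np.ndarray:
    """Flag records whose reconstruction error exceeds a high quantile
    of the training-set errors -- a simple proxy for 'outlier'."""
    threshold = np.quantile(train_errors, quantile)
    return new_errors > threshold
```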
Our tool looks for model limitations by dividing the assessment set into subgroups based on the similarities learnt by the generative model. Model performance can vary considerably across subgroups, and this clustering gives a clearer view of potential model limitations and of irreducible uncertainty. Overall model robustness is also tested by feeding the model new input perturbations produced by the generative model.
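As a rough sketch of the idea (not our exact procedure), one can cluster the assessment set in the VAE latent space, report performance per subgroup, and perturb latent codes before decoding them into new inputs for a robustness check. The use of k-means, the cluster count, and the noise scale are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

def per_subgroup_accuracy(latents, y_true, y_pred, n_clusters: int = 5):
    """Cluster the assessment set in latent space and report accuracy
    per subgroup; weak subgroups hint at model limitations."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(latents)
    return {
        c: accuracy_score(y_true[labels == c], y_pred[labels == c])
        for c in range(n_clusters)
    }

def perturb_latents(latents: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Add small Gaussian noise in latent space; decoding these codes
    yields perturbed inputs for a robustness test."""
    return latents + scale * np.random.randn(*latents.shape)
```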
Being able to interpret the output of a model is a crucial part of an assessment procedure. Additional information about the model's decision process helps the model owner not only understand mistakes but also detect undesired behaviour and explain model bias. Our tool automatically selects the most representative points in the assessment set whose explanations should be evaluated by a human.
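One possible way to pick such representative points, shown purely as a sketch, is to select the record closest to each latent-space cluster centroid; the clustering method and `representative_indices` helper below are assumptions, not a description of our selection algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_indices(latents: np.ndarray, n_points: int = 10) -> np.ndarray:
    """Pick the record closest to each latent-space cluster centroid;
    these serve as the examples whose explanations a human reviews."""
    km = KMeans(n_clusters=n_points, n_init=10, random_state=0).fit(latents)
    picks = []
    for c in range(n_points):
        dists = np.linalg.norm(latents - km.cluster_centers_[c], axis=1)
        picks.append(int(np.argmin(dists)))
    return np.array(picks)
```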
Model agnostic
Cloud or On-Prem
Development and production pipelines