Startup Fiddler Labs Inc. said today it’s doing more to help companies ensure their artificial intelligence models are trustworthy and responsible.
The company announced a major upgrade to its Model Performance Management software, with new capabilities that include model ingestion at giga-scale, natural language processing and computer vision monitoring, and a more intuitive user experience.
Founded in October 2018, Fiddler Labs has created a platform aimed at boosting visibility into AI, helping companies analyze, manage and deploy machine learning models at scale and protect against issues such as bias and model drift.
Bias and model drift are big problems for AI because they cause models to come to inaccurate conclusions that can have an adverse impact on businesses. What’s more, bias is hard to solve because there are multiple causes of it.
For example, it can be caused by insufficient training data, where some demographic groups are absent or underrepresented. A second problem is that everyone carries conscious or unconscious biases, and these can find their way into the training data and be captured by models.
Fiddler Labs attempts to remove bias by probing machine learning models at different granularities in order to understand their true behavior. In this way, it provides model explainability, monitoring and bias detection, helping companies understand why their models reach the conclusions they do.
In a recent interview with theCUBE (below), SiliconANGLE Media’s mobile livestreaming studio, Fiddler Chief Executive Krishna Gade explained that Fiddler is attempting to help companies “operationalize AI” in order to make it more reliable.
“Without this visibility, you cannot build trust and actually use it in your business,” Gade said. “With Fiddler, what we provide is we actually open up this black box and we help our customers to really understand how those models work.”
Fiddler Labs said today’s updates to the Fiddler MPM platform provide an even deeper understanding of the behavior and performance of models built on unstructured data, enabling users to discover rarer and more nuanced forms of model drift.
For example, the new NLP and computer vision monitoring capabilities help companies gain deeper insights into more complex models that are trained on unstructured data such as text, images and embeddings. That will allow medical practitioners to achieve greater accuracy when using AI to recognize patterns that could signify illnesses, the company said. Moreover, manufacturers will be alerted when a “defect detection” model changes its behavior, something that could otherwise lead to manufacturing defects going unrecognized.
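Drift in models trained on unstructured data is typically tracked through their embeddings. As a minimal illustration of the general idea (a toy sketch, not Fiddler’s actual algorithm), one can compare the embedding distribution of a production window against a training-time reference, here using a simple cosine-distance score between mean embedding vectors:

```python
import numpy as np

def embedding_drift(reference: np.ndarray, production: np.ndarray) -> float:
    """Toy drift score: cosine distance between the mean embedding vector
    of a reference (training-time) window and a production window.
    Illustrative only -- not Fiddler's actual drift metric."""
    ref_mean = reference.mean(axis=0)
    prod_mean = production.mean(axis=0)
    cosine = np.dot(ref_mean, prod_mean) / (
        np.linalg.norm(ref_mean) * np.linalg.norm(prod_mean)
    )
    return 1.0 - float(cosine)

rng = np.random.default_rng(0)
ref = rng.normal(1.0, 1.0, size=(500, 64))    # reference embeddings
same = rng.normal(1.0, 1.0, size=(500, 64))   # production window, same distribution
shifted = rng.normal(1.0, 1.0, size=(500, 64))
shifted[:, :32] += 2.0                        # production window, half the dimensions shifted

drift_same = embedding_drift(ref, same)       # stays near zero
drift_shifted = embedding_drift(ref, shifted) # noticeably larger
```

A production system would replace the mean-vector comparison with a more robust distance over the full embedding distributions, but the monitoring loop is the same: compare each production window against the reference and alert when the score crosses a threshold.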
Fiddler said the other focus of today’s update is on addressing class imbalance. The platform can help organizations discover highly nuanced model drifts with regard to minority segments, while surfacing “fraud-like use cases” in finance, retail, gaming, manufacturing and education.
For instance, it can help models become better at recognizing fraudulent transactions with subtle variations, the kind that might otherwise cost a gaming company millions of dollars in lost revenue. It can also help companies protect their advertising efforts by detecting higher-than-usual ad click rates, which could signal malicious behavior.
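To see why class imbalance matters for monitoring, consider a hypothetical fraud scenario: when fraud makes up only about 1% of traffic, a drift confined to the fraud segment barely moves aggregate statistics, so it has to be measured per segment. A minimal simulation of that effect (illustrative numbers and assumptions only, not Fiddler’s method):

```python
import numpy as np

rng = np.random.default_rng(1)

def window(fraud_shift=0.0, n=10_000, fraud_rate=0.01):
    """Simulate one monitoring window of a transaction feature.
    Fraud is ~1% of traffic; only the fraud segment is allowed to drift."""
    is_fraud = rng.random(n) < fraud_rate
    feature = rng.normal(0.0, 1.0, n)
    feature[is_fraud] += 5.0 + fraud_shift  # fraud sits at a higher baseline
    return feature, is_fraud

ref_x, ref_f = window()                   # reference window
prod_x, prod_f = window(fraud_shift=3.0)  # fraud behavior drifts in production

# Aggregate statistics barely move...
overall = abs(prod_x.mean() - ref_x.mean())
# ...while the minority (fraud) segment shifts dramatically.
fraud_only = abs(prod_x[prod_f].mean() - ref_x[ref_f].mean())
```

Segment-level comparison surfaces a drift of roughly 3 standard deviations in the fraud slice, while the overall mean moves by only a few hundredths, which is the kind of rare but costly shift the article describes.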
To keep things simple for users, Fiddler said it has revamped its user interface, creating a kind of command center for machine learning operations teams, with visibility into the behavior of each of the AI models being tracked and monitored. Teams now have a single pane of glass to view, prioritize and manage updates, alerts, traffic and drifts, the company said.
Gade said that AI operations can only be successful when companies know their models are resilient in response to shifts in data and are not unduly discriminating against certain minority groups. “The ability to understand and explain unstructured data and discover rare but costly model drifts is game changing, and opens up tremendous AI opportunities across a plethora of use cases and a diverse set of industries,” Gade said.