IBM Corp. is boosting its data quality capabilities, announcing today the acquisition of a Tel Aviv-based startup for an undisclosed amount. The company is a provider of data observability software that businesses use to detect and fix problems with their data, such as errors, pipeline failures and poor quality.

IBM said data quality has become a big issue for enterprises today as they become more reliant on huge volumes of information to inform business decision-making. Companies need a way to understand the health of the data in their systems, so they can identify any issues and fix them before they impact their business.

So-called “bad data” that’s inaccurate or incomplete can cause big problems with services such as artificial intelligence systems and predictive models that are used to assess demand for a certain product, for example. If the data feeding into such a system is flawed, then the results it generates cannot be trusted — hence the need for data observability tools to ensure the quality of that information is not compromised.

For large enterprises, bad data is a big problem: Gartner Inc. estimates that it costs organizations an average of $12.9 million per year. The startup is a leading player in data observability. Its software uses historical trends to compute statistics about data workloads and data pipelines at the source, determining whether they're working correctly and where problems might exist. The company employs an open and extendable approach to data observability that can easily be integrated into existing data infrastructure, IBM said.
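The core idea of using historical trends to flag pipeline problems can be illustrated with a simple statistical check. The sketch below is a minimal, hypothetical example (not the acquired company's actual implementation): it compares a pipeline metric, such as a daily row count, against its historical mean and flags deviations of more than a few standard deviations.

```python
import statistics

def detect_anomaly(history, latest, threshold=3.0):
    """Flag a pipeline metric (e.g. a daily row count) that deviates
    from its historical trend by more than `threshold` standard
    deviations. A True result would trigger an alert to the data team."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No historical variance: any change at all is suspicious.
        return latest != mean
    z_score = abs(latest - mean) / stdev
    return z_score > threshold

# Typical daily row counts for a pipeline, then a sudden drop.
history = [10_120, 9_980, 10_050, 10_210, 9_900, 10_070]
print(detect_anomaly(history, 1_250))   # True: likely a pipeline failure
print(detect_anomaly(history, 10_000))  # False: within the normal range
```

Real data observability platforms track many such metrics (volumes, schemas, freshness, null rates) and learn thresholds automatically, but the principle is the same: a baseline built from history, and alerts when new data departs from it.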

The plan is to combine the startup's tools with services such as IBM Observability by Instana APM and IBM Watson Studio, the computing giant said, in order to enhance its existing capabilities in the data observability space. For instance, the startup's software will alert teams to a problem, such as incomplete or missing data.

Then, that team will be able to use Instana to find out where the missing data originated from and why it is causing an application or service to fail. By using the two tools together, customers will have a more complete view of their application’s infrastructure and data pipeline, making any issues much easier to resolve, IBM said.
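The first step in that workflow, detecting incomplete or missing data, can be sketched as a completeness check over incoming records. This is a hypothetical illustration of the general technique, not the actual product's API: it counts missing or null values per required field so a team can be alerted before the incomplete data reaches downstream applications.

```python
def completeness_report(records, required_fields):
    """Count missing or null values per required field across a batch
    of records. A non-empty result would be surfaced as an alert."""
    missing = {field: 0 for field in required_fields}
    for record in records:
        for field in required_fields:
            if record.get(field) in (None, ""):
                missing[field] += 1
    # Report only the fields that actually have gaps.
    return {field: count for field, count in missing.items() if count > 0}

orders = [
    {"id": 1, "customer": "acme", "amount": 120.0},
    {"id": 2, "customer": None, "amount": 75.5},
    {"id": 3, "customer": "globex", "amount": None},
    {"id": 4, "amount": 20.0},  # "customer" key absent entirely
]
print(completeness_report(orders, ["id", "customer", "amount"]))
# {'customer': 2, 'amount': 1}
```

From a report like this, an engineer could then pivot to an application monitoring tool such as Instana to trace where in the pipeline the affected records lost their values.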

Constellation Research Inc. analyst Doug Henschen explained that bad data continues to be a very real challenge for enterprises, with one of the main issues being that "good data goes bad" as the systems producing it change or fail.

“Data observability is a growing approach wherein information isn’t just cleaned once as part of a prep or transformation process,” Henschen said. “Rather, data quality is monitored and tracked on an ongoing basis and owners, stewards and users of downstream systems are proactively alerted to take preventative or remediating action to get data quality back on track when problems arise in dynamic environments.”

IBM General Manager for Data and AI Daniel Hernandez said many of the company’s clients are data-driven enterprises that rely on high-quality and trustworthy information to power mission-critical processes and applications.

“When they don’t have access to the data they need, their business can grind to a halt,” Hernandez said. “With the addition of [the company], IBM offers the most comprehensive set of observability capabilities for IT across applications, data and machine learning.”

The Tel Aviv-based startup will become a part of IBM’s Data and AI business unit, which includes IBM Watson and IBM Cloud Pak for Data. Its tools will be made available as a software-as-a-service or self-hosted software subscription, the company added.



