Anomalo’s unstructured data solution cuts enterprise AI deployment time by 30%

November 21, 2024 6:00 AM

Credit: Image generated by VentureBeat with FLUX-pro-1.1


Enterprise AI is only as good as the data that is available to a model.

In the past, enterprises largely relied on structured data. With the rapid adoption of generative AI, enterprises are increasingly aiming to consume vastly larger amounts of unstructured data. Unstructured data, by definition, doesn’t have a fixed structure and can come in any number of formats. That poses a challenge for enterprises, because the quality of unstructured data is often unknown. Data quality can refer to accuracy, knowledge gaps, duplication and other issues that impact the utility of data.

Data quality tools, long used for structured data, are now expanding to unstructured data for enterprise AI. One such vendor is Anomalo, which has been developing its data quality platform for structured data for several years. Today the company announced an expansion of its platform to better support unstructured data quality monitoring.

Anomalo’s co-founder and CEO Elliot Shmukler believes that his company’s technology can have a strong impact on organizations.

“We believe that by eliminating data quality issues, we can accelerate at least 30% of gen AI deployments,” Shmukler told VentureBeat in an exclusive interview.

He noted that enterprises abandon some AI projects after the proof-of-concept stage. The root issue lies in poor data quality, large data gaps and enterprise data that is simply not ready for gen AI consumption.

“We believe using Anomalo’s unstructured monitoring could accelerate typical gen AI projects in the Enterprise by as much as a year,” Shmukler said. “This is due to the ability to very quickly understand, profile and ultimately curate the data that these projects rely on.”

Alongside the product update, Anomalo announced a $10 million extension of its Series B funding first announced on Jan. 23, bringing the round up to $82 million.

Why data quality matters for enterprise AI

Unlike traditional structured data quality concerns, unstructured content presents unique challenges for AI applications.

“Because it’s unstructured data, anything could be in there,” Shmukler emphasized. “It could be personally identifiable information, people’s emails, names, social security numbers… there could be proprietary secret information in those documents that maybe you don’t want to send to the large language models.”

The Anomalo platform addresses these challenges by adding structured metadata to unstructured documents. That enables organizations to better understand and control their data before it reaches AI models.

The Anomalo software provides the following key features for unstructured data quality:

Custom issue definition: Allows users to define their own issues to detect in document collections, beyond the pre-defined issues like personally identifiable information (PII) or abusive content.

Support for private cloud models: Enables enterprises to use large language models (LLMs) deployed in their own cloud provider environments, providing more control and comfort over their data.

Metadata tagging: Adds structured metadata to unstructured documents, such as information about detected issues, to enable better curation and filtering of the data for gen AI applications (see the sketch after this list).

Redaction: An upcoming feature that will allow the software to provide redacted versions of documents, removing sensitive information.
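To picture what this looks like in practice, here is a minimal, hypothetical sketch of the general pattern the feature list describes: scan a raw document for issues, attach structured metadata about what was found and optionally produce a redacted copy. The regex detectors, function names and metadata fields below are illustrative assumptions, not Anomalo’s actual API; a production system would rely on far more sophisticated, model-driven checks.

```python
import re
from dataclasses import dataclass, field

# Hypothetical issue detectors -- simple regexes stand in for the model-based
# checks a production data quality tool would run.
ISSUE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

@dataclass
class TaggedDocument:
    doc_id: str
    text: str
    # Structured metadata layered on top of the unstructured text.
    metadata: dict = field(default_factory=dict)

def tag_document(doc_id: str, text: str) -> TaggedDocument:
    """Attach issue metadata to a raw document before it enters any AI pipeline."""
    issues = [name for name, pattern in ISSUE_PATTERNS.items() if pattern.search(text)]
    metadata = {
        "detected_issues": issues,
        "contains_pii": bool(issues),
        "char_count": len(text),
    }
    return TaggedDocument(doc_id=doc_id, text=text, metadata=metadata)

def redact(doc: TaggedDocument) -> str:
    """Produce a redacted copy, masking any spans flagged as sensitive."""
    redacted = doc.text
    for pattern in ISSUE_PATTERNS.values():
        redacted = pattern.sub("[REDACTED]", redacted)
    return redacted

if __name__ == "__main__":
    doc = tag_document("hr-policy-001", "Contact jane.doe@example.com, SSN 123-45-6789.")
    print(doc.metadata)  # {'detected_issues': ['email', 'ssn'], 'contains_pii': True, ...}
    print(redact(doc))   # Contact [REDACTED], SSN [REDACTED].
```

The point of the sketch is the shape of the output: a small block of structured metadata riding alongside each unstructured document, which downstream tools can query without re-reading the raw text.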

Competitive differentiation in an emerging market for unstructured data quality

Anomalo isn’t alone in the unstructured data quality market, just as it wasn’t alone in structured data quality.

Multiple data quality vendors, including Monte Carlo Data, Collibra and Qlik, offer various forms of unstructured data quality technology. Shmukler sees several ways in which his company differentiates itself.

He noted that some of the other vendors are approaching unstructured data quality by integrating with and monitoring the vector databases that hold the data powering a retrieval augmented generation (RAG) workflow. Shmukler explained that this approach requires a pipeline to already be set up to send the appropriate data into the vector database. He added that it also restricts applications to the traditional RAG approach, rather than newer approaches such as large-context models, which may not even require a vector database.

“Anomalo is different in that we analyze the raw unstructured data collections, before any pipeline has been set up to ingest such data,” Shmukler said. “This allows for broader exploration of all the available data before committing to building a pipeline and also opens up all possible approaches to using this data beyond traditional RAG techniques.”

How Anomalo’s monitoring fits into enterprise AI deployments

The Anomalo platform can accelerate various aspects of enterprise AI deployments.

Shmukler noted that teams can integrate data quality monitoring into the data preparation phase, before sending any data to a model or vector database. Fundamentally, Anomalo provides a layer of structure, in the form of metadata, on top of unstructured data. Enterprises can use that structured metadata to ensure high-quality, issue-free data when training or fine-tuning gen AI models.

Anomalo’s data quality monitoring can also integrate with the data pipelines that feed into RAG. In the RAG use case, unstructured data is ingested into vector databases for retrieval. The metadata can be used to filter, rank and curate that data, ensuring the quality of the information used to generate outputs.
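One way to picture that integration is a thin curation step sitting between tagged documents and the embedding or ingestion stage of a RAG pipeline. The sketch below is a hypothetical illustration under assumed field names (contains_pii, near_duplicate, quality_score); it is not Anomalo’s schema or integration code.

```python
from typing import Iterable

def curate_for_rag(tagged_docs: Iterable[dict]) -> list[dict]:
    """Keep only documents whose metadata marks them as safe and useful for RAG."""
    curated = []
    for doc in tagged_docs:
        meta = doc["metadata"]
        if meta.get("contains_pii"):
            continue  # never index sensitive content
        if meta.get("near_duplicate"):
            continue  # skip redundant documents that would dilute retrieval
        curated.append(doc)
    # Rank the survivors so higher-quality documents are chunked and embedded first.
    return sorted(curated, key=lambda d: d["metadata"].get("quality_score", 0.0), reverse=True)

docs = [
    {"doc_id": "a", "text": "...", "metadata": {"contains_pii": True}},
    {"doc_id": "b", "text": "...", "metadata": {"contains_pii": False, "quality_score": 0.9}},
]
print([d["doc_id"] for d in curate_for_rag(docs)])  # ['b']
```

In a real pipeline, the curated list would then be chunked, embedded and written to the vector database, with the metadata optionally stored alongside the vectors for retrieval-time filtering.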

Another core area where Shmukler sees the impact of data quality monitoring is compliance and risk mitigation. Anomalo’s data tagging helps enterprises prevent gen AI applications from exposing sensitive information and violating compliance requirements.

“Every enterprise is worried about LLMs answering with data that they shouldn’t have, revealing sensitive information,” Shmukler said. “A big piece of this as well is just being able to sleep better at night, while building your gen AI applications, knowing that it’s much, much less likely that any sensitive data or any data that you don’t want the LLM to know about, will actually make it to the LLM.”
