
Databricks Has a Trick That Lets AI Models Improve Themselves

Databricks, a company that helps big businesses build custom artificial intelligence models, has developed a machine-learning trick that can boost the performance of an AI model without the need for clean labeled data.

Jonathan Frankle, chief AI scientist at Databricks, spent the past year talking to customers about the key challenges they face in getting AI to work reliably.

The problem, Frankle says, is dirty data.

“Everybody has some data, and has an idea of what they want to do,” Frankle says. But the lack of clean data makes it challenging to fine-tune a model to perform a specific task. “Nobody shows up with nice, clean fine-tuning data that you can stick into a prompt or an [application programming interface]” for a model.

Databricks’ model could allow companies to eventually deploy their own agents to perform tasks, without data quality standing in the way.

The technique offers a rare look at some of the key tricks that engineers are now using to improve the abilities of advanced AI models, especially when good data is hard to come by. The method leverages ideas that have helped produce advanced reasoning models by combining reinforcement learning, a way for AI models to improve through practice, with “synthetic,” or AI-generated, training data.

The latest models from OpenAI, Google, and DeepSeek all rely heavily on reinforcement learning as well as synthetic training data. WIRED revealed that Nvidia plans to acquire Gretel, a company that specializes in synthetic data. “We’re all navigating this space,” Frankle says.

The Databricks method exploits the fact that, given enough tries, even a weak model can score well on a given task or benchmark. Researchers call this method of boosting a model’s performance “best-of-N.” Databricks trained a model to predict which best-of-N result human testers would prefer, based on examples. The Databricks reward model, or DBRM, can then be used to improve the performance of other models without the need for further labeled data.
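To make the best-of-N idea concrete, here is a minimal Python sketch. The function names (generate, reward_score, best_of_n) and the random stand-ins are illustrative placeholders, not Databricks’ actual models or API: in practice, generate would sample from a base LLM and reward_score would call a reward model such as DBRM.

```python
# Minimal sketch of "best-of-N" selection with a learned reward model.
# All functions here are illustrative stand-ins, not Databricks' code.

import random

random.seed(0)

def generate(prompt: str) -> str:
    """Stand-in for sampling one candidate answer from a base model."""
    return f"candidate answer {random.randint(0, 9)} for: {prompt}"

def reward_score(prompt: str, answer: str) -> float:
    """Stand-in for a reward model (like DBRM) trained on examples to
    predict which answer human testers would prefer."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n candidates and keep the one the reward model rates highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: reward_score(prompt, ans))

if __name__ == "__main__":
    print(best_of_n("Summarize the main revenue drivers in this filing ..."))
```

With enough samples, even a weak base model is likely to produce at least one good candidate; the reward model’s job is simply to pick it out.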

DBRM is then used to select the best outputs from a given model. This creates synthetic training data for further fine-tuning the model so that it produces a better output on the first try. Databricks calls its new approach Test-time Adaptive Optimization, or TAO. “This method we’re talking about uses some relatively lightweight reinforcement learning to basically bake the benefits of best-of-N into the model itself,” Frankle says.
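Continuing the sketch above, the rough shape of that step might look like the following. This is only an illustration of the idea as described in the article, assuming the best_of_n helper from the previous snippet; build_synthetic_dataset and fine_tune are hypothetical names, not Databricks’ implementation.

```python
# Sketch of the TAO idea: pair each unlabeled prompt with the reward
# model's preferred answer, then use those pairs as synthetic data for a
# lightweight fine-tuning / reinforcement-learning pass.
# Reuses best_of_n() from the previous sketch; all names are illustrative.

def build_synthetic_dataset(prompts: list[str], n: int = 8) -> list[tuple[str, str]]:
    """Pair each prompt with the reward model's preferred answer."""
    return [(p, best_of_n(p, n)) for p in prompts]

def fine_tune(model, dataset):
    """Stand-in for the lightweight RL / fine-tuning step that bakes the
    best-of-N behavior into the model itself."""
    raise NotImplementedError("illustration only")

unlabeled_prompts = [
    "Flag unusual line items in this quarterly filing ...",
    "Which contracts mention early-termination penalties?",
]
synthetic_data = build_synthetic_dataset(unlabeled_prompts)
# fine_tune(base_model, synthetic_data)  # done offline, before deployment
```

The key point is that no human-labeled answers are needed: the reward model’s preferences substitute for labels, and the expensive best-of-N sampling happens only during training, not at deployment.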

He adds that the research done by Databricks shows that the TAO method improves as it is scaled up to larger, more capable models. Reinforcement learning and synthetic data are already widely used, but combining them in order to improve language models is a relatively new and technically challenging technique.

Databricks is unusually open about how it develops AI, because it wants to show customers that it has the skills needed to create powerful custom models for them. The company previously revealed to WIRED how it developed DBRX, a cutting-edge open source large language model (LLM), from scratch.

Without well-labeled, carefully curated data, it is challenging to fine-tune an LLM to do specific tasks more effectively, such as analyzing financial reports or health records to find patterns or identify problems. Many companies now hope to use LLMs to automate tasks with so-called agents.

An agent used in finance might, for example, analyze a company’s key performance metrics, then generate a report and automatically send it to different analysts. One used in health insurance might help guide customers toward information about a relevant drug or condition.

Databricks tested the TAO approach on FinanceBench, a benchmark that tests how well language models answer financial questions. On this benchmark, Llama 3.1B, the smallest of Meta’s free AI models, scores 68.4 percent compared to 82.1 percent for OpenAI’s proprietary GPT-4o and o3-mini models. Using the TAO technique, Databricks got Llama 3.1B to score 82.8 percent on FinanceBench, surpassing OpenAI’s models.

“The general idea is very promising,” says Christopher Amato, a computer scientist at Northeastern University who works on reinforcement learning. “I do completely agree that the lack of good training data is a big problem.”

Amato says that many companies are now searching for ways to train AI models with synthetic data and reinforcement learning. The TAO method “is very promising, as it could allow much more scalable data labeling and even improved performance over time as the models get stronger and the labels get better over time,” he says.

Amato adds, however, that reinforcement learning can sometimes behave in unpredictable ways, meaning that it needs to be used with care.

Frankle says that Databricks is using the TAO technique to boost the performance of customers’ AI models and help them build their first agents. One customer, which makes a health-tracking app, found that the TAO approach allowed it to deploy an AI model that was not previously reliable enough. “You want [the app] to be medically accurate,” he says. “This is a tricky problem.”
