December 20, 2024 10:07 AM
OpenAI is gradually inviting selected users to test a new family of reasoning models named o3 and o3-mini, successors to the o1 and o1-mini models that just entered full release earlier this month.
OpenAI o3, so named to avoid a trademark conflict with the British telecommunications provider O2 and because CEO Sam Altman says the company “has a tradition of being truly bad at names,” was announced during the final day of the “12 Days of OpenAI” livestreams today.
Altman said the two new models would initially be released to selected third-party researchers for safety testing, with o3-mini expected by the end of January 2025 and o3 “shortly after that.”
“We view this as the beginning of the next phase of AI, where you can use these models to do increasingly complex tasks that require a lot of reasoning,” Altman said. “For the last day of this event we thought it would be fun to go from one frontier model to the next frontier model.”
The announcement comes just a day after Google unveiled its new Gemini 2.0 Flash Thinking model and made it publicly available, a rival “reasoning” model that, unlike the OpenAI o1 series, lets users see the steps of its “thinking” process documented in text bullet points.
The release of Gemini 2.0 Flash Thinking and now the announcement of o3 show that the competition between OpenAI and Google, and across the wider field of AI model providers, is entering a new and intense phase as companies offer not just LLMs or multimodal models but advanced reasoning models as well. These models are better suited to harder problems in science, mathematics, technology, physics and more.
The best performance yet on third-party benchmarks
Altman also said the o3 model was “incredible at coding,” and the benchmarks shared by OpenAI support that, showing the model exceeding even o1’s performance on programming tasks.
• Exceptional Coding Performance: o3 surpasses o1 by 22.8 percentage points on SWE-Bench Verified and achieves a Codeforces rating of 2727, outperforming OpenAI’s Chief Scientist’s score of 2665.
• Math and Science Mastery: o3 scores 96.7% on the AIME 2024 exam, missing only one question, and achieves 87.7% on GPQA Diamond, far exceeding human expert performance.
• Frontier Benchmarks: The model sets new records on challenging tests like Epoch AI’s FrontierMath benchmark, solving 25.2% of problems where no other model exceeds 2%. On the ARC-AGI test, o3 triples o1’s score and surpasses 85% (as verified live by the ARC Prize team), representing a milestone in conceptual reasoning.
Deliberative alignment
Alongside these advancements, OpenAI reinforced its commitment to safety and alignment.
The company introduced new research on deliberative alignment, a technique instrumental in making o1 its most robust and aligned model to date.
This technique embeds human-written safety specifications into the models, enabling them to explicitly reason about these policies before generating responses.
The strategy seeks to solve common safety challenges in LLMs, such as vulnerability to jailbreak attacks and over-refusal of benign prompts, by equipping the models with chain-of-thought (CoT) reasoning. This process allows the models to recall and apply safety specifications dynamically during inference.
Deliberative alignment improves upon previous methods like reinforcement learning from human feedback (RLHF) and constitutional AI, which rely on safety specifications only for label generation rather than embedding the policies directly into the models.
By fine-tuning LLMs on safety-related prompts and their associated specifications, this approach creates models capable of policy-driven reasoning without relying heavily on human-labeled data.
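To make the pattern concrete, here is a minimal sketch in Python of what policy-conditioned, chain-of-thought inference could look like. The `call_model` function and the three-rule safety specification are placeholders invented for illustration; OpenAI’s actual specification text and training pipeline are not public in this form.

```python
# Hypothetical sketch of deliberative-alignment-style inference.
# `call_model` stands in for any chat-style LLM call, and SAFETY_SPEC
# is an invented example policy, not OpenAI's real specification.
from typing import Callable

SAFETY_SPEC = """\
1. Refuse requests that facilitate serious harm.
2. Answer benign requests fully; do not over-refuse.
3. When refusing, name the specific rule that applies.
"""

def deliberative_answer(user_prompt: str, call_model: Callable[[str], str]) -> str:
    # Step 1: the model reasons explicitly over the written policy
    # (the chain-of-thought step), deciding which rules apply.
    deliberation = call_model(
        "Safety specification:\n" + SAFETY_SPEC
        + "\nUser request:\n" + user_prompt
        + "\nReason step by step: which rules apply, and is this request "
          "benign, harmful, or a jailbreak attempt?"
    )
    # Step 2: the final answer is conditioned on that reasoning, so the
    # policy is applied dynamically at inference time rather than being
    # reflected only in training labels.
    return call_model(
        "Safety specification:\n" + SAFETY_SPEC
        + "\nPolicy reasoning:\n" + deliberation
        + "\nUser request:\n" + user_prompt
        + "\nGive the final response, consistent with the reasoning above."
    )
```

In the approach OpenAI describes, the specification is taught to the model through fine-tuning rather than pasted into every prompt, so at deployment the model recalls the policy from its own weights; the sketch above illustrates only the reasoning pattern, not the training method.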
Results shared by OpenAI researchers in a new, non-peer-reviewed paper indicate that this method enhances performance on safety benchmarks, reduces harmful outputs, and improves adherence to content and style guidelines.
Key findings highlight the o1 model’s advancements over predecessors like GPT-4o and other state-of-the-art models. Deliberative alignment enables the o1 series to excel at resisting jailbreaks and providing safe completions while minimizing over-refusals on benign prompts. Additionally, the method facilitates out-of-distribution generalization, showcasing robustness in multilingual and encoded jailbreak scenarios. These improvements align with OpenAI’s goal of making AI systems safer and more interpretable as their capabilities grow.
This research will also play a key role in aligning o3 and o3-mini, ensuring their capabilities are both powerful and responsible.
How to apply for access to test o3 and o3-mini
Applications for early access are now open on the OpenAI website and will close on January 10, 2025.
Applicants must fill out an online form asking for a variety of information, including their research focus, past experience, and links to previously published papers and their code repositories on GitHub, then select which of the models (o3 or o3-mini) they wish to test and describe what they plan to use them for.
Selected researchers will be granted access to o3 and o3-mini to explore their capabilities and contribute to safety evaluations, though OpenAI’s form cautions that o3 will not be available for several weeks.
Researchers are encouraged to develop robust evaluations, create controlled demonstrations of high-risk capabilities, and test models on scenarios not possible with widely adopted tools.
This initiative builds on the company’s established practices, including rigorous internal safety testing, collaborations with organizations like the U.S. and UK AI Safety Institutes, and its Preparedness Framework.
OpenAI will review applications on a rolling basis, with selections starting immediately.
A new leap forward?
The introduction of o3 and o3-mini signals a leap forward in AI performance, particularly in areas requiring advanced reasoning and problem-solving capabilities.
With their exceptional results on coding, math, and conceptual benchmarks, these models highlight the rapid progress being made in AI research.
By inviting the broader research community to collaborate on safety testing, OpenAI aims to ensure that these capabilities are deployed responsibly.