On Wednesday, OpenAI introduced o3-pro, its most powerful reasoning model available in ChatGPT to date. The launch follows the release of the standalone o3 and o4-mini models in April 2025. While o3-pro is built on the same base model as o3, it runs in a high-compute mode, using more processing power and longer reasoning time to tackle more complex tasks.
According to OpenAI, o3-pro delivers stronger performance than the standard o3 model across several domains, including science, education, programming, data analysis, and writing. In evaluations, expert testers consistently favored o3-pro’s responses over those of o3, highlighting its improved output quality.
Designed with a focus on reasoning, o3-pro performs exceptionally well in subjects such as math, science, and programming. It scored 93% on the AIME 2024 exam, 84% on the GPQA Diamond benchmark, and earned a rating of 2,748 on Codeforces. In each of these tests, o3-pro outperformed the base o3 model, largely thanks to its use of increased compute at test time.
What’s particularly notable is that o3-pro comes equipped with a suite of tools within ChatGPT, including web browsing, file and visual analysis, a Python interpreter, memory, and more. It’s also replacing the previous o1-pro model and is currently being rolled out to ChatGPT Pro and Team subscribers. Unfortunately, users on the ChatGPT Plus plan won’t have access to this high-compute mode.
Enterprise and Edu users will receive access to o3-pro in the coming week. Beyond ChatGPT, the model is also offered through the API, where pricing is surprisingly competitive at $20 per million input tokens and $80 per million output tokens. It supports a context window of up to 200,000 tokens, with a knowledge cutoff of June 1, 2024.
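For developers who want to experiment with the model outside ChatGPT, the minimal sketch below shows what a request and a rough cost estimate might look like. It assumes the official openai Python SDK, access via the Responses API, and "o3-pro" as the model identifier; check OpenAI's current documentation for the exact model name and availability on your account.

```python
# Minimal sketch: calling o3-pro through the OpenAI Responses API.
# Assumes the official openai SDK (pip install openai) and that the API
# exposes the model as "o3-pro" -- verify both against OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",
    input="Prove that the sum of two even integers is even.",
)

print(response.output_text)

# Rough cost estimate from the quoted pricing:
# $20 per 1M input tokens, $80 per 1M output tokens.
usage = response.usage
cost = (usage.input_tokens * 20 + usage.output_tokens * 80) / 1_000_000
print(f"Approximate cost: ${cost:.4f}")
```

Because o3-pro spends more time reasoning before it answers, requests can take noticeably longer and consume more output tokens than the standard o3 model, which is worth factoring into any cost estimate like the one above.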