Apple Makes Its On-Device AI Model Available to Developers


At WWDC 2025 today, Apple introduced the Foundation Models Framework, giving developers access to its on-device AI capabilities. With the new API, developers can add AI-driven features to their apps. The framework runs Apple’s proprietary AI models directly on the device, ensuring user data remains private.
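As a rough illustration, a call into the framework might look like the sketch below. This assumes the `LanguageModelSession` API Apple demonstrated at WWDC 2025 (the `summarize` helper and its prompt are hypothetical), and it requires a device with Apple Intelligence enabled:

```swift
import FoundationModels

// Hypothetical helper: ask the on-device model for a one-sentence summary.
// Inference runs locally, so no network call or per-request cost is involved.
func summarize(_ text: String) async throws -> String {
    // A session holds the conversation context; instructions steer the model.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

Because the session object keeps context across calls, follow-up prompts in the same session can refer back to earlier ones without resending the original text.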

The framework was unveiled by Craig Federighi, Apple's Senior Vice President of Software Engineering, during the keynote.

Thanks to the new Foundation Models API, developers no longer need to depend on third-party providers like OpenAI or Google to add AI features to their apps. A major advantage is that these features work offline: because Apple's AI models run entirely on-device, developers pay no AI inference costs.

That said, Apple has yet to showcase what its in-house models are truly capable of. Last year, the company revealed a 3-billion-parameter on-device AI model whose performance was comparable to models like Google's Gemma-1.1-2B and Microsoft's Phi-3-mini.

It remains uncertain whether Apple is still using the same AI model or if it has developed a more advanced version for its on-device AI system.

Apple also touched on Siri, stating, “We’re continuing our work to deliver the features that make Siri even more personal. This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year.”

It seems the enhanced, AI-driven Siri will arrive next year. In the meantime, Apple Intelligence features are expanding to support more languages, including French, German, Italian, and Spanish.
