Apple introduces new open-source AI models that run on devices rather than cloud services

Apple has entered the AI race with its new open-source large language models (LLMs), OpenELM, designed to run on devices rather than through cloud services.

In Short

  • Apple has introduced its new open-source large language models (LLMs)
  • Apple’s OpenELM is designed to run directly on devices rather than through cloud services
  • OpenELM models aim to empower the research community with state-of-the-art language models

Apple has finally stepped into the AI race. The Cupertino-based tech giant has introduced its new open-source large language models (LLMs), OpenELM (Open-source Efficient Language Models), designed to run directly on devices rather than through cloud services. The OpenELM models are currently available on the Hugging Face Hub, a well-known community platform for sharing AI code.

According to the release white paper, Apple’s OpenELM is a suite of eight language models: four pre-trained using the CoreNet library and four instruction-tuned variants. The models use a layer-wise scaling strategy aimed at optimising both accuracy and efficiency.
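The idea behind layer-wise scaling is that, instead of giving every transformer layer the same width, per-layer attention-head counts and feed-forward widths are interpolated across the model's depth. The sketch below illustrates that idea only; it is not Apple's actual code, and the parameter names and ranges (`alpha_min`, `beta_max`, etc.) are assumptions for illustration.

```python
# Illustrative sketch of a layer-wise scaling schedule (not Apple's code):
# attention-head counts and FFN widths grow linearly from the first to
# the last transformer layer instead of being uniform across depth.

def layerwise_scaling(num_layers, d_model, head_dim,
                      alpha_min=0.5, alpha_max=1.0,
                      beta_min=0.5, beta_max=4.0):
    """Return an assumed (num_heads, ffn_dim) pair for each layer."""
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)          # depth fraction, 0.0 .. 1.0
        alpha = alpha_min + t * (alpha_max - alpha_min)   # attention scale
        beta = beta_min + t * (beta_max - beta_min)       # FFN scale
        num_heads = max(1, round(alpha * d_model / head_dim))
        ffn_dim = int(beta * d_model)
        configs.append((num_heads, ffn_dim))
    return configs

# Example: a small 4-layer model with model width 512 and head size 64.
for layer, (heads, ffn) in enumerate(layerwise_scaling(4, 512, 64)):
    print(f"layer {layer}: {heads} heads, FFN dim {ffn}")
```

Under this schedule, early layers are narrower and later layers wider, so the same parameter budget is spent where it contributes more, which is the accuracy-versus-efficiency trade-off the paragraph above describes.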

To set its LLMs apart from competitors that simply offer pre-trained models, Apple has released the entire framework, including code, training logs, and multiple versions.
Apple’s decision to make the OpenELM models open source aims to empower and enrich the research community with state-of-the-art language models. According to Apple, sharing open-source models allows researchers not only to utilise the models but also to delve into their inner workings, enabling faster progress and “more trustworthy results” in the field of natural language AI.

Researchers, developers, and companies can use Apple’s OpenELM models as they are or customise them to suit specific needs. This openness also breaks with previous industry practice, where companies often provided only model weights and inference code without access to the underlying training data or configurations.

Meanwhile, the benefits of Apple’s on-device AI processing are twofold: privacy and efficiency. By keeping data and processing local, OpenELM addresses growing concerns about user privacy and potential cloud server breaches. Additionally, on-device processing eliminates reliance on internet connectivity, enabling AI functionalities even in offline scenarios. Apple emphasises this advantage, highlighting that OpenELM achieves “enhanced accuracy” while requiring fewer resources compared to similar models.

While open-sourcing benefits researchers, it also carries strategic advantages for Apple. The open sharing of information allows Apple to collaborate with the research community, enabling others to contribute to and refine OpenELM. This openness may also attract top talent, including engineers, scientists, and experts, to the company. According to Apple, OpenELM essentially serves as a springboard for further AI advancements, benefiting not only Apple but the entire AI landscape.

Although Apple has not yet introduced these AI capabilities to its devices, the release of iOS 18 is imminent, and rumours are swirling about Apple’s plan to bring on-device AI features with the new OS. Now, with the launch of its own LLMs, it is clear that Apple is laying the groundwork for AI upgrades across its devices, including iPhones, iPads, and Macs. Apple is expected to incorporate its large language models into its devices, enabling more personalised and efficient user experiences. This shift towards on-device processing could help Apple strengthen user privacy while providing developers with readily available, efficient AI tools.
