
Why Apple is taking a small model approach to generative AI

Among the biggest questions surrounding models like ChatGPT, Gemini, and Midjourney since their launch is what role (if any) they will play in our daily lives. It’s something Apple is striving to answer with its own vision for the category, Apple Intelligence, which was officially presented this week at WWDC 2024.

The company led Monday’s presentation with flash; that’s how keynotes work. When Senior Vice President Craig Federighi wasn’t skydiving or practicing parkour with the help of some Hollywood (well, Cupertino) magic, Apple was determined to prove that its in-house models were just as capable as the competition’s.

The jury is still out on that question, as the beta versions only arrived Monday, but the company has since revealed some of what makes its approach to generative AI different. First and foremost is scope. Many of the industry’s top companies take a “bigger is better” approach to their models: the goal of these systems is to serve as a kind of one-stop shop for the world’s information.

Apple’s approach to the category, on the other hand, is more pragmatic. Apple Intelligence is a bespoke take on generative AI, built specifically around the company’s various operating systems. It’s a very Apple approach in that it prioritizes a frictionless user experience above all.

Apple Intelligence is a branding exercise in one sense, but in another, the company would prefer the generative aspects of AI to blend seamlessly into the operating system. It’s completely fine (or even preferable, actually) if the user has no idea about the underlying technologies that power these systems. That’s how Apple products have always worked.

Keep models small

The key to much of this is building smaller models: training the systems on custom datasets designed specifically for the kinds of functionality required by users of its operating systems. It’s not immediately clear how much the size of these models will affect the black box problem, but Apple believes that, at the very least, more narrowly focused models will increase transparency around why the system makes specific decisions.

Given the relatively limited nature of these models, Apple doesn’t expect much variety when asking the system to, say, summarize a piece of text. Ultimately, though, the variation from one prompt to the next depends on the length of the text being summarized. The operating systems also include a feedback mechanism through which users can report problems with the generative AI system.

While Apple Intelligence is much more focused than larger models, it can cover a spectrum of requests thanks to the inclusion of “adapters,” which are specialized for different tasks and styles. Generally speaking, though, Apple’s approach to model building is not “bigger is better,” as factors like size, speed, and computing power all need to be taken into account, especially with on-device models.

ChatGPT, Gemini and the rest

Opening up to third-party models like OpenAI’s ChatGPT makes sense given the narrow focus of Apple’s models. The company trained its systems specifically for the macOS/iOS experience, so plenty of information will be out of their reach. In cases where the system believes a third-party application would be better suited to provide a response, a system prompt will ask whether you’d like to share that information externally. If you don’t receive a prompt like this, the request is being processed with Apple’s in-house models.

This should work the same with any external models Apple partners with, including Google Gemini. It’s one of the rare cases where the system calls attention to its use of generative AI in this way. The decision was made, in part, to head off privacy concerns, as each company has different standards for collecting and training on user data.

Requiring users to opt in every time takes some of the responsibility off Apple, even if it adds some friction to the process. You can also opt out of third-party platforms system-wide, though doing so limits the amount of data the operating system/Siri can access. You can’t, however, opt out of Apple Intelligence all at once; instead, you’ll have to do it feature by feature.

Private Cloud Compute

What won’t be clear, on the other hand, is whether the system is processing a given query on-device or via a remote server with Private Cloud Compute. Apple’s philosophy is that such disclosures aren’t necessary, as it holds its servers to the same privacy standards as its devices, down to the first-party silicon they run on.

One way to know for sure whether a query is being handled on- or off-device is to disconnect your machine from the internet. If the query requires cloud computing to resolve but the machine can’t find a network, it will return an error noting that it can’t complete the requested action.

Apple isn’t breaking down the specifics of which actions will require cloud-based processing. Several factors are at play there, and the ever-changing nature of these systems means something that requires cloud computing today could be done on-device tomorrow. Nor will on-device computing always be the faster option, as speed is one of the parameters Apple Intelligence considers when determining where to process a prompt.

There are, however, certain operations that will always be performed on-device. The most notable of the bunch is Image Playground, as the full diffusion model is stored locally. Apple tweaked the model so that it generates images in three house styles: animation, illustration, and sketch. The animation style looks quite similar to the house style of another company founded by Steve Jobs. Similarly, text generation is currently available in three styles: friendly, professional, and concise.

Even at this early beta stage, Image Playground generation is impressively fast, often taking only a couple of seconds. As for the question of inclusion when generating images of people, the system requires you to enter specific details rather than just guessing at things like ethnicity.

How Apple will handle data sets

Apple’s models are trained on a combination of licensed datasets and publicly accessible information crawled from the web. The latter is accomplished with Applebot. The company’s web crawler has been around for some time, providing contextual data to apps like Spotlight, Siri, and Safari. The crawler offers an opt-out for publishers.

“With Applebot-Extended,” Apple notes, “web publishers can choose to opt out of their website content being used to train Apple’s foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools.”

This is accomplished with a directive inside the site’s code. With the arrival of Apple Intelligence, the company has introduced a second directive, which allows sites to be included in search results but excluded from generative AI model training.
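Publisher opt-outs of this kind are typically declared in a site’s robots.txt file. A minimal sketch of what that split looks like, assuming the standard `Applebot` and `Applebot-Extended` user-agent names:

```
# robots.txt — allow standard Applebot crawling
# (powers search-related features like Spotlight, Siri, and Safari)
User-agent: Applebot
Allow: /

# ...but opt this site's content out of generative AI model training
User-agent: Applebot-Extended
Disallow: /
```

With this configuration, a site remains discoverable through Apple’s search-adjacent features while declining to contribute its content to foundation model training.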

Responsible AI

Apple published a whitepaper on the first day of WWDC titled “Introducing Apple’s On-Device and Server Foundation Models.” Among other things, it highlights the principles governing the company’s AI models. Apple calls out four in particular:

  1. “Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.”
  2. “Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.”
  3. “Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.”
  4. “Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.”

Apple’s bespoke approach to foundation models allows the system to be tailored specifically to the user experience. It’s the UX-first approach the company has applied since the arrival of the first Mac. Offering as seamless an experience as possible serves the user, but it shouldn’t come at the expense of privacy.

This will be a tricky balancing act the company will have to navigate as the current crop of operating system betas reaches general availability this year. The ideal approach is to offer as much (or as little) information as the end user wants. There will certainly be plenty of people who don’t care, say, whether a query is executed on the machine or in the cloud; they’ll be content to let the system default to whatever is most accurate and efficient.

For privacy advocates and others interested in those details, Apple should strive for as much user transparency as possible, not to mention transparency for publishers who might prefer their content not be used to train these models. There are certain aspects where the black box problem is currently unavoidable, but where transparency can be offered, it should be available at users’ request.
