Apple’s efforts to stop AI hallucinations are revealed in newly discovered Apple Intelligence prompts.

Apple’s AI prompts provide significant insight into Apple Intelligence.
Apple’s first developer beta of macOS 15.1 includes detailed AI prompts and instructions. Here’s what we can learn.

Apple released beta versions of iOS 18.1, iPadOS 18.1, and macOS 15.1 on July 29, allowing developers to test certain Apple Intelligence features. Apple Intelligence, the company’s AI initiative, uses large language models (LLMs) for tasks such as image generation and text modification. It lets users create images with Image Playground and receive summaries of emails, notifications, and other types of text. The software can also generate so-called Smart Replies that make it easier to respond to emails and messages.

Apple’s latest OS updates, iOS 18, iPadOS 18, and macOS Sequoia, include a large language model that makes features like these possible. The software uses prompts, meaning commands and instructions, to create images and modify text. Some of these prompts come from the user, who can ask for text to be rewritten in a certain tone or adjusted in a certain way. Other prompts that guide the AI are baked into the operating systems.

Apple Intelligence prompts and how they guide AI

AppleInsider’s exclusive report on Project BlackPearl was the first to reveal Apple’s predefined AI prompts. We obtained the Apple Intelligence prompts from people familiar with the matter before Apple Intelligence was officially announced at WWDC.

Apple’s predefined AI prompts are used to implement features such as email summaries

In our initial report, we paraphrased Apple’s AI instructions and explained how the company directs its AI software, especially the Ajax LLM. We provided an outline of the summarization-related prompts and an analysis of their overall significance.

Apple’s summarization instructions begin by stating that the AI will assume the role of an expert when creating summaries for a particular type of text.
The AI is instructed to maintain this role and to limit its response to a predefined length of 10 words, 20 words, or three sentences, depending on the level of summarization required. Apple’s message summary prompt, for example, reads: “You have a lot of experience summarizing messages. You prefer to use clauses rather than complete sentences. Answering questions within messages is not allowed. Please limit your summary to 10 words. If you do not follow this rule, it won’t be helpful.”

When summarizing messages and notifications, Apple’s AI software is instructed to focus on important details that are relevant to the user, such as names, places, and dates. The generative AI also has to identify a common theme across all notifications.

These prompts were created several months before Apple Intelligence launched in late July, but they are still visible in the first developer betas of macOS 15.1. A Reddit user noted that the operating system contains even more AI prompts.

Apple’s prompts reveal a great deal about the problems Apple anticipated. They also spell out what the AI software should avoid when creating a text response or an image. In particular, Apple’s prompts instruct the AI to avoid hallucinating and to avoid generating objectionable content.

Hallucination is a common problem for AI software. It occurs when generative AI creates false information and confidently presents it as fact, even when the software has no basis for the claim.

Apple has implemented multiple checks to prevent Image Playground from creating objectionable or copyrighted material

Apple’s anti-hallucination instruction can be seen in the prompt for Writing Tools: You are an assistant that helps the user reply to their emails. A draft response is provided for a given mail based on a small reply snippet. To make the draft more complete and nicer, a series of questions and their answers are provided.
Please write a concise, natural response by modifying the draft to include the questions and answers. Please limit your response to 50 words. Do not hallucinate.

These instructions are intended to protect Apple Intelligence users. Apple created these prompts to prevent its AI software from providing factually inaccurate information to anyone who uses its AI features.

In addition to addressing hallucination, Apple also prevents its AI software from generating objectionable material. Such restrictions are in place for the Memories feature of the Photos app, where Apple’s prompt reads: “Do not create content that is negative, sad, provocative, religious, political, harmful or violent.”

Apple has implemented multiple checks to ensure that its image-generation software does not produce objectionable or copyrighted material. According to people familiar with the matter, Apple has always wanted its AI software to refuse to generate this kind of content. Apple’s internal AI test tools refuse to generate responses when a user’s prompt contains offensive language.

What does this all mean?

Apple’s AI prompts aim to reduce the likelihood of hallucination, but they do not guarantee that inappropriate content will never be generated. There are still ways for users to manipulate the prompts, so there is no certainty that the AI will never hallucinate or produce objectionable content. Even so, Apple has taken great care to create AI software that is safe for all users.

Apple Intelligence is designed to provide AI features with tangible benefits, such as AI-generated images and AI-summarized text. It will be available to US English users in 2024. Due to regulatory issues, other regions such as the EU and China may not receive these features as quickly.
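The pattern described above, a baked-in system prompt plus a refusal check on user input, can be sketched in a few lines. This is an illustrative assumption, not Apple's actual implementation: the Ajax LLM pipeline is private, and the blocklist here is a hypothetical stand-in for Apple's internal safety checks. The system-prompt wording is taken from the leaked message summary prompt quoted earlier.

```python
# Illustrative sketch only -- Apple's Ajax LLM pipeline is not public.
# A predefined system prompt is prepended to the user's text, and input
# that trips a safety check is refused before it reaches the model.

SUMMARY_SYSTEM_PROMPT = (
    "You have a lot of experience summarizing messages. "
    "You prefer to use clauses rather than complete sentences. "
    "Answering questions within messages is not allowed. "
    "Please limit your summary to 10 words."
)

# Hypothetical blocklist standing in for Apple's internal
# offensive-language checks; the real mechanism is unknown.
BLOCKED_TERMS = {"offensive_term"}

def build_request(user_text: str) -> list[dict]:
    """Refuse flagged input; otherwise wrap it with the baked-in system prompt."""
    lowered = user_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("Request refused: input contains blocked language.")
    return [
        {"role": "system", "content": SUMMARY_SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# The model never sees user text without the system prompt in front of it.
messages = build_request("Dinner moved to 7pm, can you bring dessert?")
```

Because the system prompt is attached on every call, the model's behavior is constrained no matter what the user types, and the refusal branch mirrors the way Apple's internal test tools reportedly decline prompts containing offensive language.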
