Apple Intelligence will likely abide by the new, still-ambiguous federal guidelines governing artificial intelligence.

Apple Intelligence could expand in the future with more AI partners and subscriptions.
Apple, along with other large tech companies, has agreed to a voluntary, non-binding initiative that asks developers to pursue fairness when building artificial intelligence models and to monitor potential security and privacy concerns as development progresses. In a Friday morning announcement, the White House said Apple had agreed to voluntarily adhere to artificial intelligence safeguards crafted by the Biden administration. Alphabet, Amazon, Meta, Microsoft, and OpenAI are among the other big tech companies that have already signed on. The guidelines don't yet set out any hard rules against specific behaviors.

Apple has agreed in principle to six key tenets of the executive order:

Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
Develop standards, tools, and tests to ensure AI systems are safe and secure.
Protect against the risks of using AI to engineer dangerous biological materials.
Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content.
Establish an advanced cybersecurity program that uses AI to find and fix critical software vulnerabilities.
Develop a National Security Memorandum directing further actions on AI and security.

Under the executive order, companies must share the results of compliance testing with one another and with the federal government. The order also calls for voluntary assessments of security risks, with results to be widely shared. There are currently no penalties or enforcement mechanisms for non-compliance. To be eligible for federal purchases, AI systems must be tested before they can be offered in response to any requests for proposals. There is no monitoring of compliance with the terms of the order, and it's unclear whether any enforceable guardrails will follow.

It's also unclear what will happen to the agreement if a new administration takes over. After the initial round of statements, there hasn't been much movement or discussion on AI regulation, and the debate is unlikely to resume before the November 2024 elections.

The executive order, issued in October 2023 and preceding Apple's commitment, came shortly before a multi-national effort to establish safe frameworks for AI. It's still not clear how, or if, the two initiatives are related. The White House will hold a second briefing on the issue on July 26.

 
