Cupertino, California – Apple’s artificial intelligence-powered notification summarization feature on iPhones has come under scrutiny for generating inaccurate and misleading news alerts, sparking concerns about its potential to spread misinformation.
The issue came to light last week when the AI feature inaccurately summarized notifications from the BBC News app. In one case, it falsely claimed British darts player Luke Littler had won the PDC World Darts Championship, a day before the tournament’s actual final, which Littler went on to win. Hours later, another notification incorrectly claimed that tennis legend Rafael Nadal had come out as gay.
The incidents have prompted criticism of Apple Intelligence, the tech giant’s AI system, which is currently in beta. The BBC revealed that it had been urging Apple to address the problem for over a month. In December, the broadcaster reported another incident in which the AI-generated headline falsely stated that Luigi Mangione, a suspect in the murder of UnitedHealthcare CEO Brian Thompson, had shot himself — an event that never occurred.
Apple told the BBC on Monday that it is working on an update to resolve the issue. The update will include a clarification to indicate when text displayed in notifications has been generated by Apple Intelligence, rather than appearing as if sourced directly from news outlets.
“Apple Intelligence features are in beta, and we are continuously making improvements with the help of user feedback,” the company said in a statement. Apple also encouraged users to report concerns if they encounter unexpected or inaccurate notifications.
The BBC is not the only media organisation affected. In November, the feature incorrectly claimed Israeli Prime Minister Benjamin Netanyahu had been arrested. The error was flagged on Bluesky by Ken Schwencke, a senior editor at ProPublica.
Apple’s AI notification summaries aim to consolidate and rewrite news app notifications into brief, digestible updates. However, this has led to what experts call “hallucinations” — instances where AI generates false or misleading information with unwarranted confidence.
Ben Wood, chief analyst at CCS Insight, noted the broader challenges posed by generative AI technology. “We’ve already seen numerous examples of AI services confidently telling mistruths, so-called ‘hallucinations.’ Apple’s attempt to compress content into short summaries has compounded the issue, creating erroneous messages,” Wood said.
Apple’s rivals in the tech industry are closely watching how the company addresses the issue. Apple has promised a fix “in the coming weeks.”
Generative AI systems, like Apple’s, rely on large language models trained on vast datasets to generate responses. When uncertain, these systems can still produce confidently inaccurate results, further fueling concerns about their reliability in handling sensitive or factual information.
The incidents highlight the risks of deploying AI technologies for public-facing applications without adequate safeguards, and Apple faces mounting pressure to restore trust in its AI-driven features.
All rights reserved. This material, and other digital content on this website, may not be reproduced, published, broadcast, rewritten or redistributed in whole or in part without written permission from CONVERSEER. Read our Terms Of Use.