Tech giants Apple and Google are offering numerous AI-powered applications on their app stores that can generate non-consensual nude images from ordinary photographs, according to a new report from industry watchdog Tech Transparency Project. The report comes after a global scandal erupted when Elon Musk’s Grok AI generated sexual deepfakes of women and children and posted them on X.
CNBC reports that a comprehensive review of Apple’s App Store and Google Play Store conducted in January revealed the presence of 55 nudification applications on Google Play and 47 on the Apple App Store, according to findings released by Tech Transparency Project. These applications utilize AI technology to manipulate images of clothed individuals and create explicit content without consent.
The discovery prompted immediate action from Apple after the company was contacted by Tech Transparency Project and CNBC representatives last week. An Apple spokesperson confirmed on Monday that the company had removed 28 applications identified in the watchdog report. The technology giant also issued warnings to developers of additional applications, notifying them of potential removal from the Apple App Store if guideline violations were not properly addressed.
Following the initial removals, two of the applications were subsequently restored to Apple’s platform after their developers submitted revised versions that addressed the company’s guideline concerns, according to an Apple spokesman who spoke with CNBC.
However, Tech Transparency Project disputed the full extent of Apple’s enforcement actions, stating in a follow-up review on Monday that only 24 applications had actually been removed from the Apple App Store, fewer than the 28 initially reported by the company.
Google also took enforcement action following the report’s findings. A Google spokesperson confirmed that the company suspended several applications referenced in the Tech Transparency Project report for violating the Google Play Store’s established policies. The spokesperson emphasized that Google investigates potential violations when they are reported but declined to specify the exact number of applications removed, citing an ongoing investigation into the apps identified by the watchdog organization.
“Both companies say they are dedicated to the safety and security of users, but they host a collection of apps that can turn an innocuous photo of a woman into an abusive, sexualized image,” Tech Transparency Project wrote in its report examining Apple and Google’s hosting practices.
The watchdog organization employed a systematic methodology to identify these problematic applications. Researchers conducted searches using terms such as “nudify” and “undress” to locate relevant apps, then tested the discovered applications using AI-generated images of fully clothed women. The investigation examined two distinct categories of applications: those utilizing artificial intelligence to render images of women without clothing, and “face swap” applications that superimpose the faces of individuals onto existing images of nude women.
“It’s very clear, these are not just ‘change outfit’ apps,” Katie Paul, director of Tech Transparency Project, told CNBC. “These were definitely designed for non-consensual sexualization of people.”
The Tech Transparency Project report arrives in the wake of significant controversy surrounding Elon Musk’s xAI company earlier this month. The xAI Grok artificial intelligence tool faced widespread criticism and backlash after it responded to user prompts by generating sexualized photographs of women and children, raising serious concerns about safeguards and content moderation in AI systems.
The watchdog organization’s research revealed that 14 of the reviewed applications were based in China, according to Tech Transparency Project findings. Paul emphasized that this geographic distribution raises additional security and privacy concerns for users.
“China’s data retention laws mean that the Chinese government has a right to data from any company anywhere in China,” Paul explained. “So if somebody’s making deepfake nudes of you, those are now in the hands of the Chinese government if they use one of those apps.”
Following the controversy surrounding xAI’s Grok artificial intelligence tool, the system acknowledged in a response to one X user that there had been “lapses in safeguards” that it is “urgently fixing.” The European Commission escalated its response on Monday by announcing it had opened a formal investigation into X regarding Grok’s role in spreading sexually explicit content.
Read more at CNBC here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.