Big Tech Faces Criticism Over 'Deepfake Nude' App Revenue
Why It Matters
This controversy highlights the gap between corporate safety policies and actual enforcement, raising questions about platform accountability and the monetization of non-consensual AI imagery.
Key Points
- A Bloomberg investigation revealed that Apple and Google hosted AI apps generating non-consensual nude imagery despite official bans.
- The apps in question reportedly reached 483 million downloads and generated $122 million in total revenue.
- Critics highlight a double standard where these platforms penalize competitors like Elon Musk's Grok while profiting from more explicit tools.
- The controversy centers on the failure of automated and human app review processes to catch generative AI policy violations.
- The financial data suggests that 'undressing' AI has become a significant and lucrative niche within the mobile app economy.
Apple and Alphabet-owned Google are facing scrutiny following a Bloomberg report alleging that both companies hosted numerous mobile applications designed to generate non-consensual deepfake nude images. Despite existing platform policies that explicitly prohibit such content, the report indicates these applications garnered approximately 483 million downloads and generated an estimated $122 million in revenue. Critics argue that the tech giants have prioritized profit over policy enforcement, allowing sexually explicit AI tools to flourish within their ecosystems. The findings have intensified the debate over the responsibility of app store gatekeepers to police AI-generated content. While both companies have historically maintained strict standards for app approval, the sheer volume of downloads suggests a systemic failure in current moderation workflows for generative AI software.
It turns out Apple and Google have been making money from apps they officially say aren't allowed. A new report shows that 'deepfake nude' apps—which use AI to undress people in photos without their consent—have been downloaded nearly half a billion times from their stores. Even though both companies have rules against this content, the apps have reportedly generated over $100 million in revenue, a share of which flows to the platforms through their standard commissions. It's like a nightclub claiming to have a strict 'no weapons' policy while selling switchblades at the bar. People are now calling the companies out as hypocrites, especially since they often lecture others about AI safety.
Sides
Critics
Accuse the companies of hypocrisy for profiting from harmful AI while criticizing other AI platforms like Grok.
Defenders
Apple maintains that it prohibits apps that generate defamatory or pornographic content, though its enforcement is now being questioned.
Google claims to have strict Play Store policies against non-consensual sexual content and harmful AI applications.
Neutral
Bloomberg conducted the investigative reporting that exposed the download and revenue figures for these apps.
Forecast
Regulatory pressure will likely increase as lawmakers use these findings to push for stricter platform liability laws regarding AI-generated harm. Apple and Google will likely conduct a massive 'purge' of their app stores in the coming weeks to mitigate reputational damage.
Timeline
Social Media Backlash Mounts
Commentators highlight the disparity between platform rules and the $122 million in revenue generated.
Bloomberg Investigation Published
A report is released detailing the massive scale and revenue of deepfake nude apps on major mobile platforms.