The Hidden Dangers of AI “Nudify” Apps and the GenNomis Data Breach
Artificial intelligence has brought remarkable advances, but innovation has also invited misuse, most visibly in the rise of “nudify” apps. These tools use AI to fabricate nude images by digitally stripping clothing from photos, usually without the subject’s knowledge or consent. This disturbing trend not only violates individual privacy but also raises serious ethical and legal questions.
A striking example of this danger came to light in March 2025, when a cybersecurity researcher discovered an unprotected database operated by GenNomis, a product of the South Korean firm AI-NOMIS. The 47.8 GB dataset contained more than 93,000 AI-generated adult images, some of which appeared to depict minors. Alongside the images, the database held JSON files pairing the text commands users had submitted with links to the resulting images, exposing the mechanics behind the image generation. The breach underscored not only weak data protection practices but also the disturbing misuse of AI to generate non-consensual adult content.
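To make the researcher’s description concrete, the short sketch below shows the general shape such a record might take and how trivially it can be read once a database is left unprotected. The field names and values here are hypothetical illustrations only; the actual GenNomis schema has not been published.

```python
import json

# Hypothetical illustration of the kind of record the researcher described:
# a JSON document pairing a user's text command (prompt) with a link to the
# generated image. The field names "prompt", "image_url", and "created_at"
# are assumptions for this sketch, not the real GenNomis schema.
sample_record = """
{
    "prompt": "[text command submitted by a user]",
    "image_url": "https://storage.example.com/generated/abc123.png",
    "created_at": "2025-03-01T12:00:00Z"
}
"""

record = json.loads(sample_record)

# With no authentication in front of the database, anyone who finds it can
# read both the instruction and the location of the output it produced.
print(record["prompt"], "->", record["image_url"])
```

Storing prompts and output links together is convenient for an application, but it means a single exposed database reveals the entire generation pipeline: who asked for what, and where the result lives.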
The widespread availability of “nudify” tools has opened the door to harassment, blackmail, and emotional harm, with women and minors most often targeted. Victims can suffer lasting reputational and psychological damage from the distribution of these fabricated images. Unfortunately, legislation in many countries has not kept pace with the technology: while some laws address the non-consensual sharing of intimate content, they often do not cover synthetic, AI-generated images, leaving gaps that let creators avoid accountability.
Combating this growing problem requires a combination of legal, technological, and social measures. Lawmakers must revise or introduce legislation that squarely addresses synthetic explicit imagery. Tech companies should strengthen data protection practices and enforce ethical standards for AI development. Equally important is raising public awareness of the dangers these tools pose and promoting responsible digital conduct. As AI technology evolves, so must the frameworks we use to ensure it is not weaponized against individuals.