
Artificial intelligence (AI) has revolutionized various sectors, but its misuse has led to alarming developments, particularly in the creation of non-consensual explicit content through “nudify” applications. These apps employ AI to digitally remove clothing from images of individuals, generating realistic yet fabricated nude images without their consent. This practice not only violates personal privacy but also poses significant ethical and legal challenges.
A recent incident involving GenNomis, an AI platform operated by the South Korean company AI-NOMIS, underscores the potential dangers of such technologies. In March 2025, cybersecurity researcher Jeremiah Fowler discovered an unsecured database belonging to GenNomis containing 93,485 explicit AI-generated images, some depicting individuals who appeared to be minors. The 47.8 GB database also included JSON files recording the prompts used to create the images, revealing the inner workings of the AI system. The exposure not only highlighted the risks of non-consensual explicit content generation but also raised concerns about the security practices of the companies developing these technologies.
The proliferation of “nudify” apps has fueled a surge in AI-generated explicit images, disproportionately targeting women and minors. These images can cause profound psychological distress and reputational damage, and have been linked to bullying and harassment. Legal frameworks in many jurisdictions have struggled to keep pace, leaving victims with limited avenues for recourse. For instance, while some jurisdictions criminalize the dissemination of non-consensual explicit images, the creation of such content without intent to distribute may fall outside existing laws.
Addressing the challenges posed by AI-driven “nudify” applications requires a multifaceted approach. Legislative bodies need to enact and update laws that specifically target the creation and distribution of non-consensual synthetic intimate imagery. Technology companies must implement robust security measures to prevent unauthorized access to sensitive data and adopt ethical guidelines governing the use of AI in content creation. Public awareness campaigns are also essential to educate people about the harms of these technologies and to promote respectful online behavior. Together, these steps can help mitigate the risks of AI-generated non-consensual explicit content and protect individuals from exploitation and abuse.