Resolved · Ethics

HackingButLegal Announces Disinformation-to-Deepfake Conversion Tool

AI-Analyzed: Analysis generated by Gemini, reviewed editorially.

Why It Matters

The incident highlights the weaponization of generative AI for harassment and the lack of ethical safeguards in small-scale behavioral analysis tools. It raises critical questions about consent and the legality of using adversarial data for synthetic media generation.

Key Points

  • Security researcher HackingButLegal announced that the likenesses of online critics will be used as promotional material for the KinexisAI tool.
  • KinexisAI is described as a platform combining deepfake generation with behavioral analysis capabilities.
  • The move has sparked intense debate regarding the ethical boundaries of using non-consensual data for AI training and marketing.
  • Legal analysts are questioning whether this policy violates biometric privacy laws or standard social media terms of service.
  • The controversy highlights an emerging trend of 'retaliatory AI' where individuals use generative tools for personal vendettas.

The security researcher known as HackingButLegal sparked significant controversy on March 20, 2026, by announcing a policy of converting online critics into synthetic training data. According to a post on social media, individuals who promote 'harmful lies' or 'toxic disinformation' regarding the developer will be used as the basis for advertisements for KinexisAI, a new deepfake and behavioral analysis platform. This development marks an escalation in the use of AI for personal retaliation within the tech community. Legal experts have noted that the move may violate several privacy regulations and terms of service regarding non-consensual synthetic media. The KinexisAI tool claims to integrate deepfake generation with behavioral metrics, though the full technical capabilities remain unverified. Critics argue that using personal data to create deceptive advertisements without consent constitutes a breach of ethics and potentially illegal harassment.

Imagine if every time you got into a fight with someone online, they turned your face and voice into an AI puppet to sell their products. That is exactly what HackingButLegal is threatening to do with a new tool called KinexisAI. Instead of just blocking people who spread lies, this researcher plans to use their likeness for 'effective advertising' for their deepfake software. It is basically turning online drama into a high-tech weapon. While some see it as a clever way to handle trolls, most people are worried about the scary precedent this sets for digital harassment and consent.

Sides

Critics

Privacy Advocates

Argue that creating deepfakes of individuals without consent for retaliatory purposes is a dangerous breach of ethics.

Defenders

HackingButLegal

Claims that using disinformation spreaders as AI training data for advertisements is a valid response to toxic behavior.

@KinexisAI

The tool being promoted as a solution for deepfake generation and behavioral tracking.


Noise Level

Quiet (2). The Noise Score (0–100) measures how loud a controversy is: a composite of reach, engagement, star power, cross-platform spread, polarity, duration, and industry impact, with 7-day decay.

Decay: 5%
Reach: 44
Engagement: 8
Star Power: 15
Duration: 100
Cross-Platform: 20
Polarity: 85
Industry Impact: 65

Forecast

AI Analysis — Possible Scenarios

Regulatory bodies and social media platforms are likely to investigate the legality of KinexisAI's data sourcing methods in the coming weeks. If the developer follows through with creating non-consensual advertisements, the move will likely lead to account suspensions and potential civil litigation over personality rights.

Based on current signals. Events may develop differently.

Timeline

  1. Policy Announcement

    HackingButLegal tweets that critics will be 'converted to effective advertising' for KinexisAI.