Resolved · Ethics

KinexisAI Founder Threatens to Deepfake Critics into Advertisements

AI-Analyzed: Analysis generated by Gemini, reviewed editorially.

Why It Matters

This incident highlights the emerging threat of weaponized synthetic media and the lack of robust protections against non-consensual use of likeness for retaliation. It sets a dangerous precedent for how AI developers might silence dissent through digital identity theft.

Key Points

  • The founder of KinexisAI announced plans to use critics' likenesses in deepfake advertisements.
  • The action is framed as a response to 'toxic disinformation' and 'harmful lies' directed at the developer.
  • KinexisAI is a platform specializing in behavioral analysis and synthetic media generation.
  • Legal experts warn the move likely violates personality rights and harassment laws.
  • The incident has sparked a major debate over the ethical boundaries of AI-assisted retaliation.

The developer behind KinexisAI, known as HackingButLegal, announced on March 20, 2026, a policy of using deepfake technology to retaliate against online critics. The statement claims that individuals who promote what the founder deems 'harmful lies' will have their identities converted into promotional material for the KinexisAI behavioral analysis tool. This move represents a significant escalation in the weaponization of synthetic media for personal and commercial vendettas. Legal experts suggest such actions could violate existing Right of Publicity laws and digital harassment statutes. The controversy has reignited calls for federal regulation of the commercial use of AI-generated likenesses without explicit consent. While KinexisAI's technology is intended for behavioral analysis, this application signals a volatile shift toward using AI as a tool for public intimidation.

The creator of a tool called KinexisAI is threatening to steal the faces and voices of their haters to make ads. If someone talks trash or spreads what the founder calls 'lies' online, the founder says they will use AI to turn those people into digital puppets to promote the product. It is a new and scary way to handle online arguments by using a person's own image against them without permission. This moves beyond simple blocking or arguing into the territory of high-tech identity theft and harassment.

Sides

Critics

Privacy and Digital Rights Advocates

Contend that non-consensual use of likeness for commercial or retaliatory purposes is a gross violation of ethics and existing law.

Defenders

HackingButLegal

Argues that turning critics into advertisements is a justified response to disinformation and serves as a practical demonstration of the tool's power.

@KinexisAI

The organization whose behavioral analysis and deepfake generation technology is being positioned as a weapon against critics.


Noise Level

Quiet (2). Noise Score (0–100) measures how loud a controversy is: a composite of reach, engagement, star power, cross-platform spread, polarity, duration, and industry impact, with 7-day decay. Current decay: 5%.

  • Reach: 44
  • Engagement: 8
  • Star Power: 15
  • Duration: 100
  • Cross-Platform: 20
  • Polarity: 85
  • Industry Impact: 75

Forecast

AI Analysis β€” Possible Scenarios

Social media platforms are likely to suspend the KinexisAI accounts for violating harassment and synthetic media policies. In the near term, this will likely lead to a 'Right of Publicity' lawsuit that could set a legal benchmark for non-consensual commercial deepfakes.

Based on current signals. Events may develop differently.

Timeline

  1. Retaliation threat published

    HackingButLegal tweets intent to convert critics into 'effective advertising' using deepfake technology.