ResolvedEthics

HackingButLegal's Retaliatory KinexisAI Deepfake Strategy

AI-Analyzed: Analysis generated by Gemini, reviewed editorially.

Why It Matters

This case highlights the weaponization of AI tools for personal vendettas and the ethical vacuum surrounding the non-consensual use of likenesses in 'defensive' AI marketing.

Key Points

  • HackingButLegal announced a policy of using critics' data to promote the KinexisAI tool.
  • The strategy involves converting perceived disinformation into deepfake and behavioral analysis demonstrations.
  • KinexisAI is positioned as a tool for deepfake creation and behavioral profiling.
  • The announcement raises significant legal and ethical questions regarding non-consensual synthetic media and digital harassment.

On March 20, 2026, the developer known as HackingButLegal announced a controversial marketing strategy for their AI platform, KinexisAI. The developer stated that individuals spreading 'harmful lies' would have their likenesses and behaviors repurposed as promotional content for the deepfake and behavioral analysis tool. This move signals a pivot toward using synthetic media as a form of digital retaliation against critics and perceived disinformation agents. Legal experts suggest this practice may infringe upon emerging digital identity protections and platform harassment policies. While the developer frames this as an 'effective advertising' strategy, it has drawn immediate scrutiny from ethics watchdogs regarding the boundaries of consent in AI training and demonstration. The incident underscores the growing risk of AI-enabled harassment being masked as technological innovation.

Imagine if a developer decided to punish their critics by turning them into puppets for a commercial. That is exactly what HackingButLegal is threatening to do with their new tool, KinexisAI. They claim that if you spread rumors about them, they will use AI to deepfake you into an advertisement for their behavioral analysis software. It is essentially using high-tech tools to get even with online enemies. This is a big deal because it shows how easily AI can be used for revenge, turning someone’s face and voice against them without their permission.

Sides

Critics

Digital Privacy Advocates

Maintain that the non-consensual use of personal likenesses for deepfakes constitutes harassment and a violation of human rights.

Defenders

HackingButLegal

Argues that using the likenesses of those spreading disinformation is a valid form of promotion and defense for their AI tool.

Neutral

@KinexisAI

The tool being promoted as a behavioral analysis and deepfake platform.


Noise Level

Noise Score: 2 (Quiet)

The Noise Score (0–100) measures how loud a controversy is: a composite of reach, engagement, star power, cross-platform spread, polarity, duration, and industry impact, with 7-day decay.

Decay: 5%

  • Reach: 44
  • Engagement: 8
  • Star Power: 15
  • Duration: 100
  • Cross-Platform: 20
  • Polarity: 85
  • Industry Impact: 60
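As a rough illustration of how a composite score like this could be computed: the sketch below takes a weighted average of the 0–100 component values and applies an exponential time decay. The equal weights and the half-life decay model are assumptions for illustration only; the site's actual formula (including how its stated decay figure is applied) is not published.

```python
# Illustrative sketch of a composite "noise score" with time decay.
# Weights and the decay model are assumptions, not the site's actual formula.

def noise_score(components: dict, weights: dict,
                days_elapsed: float, half_life_days: float = 7.0) -> float:
    """Weighted average of 0-100 component scores, decayed over time."""
    total_weight = sum(weights.values())
    base = sum(components[k] * weights[k] for k in components) / total_weight
    decay = 0.5 ** (days_elapsed / half_life_days)  # halves every 7 days
    return base * decay

components = {"reach": 44, "engagement": 8, "star_power": 15, "duration": 100,
              "cross_platform": 20, "polarity": 85, "industry_impact": 60}
weights = {k: 1.0 for k in components}  # equal weights, purely illustrative

score = noise_score(components, weights, days_elapsed=30)
```

Under these assumed equal weights, a month of decay pushes an initially moderate composite down into single digits, which is consistent with a long-running but currently "quiet" story.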

Forecast

AI Analysis — Possible Scenarios

Regulatory bodies and social media platforms are likely to intervene with account suspensions or legal warnings to prevent the normalization of retaliatory deepfakes. This may spark a broader legislative push to define 'digital likeness' as a protected personal asset.

Based on current signals. Events may develop differently.

Timeline

  1. Policy Announcement

    HackingButLegal tweets that critics spreading 'toxic disinformation' will be converted into ads for KinexisAI.