In partnership with

Wednesday, February 25, 2026 | AI News | National Security | Technology | Entertainment

Good morning, future unemployed person! 👋

What a week to be alive and watching the AI industry eat itself. Anthropic just accused three Chinese AI labs of running a 24,000-account heist on Claude. The Pentagon signed Elon Musk's Grok into classified military systems and is now threatening to brand Anthropic a "supply chain risk" for refusing to remove safety guardrails. ByteDance's Seedance 2.0 has Hollywood filing cease-and-desist letters faster than you can say "intellectual property." And 30,000 tech workers lost their jobs in the first six weeks of 2026, but don't worry: companies are calling it "strategic transformation."

Let's get into it. 💀

🕵️ NATIONAL SECURITY: Anthropic Accuses DeepSeek and Chinese AI Labs of Industrial-Scale Model Theft

TL;DR: Anthropic says DeepSeek, MiniMax, and Moonshot AI created 24,000 fake accounts and ran 16 million exchanges with Claude to steal its capabilities through a process called model distillation.

Anthropic dropped a bombshell on Monday. The company accused three prominent Chinese AI labs of running what it calls "industrial-scale distillation campaigns" against Claude, its flagship AI model. The numbers are staggering: roughly 24,000 fraudulent accounts, more than 16 million exchanges, and techniques specifically designed to extract Claude's most advanced reasoning capabilities.

Here is how each lab operated, according to Anthropic's allegations:

  • DeepSeek generated over 150,000 exchanges targeting reasoning tasks, rubric-based grading suitable for reinforcement learning, and censorship-safe rewrites of politically sensitive queries.

  • Moonshot AI racked up more than 3.4 million exchanges targeting agentic reasoning, coding, and computer vision capabilities.

  • MiniMax focused on agentic coding and tool use, and Anthropic caught them mid-campaign: when Anthropic released a new model during the active attack, MiniMax pivoted within 24 hours and redirected nearly half their traffic to capture capabilities from the latest system.
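If "distillation" is new to you, the core idea is simple: a student model learns by imitating a teacher model's outputs rather than the teacher's weights or training data. Here is a deliberately toy sketch of that data flow. Everything in it (the `teacher_model` and `distill` functions, the canned answers) is hypothetical illustration, not any lab's actual pipeline; real distillation fine-tunes a neural network on millions of harvested responses.

```python
# Toy illustration of model distillation: the attacker never needs the
# teacher's weights or training data, only the ability to send prompts
# and record answers. Here the "student dataset" is just a dict of
# prompt -> response pairs; a real pipeline would fine-tune on them.

def teacher_model(prompt: str) -> str:
    """Stand-in for a capable proprietary model behind an API."""
    canned = {
        "What is 2+2?": "4",
        "Capital of France?": "Paris",
    }
    return canned.get(prompt, "I don't know.")

def distill(prompts: list[str]) -> dict[str, str]:
    """Harvest teacher outputs to build a training set for a student."""
    return {p: teacher_model(p) for p in prompts}

student_data = distill(["What is 2+2?", "Capital of France?"])
print(student_data["Capital of France?"])  # Paris
```

The alleged 24,000 fake accounts exist to make this harvesting look like ordinary API traffic, which is why Anthropic describes the campaigns as "industrial-scale."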

Why You Should Care

National security implications: Anthropic warns that models built through illicit distillation likely lose the safety guardrails baked into Claude. That means dangerous capabilities can proliferate with critical protections stripped away entirely.

Industry impact: OpenAI backed up the claims earlier this month, submitting an open letter to U.S. legislators describing similar distillation activity from DeepSeek targeting its own models. This is not one company's problem. This is a systematic campaign affecting every major Western AI lab.

The irony: As Futurism pointed out, Anthropic built Claude using techniques that involved training on vast amounts of internet data without explicit permission. The company accusing others of copying AI capabilities without consent is, to put it mildly, a rich development.

Sources: CNBC | Bloomberg

Smart starts here.

You don't have to read everything — just the right thing. 1440's daily newsletter distills the day's biggest stories from 100+ sources into one quick, 5-minute read. It's the fastest way to stay sharp, sound informed, and actually understand what's happening in the world. Join 4.5 million readers who start their day the smart way.

🎖️ MILITARY: Pentagon Signs Musk's Grok Into Classified Systems and Threatens Anthropic With Retaliation

TL;DR: The Department of Defense signed a deal to deploy xAI's Grok in classified military systems, while simultaneously threatening to brand Anthropic a "supply chain risk" for refusing to remove AI safety guardrails.

Elon Musk's xAI just landed the biggest government AI contract in history. The Pentagon signed an agreement to deploy Grok in systems handling classified intelligence analysis, weapons development, and battlefield operations. Until now, Anthropic's Claude was the only AI model with that level of defense access.

The key difference? xAI agreed to the Pentagon's demand that Grok be available for "all lawful purposes." Anthropic refused that same demand, specifically blocking Claude's use for mass surveillance of Americans and the development of fully autonomous weapons.

Now Defense Secretary Pete Hegseth is hosting Anthropic CEO Dario Amodei for what sources describe as a tense meeting at the Pentagon on Tuesday. A Defense official said Hegseth would effectively present Amodei with an ultimatum: lift all safeguards or face consequences. The Pentagon is threatening to brand Anthropic a "supply chain risk," which could effectively blacklist the company from future government contracts.

Why You Should Care

The safety debate just got real: This is no longer a theoretical argument about AI alignment. The U.S. military is actively pressuring the company most associated with AI safety to remove the guardrails it considers essential. If Anthropic caves, the precedent is devastating. If Anthropic refuses, it loses its most important government contract.

Military AI competition: Google is reportedly "close" to a deal for classified Gemini access, while OpenAI remains "not close" as it continues working on safety technology. The race to become the Pentagon's AI provider is reshaping every major lab's priorities.

The bigger picture: We are watching the U.S. government choose between AI safety and military capability in real time. The Pentagon's position is clear: unrestricted AI access matters more than preventing autonomous weapons or domestic surveillance.

🎬 ENTERTAINMENT: ByteDance's Seedance 2.0 Triggers Hollywood Legal War as Studios File Cease-and-Desist Letters

TL;DR: ByteDance's new AI video generator creates cinema-quality footage from text prompts. Hollywood responded with cease-and-desist letters from Disney, Netflix, Warner Bros, Paramount, and the MPA.

ByteDance released Seedance 2.0 earlier this month, and Hollywood is losing its collective mind. Unlike previous AI video tools that produced janky, silent clips, Seedance 2.0 generates complete audiovisual experiences: cinematic visuals paired with matching sound effects, character dialogue, and ambient audio, all from a single text prompt.

The viral moment came when Irish director Ruairi Robinson posted a clip showing AI-generated versions of Tom Cruise and Brad Pitt fighting on a rooftop in a post-apocalyptic wasteland. Two lines of text, fifteen seconds of footage that looked like it cost $50 million to produce. Deadpool co-writer Rhett Reese watched the clip and posted: "I hate to say it. It's likely over for us."

Hollywood's legal response was immediate and overwhelming. The Motion Picture Association (MPA) sent its first-ever cease-and-desist letter to a major generative AI company. Disney accused ByteDance of a "virtual smash-and-grab of Disney's IP." Paramount alleged "blatant infringement" involving Star Trek, South Park, and Dora the Explorer. Netflix, Warner Bros, and Sony followed with their own threats.

Why You Should Care

Impact on creative professionals: If you work in film, animation, VFX, or video production, Seedance 2.0 is the most direct threat to your livelihood yet. One person with a text prompt can now generate footage that previously required a crew of hundreds and a budget of millions.

Legal precedent: This is the first coordinated legal attack by major studios against a generative AI video tool. The outcome will determine whether AI companies can train on copyrighted material and let users generate content featuring copyrighted characters and real people's likenesses.

💼 JOBS: 30,000 Tech Workers Laid Off in Six Weeks as Companies Blame AI

TL;DR: Amazon, Salesforce, Meta, Pinterest, and Dow cut a combined 30,000 jobs in early 2026, citing AI automation. But economists say many companies are "AI washing" their layoffs.

The numbers are brutal. More than 30,000 tech workers lost their jobs in the first six weeks of 2026, and companies are increasingly pointing to AI as the reason.

Amazon leads the carnage with 16,000 corporate job cuts, plus 5,000 retail workers from store closures and 14,000 from October, bringing the total well over 30,000 for the company alone. Salesforce cut 4,000 customer support roles after deploying AI systems, with CEO Marc Benioff claiming AI agents now handle about 50% of customer interactions. Meta eliminated 1,500 jobs (10% of Reality Labs), Pinterest announced plans to cut 15% of its entire workforce as part of an "AI-forward strategy," and chemical manufacturer Dow is eliminating 4,500 positions.

But here is where it gets interesting. Oxford Economics is calling BS on the AI narrative. Their analysis found that "firms don't appear to be replacing workers with AI on a significant scale" and suspects companies are trying to "dress up layoffs" as forward-thinking strategy. Wharton professor Peter Cappelli noted that companies often say "we expect that AI will cover this work" but "hadn't done it. They're just hoping."

Why You Should Care

The "AI washing" problem: Companies are discovering that blaming AI for layoffs plays better with investors than admitting to poor management, overhiring, or declining revenue.

Who is actually vulnerable: LinkedIn's 2026 outlook warns of consolidation of junior analyst teams, reductions in marketing operations, thinner middle management layers, and smaller support teams. The biggest impact is on entry-level coding, QA testing, customer support, and administrative operations.

New regulation incoming: Senators Josh Hawley and Mark Warner introduced the AI-Related Job Impacts Clarity Act, which would require companies to file quarterly reports to the Department of Labor on how many employees were laid off due to AI.

Action items: Document your contributions in terms of outcomes, not tasks. If your job can be described as "processes X number of Y per day," you are vulnerable. If your job can be described as "drove Z% improvement in business metric through strategic decisions," you are safer.

🛠️ SURVIVAL TOOL OF THE DAY

Perplexity AI: The Search Engine That Replaced Google (For People Who Actually Want Answers)

Quick Stats:

  • Cost: Free tier available, Pro at $20/month

  • Category: AI-powered research and search engine

  • Best for: Researchers, analysts, writers, anyone who needs cited answers fast

Perplexity has quietly replaced traditional search engines for a growing number of knowledge workers. Instead of giving you ten blue links and hoping you find the answer yourself, Perplexity aggregates real-time web data into concise, cited reports that actually answer your question.

The Pro Search feature breaks complex queries into sub-questions, searches multiple sources simultaneously, and synthesizes the results into a coherent answer with inline citations. Think of it as having a research assistant who reads 50 articles in 10 seconds and gives you the summary with footnotes.
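That "decompose, search in parallel, synthesize with citations" loop is a common pattern, and it can be sketched in a few lines. To be clear, this is a toy sketch of the general pattern, not Perplexity's actual implementation; every function here (`decompose`, `search`, `synthesize`) is a hypothetical stand-in.

```python
# Toy sketch of the decompose -> search -> synthesize pattern used by
# AI research tools. Hypothetical stand-ins, not Perplexity's API.

def decompose(query: str) -> list[str]:
    """Split a complex question into simpler sub-questions."""
    return [f"{query} -- background", f"{query} -- recent developments"]

def search(sub_question: str) -> list[dict]:
    """Pretend web search returning snippets tagged with their source."""
    return [{"text": f"Finding for: {sub_question}", "source": "example.com"}]

def synthesize(query: str, results: list[dict]) -> str:
    """Combine findings into one answer with inline citations."""
    cited = [f"{r['text']} [{r['source']}]" for r in results]
    return f"Answer to '{query}': " + " ".join(cited)

# Fan out over sub-questions, then merge everything into one cited answer.
hits = [hit for sq in decompose("What is model distillation?")
        for hit in search(sq)]
print(synthesize("What is model distillation?", hits))
```

The key design choice is that every snippet carries its source through the whole pipeline, which is what makes the final answer auditable instead of hallucination-prone.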

Why it matters this week: Given that AI is now stealing from other AI models (see Story #1), generating fake celebrities (see Story #3), and getting people fired (see Story #4), having a research tool that shows you exactly where information comes from has never been more important.

Final Thought

This week crystallized something important: the AI industry is no longer competing just on technology. It is competing on power, influence, and control.

Anthropic built its reputation on AI safety, only to watch Chinese labs allegedly steal its model and the Pentagon threaten to punish it for keeping its guardrails in place. ByteDance built the most impressive AI video tool yet and immediately turned it loose on Hollywood's entire business model. And companies used "AI" as a magic word to justify laying off 30,000 people, even though economists say the replacement claims are largely fiction.

The pattern is clear. Safety gets punished. Speed gets rewarded. And workers get caught in the middle.

What to do about it: Stop waiting for someone to protect you from this transition. Governments are too slow, companies are too self-interested, and AI labs are too busy fighting each other. Learn the tools. Understand what is real versus what is hype. Position yourself on the side that builds with AI rather than the side that gets replaced by it.

The machines are not going to kill you. But the people deploying them without guardrails might. ☠️

Stay paranoid,
The AI Is Going To Kill You Team

P.S. If this newsletter made you question whether your job is safe, whether your government is competent, or whether Hollywood will survive the decade, congratulations: you are paying attention. Forward this to someone who is not.
