MI Történik?

Artificial intelligence news in Hungarian, updated daily


Anthropic accuses Chinese AI labs of copying models through distillation attacks

Anthropic accused three Chinese AI labs (DeepSeek, Moonshot AI, and MiniMax) of using 24,000 fake accounts and 16 million exchanges to run coordinated distillation attacks. Distillation is a technique in which a smaller model is trained on the outputs of a larger, more capable model, allowing it to match that model's performance at a fraction of the original research cost. The practice helps explain why open-source models trail frontier models by an average of only three months.
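At its core, distillation trains the student to imitate the teacher's output distribution rather than just hard labels. A minimal NumPy sketch of the standard temperature-scaled distillation loss (a generic illustration of the technique, not the pipeline any of the labs above actually used):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened by a temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this pushes the student's outputs toward the teacher's.
    The T^2 factor is the conventional scaling that keeps gradient
    magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature ** 2)
```

When the student's logits match the teacher's exactly, the loss is zero; any divergence makes it positive, so gradient descent on this quantity transfers the teacher's behavior to the student.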
Why does it matter?

Anthropic and OpenAI argue that China’s approach to distillation presents a national security threat: large-scale intellectual property (IP) extraction by a foreign adversary. But the legal ground is murky, especially since many frontier models were themselves trained on data that wasn’t theirs in the first place.

View the original source (in English) →