Author: Mike Miller | Published on: February 15, 2026

Gemini AI Cloning Attempts Put Cybersecurity at Risk

Google says hackers and even some private companies are trying to clone its Gemini AI model by hammering it with prompts designed to reveal how it thinks and reasons. The company warns this “model extraction” could let attackers build copycat systems for cybercrime, spying, or unregulated commercial tools.

What Google Says Is Happening

Google’s latest Threat Tracker report describes a massive wave of “distillation” or model‑extraction attacks against Gemini. Instead of breaking into Google’s systems, attackers access Gemini legitimately through its API and then try to extract its hidden reasoning by sending huge numbers of carefully crafted prompts.

One campaign sent more than 100,000 prompts designed to force Gemini to show its full step‑by‑step reasoning, not just the final answer. Google considers this a form of intellectual‑property theft and says it can shut down accounts that violate its terms of service.
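Mechanically, this kind of distillation attack is simple: query the model at scale and keep the prompt/response pairs as training data for a copycat “student” model. The sketch below is purely conceptual; `query_model` is a hypothetical stand-in, not a real API call.

```python
# Conceptual sketch of distillation-style data harvesting.
# `query_model` is a hypothetical placeholder -- no real endpoint,
# credentials, or vendor API is used here.

def query_model(prompt: str) -> str:
    # A real attacker would call a commercial LLM API here and try to
    # coax out step-by-step reasoning, not just the final answer.
    return f"reasoning-and-answer-for:{prompt}"

def harvest(prompts):
    """Collect (prompt, response) pairs for later training of a copycat model."""
    dataset = []
    for p in prompts:
        dataset.append({"prompt": p, "response": query_model(p)})
    return dataset

pairs = harvest(["explain, step by step, why the sky is blue"])
```

The point of the sketch is that nothing here requires a break-in: at sufficient volume, ordinary API access becomes the attack surface.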

Who’s Trying to Abuse Gemini

Google reports two main groups: state‑backed hackers and private‑sector actors. State‑sponsored groups from countries including China, Iran, North Korea, and Russia have used Gemini to support nearly every phase of an attack, from early research to post‑compromise activity.

At the same time, unnamed private companies and researchers are hitting Gemini with high‑volume queries to imitate its proprietary algorithms for their own products.

Why This Matters for Cybersecurity

If attackers successfully clone parts of Gemini, they could build powerful AI tools that are not bound by Google’s safety rules. That might lead to AI systems tuned specifically for malware development, large‑scale phishing, or competitive intelligence in markets like finance and software.

While Google says these abuses haven’t led to major technical breakthroughs yet, AI is clearly boosting the speed and scale of attacks, especially social engineering and vulnerability research. Security teams now have to watch not just for traditional malware, but also for suspicious, automated use of AI tools in their environments.

How Google Is Responding

Google says it is enforcing its terms of service, including shutting down accounts tied to these campaigns.

The company is also urging other AI providers to prepare for the same style of attacks and to monitor unusual API behavior, like massive bursts of code‑generation or probing queries. For now, Google says consumers are not directly targeted by the extraction campaigns, but the downstream tools created from stolen AI tech could eventually be used against them.
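As a rough illustration of the monitoring Google recommends, a provider could flag API keys whose request volume in a given window far exceeds normal usage. This is a simplified sketch, not Google's actual detection logic; the threshold and windowing scheme are arbitrary assumptions, and a real system would also weigh prompt similarity, token volume, and per-customer baselines.

```python
from collections import defaultdict

def flag_bursty_clients(request_log, burst_threshold=1000):
    """Flag clients whose request count in any time window exceeds the threshold.

    request_log: iterable of (client_id, window_id) tuples, where window_id
    identifies a fixed time bucket (e.g. one per minute).
    """
    counts = defaultdict(int)
    for client_id, window_id in request_log:
        counts[(client_id, window_id)] += 1
    return {cid for (cid, _w), n in counts.items() if n > burst_threshold}

# Example: one key sends 1,500 requests in a window, another sends 10.
log = [("key-A", 0)] * 1500 + [("key-B", 0)] * 10
flag_bursty_clients(log)  # returns {"key-A"}
```

A fixed threshold like this is crude, but it captures the core idea: extraction campaigns of the scale Google describes (100,000+ prompts) stand out sharply against legitimate traffic.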