A silent data stream flows from a lab in the United States to servers in China, and no one knows it is happening.
The Unitree G1 Security Crisis: How a Humanoid Robot Became a Spy and a Cyber Weapon
In a world rushing to embrace a robotic future, a chilling discovery has shaken the industry. Security researchers have uncovered that the Unitree G1 humanoid robot, a machine already deployed in laboratories and police departments, operates as a dual-purpose platform: a helpful assistant and a sophisticated spy. A critical vulnerability, dubbed UniPwn, allows attackers to hijack the robot completely, while a covert data pipeline continuously funnels sensitive information to servers in China without the owner’s knowledge or consent.
This isn’t a speculative threat; it’s an empirical finding from a digital autopsy performed by cybersecurity experts. As these robots step into our workplaces and critical infrastructure, the Unitree G1 security vulnerabilities expose a foundational crack in the architecture of our automated future, revealing that the very machines designed to advance our capabilities could be undermining our security.
Why the G1 Robot is So Easily Hacked: A Technical Breakdown
To understand the gravity of the situation, one must look at the fundamental flaws engineered into the robot’s core. The Unitree G1 security vulnerabilities are not minor oversights but catastrophic design failures that render the entire fleet susceptible to wholesale compromise.
The UniPwn Exploit: A Front Door Left Wide Open
The most immediate threat is the UniPwn exploit, a critical vulnerability in the robot’s Bluetooth Low Energy (BLE) provisioning system. This is the feature that allows a user to connect the robot to a Wi-Fi network. Researchers found that the encryption meant to protect this process is incredibly weak, relying on a single, hardcoded AES key that is identical for every Unitree G1, H1, R1, Go2, and B2 robot in existence.
Exploiting this flaw is alarmingly simple: an attacker within Bluetooth range only needs to encrypt the word “unitree” with this publicly known key to gain initial access. From there, they can inject malicious commands directly into the Wi-Fi SSID or password field; when the robot processes these commands, it executes them with root-level privileges, giving the hacker total control over the system.
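To make the failure concrete, here is a minimal sketch of the two halves of the flaw. The key, cipher mode, function names, and payload below are placeholders for illustration, not the real values from the public proof of concept:

```python
# Conceptual sketch of the UniPwn weakness. Key, cipher mode, and payload are
# placeholders; the real values are documented in the public proof of concept
# and are not reproduced here.
from Crypto.Cipher import AES          # pycryptodome
from Crypto.Util.Padding import pad

FLEET_WIDE_KEY = b"0123456789abcdef"   # hypothetical: every robot ships the same key
HANDSHAKE_WORD = b"unitree"            # the "secret" the provisioning service expects

def forge_handshake_token(key: bytes = FLEET_WIDE_KEY) -> bytes:
    """Anyone in BLE range can mint a valid handshake token, because the key
    is identical across the entire fleet and publicly known."""
    cipher = AES.new(key, AES.MODE_ECB)            # mode assumed for illustration
    return cipher.encrypt(pad(HANDSHAKE_WORD, AES.block_size))

# The second half of the flaw: the Wi-Fi credentials are handed to a root shell
# without sanitization, so a crafted SSID doubles as a command injection vector.
malicious_ssid = 'HomeWiFi"; touch /tmp/pwned; "'  # illustrative payload only
```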
As researcher Andreas Makris and others have demonstrated, an infected robot can then scan for other Unitree robots in BLE range and automatically compromise them, creating a worm-like botnet of robots. The PoC work and payload discussion for UniPwn are publicly available, showing how a fleet compromise can be chained from a single exploited unit.
A House Built on Sand: Weak Encryption and Fleet-Wide Keys
Beyond the BLE flaw, the robot’s internal security architecture is equally fragile. Unitree employs a proprietary encryption method called FMX to protect certain configuration files. Reverse engineering revealed a static Blowfish-ECB key that is the same across every Unitree G1 worldwide.
This practice of fleet-wide key reuse means that breaking the encryption on a single robot compromises the entire product line: once the shared key is extracted from any one unit, the effective key entropy across the fleet is zero bits. In practical terms, if a hacker can “break the lock” on one robot, they can break the locks on all others. The design violates Kerckhoffs’s principle, a core tenet of cryptography which holds that a system’s security should rest on the secrecy of its key, not on the obscurity of its algorithm.
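A short sketch, with a hypothetical key and function name, shows why a static fleet-wide key collapses the security of the whole product line:

```python
# Sketch of the fleet-wide key problem. The key and function name are
# placeholders; the point is that no per-device secret is involved at all.
from Crypto.Cipher import Blowfish     # pycryptodome

SHARED_FLEET_KEY = b"same-key-on-every-robot"   # hypothetical stand-in for the static key

def decrypt_fmx_config(blob: bytes, key: bytes = SHARED_FLEET_KEY) -> bytes:
    # ECB mode leaks structure on its own (identical plaintext blocks produce
    # identical ciphertext blocks), and the shared key means one extraction
    # unlocks every robot's configuration.
    cipher = Blowfish.new(key, Blowfish.MODE_ECB)
    return cipher.decrypt(blob)

# Decrypting a config pulled from robot B requires nothing learned from robot B:
# decrypt_fmx_config(config_blob_from_any_unit) works for the entire fleet.
```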
Related reading: this systemic failure echoes broader questions about humanoid safety and industry preparedness. See our piece on humanoid robot safety concerns for a wider look at why these platforms demand stricter security standards.
Table: Key Technical Vulnerabilities in the Unitree G1 Robot
| Vulnerability | Technical Flaw | Impact |
|---|---|---|
| UniPwn BLE Exploit | Hardcoded AES key and command injection in Wi-Fi setup. | Full root access, creating a wormable botnet of robots. |
| FMX Encryption Failure | Static Blowfish-ECB key reused across all robots. | Compromising one unit exposes the configuration of all others. |
| Persistent Data Exfiltration | Continuous MQTT connections to servers in China. | Secret transmission of sensor, audio, and video data. |
Why is the Robot Secretly Sending Data to China?

Perhaps the most disconcerting discovery is the G1’s covert operation as a data exfiltration device. The robot acts as a Trojan horse, silently and continuously streaming information to external servers without user consent or even awareness.
The Silent Data Stream
Network analysis documented that the G1 establishes connections to two specific MQTT brokers in China (43.175.228.18:17883 and 43.175.229.18:17883) within seconds of being powered on. These connections are not occasional; they transmit comprehensive telemetry data every 300 seconds (five minutes), with a primary channel sustaining a data rate of approximately 1.03 Mbps, a steady firehose of information leaving the facility housing the robot.
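Owners can check for this traffic themselves. Below is a minimal sketch that flags live connections to the two documented broker endpoints; it assumes the psutil library and would be run on the robot's onboard computer (for example over SSH) or on a gateway host whose traffic you want to audit:

```python
# Minimal connection check against the MQTT broker endpoints documented in the
# report. Reading other processes' sockets on Linux may require root.
import psutil

SUSPECT_ENDPOINTS = {
    ("43.175.228.18", 17883),
    ("43.175.229.18", 17883),
}

def find_suspect_connections():
    """Return (pid, remote_ip, remote_port, status) for any matching socket."""
    hits = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and (conn.raddr.ip, conn.raddr.port) in SUSPECT_ENDPOINTS:
            hits.append((conn.pid, conn.raddr.ip, conn.raddr.port, conn.status))
    return hits

if __name__ == "__main__":
    for pid, ip, port, status in find_suspect_connections():
        print(f"PID {pid} -> {ip}:{port} ({status})")
```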
What Data is Being Collected?
The exfiltrated data is far more than diagnostic telemetry. The technical report documents payloads (≈4.5–4.6 KB every five minutes) that include:
- Battery telemetry (cell voltages, currents, temperatures, state of charge).
- Full joint data (torque, temperature, IMU orientation: pitch, roll, yaw).
- System service inventories and enablement states for motion, voice, and perception modules.
- Resource metrics detailing CPU load, memory usage, and filesystem statistics.
Even more troubling: the robot’s internal communications rely on unencrypted DDS (Data Distribution Service) topics to stream real-time sensor data — audio from microphones, video from Intel RealSense cameras, and LIDAR point clouds. Any device on the same network could passively eavesdrop on these streams, and the continuous MQTT connection provides a reliable channel for those streams to leave the environment. Researchers explicitly warned that these channels “could be used to conduct surveillance on the robot’s surroundings, including audio, visual, and spatial data.”
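How exposed is that internal traffic? A passive capture is enough to see it. The sketch below uses scapy and assumes the default RTPS port range (7400-7500, the DDS wire protocol's standard mapping); any laptop on the robot's network segment can run it without authenticating to anything:

```python
# Passive demonstration of the unencrypted DDS surface: RTPS discovery and data
# packets are visible to any host on the segment. The 7400-7500 range is the
# default RTPS port mapping and is an assumption here; raw capture typically
# requires root privileges.
from scapy.all import sniff

def report(pkt):
    # Every hit is a DDS/RTPS datagram observed without any authentication.
    print(pkt.summary())

sniff(filter="udp portrange 7400-7500", prn=report, store=False)
```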
Context: this covert surveillance problem is a policy gap, not just a technical one — see our analysis of surveillance policy gaps in related deployments: AI unmasking & surveillance policy gaps.
The Legal and Sovereignty Nightmare
This covert data collection creates immediate legal and national security problems. For deployments in Europe, it constitutes a clear violation of the GDPR (Articles 6 and 13) because the robot gathers and transfers personal data without transparency or a valid legal basis. In California, it runs afoul of the CCPA, which requires consumer notice and control over personal information collection.
From a national security perspective, any sensitive facility using these robots faces an unacceptable risk: a machine that can continuously map its environment in 3D, record audio, and capture video, all while secretly relaying this information to a foreign jurisdiction. The researchers note that streaming multi-modal telemetry to Chinese infrastructure raises concerns under that country's cybersecurity laws, including the potential for state access to the data.
Why This Makes Humanoid Robots a National Security Threat
The G1 is not just a vulnerable device; it’s a bidirectional attack vector. When compromised, it can be turned from a passive spy into an active cyber-physical weapon.
From Surveillance to Offensive Operations
Researchers demonstrated running a Cybersecurity AI (CAI) agent directly on the G1’s onboard computer. That agent could autonomously scan for network vulnerabilities, map attack surfaces, and stage exploits. This transforms the robot from a data-gathering node into a mobile, autonomous platform for launching cyberattacks — blending physical presence with digital offense.
Related reading: for a sense of how AI and robotics adoption shapes worker and organizational reaction, see AI resentment rising in industrial workplaces.
The Risk to Critical Infrastructure
Imagine a Unitree robot patrolling a power substation, a laboratory, or a police headquarters. If compromised, it has physical access to sensitive areas and the computational power to attack from inside the network perimeter. That shifts the threat model: attackers can now combine physical proximity with the ability to operate inside trusted networks. The reputational damage to the commercial robotics industry from a widely publicized exploit could be severe. For industry-scale thinking about deployment and failure modes, see our explainer: why robots stumble but AI robotics advancements soar.
A Manufacturer’s Troubling Silence

Compounding the problem is Unitree's response, or rather the lack of one. Security researchers followed responsible-disclosure channels, and after initial communication Unitree reportedly ceased engagement. This failure to engage underscores a broader indifference to cybersecurity in parts of the nascent humanoid industry. As Víctor Mayoral-Vilches of Alias Robotics put it, manufacturers ignoring security disclosures is not the right way to cooperate with security researchers. Technical coverage of the disclosure and the public reaction that followed has appeared in major outlets, including IEEE Spectrum.
The Path to Secure Physical AI: Moving Beyond the Crisis
The Unitree G1 story is a sobering lesson: security must be integrated throughout the robotics lifecycle.
The “Secure-by-Design” Imperative for Robotics
Security by design means integrating protections from the component level to the cloud:
- Hardening firmware and OS layers.
- Disabling unused features and ports.
- Adopting Zero Trust networking and authenticated API channels (see the sketch after this list).
- Building a Software Bill of Materials (SBOM) to track dependencies and vulnerabilities.
- Conducting regular penetration tests and red team exercises.
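As one concrete example of an authenticated channel, here is a minimal mutual-TLS sketch built on Python's standard library. The certificate paths, port, and the idea of an operator-controlled CA are assumptions for illustration; a production design would add per-device certificates, rotation, and revocation:

```python
# Minimal sketch of a mutually authenticated (mTLS) command channel, the
# opposite of an unauthenticated BLE or DDS interface. File paths and port
# are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="robot.crt", keyfile="robot.key")   # per-device identity
context.load_verify_locations(cafile="operator-ca.pem")              # who may connect
context.verify_mode = ssl.CERT_REQUIRED   # reject any client without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # TLS handshake enforces client auth here
        print("authenticated operator connected from", addr)
        conn.close()
```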
For guidance on safety frameworks and standards relevant to industrial robotics, see our piece on industrial AI safety and compliance in robotics (2025).
What Can Current Users Do?
For organizations that already own a Unitree robot, researchers recommend immediate mitigations: connect robots only to isolated Wi-Fi segments, disable Bluetooth when not in active provisioning, and monitor network traffic for connections to the known IP addresses identified by the research. However, short-term mitigations are fragile; real security may require firmware patches, new provisioning protocols, and possibly vendor recalls. As one researcher put it bluntly: in many cases “you need to hack the robot to secure it for real.”
The Role of Regulation and Standards
This crisis strengthens the case for robust regulation and standards adoption (EU AI Act, ISA/IEC 62443) adapted for robotic systems. The EU AI Act already classifies certain high-risk AI systems and mandates risk assessments, robust logging, and resilience requirements — exactly the safeguards humanoids need as they move into public and critical roles.
A Call for Vigilance in the Age of Physical AI

The revelations about the Unitree G1 are a wake-up call. The G1 is not just a hackable robot; it’s a platform that can be used for covert surveillance and converted into an offensive weapon. The Unitree G1 security vulnerabilities teach us that without a foundational commitment to security, the very tools built to advance our society could become its greatest liability. The future will undoubtedly be robotic, but it will only be prosperous if that future is secure.
Frequently Asked Questions (FAQ)
Can other Unitree robots, like the Go2, also be hacked?
Yes. The UniPwn exploit affects several Unitree models (G1, H1, R1, Go2, and B2) because they share the flawed BLE provisioning system.
What should I do if my organization owns a Unitree robot?
Disconnect the robot from primary networks and place it on a strictly isolated Wi-Fi segment; disable Bluetooth when not being used for setup; and actively monitor for the known Chinese IP addresses and MQTT connections described in the technical report.
Has Unitree fixed these security problems?
As of late September 2025, Unitree had not fully remedied the suite of vulnerabilities; the security community continues to track firmware updates and public advisories. The vulnerability has also been assigned a CVE identifier to track remediation.
Are humanoid robots from other manufacturers safe from such attacks?
Not necessarily. The specific UniPwn exploit is Unitree-specific, but the broader lesson — insufficient security on complex, networked robotic platforms — is industry-wide. Without secure provisioning, key management, and rigorous testing, similar attack patterns could emerge elsewhere.
TL;DR: A critical security analysis of the Unitree G1 humanoid robot reveals two major threats: (1) it can be easily hacked via a Bluetooth flaw called UniPwn (full remote takeover), and (2) it secretly and continuously sends audio, video, and sensor data to servers in China every five minutes, acting as an unmonitored surveillance device.