China’s PLA AI Procurement: What Military-Scale Use of Commercial AI Means for Business and Defense
TL;DR — Georgetown University’s Center for Security and Emerging Technology (CSET) review of thousands of People’s Liberation Army (PLA) procurement requests shows Beijing is rapidly folding commercial AI into military systems across air, sea, land, space, and the information domain. The PLA favors fast, low‑cost iterative testing and civil‑military fusion to convert civilian tech into military options—including drone swarms, robot dogs, underwater sensors, satellite tools, and even deepfake‑capable influence kits. U.S. strengths in compute and combat experience remain, but procurement friction and strained ties with frontier AI firms are eroding tempo. Companies producing dual‑use AI should assume tighter vetting, new export controls, and higher reputational risk.
What CSET found (PLA procurement snapshot)
Researchers Sam Bresnick, Emelia S. Probasco, and Cole McFaul at Georgetown’s CSET analyzed thousands of publicly posted PLA procurement requests covering roughly 2022–2025. The dataset is noisy but revealing: it shows sustained, multi‑domain requests for unmanned systems, sensor networks, decision aids, and tools for information operations.
Key acronym guide: UAV = unmanned aerial vehicle (drone), UGV = unmanned ground vehicle (ground robot), UUV = unmanned underwater vehicle (underwater drone), PLA = People’s Liberation Army.
“The PLA’s procurement activity shows they’re putting AI into play across air, sea, land, space, and the information environment.”
What the PLA is buying (kinetic vs cognitive)
The procurement stream mixes clearly kinetic items with explicitly cognitive tools. Grouping them clarifies intent.
- Kinetic and platform systems
  - Drone swarms with onboard target identification and coordination logic (UAV swarms).
  - Unmanned ground vehicles (UGVs), robot dogs, and humanoid platforms for logistics, reconnaissance, or force projection.
  - Unmanned underwater vehicles (UUVs) and distributed sensor nets for submarine tracking and anti‑submarine warfare.
  - Satellite‑attachment robots and algorithms intended to interfere with or manipulate space assets.
- Cognitive and information tools
  - AI decision‑support systems intended to assist commander choices and mission planning.
  - Deepfake‑capable tools and influence‑oriented systems for cognitive warfare and information ops.
  - Algorithms for data fusion, targeting, and automated coordination across heterogeneous platforms.
Some procurement notices are strikingly specific—requests for algorithms to coordinate hundreds of low‑cost drones, or for tools that synthesize realistic audiovisual content for psychological operations. Those items suggest an operational appetite for both massed inexpensive systems and active manipulation of information environments.
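To make the swarm‑coordination idea concrete, here is a deliberately simplified sketch of the kind of lightweight logic such coordination involves in principle: each agent steers toward the group centroid (cohesion) while keeping clear of its nearest neighbor (separation). Everything here, including the agent count and the cohesion and separation weights, is illustrative and has no relation to any actual procured system.

```python
import random

random.seed(1)

def step(positions, cohesion=0.05, separation=0.5):
    """Advance a toy 2-D swarm one tick: pull toward the centroid,
    push away from the nearest neighbor if it is too close."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    new = []
    for i, (x, y) in enumerate(positions):
        # Cohesion: move a small fraction of the way toward the centroid.
        dx, dy = (cx - x) * cohesion, (cy - y) * cohesion
        # Separation: find the nearest other agent and back off if crowded.
        nx, ny = min((p for j, p in enumerate(positions) if j != i),
                     key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        if (nx - x) ** 2 + (ny - y) ** 2 < separation ** 2:
            dx, dy = dx + (x - nx) * 0.5, dy + (y - ny) * 0.5
        new.append((x + dx, y + dy))
    return new

# Scatter 50 agents over a wide area, then let the rules run.
swarm = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(50)]
for _ in range(100):
    swarm = step(swarm)

spread = max(max(abs(x), abs(y)) for x, y in swarm)
print(f"max distance from origin after 100 steps: {spread:.1f}")
```

The point of the sketch is that coherent group behavior emerges from two cheap local rules, which is why coordination logic for massed low‑cost platforms can be developed and iterated quickly.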
How China is building capability: civil‑military fusion and “many small bets”
China’s approach emphasizes speed and breadth over a single technological leap. The state uses subsidies, incentives, and dedicated procurement channels to pull commercial companies into military projects—a practice commonly called civil‑military fusion. That means off‑the‑shelf commercial advances in robotics, sensors, satellite imagery and machine learning can be rapidly repurposed for defense.
Think of the model as “many small bets”: fund lots of cheap experiments, iterate quickly, and scale what works. This leverages strengths in manufacturing, dense electronics supply chains, and large domestic markets for testing. The PLA’s public parade of unmanned systems in September 2025 signaled that some of those experiments are moving from lab to display.
Risks and limits: automation, deception and the procurement caveat
The procurement trail highlights three systemic risks.
- Over‑automation and human judgment gaps. Many solicitations explicitly seek decision aids; some appear intended to fill gaps in experience among officers. When AI substitutes for human judgment—especially under stress—errors can cascade faster than human oversight can intervene.
- Deception and data‑poisoning. AI systems trained on public or commercial data can be misled. Deliberate manipulation of imagery, falsified telemetry, or poisoned training datasets can cause misclassification or miscoordination at scale. Data‑poisoning is not abstract: it means adversaries can subtly alter inputs so automated systems make the wrong call.
- Escalation dynamics. Rapid iterative deployment and automated decision loops can compress decision timelines, increasing the chance that a misinterpreted signal triggers escalation before humans can reassert control.
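To see why data‑poisoning matters, consider a toy example, purely illustrative and unrelated to any real military system: a simple nearest‑centroid classifier trained on one‑dimensional signals labeled "friendly" (class 0) or "hostile" (class 1). Flipping a biased slice of training labels shifts the learned decision boundary, and test accuracy drops even though the model code itself is untouched.

```python
import random

random.seed(0)

def make_data(n):
    # Two classes: "friendly" signals near 0.0, "hostile" near 1.0.
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        data.append((random.gauss(label, 0.25), label))
    return data

def centroid_classifier(train):
    # Fit per-class means; classify a value by its nearest centroid.
    means = {}
    for cls in (0, 1):
        vals = [v for v, y in train if y == cls]
        means[cls] = sum(vals) / len(vals)
    return lambda v: min(means, key=lambda c: abs(v - means[c]))

def accuracy(clf, test):
    return sum(clf(v) == y for v, y in test) / len(test)

train, test = make_data(500), make_data(500)
clean_acc = accuracy(centroid_classifier(train), test)

# Poison: flip a biased slice of hostile training labels to friendly.
poisoned = [(v, 0 if y == 1 and v < 1.0 else y) for v, y in train]
poisoned_acc = accuracy(centroid_classifier(poisoned), test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The attacker never touches the model or the test data, only a portion of the training labels, yet the decision boundary migrates toward the hostile class and more hostile signals get waved through as friendly. Real poisoning attacks are subtler, but the mechanism is the same.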
Those risks are real—but caveats matter. Procurement notices can reflect aspirational plans, duplicated solicitations, or commercial marketing language. Not every request becomes an operational system. Integration, doctrine, classified datasets, and contested combat experience remain limiting factors.
“AI tools are being positioned to make up for limited combat experience among PLA officers—a potential escalation danger if machines replace judgment.”
U.S. position: advantages and friction points
The United States retains distinct advantages: scale of compute, access to high‑quality classified datasets, deep pools of technical talent, and real combat experience with AI‑augmented systems. Practical examples—AI‑assisted force management and planning systems used in U.S. commands—show how operational know‑how matters.
But the U.S. faces tempo problems. Slow procurement cycles, bureaucratic friction, and strained relationships with frontier AI companies (for example, when firms are designated supply‑chain risks) complicate rapid fielding. The 2026 National Defense Authorization Act (NDAA) includes procurement reforms, and Pentagon leaders have publicly urged an attitude closer to “wartime” speed on removing internal blockers—signaling recognition of the problem.
What businesses should do now: a practical checklist for dual‑use AI firms
Companies that produce robotics, satellite services, sensor networks, model training data, or AI agents should assume their products could be repurposed. The following checklist prioritizes low‑friction, high‑value actions.
- Map dual‑use exposure. Inventory products, customers, and sales channels that could be used for military or influence operations.
- Classify export risk. Determine applicable export control codes and prepare for tighter outbound screening and licensing requirements.
- Harden data and model provenance. Log dataset origins, maintain immutable provenance records, and require authenticated sources for high‑sensitivity models.
- Adversarial testing and red‑teaming. Run data‑poisoning, spoofing and adversarial example tests regularly; treat them as product safety checks.
- Third‑party and supplier vetting. Score vendors for political, regulatory and diversion risk; require contractual clauses that limit misuse.
- Update contracts and insurance. Add explicit misuse, export and indemnity language; review insurance coverage for reputational and regulatory incidents.
- Prepare incident response. Have a scenario plan for misuse—legal, communications and technical playbooks focused on dual‑use exposure.
- Engage policymakers. Join standards work and industry coalitions to shape pragmatic rules for civil‑military fusion and export control.
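The provenance item in the checklist above can be made concrete with a minimal sketch: a hash‑chained, append‑only log in which each record commits to the previous one, so any later edit breaks every subsequent link. Field names such as dataset_id and source are hypothetical placeholders, not an established schema.

```python
import hashlib
import json

def record_hash(record):
    # Canonical JSON serialization so the hash is order-independent.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ProvenanceLog:
    """Append-only log: each entry stores the hash of the previous entry."""

    def __init__(self):
        self.entries = []

    def append(self, dataset_id, source, checksum):
        prev = record_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {
            "dataset_id": dataset_id,   # hypothetical field names
            "source": source,
            "checksum": checksum,
            "prev_hash": prev,
        }
        self.entries.append(entry)
        return entry

    def verify(self):
        # Re-walk the chain; an edited entry invalidates every later link.
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = record_hash(entry)
        return True

log = ProvenanceLog()
log.append("imagery-v1", "vendor-A", "sha256:...")
log.append("imagery-v2", "vendor-B", "sha256:...")
print(log.verify())                      # chain intact
log.entries[0]["source"] = "tampered"
print(log.verify())                      # tampering detected
```

A production system would anchor the chain in signed, externally replicated storage, but even this toy version illustrates the property the checklist asks for: provenance records that cannot be silently rewritten after the fact.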
Policy options to blunt risks and preserve advantage
Governments should move beyond binary choices of embargo or laissez‑faire. Practical levers include:
- Targeted export controls that focus on systems and datasets most consequential to military operations, not blanket bans that slow legitimate commerce and alliance cooperation.
- Allied secure‑compute pools to let vetted partners access frontier models under controlled conditions, preserving operational advantages without exposing IP or data to adversaries.
- Coalition procurement and R&D to share costs and operational lessons faster across partners.
- Standards for model provenance and tamper‑evident logging so militaries and critical services can trust the data and models they depend on.
Scenario: what a PLA‑first iterative deployment could look like in 2027
By 2027, a low‑cost approach has scaled: thousands of inexpensive surveillance drones coordinated by lightweight swarm algorithms monitor maritime approaches; a network of UUVs and passive sensors provides continuous acoustic coverage of choke points; satellite‑capable robots and algorithms intermittently blind or deny commercial imagery. At the same time, targeted deepfake campaigns degrade public trust in localized crisis reporting, creating confusion during a border incident.
Automated decision aids flag a perceived incursion. Human officers—faced with compressed timelines and ambiguous data—defer to algorithmic recommendations. A misclassified signal triggers a kinetic response; the opponent escalates. The event does not require a single superweapon—rather, it emerges from many small systems operating at speed and a degraded information environment that amplifies misinterpretation.
Top 5 prioritized recommendations
- For business leaders: Conduct an immediate dual‑use audit and harden data/model provenance across products within 90 days.
- For product teams: Institute routine adversarial testing focused on data‑poisoning and spoofing scenarios; embed red‑team findings into release gating.
- For procurement and policy: Accelerate trusted procurement lanes and coalition compute initiatives to maintain tempo without forfeiting safety.
- For regulators: Design targeted export and vendor‑vetting rules that distinguish everyday commercial tools from high‑risk dual‑use capabilities.
- For industry: Form cross‑sector standards bodies for model provenance, logging and third‑party risk scoring to reduce regulatory fragmentation.
Quick FAQ
What is the PLA actually procuring?
Drone swarms, robot dogs and humanoids, satellite‑targeting robots and algorithms, UUVs and submarine sensor networks, AI decision‑support systems, and deepfake‑capable tools for influence operations.
How is China advancing military AI?
Through rapid, low‑cost iterative experiments that harvest civilian technology via civil‑military fusion—subsidies and incentives bring commercial firms into defense projects.
What are the principal risks?
Over‑automation that replaces human judgment, deception and data‑poisoning that misleads AI systems, and faster escalation dynamics from automated decision loops.
Does the U.S. still hold an edge?
Yes—on compute, talent, and combat‑hardened operational experience—but procurement friction and strained industry partnerships threaten tempo and rapid fielding.
Methodology and scope note
This piece summarizes Georgetown University’s CSET public review of thousands of PLA procurement notices covering roughly 2022–2025 and places the findings in the context of U.S. defense posture and commercial risk. Procurement records are an important signal but not a one‑to‑one predictor of deployed capability; integration, doctrine and combat experience still matter.
Commercial AI is a strategic input now. For executives and policymakers, the choice is straightforward: accelerate secure experimentation and harden governance, or cede tempo to approaches that embed risk into unstable decision loops. Speed matters—but so does stewardship.