Policy & Regulatory Landscape
Current and emerging policy framework governing foreign-origin AI in defense, covering DoD directives through Five Eyes posture.
Policy Brief — Section 06 Scope: Publicly available policy sources Knowledge Basis: Compiled from publicly available sources through early 2025. Users should verify all cited documents for subsequent updates, amendments, or revocations — particularly given the rapidly evolving AI policy environment and changes in administration priorities since January 2025.
Table of Contents
- Executive Summary
- DoD Policy on Foreign-Origin AI Models
- CDAO / JAIC Guidance
- NIST AI Risk Management Framework
- ITAR/EAR and Export Control Implications
- Executive Orders on AI
- Five Eyes and NATO Allied Approaches
- Congressional Action
- FedRAMP and Emerging AI Authorization Frameworks
- Synthesis and Implications for Chinese-Origin Models
- Sources and References
1. Executive Summary
Bottom Line Up Front: As of early 2025, no single DoD directive explicitly bans the use of all foreign-origin AI models by name. However, the cumulative effect of existing policies, executive orders, acquisition regulations, and supply chain security requirements creates an environment in which deploying a Chinese-origin AI model (e.g., Qwen, DeepSeek, Baichuan) in any defense or intelligence context faces severe — and likely prohibitive — regulatory friction. The barriers include:
- DoD Directive 3000.09 and the DoD AI Adoption Strategy require traceable, trustworthy AI with accountable supply chains.
- NIST AI RMF (AI 100-1) and the Adversarial ML taxonomy (AI 100-2) establish risk categories that foreign-adversary-origin models inherently trigger.
- Export control regimes (ITAR/EAR) create legal risk if classified or CUI data is processed through foreign-origin model architectures, even locally hosted.
- EO 14110 (October 2023) imposed reporting requirements on frontier AI and specifically flagged foreign models as a national security concern, though the Trump administration’s EO 14148 (January 2025) revoked EO 14110 and shifted the policy posture.
- Congress has introduced multiple bills targeting Chinese AI in government, and the FY2024/FY2025 NDAAs contain relevant supply chain provisions.
- Five Eyes allies are converging on similar postures of caution or outright exclusion.
The absence of an explicit ban should not be read as permission. The regulatory and policy environment creates what amounts to a de facto prohibition for any program subject to DoD oversight, CUI handling, or operating under authority to operate (ATO) requirements.
2. DoD Policy on Foreign-Origin AI Models
2.1 DoD Directive 3000.09 (Autonomy in Weapon Systems)
Originally issued in 2012 and updated November 25, 2023, this directive governs autonomous and semi-autonomous weapons systems. While focused on lethal autonomy, it establishes principles relevant to all DoD AI:
- Human judgment and accountability must be maintained in AI-enabled systems.
- Systems must be “sufficiently robust” to function as intended and must undergo rigorous test and evaluation (T&E).
- Supply chain integrity is implied: a system whose behavior cannot be predicted or verified cannot satisfy these requirements.
Relevance: A Chinese-origin model with opaque training data, unknown alignment procedures, and potential embedded behaviors cannot readily satisfy the “sufficiently robust” standard.
Source: DoD Directive 3000.09, “Autonomy in Weapon Systems,” updated November 25, 2023.
2.2 DoD AI Principles (Adopted February 2020)
The Department adopted five ethical AI principles: Responsible, Equitable, Traceable, Reliable, and Governable. The traceability principle is most directly relevant:
“The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.”
Relevance: A model trained by Alibaba or a Chinese AI lab using undisclosed datasets, with training processes subject to PRC regulatory oversight (including China’s Interim Measures for the Management of Generative AI Services, effective August 15, 2023), cannot satisfy traceability requirements. US defense personnel cannot audit training procedures subject to PRC state control.
Source: DoD, “DOD Adopts 5 Principles of Artificial Intelligence Ethics,” February 24, 2020.
2.3 DoD Instruction 5000.82 — Acquisition of IT and Business Systems
This instruction, along with the broader DoD 5000 acquisition framework, requires programs to assess supply chain risk. Section 875 of the FY2018 NDAA (10 U.S.C. 3252) established the Federal Acquisition Supply Chain Security Act, which empowers the Federal Acquisition Security Council (FASC) to issue exclusion and removal orders for risky technologies.
Relevance: An AI model produced by an entity subject to PRC National Intelligence Law (Article 7, requiring organizations to “support, assist, and cooperate with state intelligence work”) is a textbook supply chain risk under this framework.
2.4 DFARS 252.204-7012 and CMMC
The Defense Federal Acquisition Regulation Supplement clause 252.204-7012 requires contractors handling Covered Defense Information (CDI) to implement NIST SP 800-171 controls. The Cybersecurity Maturity Model Certification (CMMC) 2.0 framework further operationalizes these requirements.
Relevance: Processing CUI/CDI through a foreign-origin model introduces an information processing component that must be inventoried, assessed, and controlled under these frameworks. An AI model is a data processing system — its provenance is a factor in the security posture assessment. Assessors applying NIST 800-171 controls (specifically the Supply Chain Risk Management family, SR-1 through SR-12) would be expected to flag a Chinese-origin model as a risk.
3. CDAO / JAIC Guidance
3.1 Organizational Background
The Joint Artificial Intelligence Center (JAIC) was established in 2018 and absorbed into the Chief Digital and Artificial Intelligence Office (CDAO) in June 2022. The CDAO, led by the Chief Digital and Artificial Intelligence Officer, reports directly to the Deputy Secretary of Defense and is responsible for accelerating DoD’s AI adoption while managing risk.
3.2 Responsible AI (RAI) Toolkit and Implementation Pathway
The CDAO published the Responsible AI (RAI) Toolkit (publicly available at rai.tradewindai.com), which provides a structured process for DoD AI projects. Key elements relevant to foreign-model risk:
- RAI Assessment Process: Projects must complete assessments that include evaluation of data provenance, model provenance, and supply chain integrity.
- AI T&E (Test and Evaluation) Guidance: The CDAO, in coordination with the Director of Operational Test and Evaluation (DOT&E), has emphasized that AI systems must undergo testing that accounts for adversarial robustness — a standard that a model with potential embedded adversarial behaviors cannot cleanly pass.
- Model Cards and Documentation: Following industry practices (and aligned with NIST AI RMF), the CDAO encourages comprehensive model documentation. A model from a PRC entity cannot provide the level of training documentation the RAI Toolkit expects.
3.3 CDAO Data, Analytics, and AI Adoption Strategy
The CDAO’s adoption strategy emphasizes the use of AI through platforms like Advana (the DoD’s enterprise data and analytics platform) and through programs like Task Force Lima (generative AI task force, established August 2023). Task Force Lima was directed to:
- Assess generative AI use cases across DoD.
- Evaluate available models and their suitability.
- Develop guidance on deployment and risk management.
Task Force Lima’s work has been reported as favoring commercial models from US providers with appropriate security controls. While specific guidance naming Chinese models has not been publicly released in an unclassified format, the operational posture has been to work with US hyperscalers (Microsoft Azure Government, AWS GovCloud, Google) and US-origin models.
3.4 CDAO AI Inventory and Registry
Per the AI in Government Act of 2020 (Division U, Title I of the Consolidated Appropriations Act 2021), agencies must inventory their AI use cases. The CDAO maintains the DoD AI inventory. Any deployment of a foreign-origin model would appear in this inventory and would be subject to review — creating an institutional check against unvetted foreign models.
Sources: CDAO RAI Toolkit (rai.tradewindai.com); DoD press releases on Task Force Lima (August 2023); AI in Government Act of 2020.
4. NIST AI Risk Management Framework
4.1 NIST AI 100-1: AI Risk Management Framework (January 2023)
The NIST AI RMF, released January 26, 2023, is a voluntary framework but has become the de facto standard for government AI risk management (EO 14110 directed its use). It is organized around four functions: Govern, Map, Measure, and Manage.
Key provisions relevant to foreign-origin AI:
GOVERN Function:
- GOVERN 1.1: Establish policies for AI risk management, including acceptable risk thresholds. An organization following this guidance would need to explicitly address whether foreign-adversary-origin models fall within acceptable risk.
- GOVERN 1.5: Addresses organizational processes for AI risk, including third-party and supply chain considerations.
- GOVERN 6: Specifically addresses AI supply chain risk. It calls for organizations to:
- Document provenance of AI models and components.
- Assess risks from third-party data and models.
- Establish criteria for acceptable third-party AI providers.
- Maintain awareness of third-party risk factors, “including potential for embedded bias, backdoors, or other vulnerabilities.”
MAP Function:
- MAP 3: Categorize AI risks including those from “third-party software, hardware, and data.” A Chinese-origin model is a third-party component with significant provenance risk.
MEASURE Function:
- MEASURE 2.6-2.8: Address assessment of AI for robustness, security, and resilience. Models from adversarial-nation entities face elevated scrutiny here because their training processes may have introduced vulnerabilities intentionally.
MANAGE Function:
- MANAGE 2.3: Addresses mechanisms for tracking and responding to known AI risks, including those that “may arise post-deployment from adversarial manipulation.”
Key Quote from AI RMF (Section 5.2):
“AI risks may emerge from third-party software, hardware, or data… Organizations should be aware of risks that may arise from using third-party AI technologies, including risks related to data quality, bias, privacy, and security.”
4.2 NIST AI 100-2: Adversarial Machine Learning Taxonomy (January 2024)
Published January 2024 as “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST AI 100-2e2023), this document provides a structured taxonomy of attacks against AI systems. Directly relevant categories:
-
Poisoning Attacks (Section 4): Attacks that occur during training to embed malicious behavior. The taxonomy describes:
- Backdoor/Trojan attacks: “The adversary inserts a backdoor into the model during training. The backdoored model behaves normally on clean inputs but produces adversary-chosen outputs when a specific trigger pattern is present in the input.”
- Targeted data poisoning: Manipulating training data to cause specific misclassifications or behaviors.
-
Supply Chain Attacks (Section 7): Attacks targeting the AI development pipeline:
- Compromised pre-trained models.
- Malicious model repositories.
- Tampered model serialization formats.
-
Evasion Attacks: While focused on adversarial inputs, the framework notes that models with known architectural details (as with open-weight models) are more susceptible.
Relevance: A Chinese-origin open-weight model is essentially a pre-trained artifact from a potentially adversarial supply chain. NIST AI 100-2 provides the technical vocabulary for describing exactly why this is a risk. Any defense program following NIST’s taxonomy would be required to assess poisoning and supply chain attack vectors — assessments that cannot be cleanly resolved when the training pipeline is controlled by a PRC entity.
Sources: NIST AI 100-1, “Artificial Intelligence Risk Management Framework,” January 26, 2023; NIST AI 100-2e2023, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” January 2024.
5. ITAR/EAR and Export Control Implications
5.1 Framework Overview
Two primary export control regimes govern defense-related technology:
- ITAR (International Traffic in Arms Regulations): Administered by the State Department’s Directorate of Defense Trade Controls (DDTC). Covers defense articles on the US Munitions List (USML).
- EAR (Export Administration Regulations): Administered by the Commerce Department’s Bureau of Industry and Security (BIS). Covers dual-use items on the Commerce Control List (CCL) and items subject to EAR jurisdiction.
5.2 The “Deemed Export” Problem
Under both ITAR (22 CFR 120.17) and EAR (15 CFR 734.13), a “deemed export” occurs when controlled technology is released to a foreign national within the United States. The key question: Does processing controlled data through a foreign-origin AI model constitute a “deemed export” or “deemed re-export”?
Analysis:
- ITAR Technical Data: If classified or ITAR-controlled technical data is used as input to a Chinese-origin model (even running locally), the model’s processing of that data could be argued to constitute making the data available to a “foreign person” — not in the traditional sense (no human foreign national receives it), but the legal framework has not been updated to address AI processing specifically. This is an area of legal ambiguity but significant risk.
- The conservative interpretation: Many defense trade compliance attorneys advise treating AI model processing as analogous to providing technical data to a foreign-developed system, triggering at minimum EAR controls and potentially ITAR violations if the data is defense-related.
- CUI and CDI: While CUI is not automatically export-controlled, certain CUI categories (e.g., Export Controlled, NOFORN) would create compounding compliance issues if processed through a PRC-origin model.
5.3 BIS Entity List and Sanctions Considerations
Several major Chinese AI entities are on the BIS Entity List:
- Huawei and subsidiaries (added 2019, expanded since).
- Various Chinese research institutes and companies have been added in successive rounds (2020-2024).
- Alibaba Cloud has faced scrutiny, and the broader Alibaba Group’s position relative to export controls is complex. As of early 2025, Alibaba itself was not on the Entity List, but the regulatory environment was shifting rapidly.
Key Question: Even if the model weights are freely downloadable, does using a model produced by an entity that may be subject to future sanctions or Entity List designation create future compliance risk? The answer is likely yes — organizations that integrate such models into workflows may face costs of extraction if the regulatory posture tightens.
5.4 Commerce Department AI Chip Export Controls (October 2022, Updated October 2023)
BIS published rules restricting export of advanced AI chips to China (October 7, 2022, updated October 17, 2023). While these rules target hardware, they reflect a policy posture that treats PRC AI development as a strategic concern. The same policy logic — that PRC AI capabilities pose national security risks — extends naturally to the use of PRC-developed AI in US defense systems.
5.5 Reverse Supply Chain Risk
A novel concern specific to AI: even if no US technology is exported, importing a PRC-origin AI model into defense workflows creates a reverse supply chain risk. The model may have been trained using methods or on data that reflect PRC strategic interests (per China’s Interim Measures for Generative AI, models must “reflect core socialist values”). This ideological alignment requirement, mandated by PRC law, is itself a risk factor for defense applications.
Sources: 22 CFR 120-130 (ITAR); 15 CFR 730-774 (EAR); BIS Final Rule, “Implementation of Additional Export Controls,” October 7, 2022, and updates; PRC Interim Measures for the Management of Generative AI Services, effective August 15, 2023.
6. Executive Orders on AI
6.1 EO 14110 — Safe, Secure, and Trustworthy AI (October 30, 2023)
Issued by President Biden on October 30, 2023, this was the most comprehensive US executive order on AI. Key provisions relevant to foreign AI:
Section 4.2 — Ensuring Safe and Reliable AI:
- Required developers of “dual-use foundation models” to report to the government regarding training activities, security measures, and results of red-team testing.
- Defined reporting thresholds based on compute (10^26 integer or floating-point operations for training) and for models with biological sequence capability.
- Required the Secretary of Commerce to define and update these thresholds.
Section 4.3 — Managing AI in Critical Infrastructure:
- Directed agencies to assess risks of AI in critical infrastructure, including risks from adversarial use of AI.
Section 4.4 — Dual-Use Foundation Models:
- Directed the Secretary of Commerce to require reporting by companies (including foreign companies operating in the US) when developing or planning to develop dual-use foundation models.
- Required reporting on ownership and control, including foreign persons involved.
Section 4.6 — Foreign Persons and Foreign Adversary Considerations:
- Directed the Secretary of Commerce, in consultation with the Secretary of State, Secretary of Defense, and the Director of National Intelligence, to “propose regulations that require U.S. IaaS providers to submit a report when a foreign person transacts with that provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.”
- This provision directly targeted the concern that foreign actors (including PRC entities) could leverage US cloud infrastructure for AI development.
Section 10.1 — Engagement with Allies:
- Called for international engagement and establishing frameworks with allies to manage AI risk.
Relevance to Chinese-origin models: While EO 14110 did not explicitly ban use of Chinese AI models by the US government, its thrust was clearly concerned with foreign AI as a national security risk. The reporting requirements, the focus on supply chain transparency, and the provisions on foreign persons collectively signaled that the administration viewed foreign-origin AI, particularly from adversary nations, as requiring enhanced scrutiny.
6.2 EO 14148 — Removing Barriers to American Leadership in Artificial Intelligence (January 20, 2025)
On January 20, 2025, President Trump signed EO 14148, which revoked EO 14110 in its entirety. The new order:
- Framed AI policy around maintaining US “global dominance” in AI rather than primarily through a safety lens.
- Removed the reporting requirements for frontier model developers.
- Directed agencies to review and, as appropriate, rescind actions taken pursuant to EO 14110.
- Directed OMB and OSTP to develop a new AI action plan within 180 days.
Critical Caveat for This Analysis: The revocation of EO 14110 removes certain specific reporting requirements, but it does not:
- Repeal underlying statutory authorities (IEEPA, DPA, NDAA provisions).
- Revoke DoD-specific directives and policies (3000.09, CDAO guidance, DoD AI Principles).
- Eliminate NIST frameworks (AI RMF is a voluntary framework, not contingent on executive orders).
- Remove ITAR/EAR constraints.
- Override DFARS requirements or CMMC.
The regulatory barriers to using Chinese-origin AI in defense contexts remain largely intact even after EO 14110’s revocation. The statutory and regulatory infrastructure is multi-layered and does not depend on a single executive order.
6.3 Other Relevant Executive Actions
- EO 13873 (May 2019): “Securing the Information and Communications Technology and Services Supply Chain.” Granted the Secretary of Commerce authority to prohibit ICT transactions posing undue risk from foreign adversaries. AI models could be classified as an “information and communications technology” service.
- EO 13984 (January 2021): Required IaaS providers to verify identities of foreign persons. Related to the foreign-person AI development concern.
- National Security Memorandum on AI (NSM, October 24, 2024): Issued alongside EO 14110 implementation, this NSM (reported publicly but with classified annexes) directed the intelligence community and DoD to accelerate AI adoption while managing risks from adversarial AI. Its status post-January 2025 requires verification.
Sources: Executive Order 14110, 88 FR 75191, October 30, 2023; Executive Order 14148, January 20, 2025; EO 13873, 84 FR 22689, May 2019.
7. Five Eyes and NATO Allied Approaches
7.1 United Kingdom
AI Safety Institute (AISI): Established November 2023 (announced at the Bletchley Park AI Safety Summit). The UK AISI conducts pre-deployment testing of frontier models. The UK’s posture has been to engage with Chinese AI labs (including testing Chinese models) while maintaining security boundaries for government use.
NCSC Guidance: The UK National Cyber Security Centre has published guidance on supply chain security for AI systems. While not naming Chinese models explicitly, the NCSC’s supply chain guidance applies risk-based analysis that would flag adversary-nation provenance.
Defence AI Strategy (June 2022): The UK Ministry of Defence’s AI strategy emphasizes the need for “assured AI” in defense contexts, with supply chain integrity as a core requirement.
Key Position: The UK has taken a more engagement-oriented approach than the US to Chinese AI (inviting PRC participation at Bletchley), but operational defense use of PRC-origin models is not supported by current MOD policy.
7.2 Australia
Defence Strategic Review (2023) and AI Guidance: Australia’s defense establishment has closely aligned with US policy positions. The Australian Signals Directorate (ASD) applies supply chain risk principles to AI that would exclude PRC-origin models from classified systems.
Critical and Emerging Technology Policy: Australia’s Critical Technologies Policy Coordination Office has identified AI as a critical technology, and Australia’s Foreign Relations Act provides mechanisms to review and block arrangements with foreign governments that may affect national security — applicable to technology adoption decisions.
AUKUS Pillar II: The AUKUS agreement (Australia-UK-US) includes AI and autonomy as a Pillar II capability. Any AI used in AUKUS programs would need to be interoperable and trusted across all three partners, effectively requiring allied-origin or thoroughly vetted models.
7.3 Canada
Directive on Automated Decision-Making (2019, updated): Canada’s Treasury Board directive requires transparency and accountability for AI used in government decision-making. The directive requires algorithmic impact assessments that would need to address supply chain provenance.
Canadian Centre for Cyber Security (CCCS): Has published guidance on AI security that aligns with Five Eyes principles of supply chain integrity.
Bill C-27 (Digital Charter Implementation Act): Includes the Artificial Intelligence and Data Act (AIDA), which would establish regulatory requirements for AI systems. As of early 2025, this legislation was still in process.
7.4 NATO
NATO AI Strategy (October 2021): Adopted by NATO members, this strategy establishes principles for responsible AI use including:
- Governability
- Traceability and transparency
- Reliability
- Bias mitigation
The strategy does not name specific nations, but its emphasis on traceability and the NATO trust framework effectively requires that AI used in NATO operations come from trusted allied supply chains.
Data Exploitation Framework Policy (DFFP) and DIANA: NATO’s Defence Innovation Accelerator for the North Atlantic (DIANA) funds AI development specifically from allied-nation innovators. The institutional structure is designed to foster a trusted AI supply chain within the alliance.
NATO Vilnius Summit Communique (July 2023): Reaffirmed the commitment to emerging and disruptive technologies (EDT) development within the alliance, with explicit reference to strategic competition with China and Russia.
7.5 Convergent Five Eyes Position
While no formal Five Eyes “ban” on Chinese AI has been publicly announced, the operational posture across all Five Eyes nations is consistent:
- Classified systems: PRC-origin AI is excluded as a matter of practice.
- Government systems (unclassified): Risk-based assessments effectively exclude PRC-origin AI for sensitive applications.
- Commercial/public sector: More varied, with the UK taking a more permissive approach to engagement than Australia or the US.
The intelligence-sharing requirements of Five Eyes create a practical constraint: if any one partner excludes Chinese AI from its information-handling pipeline, all partners must respect that exclusion to maintain interoperability and trust.
Sources: UK AI Safety Institute announcement, November 2023; UK MOD Defence AI Strategy, June 2022; NATO AI Strategy, October 2021; AUKUS Joint Statement, September 2021 and subsequent updates; NATO Vilnius Summit Communique, July 2023.
8. Congressional Action
8.1 National Defense Authorization Acts (NDAAs)
FY2024 NDAA (P.L. 118-31, December 2023):
- Section 1521: Required the Secretary of Defense to establish policies for AI governance, including risk management aligned with NIST AI RMF.
- Section 1522: Directed assessment of AI-enabled military applications with emphasis on testing, evaluation, verification, and validation (TEVV).
- Sections on Supply Chain: Multiple provisions reinforced supply chain security requirements for defense technology, including ICT and AI components.
FY2025 NDAA: As of early 2025, included provisions further expanding AI governance requirements and supply chain security measures. Specific sections addressing foreign-adversary technology in defense systems were included (exact section numbers should be verified against the final enacted text).
8.2 Targeted Legislation on Chinese AI
Several bills have been introduced (at various stages of progress) specifically addressing Chinese AI in government:
“No DeepSeek on Government Devices Act” (February 2025):
- Introduced in both the House and Senate following the public release of DeepSeek R1 in January 2025.
- Would prohibit the use of DeepSeek (and potentially other PRC-origin AI models) on government devices and networks.
- Co-sponsors included members of the House Select Committee on the CCP and the Senate Intelligence Committee.
- As of early 2025, this had not yet been enacted but had bipartisan support.
“Protecting Americans from Foreign Adversary Controlled Applications Act” (Enacted 2024):
- While primarily targeting TikTok, this law (signed April 24, 2024, with effective dates in early 2025) established the precedent of banning foreign-adversary-controlled applications from operating in the US.
- The legal framework — defining “foreign adversary controlled application” by reference to ownership by entities in PRC, Russia, Iran, or North Korea — could be applied or extended to AI models.
“RESTRICT Act” (S. 686, introduced 2023):
- Would give the Commerce Secretary broad authority to review and block ICT transactions involving foreign adversaries.
- Specifically designed to address technologies from China, Russia, and other adversary nations.
- As of early 2025, had not been enacted but its concepts influenced other legislative efforts.
8.3 Select Committee on the CCP
The House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party has been active on AI-related issues:
- Held hearings on Chinese AI capabilities and risks.
- Published reports recommending restrictions on PRC-origin technology in government systems.
- Pressed administration officials on policies regarding Chinese AI model use.
Following the DeepSeek R1 release in January 2025, multiple members of the committee publicly called for government-wide bans on Chinese AI models, citing national security and data security concerns.
8.4 Intelligence Authorization Acts
Intelligence authorization legislation has included classified provisions related to AI and foreign technology. While specifics are not publicly available, the general thrust has been toward restricting IC use of technologies from adversary nations.
Sources: P.L. 118-31, James M. Inhofe National Defense Authorization Act for FY2024; Congressional press releases on No DeepSeek on Government Devices Act, February 2025; P.L. 118-50, Protecting Americans from Foreign Adversary Controlled Applications Act.
9. FedRAMP and Emerging AI Authorization Frameworks
9.1 Current FedRAMP Framework
The Federal Risk and Authorization Management Program (FedRAMP) provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud services. Key points:
- FedRAMP Rev. 5 Transition: FedRAMP is transitioning to align with NIST SP 800-53 Rev. 5, which includes enhanced supply chain risk management controls (the SA and SR control families).
- FedRAMP Authorization Act (December 2022): Codified FedRAMP into law as part of the FY2023 NDAA, giving it a stronger statutory foundation.
Current Gap: FedRAMP was designed for cloud services (IaaS, PaaS, SaaS), not for AI models specifically. An AI model running locally on government hardware does not go through FedRAMP. This creates a gap where a Chinese-origin model downloaded from Hugging Face and run on-premises could bypass the FedRAMP authorization process entirely.
9.2 Emerging AI-Specific Authorization Approaches
OMB Memorandum M-24-10 (March 28, 2024): Issued pursuant to EO 14110, this OMB memo titled “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” established significant requirements:
- Agencies must designate a Chief AI Officer.
- Agencies must inventory AI use cases (including distinguishing “safety-impacting” and “rights-impacting” AI).
- Agencies must implement minimum practices for AI that impacts rights or safety, including:
- Conducting AI impact assessments.
- Testing AI for performance in real-world conditions.
- Providing human oversight mechanisms.
- Ongoing monitoring.
Status post-EO 14148: The revocation of EO 14110 in January 2025 cast uncertainty on the status of M-24-10 and related OMB guidance. Agencies were directed to review actions taken under EO 14110. Whether M-24-10 remains in effect, is revised, or is withdrawn is a critical variable that users of this brief should track.
9.3 FedRAMP for AI / “AI FedRAMP” Concepts
As of early 2025, there was growing discussion of an “AI FedRAMP” or AI-specific authorization framework:
- Concept: A standardized government process for evaluating and authorizing AI models for government use, analogous to what FedRAMP does for cloud services.
- Bipartisan Support: Multiple members of Congress (including Senators from the AI caucus) expressed support for such a framework.
- NIST Role: NIST’s AI RMF was expected to serve as the foundation for any such framework.
- Challenges: AI models are fundamentally different from cloud services — they can be self-hosted, fine-tuned, and combined in ways that make a static authorization snapshot less meaningful.
Key Point: An AI-specific FedRAMP would likely include provenance and supply chain requirements that would formally exclude or heavily scrutinize foreign-adversary-origin models. Until such a framework exists, the gap is partially filled by agency-level risk assessments but lacks standardization.
9.4 DoD Authorization to Operate (ATO) and AI
Within DoD, AI systems must obtain an Authorization to Operate (ATO) from the relevant Authorizing Official (AO) under the Risk Management Framework (RMF, NIST SP 800-37). The ATO process:
- Requires system categorization (using FIPS 199/CNSSI 1253).
- Requires security control implementation and assessment.
- Requires risk assessment including supply chain considerations.
- The AO must accept residual risk.
Practical Reality: No reasonable AO would accept the residual risk of a Chinese-origin AI model processing DoD data. The combination of supply chain risk, adversarial ML risk, and policy signals makes this an unacceptable risk position for anyone whose career depends on the authorization decision.
Sources: FedRAMP Authorization Act (P.L. 117-263, Section 5921); OMB M-24-10, March 28, 2024; NIST SP 800-37 Rev. 2.
10. Synthesis and Implications for Chinese-Origin Models
10.1 The Regulatory Stack
The following table summarizes how each policy layer applies to the question of using a Chinese-origin AI model (e.g., Qwen, DeepSeek) in a defense context:
| Policy Layer | Applicable? | Effect |
|---|---|---|
| DoD Directive 3000.09 | Yes (for autonomous systems) | Requires robustness/trust that PRC models cannot demonstrate |
| DoD AI Principles (Traceability) | Yes | Cannot trace PRC model training processes |
| CDAO RAI Toolkit | Yes | Assessment process would flag supply chain risk |
| NIST AI RMF (AI 100-1) | Yes | GOVERN 6 supply chain provisions triggered |
| NIST AI 100-2 (Adversarial ML) | Yes | Poisoning and supply chain attack taxonomy applies |
| ITAR/EAR | Yes (if processing controlled data) | Potential deemed export / compliance risk |
| DFARS 252.204-7012 / CMMC | Yes (for CUI/CDI) | Supply chain controls apply |
| EO 14110 / EO 14148 | Evolving | EO 14110 revoked; successor policy pending |
| FedRAMP | Partial (cloud only) | Gap for self-hosted models |
| NDAA Provisions | Yes | Supply chain and AI governance requirements |
| Targeted Legislation | Pending | Bills specifically targeting PRC AI in government |
| Five Eyes / NATO | Yes (for coalition ops) | Allied interoperability requires trusted AI |
10.2 Key Finding
No single policy provides an explicit, unambiguous ban on Chinese-origin AI models in all defense contexts. However, the cumulative regulatory burden is effectively prohibitive. A defense program that attempted to use a Chinese-origin model would need to:
- Justify it under DoD AI Principles (traceability — difficult).
- Pass CDAO RAI assessment (supply chain risk — difficult).
- Satisfy NIST AI RMF supply chain provisions (model provenance — difficult).
- Clear export control review (if processing controlled data — risky).
- Obtain ATO from an authorizing official (residual risk acceptance — unlikely).
- Survive congressional and IG scrutiny (political risk — severe).
- Maintain allied interoperability (Five Eyes trust — compromised).
Failing at any single layer would be sufficient to block adoption. Failing at most or all of them makes the case effectively closed.
10.3 The Open-Weight Distinction Does Not Help
Some argue that open-weight models are “just math” and that running them locally eliminates the foreign-origin concern. This argument does not survive regulatory scrutiny:
- Traceability requires understanding the training process, not just inspecting weights.
- Supply chain security looks at the entire provenance chain, including who built the artifact.
- Export controls focus on the nature of the technology and its origins, not just its deployment mode.
- Adversarial ML risk (per NIST AI 100-2) includes poisoning and supply chain attacks that are embedded in the weights themselves.
10.4 Evolving Landscape
The policy environment is shifting in at least two directions simultaneously:
- Tighter on China specifically: Congressional action, public pressure after DeepSeek R1, and bipartisan consensus on PRC technology risks all point toward more explicit restrictions.
- Looser on AI generally: The Trump administration’s EO 14148 signals a deregulatory posture on AI. However, this deregulation is framed as supporting US AI dominance — not as opening the door to Chinese AI in government.
The most likely trajectory is that explicit bans on Chinese AI in government will be enacted legislatively, making the current de facto prohibition into a de jure one.
11. Sources and References
Executive Orders and Presidential Actions
- Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” 88 FR 75191, October 30, 2023.
- Executive Order 14148, “Removing Barriers to American Leadership in Artificial Intelligence,” January 20, 2025.
- Executive Order 13873, “Securing the Information and Communications Technology and Services Supply Chain,” 84 FR 22689, May 15, 2019.
- Executive Order 13984, “Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities,” January 19, 2021.
Department of Defense
- DoD Directive 3000.09, “Autonomy in Weapon Systems,” updated November 25, 2023.
- DoD Ethical AI Principles, adopted February 24, 2020.
- CDAO Responsible AI Toolkit: rai.tradewindai.com
- DoD Task Force Lima announcement, August 2023.
- DoD Instruction 5000.82, “Acquisition of Information Technology.”
NIST
- NIST AI 100-1, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” January 26, 2023.
- NIST AI 100-2e2023, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” January 2024.
- NIST SP 800-171 Rev. 2/3, “Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations.”
- NIST SP 800-37 Rev. 2, “Risk Management Framework for Information Systems and Organizations.”
- NIST SP 800-53 Rev. 5, “Security and Privacy Controls for Information Systems and Organizations.”
Export Controls
- International Traffic in Arms Regulations (ITAR), 22 CFR Parts 120-130.
- Export Administration Regulations (EAR), 15 CFR Parts 730-774.
- BIS Final Rule, “Implementation of Additional Export Controls: Certain Advanced Computing and Semiconductor Manufacturing Items,” October 7, 2022, and updates October 17, 2023.
Congressional
- P.L. 118-31, James M. Inhofe National Defense Authorization Act for Fiscal Year 2024.
- P.L. 117-263, FedRAMP Authorization Act (FY2023 NDAA, Section 5921).
- P.L. 118-50, Protecting Americans from Foreign Adversary Controlled Applications Act.
- “No DeepSeek on Government Devices Act,” introduced February 2025.
- S. 686, “RESTRICT Act,” 118th Congress.
OMB and Federal Policy
- OMB Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” March 28, 2024.
- AI in Government Act of 2020 (Division U, Title I, Consolidated Appropriations Act, 2021).
International and Allied
- NATO AI Strategy, adopted October 2021.
- NATO Vilnius Summit Communique, July 11-12, 2023.
- UK Defence AI Strategy, Ministry of Defence, June 2022.
- UK AI Safety Institute, established November 2023.
- AUKUS Joint Leaders Statement, September 15, 2021, and subsequent joint statements.
PRC Regulations (for context)
- PRC Interim Measures for the Management of Generative Artificial Intelligence Services, effective August 15, 2023.
- PRC National Intelligence Law of 2017, Article 7.
This brief represents analysis of publicly available policy documents and should not be construed as legal advice. Organizations should consult with legal counsel specializing in defense acquisition, export controls, and technology policy for application to specific programs. All citations should be verified against current document versions, as the policy landscape is evolving rapidly.