The US government has many potential roles in AI, from spurring responsible innovation to improving government performance via AI adoption. It wields vast policy levers to shape AI development and use, including standard-setting, export controls, intellectual property laws, and international diplomacy. The breadth of AI’s potential impacts—from national security to healthcare to economic competitiveness—makes AI policy a “whole-of-government effort,” with the legislative branch, courts, states, companies, and civil society organizations all playing important roles.
This profile focuses on the activities of the US executive branch, a central player in AI policy. For AI policy work in other institutions, explore our guides on Congress, think tanks, national labs, and more.
You’ll find AI-related career opportunities across nearly every federal department and agency. Understanding how these government components work to advance and govern AI is essential for choosing where to work and identifying policy opportunities. This guide will help you understand the big picture:
- What “AI policy” means and why the definition matters
- How different agencies approach AI
- Which agencies handle key AI policy areas like research, standards, and oversight
- How agencies coordinate on AI initiatives (with real-world examples)
- Tips on pursuing executive branch AI policy careers
Our further reading section offers more in-depth resources, including national strategies and think tank reports that review US government AI efforts.
This guide provides a broad overview of executive branch AI policy and can help you identify potential agencies of interest. It complements our federal agency profiles, which detail how specific departments and agencies contribute to federal AI initiatives, including relevant offices, recent developments, and guidance on finding jobs in each agency.
- Executive Office of the President (EOP)
- Department of Commerce (DOC)
- Department of Defense (DOD)
- Department of Energy (DOE)
- National Science Foundation (NSF)
- Department of Homeland Security (DHS)
- Department of State (DOS)
- Federal Trade Commission (FTC)
- Intelligence Community (IC)
Our researching federal agencies guide further explains how you can conduct your own research on agencies and offices aligned with your policy interests.
What is “AI policy”?
This website offers resources for those interested in AI policy, which we understand broadly as all institutional efforts to govern, advance, and ensure the responsible development of AI. AI policy happens both in government and the private sector, but this website generally focuses on government policy.1 The field encompasses areas like AI innovation, safety, and ethics, with different DC communities using these terms in distinct ways that reflect their professional perspectives and priorities—something worth noting when building relationships across government.
In government contexts, key terms to help navigate AI policy discussions include:2
- AI innovation involves advancing and steering AI progress through government investments, research funding, and public-private partnerships. Governments have many tools (e.g. grants, tax incentives, and regulatory frameworks) to foster AI development in areas such as healthcare, education, and science.
- AI adoption emphasizes AI use to enhance the efficiency and performance of government operations, including improving service delivery, identifying fraud, and streamlining administrative tasks.
- AI safety focuses on technical research and policy safeguards to prevent potential harm from AI systems and ensure their safe and reliable behavior.
- AI security can refer to both protecting AI systems from threats and leveraging AI to secure other systems. This term is prevalent in national security and defense communities that focus on defending and securing AI capabilities.
- AI ethics addresses the moral principles and values that should guide AI development, deployment, and use, including fairness, transparency, accountability, and privacy.
Mapping the federal AI ecosystem
If you’re unsure which department, agency, or office aligns with your interests, start by understanding how they specialize and fit into big-picture AI work. But also keep in mind that an office’s responsibilities may include less publicly visible or classified work and can shift based on new priorities, funding, and political leadership.
To navigate this complex landscape, it’s helpful to understand the operational dimensions that shape how government agencies approach AI-related efforts—ranging from policy development to implementation, research to deployment, and domestic to international focus. Recognizing where an agency or office falls along these dimensions can clarify its role in the federal AI landscape and help you identify agency opportunities that match your skills and interests. The examples below illustrate how some agencies and their subdivisions operate along these dimensions, though many work across multiple areas simultaneously. Note that the specific roles and responsibilities in the examples below may change across administrations; focus instead on the differences they illustrate.
Key operational dimensions and agency examples
- Policy development vs. program execution:3 The Department of Homeland Security’s Cyber, Infrastructure, Risk, and Resilience (CIRR) Policy team focuses on developing new AI policies and frameworks, while the Cybersecurity Division (CSD) of the Cybersecurity and Infrastructure Security Agency (CISA) leads operational cyber defense programs.
- Research and development (R&D) vs. deployment of AI systems: The Department of Defense’s (DOD) Defense Advanced Research Projects Agency (DARPA) focuses on high-risk, high-payoff AI research programs, while DOD’s Defense Innovation Unit (DIU) aims to accelerate the adoption of commercial AI systems in the military.
- Domestic vs. international focus: Federal agencies split domestic and international responsibilities. For example, the State Department’s Bureau of Cyberspace and Digital Policy and Bureau of Arms Control, Deterrence, and Stability help lead international AI diplomacy and standards-setting work, while the Office of Management and Budget sets policies for the US government’s purchase of AI products from the private sector.
- Civilian vs. defense orientation: Most federal agencies have civilian missions, while DOD leads on defense issues. On cybersecurity, the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) develop AI cybersecurity standards and guidelines for civilian use, whereas DOD’s Chief Digital and AI Office (CDAO) focuses on military applications.4
- Independent operations vs. interagency collaboration: Agencies’ level of collaboration varies significantly, from those operating largely independently to offices whose primary role is coordinating work across the executive branch. The White House Office of Science and Technology Policy (OSTP) and National Security Council (NSC), for example, lead interagency coordination on AI policy, while the Department of Energy’s (DOE) national laboratories execute relatively independent research programs.
- Executive agency vs. independent agency: Executive agencies operate under Cabinet-level departments and closely follow White House priorities (e.g. DOD and Commerce), while independent agencies like the National Science Foundation (NSF) and the Federal Trade Commission (FTC) function with greater autonomy under specific statutory mandates, making decisions based on scientific, economic, or regulatory criteria rather than political directives.5
What agencies work on in AI policy
Building on the operational dimensions above, we can better understand the executive branch’s AI work by considering specific key policy areas, from military use to consumer protection. Below is a mapping of how different departments and agencies have contributed to AI policy areas. Many work across multiple domains, and this overview may not capture every agency’s multifaceted contributions. Our agency profiles (linked throughout this section) provide more in-depth information on the subagencies and offices contributing to these topics.
Mapping executive agencies to AI topics (non-exhaustive)
- Research & development
- Funding and conducting fundamental AI research — Department of Energy (DOE), NSF, DARPA, Intelligence Advanced Research Projects Activity (IARPA), Federally Funded Research and Development Centers (FFRDCs), military labs
- Developing and piloting public resources for AI development (e.g. the National AI Research Resource) — NSF, NIST
- National and economic security
- Research security coordination and funding — NSF, Bureau of Industry and Security (BIS)
- Investigating and prosecuting economic espionage — FBI, DOJ, Intelligence Community (IC)
- Developing and managing export controls for AI technologies — Bureau of Industry and Security (BIS), DOD, DOE, State, NSC
- Providing funding to boost domestic semiconductor development — Commerce
- Setting controls on outbound investment (from the US to foreign countries) and foreign investment in the US — Department of the Treasury
- Setting tariffs on semiconductors — United States Trade Representative
- Supporting semiconductor workforce development — Commerce
- Assessing AI development and deployment capabilities of the US industrial base — Commerce
- Military use and defense
- Conducting in-house AI R&D for national security use cases or funding external organizations (e.g. private companies and universities) — DARPA, IARPA
- Adopting AI in military operations — DOD Chief Digital and AI Office, DOD Force Development and Emerging Capabilities Office, DOD Military Services
- Applying AI in cyber operations — National Security Agency (NSA), IC
- Internal deployment and AI talent
- Implementing AI solutions in government operations — all agencies, including:
- AI solutions for immigration, drug enforcement, and disaster recovery (DHS)
- AI solutions for diplomatic documentation and processes (State)
- AI solutions for cybersecurity (DOE, NSA, CISA)
- AI solutions for energy efficiency and management (DOE)
- And in many other agencies — see the 2024 inventory of federal agency AI use cases
- Setting guidelines for governmental AI procurement and adoption — Office of Management and Budget (OMB)
- Setting priorities for the federal budget to hire AI talent and developing pathways for accelerated hiring — OSTP, OMB, Office of Personnel Management (OPM)
- Strengthening the AI research community through grantmaking, educational programs, and partnerships — NSF
- Critical infrastructure protection
- Managing and coordinating AI risk assessment across critical infrastructure sectors — CISA
- Assessing infrastructure risks — each Sector Risk Management Agency, such as DOE for electricity grids or the Department of the Treasury for financial institutions
- Setting controls on information and communications technology imports — BIS
- Civil rights and consumer protection
- Convening key stakeholders to discuss the implications of AI for civil rights — Department of Justice (DOJ)
- Proposing and enforcing rules to protect consumer privacy — FTC, DOJ, Consumer Financial Protection Bureau (CFPB)
- Monitoring and highlighting risks of AI usage to consumers — CFPB, FTC
- Ensuring that AI tools used in employment decisions comply with federal anti-discrimination laws — Equal Employment Opportunity Commission (EEOC) in collaboration with CFPB, DOJ’s Civil Rights Division, and FTC
- Cybersecurity and information security
- Economics and labor impacts
- Incentivizing small business development and use of AI — Small Business Administration (SBA)
- Gathering information on AI’s potential impacts on global development — United States Agency for International Development (USAID)
- Analyzing potential labor market impacts of AI — Council of Economic Advisers (CEA)
- Developing best practices for employers on AI use — Department of Labor (DOL)
- Preventing anticompetitive conduct in AI development and use — FTC, DOJ Antitrust Division
- Energy
- Augmenting federal permitting processes for grid infrastructure projects — DOE
- Researching smart grid applications of AI to enhance grid resilience — DOE, DHS, CISA
- Increasing effectiveness of efforts to fight wildland fires, manage Western water, and steward natural resources through AI tools — Department of the Interior (DOI)
- Healthcare
- Enhancing the health and well-being of Americans through AI adoption — Department of Health and Human Services Chief AI Officer
- International coordination
- Coordinating international research partnerships — State, NSF, OSTP
- Developing international norms around the military use of AI — State, DOD
- Engaging with international standards and other technical bodies — NIST, State
- Multilateralizing export controls, for example, through the Wassenaar Arrangement — State, Bureau of Industry and Security (BIS), DOD, DOE
- Standards
- AI safety and security
Chip export controls as a case study in government coordination
Major government initiatives typically involve many agencies, each holding distinct authorities, resources, and capabilities. AI policy is no exception, requiring extensive collaboration across agencies for cross-cutting objectives.
The development of AI chip export controls illustrates this complexity: what began as a Trump Administration initiative to restrict Chinese access to advanced hardware in 2018 has evolved into a comprehensive set of policies developed by the Department of Commerce, working closely with the Departments of State, Defense, Energy, and the White House—with ongoing input from the Intelligence Community (IC), the Department of Justice (DOJ), and other parts of the executive branch.
Agencies involved in chip export controls
Phase 1: Initial actions to restrict Chinese access to chips & equipment (2018–2022)
In this period, the US government increasingly recognized advanced AI chips as critical to national security and took steps to limit their flow into China. In late 2018, the Justice Department indicted a Chinese state-backed chipmaker for stealing trade secrets from American companies. Around the same time, the White House and DOD began pressing the Dutch government to restrict chip manufacturing equipment sales to China. By 2020, the Department of Commerce had blocked semiconductor shipments to Huawei and placed major Chinese chipmakers on a trade blacklist known as the Entity List, effectively preventing them from acquiring advanced chip manufacturing technology.
Phase 2: Export control development, implementation, and enforcement (2022–2023)
In 2022, Commerce’s Bureau of Industry and Security (BIS) led the development of comprehensive AI chip export controls for national security. Working closely with the Department of Energy, DOD’s Defense Technology Security Administration (DTSA), the Department of State, and the Intelligence Community (IC), Commerce announced new rules in October 2022 requiring US companies to obtain licenses before exporting advanced AI chips and manufacturing equipment to China. Beyond controls on US-based products, Commerce introduced two Foreign Direct Product Rules (FDPRs), extending export restrictions to foreign-made products containing US inputs. These actions built upon earlier concerns, including calls from Members of Congress to strengthen Entity List rules for certain Chinese companies.
The IC played a supporting role, providing threat assessments, technical analysis, and monitoring capabilities to ensure the controls were well-informed and aligned with security priorities. The Department of Energy and parts of the White House (including the National Security Council) also provided ongoing expert analysis for the controls. Supporting enforcement, the Department of Justice (DOJ)’s National Security Division partnered with BIS to co-lead the Disruptive Technology Strike Force, an initiative to prevent the illegal acquisition of US advanced technologies.
Phase 3: International coordination, refinement, and expansion (2023–2025)
To bolster extraterritorial controls enabled by the FDPRs, diplomatic efforts coordinated by the White House and the Departments of Commerce, Defense, and State helped forge agreements with the Netherlands and Japan to adopt similar export restrictions starting in January 2023.
Based on implementation experience and evolving technology, Commerce updated the controls in October 2023, addressing gaps and adjusting thresholds to better align with national security objectives. By December 2024, Commerce issued further revisions, expanding restrictions on high-bandwidth memory and advanced semiconductor manufacturing equipment while broadening the extraterritorial jurisdiction of US export controls. In January 2025, BIS announced new controls on the weights of the most advanced closed-weight AI models and imposed security conditions for advanced model storage.
Lessons learned: Keys to effective AI policy coordination
The development of AI chip export controls demonstrates how major tech policy emerges through coordinated government action. What began as a White House priority transformed into a comprehensive regulatory regime through Commerce’s rulemaking authority, State’s diplomatic channels, the intelligence capabilities of the IC and DOD, and DOJ’s enforcement support. Success required complex coordination with both foreign governments and companies. This case shows how effective AI policy often depends on identifying and engaging the right mix of agencies and partners, each bringing unique capabilities to achieve specific goals.
Navigating AI careers in the executive branch
As this guide shows, every aspect of AI development and deployment intersects with the executive branch’s work—from the technology’s origin to current governance and adoption. Encompassing hundreds of distinct agencies, the executive branch collectively employs millions of people and offers job opportunities year-round, including many relevant to AI policy.
To complement this general overview of executive branch AI policy work, explore our AI policy agency profiles for deeper dives into the office structures, recent AI initiatives, and employment pathways at specific agencies. Our researching federal agencies guide further explains how you can conduct your own research on agencies and offices aligned with your policy interests.
Understanding the basics of federal employment is essential for pursuing agency roles. Our federal agency application advice includes resources on interviewing for federal positions, understanding USAJOBS.gov (the official website for federal jobs), and federal resume advice. Positions relevant to national security generally require a security clearance, which can take months to more than a year to obtain.
Our guide on building professional networks in DC and our AI policy resources list can also support your journey into AI policy work. For entry pathways, virtually all agencies involved in AI policy offer internships, and many provide fellowship opportunities for early- to mid-career individuals.
Here are some key takeaways if you’d like to work in this space:
- Agencies engage with AI policy in fundamentally different ways. Some develop broad AI strategies and frameworks, while others implement specific AI programs or oversee regulation. Some focus on domestic deployment and governance, while others handle international cooperation. The scope of work is diverse, encompassing research, analysis, coordination, project management, communications, software development, and more.
- The cross-cutting nature of AI means there are opportunities for people from diverse backgrounds to contribute meaningfully. While technical AI/machine learning skills may be required for certain roles, many positions also need expertise in areas like policy analysis, law, economics, ethics, or communications. Your background—whether in financial analysis, communications, or data science—can be valuable in developing and implementing AI policy.
- What’s more, agency staffers often have experience working in Congress and organizations outside government like think tanks, and many move between these different institutions throughout their careers. This career mobility allows you to build relevant experience, networks, and expertise from multiple angles.
- Don’t restrict your search to agency opportunities with “AI” in the title—you should cast a wide net across different policy areas and institutions. For example, you can look for AI-adjacent positions related to technology policy, science policy, economics, or national security in think tanks or Congress. Even in federal positions that aren’t AI-focused, there are often opportunities to get involved in AI initiatives, such as participating in internal AI tool pilots or providing feedback on an AI policy implementation process. Getting your first position is often the hardest, but fortunately (given the highly transferable knowledge and skills), you can typically move laterally between policy roles regardless of where you begin your career.
- Each agency controls different policy levers for AI governance. For example, if you’re passionate about responsible AI development, you might focus on agencies that shape research funding (e.g. NSF and DARPA), develop technical standards (e.g. NIST), or coordinate international AI policy (e.g. State). Offices with strong coordination roles—typically in or near the White House—can provide particularly valuable overviews of how different parts of government work together to achieve cross-cutting initiatives. When exploring agency roles in AI policy, focus on roles that match your background and skillset, and know that moving across agencies is very common.
Appendix: A brief history of AI policy in the US
The US government’s engagement with AI has evolved significantly over time, shaped by technological advances, national security considerations, and changing public concerns. It’s hard to do this history justice in this article, but here is a short overview of key executive branch developments:6
History of US AI policy
Early development (1950s–2000s)
Early AI development depended heavily on government funding, which supported research through multiple channels: in-house at government agencies, at national laboratories, and most significantly through external funding to universities and private companies. DOD’s Defense Advanced Research Projects Agency (DARPA) was particularly instrumental in transforming AI from isolated projects into an established field, primarily by funding research “centers of excellence” at top universities. Additional funding support came from agencies like the National Science Foundation, the National Institutes of Health, and the National Aeronautics and Space Administration.
Government support evolved from emphasizing basic, unrestricted research in the 1960s to more applied, military-focused work following the 1969 Mansfield Amendment, which required DOD research to demonstrate direct military relevance. This shift culminated in DARPA’s $1 billion Strategic Computing Program in the 1980s, which increased industry involvement and focused on specific military applications. Various agencies supported AI research through the 1990s and early 2000s, but as the technology was nascent, overall policy engagement remained limited.
Emergence of strategic focus (2010s)
As machine learning technologies matured and industrial applications expanded, the government began developing more comprehensive AI strategies:
- October 2016: The Obama Administration releases the National AI Research and Development Strategic Plan, providing the first coordinated federal vision for AI R&D.
- May 2018: The White House announces the creation of the Select Committee on AI under the National Science and Technology Council to advise the White House on interagency AI R&D priorities and improve the coordination of federal AI efforts.
- May 2018: The White House hosts an AI for American Industry Summit, bringing together over 100 senior government officials, technical experts, heads of research labs, and American business leaders to discuss policies to ensure American leadership in AI.
- June 2018: DOD establishes a Joint AI Center to “accelerate the delivery and adoption of AI to achieve mission impact at scale”7 and tasks the Defense Innovation Board with developing principles to guide the ethical application of AI in DOD.
- August 2018: Congress establishes the National Security Commission on AI to make recommendations to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the US.”
- September 2018: DARPA announces its AI Next campaign, a multi-year investment of more than $2 billion in AI R&D for national security applications.
- February 2019: President Trump issues an Executive Order on Maintaining American Leadership in AI, aiming to “sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment.”
- June 2019: The White House updates the National AI R&D Strategic Plan to support the American AI Initiative.
- September 2019: The White House hosts a Summit on AI in Government, focused on how the federal government can partner with the private sector to improve public services.

Acceleration and governance (2020–2025)
The rapid advancement of AI capabilities in the early 2020s, particularly in generative AI, prompted increased government attention to both opportunities and risks:
- December 2020: President Trump’s Executive Order on AI Use in Government requires that agencies “design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values.”
- January 2021: The National AI Initiative Act of 2020 passes as part of the National Defense Authorization Act, directing NIST to develop voluntary standards for AI systems, NSF to fund AI research, Commerce to establish the National AI Advisory Committee, and OSTP to establish the National AI Initiative Office.
- March 2021: The National Security Commission on AI issues its final report, split into two parts: Defending America in the AI Era and Winning the Technology Competition.
- June 2021: OSTP and NSF establish the National AI Research Resource Task Force to develop a roadmap for a national infrastructure program connecting US researchers to the computational, data, software, model, and training resources they need to participate in AI research.
- August 2022: The CHIPS and Science Act becomes law, providing $52.7 billion for semiconductor R&D and manufacturing. Implementation begins under White House coordination.
- October 2022: The US issues export controls on advanced semiconductors and manufacturing equipment to China, coordinated through NSC and other White House policy councils.
- October 2022: OSTP releases the Blueprint for an AI Bill of Rights, outlining five principles that “should guide the design, use, and deployment of automated systems to protect the American public.”
- January 2023: NIST releases an AI Risk Management Framework to help organizations “incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
- July 2023: The White House receives voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to help move toward the “safe, secure, and transparent development of AI technology.”
- October 2023: President Biden’s Executive Order on Safe, Secure, and Trustworthy AI assigns major new responsibilities across agencies to “establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.” (See overview here).
- October 2023: Commerce significantly expands its AI chip export controls, introducing new technical criteria covering more advanced chips and extending restrictions to 43 additional countries.
- November 2023: Commerce establishes the US AI Safety Institute within the National Institute of Standards and Technology to support responsibilities assigned to Commerce in President Biden’s Executive Order on AI, including to “facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.”
- November 2023: The Executive Office of the President coordinates US participation in the UK AI Safety Summit, an international forum to foster collaboration on the safe development of AI.
- February 2024: Commerce establishes the US AISI Consortium, a group of AI companies, state and local governments, nonprofits, academic partners, and other organizations “supporting the development and deployment of safe and trustworthy AI,” including by “developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”
- March 2024: The US and over 50 other countries endorse the Political Declaration on Responsible Military Use of AI and Autonomy, advancing international consensus on AI military applications. The State Department led the development of the declaration, which was launched in February 2023.
- March 2024: OMB issues a government-wide memorandum (M-24-10) requiring each agency to report an inventory of its AI use cases, update its internal plans to ensure consistency with AI guidelines, and establish a Chief AI Officer responsible for making AI risk assessments and ensuring responsible AI use in the agency (see the 2024 AI use case inventory here).
- July 2024: NIST releases a profile on Generative AI (GAI), a companion to its AI Risk Management Framework that defines and makes recommendations to manage risks “that are novel to or exacerbated by the use of GAI.”
- September 2024: OMB releases a government-wide policy (M-24-18) on advancing the responsible acquisition of AI in government.
- October 2024: The White House announces an AI-focused National Security Memorandum as directed by President Biden’s Executive Order on AI.
- January 2025: President Biden’s Executive Order on AI Infrastructure directs agencies to accelerate AI infrastructure development, including by making federal sites available for AI data centers, fulfilling permitting obligations expeditiously, and facilitating interconnection of AI infrastructure to the electric grid.
Emerging challenges, such as deploying AI for national security missions, US-China competition, AI safety concerns, and efficient government adoption, continue to shape US government engagement with AI policy.
Further reading
- Key US policy and strategy documents on AI
- President Trump’s 2025 AI Action Plan
- President Biden’s 2024 National Security Memorandum on AI
- President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI
- President Trump’s 2019 Executive Order on Maintaining American Leadership in AI
- Final Report of the National Security Commission on AI
- AGORA, an AI policy database, incl. US federal AI policy documents
- Books that include reflections on AI in the US executive branch
- Four Battlegrounds: Power in the Age of AI, Paul Scharre (2023)
- The New Fire: War, Peace, and Democracy in the Age of AI, Ben Buchanan and Andrew Imbrie (2022)
- Genesis: Artificial Intelligence, Hope, and the Human Spirit, Henry A. Kissinger, Eric Schmidt, and Craig Mundie (2024)
- The Oxford Handbook of AI Governance, edited by Justin B. Bullock et al. (2022)
- Chip War, Chris Miller (2022)
Our AI policy agency profiles
If you’re interested in pursuing a career in emerging technology policy, complete this form, and we may be able to match you with opportunities suited to your background and interests.
Footnotes
- While this guide uses the term ‘AI policy,’ it’s worth distinguishing it from the related concept of ‘AI governance.’ AI policy refers to the specific actions and decisions made by specific institutions; governance often more broadly refers to the entire system of shaping AI development and deployment, including how different stakeholders interact, how policies are created and enforced, and the formal and informal mechanisms guiding AI. ↩︎
- These terms often carry different meanings in other settings—for example, ‘AI innovation’ in the private sector typically refers to technical advancement and product development, while we use it here to describe government mechanisms for steering and incentivizing AI progress. Similarly, ‘AI adoption’ in private sector contexts often focuses on commercial implementation, while our definition emphasizes government use cases. ↩︎
- More broadly, there’s a distinction between policymaking, implementation, and enforcement. A common misconception is that “policy work” refers primarily to policymaking, when in fact most executive branch positions—especially at junior levels—focus on implementing policies or informing the policymaking process rather than setting major policy directions. While the ultimate decisions typically come from senior leadership, Congress, or interagency processes, implementation and advisory roles can still significantly influence policy outcomes. These roles shape how policies are carried out at a more granular, practical level, which often determines their real-world effectiveness and impact. ↩︎
- Agencies frequently collaborate on initiatives and guidance spanning civilian and defense domains. ↩︎
- All agencies outside the federal executive departments and the Executive Office of the President are technically “independent agencies,” but these fall into two distinct categories. Independent regulatory agencies, such as the FTC and Federal Reserve, are a subset of 19 agencies listed in the Paperwork Reduction Act. They have specific rulemaking authorities granted by Congress, and their leadership can only be removed “for cause,” providing significant insulation from presidential control. In contrast, independent executive agencies, like the CIA, are led by presidential appointees who serve at the president’s pleasure and can be removed without cause. This makes them functionally similar to executive departments in their alignment with White House priorities, despite their technical independence. The technical definitions are less important, but an agency’s degree of insulation is significant because it affects how responsive it will be to the president’s preferences. ↩︎
- This list excludes increasing Congressional engagement with AI issues. Major developments have included Sam Altman’s Senate testimony, the release of a Senate AI roadmap and a House AI report, and multiple introduced bills, including the Future of AI Innovation Act, the AI Research, Innovation, and Accountability Act, and the Algorithmic Accountability Act. ↩︎
- In 2022, the Joint AI Center was integrated into the DOD Chief Digital and Artificial Intelligence Office (CDAO). ↩︎
