
The Problems Biden’s AI Order Must Address

The Markup gives a section-by-section breakdown of the White House’s summary of the executive order on artificial intelligence

Image of U.S. President Joe Biden seated at a desk, passing a pen to Vice President Kamala Harris, who is standing. The background screen shows the words 'Artificial Intelligence' with the subheading 'Safety, Security, and Trust' displayed prominently.
Credit: Chip Somodevilla/Getty Images

On Monday, the Biden administration released details of an expansive executive order that represents America’s first concerted attempt to control the use and application of artificial intelligence.

The order’s stated aim is to protect Americans from misuses of AI technology, including privacy violations, fraud, and cybersecurity threats. As part of the order, the National Institute of Standards and Technology will be tasked with certifying and testing AI tools. The order also calls for the creation of safety standards, security standards, and rules for any AI technology that could pose risks to national security or critical infrastructure.

The executive order is broad, reaching into all aspects of American life. But it is limited by the fact that there is currently no comprehensive federal legislation covering artificial intelligence. States have passed various AI regulations in recent years, but the technology has been developing rapidly of late, with tools arriving in the commercial marketplace at dizzying speed. Used by a wide range of companies and government agencies, AI is now automating decision-making at scale and generating words, images, sounds, videos, and code.

Indeed, some industry executives have practically begged for regulation. For example, Sam Altman, CEO of OpenAI, the maker of ChatGPT, implored members of Congress in May to regulate artificial intelligence, at one point reportedly stating, “if this technology goes wrong, it can go quite wrong.” And recent surveys show that Americans are largely in favor of regulation.

We’ve read through the executive order’s fact sheet, added relevant links, and annotated some of the issues that have led to these new rules.

—Jon Keegan


New Standards for AI Safety and Security

As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Tomas Apodaca: A note here: A “foundation model” describes a large language model pretrained on a massive amount of data—work that typically requires resources only available to governments or companies like Google, Meta, or Microsoft. OpenAI’s GPT-4, Google’s PaLM 2, and Meta’s Llama 2 are all so-called foundation models. The term is controversial among AI experts, some of whom dismiss the idea that these LLMs represent the starting point from which future AI systems might emerge. A related term, “frontier models,” has also been criticized as a branding exercise.

  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.

Jon Keegan: There is precedent for NIST involvement with emerging software technology. The agency maintains several tools to evaluate facial recognition applications, including NIST’s “Face Recognition Vendor Testing Program,” established in 2000. NIST also publishes training datasets for facial recognition, including one consisting of mugshots that contained 175 photos of minors.

Monique O. Madan: In the past, AI was only accessible to scholars and experts. Now, with AI touching the lives of millions of people daily, the scope of threats to national security has expanded. Those threats include nuclear weapon designs and rocket launch plans ending up in the hands of everyday civilians, along with high-quality forged media that could fuel disinformation and cyberattacks. The Pentagon has long considered how AI could help take down enemy forces, and a recent directive allows weapons systems to autonomously fire “kill shots” against humans (though plans for such shots must first be reviewed and approved by a special military panel).

Who will appoint people to this AI Safety and Security Board? How can we be certain that these appointees are free from conflicts of interest? It sounds like this is different from the National AI Advisory Committee, but it’s unclear at this point how this board will function and who will be on it.

  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

Jon: Researchers using AI tools for drug discovery realized, to their horror, that the technology could create not just novel compounds for lifesaving treatments but also thousands of new poisons. This excellent Radiolab episode tells their chilling story.

  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
  • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

Tara García Mathewson: I will be interested to see what standards and best practices for detecting AI-generated content come out of this. AI detectors, so far, are extremely unreliable.

Tomas: The U.S. Copyright Office is in the process of developing policy around “AI-generated content”; the notice of inquiry toward that effort defines such content as a work made “without any creative input or intervention from a human author.”


Protecting Americans’ Privacy

Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:

  • Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
  • Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.

Malena Carollo: This is an interesting shift given the prior stances of federal agencies during the so-called crypto wars. The FBI and others have previously argued that encryption tools that preserve individuals’ privacy hinder law enforcement’s ability to investigate crimes. A resurfaced bill filed earlier this year seeks to hold platforms responsible for any child abuse content posted on them, including encrypted communication; critics argue that staying in compliance would cause tech companies to weaken encryption in order to monitor communication.

  • Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
  • Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.

Jon: There have been several documented examples of law enforcement and intelligence agencies purchasing commercially available location data from data brokers, rather than seeking a warrant from a judge. Such a move arguably bypasses Fourth Amendment protections against unreasonable searches. A recent report from the Department of Homeland Security’s inspector general found federal agencies like Customs and Border Protection, Immigration and Customs Enforcement, and the Secret Service had broken the law by purchasing such data and sidestepping the courts. A January 2023 report from the Office of the Director of National Intelligence detailed the U.S. intelligence community’s extensive purchases and use of this data.


Advancing Equity and Civil Rights

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:

  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.

Malena: “Clear guidance” is sorely needed. Landlords frequently screen their tenants with algorithms—but the practice is often inaccurate. Latinos are particularly at risk for false matches because they tend to have fewer unique last names. Enforcement may be challenging, particularly as algorithms change over time.

Aaron Sankin: The New Orleans Police Department used a facial recognition tool predominantly in attempts to identify Black subjects, and the tool failed to identify the correct person in the majority of cases in which it was used, according to a recent investigation by Politico reporter Alfred Ng.

  • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.

Tomas: Based on the examples cited in the Blueprint for an AI Bill of Rights, this section seems to employ a definition of algorithmic discrimination that reaches beyond artificial intelligence. It’s not just large language models or even machine learning—it could mean any decision made or influenced by an automated system, which could be as simple as a random number generator.

  • Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

Aaron: In 2021, The Markup and Gizmodo published an investigation into a predictive policing algorithm created by a company called Geolitica, formerly known as PredPol. Drawing on predictions the company generated for dozens of law enforcement agencies across the country, the investigation found that the algorithm disproportionately predicted crimes in lower-income, Black, and Latino neighborhoods. A follow-up story, published earlier this year, found that predictions made for a New Jersey police department lined up with a reported crime less than 1 percent of the time.

A 2020 investigation by the Tampa Bay Times showed how a Florida sheriff’s office used a computer system to identify people who were likely to break the law and then subjected them to harassment by deputies. David Kennedy, a criminologist at John Jay College of Criminal Justice, called it “one of the worst manifestations of the intersection of junk science and bad policing—and an absolute absence of common sense and humanity.” 

Colin Lecher: One related field not directly mentioned here: Child protection agencies are increasingly turning to similar tools to predict abuse, but those tools are raising their own concerns. A recent investigation by the Associated Press questioned whether an AI tool used to predict child abuse might target parents with disabilities. 

Tomas: It’s not included in the list here, but autonomously piloted drones and robots are already being used for surveillance—see this press release from the Department of Homeland Security previewing the use of robot dogs for border enforcement—and could be used to deploy deadly or less-than-lethal force.


Standing Up for Consumers, Patients, and Students

AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions:

  • Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of, and act to remedy, harms or unsafe healthcare practices involving AI.

Malena: The standards for harm here will be a crucial aspect to watch. We investigated an algorithm overseen by HHS that determines who gets a lifesaving liver transplant. We found that a change in the algorithm shifted donated livers away from several poorer Midwestern and Southern states whose patients face greater barriers to health care than patients in the states that gained them. But powerful executives in coastal states that benefited from the new policy—who also engineered the change—argued this was an appropriate outcome. The algorithm is still in use today.

  • Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.

Tara: Colleges and universities across the country are already rolling out chatbots for operational and academic uses. Admissions offices, financial aid offices, and IT departments are using them to answer straightforward questions from students. Faculty will increasingly be able to direct students to online chatbots built into educational software that colleges use to host class discussion and materials. The promise of personal tutors is great, but privacy advocates caution that additional student data will be captured by these systems and that they pose risks to student anonymity.

Colin: When things go wrong in education, the problems can throw the system into chaos. In Britain, an automated grading algorithm for standardized exams ended up favoring wealthy private school students, leading to widespread protests against the system. Meanwhile, ChatGPT is already creating a student plagiarism panic, with one professor reportedly failing an entire class after falsely believing ChatGPT wrote everyone’s papers. (The university has denied that anyone received a failing grade.)

Photograph of a protester holding up a sign that says, “AI HAS NO SOUL!”
Caption: People picket outside of FOX Studios on the first day of the Hollywood writers strike on May 2, 2023 in Los Angeles. Credit: David McNew/Getty Images

Supporting Workers

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity and the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions:

Miles Hilton: The effect that automation has on people’s jobs is nothing new—50 to 70 percent of the increase in U.S. wage inequality over the last 40 years can be attributed to automation, according to a working paper distributed by the National Bureau of Economic Research—but AI threatens many jobs previously considered “future-proof.” According to a Goldman Sachs report from March, an estimated 300 million jobs are at risk. In June, the Washington Post reported on copywriters who had lost contracts or been laid off after their companies transitioned to using free software like ChatGPT to generate content.

Colin: In fact, many problems with AI in the workplace start before someone is even offered a job. Companies and government agencies are increasingly turning to AI-powered screening tools to recruit and hire workers, but those tools come with concerns about privacy and bias. In 2021, we uncovered documents showing how more than 20 public agencies were using this type of software.

  • Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.
  • Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.

Tara: When it comes to evaluating job applications, AI risks reinforcing existing biases. When companies are not diverse, it may look as though only certain types of people can be successful there, which can mislead systems designed to identify promising job candidates. This can be avoided, though, and good guidance from the feds may help.


Promoting Innovation and Competition

America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions:

  • Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.

Ross Teixeira: The National AI Research Resource (NAIRR) Task Force projected in its latest report that up to 120 research teams will be able to train models at the scale of OpenAI’s GPT-3 language model each year. The report is noticeably scant on concern for the environmental impact of the platform, though. For example, it suggests that the NAIRR operator “could consider” energy efficiency and/or environmental sustainability when designing the platform, or “could work” with the EPA’s Energy Star for Data Centers program, but does not claim these are necessary or important factors.

We’ve observed the high environmental cost of training large models before, and it’s ironic that the report discusses AI’s helpful ability to model climate change while glossing over how training large models may itself contribute to climate change.

Miles: The corporate stranglehold on AI research has already stymied scientific inquiry more than once. In 2020, Google allegedly fired prominent AI ethics researcher Timnit Gebru for co-authoring a paper that criticized the financial and environmental costs of large language models and that called attention to the ways these models could perpetuate biases and inequities. (A Google executive subsequently claimed that the paper did not meet the bar for publication and that Gebru resigned after the company declined to meet her conditions for continuing to work at Google.) According to the New York Times, the company subsequently fired a second co-author, and the following year it fired a third researcher, Satrajit Chatterjee, for attempting to publish challenges to a celebrated Google-approved research paper.

  • Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.
  • Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.

Monique: How will the United States Citizenship and Immigration Services (USCIS) manage to get this done in a system that’s been flawed for decades? We know AI has already caused a lot of havoc within this system, with incorrect time frames, erroneous pronouns, and garbled translations leading to asylum rejections. Although the U.S. has been a clear leader in attracting international technical talent, scholars say today’s immigration policy, made worse by the Trump administration, is risking our advantage. Failing to address these barriers to immigration at a policy level will have military and economic repercussions, leaders warn.


Advancing American Leadership Abroad

AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:

  • Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
  • Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.

Tomas: In her response to this fact sheet, former U.S. deputy chief technology officer Jennifer Pahlka called out the federal agency hiring process as a key impediment to building core competency around AI disciplines in the federal government. (Full disclosure: Pahlka is a donor to The Markup.)

  • As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

Aaron: It’s notable that China, a global leader in AI development along with the United States, is not one of the countries the White House listed as having been consulted during the development of its AI regulations. In recent years, China has developed its own detailed regulatory system concerning AI. But amid rising tensions between Washington and Beijing, it may not be surprising that China’s AI regulation isn’t lauded by the Biden administration as a model.

The actions that President Biden directed today are vital steps forward in the U.S.’s approach to safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.

The Markup will continue to track developments related to this executive order. If there is something you think we should look into, please reach out to us.
