US Government Announces AI Safety Board

On Friday, the US Department of Homeland Security (DHS) announced the creation of the Artificial Intelligence Safety and Security Board, a panel of 22 members drawn from the tech industry, government, academia, and civil rights organisations. Established under President Biden’s AI executive order, the board aims to safeguard American citizens and businesses from the risks associated with AI technology. Its formation reflects a foundational assumption: that AI is inherently risky and that its misuse poses significant threats that must be mitigated.

The board’s primary goals are fourfold: protecting US infrastructure from foreign adversaries who use AI, developing recommendations for the safe adoption of AI in sectors like transportation, energy, and internet services, fostering collaboration between government and business, and creating a platform for AI leaders to share information on AI security risks with the DHS. Despite this ambitious agenda, the board faces the challenge of defining the nebulous term “AI”, which encompasses a broad range of technologies with vastly different applications. Without a clear definition, the board’s mission is complicated from the outset, as members may not even agree on which aspects of AI they are safeguarding against.

The diverse backgrounds of the board members highlight how varied interpretations of AI can be. While companies such as OpenAI, Microsoft, and Anthropic focus on generative AI systems like ChatGPT, other members, such as Delta Air Lines CEO Ed Bastian, emphasise very different applications, including crew resourcing and turbulence prediction. This diversity underscores the difficulty of reaching consensus on what constitutes risky AI technology.

Critics have also expressed concerns about the board’s composition, which heavily favours tech industry leaders. The group includes CEOs from major AI vendors such as OpenAI, Microsoft, Alphabet, and Anthropic, as well as key figures from Nvidia, IBM, Adobe, Amazon, Cisco, and AMD. Critics argue that this tech-heavy representation could result in policies that prioritise corporate interests over public welfare. Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), criticised the inclusion of certain tech giants, likening it to “foxes guarding the hen house.”

The board’s membership also includes representatives from the aerospace and aviation industries, such as Northrop Grumman and Delta Air Lines, alongside notable figures from civil rights organisations and academia: Dr. Fei-Fei Li from Stanford University, Maya Wiley from The Leadership Conference on Civil and Human Rights, Damon Hewitt from the Lawyers’ Committee for Civil Rights Under Law, and Alexandra Reeve Givens from the Center for Democracy and Technology. However, some experts believe the board lacks critical voices from other influential organisations in AI ethics and policy, such as Data & Society, DAIR, AI Now, and Georgetown Law’s Center on Privacy & Technology.

The board’s formation is part of a broader effort within the DHS to respond to the rapid emergence of AI technology. This initiative follows President Biden’s executive order on AI, issued in October 2023, which set comprehensive regulations on generative AI systems. The order mandates testing for advanced AI models to ensure they can’t be used to create weapons, suggests watermarking AI-generated media to combat deepfakes, and addresses privacy and job displacement concerns.

The executive order leverages the federal government’s purchasing power to enforce AI standards, requiring that federal agencies only contract with companies that comply with the new AI regulations. For the first time, developers of powerful AI systems that pose risks to national security, economic stability, or public health must notify the federal government when training a model and share safety test results and other critical information.

The National Institute of Standards and Technology (NIST) and DHS are tasked with developing standards for “red team” testing to ensure AI systems are safe and secure before public release. The order also encourages, but does not mandate, the watermarking of AI-generated media, reflecting concerns about the potential for AI-generated disinformation, particularly in the context of the upcoming 2024 presidential campaign. Federal agencies will develop tools to help Americans distinguish authentic communications from AI-generated content.
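The order does not prescribe any particular watermarking or verification mechanism, but the underlying idea of content provenance can be illustrated with a toy example. The Python sketch below is purely hypothetical and not drawn from the order or any federal tool: it uses an Ed25519 digital signature (via the third-party cryptography package) so that an issuer can mark a communication as authentic, and anyone holding the published public key can verify it.

```python
# Hypothetical sketch of content provenance: an issuer signs a message so
# recipients can verify its origin and integrity. This illustrates the
# general idea only; it is not any official federal scheme.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Official statement: polls close at 8 pm."
signature = private_key.sign(message)  # distributed alongside the message

# Any recipient with the public key can check that the message really came
# from the issuer and has not been altered in transit.
try:
    public_key.verify(signature, message)
    print("Signature valid: message is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: message may be forged or altered.")
```

Real-world provenance efforts, such as the C2PA standard backed by Adobe and Microsoft, apply the same signing idea to metadata embedded in images and video; statistical watermarks for AI-generated text work differently, subtly biasing token choices in ways a detector can later test for.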

Several agencies are directed to establish safety standards for AI use, including the Department of Health and Human Services and the Department of Labor, which will study AI’s impact on the job market. These studies aim to inform future policy decisions that mitigate the socioeconomic impact of AI adoption. While the executive order emphasises internal guidelines for protecting consumer data and addresses concerns about data collection and sharing by AI systems, it stops short of mandating comprehensive privacy protections; the administration recognises that robust privacy legislation is needed to fully protect Americans’ data.

The Federal Trade Commission (FTC) is expected to play a more active role in consumer protection and antitrust enforcement in the AI sector. Although the FTC is not bound to follow the executive order’s recommendations, it has already begun investigating companies like OpenAI over possible consumer privacy violations. The new rules will also require cloud service providers to disclose information about their foreign customers to the US government, following recent US actions to restrict the export of high-performance chips to China.

Despite general support for AI regulation within the tech industry, there are disagreements over the extent of government oversight. Companies like Microsoft, OpenAI, Google, and Meta have voluntarily committed to AI safety and security measures, such as third-party stress-testing of their systems. However, critics point out that the executive order does not address key issues such as data transparency, copyright and intellectual property in training data, or protections for artists from AI impersonation.

The order also lacks provisions to protect data workers who might be exposed to traumatising content during the training of AI systems, a common concern among AI ethics advocates. Issued just days before the UK’s international AI Safety Summit, the executive order signals a significant effort by the US to catch up with other nations and bodies like the EU that have already moved toward regulating AI.

As the DHS AI Safety and Security Board begins its work, it faces the monumental task of navigating the complex and evolving landscape of AI technology. The board’s success will depend on its ability to balance diverse perspectives, address critical ethical and security concerns, and develop robust policies that ensure the safe and responsible deployment of AI for the benefit of all Americans.

For all my daily news and tips on AI and emerging technologies, just sign up for my FREE newsletter at www.robotpigeon.be