Challenges for Biden’s AI Regulation Plan

In the rapidly evolving landscape of artificial intelligence (AI), President Joe Biden’s executive order unveiled in October outlines a comprehensive plan to regulate and ensure the responsible development and use of AI systems in the United States. However, as the deadline approaches, the ambitious initiative encounters unexpected obstacles, particularly in the form of budget constraints at the National Institute of Standards and Technology (NIST) and concerns about potential industry influence on the regulatory standards.

The heart of Biden’s plan relies on the National Institute of Standards and Technology (NIST) to establish new standards for stress-testing AI systems, uncovering biases, hidden threats, and rogue tendencies. However, sources reveal that NIST faces a daunting challenge in meeting the July 26, 2024 deadline due to budgetary constraints. With an already modest budget of $1.6 billion in 2023, the agency lacks the financial resources needed to independently complete the extensive work required for AI safety testing.

Members of Congress are growing uneasy about the possibility that NIST may heavily depend on AI expertise from private companies. The fear is that these companies, with vested interests in shaping standards that align with their AI projects, could potentially compromise the objectivity and transparency of the regulatory process. A bipartisan open letter, signed by six Congress members on December 16, expresses concern about NIST potentially enlisting private companies without clear guidelines on the decision-making process for grants or awards.

While NIST has been tasked with regulating AI, the agency’s resources pale in comparison to tech giants like OpenAI, Google, and Meta, which invest billions in developing powerful AI models. The stark contrast in financial capabilities raises questions about NIST’s ability to navigate the complexities of AI regulation effectively. Some experts argue that, despite NIST’s nonpartisan stance, the agency needs more than mandates and good wishes to fulfil its role as a leader in addressing AI risks.

Recognizing the concerns raised by lawmakers and experts, NIST is taking steps to increase transparency. The agency issued a request for information on December 19, seeking input from external experts and companies on standards for evaluating and red-teaming AI models. This move aims to gather diverse perspectives and involve the broader AI community in shaping guidelines for responsible AI development.

AI experts, who have spent years probing AI systems for biases and vulnerabilities, share the concerns raised by lawmakers. They emphasise the importance of NIST’s role in cutting through the hype and speculation surrounding AI risks. However, they stress that NIST requires more than just directives and goodwill to fulfil its responsibilities effectively.

The White House executive order places an aggressive deadline on NIST, calling for the establishment of an Artificial Intelligence Safety Institute and the formulation of guidelines, principles, and plans to advance responsible global technical standards for AI development. The order reflects the administration’s commitment to addressing the challenges posed by AI comprehensively.

Beyond regulatory measures, Biden’s executive order includes initiatives to strengthen the US government’s AI capabilities. The creation of a dedicated job portal at AI.gov aims to attract experts and researchers into government service. Additionally, a training program is set to produce 500 AI researchers by 2025. Changes in immigration policy are also proposed to facilitate the entry of AI talent into the US, recognizing the need to compete globally in the AI field.

Biden’s executive order acknowledges the potential harms associated with AI implementation, emphasising the importance of addressing issues like discrimination and unintended effects in housing and healthcare. The order tasks the White House’s Office of Management and Budget with developing guides and tools to help government employees make informed choices when purchasing AI services from private companies, addressing ethical concerns and promoting accountability.

Despite the comprehensive approach outlined in the executive order, challenges remain in extending regulatory standards to state and local law enforcement agencies. Issues of false positives and biases in AI-powered technologies used in criminal justice and policing persist. To enforce compliance at these levels, federal lawmakers may need to link adherence to regulatory standards with funding for state and local law enforcement agencies.

Shifting focus from government initiatives, recent developments at Twitter shed light on the challenges surrounding AI transparency and fairness. The decision to cut a team of AI researchers, including those working on making Twitter’s algorithms more transparent and fair, raises concerns about the prioritisation of cost-cutting over addressing biases and ethical concerns. The move is met with criticism, especially considering the team’s valuable work in identifying and mitigating biases in Twitter’s algorithms.

In another sphere of the AI landscape, Microsoft’s AI red team has been actively working since 2018 to assess and expose weaknesses in AI platforms. With a multidisciplinary team, Microsoft focuses not only on traditional security failures but also on responsible AI failures, emphasising the importance of accountability in addressing AI system failures. The team’s efforts underscore the evolving nature of AI security and the need for a nuanced approach that goes beyond conventional cybersecurity measures.

As the United States grapples with the challenges of regulating and harnessing the potential of artificial intelligence, the unfolding scenarios at NIST, Twitter, and Microsoft highlight the complexities involved. While Biden’s executive order sets a robust framework for addressing AI risks, the hurdles faced by regulatory agencies and the industry underscore the need for continuous evaluation and adaptation. Balancing innovation with accountability remains a delicate task, and as the AI landscape evolves, policymakers, industry leaders, and researchers must collaborate to ensure the responsible and ethical development of AI technologies.

for all my daily news and tips on AI, Emerging technologies at the intersection of humans, just sign up for my FREE newsletter at www.robotpigeon.beehiiv.com