California’s Attempt at AI Governance: A Balance between Innovation and Privacy

The California Privacy Protection Agency (CPPA) is setting the stage for robust regulations to govern Automated Decision-Making Technologies (ADMT), a category that covers many systems powered by Artificial Intelligence (AI). California, often at the forefront of tech innovation, aims to establish a comprehensive framework that not only safeguards residents’ privacy but also sets a precedent for responsible AI use.

California’s status as a tech hub places it in a unique position to shape the rules for digital behemoths. The CPPA’s Executive Director, Ashkan Soltani, describes the recently published draft regulations as the “most comprehensive and detailed set of rules in the AI space.” Drawing inspiration from the European Union’s General Data Protection Regulation (GDPR), the CPPA’s approach goes further, aiming to create more specific provisions that are harder for tech giants to manoeuvre around.

At the core of the proposed regulations lie essential components designed to empower individuals in the age of AI. The framework includes robust opt-out rights, pre-use notice requirements, and access rights. California residents would gain the ability to opt out of their data being used for automated decision-making. This has profound implications for companies relying on behavioural advertising, potentially reshaping the landscape for ad tech giants like Meta.

Moreover, the CPPA’s approach is risk-based, aligning with the EU’s AI Act. This framework prioritises the regulation of AI applications based on their potential risks, echoing the global conversation around the responsible use of AI.

The proposed regulations extend their reach to AI-based profiling, posing challenges for companies engaged in tracking and profiling users for targeted advertisements. The draft mandates that businesses offer California residents the ability to refuse the processing of their data for behavioural advertising. Exemptions to this right are limited, emphasising the CPPA’s commitment to privacy in the AI landscape.

While the impact of California’s AI rules may be local, they could resonate globally. With the absence of a comprehensive federal privacy law in the U.S. and ongoing discord around the EU’s AI Act, California might emerge as a key global rulemaker in the realm of AI governance.

The CPPA’s move aligns with broader efforts to regulate AI on a national level. President Joe Biden recently issued an executive order focusing on AI safety and security. The order sets new standards for AI developers, particularly those working on powerful AI systems. Developers are now required to share safety test results with the federal government, marking a significant step towards ensuring the safe deployment of AI technologies.

While Biden’s executive order addresses national concerns, California’s efforts home in on the intricate details of AI governance, reinforcing the state’s commitment to privacy and individual rights. The draft regulations echo the sentiment that as AI capabilities grow, so should protections for Americans’ safety and security.

California’s bid to tackle AI through regulations is not without challenges. The draft regulations are currently in the early stages, with a consultation process and potential revisions ahead. The CPPA’s enforcement authority ends at the state’s borders, but the influence of tech giants headquartered in California could carry these standards well beyond them.

As California pioneers AI regulations, a delicate balance emerges between fostering innovation and protecting privacy. The draft regulations acknowledge the transformative potential of AI while emphasising the need for responsible use. Striking this balance is crucial to ensure that AI continues to drive progress without compromising individual rights.

The proposed regulations from California and Biden’s executive order reflect a growing awareness of the need for comprehensive AI governance. With the global community grappling with the implications of AI, California’s journey from the draft stage to formal rulemaking will be closely watched. As technological advancements outpace regulatory frameworks, finding the right balance remains an ongoing challenge.

To understand the potential impact of California’s proposed regulations, it’s crucial to delve into the specifics of the CPPA’s comprehensive framework. The draft regulations outline how the new privacy protections that Californians voted for in 2020 could be implemented.

Specifically, the draft regulations propose requirements for businesses using ADMT in any of the following ways:

1. Decisions with Significant Impacts: Making decisions that tend to have the most significant impacts on consumers’ lives. This would include decisions about their employment or compensation.

2. Employee, Contractor, Applicant, or Student Profiling: Profiling an employee, contractor, applicant, or student. This would include using a keystroke logger to analyse their performance or tracking their location.

3. Profiling Consumers in Publicly Accessible Places: Profiling consumers in publicly accessible places, such as shopping malls, medical offices, and stadiums. This would include using facial-recognition technology or automated emotion assessment to analyse consumers’ behaviour.

4. Behavioral Advertising Profiling: Profiling a consumer for behavioural advertising. This would include evaluating consumers’ personal preferences and interests to display advertisements to them.

The draft also proposes potential options for additional consumer protections around the use of their personal information to train these technologies.

For the above uses of ADMT, the draft regulations would provide consumers with the following protections:

1. Pre-Use Notices: Businesses would be required to provide “Pre-use Notices” to inform consumers about how the business intends to use ADMT, so that the consumer can decide whether to opt out or to proceed, and whether to access more information.

2. Opt-Out Mechanism: The ability to opt out of the business’s use of ADMT (except in certain cases, such as to protect life and safety).

3. Access to Information: The ability to access more information about how the business used ADMT to make a decision about the consumer.
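To make the interplay of these three protections concrete, here is a minimal illustrative sketch of how a business might gate its ADMT processing. This is not drawn from the draft regulations themselves; the class and method names (`ADMTGate`, `may_process`) and the exemption label are hypothetical, and a real compliance system would be far more involved.

```python
from dataclasses import dataclass

@dataclass
class Consumer:
    """Hypothetical record of a consumer's ADMT preferences."""
    consumer_id: str
    opted_out: bool = False  # has the consumer exercised the opt-out right?

class ADMTGate:
    """Illustrative gate enforcing pre-use notice and opt-out before ADMT runs."""

    # The draft allows limited exemptions, e.g. protecting life and safety.
    SAFETY_EXEMPT_PURPOSES = {"protect_life_and_safety"}

    def __init__(self):
        self._notified = set()  # consumers who received a Pre-use Notice

    def record_notice(self, consumer: Consumer) -> None:
        """Record that a Pre-use Notice was provided before any processing."""
        self._notified.add(consumer.consumer_id)

    def may_process(self, consumer: Consumer, purpose: str) -> bool:
        """Return True only if ADMT processing is permissible for this consumer."""
        if purpose in self.SAFETY_EXEMPT_PURPOSES:
            return True  # opt-out does not apply to safety exemptions
        if consumer.consumer_id not in self._notified:
            return False  # no Pre-use Notice given yet
        return not consumer.opted_out
```

Under this sketch, processing for behavioural advertising is blocked until a notice is given, blocked again once the consumer opts out, but a safety-exempt purpose proceeds regardless.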

These draft requirements would work in tandem with risk assessment requirements that the Board is also considering. Together, the proposed frameworks would give consumers control over their personal information while ensuring that automated decision-making technologies, including those built on artificial intelligence, are used with privacy in mind and by design.

As we explore the intricacies of California’s proposed regulations, it becomes evident that the impact reaches far beyond the tech industry. By empowering consumers with opt-out mechanisms and access to information, the regulations aim to rebalance the equation between the use of AI and individual privacy. The potential implications for society are profound, affecting areas such as employment, consumer profiling, and public surveillance.

California’s ambitious approach to AI governance contributes to the global conversation on AI ethics. The proposed regulations not only set the stage for responsible AI use within the state but also challenge other regions and countries to reevaluate their stance on AI governance. The emphasis on transparency, consumer rights, and risk-based frameworks aligns with the broader goals of ensuring AI benefits humanity without compromising fundamental rights.

While the draft regulations lay a robust foundation, challenges lie ahead. The delicate balance between innovation and privacy requires continuous refinement. Future AI regulations must adapt to the evolving technological landscape, addressing unforeseen challenges and ensuring that ethical considerations remain at the forefront.

In conclusion, California’s bold move to regulate AI signifies a critical step towards shaping the future of technology. The proposed regulations, if implemented, could set a precedent for other regions to follow. As the world navigates the uncharted waters of AI governance, California stands at the helm, steering the conversation towards a future where innovation coexists harmoniously with privacy and individual rights. The journey from guidelines to enforcement is underway, and the global community eagerly watches as California pioneers ethical AI in the 21st century.