California to bar AI vendors that can’t prove bias safeguards

March 31, 2026
Computerworld

AI vendors selling to the California state government must prove they have safeguards against algorithmic bias, civil rights violations, and illegal content, or risk being barred from state contracts, under an executive order signed by Governor Gavin Newsom. The order directs the Department of General Services and the California Department of Technology to develop new vendor certifications within 120 days.

Companies seeking state contracts would be required to attest to safeguards covering the “exploitation or distribution of illegal content, such as child sexual abuse material and non-consensual intimate imagery,” the “utilization of models that display harmful bias or lack governance to reduce the risk of such harmful bias,” and “violation of civil rights and civil liberties such as free speech, voting, human autonomy, and protections against unlawful discrimination, detention, and surveillance,” the order noted.

The order also directed the Government Operations Agency, within the same 120-day window, to recommend reforms that would bar entities “judicially determined to have unlawfully undermined privacy or civil liberties” from state contracts. That threshold goes beyond the financial stability, security certifications, and past-performance criteria that typically govern vendor eligibility in government procurement, the order added.

“California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way,” Newsom said in a statement accompanying the order.

The order’s reach is amplified by California’s position in the global AI market. The state is home to 33 of the world’s top 50 privately held AI companies and accounts for 25% of US AI patents, Newsom’s office said.

“California essentially wants to set a benchmark for de facto AI standards when it comes to procurement, safety, and ethics,” said Neil Shah, VP for research and partner at Counterpoint Research.

Decoupling from federal oversight

Beyond vendor certification, the order gives California’s State Chief Information Security Officer (CISO) authority to review federal supply chain risk designations.
Where the CISO concludes a federal ban is improper, the Department of General Services (DGS) and the California Department of Technology (CDT) must “jointly issue guidance ensuring that departments and agencies can continue to easily procure from that company,” the order said.

Shah said the CISO override mechanism reduces geopolitical risk for vendors. “The new state legislation effectively overrides federal sanctions, if any, protecting or giving one more chance to vendors that were rejected elsewhere,” he said.

Newsom signed the order against a backdrop of retreating federal AI oversight. President Trump revoked Biden’s 2023 executive order on the safe and trustworthy development of AI in January 2025, eliminating mandatory red-teaming for high-risk models, structured oversight for AI in critical infrastructure, and safety reporting requirements for frontier developers. His replacement order reoriented federal policy toward deregulation. In December 2025, Trump directed the Justice Department to challenge state AI laws deemed inconsistent with federal policy, but explicitly exempted state government procurement from that preemption push.

Watermarking, genAI tools, data minimization

The order goes beyond vendor oversight. It directs the California Department of Technology to issue best-practice guidance for watermarking “AI-generated or significantly manipulated images or video” consistent with state law. Newsom’s office said the requirement is the first of its kind in the US.

The order also directs state agencies to give employees access to vetted genAI tools with “appropriate privacy and cybersecurity safeguards,” pilot a genAI application for resident-facing government services, and publish a data minimization toolkit for departments handling sensitive data.

Part of a longer California push

The order is not Newsom’s first move on AI governance.
In September 2025, he signed the Transparency in Frontier AI Act, which required large frontier AI developers to publish safety frameworks and report critical safety incidents, with fines of up to $1 million per violation. More than 20 California AI statutes covering employment, healthcare, education, and pricing algorithms took effect in January 2026.

The European Commission has pursued a similar procurement-as-governance approach through its Model Contractual Clauses for AI, updated in March 2025 to align with the EU AI Act, though those clauses remain advisory for member-state buyers.

“California government is essentially adding a third vector for CIOs and CISOs for enterprise procurement of ethical governance beyond uptime and pricing,” Shah said. He added that the new certification requirements could drive greater complexity and fragmentation, particularly for smaller vendors. “This can drive a greater compliance burden to get ‘California certified.’ However, once proven, it also sets a strong precedent for these players to expand globally relatively smoothly,” he said.
