
The US Power Grids Need AI Disclosure Requirements

AI disclosure requirements are essential to managing risk as artificial intelligence plays a growing role in the power grid.

The electrical grid is a complex system managed for reliability above all else, in no small part because it supports other life-sustaining services like water systems and telecommunications. Increasingly, though, electric utilities are turning to artificial intelligence (AI) to help manage power demand and assist in decision-making. While this can enhance overall energy security, it also introduces risks that regulators should mitigate and investors should price in.

Currently, there are no dedicated policies governing how AI applications should be used on the power grid, and no clear liability rules. Risk-averse utilities and grid operators will generally apply a level of scrutiny on their own, but there would also be value in some top-down oversight.

As a start, US regulatory bodies should develop specific disclosure requirements for AI practices. Understanding how AI systems are architected, how they are used, and how they interact will help all stakeholders assess their risk profiles, leading to better decisions and reduced liability risk.

Given the nature of electricity markets, it is improbable that any grid will rely on a central AI to manage power generation and demand for all parties involved. Instead, we anticipate that power generators and distributors will utilize their own AI systems. This decentralized approach, where multiple entities leverage AI, is known as a “multi-agent system” (MAS). More accurately, a power grid is a hybrid MAS, where human operators interact with automated systems performing various functions. 
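
For intuition, here is a minimal sketch of what such a hybrid MAS could look like in code. The class names, confidence threshold, and escalation rule are all hypothetical, chosen only to show the shape of the arrangement: each market participant runs its own agent, and a human operator stays in the loop on low-confidence proposals.

```python
# Hypothetical sketch of a hybrid multi-agent system: independent AI
# agents propose actions for their own assets; a human operator
# reviews anything the agent is not confident about. Names and
# thresholds are illustrative, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str         # which AI agent made the proposal
    action: str        # e.g., "commit 600 MW"
    confidence: float  # the agent's self-reported confidence

class GeneratorAgent:
    """Each market participant runs its own AI over its own assets."""
    def __init__(self, name: str):
        self.name = name

    def propose(self, demand_forecast_mw: float) -> Proposal:
        # Placeholder policy; a real agent would run a model here.
        return Proposal(self.name, f"commit {demand_forecast_mw / 2:.0f} MW",
                        confidence=0.75)

def operator_review(p: Proposal, threshold: float = 0.8) -> bool:
    """The hybrid step: low-confidence proposals escalate to a human."""
    if p.confidence < threshold:
        print(f"[{p.agent}] escalated to human operator: {p.action}")
        return False
    print(f"[{p.agent}] auto-approved: {p.action}")
    return True

# Two independent agents share a demand forecast, but no central AI
# dispatches for everyone.
for agent in (GeneratorAgent("UtilityA"), GeneratorAgent("UtilityB")):
    operator_review(agent.propose(demand_forecast_mw=1200))
```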

Understanding Failure in Multi-Agent Systems 

We have a good sense today of how individual AI systems fail, but the unique failure modes of a MAS or hybrid MAS are less well understood. Unlike typical AI predictive tasks such as classification or token generation, multi-agent systems depend on interactions among distinct components, which leads to different kinds of failures.

A recent publication by the Cooperative AI Foundation identified three of them, all of which would have detrimental effects on the utility sector. First is competition, where the system yields a suboptimal overall solution because stakeholders pursue their individual goals. Second is miscoordination, where an array of diverse AI agents struggles to collaborate effectively, leaving the system short of the effectiveness, efficiency, or robustness a coordinated approach could achieve. Miscoordination could potentially be addressed by using a unified set of AIs; however, this introduces the third failure mode: collusion, in which AI agents prioritize their collective interest over the interests of other stakeholders, such as customers. (This idea surfaced in the court case against RealPage and its rent-setting algorithms.)
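
To make the competition and miscoordination modes concrete, here is a minimal toy sketch, with invented paths, costs, and congestion penalties rather than any real market model. Two dispatch agents each pick the route that looks cheapest in isolation, congest it, and end up with a higher total cost than a coordinated plan would incur.

```python
# Toy illustration (not grid software): agents that each optimize in
# isolation can produce a worse system outcome than coordination.
# All numbers are hypothetical.

from itertools import product

# Each agent ships 1 unit of power over one of two paths. Per-agent
# cost on a path grows with how many agents use it (a stand-in for
# congestion).
PATHS = ["A", "B"]
BASE_COST = {"A": 1.0, "B": 1.5}   # B is pricier when empty...
CONGESTION = {"A": 2.0, "B": 0.2}  # ...but A congests badly

def path_cost(path: str, load: int) -> float:
    """Per-agent cost of a path carrying `load` agents' flows."""
    return BASE_COST[path] + CONGESTION[path] * (load - 1)

def total_cost(choices: tuple) -> float:
    """Total system cost for a joint choice of paths."""
    return sum(path_cost(p, choices.count(p)) for p in choices)

# Selfish play: each agent picks the path that is cheapest when empty,
# without modeling the other agent.
selfish = ("A", "A")
# Coordinated play: a planner searches joint choices for the best total.
coordinated = min(product(PATHS, repeat=2), key=total_cost)

print(f"selfish {selfish}: total cost {total_cost(selfish):.1f}")
print(f"coordinated {coordinated}: total cost {total_cost(coordinated):.1f}")
```

In this toy, the selfish outcome (both agents on path A) costs 6.0 against 2.5 for the coordinated split; the gap is the cost of agents optimizing without regard to one another.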

Technical failures of single AI systems don’t go away, either. Hallucinations, out-of-sample predictions, and model drift are all still issues that can arise and damage a MAS. AI also opens power grids to new cybersecurity vulnerabilities, including, at least in theory, model and data poisoning attacks. Finally, it’s essential to recognize that these are sociotechnical systems: market dynamics and human factors significantly affect risk, too.
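
Model drift is one of the more tractable of these failure modes to watch for. The sketch below, with purely illustrative thresholds and numbers, shows a rolling check that flags when a load forecaster’s recent error grows well beyond its validation-time baseline.

```python
# Minimal drift-monitoring sketch: flag when a load forecaster's
# recent mean absolute error (MAE) exceeds a multiple of the MAE it
# showed at validation time. Thresholds are illustrative only.

from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 48, ratio_alarm: float = 2.0):
        self.errors = deque(maxlen=window)  # recent absolute errors (MW)
        self.baseline_mae = None            # MAE at validation time
        self.ratio_alarm = ratio_alarm

    def calibrate(self, validation_errors: list) -> None:
        self.baseline_mae = sum(abs(e) for e in validation_errors) / len(validation_errors)

    def observe(self, forecast_mw: float, actual_mw: float) -> bool:
        """Record one forecast/actual pair; return True if drift is suspected."""
        self.errors.append(abs(forecast_mw - actual_mw))
        recent_mae = sum(self.errors) / len(self.errors)
        return recent_mae > self.ratio_alarm * self.baseline_mae

monitor = DriftMonitor()
monitor.calibrate([10, 12, 8, 11, 9])  # ~10 MW error at deployment
# Later, load patterns shift and errors creep up:
for forecast, actual in [(1000, 1030), (980, 1025), (1010, 1060)]:
    if monitor.observe(forecast, actual):
        print("drift suspected: retrain or fall back to manual procedures")
```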

The Case for AI Disclosure Requirements

In short, AI-related failures could cause disruptions that grid operators and stakeholders are unprepared for. US regulatory bodies such as the Federal Energy Regulatory Commission, the Department of Energy, and the Securities and Exchange Commission need to get ahead of this. They could start by researching what types of AI disclosure should be required to mitigate those risks to the grid. An additional question is whether those disclosures should be mandatory or voluntary.

Disclosures would provide several benefits. First, energy investors could use the information when pricing risk. Second, regulators could use it to weigh AI risks against potential benefits, data that could shape how AI is adopted. Finally, information collected from AI disclosure reports can help energy law keep pace with AI. For instance, delineating responsibility between human operators and AI decision-support tools will be critical for determining liability, as will settling what happens if AI agents autonomously collude to raise prices.

Aligning Regulation with the AI Era

The energy sector also needs to agree on how AI implementation fits into existing regulatory frameworks. Current energy policies and practices were designed for pre-AI environments, and if too much ambiguity remains, that uncertainty could slow desired progress.

Formalizing AI disclosures will take time, especially given the complexity of power grids and the dynamic nature of AI. But given the speed at which AI is being deployed, federal agencies should act proactively, before avoidable risks become realized.

About the Authors: Ismael Arciniegas Rueda and Daniel Tapia

Ismael Arciniegas Rueda is a senior economist at RAND, the nonprofit, nonpartisan research institution, and a professor of public policy at the Pardee RAND Graduate School.

Daniel Tapia is a political scientist at RAND.


