Podcast

AI’s Big Future in Energy and Climate Regulation

Cary Coglianese, director of the Penn Program on Regulation, explores AI’s potential to help regulators keep pace with energy sector growth and climate-tech innovation.

The ongoing transition to a cleaner energy system has positive implications for climate, energy security, and equity. Yet the same transition poses myriad challenges for regulators, who are faced with an energy system that is more complex and distributed than ever, and where rapid innovation threatens to outpace their ability to tailor rules and effectively monitor compliance among a growing number of regulated entities.

Cary Coglianese, director of the Penn Program on Regulation, discusses the role that AI can play in optimizing regulation for an increasingly dynamic and innovative energy sector. Coglianese explores the role that AI might play in the development of rules and in measuring regulatory effectiveness. He also examines challenges related to AI energy consumption and bias that must be addressed if the technology’s potential as a regulatory tool is to be realized.

Professor Coglianese’s work discussed in this podcast includes the following papers he has published: “Deploying Machine Learning for a Sustainable Future,” “Optimizing Regulation for an Optimizing Economy,” “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era,” “Transparency and Algorithmic Governance,” and “Procurement and Artificial Intelligence.”

Andy Stone: Welcome to the Energy Policy Now podcast from the Kleinman Center for Energy Policy at the University of Pennsylvania. I’m Andy Stone.

Over the past year, artificial intelligence and machine learning have jumped to the fore of public consciousness. Tools such as ChatGPT have made it easy for anyone to interact with AI and find uses for it that hopefully make our increasingly information-centric lives a little bit easier. Yet one area where the potential and the potential peril of AI have been most actively discussed is in relation to the energy system, which, in a mirror of our own daily lives, has become increasingly complex and interconnected. As a prime example, the electric grid is becoming ever more diverse in terms of how and where power is supplied, with renewable and distributed resources forming an ever larger part of the generation mix.

On today’s podcast, we’re going to look at how this accelerating pace of change in the energy system creates challenges for the regulators who are tasked with overseeing the system. To that point, we’ll be looking at the role that AI might play in helping regulators to keep pace with energy sector innovation and complexity, while continuing to provide essential oversight of grid, environmental, and community impacts.

Today’s guest is Cary Coglianese, Director of the Penn Program on Regulation and Professor of Law at the University of Pennsylvania. Cary’s recent work has focused on the intersection of machine learning and regulation. Cary, welcome back to the podcast.

Cary Coglianese: It’s nice to be here, Andy.

Stone: So an energy transition, as we know, is underway, and it seeks to address the impact of the energy system on everything from the environment and the climate to our communities and public health. Regulation, which we’ll be talking about today, is both driving and reacting to this transition. Could you introduce us to how the regulatory environment around the energy industry has changed, and I would assume has become more complex as the energy transition has progressed?

Coglianese: We can see that we’re shifting to an era in which there will simply be a larger number of sources of electricity being placed on the grid, and that’s introducing an enormous amount of complexity. Rather than having a fixed number of large utility power generators feeding what was, itself, a very complex grid operation to begin with, even 10 or 15 years ago, we now have individual households putting electricity on the grid and smaller power generation units coming online. It is simply a function of that increased number that grid operation itself becomes more complex.

I should also add, though, that the nature of regulation and the nature of management of the grid are more complex in an era of climate change, where we have an increased frequency and intensity of storms and wildfires that pose threats to the structure of the grid and its reliability. So thinking about grid resilience today factors in and makes management of our energy system that much more complex. Add to that, by the way, the complexities around what might be thought of as traditional environmental regulatory controls, as we’re in an era where we’ve never been more concerned about, and more in need of addressing, emissions of various kinds of pollutants, but especially of greenhouse gases.

So all of these factors make for a much more complex energy and environmental context today than we have seen in recent decades, to be sure.

Stone: So we have more elements on the grid, distributed generation — everything from rooftop solar to, as you said, traditional generators — as well as the increasing attention that is being paid to the environmental and climate aspects of the energy system, all of which, as you just mentioned, really multiplies the number of elements that need to be regulated, it seems.

Coglianese: Absolutely, and it increases the values, if you will, that the regulatory system is aiming to serve. Some of these regulatory issues today, in an era of climate change, are difficult ones. These are maybe second generation, maybe third generation environmental problems, one might say. Think about the problem of methane leaks. Not that pollution control has ever been truly simple, but rather than targeting a big smokestack that was emitting pollutants, we’re now trying to look at all of these renegade potential sources of methane leaks. That’s a much harder problem. It’s no wonder that the EPA’s methane rule comprises hundreds of pages of text to reflect the complexity of a third generation environmental problem like that.

Stone: Have regulators’ capabilities, in terms of their manpower and other resources, increased as the number of regulations has grown?

Coglianese: You’re right. The number of regulations has grown. By the way, this is something that occurs in Democratic and Republican administrations. One can look at the number of pages in the Code of Federal Regulations, and it’s a steady increase. And that, itself, adds a degree of complexity, to be sure. And we’re also, then, facing the reality that resources for government agencies are constrained relative to the nature of the problem, or even sometimes relative to the past. We have, quite frankly, in a lot of regulatory spheres, concern about an aging-out of regulators who have the expertise and knowledge of very complex energy systems, whether nuclear or more conventional sources of energy.

Stone: It sounds like this regulatory complexity, in a sense, is going to befuddle regulators, because again, there are more elements that need to be regulated, more oversight that’s needed, and the resources have not grown. So I want to go to something that you wrote several years ago. It was a paper titled “Optimizing Regulation for an Optimizing Economy.” In that paper, you note a fundamental mismatch between the ways that the private and public sectors operate and the challenges this creates for effective and efficient regulation, particularly regulation that enables, or at least does not act as a barrier to, innovation. Could you dive into this concept of the optimizing economy and what it would mean for regulators to be optimized, as well?

Coglianese: Well, sure. Let me just start by saying that when it comes to optimizing regulators, to build on what we had been talking about, with increased complexity in the world at large and limitations on resources and capacity, regulators are being asked to do more with less. That in itself creates an obvious need for optimizing, to use existing governmental capacities much more wisely and effectively.

But it is also the case, as you say, that there is something else happening in the economy that I call the “optimizing economy.” The private sector is also getting good at doing more with less and with operating in ways that make more targeted and efficient uses of private resources, often through more individualized decision-making and customization.

Let me give you just a couple of examples, and then I can also tie that into this notion of optimizing regulation. In terms of the optimizing economy, we see the use of artificial intelligence being able to deliver precision medicine that’s customized to individuals and their genetic make-up or their own health conditions. Precision marketing is being used. Social media is getting very good, as are websites like Amazon and Netflix, at telling us exactly what we might like to buy next or watch next. That’s a type of precision and individualization that’s happening. We see this with fintech. We see also a customization and optimization of resources, with new services in what some people have called the “sharing economy.” We had this capacity in housing that wasn’t being used, but now with Airbnb and other kinds of online matching services, we can actually use these resources much more optimally, rather than having them sit vacant or unused.

The same would be the case with something like Uber, replacing systems that had built in a lot of taxicab capacity while a lot of people had their cars sitting in their driveways. Now we have those vehicles and people’s labor being used in more optimal ways. All of that is exciting, but it also creates different kinds of regulatory challenges. Precision medicine might be a really good illustration of this.

We no longer have a regulatory system that would say, “Let’s test a drug on a large sample of people and see whether it works or whether there are any side effects,” because the whole point of precision medicine is it’s not going to be something you use on a large number of people. So how do you do that? How do you regulate something that’s also distributed? That’s another part of the optimizing economy. And 3D printing is an example of that. Now you have the possibility of people producing their own weapons through a 3D printer. How do we control that?

And now I’ll bring it back to energy, of course, and that is with distributed energy — anything from rooftop solar to some kind of micro-generation unit complicates the problem.

Stone: This is the optimized and individualized type of issue that we’re seeing in the energy space, right?

Coglianese: Absolutely. So this is happening everywhere in the economy, but it’s also happening in the energy sector, absolutely.

Stone: So Cary, we’re going to look at the role that AI can play in efficient regulation of the energy system. But before we dive into that, I want to ask this very basic question to get started. Is the type of AI that would be used in regulation similar to the ChatGPT that has become so popular recently, or is it something different?

Coglianese: Well, yes and no. One thing to know is that there are lots and lots of different types of data analytic techniques that are being trumpeted as artificial intelligence today. Some of them are simple automation of operations or digitization of existing operations. One might say the automated thermostat in my house is artificial intelligence because I set it at certain temperatures, and it will change automatically. But that’s not really what we’re talking about when we’re talking about artificial intelligence. We’re really talking about what are known as “machine learning algorithms.” These are very sophisticated algorithms that process huge volumes of data, often billions of data points, to generate semi-autonomously the forecasts that the data analyst asks them to generate.

Now, something like ChatGPT is based upon machine learning algorithms. It’s what’s often called “generative AI,” but it’s basically doing the same sort of thing: scanning through and drawing upon large volumes of data and making a forecast. At its most basic level, it takes a question that you might ask of it and then generates an answer almost one word at a time, saying, in effect, “Given the words of my answer so far, I’m going to make a forecast, based upon the patterns in all of the written text on the internet, of what the next word should be.” And it turns out that this is a remarkable technology. It can generate what we call “hallucinations,” or erroneous answers, and it can be hacked and led, if it’s not designed well, to tell people to do harm to themselves. And we have to worry about those sorts of things.

It’s still quite remarkable, but it’s still basically the same kind of thing that I’ve been talking about with semi-autonomously scanning through all of the large volumes of data and making a forecast, in that case the forecast of the next word to make a coherent answer to a question.
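
To make that next-word idea concrete, here is a minimal sketch, in Python, of a toy next-word forecaster. It is purely illustrative and not drawn from the interview: the tiny word-pair counts stand in for the billions of patterns a real model learns from text, and the generation loop simply picks each next word in proportion to how often it followed the previous one.

```python
import random
from collections import defaultdict

# Toy "training data": a real model learns patterns like these from
# enormous text corpora rather than a single invented sentence.
corpus = ("the grid is complex and the grid is distributed "
          "and the grid is changing").split()

# Count how often each word follows each other word.
follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    """Forecast the next word, sampling candidates in proportion to how
    often they followed the previous word in the training data."""
    candidates = follow_counts.get(prev)
    if not candidates:
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate an "answer" one word at a time, as described above.
word, answer = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    answer.append(word)
print(" ".join(answer))
```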

So in some sense, yes, they’re all the same, but the reality — and this is one thing that is important to know — is that there are a lot of variations in machine learning. Some are structured models. Some are unstructured models. Some are semi-structured. As I said, there’s generative AI, or foundation models. One could go on and on, but the point is that we’re in an era where digital computing is allowing us to create these somewhat autonomous models and algorithms that will generate forecasts of whatever they are set up to forecast, and often come back with a very accurate set of results.

Stone: A lot has been said about the potential for AI in the energy space. The International Energy Agency, in an article that appeared on its website fairly recently, called the pairing of AI and energy “the new power couple.” It notes the role that AI will play in managing the flow of power on an increasingly complex and distributed grid, which you’ve talked about. Can you differentiate for us a little bit more the role that AI would play in energy regulation, as opposed to more operationally focused AI applications?

Coglianese: Well, there is going to be a range of applications. This is the other thing about AI. Not only are there different types of machine learning algorithms, they are being put to a wide variety of uses. Some of those uses in the energy sector would involve using these algorithms to forecast energy demand much better, and that would enable much more sophisticated and reliable management of a much more complex, distributed energy system. So we’re likely to see a distributed energy system going hand in glove with the need for a more sophisticated set of data analytic tools, including AI, to manage that system.
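
As a rough sketch of that demand-forecasting idea (my illustration, not a tool mentioned in the interview), a model can be trained on historical load and weather data and then asked to predict demand under new conditions. The features and numbers below are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Invented history: hour of day, outdoor temperature (C), and the
# system demand (MW) observed under those conditions.
hours = rng.integers(0, 24, size=500)
temps = rng.uniform(-5, 35, size=500)
demand = (800
          + 10 * np.abs(temps - 18)                        # heating/cooling load
          + np.where((hours >= 8) & (hours <= 20), 50, 0)  # daytime activity
          + rng.normal(0, 20, size=500))                   # noise

# Fit a model on the history, then forecast demand for 6 p.m. on a hot day.
X = np.column_stack([hours, temps])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, demand)
print(model.predict([[18, 33]]))
```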

We were talking earlier about ChatGPT, though, and there’s another use case that Microsoft is exploring right now: using this technology to process regulatory permits, applications, and supporting documents for getting approval for nuclear power plants. These approval processes often require tens of thousands of forms and documents and studies, and Microsoft thinks this kind of generative AI of the ChatGPT variety might actually help in processing all of that paperwork. You could also see the government side relying on some of these large language models to manage its own flow of paperwork.

Stone: So documentation is one area where it can help out. What are some of the other areas? I’ve seen identification of regulatory targets, and then ongoing monitoring and detection at regulated facilities.

Coglianese: Sure, so a large utility company or energy system might be worried, for example, about methane leaks — something we were talking about earlier. It may have innumerable possible sources of leaks to try to detect, and a limited number of people to go out and check those or tighten valves, or what have you. And so one thing that machine learning algorithms can do is optimize scarce resources by identifying likely targets. Where in a complex industrial system is a methane leak more likely to occur? That’s something that an AI tool could, in principle, help answer.
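
A hedged sketch of how that kind of leak targeting might look in practice: fit an anomaly detector to routine sensor readings and send the limited crew to the components that look most unusual. The sensors, readings, and component counts here are invented for illustration, not taken from any real monitoring system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Invented readings for 300 valves and fittings: methane concentration
# (ppm), equipment age (years), and pressure drop across the component.
typical = rng.normal(loc=[2.0, 10.0, 0.5], scale=[0.5, 5.0, 0.1], size=(295, 3))
suspect = rng.normal(loc=[9.0, 25.0, 1.5], scale=[1.0, 5.0, 0.3], size=(5, 3))
readings = np.vstack([typical, suspect])

# Rank components by how anomalous they look, so a small inspection crew
# can visit the most suspicious sites first.
detector = IsolationForest(random_state=0).fit(readings)
scores = detector.score_samples(readings)   # lower score = more anomalous
priority = np.argsort(scores)[:10]          # the ten most suspicious components
print("Inspect components:", priority)
```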

From the regulatory side of things, there’s actually some very good research that shows that environmental regulators and other regulators — this has been shown in a variety of settings — can find those firms that are more likely to be in non-compliance by using machine learning algorithms and big data analysis to allocate their scarce inspection resources. Rather than just randomly sending inspectors out, you come up with a sophisticated machine learning algorithm that forecasts which companies or facilities are most likely to be in non-compliance. Let’s send our resources there.
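
Here is a minimal sketch of the targeting approach described above, on invented data: train a classifier on past inspection outcomes, then rank uninspected facilities by their predicted probability of non-compliance so the scarce inspection resources go to the highest-risk sites first. The feature names and numbers are assumptions made only for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# Invented inspection history: facility features (prior violations, years
# since last inspection, emissions intensity) and whether a violation was
# found on inspection (1) or not (0).
X_hist = rng.random((1000, 3)) * [5, 10, 100]
risk_signal = 0.3 * X_hist[:, 0] + 0.05 * X_hist[:, 1] + 0.01 * X_hist[:, 2]
y_hist = (risk_signal + rng.normal(0, 0.5, size=1000) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X_hist, y_hist)

# Score facilities that have not yet been inspected and send the limited
# pool of inspectors to the highest-risk ones first.
X_new = rng.random((200, 3)) * [5, 10, 100]
risk = model.predict_proba(X_new)[:, 1]
top = np.argsort(risk)[::-1][:20]
print("Highest-risk facilities:", top)
```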

So those are a couple of other examples, and quite frankly, what we’re going to find in the years ahead is that almost anything we have been doing to date could be improved, probably through AI. Or there may be many new uses, things that we haven’t thought of before, that we could apply AI to help improve performance and maybe reduce or shift the nature of human workload and human oversight, especially in these complex areas that we’ve been talking about.

Stone: What is the current state of development of AI in the regulatory space? Have regulators at the federal and the state levels really begun to implement AI into the regulatory process?

Coglianese: Well, the Energy Department actually is using AI for a lot of research. We do know that. The federal government released in October a spreadsheet showing about 710 different use cases of AI across the federal government. A lot of that is, right now, in the area of research, of forecasting and doing a better job of understanding problems. But over time, we’re going to see it being used increasingly to supplement or maybe even substitute for humans. Another big area where it is being used by federal regulators is in the detection of possible regulatory violations.

Stone: What role might there be for AI in understanding supply chains and their climate impact? I’m thinking here broadly of pending reporting requirements: ESG reporting requirements from the SEC, and tracking of upstream and downstream Scope 2 and Scope 3 emissions. It’s all very complex, with lots of elements there. Can AI be useful in that, as well?

Coglianese: Again, a couple of things. One is optimizing supply chains. It’s an optimizing scenario, and machine learning algorithms can be really good at making better use of scarce resources. They are being used in procurement, for example, to identify whether a business is likely to be able to come through and deliver on a contract and provide goods or services on time. So in that sense, in supply chain management, AI can be a useful tool.

But as you say, even keeping track of where you should be looking in your supply chain for opportunities to reduce energy or reduce greenhouse gas emissions — again, those are optimizing problems, as well. So anywhere we have a situation where we’re trying to target better or use limited resources more wisely, then we’re likely to see AI being a very promising candidate.

Stone: We talked about AI as a tool to enable effective regulation. It seems to me that one of the key issues that also needs to be considered is whether regulation is, in fact, effective, right? So to what extent might there be a role for AI in assessing regulatory effectiveness?

Coglianese: Well, it is almost a truism in a way. You can’t have AI working at all unless you have large sources of data. If we’re able to start using AI for some regulatory function of one sort or another, then it by necessity means we have data, and that data itself can then be useful, through more conventional, evaluative statistical techniques, to assess the performance of regulation, or, by the way, to assess bias in regulation or bias in the use of an algorithm, which is certainly a very real concern. When you have the data, you can start to analyze performance and side effects and other problems in a way that you can’t when you just don’t have the data to begin with.
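
As a simple sketch of what such an assessment could look like once the data exists (again, invented numbers and groupings, not an analysis from the interview), one can compare an algorithm’s flag rates, and its hit rates, across groups of regulated entities to look for skew.

```python
import pandas as pd

# Invented targeting records: whether the algorithm flagged each facility
# for inspection, whether a violation was found, and a characteristic we
# want to audit for bias (small vs. large operators).
records = pd.DataFrame({
    "operator_size": ["small"] * 6 + ["large"] * 6,
    "flagged":       [1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1],
    "violation":     [1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0],
})

# Flag rate by group: large unexplained gaps can signal bias in targeting.
print(records.groupby("operator_size")["flagged"].mean())

# Hit rate among flagged facilities: of those flagged, how often was a
# violation actually found?
flagged = records[records["flagged"] == 1]
print(flagged.groupby("operator_size")["violation"].mean())
```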

Stone: A related question here. We’ve talked a lot about the regulators, and a little bit about the regulated. From the regulated side, the companies and entities that are going to be regulated, and watched, by AI are going to want to know what’s in that AI black box. Transparency is going to be very important. Do you see this transparency issue as a potential hurdle to implementing AI in regulation? What’s being done to ensure that transparency exists?

Coglianese: Sure, it’s a common issue, because these machine learning algorithms not only have the property of being somewhat autonomous, they are also opaque. Because they are scanning this large data to find patterns on their own, it’s often not clear how to interpret the patterns that are found. In a conventional statistical analysis, a human identifies variables, specifies some kind of mathematical relationship between them, and then runs it on the data. If you get some kind of statistically significant results, you can start to say something about which variables seem to matter, and to what degree. With machine learning, that kind of interpretability that we have come to expect with traditional statistical analysis isn’t quite there. So these algorithms have sometimes been called “black box algorithms.”

Now the techniques for interrogating machine learning algorithms are improving, and data scientists are working on ways of making them less opaque, but there is still certainly some kind of fundamental difference about these algorithms that gives rise to the transparency issue that you presented. And actually, about seven or eight years ago, I started working on a project looking at whether government regulators could rely on black box algorithms. We think about government as needing to be transparent. How could government agencies rely on these black box algorithms and still fulfill their legal obligations to be transparent?
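
One widely used example of the interrogation techniques mentioned above is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. The sketch below is a general illustration on invented data, not a method attributed to the interview.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)

# Invented data: three facility features, of which only the first two
# actually drive the outcome the model is asked to predict.
X = rng.random((500, 3))
y = (2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.2, size=500) > 1.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and see how much accuracy falls; bigger drops mean
# the opaque model is leaning more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, drop in zip(["prior_violations", "emissions_intensity", "random_noise"],
                      result.importances_mean):
    print(f"{name}: {drop:.3f}")
```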

To put briefly a very long legal analysis that I provided in a paper called “Regulating by Robot,” and then a follow-on paper called “Transparency and Algorithmic Governance”: I think the legal rules can be satisfied, as long as some of the basic parameters and basic information about how these algorithms are operating can be provided to the public. And certainly if these algorithms are being used in the law enforcement context, well, we’ve never really expected there to be freedom of information and openness about how law enforcement officers make their targeting decisions.

Transparency problems are certainly intrinsic, but I think the law has been pragmatic, and agencies shouldn’t shy away from trying out and using these tools where appropriate, and from doing so with care and with good documentation. Also, by the way, I’ve written some other papers recently that say that if a government agency is relying on some third-party contractor to do its data analysis and build an AI-based tool, it had better make sure that the contracts developed through its procurement process are written in a way that will allow it, the government agency, to provide the public with some modicum of information to sustain the use of that tool in the face of some kind of challenge. There’s an intrinsic black box nature to the algorithm, but there’s almost a second black box when you have a third party providing services and claiming some kind of trade secret protection. That secondary black box nature to these tools can be readily solved by making sure that government contracts are written in a way that ensures the contractor must provide some modicum of information.

Stone: I have a final question for you here, and that’s on the net energy impact of AI in regulation. One of the big criticisms and concerns around AI has been its actual energy consumption, which is very, very large. In using AI in regulation, do we use so much energy that we almost offset what we’re trying to achieve here, in terms of greater efficiency and environmental friendliness of the energy system?

Coglianese: Well, some of these AI tools, particularly these large language models, are very data- and demand-intensive. They require a lot of computing power and a lot of electric power to run. So I think what you’ve framed as the question is really the challenge for us going forward: How do we make sure that society overall can benefit from the positive efficiency gains and improvements in workflows and other advances that are made possible through artificial intelligence, without incurring an offsetting energy demand that is itself counterproductive and contributes further to the climate crisis at hand?

Chip technology keeps advancing, and certainly we are seeing advances in renewables and an expansion of the use of renewable sources of electricity. One would hope that these developments can go hand in glove, but it is a challenge that we face as a society with this technology: to make sure, going forward, that the energy demands and the consequent emissions from any non-renewable sources of electricity don’t overwhelm the positive benefits that could come from the technology.

Stone: Cary, thanks very much for talking.

Coglianese: Thank you. It was fun.

Stone: Today’s guest has been Cary Coglianese, Director of the Penn Program on Regulation and Professor of Law at the University of Pennsylvania.

Guest

Cary Coglianese

Edward B. Shils Professor of Law
Cary Coglianese is the Edward B. Shils Professor of Law and Professor of Political Science at the University of Pennsylvania Carey Law School. He is also the director of the Penn Program on Regulation.
Host

Andy Stone

Energy Policy Now Host and Producer
Andy Stone is producer and host of Energy Policy Now, the Kleinman Center’s podcast series. He previously worked in business planning with PJM Interconnection and was a senior energy reporter at Forbes Magazine.