
Planning the Grid in an Age of Uncertain Demand Growth


AI data centers are driving rapid demand growth, exposing the limits of traditional electricity forecasting and planning.

Electricity demand in the United States is rising fast, fueled in large part by the rapid expansion of AI data centers. Grid operators have repeatedly revised their demand forecasts upward as they try to anticipate how much new power these facilities, along with other emerging loads such as advanced manufacturing and crypto mining, will require.

In January, however, something unexpected happened. PJM Interconnection, the nation’s largest grid operator, lowered its demand growth outlook, just weeks after a capacity auction driven by expectations of booming demand produced record-high prices.

Estimating how much electricity new data centers and other large loads will actually add to the grid is difficult, and the uncertainty cuts both ways. Overestimating demand can leave consumers paying for grid infrastructure that never gets fully used. Underestimating it can threaten reliability. All of this is playing out as the rapid buildout of data centers is increasingly framed as a question of economic competitiveness and national security.

On the podcast, Shana Ramirez and Arne Olson of Energy and Environmental Economics argue that while improving forecast accuracy remains important, uncertainty itself needs to play a more central role in how the grid is planned and governed. In a recent E3 paper, they lay out why demand forecasts will remain imperfect, and why grid rules and planning processes should be designed to work across a range of possible outcomes rather than relying on a single view of the future.

Ramirez and Olson discuss the reliability and cost challenges this uncertainty creates and describe governance approaches that could help the power system remain reliable and affordable as new loads come online.

Stone: Welcome to the Energy Policy Now podcast from the Kleinman Center for Energy Policy at the University of Pennsylvania. I’m Andy Stone. Electricity demand in the U.S. is growing at a breakneck pace, driven largely by power-hungry AI data centers. Grid operators and industry watchers have responded by repeatedly revising their electricity demand forecasts upwards, sometimes dramatically, to estimate where future demand is heading.

Yet something unexpected happened in January. PJM Interconnection, the largest grid operator in the U.S., actually lowered its demand growth expectations in its January 14 forecast. This reduction came just after the market’s most recent capacity auction, where expectations of booming power demand led to record-setting prices and costs that will ultimately be borne by consumers.

There are many threads to unpack here, but today we’re going to focus on one in particular, which is uncertainty. Specifically, the uncertainty around how much power AI data centers, new manufacturing facilities, and crypto mining operations will actually need from the grid. Getting this number right is critical. Overestimate demand, and consumers pay for excess grid infrastructure they don’t need. Underestimate it, and the reliability of the grid could suffer. All the while, the challenge of getting new data centers online quickly has been framed as nothing less than a question of the nation’s economic future and security. But here’s the catch. Demand forecasts are inherently uncertain, even more so in today’s dynamic electricity system.

On today’s podcast, I’ll be speaking with two industry experts who suggest reframing the challenge. They argue the question isn’t so much about forecast precision as about coming to terms with the uncertainty that forecasting inevitably entails, and then operating the grid in ways that embrace that uncertainty. This means developing rules and processes— what’s broadly called grid governance— that prepare us for a range of possible futures.

My guests, Shana Ramirez and Arne Olson, are with Energy and Environmental Economics, a power industry consultancy. In December, E3 released a paper describing the uncertain demand future facing the grid and proposing governance solutions to manage that uncertainty. Shana and Arne will walk us through that thinking and give us a detailed look at the reliability and economic challenges uncertain new demand presents. Shana and Arne, welcome to the podcast.

Arne Olson: Thank you for having us.

Shana Ramirez: Thank you.

Stone: And Arne, welcome back, as well, to Penn. I understand you are a graduate of Penn’s Energy Management and Policy Program.

Olson: Yes, exactly. Yeah. I had a chance to visit the Penn campus last summer, and really enjoyed it, and so I’m very happy to be on the podcast.

Stone: Well, it’s great to have you back, both of you back here today. So let’s go ahead and get started. Power demand has been growing very rapidly in this country, and forecasting the pace of this growth has suddenly become a critical priority for the electric power sector and for policy makers. So start us out. Can you tell us what is at stake with these forecasts?

Olson: Well, as you mentioned in your introduction, the forecasts are what the utilities and the grid operators use to develop their plans for how much infrastructure needs to be added. So what’s at stake, really, is that question. How many new transmission lines, how many new distribution lines, how many new power plants need to be developed to serve the load that’s expected? In order to right-size the infrastructure, you need an accurate forecast of how much load you expect to serve next year, the year after and the year after. The more uncertainty in your load forecast, the less you can right-size those investments, and the higher the risk that you either overinvest— and, as you noted, impose higher costs on consumers— or underinvest, and potentially degrade the reliability of the power system.

Stone: So Shana, I want to take a moment here. We’re going to be talking a lot about what’s called large load additions, or LLAs in the industry. I want to ask you what qualifies as large, and why do these projects in particular create big challenges for the grid?

Ramirez: So, there’s no single industry definition of large. We see thresholds anywhere from one megawatt in some large load tariffs to 100 megawatts. I think what matters more than the absolute size is the degree to which a single customer can materially change the system. A useful way to think about it is proportionality. So adding 200 megawatts to a large RTO footprint is very different than adding 200 megawatts to a smaller, vertically integrated utility. In many cases, these loads represent a step change, not a marginal increase.

And then the planning challenge comes from timing and irreversibility, right? So generation and transmission take years to build, and recover costs over decades. When a large load arrives quickly or departs unexpectedly, it creates a mismatch that is difficult and expensive to unwind, or you can’t unwind it.

Stone: So these big loads are coming in. They’re coming in really fast, faster than the grid has ever seen before. And you’re saying, getting the power capacity, the generating capacity, takes a lot longer. And getting the grid ready to serve these new loads takes a lot longer than it does to actually build the loads themselves.

Ramirez: Oh, very much so. A data center can be built in less than a year. Some transmission and generation can take three to 10 years to build, just depending on the size and location and all of that.

Stone: And I just want to also get a little bit more perspective here. You mentioned 200 megawatts as an example. I assume that some of the data centers could consume that much capacity. But give us an idea, what does 200 megawatts look like in something that we might be able to relate to in terms of a number of homes that could be served with that amount of capacity, or something like that?

Olson: Let’s say a medium-sized utility might have a million customers, and that utility might serve 5,000 megawatts of peak load. That utility, if it’s growing at a normal industry rate, 1 or 2% per year, might be adding 100 to 200 megawatts per year at most, just from organic growth. Now, the thing about organic growth is that it depends on a million customers, and maybe I’m adding 10,000 or 20,000 customers this year. Maybe the economy is growing at 1 or 2 or 3 or 4%. Because there are so many customers, I can really capture the uncertainty just through the law of large numbers. I can use statistics to just understand what the range of potential reasonable outcomes might be from my future load forecast.

If I’m looking at a single customer that has 250 megawatts of load, that might be two years’ worth of organic growth. And that’s not a law-of-large-numbers normal distribution. That’s a binary distribution. Either it comes online or it doesn’t. So that’s a huge amount of uncertainty, much larger than the uncertainty that I would normally face as a utility planner just planning my organic growth.

Now, if I have 10 of those customers that are 250 megawatts each, that’s 2,500 megawatts. And remember, my hypothetical utility only had 5,000 megawatts that it was serving at the starting point. So that then becomes just a tremendous amount of uncertainty for that utility to deal with, and understanding how many power plants it needs to add and how much new transmission it needs to add, et cetera, becomes a really fraught exercise.
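To put rough numbers on the contrast Olson draws, here is a minimal Python sketch. The customer count, per-customer statistics, and the 50/50 odds on the large load are invented for illustration; they are not figures from the episode or the E3 paper.

```python
# Contrast the two kinds of load uncertainty: many small customers whose
# variations average out, versus one large load that either shows up or not.
import numpy as np

rng = np.random.default_rng(42)
N_TRIALS = 100_000

# Organic growth: 1,000,000 customers, each adding ~0.1 kW of new peak
# demand on average, with substantial individual variation (assumed values).
# The law of large numbers keeps the aggregate tightly clustered.
n_customers = 1_000_000
mean_kw, sd_kw = 0.1, 0.5
# Aggregate is approximately normal: mean = n*mu, std dev = sqrt(n)*sigma.
organic_mw = rng.normal(n_customers * mean_kw,
                        np.sqrt(n_customers) * sd_kw,
                        N_TRIALS) / 1000.0  # kW -> MW

# One large load: 250 MW that materializes with 50% probability (assumed).
large_mw = 250.0 * rng.binomial(1, 0.5, N_TRIALS)

for name, x in [("organic growth", organic_mw), ("single 250 MW load", large_mw)]:
    print(f"{name:>20}: mean = {x.mean():6.1f} MW, std dev = {x.std():6.1f} MW")
```

Run it, and the million-customer aggregate lands near 100 MW with a standard deviation well under 1 MW, while the single binary load carries a standard deviation of about 125 MW — the "fraught exercise" in numerical form.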

Stone: So those really are, again, outsized additions. And so I guess the next question that comes out of this is, there’s a lot of uncertainty around how many of those new 200 megawatt, or whatever they may be, demand centers are coming online. And this uncertainty, as I think I mentioned earlier on, is really the focus of your recent paper from E3. And that’s titled, for people who are interested in looking it up, Forecasting Large Loads in the Age of AI Data Centers. So I want to ask you, Shana, what are the main sources of uncertainty around how many data centers, or how much data center demand, will actually materialize in the coming years?

Ramirez: The first source of uncertainty, in my opinion, is the speculative behavior that’s happened with some entities. So right now, many different types of entities are positioning themselves as data center developers when they haven’t ever done that before, and oftentimes submitting multiple interconnection requests to multiple utilities to hedge timing and location risk, when they may not ever have any desire to actually build in that utility service area. So that inflates the apparent demand, but without a corresponding increase in actual projects.

And I think the second piece to that is, AI is such a new technology, nobody knows— at least I don’t know— if it will stand the test of time. We don’t know how the compute efficiency and the architecture and the business models will evolve. It’s hard to plan for something when you don’t have any idea if that demand will still be there in the future. And I think history tells us that energy intensity rarely stays constant over time.

Stone: So there’s a lot in here. There’s speculation about how many of these centers will actually come along. There’s speculation about how much energy they’re actually going to use if and when they’re up and running, right? There’s some throwing of spaghetti against the wall with these speculative proposals from the data center developers. You just don’t really know what’s real and what’s not.

Olson: Just to chime in, there’s also uncertainty about where they’ll locate. So, we don’t have a good guide, unfortunately, to tell them, “If you locate here, that’ll be the lowest cost interconnection.” The only thing that they can do is just ask. They make a request. “Can I interconnect 500 megawatts here?” And then the utility has to go off and study, what would it take to interconnect 500 megawatts in that location?

But because those studies take time, and they don’t know what the answer will be, what they do is put in a number of requests. Say, “Can I locate here? How about here? How about here? How about here?” So they might have 10 requests in to different utilities for every one or two data centers that they actually intend to build, because that’s what they need to do to be able to identify a good site to locate from the perspective of the system. So there’s both uncertainty about how much AI demand there will be overall, as Shana noted, and there’s also uncertainty about where exactly it will locate, and how many of those will locate in any one given utility service area.

Stone: Does that mean that some of these new data centers are getting double, triple, quadruple, counted in the load forecast, because they’re actually proposing to build, in many different areas, the same data center?

Ramirez: Potentially. And I think that is true. What we’ve recommended is to start counting them when they’re more certain. So, when there has been some financial capital put down, or when energy service agreements have been executed. That makes it much more real than just a name in a queue.
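A minimal sketch of what that milestone-based counting might look like in practice follows. The stage names, weights, and projects are hypothetical illustrations, not values from the E3 paper or any actual tariff.

```python
# Count queued projects toward the load forecast in proportion to how
# firm their commitments are, rather than at full nameplate.
STAGE_WEIGHTS = {  # hypothetical probabilities of materializing
    "inquiry": 0.05,             # a name in a queue, no money down
    "study_deposit_paid": 0.25,  # some financial capital committed
    "esa_executed": 0.90,        # energy service agreement signed
    "under_construction": 1.00,
}

queue = [  # invented example projects
    {"name": "DC-Alpha", "mw": 300, "stage": "inquiry"},
    {"name": "DC-Beta",  "mw": 250, "stage": "esa_executed"},
    {"name": "DC-Gamma", "mw": 500, "stage": "inquiry"},
    {"name": "DC-Delta", "mw": 150, "stage": "under_construction"},
]

nameplate = sum(p["mw"] for p in queue)
weighted = sum(p["mw"] * STAGE_WEIGHTS[p["stage"]] for p in queue)
print(f"Nameplate in queue:          {nameplate:,} MW")
print(f"Milestone-weighted forecast: {weighted:,.0f} MW")
```

Here the 1,200 MW of nameplate requests shrinks to roughly 415 MW once speculative inquiries are discounted, which is the gap between "a name in a queue" and load that is likely to show up.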

Stone: Well, another point that you make in the paper as well, is you point out that not only do we have the uncertainty about what’s coming, but we don’t necessarily have really strong data on what exists out there today. I understand there’s not a real— I don’t know if “consistent” is the right word, but something like that— database of what actually exists on the grid today. Is that right, in terms of data centers?

Ramirez: Yes, that’s definitely true. And any type of database, or anything that does have some of that information is very much incomplete. And it’s also going off of the announcements of data centers that they’re going to be building here, or news that this is happening. So it’s very much not a science at this point.

Olson: Part of it is, we have some data center developers that are, in effect, hyperscalers, saying that power is the constraint on their growth. And we almost hear that they’ll take as much power as you can possibly deliver. Now that can’t be true. There has to be a limit to their appetite at some point. But this is what the proponents themselves are saying. Mark Zuckerberg has been out there saying power is the constraint on our ability to grow our business. And just given the state of uncertainty in the AI business, as Shana alluded to before, they all are in a race to develop this new technology. It’s almost like a land rush. You know, if you get there first, then you have a huge advantage over all of your competitors who are trying to catch up. If you’re the one that’s left behind, that could be an existential risk to your business.

So they’re all trying to get as much done, as much compute power out there, as much large language model training as they can. Power is a constraint to that. So there’s very much a land rush mentality happening right now, and the power sector is at the kind of back end of that, bearing the consequences.

Stone: Yeah. I want to ask you more, Arne, about those consequences. You already hit on it a little bit, but I wonder if we could go a little bit further, about the real world consequences of under and over forecasting electricity demand. Tell us a little bit more about what’s at stake here.

Olson: Well, we already discussed a little bit about what the consequences would be. A more normal situation is, if the utility over-forecasts its load, that likely means that it overinvests in grid capacity. And that imposes some additional costs onto its rate payers. Now, if the utility continues to grow, then load growth can catch up with that overinvestment, and so it’s really a matter of having built too early, and there are costs associated with that.

Stone: I want to ask you, when you’re talking about overbuilding, we’re talking about transmission? We’re talking about generation? Both? I just want to make sure I’m clear on that.

Olson: Yeah, no. Good question. It’s really all of it, and it just depends on the specific site, whether transmission is needed and how much, or distribution, substations, et cetera. But yeah, just think of it as all of the infrastructure needed to generate and deliver power to a specific customer.

If you underinvest, or you under-forecast, then you might really put grid reliability at risk. So you know, there’s “I have more load than I expected, I have less power than I need. I might be able to go out to the market and buy that power, but I might not be able to.”

So normally, the utility would look at those two risks and see a large asymmetry. It’s much better to build a little bit too early and spend a little bit extra money than it is to put grid reliability at risk and potentially have rotating blackouts because you didn’t have enough power. So typically, they would, all else being equal, probably over-forecast, or be a little bit conservative in their forecast.

With these very large new loads now coming online, the amount of money that’s at stake for the investment component is so much larger, and the risks are so much larger, that it turns that asymmetry on its head a little bit. And because those risks are so large, additional tools and risk management become really important. That’s why E3 was interested in writing the white paper to sort of deal with this new form of uncertainty, and to think of it from a risk management perspective.
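To make that asymmetry concrete, here is a back-of-the-envelope sketch. The carrying-cost and shortfall-cost figures are assumptions chosen only to show the shape of the trade-off, not estimates from E3.

```python
# Expected annual cost of building (or not building) for a large load
# that arrives with some probability. All dollar figures are assumed.
CARRY_COST = 100_000        # $/MW-yr carrying cost of surplus capacity
SHORTFALL_COST = 2_000_000  # $/MW-yr cost of unserved load / emergency supply

def expected_cost(build_mw: float, load_mw: float, p_arrives: float) -> float:
    """Expected annual cost when the load shows up with probability
    p_arrives and is absent otherwise."""
    cost_if_arrives = (max(build_mw - load_mw, 0) * CARRY_COST
                       + max(load_mw - build_mw, 0) * SHORTFALL_COST)
    cost_if_absent = build_mw * CARRY_COST
    return p_arrives * cost_if_arrives + (1 - p_arrives) * cost_if_absent

for build in (0, 250):
    cost = expected_cost(build, load_mw=250, p_arrives=0.5)
    print(f"Build {build:3d} MW for a 50/50 250 MW load: "
          f"expected cost ${cost/1e6:.1f}M/yr")
```

Under these assumed numbers, not building carries a far larger expected cost than building, which is the traditional tilt toward conservatism — but if the load never comes, ratepayers are left carrying the full $25M a year for idle capacity, which is why the dollar stakes of these decisions have grown so sharply.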

Stone: Well, that’s a great lead-in, because Shana, I want to go into that issue with you right now. So again, as Arne just said, this has been really framed as a forecasting problem. Get the forecast right, because you need to know how much infrastructure to build to support these new data centers, these new large loads. But again, the future. There is no crystal ball. We don’t know. So the report is about reframing this from a “get the forecast nailed” problem to a “manage the risk” problem. Tell us a little bit more about this shift in framing.

Ramirez: Yeah. So, I mean, as we all know, a forecast is never going to be perfect. It’s never going to be quote, unquote, “right.” But that imperfection is amplified in an environment with rapid technological change and speculative investment. So the mistake is treating the forecast as a prediction, rather than just an input to making your decisions, whether that’s resource procurement or investment in upgrading your transmission system.

And what we think utilities really need to manage is the exposure. How much risk are they taking on if load grows faster than expected? How much if it falls short? And framing it as risk management shifts that focus towards flexibility and financial safeguards. It’s more about looking at a range of possibilities, and not just what has historically been the most likely scenario. Because that’s just not where we’re at right now in the energy industry.

Stone: So we’re talking about looking at the future as a risk management challenge. But just going for a moment to something that Arne said a little bit earlier, you noted the magnitude of infrastructure that is in play here when we talk about building out the grid to meet these future demands. Hopefully, they will materialize if we’ve built to plan for them. But a related issue here is that the electricity industry itself is not really built to operate at the level of speed and adaptability and nimbleness that is currently being called for. I wonder if you could just kind of acknowledge or talk about this fundamental tension, because it will impact the ability of data centers to connect to the grid, and it will ultimately influence how much new demand will be coming on and when.

Olson: Sure. I mean, I think you need to start with the fact that electricity infrastructure is enormously capital-intensive. And it just requires, historically, a long time to engineer, to plan, to permit, to construct, to commission and to energize. So just to give you an example, a new high-voltage transmission line can often take 10 years or even longer from the time that you first start to plan that line to when you bring it online. There’s a famous power line in the Northwest, where I’m from, that connects Idaho with Oregon that is now, finally, well under construction. It’s been 19 years for that project. That’s just one example. Power plants will typically take four to six years. Transformers and a lot of other things are in short supply.

So those are the types of time frames that we’re dealing with here. And there’s typically a lot of public process around both how much the costs should be and who should bear them, but also around the environmental impacts of the infrastructure that’s being built. The land use impacts from transmission lines, long, linear facilities. There are emissions impacts from power plants, and land use impacts as well. So the public has historically wanted to have a say in the nature, size, and extent of these types of facilities.

If you contrast that with the world that a lot of the hyperscalers are used to dealing with, you know, Moore’s law is only focused on 18 months, right? It says, “You’re going to double your computing power every 18 months.” So they’re used to building things much faster. Constructing chips, racking them up into servers, into larger data centers, is just not nearly as capital-intensive and as extensive of an exercise. So it’s a bit of a clash of mindsets, in a way, between these new large consumers of electricity just having a time frame that doesn’t match with the way that the energy sector has traditionally done business. And it’s not clear that we can really speed things up that much more with respect to the electricity sector, because the grid impacts of these new facilities are very, very extensive, and they need extensive study to ensure that other people won’t be harmed by these new loads coming online.

Ramirez: Yeah. I think of it as like a cultural mismatch. And as Arne said, data center developers operate in a highly competitive, fast-moving environment. And utilities operate in almost the exact opposite, in a very regulated, risk-averse one. And for important reasons, both of those are the way that those industries work. But coming together, it’s very hard. There’s never going to be a match, I think, between the speed that the data centers want and what the utilities can actually do.

Stone: As you start looking at the future, again, as uncertainty to be managed, you at E3 have built your own models for what future demand will look like. And in doing so, you’re not throwing forecasting out the window completely. But instead, you’re presenting a range of scenarios of possible futures of electricity demand.

And scenarios are used a lot in forecasting. The International Energy Agency, for example, uses scenarios when it looks at future global energy demand. In your view, how and why should utilities and grid operators be using scenarios in their efforts to understand future ranges of demand and the uncertainty around that, Shana?

Ramirez: Scenarios acknowledge explicitly the uncertainty that’s happening in the industry right now. Whereas a single point-in-time forecast creates a false sense of precision. It implies a level of certainty that just doesn’t exist with the amount of new load being planned for at this time. And I use the hurricane “cone of uncertainty” analogy. You’re not going to evacuate based only on the projected landfall in one certain city. The cities or towns around that are still going to prepare and maybe evacuate, because the hurricane can change track while it’s going.

So that’s the way that we’re trying to think of forecasting, as not a single point in time. You need to keep testing it. You need to make sure that your assumptions are correct, and that they’re as accurate as they can be on an ongoing basis.
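A minimal sketch of what such a scenario “cone” might look like for a utility’s peak load follows. The base load and growth rates are invented for illustration and are not from any E3 or utility forecast.

```python
# Project a fan of peak-load trajectories instead of one point forecast.
# All scenario parameters are hypothetical.
BASE_PEAK_MW = 5_000
SCENARIOS = {
    "low (organic only)": 0.01,              # 1%/yr, little new large load
    "mid (some large loads)": 0.04,          # part of the queue shows up
    "high (full queue materializes)": 0.10,  # speculative requests all build
}

header = "year" + "".join(f"{name:>34}" for name in SCENARIOS)
print(header)
for year in range(0, 11, 2):
    cells = "".join(
        f"{BASE_PEAK_MW * (1 + rate) ** year:>34,.0f}"
        for rate in SCENARIOS.values()
    )
    print(f"{year:>4}{cells}")
```

The table it prints fans out from a single 5,000 MW starting point to anywhere between roughly 5,500 and 13,000 MW by year ten — the cone widening with the horizon, just as the hurricane track does.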

Stone: So despite all this, we’re still seeing grid operators— I’m thinking about PJM in the most recent case— they’re still focusing on a single-point forecast. So the question here might be, does this scenario type of forecasting work within the context of a competitive wholesale electricity market operator, again, like PJM, which is a regional transmission organization? Arne, I want to ask you, why hasn’t such a market published a range of possible futures when it releases its forecast?

Olson: Yeah. So they do publish ranges, and they do use scenarios for long-term transmission planning. So, you know, MISO had, I forget, five different futures that they published when they did their longer-term, 20-year transmission outlook. And we always focus on MISO Future 2A as an example of one that’s policy-compliant, and that a lot of our clients find useful for planning. So for long-term planning, they do. And the range of uncertainty, of course, is larger the farther that you go out in time.

The issue is when they get into their capacity market. So this is where they’re procuring the capacity that the system needs to meet its resource adequacy standard, a year to three years in advance, depending on the market and what setup they have. In those cases, the market operator is running an auction, and billions of dollars are being cleared through the auction in capacity payments. So in that context, they really need to have a number that they procure to. In fact, they don’t have a single number. They have a sloped demand curve. So I guess you could say maybe they’re embedding scenarios a little bit in their sloped demand curve. But in effect, they’re buying capacity for a fixed, expected amount of load. And they’ll buy a little bit more if the price is cheaper, and a little bit less if the price is more expensive. But it’s because they need a number to clear the auction with.

That isn’t to say that that’s the only generation planning anybody should do. The RTO is really almost the backstop, or the spot market for capacity. Any individual end user should be doing its own scenario planning and doing its own hedging against that spot capacity price, when it’s thinking of its own long-term interest. The RTO-run auction is just a price signal that, really, the market should use to do its own scenario planning. But the definition of a market is that there is no central planning, right? Generation is a competitive market. No one is centrally planning that generation portfolio. All of the market participants are doing their own individual scenario planning.
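For readers unfamiliar with the mechanics, here is a simplified sketch of clearing capacity against a sloped demand curve. The curve parameters and offer stack are invented, and pricing at the marginal accepted offer is a simplification of how real capacity auctions set the clearing price.

```python
# Clear a capacity auction against an administratively sloped demand curve:
# the market buys a bit more when offers are cheap, a bit less when dear.
import numpy as np

# Willingness to pay falls linearly from $500/MW-day at 95% of the
# reliability target to $0 at 110% of it. (Invented parameters.)
TARGET_MW = 5_000

def demand_price(mw: float) -> float:
    lo, hi = 0.95 * TARGET_MW, 1.10 * TARGET_MW
    return float(np.interp(mw, [lo, hi], [500.0, 0.0]))

# Supply offers as (MW block, $/MW-day), cleared cheapest-first.
offers = [(2000, 50), (1500, 120), (1000, 200), (1000, 320), (500, 450)]

cleared_mw, clearing_price = 0.0, 0.0
for block_mw, offer in sorted(offers, key=lambda o: o[1]):
    take = 0
    # Accept the block, possibly partially, for as long as the demand
    # curve values one more MW at or above the block's offer price.
    while take < block_mw and demand_price(cleared_mw + take + 1) >= offer:
        take += 1
    if take:
        cleared_mw += take
        # Simplification: price the auction at the marginal accepted offer.
        clearing_price = offer

print(f"Cleared {cleared_mw:,.0f} MW at ${clearing_price:.0f}/MW-day")
```

Make the offer stack cheaper and the auction clears more megawatts; make it more expensive and it clears fewer — the sense in which the sloped curve “buys a little bit more if the price is cheaper.”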

Stone: In the solutions to all this, in managing this uncertainty, you point out that demand response can play a very important role, again, for managing the uncertainty of the grid. And you say, quote, “demand response is a critical hedge against forecast error.” I wonder if you could explain a little bit more the role of DR, demand response, in addressing this forecast uncertainty.

Ramirez: I think the way we’re thinking of it, treating it as a hedge, is that demand response can become a planning asset. It provides optionality if load grows faster than expected. And it reduces stranded asset risk, if it does not.

I would caution, though, on two things. Not all data centers are equally flexible, and assuming they are can be a big risk to your forecast. And then, flexibility doesn’t mean that they’re shutting down their operations. They may be shifting workload to other data centers. And most likely, they’re going to be relying on behind-the-meter generation. And that, at this point, is most likely fossil-fuel-based.

Stone: So some of those data centers are more flexible inherently, right? I think a data center that’s focused on AI training may be able to reschedule its processes, where a data center that actually has to respond to queries in the moment in real time may not be. Is that right?

Ramirez: That’s exactly right. And I would even say that within the AI data centers, what we’ve seen through a few studies and speaking with a bunch of data center developers is that they have less flexibility than— I think there’s some hype around that, around being able to be curtailed and all of that. Because they do have contracts with their customers that they have to fulfill, and being down is very expensive for them.

Stone: And you mentioned curtailed. Curtailment is when their supply would actually be reduced. The grid operator would make that decision, essentially, for them.

Ramirez: Correct.

Olson: Well, I think we’re seeing load flexibility, demand response, on site generation, really all the things that you can do on the site of a data center, as really being an important tool, especially for speed to market. I talked earlier about how the grid studies, impact studies that need to be done for large data centers, are extensive and time consuming. And the grid infrastructure can be expensive and time consuming to construct.

If you can put in place a data center that has resources on site— and it’s a combination, again, of actual compute flexibility, maybe it’s storage, maybe it’s on site generation— and you can agree to operate the data center in such a way that you’re not having any impacts on the rest of the grid, that may not be the most cost optimal way for that data center to come online. It might be better if it were fully grid integrated. But it’s a way for them to get online quicker, while all the studies are being done about how to fully grid integrate that data center. So it’s a way to get online faster. And then over time, it will more and more fully participate in the grid and the wholesale market, as all the studies are done to ensure that it can be done reliably.

Stone: Well, I guess the next question here would be, how do you incentivize these data centers and other large loads to actually be flexible, right? So, to make it worth their while, financially or otherwise, to be flexible. And you highlight the role of rate design in incentivizing flexibility. So for those who may not be familiar, could you tell us what rate design is, and how might it be used to incentivize large electricity consumers to be flexible in their consumption of power, as you’ve just described?

Ramirez: Rate design is basically the way that the utility recovers its costs to serve its customers. For example, traditionally, transmission and generation costs are recovered in demand charges, versus energy, which is recovered in a per-kilowatt-hour charge. And there are all different types of rate designs you can do. One is by “time of use” period. In the high-cost periods, the rates are higher than in the low-cost periods, so you’re incentivizing people to cut their usage when they’re in the high-cost periods.
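To illustrate those mechanics, here is a minimal sketch of a large customer’s bill under a demand charge plus time-of-use energy rates. All rate values and usage figures are invented for illustration; real tariffs vary widely by utility.

```python
# Monthly bill under a demand charge plus time-of-use (TOU) energy rates.
# All rates are hypothetical.
DEMAND_CHARGE = 15.0   # $/kW of monthly peak demand
RATE_ON_PEAK = 0.18    # $/kWh during high-cost hours
RATE_OFF_PEAK = 0.07   # $/kWh during low-cost hours

def monthly_bill(peak_kw: float, on_peak_kwh: float, off_peak_kwh: float) -> float:
    """Recover transmission/generation via the demand charge and energy
    costs via per-kWh charges that differ by TOU period."""
    return (peak_kw * DEMAND_CHARGE
            + on_peak_kwh * RATE_ON_PEAK
            + off_peak_kwh * RATE_OFF_PEAK)

# Assume ~730 hours in a month: 300 on-peak, 430 off-peak.
# A 200 MW data center running flat out...
flat = monthly_bill(200_000, 200_000 * 300, 200_000 * 430)
# ...versus one that trims its peak and shifts usage into off-peak hours.
shifted = monthly_bill(180_000, 180_000 * 60, 180_000 * 670)
print(f"Flat operation:   ${flat/1e6:.1f}M/month")
print(f"Shifted/flexible: ${shifted/1e6:.1f}M/month")
```

Under these assumed rates, the flexible operating pattern saves several million dollars a month — the price signal doing exactly what Ramirez describes, translating system needs into customer behavior.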

And we just think of it as one of the few levers that utilities have that can translate the system’s needs to customer behaviors. But there is the caveat that these customers are not necessarily very price sensitive.

Stone: Meaning, they’re going to run whatever the price of electricity is, right?

Ramirez: Right, right. So, I think it’s more that non-price incentives matter more. What we’ve seen is faster interconnection and non-firm service options. And those can be more motivating to a data center than price signals. But we’re in the gold rush time period right now. If this eventually slows down, they most likely will become a little bit more price sensitive.

Stone: So on the speed to connect side, basically, what you’re saying is, part of the incentives here is, “We’ll get you hooked up, but then you’re going to have to be flexible in return.” Is that correct? If there’s not enough electric power at any given time, you’re going to have to lower your consumption, but at least you’re going to be able to connect to the grid. Is that right?

Ramirez: That’s what we’re seeing in several markets.

Stone: I think ERCOT, the Texas market, is really pushing forward on that. Is that correct?

Ramirez: Yes, it is in ERCOT.

Stone: In the report that we’ve been talking about, you also emphasize the importance of what you call financial symmetry, and its importance in reducing risk and uncertainty around the magnitude of future demand. What is that financial symmetry that you’re talking about? And why is it so important in this context?

Ramirez: We think of financial symmetry as aligning the financial commitments of the data center with the amount of risk to the utility and other rate payers. So a good example of this is, there are, say, 10 data centers in the interconnection queue, and two of them are in the very earliest phases of just exploring if they can be interconnected. Well, at that time, the risk to the utility is very low from a financial commitment standpoint. You know, there still is some risk if you’re counting that load in your forecasting. But the utility hasn’t put out a bunch of capital to upgrade transmission lines or build new generation. And so the financial commitment from the data center should reflect that low risk.

However, when transmission upgrades are occurring and energy service agreements are being executed, the risk to the utility and other rate payers of stranded assets becomes much greater. And that is where you see things like CIAC, contribution in aid of construction, payments being made. You see direct pass-through of transmission upgrade costs, and collateral requirements that ramp up with the risk to the utility and other rate payers.

Olson: You talked about rate design before. Rate design, historically, has been a method of ensuring equitable apportionment and recovery of costs from the variety of different customers that a utility might have. These data centers are large enough that we’re now having to think about equitable allocation of risk. It’s not something that the regulated utility and its regulators have had to think about before, but the risks are so large now that we’re having to apply some of those same techniques. And that, in effect, requires some security from these new large loads. If the utility is going to make billions of dollars of investments assuming that they’re going to come, it needs to have some financial security to protect its other rate payers against those risks.

Stone: Basically getting to the point here, they have to abide by their commitments, essentially, is what we’re talking about?

Olson: Yeah. Again, which is not something that’s been done historically. But that’s because we’ve been dealing with a million customers that are a few megawatts each, and not a few customers that are hundreds of megawatts each.
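A minimal sketch of what a milestone-based security schedule could look like follows. The milestones and percentages are hypothetical illustrations, not terms from the E3 paper or any actual large-load tariff.

```python
# Financial symmetry: the security a large load must post ramps up with
# the utility capital actually at risk on its behalf. All values assumed.
MILESTONE_SECURITY = {  # fraction of at-risk capital to collateralize
    "queue_inquiry": 0.00,
    "system_impact_study": 0.02,
    "facilities_study": 0.10,
    "esa_executed": 0.50,
    "network_upgrades_underway": 1.00,
}

def required_security(milestone: str, at_risk_capital: float) -> float:
    """Collateral owed at a given development milestone."""
    return MILESTONE_SECURITY[milestone] * at_risk_capital

# Hypothetical $400M of utility capital committed for one customer's upgrades.
upgrade_cost = 400e6
for stage in MILESTONE_SECURITY:
    print(f"{stage:>26}: ${required_security(stage, upgrade_cost)/1e6:6.1f}M security")
```

The point of the ramp is exactly the symmetry Ramirez and Olson describe: a speculative name in the queue posts essentially nothing, while a project the utility is actively spending hundreds of millions on stands fully behind that capital.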

Stone: If we’re looking ahead at the future, how should our listeners be thinking about the next few years of large load growth? What is success going to look like in managing this uncertainty and managing it well, even if the forecasts keep changing? Shana, let’s start with you on that one.

Ramirez: I expect that the current pace of the projected growth will eventually slow, not because the demand disappears, necessarily, but just because of the physical constraints of not being able to get generators and all of that. The backlog is getting very big. But I think the broader takeaway is, alignment between load growth, resource development and system capability has to be intentional and thought through. And that’s why we’re having this exact conversation now, at this time.

Olson: It’s going to be messy for the next five years. And this is going to play out in a thousand different individual cases across the country, across all 50 states. A data center is going to want to come online. They’re going to put in a projection, they’re going to put in a request. The utility is going to study it. There will be cases where the utility spends some money and the data center load goes away. It’s unavoidable. I think we want to make sure that the financial consequences fall correctly on the data center, if they’re the ones that pull out. That’s a bit of what the point of the paper is. But there are going to be cases like that.

Hopefully, they’ll be few and far between. Hopefully, the rate payers will be protected from those impacts, and hopefully the investments will be right-sized, and a lot of this new load will come online and will be served with cost-effective new resources in a timely manner. That’s really what success is. You know, these are exciting new developments from a technology perspective, potentially saving huge amounts of labor, potentially saving huge amounts of carbon, if we can be more productive with the capital and labor that we have.

And I don’t know how everyone feels about the national security implications of this. I’m not an expert on that, but there certainly seems to be a lot of discussion and a lot of concern about that. So I think we do need to serve these new loads. We want to do so in a way that’s reliable and that protects other customers.

And we’d like to do so in a way that’s sustainable as well. And you know, there probably is going to be a wave of investment in fossil fuel plants to ensure that these loads can be served reliably. I don’t think that necessarily means that you’re locking in a lot of natural gas consumption over time. If you can follow up those reliability investments with more and more investments in clean resources like wind and solar, maybe even nuclear, then you can use those natural gas plants less and less over time, and it doesn’t need to be a long term negative for the climate.

Stone: Shana and Arne, thanks for talking.

Ramirez: Thank you.

Olson: Yeah, thanks again for having us.

Guest

Shana Ramirez

Director of Asset Valuation and Markets, E3

Shana Ramirez is director, asset valuation and markets at Energy and Environmental Economics (E3). She is an expert in utility rate design, green utility tariffs, data centers and load growth, and pairing large C&I customers with renewable resources to power their operations.

Guest

Arne Olson

Senior Partner, E3

Arne Olson is a senior partner at Energy and Environmental Economics (E3). Arne leads E3’s Integrated System Planning practice, helping clients navigate changes to bulk electric system operations and investment needs brought about by higher levels of clean and renewable energy production.

Host

Andy Stone

Energy Policy Now Host and Producer

Andy Stone is producer and host of Energy Policy Now, the Kleinman Center’s podcast series. He previously worked in business planning with PJM Interconnection and was a senior energy reporter at Forbes Magazine.