Teri Viswanath: According to the Washington Post, investments in AI represent some of the largest infusions of cash into a single technology — and they could further entrench the biggest tech firms at the center of the U.S. economy as other companies, governments and individual consumers turn to them for AI tools and software. But, from a utility perspective, what AI investments are being made right now? And will those investments pay off?
Hello, I’m Teri Viswanath, the energy economist at CoBank, and your co-host of Power Plays. Today we want to have a practical discussion on how utilities are thinking about AI. Joining me for this conversation is my co-host Tamra Reynolds, a managing director here at CoBank. Hello Tamra.
Tamra Reynolds: Hey Teri. According to a recent IBM study, nearly three-quarters of energy and utility companies have implemented or are exploring using AI in their businesses. Operational segments across the utility value chain are uniquely well defined and documented compared to other industries — which likely explains why utilities are test-driving the technology at a faster rate than their peers.
But where exactly will utilities find the greatest value for their investments? So, to figure out the best approach to AI adoption, we will make sure that we have an open dialogue on this emerging technology. With that in mind, Teri, there were a couple of conversations that you thought were worth sharing.
Viswanath: That’s right. Jason Bowling, the CEO of Sulphur Springs Valley Electric Cooperative (SSVEC) in Arizona, led a fascinating AI session at the NRECA CEO Close-up in January. I then had a chance to catch up with Chris VanLokeren, the chief information officer at North Carolina Electric Membership Corporation (NCEMC), at PowerXchange a few months later. Both shared experiences deploying AI that I thought would make for a terrific podcast.
Our conversation starts off with Jason. And, as he will tell you, he was actually surprised that utilities were at the forefront of implementing AI. But I will let him explain.
Jason Bowling: I approach a lot of what I do based on my history. I'm originally a human resources person that's found himself in a CEO role, so I appreciate the question. When you mentioned that statistic to me, I think about it in terms of one survey instrument we used for management development, the Management Development Questionnaire, or MDQ, that measures management competency along several different dimensions.
And then it benchmarks that competency across all the industries, right? When we looked at SSVEC's management population and compared it to the benchmarks, a lot of the scores seemed to regress right toward the mean. We were right in line with what other industries were reporting, with one exception, that being risk tolerance. We had a much lower risk tolerance than the average respondent to the MDQ, and we rationalized that as just a feature of our environment.
Most saliently, I guess, the safety hazards associated with the work we do would make a high tolerance for risk a bad thing. We'd in fact be more risk averse, and that's what came through in our findings. It's maybe painting with too broad a brush, but I've taken that to also apply to other forms of risk tolerance, like the readiness to adapt to new technologies or to innovate. That's not to say that utilities don't adapt or don't innovate. We certainly do, but it seems as though there's a reluctance at times to be on the bleeding edge of new technology, mainly because we have this hardwired desire to focus on safety and proceed carefully.
Reynolds: How AI will change and shape our utility workforce is anyone’s guess. But, with 20% of electric co-op workers retirement-eligible in the next five years — compared to just 10% across the broader energy industry — I’m hoping that this technology can help bridge the gap. Jason provides some perspective here.
Bowling: Workforce management is a really important concept to consider with regards to new technology.
When we look back historically at our cooperative, we saw a point, probably 10-plus years ago, at which member behaviors had changed. They went from making payments through traditional means to more modern means, right? Instead of writing a check, they were shifting toward using SmartHub or IVR. About 10 years ago, like I said, that shift hit the point where a majority were using high-tech means of paying their bill, and the trend has just continued ever since. In fact, COVID only accelerated it.
I think the important point to take away from that is, are we still staffing under the assumptions of how we operated 20 years ago, or are we staffing how things exist today and how we anticipate that they'll exist into the future?
Reynolds: Related to Jason’s comments, the National Bureau of Economic Research measured productivity gains in a customer service department of a Fortune 500 software company after the introduction of a generative AI support tool. The company trained an OpenAI large language model, or LLM, on data from 5,000 customer service agents and specifically designed it to support existing agents handling inbound customer inquiries. Not surprisingly, the greatest benefits went to less-skilled and less-experienced staff, who saw a large increase in the number of issues they were able to resolve each hour.
But, in order to realize possible work productivity gains, co-ops will need to adopt the technology. So, how can co-ops deploy AI across their organizations?
Viswanath: Jason and I talked about this in our conversation. I mentioned to Jason an AI instruction blog written by Dr. Ethan Mollick from the University of Pennsylvania. Ethan identified three common executive pitfalls for AI, if you will: companies might choose to ignore the technology entirely; they could outright ban it; or they could centralize it, treating AI like any other enterprise-level software rollout. Here’s Jason’s take…
Bowling: Banning, ignoring, and centralizing all seem like the wrong strategies for any new technology. In a way, banning and ignoring suggest some form of reluctance to adapt, or risk aversion. I think that a smarter move is to make sure your employees have a safe space to play with the technology, to experiment with it. The first step, I think, is adopting simple policy guidance to give them some clear guidelines. What do we know for sure they can't do?
The challenge with that is we don't know yet how this will be used, but there are some simple basic concepts, like we don't want confidential member or employee data being uploaded into an LLM like ChatGPT, where it's now out in the wild. We remind employees that the confidentiality protections that exist all the time definitely also exist in this context, because I think it can too often be taken for granted that because it's on the computer and I'm interfacing with it, it's somehow safe. It probably is not, or at least we don't have the assurances that it is. We remind them through policy guidance that we have clear lines of demarcation, like: do not share confidential data.
Then we also start the conversation around ethical use — like when is it appropriate to cite your sources, that sort of thing — and policy guidance that says management must train its employees on this technology and keep the training current. That's how we're approaching it, because we do want them to experiment with it, and we find that's really where the interesting use cases come about: when employees use it and find ways to streamline their own work and augment their capabilities with this fancy new technology.
Reynolds: Teri, you and Jason talked about an unexpected use case that organically came about at SSVEC, one that probably would not have come into being if staff members weren’t allowed to play with the technology. But I’ll let Jason tell their story.
Bowling: We look at all the chatbots for member service, we look at wildfire mitigation, and those sorts of things, but this was a great example of how, when employees are free to use the technology, they can come up with innovative solutions. The backdrop — and I'm sure this will be familiar for others in the co-op space — is that there is a lot of diversity in the skill sets throughout our organizations. That's a wonderful thing, but it also creates a siloing effect, where you have an expert in one area of the business and others who aren't expert in that area are not conversant in it, and sometimes don't communicate as well. We don't all speak the same language.
For years, I've heard, well, we just need to job shadow, or we just need to cross-train more. That's a fine and valid solution to the problem. The other problem, though, is that work just gets in the way. It gets deprioritized and deprioritized. I've got to give credit where credit is due, because this came from our VP of Strategy and Compliance, Annie Tindall — a wonderfully brilliant person who, augmented with artificial intelligence, was able to develop an end-to-end solution for our job shadowing initiative. She took it from a wish-list sort of thing that we would like to get done when we can get around to it, to an actionable plan with complete metrics and reporting.
The way this works is an employee who's got an interest in job shadowing fills out an intake form. That form then pushes out the email and calendar invitation work that needs to be done to make sure the job they want to shadow is synced up, that we get the appropriate supervisory approvals, and that we identify specific learning objectives, so that when the shadowing takes place, we're doing it with a goal in mind. Then when the job shadowing has happened, we get a baked-in opportunity to go back and assess whether the learning objectives were met, and it creates this beautiful word cloud on the results of our initiatives and rates the quality of it all.
The end effect is that it busts silos, it makes our workforce seamless, and it makes communication much more fluid between departments that may not otherwise speak the same language. I like that example, again, because it's not the typical application of artificial intelligence. It's something unusual that was completely organically grown from within our organization thanks to our team having access to these tools like Power BI, ChatGPT, Microsoft Copilot, and so forth.
Viswanath: For an industry that is facing a real knowledge gap — with one-fifth of the workforce approaching retirement — cross-training and having a measurable route for skill development seems like a great application for AI. But what other practical applications do utilities have for the technology?
Tamra, I asked Chris VanLokeren, NCEMC’s chief information officer, this question and here’s what he had to say.
Chris VanLokeren: I think when we look at utilities and AI — and especially rural utilities and cooperatives — there's really a great use case that we're all very familiar with, which is advanced metering infrastructure, or AMI. Rural utilities led the way with the AMI rollout.
With AMI, you get some AI-like capabilities: the ability to turn people on and off remotely, measure their usage, and get billing-quality data much faster than having to go out and read meters one by one. I feel like that's a really good use case as far as what utilities did with AI years ago, especially rural utilities. And with the data that we do capture — hourly data for all of our meters, 8,760 reads per meter per year — we certainly get a lot of data from each meter. When you add that together across all the meters in the system, it gets to be quite a big data set, a very rich data set.
What can you do with that? You've heard that utilities are doing things around peak prediction and modeling. You hear about utilities trying to optimize their systems — more efficiency, more reliability across the overall system. With all that data being captured, I think we're making some really good progress on the AI front with utilities in general.
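To put the scale of that hourly AMI data in perspective, here is a back-of-the-envelope sketch in Python. The meter count and bytes-per-read figures are hypothetical assumptions for illustration, not actual NCEMC system numbers.

```python
# Back-of-the-envelope estimate of hourly AMI data volume.
# The meter count and bytes-per-read below are hypothetical figures
# for illustration, not actual NCEMC system numbers.

HOURS_PER_YEAR = 24 * 365   # = 8,760 hourly reads per meter per year

def annual_reads(num_meters: int) -> int:
    """Total interval reads collected across the system in one year."""
    return num_meters * HOURS_PER_YEAR

def annual_gigabytes(num_meters: int, bytes_per_read: int = 32) -> float:
    """Rough raw storage footprint of one year of interval data,
    assuming ~32 bytes per read (timestamp, meter ID, kWh value)."""
    return annual_reads(num_meters) * bytes_per_read / 1e9

meters = 1_000_000  # hypothetical meter population
print(f"{annual_reads(meters):,} reads/year")          # 8,760,000,000
print(f"~{annual_gigabytes(meters):.0f} GB/year raw")  # ~280 GB
```

Even at this coarse estimate, a system of that hypothetical size generates billions of interval reads a year — the "very rich data set" Chris describes.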
Reynolds: Teri, you mentioned Chris thought that the real opportunity for AI would prove to be in machine learning. But if we think about how AI is currently being used, the bulk of AI-related web traffic seems to be directed toward LLMs — and of that user traffic, maybe 75% is captured by ChatGPT. So, what is the promise of machine learning for electric utilities?
VanLokeren: Look, generative AI is fascinating and it's fun to use. We can all go to the website and ask who won the World Series in 1972 or whatever, and we get these really great answers back. As we try to create a presentation and make it interesting, those use cases are great. But with generative AI, we have to be careful, because we can't take our confidential data and put it in an LLM. Well, we can, but then that data is out in the wild, out in the world. That's not really the safest thing.
I would just caution folks that there needs to be some guidance and guardrails around LLMs to ensure that people aren't taking their data and putting it out there. You could have a very confidential Excel spreadsheet with all your biggest loads on it, that may be something that you don't want the rest of the world to know about. You put that in your LLM and that's out in the wild. Now everyone knows who your biggest customers are, or your biggest C&I customers or something to that effect.
I think we need to be very careful about what we do with LLMs and making sure that our employees are not just putting anything they want into those models and that utilities and cooperatives establish robust governance processes for LLMs.
That being said, I do think the machine-learning aspects of AI are really fascinating. We talked about all this data being pulled back, and how we start parsing it out and trying to make better decisions faster. You hear a lot about how AI is going to take away jobs, but when I think about AI — and when NCEMC thinks about AI — we're asking, how do we make better decisions faster?
One really concrete example that I can share with you, Teri, is a peak prediction model we have built with machine-learning algorithms on the Watsonx platform. We take in our forecast data, our weather data, and our real-time load signals, and try to figure out, based on those factors, when the peak is going to happen. We're on 12 CP — billing tied to the twelve monthly coincident peaks — for most of our members. If we can find that hour of the month when we think we're going to peak from the demand perspective, we can save our members and ourselves a lot of money.
It's a concrete use case, and it's something we had obviously been doing for years — trying to predict the peak — and we were really good at it. We would get it right nine or 10 out of 12 months of the year. But since we implemented this AI model, about 24 months ago or so, we've missed only one peak. That was August of '23, when the thunderstorms rolled through a little sooner in the day than they were supposed to. The peak came at hour 15 instead of hour 16.
Viswanath: That's pretty amazing.
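Chris doesn't walk through the model internals here, so as a rough illustration only, below is a minimal Python sketch of how peak-hour prediction can be framed as a supervised learning problem: predict the load for each hour of an upcoming day and flag the highest. The feature set, the synthetic training data, and the gradient-boosting model are all assumptions, not NCEMC's actual Watsonx implementation.

```python
# Minimal sketch of framing coincident-peak (CP) prediction as a
# supervised learning problem. Illustrative only -- the features,
# synthetic data, and model choice are assumptions, not NCEMC's
# actual Watsonx build.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic training data: one row per hour (temperature forecast,
# hour of day, day-ahead load forecast) -> actual system load (MW).
n = 5_000
temp = rng.uniform(60, 105, n)        # forecast temperature (F)
hour = rng.integers(0, 24, n)         # hour of day
fcst = rng.uniform(2_000, 5_000, n)   # day-ahead load forecast (MW)
load = fcst + 25 * np.maximum(temp - 75, 0) * (hour > 12) + rng.normal(0, 50, n)

X = np.column_stack([temp, hour, fcst])
model = GradientBoostingRegressor().fit(X, load)

# Score every hour of an upcoming day and flag the likely peak hour.
day = np.column_stack([
    np.full(24, 98.0),                # hot-day temperature forecast
    np.arange(24),                    # each hour of the day
    np.full(24, 4_200.0),             # day-ahead load forecast (MW)
])
pred = model.predict(day)
print(f"Predicted peak at hour {pred.argmax()} ({pred.max():.0f} MW)")
```

In practice, a G&T would train on years of actual interval and weather history and score every hour of the month, not a single synthetic day.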
Reynolds: Chris is talking about how, as the G&T supplier, NCEMC can become better informed about member load. But this upstream coordination — or, as Amadou Fall calls it, the “power of 3” — might really unfold with AI.
VanLokeren: How can we make it advantageous for the member-consumer to use their battery when it's going to help with peak management, which again is that 4 to 6 PM timeframe in the summertime? That would help the distribution utility save some money, since it's buying less electricity during those peak hours. And it would certainly help the generation and transmission cooperative, because it doesn't have to generate as much energy during those times when demand is much higher than the rest of the day.
It's about having that conversation with the member-consumer — aligning generation, distribution, and the member-consumer so that we're all on the same page.
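As a rough sketch of what that three-way alignment could look like in software, here is a minimal peak-window battery dispatch routine. The 4-to-6 PM window comes from Chris's example; the battery sizing and reserve logic are assumptions for illustration.

```python
# Illustrative sketch of the "power of 3" coordination: dispatching a
# member's battery during the forecast system peak window. The battery
# parameters and reserve logic are assumptions, not an actual program.
from dataclasses import dataclass

PEAK_WINDOW = range(16, 18)  # 4-6 PM, the summer peak hours Chris cites

@dataclass
class Battery:
    capacity_kwh: float
    soc_kwh: float            # current state of charge
    max_discharge_kw: float

def dispatch(battery: Battery, hour: int, reserve_kwh: float = 1.0) -> float:
    """Return kW to discharge this hour: full output inside the peak
    window (down to a small reserve), nothing outside it."""
    if hour not in PEAK_WINDOW:
        return 0.0
    available = max(battery.soc_kwh - reserve_kwh, 0.0)
    kw = min(battery.max_discharge_kw, available)  # 1-hour step
    battery.soc_kwh -= kw
    return kw

home = Battery(capacity_kwh=13.5, soc_kwh=12.0, max_discharge_kw=5.0)
for h in range(14, 20):
    print(f"hour {h}: discharge {dispatch(home, h):.1f} kW")
```

Every kilowatt the battery delivers during those two hours is a kilowatt the distribution co-op doesn't buy and the G&T doesn't generate at the monthly peak.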
Viswanath: I asked Chris how the jobs of the future might change with AI, and whether we can look at the changes that are unfolding now for clues about how the utility skillsets might evolve with this technology. Here’s the conversation we had.
VanLokeren: That's one that has been really fascinating. We were talking with one of our distribution cooperatives a month or so ago, and they told a great story about that. They were talking about their system and trying to figure out where a fault was occurring. In the past, it was "drive down Main Street and turn right at the oak tree, and back in the woods there, that's where the line is." That's tribal knowledge, acquired over years and years and years of being at that distribution utility.
What they've done is use drones to fly their lines to look for faults — taking pictures, bringing that data back, and using AI models to analyze the images and figure out where there are issues on those distribution lines. It's a lot more precise, with longitude and latitude, versus going down the road and turning right at a tree or the schoolhouse or something like that. The goal is to be more precise and not rely on that tribal knowledge any longer as folks retire.
If I could take a step back and talk about AMI for a moment again: from the early days of the cooperative world until the '70s or '80s, there were a bunch of meter readers. As AMI came into being in the '90s and 2000s, those roles went away, but employment at cooperatives has grown. Why is that? It's because technology changes, in general, tend to create more jobs than they eliminate.
The thing I keep hearing when we have this discussion is that jobs are going to change. Yes, AI may be coming, and yes, your job may not be what it is today. But we still need these roles, and we still need people to help us monitor AI, for example. Teri, how many people at CoBank are AI monitors today? Probably zero, but you may have 10 AI monitors next year as CoBank, for example, starts building out AI models.
How many do we have here? Maybe a quarter of somebody's time, with that peak prediction model I talked about earlier. But as we build more AI models, we need to look at them and ensure they're still giving the answers we expect — monitoring them for drift and for bias and all those types of things — because we obviously don't want AI models that are giving bad results. That's not the outcome we're looking for.
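As one example of what that monitoring might look like, here is a minimal drift check that compares recent forecast errors against a baseline captured at deployment. The 1.5x tolerance is an assumed threshold for illustration, not an NCEMC standard.

```python
# Minimal drift check: compare recent prediction errors to a baseline
# window from deployment. The 1.5x tolerance is an assumed threshold
# for illustration, not an NCEMC standard.
import numpy as np

def check_drift(baseline_errors: np.ndarray,
                recent_errors: np.ndarray,
                tolerance: float = 1.5) -> bool:
    """Flag drift if the recent mean absolute error exceeds the
    baseline mean absolute error by more than `tolerance`."""
    base = float(np.mean(np.abs(baseline_errors)))
    recent = float(np.mean(np.abs(recent_errors)))
    return recent > tolerance * base

rng = np.random.default_rng(0)
baseline = rng.normal(0, 50, 500)   # forecast errors at deployment (MW)
recent = rng.normal(0, 90, 100)     # forecast errors this month (MW)

if check_drift(baseline, recent):
    print("Drift detected: investigate or retrain the model.")
```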
Reynolds: With this in mind, what sort of guidelines would Chris, as a chief information officer, recommend? Posing the same question we asked Jason: Should co-ops ignore, ban, or centralize? What was NCEMC’s experience with developing guardrails?
VanLokeren: Yes — I guess this answers your question. ChatGPT was released in November of 2022, and after it had been out for about six months, by the summer of 2023, we started seeing more usage of it. Coincidentally, we saw it with our interns, the folks who were here for the summer out of college, which was fascinating, because the kids at university were using ChatGPT at school. We put out a quick statement that just said, "Don't use AI — or if you're going to use AI, let your C-suite person know about it, and let's have a conversation — and make sure we're not putting data out there in generative AI LLM models."
Then we took three or four months and created better guidelines. We have goals — I think our goal with AI is making better decisions faster — but that's going to require oversight, ethics, governance, and pillars of trust. When I say pillars of trust, the AI needs to be explainable, fair, robust, and transparent, and we need privacy and cyber concerns nailed down. When we put the new memo out in December, we provided some guardrails and some governance to our employees around the use of AI, so they have a little bit of direction.
I'm not a proponent of ignoring it, I'm not a proponent of banning it, because I think that they're going to use it anyway. Maybe they'll use it at their home computer. I think banning it's probably not really feasible, but I would say that trying to figure out how to make it fit within your organization is really important.
Creating some guidelines and approval processes may seem a little bureaucratic, but people need to understand that the leadership of the organization is trying to protect it from the dangers of AI — and there are some. There's bias. There are ethical issues that can arise with AI. There's drift, as I talked about before, where the answers were right on day one but have drifted to a point where they no longer make sense. There are dangers with AI that we all need to be sure we're not ignoring.
Viswanath: This was a really great conversation with lots of practical guidance on AI. And I hope all of you have enjoyed visiting with Jason Bowling and Chris VanLokeren. I really want to thank our guests for spending time with us on this topic, and providing some pro tips on wading into AI.
Reynolds: Catch us next month as we talk about data center development and financing. Bye for now.