Scott Schneider: They’re not going anywhere. I think those communities should think through the opportunities in growing those facilities. Even those rural locations that are landing gigawatt campuses. We as an industry tend to follow one another. As either a hyperscaler, one of these GPU-as-a-service providers, or another data center operator picks a location that maybe previously wasn’t known for data centers, we all start looking at it and say, “Boy, what are the aspects of that market? Why did they move there? Should we be there too?”
Jeff Johnston: That was Scott Schneider, chief financial officer for Cologix, about why the risk of data centers leaving a market, say after the initial lease term expires, is pretty low.
Hi, I’m Jeff Johnston and welcome to the All Day Digital podcast where we talk to industry executives and thought leaders to get their perspective on a wide range of factors shaping the digital infrastructure market. This podcast is brought to you by CoBank’s Knowledge Exchange group.
Understanding what kinds of workloads are happening in data centers and how that’s impacting where new ones will be built is important stuff. For example, it’s not just power and land that makes rural America an attractive place for data center development. To better understand all of this I asked Scott to come on the podcast. As the CFO of Cologix, Scott has real-time insight into the market, couple that with his deep domain expertise and he was primed to drop some knowledge.
So, without any further ado, pitter patter let’s see what Scott has to say.
Scott, welcome to the podcast. It’s a pleasure to have you on. How have you been?
Schneider: Doing great. Thanks, Jeff. Appreciate you spending some time with me.
Johnston: Hey, look, I really appreciate you making time for me today. I know you got a lot on your plate right now, but I was super excited when you agreed to come on the podcast and talk about what’s happening in the data center market. I guess Scott, first of all, what I’d like to do is for you to help our listeners understand the different types of workloads that are being done in data centers. I guess specifically what I’m thinking about here is inference versus training. Maybe you can just help listeners understand what that is.
Then secondarily, help us understand how we should think about inference versus training from a data center location perspective. If we can start there, that would be great.
Schneider: Yes, certainly. There certainly is a lot going on in the industry right now. I think it is important to help people understand the difference between those two things. Clearly, the headlines are being dominated by AI. Now, an important piece is that there are a lot of other cloud services, enterprise workloads, content delivery, and network optimization still going on within the data centers. To your point, a lot of what’s going on in the big flashy headlines of multi-gigawatt campuses and tens of billions of dollars seems to be centered around AI.
Understanding the difference between training and inference can be as simple as this: training is exactly that, large data centers that are training these large language models to create different types of AI platforms. Inference is the use of those models once they’re trained. Now, importantly, training isn’t a one-and-done exercise. Clearly, a number of us are using different AI platforms today. They’re getting better every day as that training continues and is improved. The use of those models through inference will continue to grow because of how we’re going to use these platforms.
Right now, we’re having fun with it, having it write poems for us, maybe creating cartoon pictures. But as you start actually consuming a huge amount of data through the use of the models and inference, those inference workloads are going to become larger themselves. The way I look at it, being part of the explosion of cloud services over the past decades, it’ll follow a similar path, where you will have centralized compute nodes, and you will have more decentralized networking and use nodes to access those services.
Johnston: That’s super helpful. It’s like training is building the foundation, the knowledge, that applications will sit on top of to deliver the tools we’ll be using in how we work and live.
Schneider: That’s right. One thing data centers clearly need right now, especially for training, is an abundance of power. That’s why we’re seeing these larger training facilities being built in more rural locations that either currently have excess power or give those users a quicker path to building onsite generation. Whereas before, data centers located closer to major populations, the end users of their services, now, since training is fairly location agnostic, those facilities are going to where they can find the power.
Johnston: Yes, that’s a really important point, I think you just made, Scott. I just want to spend a little bit more time on that because again, a lot of folks in rural America are getting calls from data center operators wanting to build data centers in rural America. I think just understanding, yes, the path to power is there, the land is there, but there’s not that latency requirement where you’ve got to locate these data centers closer to where the applications are being used. Training doesn’t have that.
Schneider: That’s exactly right, Jeff. As you think about the workload in a training facility, it is consuming data all the time, but the speed at which it’s consuming that data and, in turn, digesting it back out to the user, that speed is less important than it is for inference, for actually using the models. Again, as you think about maybe how some of us are using AI today, it’s okay to have a bit of a delay if it’s analyzing a document. If you’re using AI to run a manufacturing facility or control a drone that is spraying a field, the latency is very, very important.
These inference nodes are going to be dispersed, a lot of them in a lot of locations. They’ll likely start near major metros and, just like cloud services, spread out to more rural locations over time.
Johnston: First of all, I’m assuming that most of the capital that’s been deployed so far in AI, is for training. I guess that’s my first question. Then secondly, as we move more towards inference, how do you see that capital being deployed? Do you think we’ll get to a point, maybe in a couple of years, where the majority of capital and the majority of energy demand is going to be coming from inference?
Schneider: I think there are a few points to make. The first one would be to answer your question: yes. The large amount of capital that’s being put to work today around AI is for training facilities. And rural America is getting the benefit of that first step, having excess power to support those large training facilities. Over time, I do believe that as those models are used through inference, the capital and the power that’s going to be needed, granted it’ll be very dispersed and geographically diverse, that inference need for capital and megawatts will become greater than what we’re seeing today being put to work in training.
That really is because, as more and more use cases of AI get created, there will be a need for certain workloads and AI-supporting infrastructure to be closer to that end user. That means creating a bunch of these, call them pops, on-ramps, inference nodes, I don’t think the industry has actually come up with what the term is going to be, but it’s going to be 1 to 5 megawatt nodes supporting AI all around the United States and the world, versus these large, centralized gigawatt training campuses.
Now, the other point I do want to make, and I mentioned this earlier, is that training isn’t an exercise you do once and you’re done. Training will continue. As inference grows, training will also continue. The amount of data that’s created today, the amount of data that most of us don’t think about that will be created by AI, and how that data will be used to continue to refine those models, will preserve a place for those training facilities, and probably growing training facilities, alongside inference in the future.
Johnston: Well said. Excellent. Just to put a finer point on it, the communities who have embraced this data center opportunity and have had campuses built in their communities, they shouldn’t really need to lose sleep over those data centers picking up their marbles and going home as inference starts to absorb more of the capital. Those rural training data centers are there to stay. Nothing lasts forever, but for all intents and purposes, they’re not going anywhere.
Schneider: They’re not going anywhere. I think those communities should think through the opportunities in growing those facilities. Even those rural locations that are landing gigawatt campuses. We as an industry tend to follow one another. As either a hyperscaler, one of these GPU-as-a-service providers, or another data center operator picks a location that maybe previously wasn’t known for data centers, we all start looking at it and say, “Boy, what are the aspects of that market? Why did they move there? Should we be there too?”
I think there’s an opportunity for these rural economies to think about not just one deal being done and the tax revenue they’ll get from that, but whether they have an opportunity to embrace the industry and bring more growth to that location.
Johnston: That’s a perfect segue of where I wanted to go next, at the economic development conversation around rural America and data centers. I’m with you in embracing that land and expand strategy, but is it more than just a campus of data centers? Should we think about this land and expand opportunity from an ecosystem perspective? Are there other vendors and partners that will come to these rural towns to support an Amazon or a Microsoft campus?
Schneider: Yes. There will be a need for what I would call auxiliary support services. As you build a data center, be it a hyperscaler or a co-location provider like ourselves, clearly, we will staff the facility with our employees. And that can be 30 to 50 to 60 jobs. Then, as you think about the industries and companies that we need nearby to support either one facility or multiple campuses, those could be fuel delivery, landscaping services. There are all these auxiliary aspects of running the campus that we won’t do internally, and we need third parties there to support us.
At the same time, we have technicians and master electricians and HVAC people, but we outsource those services as well. As we have customers installing, or perhaps a customer leaves and a new one comes in, there are a lot of third-party services we look to hire, either because we don’t have the expertise or simply because we don’t have the personnel to get the job done quickly enough, so we’re going to look outside for them.
Johnston: Good stuff. I think there’s a lot of examples, a lot of success stories of rural communities who have really leaned into this opportunity and have come out the other side, in better economic shape. Hey, I want to just pivot here a little bit, and talk about broadband in the context of data centers in rural America. A lot of our listeners are rural broadband operators. Intuitively, all of this sounds like a nice tailwind for the industry.
The more data centers you have, the more broadband you need and the redundancy to connect these data centers. Just feels like it’s a great tailwind for the industry. Maybe you can talk just a little bit about that? How you see broadband being built out with developers and hyperscalers. Do they like to partner with local broadband operators? Do they like to do it themselves? What are you seeing out there?
Schneider: As you think about the opportunity for rural broadband operators, I think the number one selling point for them is their local expertise. As a developer ourselves, and I would say this also holds true for the hyperscalers, we value someone that has been on the ground and understands how things work.
They’re the ones that we’ll trust to get the job done, get it done quicker, and we won’t get a call from them saying, “Ooh, sorry. You know what? We ran into a roadblock. We’re not familiar with this market.” We absolutely value that local expertise. I think those rural broadband operators have a few different options. One is connectivity amongst campuses. Ourselves, we would love to build an all-in-one campus. It would be simple. We could have centralized operations with a bunch of buildings around it. Realistically, we either run into actual acreage constraints or, more commonly, there are so many different power zones fed off of different substations.
It’s causing operators like ourselves to look elsewhere, maybe 5 miles away, 10 miles away, 20 miles away, creating more of a hub-and-spoke approach. We’re going to want to connect all those buildings and campuses, and we’re using the rural experts in those markets to do that, rather than the big national guys. I think that’s a key piece. I think it’s about creating the partnership, making sure the hyperscalers and operators like ourselves know you’re established in that market. Show your expertise, and we’re going to trust that.
Johnston: That’s great to hear. I always intuitively felt like that was an exciting opportunity for those folks, and it sounds like it is. Hey, let’s talk about cooling technology, Scott, data center cooling technologies, and the impact that these technologies have historically had on water supply and where we are today, because I think there’s a misconception out there around the impact that data centers have now on water supply. Maybe you can just help us understand what’s been happening there, because I think that’s really important for people to understand.
Schneider: I think it’s a really important point that, as an industry, we don’t talk enough about, we don’t brag enough about, because there is this persistent misconception that data centers use a lot of water. They don’t. Data centers from maybe 15 years ago that used evaporative cooling were water users. Today, every data center that’s going up, whether built by a hyperscaler, built by a company like us, or built by a GPU-as-a-service company, all of these developers are using closed-loop systems.
You’re no longer using water that’s being evaporated. Now you fill a system with maybe 300,000 or 400,000 gallons, and you’re done, and that’s it. You don’t even top up the systems. All you do is put fresh glycol mixed in with that water every few months, and that’s it.
The other important piece, beyond the cooling infrastructure changing so completely that it’s essentially waterless, is the adoption of GPUs and higher-density workloads, and the requirement to cool not a room with fan walls, but the chips themselves, by pumping water directly onto them.
That creates efficiencies for the site, where you’re actually able to use more of the power you’re getting from the utility for your customers because you’re becoming more efficient. That’s more from a power perspective, but it’s driven by how these new workloads are being cooled.
Johnston: What would be the gist, the ballpark, I guess, Scott, of the reduction in water demand over a 10-year period? If you were to compare today’s closed-loop system to previous generations, are we looking at an 80%, 90% reduction in water usage over a 10-year period? Any ballpark number on that?

Schneider: A 99% usage reduction.
Johnston: Wow. Really? 99%.
Schneider: It is that drastic because these systems we’re using now are completely closed loop. Again, you fill them once, and that’s it. You keep the purity, for lack of a better term, of that water by mixing in glycol, and it’s really the glycol that will degrade over time. In these systems, you’re not flushing the water out and filling them back up periodically. All you’re doing is topping them up with more glycol periodically, and that’s it. From a 15-year-old design using evaporative cooling to today’s basis of design for nearly everyone being closed loop, it’s a 99% efficiency gain.
Johnston: That’s wonderful to hear. I think you’re right, that is not well understood or appreciated. I think it’s really important that you made that point and shared that with us. Hey, Scott, look, this has been great. Look, man, your knowledge of this industry is incredibly impressive. It’s been a pleasure chatting with you here today. Before we say goodbye, I just wanted to give you an opportunity to talk about things that we didn’t cover that you think are important. The stage is yours at this point.
Schneider: No, I appreciate that, Jeff. Thanks again for inviting me on this podcast. I think, in closing from my side, we’re in very exciting times. As everyone’s reading headlines about AI, it’s also important to stay grounded: non-AI-related services continue to grow. Cloud services, SaaS providers, content delivery. The consumption of data through, I would say, more typical means, not including AI, continues to grow. And this AI revolution, I think we’re at the very beginning of it.
It’s very exciting to continue to see how people are using these platforms, how there’s going to be multiple platforms, just like cloud services today, that are going to be very valuable in the future. I think we’re very early on, and I think everyone in the industry and the community is doing a very good job embracing this new wave of technology. I’d leave it as I’m pretty excited. I’ve been doing this for about a decade, and I’m excited to see what’s going to come over the next decade.
Johnston: Excellent stuff. Thank you for being on today, Scott, and spending time with me. I really appreciate it.
Schneider: Thanks, Jeff.
Johnston: A special thanks goes out to Scott for being on the podcast today. What’s that old saying – may you live in interesting times? Well I think we can all agree that these are indeed interesting times. There is still uncertainty around how this whole AI story will play out. But I think it’s safe to say that for the foreseeable future, demand for electrons and data center capacity will continue to grow and that this represents an opportunity for rural America – be it electric cooperatives, G&Ts, rural broadband operators or the residents themselves.
Hey thanks for joining me today and a special thanks to my CoBank associates Christina Pope and Tyler Heron because without them there wouldn’t be an All Day Digital podcast. Watch out for our next episode.