Wes Cummins 0:00
We've been using almost 200 megawatts for the last three years there. We'll see how this year pans out, but we get a report at the end of the year that it saved the utility's customers, because they share the revenue back with the customers. So the customers on MDU saved $5.3 million the first year we were there. Massive, probably the biggest supercomputer in the world, because you have 100 megawatts, 100 megawatts of critical IT load, all interconnected with InfiniBand. But the reason we did the three stories: you run network core through the middle, then you can interconnect all of that within kind of that 30 meter distance limitation on InfiniBand. And so for me to be able to take the cutting edge of technology back to an area that's just like where I grew up is rewarding, but also the big positive as well is it's a huge positive impact on the community in which we locate our data center.
Eric Bell 0:59
Wes thank you for joining the Baxtel podcast.
Wes Cummins 1:02
Thanks for having me, Eric. I appreciate it. Great.
Eric Bell 1:05
So before we dig in, I want to learn about you in 60 seconds. I'll ask you a series of quick questions, and please answer the first thing that comes to mind. So, you prefer late nights or early mornings? Early mornings. Sweet or salty? Salty. Introvert or extrovert? Extrovert. Favorite food? Cheeseburger. Favorite sport? Football. What was your first job, you know, early on, outside the industry?
Wes Cummins 1:35
So at first I worked on my parents' farm, the family farm, as I grew up. And then my first job off the farm was as an auto mechanic in college.
Eric Bell 1:45
That's impressive. I didn't expect that, and I've never considered being a mechanic, but good for you from that perspective.
Unknown Speaker 1:53
Skills learned on the farm, true.
Eric Bell 1:56
I guess you repair your own farm equipment. Let's see. So, fiction or non-fiction?
Unknown Speaker 2:03
Non-fiction.
Eric Bell 2:05
Secret talent,
Wes Cummins 2:07
Superpower, I call it: I don't have an allergic reaction to mosquitoes. So they bite me, but I don't swell up. It's really, really great.
Eric Bell 2:17
That is a superpower. I don't know how many mosquitoes are down in Texas, but if you're up in North Dakota, there's probably plenty
Wes Cummins 2:30
up there, yeah, in the summertime, and they're big, yeah.
Eric Bell 2:34
Okay, so getting into work, some more questions. Do you prefer email or a meeting? Email. And then cabinet or rack?
Unknown Speaker 2:45
rack, rack,
Eric Bell 2:47
And then grid or behind-the-meter power?
Unknown Speaker 2:50
Grid. Very strongly, grid.
Eric Bell 2:53
Batteries or gas gen? Can I say both? Sure, you can say what you want. Who would play you in the Netflix series Data Center Wars? So, some actor.
Unknown Speaker 3:09
I think McConaughey.
Eric Bell 3:11
Okay, all right. Now that we got to know you a little bit with this series of fun questions, let's get into it. Applied Digital is building a massive campus, or a series of massive campuses, in a very rural part of North Dakota. Why did you choose Ellendale, or, you know, Ellendale is first, but why are you choosing very remote spots?
Wes Cummins 3:32
Yeah, so I would say this: it's a mix of we choose, but the site kind of chooses us, too. And I say that because the way we found these sites is, where is the power available? And then you need a few more boxes to check. It's primarily power, the type of power you get, and then fiber, what the fiber access looks like. So it's not unlike data center capacity that has traditionally been planned, you know, years and years ago, where if you wanted more capacity in the Dallas market or in the Phoenix market, you pulled up the tool that told you where real estate was available with fiber. And, you know, you landed there, and then you requested power from the utility. So it's similar, but we're looking for large, large-scale power, and so it takes you to different sites where you have the grid infrastructure, the power gen, and the fiber that all come together. And these happen to be the locations where this all comes together.
Eric Bell 4:32
I see, yeah. So I imagine it's part of, like, long-haul fiber routes, you know, and Ellendale, and I hadn't looked at it before starting the podcast here, but I imagine it's long-haul fiber routes probably going, what's that?
Wes Cummins 4:44
A lot of cross-country routes go through North Dakota. It's a great state to do business in, and I think it's probably a great state to lay a lot of fiber through. A lot of the routes went, if you want to run Seattle to New York, it probably goes through North Dakota. And so in Ellendale, North Dakota, which is 1,100 people, we actually have four different fiber providers and four diverse routes, both lit service and dark fiber. So it's really robust fiber, not just there, but through the state.
Eric Bell 5:16
Do you know the approximate latency down to Chicago? Because I imagine Chicago is the closest interconnect hub.
Wes Cummins 5:23
So Chicago is like 15 milliseconds round trip. But the ones we've really tested to, Chicago is kind of one of them, but also Des Moines, which is a big Microsoft data center place, and you get right around seven or eight milliseconds latency round trip. And then Cheyenne, which is a pretty big market for some of the hyperscalers, or Denver, Minneapolis, and Omaha, which is a big Google data center area, and we have really good latency to all of those. Our Ellendale campus was qualified by three different hyperscalers as dual-use, mixed-use inference and training, so it hits the latency targets that they need to go to their nearest peering point, or two. You usually actually have to do two peering points.
Eric Bell 6:20
Although it sounds like, you know, having interconnection to their campuses, say, if it's a Microsoft, any hyperscaler, having a connection and low latency to some of their existing campuses, in addition to interconnect points, internet interconnect? Yes, but just,
Wes Cummins 6:40
I'll tell you, just from the due diligence we've been through, the onboarding processes we've been through, they focus much more on their own peering points, because we've had to do it for different hyperscalers to different locations. They focus on their own much more than internet connections.
Eric Bell 6:55
Okay, interesting. Which would be their own campuses? Yes. Okay, yeah, that'd be great, particularly because there are more and more hyperscale campuses in rural locations.
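The round-trip figures quoted earlier line up with a simple speed-of-light-in-fiber estimate. As a rough sketch, assuming great-circle-style distances (the kilometer figures below are illustrative guesses, not actual fiber route lengths):

```python
# Back-of-envelope fiber latency: light in fiber covers roughly 200 km
# per millisecond, so the minimum round trip is 2 * distance / 200.
C_FIBER_KM_PER_MS = 200.0

def min_rtt_ms(route_km: float) -> float:
    """Lower-bound round-trip time over a fiber path of route_km kilometers."""
    return 2 * route_km / C_FIBER_KM_PER_MS

# Assumed rough distances from Ellendale, ND (illustrative only).
for city, km in [("Chicago", 1100), ("Des Moines", 700)]:
    print(f"{city}: >= {min_rtt_ms(km):.1f} ms round trip")
```

Real routes add path length and equipment delay, which is why a theoretical ~11 ms floor to Chicago shows up as roughly 15 ms measured, while the shorter Des Moines hop lands near the seven-to-eight millisecond figure mentioned above.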
Wes Cummins 7:07
Well, you've seen the trend even before AI. The trend was moving towards the middle of the country, towards power, right? I don't really consider Omaha a population center for the United States. I don't consider Des Moines a population center, or Cheyenne a population center. So you've already seen that moving in. So, like, you know, North Dakota is a pretty easy extension of that, and we're really creating that market.
Eric Bell 7:32
Makes sense to me. What is the most surprising thing about, like, this rural colocation, you know, both positive and negative?
Wes Cummins 7:39
So the negative is just, you know, it takes a little bit more to get the labor force there, and a little bit more of our own infrastructure for labor on site, as you can imagine. The real positive is, so, just on my background, I mentioned that I grew up working on the family farm. I grew up in a town of 200 people, and, you know, I graduated high school in 1995, kind of as the internet was just starting. And I make a joke, but it's funny because it's true: I think my parents stopped using dial-up internet like 2018. It just was an area that was left behind. And so for me to be able to take the cutting edge of technology back to an area that's just like where I grew up is rewarding, but also the big positive as well is it's a huge positive impact on the community in which we locate our data center. So think of these small communities, like I said, 1,100 people in Ellendale. We're very integrated into the community, and we work closely with them to make sure we have a positive impact. There are negatives that come with it, just in the extra traffic that goes through during the construction phase. But once these are running, you know, you have a few hundred people going in and out every day. It creates really high-paying jobs, but it doesn't create so many that you completely change the face of the community and the reason that the residents want to live there in the first place. But you create some jobs. So we've had, I think, 10 people so far who grew up in or around Ellendale, North Dakota, and left because there wasn't really any job or career for them there, who have now come back because there's, you know, a high-paying job that they can work. So that's great.
And then obviously the, you know, the tax revenue. They'll become, not overnight, but over the span of a couple of years, maybe the wealthiest school district in the state. The thing that people don't think about in these small communities is, you have a town of 1,100 people, and they're still running a water system, a sewage system; they have roads. And the only way for them to pay for these, usually, if they have any extraordinary expenses, is an extra tax levy on all their residents. And now that kind of goes away, so they can pay for their infrastructure. That's the first conversation we have, typically, when you go to the mayor or the city council, and the only thing they think of is, man, we're going to be able to fix the water system, or fix the sewer system, or fix this road or that road. It's all infrastructure. So it's a really big benefit to these communities. And so it's very rewarding for me because of where I came from. Even if I didn't come from there, I think it'd still be very rewarding. But that's definitely the biggest positive.
Eric Bell 10:28
I want to get into the community part later, but let me scratch at it a little bit now. It's almost like Ellendale will have the best high school football stadium in the state, right, because there's a data center there. And I think that's what you're getting at. So there's obviously tax revenue. From a tax perspective, because I think there's a lot of misunderstanding and FUD out there around data centers that are being built and proposed in many different communities. It's surprising me, and we're actually starting to track this on Baxtel: community sentiment, and which data centers are being canceled because, you know, communities are pushing back. And it almost seems like, well, maybe I shouldn't make a judgment on whose best interest it is, but it feels like there's a lot of misunderstanding of what data centers are. And so, even from a tax perspective, and you touched on it earlier, there's sales tax, but what other types of taxes or revenue can data centers bring to the community?
Wes Cummins 11:42
The property tax is the number one, right? I mean, this is going to the community, to the school district. It's state by state, or town by town, how it works, but the property tax is number one. And so, with the size of campuses that we're building, the community directly benefits by millions or tens of millions of dollars per year from a tax revenue perspective. And then sales tax too, where applicable, depending on the state, right? That's more of a state revenue generator versus a local revenue generator. But I love the fact that it's mostly property tax that goes directly where we're building, directly for that town, directly for that county. And so it's a really big benefit there. And then the other benefits are just, you know, the ancillary economic benefits. Like, while we're doing construction, there's a lot of economic activity that wouldn't otherwise exist in those locations. And because it's in a small area, and how we deal with the labor force there, the surrounding communities get a big benefit as well, right?
Eric Bell 12:51
And, you know, the biggest concerns from communities, maybe I'm missing some, correct me if I'm wrong, but it's maybe lack of jobs that are created; water, that the data center is going to suck up all our water; or that if the data center is built here, our electrical rates will increase. All these things are solvable, I think, by the industry with different technologies, right? But from an electrical price perspective, I think there have been studies that have shown that states with the most data centers, or the most data center growth, have seen rates increase the least. North Dakota, I think, was one of those states.
Wes Cummins 13:39
Yeah, so that study just came out. Berkeley, it was an educational institution; I was actually just reading it yesterday. I wish I remembered exactly who it was, but that study just came out, and this is true, and I'll give an example specific to our Ellendale campus. We've been using almost 200 megawatts for the last three years there. We'll see how this year pans out, but we get a report at the end of the year, and it saved the utility's customers, because they share the revenue back with the customers. So the customers on MDU saved $5.3 million the first year we were there, and they saved $5.8 million the second year we were there. And why is that? It's because where we locate, specifically, there's excess power, and we utilize existing infrastructure too. And then we pay our tariff, right? We pay our transmission fee to the utility, which is revenue and profit to the utility. They get to keep some, but then they share a lot of that back. Because, and this works almost the same everywhere, but there, you know, it's a regulated utility, and they get a certain return on their assets. So if we utilize those assets more efficiently, or at higher utilization, then they get more revenue off those assets, but they're capped at what they can make from a rate-of-return perspective. And that's just generally how utilities work. So then they share that back with their constituents inside the utility. And so data centers at large, because we use so much power at a single location, it takes a lot less infrastructure to service that power. And if you just went to North Dakota and said, okay, well, this data center uses the same power as 60,000 homes, understood, fine, that's a big number.
But then, when you think about the electrical infrastructure built to service 60,000 homes spread out in a rural area, by the way, versus one single site, you just see how that pans out from actually lowering rates for everyone versus increasing rates. And the study you're referencing did show that the data centers pay more than their fair share on the utility side, and it's because of this. It's because you're utilizing so much at a single location, and you don't have to build, and the utility doesn't get to spend. Because utilities want to spend money, right? They're a yield-based model, so the more capex they can do, the more return they can get. But again, at a location like ours, there's only so much capex to do. We actually located right next to an existing substation. We pull from that substation, and we have a new redundant substation on site that we paid for, so that doesn't go into the pool. But yeah, we've seen an absolutely clear, direct benefit, right?
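The mechanism described here, a regulated utility with a capped rate of return crediting excess revenue back to ratepayers, can be sketched with toy numbers. Everything below is hypothetical and not MDU's actual tariff or asset base; the figures are chosen only so the output echoes the $5.3 million scale mentioned above:

```python
def ratepayer_credit(asset_base: float, allowed_return: float,
                     actual_margin: float) -> float:
    """Margin earned above the allowed return on the asset base is
    shared back with the utility's other customers (simplified model)."""
    cap = asset_base * allowed_return
    return max(0.0, actual_margin - cap)

# Hypothetical: $500M of grid assets with a 9% allowed return ($45M cap).
# A large new load raises utilization, lifting margin to $50.3M; the
# excess above the cap flows back to ratepayers as a credit.
print(ratepayer_credit(500e6, 0.09, 50.3e6))
```

The point of the model is the cap: adding a big, efficient load raises `actual_margin` without growing `asset_base`, so the whole increase lands in the ratepayer credit rather than in utility profit.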
Eric Bell 16:28
No, that's a good point. And it actually might be that a lot of people retrain as data center technicians in those communities, yeah. Say, changing the topic a little bit, how did you come up with the name Polaris Forge?
Wes Cummins 16:46
So I've given this explanation before. Polaris is, you know, the North Star, and the Forge is where things are made. We did a lot of tinkering around with that, but that's the reason for the name. It's North Dakota, and we're far north, about as far north as you can be in the United States. And it's where big data is made and where large language models are forged. And so I thought it was a good name. It's got a good meaning.
Eric Bell 17:18
Yeah, it's great. There are too many times in real estate, because data centers are like a real estate business, there are too many times where the name of the building is just the address. Which is fine, I mean, it's indicative, but it's not interesting, yeah.
Wes Cummins 17:32
And, you know, inside the company you say this name over and over and over again. One of the typical conventions is the address or the nearest airport, right? And so then it's just the three letters of the nearest airport. So we could have ABR, which is Aberdeen, South Dakota, and then FAR, because the nearest airport would be Fargo, for our others. But you know, you say these all the time inside the company, so it's a name you deal with constantly. So I wanted something that's a little more exciting for everyone, and that sounds exciting when you say it, because you use it every single day, many, many times.
Eric Bell 18:10
Yeah, although you probably shorten it to PF.
Unknown Speaker 18:14
It's PF1, PF2. Yep, you are correct. Cool.
Eric Bell 18:17
All right, so let's understand how, you know, I think you guys have some insight in terms of how AI data centers are different. A couple years ago you made a pivot, of course, out of crypto and into, you know, HPC. But even beyond that, once you moved into HPC, as I understood it, and correct me if I'm wrong, you stopped the design process and redesigned it, at least for your first building there. And we'd love to hear the background around that decision, and whether those assumptions you made then are true now.
Wes Cummins 18:55
Let me walk through that. The way the company started was we built 500 megawatts of Bitcoin data centers, and we were a data center operator for Bitcoin. We didn't mine ourselves. The original idea was, if anyone wanted to be a bitcoin miner, so, Eric, you want to be a bitcoin miner, I'm going to be the Equinix of Bitcoin mining. You show up, I help you get the equipment, or you have it yourself, we put it in, you get scaled-out infrastructure, and Bitcoin hits your wallet and you pay me dollars for your rent. So that's where it started. What we ended up doing was the hyperscale model. So we moved over to hyperscale, we signed Marathon, and we grew really rapidly together, and we helped Marathon become the largest bitcoin miner in the world. That was our primary customer, and it was just at hyperscale, and then they paid us. So it's the same model, but instead of retail we went to hyperscale. Then, about a year into that, while we were still building it out aggressively, I thought about doing high performance computing as a nice add-on on our campuses. And you remember, when you talked about HPC back in '22, we said, okay, we're going to do five to 15 megawatts of HPC, and that was a decent-sized build for high performance computing, right? Pretty limited use cases for high performance computing. And so we first did some retrofitting of one of our Bitcoin facilities to run GPUs. We said, that doesn't work, we're not doing that, that's not a path we're going to go down. So then we hired some guys who were building data centers for Meta, and we designed this new, what I would call low-cost, purpose-built high performance computing facility, but with less redundancy than you have in a Tier III data center. And we started building that.
And then by December '22 we had some GPUs running at our facility in North Dakota, and we had some customers, mostly universities, that were doing deep learning and machine learning on the GPUs. And then ChatGPT hits. Nothing really changed for us when ChatGPT hit, other than, you know, that was cool. But then in March of '23 the Nvidia H100 is introduced, and that's when the world started to change quite a bit for us. So we were out marketing the data center capacity, and we were running GPUs in it to show that it worked. And then we started getting introduced to some AI labs and some frontier model companies. When I started talking to them, they were just looking for GPUs. They didn't want to deal with data centers. So we ended up buying GPUs and providing a GPU service to them. We had about 1,024, a full cluster of Nvidia H100s, running by July of that year. So we signed our first customer in May and had them running in July. That was pretty fast. If you remember, it was almost impossible to get H100s at that point, those first few months, even the first year, really. So we were doing the GPU service, and with everything I saw in the market, and with my historical background, which was, I was a tech investor for a long time, it popped into my head: look, Nvidia is going to fix wafer throughput at TSMC, right? I invested in semiconductors for a long time. They're going to fix wafer throughput at TSMC, they're going to fix the advanced CoWoS packaging for the semiconductors, they're going to fix transceiver manufacturing. These were all the bottlenecks that they kind of went through. I said, they'll fix all of that much faster than we can supply the amount of data center capacity that's going to be needed, because I was looking around the market, and we were in the market with GPUs. So we had our capacity, but then we were looking at third-party capacity.
There was really nothing out there for that kind of power density, right? 50 kW a rack is what we were looking for, and we were lucky, because that happens to be what we built in Jamestown. So 50 kW a rack. And then we're looking at the roadmap. So I went back to my team, and I said, guys, we need to go back to building hundreds of megawatts like we did at Bitcoin, but in high performance computing. And so we spent the time to figure it out; we'd been through these few iterations. We worked with Nvidia's data center team, and we designed to the Nvidia roadmap, and we designed a facility specifically for Blackwell. It wasn't called Blackwell at the time, hadn't been named, hadn't been announced yet, but Nvidia provides the roadmap and what's going to be the future. So we designed a facility around Blackwell. We actually increased the footprint and increased flexibility, but streamlined the way we build, so that we can build faster and, you know, cheaper, and it's just repeatable. But yeah, it's been a pretty big evolution. I'm glad that we, what I call, stubbed our toe at small scale, right? We were building much smaller when we went through a lot of these big experiences where we said, this doesn't work. I would have hated to figure out this doesn't work at a really big scale. But we should expect just continuous refinements to how we build. And I think you know this: AI is dog years, right? Everything's changing so fast. So you're trying to plan for five to 10 years into the future, because we're building assets that we think are good for 30 to 50 years. You're trying to do everything you can to future-proof, right?
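As a quick sanity check on the figures in this exchange, 100 MW of critical IT load at the 50 kW-per-rack density mentioned above, treating all of the critical load as rack load (which ignores networking and storage overhead, so this is an upper bound, not a floor plan):

```python
# Rough capacity math from the figures in the conversation.
critical_it_kw = 100_000   # 100 MW of critical IT load
kw_per_rack = 50           # rack density cited for Jamestown

racks = critical_it_kw // kw_per_rack
print(f"~{racks} racks at full build-out")
```

Two thousand 50 kW racks is the scale at which the single-building, fully InfiniBand-connected design discussed next starts to run into cable-length limits.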
Eric Bell 24:27
So in terms of one of those early buildings, you moved to, like, a three-story design, right? Are you still at that three-story design, or are you more horizontal now?
Wes Cummins 24:35
So now we've gone single story again. Land is not at a premium. This is one of the things that we learned. On the first building, which is going to be a really cool building, there are InfiniBand limitations on distance, and so we wanted to design a building where every GPU inside the building could be interconnected with InfiniBand, and you'd create one massive supercomputer. When the building's turned over, all the equipment will be turned on in the next few weeks, and it'll be massive, probably the biggest supercomputer in the world, because you have 100 megawatts of critical IT load, all interconnected with InfiniBand. But the reason we did the three stories: you run network core through the middle, and then you can interconnect all of that within kind of that 30 meter distance limitation on InfiniBand. It went to 50 meters if you use single-mode optics, but those are more expensive. So it was designed specifically for that. And then as we went through and learned more, okay, so now we can have three or maybe four data halls that are interconnected that way. We do six data halls in the flat format now, but you don't need a lot more than that, and the technology is evolving for Ethernet connectivity to go in between the InfiniBand. So this just made more sense for the future. What we were doing at the time made a ton of sense, but this is what I talked about with future-proofing. The building works just fine on Ethernet, too; we were solving specifically for this InfiniBand limitation, and that's why it was built the way it was built.
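The cabling constraint behind the three-story design can be sketched as a simple reach check. The 30 m and 50 m figures are the limits cited in the conversation; actual InfiniBand reach varies by cable type and optics generation, and the example cable-run geometry below is an assumption for illustration:

```python
MULTIMODE_LIMIT_M = 30    # cited limit for standard optics
SINGLE_MODE_LIMIT_M = 50  # cited limit with pricier single-mode optics

def within_reach(run_m: float, single_mode: bool = False) -> bool:
    """Does a cable run of run_m meters fit the InfiniBand distance budget?"""
    limit = SINGLE_MODE_LIMIT_M if single_mode else MULTIMODE_LIMIT_M
    return run_m <= limit

# Putting the network core in the middle of a three-story building
# shortens the worst-case run: e.g. 20 m horizontally plus one floor
# (assume ~6 m) vertically is ~26 m of tray distance, inside the budget.
print(within_reach(20 + 6))               # fits with multimode
print(within_reach(45))                   # exceeds multimode
print(within_reach(45, single_mode=True)) # fits with single-mode optics
```

The design choice follows directly: stacking floors around a central core keeps every GPU-to-core run under the limit, whereas a sprawling single-story hall of the same capacity would push worst-case runs past it, which is why the flat format leans on Ethernet between halls instead.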
Eric Bell 26:07
But from an InfiniBand perspective, it sounds like you build these clusters and then you connect those clusters. I'm talking as a layman, but, you know, are you connecting these clusters together? Meaning you don't need to have this cube-like structure where you're shortening the fiber links to meet, you know, single-mode or multi-mode limitations. Is that why? Because you're connecting up different clusters together?
Wes Cummins 26:31
So you can connect up different clusters, but if you want the InfiniBand altogether, to create one cluster out of all of it, right, that's what that building is designed for. So you have 100 megawatts of critical IT load that is one cluster, right?
Eric Bell 26:45
But I think, isn't that what most, I mean, you're ground zero for understanding, you know, hyperscale requirements and stuff like that. But I would think that they want to build as big of a cluster as possible, and the cube-type format, in my mind, seems to make sense. And so, you know, I look at other hyperscale buildings, and they build their own buildings, and they're usually one or two stories, so they're not doing that cube thing. It sounds like you've gotten away from the cube. So it sounds like there's something else, you know; I'm just trying to figure that piece out.
Wes Cummins 27:15
As you go through what the technology is doing now, like from a networking perspective, even if those are flat buildings and they're further apart, the interconnect with Ethernet is going to work better to make those clusters operate. And the software is getting more sophisticated, so that clusters can work together on different work sets of the same model, right? Because if you look at why you want all of these connected together, it's specifically for training. So, training versus inference. If they're connected together like this, they work for training and for inference. If they're not connected together, they only work for inference, generally. That's how it works. So ours is dual-use, training and inference. But now what we're seeing from our customers is that you actually chop these up and make the data halls themselves their individual cluster, so 25 megawatts each, and then they interconnect them with Ethernet. You don't require it; even inside of a cube building, they would still be doing the same thing. So you can do it in a flat format. Gotcha.
Eric Bell 28:19
Okay, that makes sense. All right, let's talk a little bit about GPU-as-a-service. It sounds like you guys had GPU-as-a-service, versus HPC cloud. So I see, like, two very successful companies: yourself, Applied Digital, and IREN, right? You know, it feels like you guys are going in different directions, but both have seen success. So, for example, you guys had the GPU cloud and are kind of moving away from it, it sounds like, and IREN seems like they're running full bore towards that, you know, adding more Nvidia chips, that sort of thing, and offering it as a service. And so, I guess, any thoughts from that perspective?
Wes Cummins 29:03
So I think, my opinion is, you decide on one or the other. We were data center first, and, you know, we added the GPU business, the cloud business, when we were at the early stage with those customers, and that's all they wanted, right? It was the cloud business. But then I looked at the market, and I'm saying, okay, well, what company do I want to create? And again, we're very focused on data center build, and we wanted to go directly after the investment-grade hyperscalers, right? I looked at the private companies in the space, and I like the business model: longer-term contracts, guaranteed revenue, versus the GPU business, and lower risk, right? Lower technology risk than owning the GPUs. And so that was the business that I wanted to be in, what I'll call the highly technical landlord business, versus owning the GPU stack and providing cloud service. I don't want to be a hyperscaler myself. And that's really the model you're choosing, those two different models: can I be a hyperscaler, or do I want to service the hyperscalers? And one of the things that we saw as we went through negotiations with a hyperscaler on a data center lease: some of them really didn't like us having that cloud business, even though it was very small. And so we said, look, we need to choose: are we a data center company, or are we a cloud company? And we are in the data center business. I like the risk profile of that business and the contracts we're signing. You know, we've signed with CoreWeave and then an investment-grade hyperscaler, $16 billion of contracted revenue over 15 years. Both of them are 15 years. Both of them are effectively non-cancellable contracts; they can cancel, but they owe us the 15 years of payment. So, 100% make-whole.
Eric Bell 30:57
Yeah, that makes sense, and I agree with you. It seems like you've got to pick one or the other. You can't have your feet in both worlds, because you start to compete with your customers. Correct. In the rapid-fire questions you very firmly, or quickly, said grid power. You're like, I want grid power over self-gen. But, you know, you've probably looked at it. So what are your thoughts on on-site power? And there's a lot of chatter in the industry about, you know, demand flexibility, so this is a multi-part question. But, you know, what are your thoughts on on-site power?
Wes Cummins 31:33
So I did a behind-the-meter project already once, and that's it for me. That was enough fun. And what we see as our strategy in the future, and this announcement was actually made by the company we're doing business with: we have some power gen stuff that we're working on, but it will all be on-grid power gen. And there are some even much, much bigger things that we're working on, where you'll do on-site generation, but it will always be grid-tied, and we'll work with the utility that's there to make sure that happens. What we found in the behind-the-meter world was a lot more obstacles and roadblocks than we had anticipated, and it was largely because we weren't working within the system, right? There's a system, and these are, you know, regional or local or even national, and there's been a system created for electricity in the United States for the last 100-plus years, right? And working inside the system seems to work a lot better than working outside the system. So it was just a fight on everything on the behind-the-meter side, from, you know, entitlement, and getting things done, and getting permitted, and all of those things. And the site ended up being much later than we expected it to be because of those types of roadblocks. And then if you look at those customers that I mentioned that we're going after, every single one of those customers would prefer grid power over a behind-the-meter or on-site gen solution. Every single one. And so if that's what my customers prefer, that's what I'm going to go after. And so we have a lot of locations with grid power; you know, we went and found these a while ago, and we've been developing them. So we have what I would call really premium properties with abundant grid power, and, you know, we're negotiating more of these.
So we have two campuses under construction and signed, and I said publicly that we have two more that we're under negotiation on, and that continues to evolve to even more. So what we're seeing is that our customers, we know they prefer grid. It's what they've been doing for the past 15 years, and that's their preference, and they prefer to operate inside the system as well. But we did do a deal where, you know, there was a gigawatt of, this one's interesting. We're kind of going backward in time on this. As you probably know, if you want to do new gas power gen, if you want to get in line today for the three or four guys that make the turbines, you're going to get delivery in 2031 at the earliest, and probably 2032. Then you have solutions out there like Bloom, which, you know, I think there's going to be a lot of that sold in the market. That's a really interesting solution that they do on a fuel cell that uses that gas as well. But what we did was go back, and we've been working with a company, a 160-year-old company that's deployed 400 gigawatts of power generation worldwide. They were a big, big player in the coal industry, and they've done a lot of retrofits from coal on steam boilers to nat gas. They've never done new build on nat gas, but they've done new build on coal and then retrofitted it to nat gas. So now we're just going to do new build of nat gas steam turbines, and we can start getting delivery in late '27 and '28. But those will be on-grid assets that then can feed our facilities, so it will still stay within the system. The thing I like about that is, we know what the useful life is. We know what the cost is. We know what the efficiency is.
These have been done for over 100 years, and they last forever, so it's a great asset for us to build and know exactly what we're getting, instead of something new.
Eric Bell 35:38
Got it. Yeah, is this something that you'll be operating, or is it that partner that you're working with?
Wes Cummins 35:44
It will be the partner. We won't be in the power business ourselves; Applied Digital will not. So we're working with partners on that. But we wanted to go procure the supply chain. I thought it was a pretty novel idea of doing this; we've been working on it for a little while. And, you know, we were already in discussions with our utility partners on this. And even if it isn't the utility itself that builds it, it'll go onto their system. And you could bring an IPP, or some kind of IPP, into it that would do it as well.
Eric Bell 36:14
So it's almost like you're making your own co-location, or you guys are jointly choosing the site together and co-locating together.
Wes Cummins 36:22
Yeah, for sure, on the power gen. And then, like the way we choose our sites, a lot of the power is there, but the electrical infrastructure to go much, much bigger is already there. But at some point, if we want to do three, four, or five of these in the same state, we need new power gen. For sure, there's going to be new power gen all over the country, so we need new power gen, but it'll go inside of the system. And you don't have to co-locate it on the site. You just need to get it into an area of the system that can deliver, where there's already robust delivery, so you can deliver it to the site.
Eric Bell 36:58
Makes sense. So you brought up site selection a little bit, or at least you alluded to it. So let's talk about site selection. What do you look for in a new site? Obviously, if there's available grid, that sort of thing. And maybe a second question there: will you be expanding outside the Dakotas? You probably can't talk about specific locations, but just generally, on the
Wes Cummins 37:19
second question, the answer is, I expect to, yes. One thing that would be nice is to expand to a southern state so that we can do dirt work in the winter. Right now we're in a place where we're hitting the time of year when kind of any dirt work is done for the year. It's been good weather there this year so far, so we've been able to continue, but it would be nice to move into a state where you can do year-round construction. In North Dakota, we do year-round construction, but you can't do dirt and foundation work once it freezes. You're kind of out of that until, you know, the late March or April timeframe. So you should expect us to expand into a different state. And then we just want to diversify geographically as well; it just makes sense for our business. And then on the first question, when we do site selection, we're looking for power and we're looking for fiber. Those are the two. And then obviously the available land.
Eric Bell 38:15
I guess, where do you find the information on substation uptime?
Unknown Speaker 38:20
We had to do it with the utility.
Eric Bell 38:24
So they can almost self-report, in a way, what the uptime was.
Wes Cummins 38:28
I don't know if they self report publicly, but they could. They did the study for us historically to tell us what it was.
Eric Bell 38:34
Okay, yeah, interesting. Okay, makes sense. So the DOE, you know, Christopher Wright, the Secretary of Energy, talked about a rule change for load flexibility, right? Get connected quicker: if you're flexible, you get through the queue in 60 days or so. We'll see how that shakes out. But what are your thoughts on that? Because it seems like it'll change the game from a data center site selection perspective in a pretty big way. But it still depends.
Wes Cummins 39:05
So flexible load, right? We need our end customer to be okay with that, and historically they have not been. They're extremely rigid, or historically have been extremely rigid. You know, things are changing depending on demand levels all the time. But you can create a non-flexible, fully redundant data center while using flexible load. You need to install more backup on site, whether that's battery energy storage systems or, I mean, we have backup gen, but you want to run that as little as possible. So if it's flexible up and down a lot, then you definitely need to do battery, and you may need to do that a lot anyway, just from a load-balancing perspective on site with these really large sites. But it's something that we're into a lot, as far as, you know, will our customers accept that? How do we make it so that it looks like firm power to them, even if it's flexible load? Because it's a good rule change, and potentially a really helpful one for the industry.
Eric Bell 40:20
Maybe, in other words, look at battery stocks to invest in.
Wes Cummins 40:27
How about this: if you find a good one, let me know. I mean, we've been digging through a lot, and there are a lot of good companies out there doing it. The issue you have with batteries is that with data centers, with these large GPU loads, the power usage fluctuates, and it fluctuates fast, and it would be nice to balance that with batteries. Generally what you see is they can discharge at the rate you need them to, but they can't absorb power, or charge, at the rate you need them to. That's the mismatch. How do you balance that? We've been working a lot on that, like, you know, supercapacitors, something of that nature. So there's a lot of work going on there, but if you find something really interesting, or one of your listeners has something really interesting, please get in touch with me on that, because it's one of the problems we have, or one of the solutions we're looking for.
Eric Bell 41:19
Got it. So solving the discharge and charge imbalance, or
Wes Cummins 41:24
if there's another non-battery technology, one that's not just load banks, to manage the power fluctuation, right?
Eric Bell 41:32
Okay, makes sense. And so from kind of a Rocky Mountain, or upper Midwest, the Dakotas perspective: Colorado, for sure, does not have sales tax exemptions, right? There's only like 10 states that don't. North Dakota does; South Dakota doesn't. Do you know anything about how we as an industry can push on that, anything that listeners or I can get involved in, to help unlock that piece? Because I think it's a big unlock once we get that.
Wes Cummins 42:06
We're working a lot in South Dakota and trying to help them understand the big benefits of data centers, that you still get a huge benefit even if you give the sales tax exemption, right? This is just on the IT equipment internal to the data center, and 40 or 41 states have the exemption. So if you're one of the states that does not, it's a big disadvantage. And we have a location in South Dakota that we could have been building already had that exemption been in place, but it's a hang-up for our customers, right? It's not an expense that we pay, right?
Eric Bell 42:41
I couldn't agree more from that perspective. I live in Colorado, so it's kind of close to my heart, you know, getting more data centers into the state. All right, so moving to the last couple sections of our podcast here, we call it the data center war story. It's effectively a time in your career that might be definitive, or something dramatic that happened.
Wes Cummins 43:08
So this was prior to running these AI factories, the HPC facilities; this was in our Bitcoin facilities. We had transformers that started to fail. They were faulty, and these were Chinese-sourced transformers. When you build for Bitcoin, it's built for cost. That's truly what it's built for. And they started failing, and we dig in, because generally you get one that fails first, and then we started digging into it. With these things, it takes a while to figure out what's going on; you're trying to do the post mortem. You hope it's just an easy fix, a component, and so you end up going weeks on this trying to figure it out. We figured out eventually that it was a fatal flaw for these transformers, and it was through the entire site, right? There's 180 megawatts of capacity. The good news is that with Bitcoin, your contracting is a lot more flexible, because in the Bitcoin world, 95% uptime is really, really, really good. It's just different, a totally different animal. But our team mobilized. This was in a market where it was difficult to find transformers, but we found a lot of the two-and-a-half-megawatt transformers that we needed. We replaced them in a few months' timeframe. We had our customer back up, and our customer was patient with us to get that done. I was really proud of how quickly the team swapped that out, because it was a massive, massive task to get that done. And it was definitely a scary time. It was our largest campus at the time that had the issue, and it was extraordinarily frustrating, but the team just fought through it and got it done and back online. It was a serious test for everyone.
Eric Bell 45:11
I bet, yeah. And certainly Bitcoin's very different than HPC in your expectations; they can be down. Every 10 minutes you're minting a new Bitcoin, or there's a new block that's written, and so, yeah, you can be offline. Anyway, let's move on to the next section, the top picks. This is something that you want to share that's useful to your life. If you have something, go with yours, or I could go with one of mine while you kind of think.
Unknown Speaker 45:41
Go ahead, I'm curious to hear yours.
Eric Bell 45:43
All right, so I got married this summer. Second marriage; the sequel is always better, I guess. Yeah,
Wes Cummins 45:53
we always hope so, for sure.
Eric Bell 45:55
That's not my topic, though. Getting married isn't my topic, although it probably should be. It's that I have never gotten a custom suit before. You know, I don't wear suits very often, and I got a custom suit for the wedding, in part because I wanted a three-piece suit, you know, to stand out a little bit more, rather than a tuxedo and that sort of thing. And when you go to the stores, there aren't too many three-piece suits. Someone said, you should have it made. And I did. The tailor came over and he looked at my wife's, my fiancée's at the time, paintings on the wall, and he's like, this would be great for inside the suit. And so that's what we did. We put one of her paintings inside the lining of the jacket. He had it printed, and so the lining of the jacket had her painting in there. So I thought that was pretty awesome. That's really cool.
Wes Cummins 46:47
It will be. I mean, I like it because it's obviously a keepsake, but it's something you can use all the time too, that has that keepsake inside permanently. That's great. Mine, I was thinking along a different line. I was just like, you know, what do I do? So, you know, we've grown and run this business. Something that people don't really know about me is I have a lot of kids and a big family, over a big age range too. And between work and doing a lot of kids' stuff, that consumes most of my time. The big change I've made over the past five years, which has been a really big change for me, and this is super boring, is that I just get up early. I get up early every morning, earlier than everyone else gets up, and it's truly the only time I get to myself during the day. I had to go find that time somewhere, and that turned out to be early. It's something I highly recommend for everyone. Even if I'm traveling, whether it's vacation or work, I find my days to be a lot better when I have early mornings versus late nights. And my biggest one is, I always tell people: you travel anywhere, get up for the sunrise. Don't worry about sunset; everyone sees the sunset. But you get up for sunrise, especially going to Europe or a lot of other places in the world, and you own that location at sunrise. No one else is around, there's almost no one around, especially in the summertime. That's been a big booster in life for me, getting that extra time just to myself, to collect my thoughts.
Eric Bell 48:40
I like that. Yeah. You know, I'm turning into an early morning person as well, almost by accident. And there's a lot of research now, Huberman Lab and others are talking about how you get this early morning light, you know, that sunlight right at sunrise, and it's good for you, to help set that circadian rhythm. Yeah, makes sense. Awesome.
Unknown Speaker 49:01
The suit's a lot cooler, though.
Eric Bell 49:05
Maybe I had more time to think about it. Yeah, but it was great to have you on. I appreciate getting to know you a little more and learning about the business and that sort of thing. Where can we find more about you and the business?
Wes Cummins 49:20
Our website, applieddigital.com. We post a lot on our socials. We try to be absolutely as transparent as possible. You get monthly updates on our builds, construction, a lot of video footage, all of those things.
Eric Bell 49:38
Awesome. Well, I appreciate it. Thanks for coming on.
Unknown Speaker 49:41
Yeah, thanks for having me, Eric. I appreciate it.
Transcribed by https://otter.ai