🌾 Welcome to StableBread’s Newsletter!
DigiPowerX (DGXX) is a Miami-based AI infrastructure operator with ~400MW of secured power capacity across four U.S. sites.
The company was founded on February 18, 2017 as a Bitcoin mining and energy operator, and is now repurposing that power into Tier 3 data center capacity sold to GPU-as-a-Service customers under its "NeoCloudz" brand.
The pivot has been in motion since November 2025, when the company filed an amended $200M at-the-market equity offering and started developing the ARMS 200 (its 1MW prefab GPU data center pod design) with Super Micro Computer.
That's around when I started following the company, but I held off on the write-up until I saw some contracted revenue.
That changed on April 20, 2026, when DGXX produced its first piece of contracted evidence: a $19.6M, 24-month bare-metal GPU rental agreement with a stealth AI lab called SubQ. The deal is small relative to the company's stated 100MW AI plan, but it validates the thesis DigiPowerX has been building toward.
Beyond the SubQ contract, here's what DGXX has going for it:
Leadership: Recent senior hires from Oracle (ex-Director of Engineering, now VP AI Infrastructure), Deutsche Bank (former CFO), and Verizon/BlackRock (advisor Hans Vestberg) build enterprise infrastructure depth around the founders.
Power capacity: ~400MW of secured power across four U.S. sites, with NY production at ~$0.04/kWh vs. $0.07-0.10 typical for data centers.
First pod live: ARMS 200 commissioned, a 1MW prefabricated modular data center unit housing up to 400 NVIDIA Blackwell GPUs.
SMCI partnership: Co-developed the ARMS 200 with the OEM whose Blackwell reference architectures sit inside Meta, Microsoft, and OpenAI deployments.
Tier 3 certification: Under ANSI/TIA-942 (99.982% uptime, redundant power and cooling paths).
Liquidity: $93M of cash and digital currency, zero traditional debt.
DGXX trades around $3.50, giving the company a $244M market cap (69.8M shares × $3.50). After netting the $93M cash and digital currency position, enterprise value is $151M. The cash backing alone is $1.33/share.
Today DGXX trades at 4.4x EV/trailing sales on $34.2M of mostly legacy revenue. Management's FY 2027 capacity goal is 100MW of AI infrastructure (90MW colocation + 10MW GaaS), implying ~$282M of annualized revenue at full utilization. Our base case forecast lands at ~$197M after realistic haircuts.
Either way, today's $151M EV trades at 0.5-0.8x forward sales.
AI infrastructure peers trade at 4-9x forward EV/Sales (CRWV at 4.2x, NBIS at 6.3x, IREN at 6.9x). Even at the low end of the peer range applied to FY 2027 base case revenue, EV expands ~4x from current levels. At peer-mean (~7x), it's a 6-7x EV re-rating.
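For readers who want to check the math, here's the back-of-envelope valuation in Python, using only the share count, cash, and revenue figures quoted above (the $282M and $197M forward-revenue cases are management's target and our base case, respectively):

```python
# Sanity check of the DGXX valuation math quoted above.
shares_m = 69.8        # shares outstanding, millions
price = 3.50           # $/share
cash_m = 93.0          # cash + digital currency, $M
trailing_rev_m = 34.2  # FY2025 revenue, $M

market_cap_m = shares_m * price     # ~$244M
ev_m = market_cap_m - cash_m        # ~$151M
cash_per_share = cash_m / shares_m  # ~$1.33/share

ev_trailing = ev_m / trailing_rev_m  # ~4.4x trailing sales
ev_fwd_full = ev_m / 282.0           # ~0.5x on full-utilization revenue
ev_fwd_base = ev_m / 197.0           # ~0.8x on base-case revenue

print(round(market_cap_m), round(ev_m), round(cash_per_share, 2))
print(round(ev_trailing, 1), round(ev_fwd_full, 2), round(ev_fwd_base, 2))
```

Run it and you get the $244M market cap, $151M EV, $1.33/share cash floor, and the 0.5-0.8x forward-sales range cited above.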

DGXX: Base-Case Valuation
Per-share returns run lower than the EV expansion because of dilution from the ATM and lease financing used to fund the buildout. Our base case projects $15.62/share by FY 2027 (+346%). Our bull case scales to $26.28/share by FY 2027 (+651%).
The downside is bounded by ~$1.33/share of gross cash backing today, but that floor erodes as the buildout consumes the balance sheet through 2026.
Let's dive in deeper now to fully understand the business and the risk/reward scenarios.
Business Transition
DigiPowerX’s legacy business is built on three revenue lines, each at a different stage of repositioning into AI infrastructure:
Colocation services: Rents space and power to Bitcoin miners. $17.5M in FY2025, up 10.6% y/y from $15.8M in FY2024.
Energy and electricity sales: Operates a natural gas cogeneration plant in North Tonawanda, NY that sells electricity to the grid. $13.2M in FY2025, up 21.1% y/y from $10.9M in FY2024.
Direct Bitcoin mining: Mines BTC for its own account. $3.5M in FY2025, down 65.9% y/y from $10.3M as power gets pulled off self-mining.
Full year 2025 revenue came in at $34.2M, down 7.6% y/y. The decline is intentional—capacity that previously generated low-margin self-mining revenue is being redirected toward higher-multiple AI infrastructure.
Worth noting: on the April 1, 2026 earnings call, CFO Paul Ciullo specifically pushed back on the "legacy" framing for colocation and energy, calling them "the foundation upon which the company's AI data center build-out is anchored."
What's actually being wound down is Bitcoin self-mining.
AI colocation at $150/kW/month generates ~$1.8M/MW annually. GPU-as-a-Service at $3.50/hr/GPU generates ~$9.8M/MW annually. Bitcoin self-mining produces a fraction of either on the same power footprint.
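Those per-MW figures reconcile cleanly. A quick sketch, assuming the 400-GPUs-per-MW pod density and ~80% utilization that the company's own contract math implies:

```python
# Annual revenue per MW under the two AI monetization models.
# Assumptions from the article: 400 GPUs per 1MW pod, ~80% utilization.
colo_rate = 150.0      # $/kW/month
gaas_rate = 3.50       # $/GPU-hour
gpus_per_mw = 400
utilization = 0.80
hours_per_year = 8760

colo_rev_m = colo_rate * 1000 * 12 / 1e6  # 1MW = 1,000 kW -> ~$1.8M/yr
gaas_rev_m = gaas_rate * gpus_per_mw * hours_per_year * utilization / 1e6  # ~$9.8M/yr

print(round(colo_rev_m, 1), round(gaas_rev_m, 1))
```

GaaS throws off roughly 5x the revenue of colocation on the same megawatt, which is why the GPU fleet gets the headline attention despite being only 10MW of the 100MW plan.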
Infrastructure
Per the company's December 2025 shareholder letter, DigiPowerX has secured ~400MW of power across four U.S. sites:
Columbiana, Alabama (70MW secured): Tier 3 AI data center site under continued development, featuring dual-path power and GPU cluster readiness. This is where the ARMS 200 pods are being deployed.
North Tonawanda, NY (123MW secured): 60MW natural gas cogen plant operating today, with load study additions bringing total secured capacity to 123MW. Produces electricity at ~$0.04/kWh vs. $0.07-0.10 typical for data centers.
Buffalo, NY (18.7MW secured): Powered by Niagara Falls hydropower.
Hildebran, NC (200MW available): Targeted for phased AI data center development through 2028-2029. Adjacent to a Duke Energy substation, in the same Hickory metro area as Google's Lenoir data center.
Greenfield power capacity at this scale typically takes 24-48 months and $30-75M of upfront infrastructure spend per 100MW interconnect. DigiPowerX already has substations, transmission, and zoning in place across all four sites.
The cogen power advantage is structural and quantifiable. At ~$0.04/kWh vs. $0.07-0.10 for grid-priced peers, DigiPowerX saves $0.03-0.06 per kWh on every MW served by the NY cogen plant.
At full ramp of the NY site (~60MW operating today, scaling to 123MW secured), that's roughly $15-30M/year of operating margin advantage (60,000 kW × 8,760 hrs × $0.03-0.06/kWh) over competitors paying market rates.
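The cogen savings estimate is simple to reproduce from the figures above (this assumes the full 60MW runs around the clock, so treat it as an upper-bound sketch rather than a forecast):

```python
# NY cogen cost advantage vs. grid-priced peers, per the figures above.
capacity_kw = 60_000                    # ~60MW operating today
hours_per_year = 8760
savings_low, savings_high = 0.03, 0.06  # $/kWh advantage vs. $0.07-0.10 grid rates

annual_kwh = capacity_kw * hours_per_year       # ~525.6M kWh/yr
adv_low_m = annual_kwh * savings_low / 1e6      # ~$15.8M/yr
adv_high_m = annual_kwh * savings_high / 1e6    # ~$31.5M/yr

print(round(adv_low_m, 1), round(adv_high_m, 1))
```

That's the ~$15-30M/year operating margin advantage, before scaling toward the 123MW secured at the site.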
Leadership Team
DigiPowerX has been adding senior people across the disciplines needed to operate enterprise AI infrastructure:
Michel Amar (Chairman & CEO): Founded the company in 2017. Provides operating continuity through the crypto-to-AI transition.
Alec Amar (President): Public face on business development, including the SubQ negotiation.
Jagan Jeyapaul (CTO): Leads the NeoCloudz technical buildout. Background in AI infrastructure, HPC, and GPU-as-a-Service.
Venkat Rangasamy (VP of AI Infrastructure): Recently joined from Oracle, where he was Director of Engineering. Brings hyperscaler-grade infrastructure experience to the GPU fleet rollout.
Gerard Rotonda (board / advisor): Former CFO of Deutsche Bank Americas. Adds institutional finance credibility for capital raises and enterprise contract negotiations.
Hans Vestberg (Senior Advisor & USDC Co-founder): Former Verizon Chairman/CEO and current BlackRock board director.
Luke Marchiori (EVP, energy operations): From EnergyMark, a NY natural gas marketer. Brings energy industry relationships relevant to power procurement.
Faraz Zobairi (Senior Support Leader): 20+ years in enterprise support, relevant for the 24/7 SLA standards AI customers expect.
DigiPowerX is hiring senior people from large-scale operators across the disciplines that matter: engineering and infrastructure from Oracle and Verizon, finance from Deutsche Bank, and institutional capital access through BlackRock.
That mix gives the AI pivot a credible execution bench beyond its crypto roots.
SMCI Partnership and the ARMS 200
DigiPowerX partnered with Super Micro Computer (SMCI) to co-develop the ARMS 200, a prefabricated, modular data center pod.
SMCI provides the rack-scale GPU servers and reference architecture; DigiPowerX (via its USDC subsidiary) handles pod-level design, integration, and manufacturing.
Each unit delivers 1MW of compute and houses up to 400 NVIDIA B200/B300 GPUs (per the SubQ press release confirming 4,000 GPUs across 10 pods).

SMCI is one of NVIDIA's primary OEM partners and the leading supplier of AI-optimized GPU servers. Its reference architectures sit inside hyperscaler deployments at Meta, Microsoft, and OpenAI, and its rack-scale systems set the integration standard for Blackwell-class compute.
SMCI carries some accounting overhang from a 2024 Hindenburg short report and subsequent SEC investigation, but product delivery has continued on schedule.
For DigiPowerX, the partnership de-risks GPU integration and shortens time-to-market on each pod. ARMS 200 deployment time is ~180 days vs. 24-36 months for traditional AI-grade data center construction. The first pod was commissioned at Alabama in mid-March 2026 and entered GPU testing shortly after.
The ARMS 200 received Tier 3 certification under ANSI/TIA-942, requiring 99.982% uptime. That's the bar enterprise customers need before signing meaningful contracts.
Two monetization paths off the platform:
AI colocation: Charges for space and power at market rates (~$150/kW/month per management).
GPU-as-a-Service (NeoCloudz): On-demand and contracted compute at ~$3.50/hr/GPU.
DigiPowerX is also developing ARMS 300 and ARMS 400 variants aimed at hyperscaler and government workloads. The ARMS 300 is being engineered around next-generation NVIDIA chips for a planned ~120MW deployment at the New York facility. The ARMS 200 is patent-pending.
USDC Subsidiary
In early March 2026, DigiPowerX publicly announced US Data Centers Inc. (USDC), a subsidiary focused on global deployment of modular Tier III AI data centers.
The entity itself was established December 31, 2025 per CFO commentary on the April 1, 2026 investor conference.
Hans Vestberg, former Verizon chairman/CEO and current BlackRock board director, joined DigiPowerX as senior advisor on February 2, 2026, then was named co-founder of USDC the following month.
The restructure rolled out with limited initial transparency. Concerns over dilution (DGXX shareholders effectively giving up 45% of a venture they might have expected to own outright) and unclear governance drove the stock down to recent lows around $1.89.
DigiPowerX's March 16 clarification press release and a follow-up interview with CEO Michel Amar walked back most of the concerns.
Management clarified that USDC owns no operating assets. It's a manufacturing and distribution business for the ARMS platform. All pods, GPUs, sites, and customer revenue stay 100% with DigiPowerX, which buys ARMS units from USDC at cost.
A single 100MW customer order would require $250-300M in capex. Funding that through a $200M-class market cap parent either wrecks the balance sheet or forces heavy dilution.
Spinning USDC out as a separate manufacturing entity puts the capex risk on a separate balance sheet. DigiPowerX gets upside through its 55% stake without owning the manufacturing risk.
Here’s USDC’s current cap table:
DigiPowerX: 55% economic interest
Management and co-founders (including Vestberg): ~35%
Seed investors: ~10%
Vestberg's BlackRock board seat gives DigiPowerX a direct line to hyperscalers and institutional capital.
Governance remains the open question. The March 16 release discloses that DigiPowerX does not have voting control over USDC despite owning 55% of equity, and the voting structure has not been publicly explained.
Until that's resolved, DGXX owns 55% of the economics but doesn't formally control the decisions.
The First Customer
Now let's talk about their first customer. The infrastructure, partnerships, and team only matter if real customers show up. April 20, 2026 was the day that started happening.
DigiPowerX signed a 24-month bare-metal GPU rental agreement with SubQ AI. Here are the terms:
Total contract value: ~$19.6M over 24 months ($9.8M annual run rate).
Upfront payment: $2.95M, non-refundable, equal to 15% of TCV.
Billing: Monthly invoicing, payable Net-15.
Effective date: May 15, 2026.
Hardware: Latest-generation NVIDIA Blackwell B300 GPUs with 192GB HBM3e, on InfiniBand/RoCE v2 fabric.
Architecture: Fully dedicated, non-virtualized bare metal with root-level customer control.
Facility: Inside a DigiPowerX-owned, Rated 3 facility with redundant utility feeds.
The math checks against management's stated rate: $3.50/hr × 400 GPUs × 8,760 hrs × ~80% utilization = ~$9.8M/year, matching the $19.6M / 24 months contract.
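The same cross-check in code, including the upfront payment as a share of total contract value:

```python
# Cross-check of the SubQ contract economics against management's stated rate.
rate = 3.50          # $/GPU-hour
gpus = 400           # one 1MW ARMS 200 pod
hours = 8760
utilization = 0.80

annual_m = rate * gpus * hours * utilization / 1e6  # ~$9.8M/yr
tcv_m = annual_m * 2                                # 24-month TCV, ~$19.6M
upfront_pct = 2.95 / tcv_m * 100                    # ~15% of TCV

print(round(annual_m, 1), round(tcv_m, 1), round(upfront_pct))
```

All three figures line up with the disclosed terms, which suggests the contract was priced straight off the public $3.50/hr rate card rather than at a discount.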
Sized against the company's stated 10-pod / 4,000-GPU GaaS footprint, this deal is roughly 1MW or ~10% of GaaS-specific capacity. Sized against management's full 100MW AI plan (90MW colocation + 10MW GaaS), it's only about 1% of the total target.
The $2.95M upfront covers ~15% of the per-pod buildout cost ($18-20M). Monthly Net-15 billing keeps cash collection tight, and the 24-month term locks in $9.8M of annualized contracted revenue.
Management is clear that this is the first customer, not the destination:
"This deal is just the beginning. We intend to rapidly scale our relationship with SubQ AI and welcome additional high-growth AI customers onto our dedicated Blackwell fleet."
The CTO's comments the same day point in the same direction:
"We're evaluating a Silicon Valley presence to accelerate our AI infrastructure roadmap and hire senior AI engineers."
A Silicon Valley office isn't the kind of decision a company makes if SubQ is one-and-done.
Who SubQ Actually Is
The customer matters as much as the contract. SubQ AI is a Miami-based AI lab that rebranded from "Aldea" in early 2026.
The team is small but credible. Justin Dangel runs the company as CEO. Co-founder Alexander Whedon previously worked at Tribe AI and Meta. Headcount sits at roughly 15-30, with active hiring in Miami and the Bay Area.
SubQ is building a proprietary "post-transformer subquadratic" architecture aimed at scaling more efficiently than today's quadratic-cost transformers. That means longer context windows at meaningfully lower training and inference cost.
The current product is a Speech-to-Text API at $0.09 per hour with 6.1% WER and sub-100ms latency, positioned as Deepgram-compatible. A Speech-to-Speech product is in alpha. An LLM platform is on the roadmap.
The closest comp is Cartesia, founded by the authors of the Mamba paper, which has raised ~$91M.
Training a post-transformer foundation model from scratch requires bare-metal Blackwell GPUs with root access, which is exactly what DigiPowerX is offering through NeoCloudz.
"Our roadmap calls for scaling to several thousand GPUs over the next few quarters as we advance our proprietary architecture."
If SubQ ramps to even 1,000 GPUs (~2.5MW) over the next few quarters, that's ~$24.5M of annualized revenue from one customer at SubQ-contract economics. The full multi-thousand-GPU target could fill the entire 10MW GaaS footprint over the next 4-6 quarters.
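Scaling SubQ's stated roadmap at the signed contract's economics is straightforward. A sketch, assuming the same 400 GPUs/MW density, $3.50/hr rate, and ~80% utilization as the initial deal:

```python
# SubQ ramp scenarios at the signed contract's economics.
def annual_rev_m(gpus, rate=3.50, utilization=0.80, hours=8760):
    """Annualized revenue in $M for a given GPU count."""
    return gpus * rate * hours * utilization / 1e6

mw_at_1k = 1000 / 400                 # 1,000 GPUs -> 2.5MW of the GaaS footprint
rev_1k = annual_rev_m(1000)           # ~$24.5M/yr from one customer
rev_full_fleet = annual_rev_m(4000)   # full 10-pod / 10MW GaaS fleet, ~$98M/yr

print(mw_at_1k, round(rev_1k, 1), round(rev_full_fleet))
```

At the full 4,000-GPU fleet, GaaS alone would nearly triple the company's entire FY2025 revenue base.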
FY2025 Financials
FY2025 results were reported on March 31, 2026: