Wall Street keeps treating Micron as a cyclical memory trade dressed up in AI clothing — and that framing is doing damage to how the stock gets valued.
From $295 in late January to $395 in mid-February, then a modest pullback to $368 by early April — that price path isn’t noise. It’s a market slowly renegotiating what memory is worth in an infrastructure cycle that doesn’t behave like prior ones. The 52-week range of $61.5 to $471.3 tells you just how violently the market has oscillated between “commodity trap” and “AI infrastructure backbone.” At $367.9, it sits closer to the skeptic’s price than the believer’s. That’s the opportunity.
DRAM prices have surged, and the primary suppliers are effectively booked out through the end of the year. That’s not a soft signal or an analyst projection — that’s the physical state of the market. Hyperscalers building AI data centers need high-bandwidth memory in volumes that current global capacity cannot satisfy on a short timeline. Micron isn’t just participating in that dynamic. It’s one of three companies on earth capable of supplying at scale.
The Revenue Print Isn’t the Story — the Trajectory Is
FY 2025 revenue came in at $37.4B versus $25.1B in FY 2024 — a 49% year-over-year jump. Most companies would spend a quarter celebrating that. Micron responded by deploying $15.9B in capital expenditure, a capex-to-revenue ratio of roughly 42.5%. That number should unsettle anyone who thinks this is a company coasting on a favorable cycle. It's a company betting its own balance sheet that the demand doesn't go away.
R&D spend tells a similar story: $3.8B, or about 10.2% of revenue. In a business where node transitions and packaging architecture determine who gets design wins at NVIDIA and AMD, that number isn’t overhead. It’s the foundation of future pricing power. You don’t sustain 10% R&D intensity to defend a commodity position. You do it to stay in a race where falling behind means becoming irrelevant.
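The headline ratios above can be reproduced directly from the reported figures. A quick sanity check (all inputs are the numbers already quoted in the text):

```python
# Reproduce the ratios quoted above from the reported FY figures.
rev_fy25 = 37.4   # FY 2025 revenue, $B
rev_fy24 = 25.1   # FY 2024 revenue, $B
capex    = 15.9   # FY 2025 capital expenditure, $B
rnd      = 3.8    # FY 2025 R&D spend, $B

yoy_growth    = (rev_fy25 / rev_fy24 - 1) * 100   # year-over-year revenue growth, %
capex_ratio   = capex / rev_fy25 * 100            # capex as % of revenue
rnd_intensity = rnd / rev_fy25 * 100              # R&D as % of revenue

print(f"YoY revenue growth: {yoy_growth:.0f}%")      # → 49%
print(f"Capex-to-revenue:   {capex_ratio:.1f}%")     # → 42.5%
print(f"R&D intensity:      {rnd_intensity:.1f}%")   # → 10.2%
```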
A 42.5% capex ratio compresses near-term free cash flow and adds execution risk if the cycle turns — that’s worth sitting with. Capital-heavy bets on demand continuity have burned memory companies before. But AI training and inference workloads require persistent, high-bandwidth memory access in a way that consumer electronics demand never did. The use case is stickier. The buyer is less price-sensitive. Supply chain lead times are long enough that even if demand softened tomorrow, the undersupply condition wouldn’t resolve quickly.
Citi’s 16% price target cut got attention. It probably deserved less. Trimming a price target during a macro pullback while the underlying demand thesis remains structurally intact isn’t insight — it’s calendar management. Analysts who anchor to near-term price momentum and call it fundamental revision are providing a service, just not to anyone trying to hold a position for 18 months.
The China Variable Nobody Wants to Quantify
Here’s where the bull case earns its skepticism: China’s indigenous DRAM development. Export controls have materially slowed progress, but “slowed” isn’t “stopped.” If domestic Chinese producers achieve viable HBM output by late next year — even at yields below Micron’s — the supply picture shifts. Not catastrophically, but enough to compress the pricing premium that currently justifies the capex cycle Micron is running.
How substantial is that threat? The technology gap in advanced memory is significant, and closing it requires equipment that’s increasingly difficult to procure. But state-directed capital doesn’t need a return on investment the way a publicly traded company does. That asymmetry matters, and the weakest assumption in the bull case is that export controls hold firm long enough to keep Chinese HBM output marginal through the end of the decade.
Even pricing in a partial Chinese supply ramp by late next year, the demand growth trajectory from AI inference expansion, edge deployment, and next-generation training clusters likely absorbs it. The bull case doesn’t require China to fail permanently — just to remain constrained long enough for Micron to lock in customer relationships and supply contracts that are hard to unwind. Design wins in HBM tend to be sticky. Once your memory is inside someone’s architecture, switching costs are substantial.
The software efficiency argument — that memory demand could be moderated by optimization techniques — deserves acknowledgment without overweighting. Every major infrastructure wave in the last thirty years has faced a version of this argument. Demand has consistently grown through it. Efficiency gains tend to expand use cases rather than shrink total consumption. More tokens generated per GPU-hour doesn’t mean fewer GPUs or less memory. It means more applications running economically viable inference at scale — which pulls more memory into the system, not less.
At $368, the stock sits roughly 22% below its 52-week high. The 15–20% correction before the recent rebound already absorbed a meaningful amount of cyclical anxiety. What the current price doesn’t fully reflect is the combination of a two-year structural supply constraint, a customer base with limited alternatives, and a capex program that suggests Micron’s own management sees the window as both genuine and time-limited. Management doesn’t spend 42 cents of every revenue dollar on infrastructure because they think this is a one-quarter phenomenon.
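The discount quoted above follows straight from the prices given earlier ($367.9 against the $471.3 52-week high):

```python
# Verify the stated discount to the 52-week high using prices quoted in the text.
price      = 367.9   # current price, $
high_52wk  = 471.3   # 52-week high, $

discount = (1 - price / high_52wk) * 100
print(f"Discount to 52-week high: {discount:.0f}%")  # → 22%
```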
The China timeline is uncertain. Free cash flow is under pressure while the capex cycle runs. Anyone holding this needs a multi-year horizon or they’re playing momentum with extra steps. But the structural case — tight supply, irreplaceable product, sticky hyperscaler demand — hasn’t broken. Market cyclicality is already in the price; the durability of the new supply constraint is not.
$368 is the market charging you a discount to own one of three companies that can supply the memory stack an entire generation of AI infrastructure depends on. You can argue about the multiple. It’s harder to argue with the supply ledger.
The same investors who complain that AI is overhyped will turn around and underprice the one physical component you literally cannot build AI without — and then act surprised when the trade works.