The supply of memory chips has come under strain as AI companies buy up far more capacity than phone and PC makers. DRAM and NAND sit in laptops and handsets, but the same technology feeds large language models at a far greater scale. IDC said in December that supply growth for DRAM and NAND in 2026 would fall below past norms.
An IDC analyst wrote, “The memory market is at an unprecedented inflexion point, with demand materially outpacing supply.” The note added, “For an industry that has long been characterised by boom and bust cycles, this time is different. The rapid expansion of AI infrastructure and workloads is exerting significant pressure on the memory ecosystem.”
Samsung, SK Hynix and Micron control most output for consumer devices and also make high bandwidth memory for AI systems. AI buyers place huge orders. OpenAI has committed about $1.4 trillion to data centre projects over eight years. Meta told investors that AI infrastructure spending for 2025 would reach $70 billion to $72 billion. Google expects capital spending of $91 billion to $93 billion this year. IDC said hyperscalers buy far more chips per order, leaving fewer units for devices.
What Does This Mean For Phones And PCs?
IDC expects sales pressure for device makers as memory costs take a larger share of build prices. For a mid-range device, memory already accounts for 15% to 20% of total cost, IDC wrote. Tighter supply means higher bills for brands or higher prices for buyers.
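The pass-through arithmetic here is simple enough to sketch. A minimal illustration in Python, using IDC's 15% to 20% memory share; the $400 phone and the 50% memory price jump are hypothetical numbers chosen for the example:

```python
# Illustrative sketch (assumed numbers): how a memory price rise feeds
# through to a device's total build cost when memory is 15-20% of the
# bill of materials, per IDC's figure.

def new_build_cost(bom: float, memory_share: float, memory_rise: float) -> float:
    """Return the build cost after the memory portion rises by memory_rise."""
    memory_cost = bom * memory_share
    return bom - memory_cost + memory_cost * (1 + memory_rise)

# A hypothetical $400 mid-range phone with memory at 18% of BOM,
# facing a 50% jump in memory prices: the build cost climbs to
# about $436, a 9% increase the brand must absorb or pass on.
cost = new_build_cost(400.0, 0.18, 0.50)
print(f"${cost:.2f}")
```

The point of the sketch is that even a large memory price swing is diluted by memory's share of the BOM, which is why forecasts centre on single-digit retail effects rather than 50% jumps.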
The research group cut its 2026 outlook. In November it forecast a 0.9% drop in global smartphone sales; in December it outlined a 2.9% drop under a moderate scenario and a 5.2% drop under a pessimistic one. The PC forecast saw a similar downgrade, from a 2.4% drop to between 4.9% and 8.9%.
Price changes have started. Dell told staff in an internal email seen by Business Insider, “Global memory and storage supply are tightening fast.” The company planned increases of between $55 and $765 for high-end memory. Asus also announced higher prices, citing DRAM costs. Framework lifted prices for DDR5 modules and referred to “a period of extreme memory shortages and price volatility.”
Why Are GPUs Getting Pricier As Well?
China Times reported that Taiwanese card makers have adjusted pricing for AMD and Nvidia RTX 50 series products. MSI has already announced a second revision, and Asus and Gigabyte may follow.
GDDR6 and GDDR7 VRAM prices rose, leading AMD and Nvidia to lift wholesale pricing by 10% to 15% depending on the model. Retail partners then decide how much of that reaches shoppers, so changes often appear quietly through listings and stock mix.
Prices for AMD RX 9000 cards rose by 10% to 18% in Europe and China, while Nvidia RTX 50 series cards with 16GB of VRAM saw increases of 15% to 20%. Nvidia has leaned toward 8GB models such as the RTX 5060 and 5060 Ti, which cost less to build.
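The wholesale-to-retail step described above can be sketched with assumed numbers; the vendor's increase compounds with whatever margin the retail partner applies on top:

```python
# Hedged sketch (all figures hypothetical): a wholesale increase does not
# map one-to-one to the shelf price, because retailers apply their own
# margin on top of the new wholesale cost.

def retail_price(wholesale: float, wholesale_rise: float, retail_margin: float) -> float:
    """Shelf price after the wholesale cost rises, with a fixed retail margin."""
    return wholesale * (1 + wholesale_rise) * (1 + retail_margin)

# Hypothetical card: $500 wholesale, a 12% vendor increase, 10% retail
# margin. The shelf price moves from about $550 to about $616, so the
# buyer sees the same 12% rise, but in absolute dollars on a bigger base.
before = retail_price(500.0, 0.0, 0.10)
after = retail_price(500.0, 0.12, 0.10)
print(before, after)
```

This is why identical wholesale increases show up unevenly at retail: the retailer's margin choice, not the vendor's list price alone, sets what shoppers pay.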
Sassine Ghazi, chief executive of Synopsys, said, “The memory shortage will continue until 2026 and 2027.” He added, “Most memory produced by major companies is being directly channeled into AI infrastructure.” Lenovo finance chief Winston Chung said, “Memory prices will rise due to high demand and insufficient supply.” TrendForce expects memory revenue of $842.7 billion in 2027, up 53%, after DRAM prices jumped 53% to 58% late last year and are set to rise more than 60% early this year.
How Will The 2026 Global Memory Shortage And GPU Rise Impact Industries?
Experts share their reactions to the global DRAM shortage, explaining how GPUs are involved, the impact across industries, and what is causing the crunch…
Our Experts:
- Michael Wu, GM and President, Phison Technology Inc. (USA)
- Joyce Odette Reyes, Business Intelligence, Technology & Growth Expert, Strategic Growth XP
- Jon Bikoff, Chief Business Officer, Personal AI
- Val Cook, Chief Software Architect, Blaize, Inc
- Scott Dylan, Founder, NexaTech Ventures
Michael Wu, GM and President, Phison Technology Inc. (USA)
“AI has fundamentally changed how much memory the world needs and how quickly it is consumed. When the AI wave turned into a boom, the focus was on GPUs for AI training. Now, organisations are focused on driving revenue through inference on that trained data, which has shifted focus to storage due to the amount of data generated. That shift is now driving sustained demand for high-capacity NAND flash and DRAM across data centres, edge devices and client systems to store critical data. At the same time, storage architectures are moving rapidly away from HDDs to SSDs, and SSDs are becoming the primary tier, further increasing the number of NAND bits required.
“On the supply side, adding meaningful capacity is not a quick exercise, particularly in the current memory supercycle. Advanced 3D NAND and DRAM nodes require large, long-term capital investments, and new fabs take years to plan, build and ramp. For data center operators and OEMs, that means memory and storage can no longer be treated as just-in-time commodities. We see customers locking in longer-term supply arrangements, qualifying multiple sources and designing platforms with more flexibility in SSD capacities and configurations. My advice to enterprises is to bring memory and storage into AI road maps early and treat NAND and DRAM as strategic resources that directly affect the performance and cost of AI services.”
Joyce Odette Reyes, Business Intelligence, Technology & Growth Expert, Strategic Growth XP
“In 2026 and beyond, the global memory shortage is being driven by explosive demand for GPUs used in AI, cloud computing and advanced analytics. High-performance GPUs require large volumes of specialised DRAM and HBM memory. Unfortunately, supply is currently not scaling fast enough to support the hardware and systems needed to meet this processing demand.
“The industries most impacted will be AI, data centers, automotive and consumer electronics. Companies building or deploying AI models will face higher costs, longer deployment timelines, and tighter competition for hardware. Smaller companies may be priced out entirely or constrained in their ability to scale. We’re already seeing that in the rising costs of game consoles, even.
“The root causes include underinvestment during prior market downturns, long semiconductor development cycles, manufacturing complexity, energy and resource constraints, as well as a rapid shift toward memory-intensive workloads. This shortage will likely accelerate industry consolidation, push companies to optimise software efficiency and increase investment in alternative architectures, better sustainability and edge computing.”
Jon Bikoff, Chief Business Officer, Personal AI
“For most industries, the RAM market is downstream and invisible. We’ve all become desensitised to running inefficient workloads: context-heavy prompts sent to LLMs, massive uncompressed files moving between people, services, and servers. It’s like everyone driving SUVs and trying to use the HOV lane: nothing moves!
“We can’t control the semiconductor supply chain, which ultimately follows supply and demand, but we can change our behavior. Smart executive teams and engineers certainly will. The current memory crunch will reward companies that downsize workloads, compress context, and run more efficient, distributed systems on smaller models. Instead of SUVs, we need sports cars that are lighter, faster, and designed to move efficiently across all lanes.”
Val Cook, Chief Software Architect, Blaize, Inc
“The fact that memory chip providers are narrowing their product focus, which may result in shortages in some sectors, signals both the insatiable demand for AI solutions and the urgent need for a more practical approach to providing those solutions. At Blaize, we believe the optimal approach lies in a hybrid architecture in which heterogeneous devices work cooperatively, applying the appropriate level of compute and requisite memory bandwidth to each processing domain, enhancing overall efficiency of the solution. This approach not only meets the current demands of AI processing but also paves the way for more sustainable and scalable technological advancements in the future.”
Scott Dylan, Founder, NexaTech Ventures
“The 2026 memory shortage isn’t cyclical — it’s structural. We’re witnessing a zero-sum reallocation of global wafer capacity, and AI is winning that battle decisively.
The GPU Connection
“High Bandwidth Memory (HBM) sits at the heart of this crisis. Every AI accelerator — from Nvidia’s Rubin platform to enterprise data centre GPUs — requires HBM stacks that consume roughly three times the wafer capacity of standard DDR5 per gigabyte. When hyperscalers like Microsoft, Google, and Meta are placing open-ended orders for AI infrastructure, and OpenAI’s Stargate project alone absorbs 40% of global DRAM output, there’s simply no capacity left for consumer markets. Samsung, SK Hynix, and Micron control 95% of production, and they’ve all told investors the same thing: HBM capacity is sold out through 2026.
Industry Impact
“The ripple effects are brutal and asymmetric. PC manufacturers are facing 15-20% price increases, with memory now representing 15-18% of production costs versus single digits historically. The smartphone market is particularly vulnerable — Android manufacturers are reversing a decade of spec democratisation because memory constitutes 15-20% of a mid-range device’s bill of materials. But the real concern is how this cascades beyond tech. Automotive production faces potential disruptions by 2028 as legacy DRAM lines shut down. Even household electronics — televisions, smart appliances — could become prohibitively expensive when margins are already razor-thin.
Root Causes
“This isn’t about pandemic-style supply chain disruption. Manufacturers deliberately reallocated production capacity towards high-margin enterprise products. The numbers tell the story: DRAM prices surged 171% year-on-year, whilst SK Hynix overtook Samsung in revenue for the first time since 1992, purely on the strength of HBM sales. The industry spent decades optimising for smartphone and PC memory; now data centres will consume 70% of global memory production in 2026. That inversion won’t reverse quickly — new fabrication plants take years to bring online, and current estimates suggest the shortage persists well into 2027 or 2028.
“What troubles me most is the strategic dimension. Three companies control nearly all global DRAM production, creating systemic vulnerability when priorities shift. We’re also seeing geopolitical fragmentation compound the problem, with export controls and friend-shoring accelerating supply chain uncertainty. The era of cheap, abundant memory has ended, and industries that haven’t secured long-term supply agreements — like Apple reportedly did through Q1 2026 — will face both cost pressures and availability constraints simultaneously.
“This isn’t just an IT procurement challenge; it’s a fundamental restructuring of the semiconductor value chain, with AI infrastructure permanently displacing consumer electronics as the primary demand driver.”
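The wafer arithmetic behind the HBM claim in the commentary above, that HBM consumes roughly three times the wafer capacity of standard DDR5 per gigabyte, can be sketched as a zero-sum allocation. All figures other than the three-to-one ratio are hypothetical units chosen for illustration:

```python
# Back-of-envelope sketch (assumed units): if HBM takes ~3x the wafer
# area per gigabyte of standard DDR5, every gigabyte shifted to HBM
# removes three gigabytes of potential consumer DRAM from the same wafers.

WAFER_GB_DDR5 = 3.0    # hypothetical: GB of DDR5 yielded per unit of wafer capacity
HBM_AREA_FACTOR = 3.0  # HBM needs ~3x the wafer capacity per GB (per the commentary)

def consumer_gb_remaining(total_wafer_units: float, hbm_gb: float) -> float:
    """DDR5 gigabytes left after reserving wafer capacity for hbm_gb of HBM."""
    hbm_wafer_units = hbm_gb * HBM_AREA_FACTOR / WAFER_GB_DDR5
    return (total_wafer_units - hbm_wafer_units) * WAFER_GB_DDR5

# 100 wafer units could yield 300 GB of DDR5. Diverting enough capacity
# to make 50 GB of HBM leaves only 150 GB for consumer parts, not 250:
print(consumer_gb_remaining(100.0, 50.0))  # -> 150.0
```

Under these assumptions the trade-off is steeply asymmetric, which is the mechanism behind the "zero-sum reallocation" framing: AI demand does not merely compete gigabyte for gigabyte with consumer demand, it crowds it out at roughly a three-to-one rate.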