Nvidia's Artificial Intelligence (AI) Chips Still Need Memory. Here's Why the Micron Sell-Off Has...

Alphabet's recent unveiling of TurboQuant, a software product designed to compress the memory footprint of large language models during inference, triggered a sharp sell-off in Micron Technology shares, since Micron's AI business depends heavily on Nvidia's AI chips, which use its high-bandwidth memory (HBM). The panic-selling looks premature, however: Nvidia's AI GPUs fundamentally require large amounts of specialized memory, such as Micron's HBM and DRAM, to handle the vast data volumes of modern AI models. TurboQuant trims working memory during inference, but it does not shrink overall model size or reduce the need for rapid data transfer, and it may even enable more intensive workloads. Nvidia's latest chip architectures are built around larger HBM stacks, underscoring the critical role of memory bandwidth, and replacing a long-term, deeply integrated supplier like Micron would be a complex and lengthy process.