While Nvidia remains a key AI chip player, recent market activity highlights significant gains for Intel, AMD, and Micron, reflecting evolving hardware needs in AI data centers beyond GPUs.

  • Intel and AMD shares rose about 25%, while Micron jumped 37% amid persistent memory shortages.
  • Nvidia’s stock lags, reflecting a shift from GPU dominance to broader AI hardware components.
  • Data center CPU market projected to more than double by 2030, driven by AI agent workloads.

Market signal

Since the AI boom kicked off in 2022 with models like ChatGPT, Nvidia’s GPUs have dominated the compute landscape, driving its rapid revenue growth. However, recent market activity demonstrates a notable rotation as investors allocate capital towards CPU and memory vendors such as Intel, AMD, and Micron. This signals broadening confidence that AI infrastructure will require more than just GPUs for the next wave of innovation.

An unprecedented memory shortage and the resulting pricing surge have propelled Micron’s market value, reflecting a critical supply-demand imbalance that benefits established memory manufacturers. Meanwhile, AMD and Intel are seeing rising demand for server CPUs, which support complex AI workloads, particularly as the industry’s focus shifts from traditional machine learning models to AI agents. This change complements the broader expansion of the AI hardware ecosystem.


Operator impact

For technology operators and data center buyers, the diversification of AI hardware suppliers means more options and potentially new architectures to meet performance and capacity demands. The surge in memory prices and supply constraints at Micron highlight the need for early procurement planning and risk management in sourcing critical components.

Intel’s resurgence, driven in part by U.S. government investment and new partnerships such as one with Apple, suggests growing competition and capacity increases that could improve supply stability over time. AMD’s uplift, supported by strong data center CPU growth, points to an expanding CPU footprint in AI infrastructure, which operators should watch for performance and compatibility with their AI workloads.

What to watch next

Tracking the evolution of AI workloads will be crucial, especially how AI agents shift demand toward CPUs and memory relative to GPUs. Industry watchers should monitor capacity expansions or bottlenecks at major chip and memory manufacturers, as well as emerging vendors that could disrupt supply dynamics.

Additionally, developments in partnerships and production strategies—such as Intel’s collaboration with Apple and Samsung—may influence the competitive landscape and availability of key components. Operators should also stay alert to pricing trends in memory markets and advances in CPU architectures designed specifically for AI workloads to optimize infrastructure investments.

Source assisted: This briefing began from a discovered source item from CNBC Technology.
