At CES 2026, AMD Chairman and CEO Dr. Lisa Su delivered the keynote address, showcasing the company’s progress in artificial intelligence (AI) and its collaborations across various industries. The presentation highlighted how AMD’s product portfolio is driving meaningful advances in AI technology.
The keynote featured contributions from several partners, including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci, and Illumina. Each partner demonstrated how it is using AMD technology to drive breakthroughs in AI. Dr. Su emphasized, “At CES, our partners joined us to show what’s possible when the industry comes together to bring AI everywhere, for everyone.”
Dr. Su said that the rapid adoption of AI is leading to an era of “yotta-scale computing,” marked by notable growth in both training and inference capabilities. She outlined AMD’s role in providing the foundational computing infrastructure for this next phase of AI, including a commitment to open platforms and extensive collaborations with industry partners.
Blueprint for Yotta-Scale Compute
The keynote highlighted the need for advanced computing infrastructure to support the growing demand for AI. Current global compute capacity is around 100 zettaflops and is projected to exceed 10 yottaflops within the next five years. Achieving yotta-scale infrastructure will require not only performance but also an open, modular design that can adapt across generations.
AMD introduced the “Helios” rack-scale platform as a blueprint for this yotta-scale infrastructure, claiming it can deliver up to 3 AI exaflops per rack. The platform aims to maximize bandwidth and energy efficiency for large-scale AI training and is powered by AMD Instinct MI455X accelerators, AMD EPYC “Venice” CPUs, and AMD Pensando “Vulcano” NICs. Helios integrates with the open AMD ROCm™ software ecosystem.
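To put these figures in perspective, a rough back-of-envelope sketch can be made from the numbers quoted in the keynote alone. The calculation below (a hypothetical illustration, not an AMD projection beyond the quoted figures) derives the implied year-over-year growth rate from ~100 zettaflops today to 10 yottaflops in five years, and how many Helios-class racks at 3 AI exaflops each that capacity would nominally correspond to:

```python
# Back-of-envelope arithmetic using only the figures quoted in the keynote:
# ~100 zettaFLOPS of global compute today, >10 yottaFLOPS projected in five
# years, and up to 3 AI exaFLOPS per "Helios" rack. All derived numbers
# (annual growth rate, rack count) are illustrative, not AMD claims.

ZETTA = 10**21
YOTTA = 10**24
EXA = 10**18

current = 100 * ZETTA      # ~100 zettaFLOPS today
projected = 10 * YOTTA     # >10 yottaFLOPS within five years

growth_factor = projected / current       # 100x overall growth
annual_growth = growth_factor ** (1 / 5)  # ~2.5x compounded per year

racks = projected / (3 * EXA)             # Helios racks at 3 exaFLOPS each

print(f"overall growth: {growth_factor:.0f}x")          # 100x
print(f"implied annual growth: {annual_growth:.2f}x")   # ~2.51x
print(f"racks for 10 yottaFLOPS: {racks:,.0f}")         # ~3.3 million
```

In other words, the projection implies compute capacity roughly 2.5x-ing every year, and even at 3 exaflops per rack, yotta-scale compute would amount to millions of rack-scale systems, which is why the keynote stresses modular, open designs that can evolve across generations.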
AMD also offered a preview of the complete AMD Instinct MI400 Series accelerator product line, including the newly unveiled MI440X GPU, designed for enterprise AI deployments. This new GPU aims to facilitate scalable training, fine-tuning, and inference workloads in a compact eight-GPU form factor.
AI in Personal Computing
The company announced updates to its AI PC portfolio, anticipating that AI will become a fundamental aspect of the user experience for billions of PC users. The next-generation AMD Ryzen AI 400 Series and Ryzen AI PRO 400 Series platforms were introduced, promising a 60 TOPS NPU and enhanced efficiency with full ROCm platform support for cloud-to-client AI scaling. Initial systems are set to ship in January 2026.
AMD further expanded its on-device AI compute offerings with the Ryzen AI Max+ 392 and Ryzen AI Max+ 388, which support models with up to 128 billion parameters. These new platforms are intended to enhance local inference, content creation, and gaming experiences on portable and small-form-factor computing devices.
Embedded AI Solutions
AMD also unveiled the Ryzen AI Embedded processors, a new line of x86 embedded processors designed for AI applications in edge computing. These processors aim to deliver high performance across a range of applications, including automotive digital cockpits, smart healthcare, and autonomous systems.