At this year’s CES, AMD CEO Dr. Lisa Su gave the pre-show keynote, and she came with an abundance of announcements and guests. The theme of the keynote was high-performance and adaptive computing to help solve problems. On-stage guests included representatives from HP, Intuitive Surgical, Lenovo, Magic Leap, and Microsoft, plus a former NASA astronaut as a bonus.
Dr. Su made seven key product announcements, spanning power-efficient PCs to extreme processors designed for supercomputers. With all the announcements and guests, it’s no surprise the keynote ran considerably longer than planned. It seemed that every group in AMD got a piece of the keynote.
A PC Update
Despite the current slump in PC sales, most of AMD’s announcements centered on PCs – both desktops and notebooks. Of particular interest was AMD’s decision to add a dedicated AI core to its new Ryzen 7040 series notebook processors. The AMD Ryzen 7040 is designed to fit into ultrathin notebooks with 15W to 45W TDPs (thermal design power) and is made using TSMC’s advanced 4nm process. The Ryzen 7040 pairs AMD’s most advanced Zen 4 CPU cores with the latest RDNA 3 GPU architecture.
The Ryzen 7040’s embedded AI technology was developed by Xilinx, which AMD acquired last year. Even prior to the acquisition, AMD had engaged with Xilinx to license the technology. Xilinx developed the AI engine for its Versal ACAP adaptive computing product. The details of the Ryzen AI engine, as AMD is now calling it, are still to be revealed, but Versal AI Engines are 2D arrays of VLIW vector-vector and matrix-matrix compute engines. While AI workloads can run on AMD’s CPUs and integrated GPUs, AMD claims that running these workloads on a dedicated AI engine is more power efficient, which is why the company has launched it first in a notebook processor. Microsoft’s EVP & Chief Product Officer Panos Panay gave a strong endorsement of Ryzen AI and described how Microsoft plans to use it, initially for Teams and later for other features. We encourage AMD to provide access to Ryzen AI through open tool chains and APIs in addition to Microsoft’s Studio tools.
It’s great to see AMD fully embracing the pervasiveness of AI in computing. AMD has GPUs and FPGAs/ACAPs for AI acceleration, and is now incorporating AI technology into mainstream notebook processors. Intel will also incorporate an integrated AI accelerator in its Meteor Lake processors later this year.
AMD also brought its RDNA 3 architecture to mobile discrete GPUs by announcing the Radeon 7000M series. These new mobile GPUs can power 1080p gaming at the highest game settings and support graphically intensive content creation applications. When AMD Radeon RX 7000M series graphics are combined with Ryzen 7000 series processors, AMD SmartShift RSR technology intelligently distributes image rendering, upscaling, and presentation demands between the APU and GPU resources to optimize performance. This capability is expected to be available in the first half of 2023.
Desktops are also getting a processor upgrade. The new Ryzen 7000 series X3D desktop processors are equipped with up to 144MB of cache and sport up to 16 cores and 32 threads. The new processors are a bit unusual in that only one of the two CPU cluster chiplets in the package gets the stacked memory, giving the CPU an asymmetric bias. For example, the Ryzen 9 7950X3D has one 8-core CPU chiplet without the 3D cache that can boost to 5.7GHz, while the other 8-core chiplet carries the stacked 3D cache, giving it more cache memory but a lower boost clock. It will be interesting to see how well the OS thread scheduler and application developers handle such an asymmetric configuration.
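As a rough sketch of what handling that asymmetry could look like, a latency-sensitive application on Linux could pin itself to the cores of whichever chiplet suits its workload. Everything here is illustrative, not AMD’s method: the core IDs are an assumption, and real code would have to discover the actual mapping of cores to the V-Cache chiplet (for example, from the cache topology under `/sys/devices/system/cpu`) before pinning.

```python
import os

# Assumption for illustration only: cores 0-7 belong to the CCD with the
# stacked 3D V-Cache. The real core-to-chiplet mapping varies by system
# and must be discovered at runtime before pinning.
ASSUMED_VCACHE_CORES = set(range(8))

def pin_to_vcache_ccd(pid: int = 0) -> set:
    """Restrict a process (0 = current) to the assumed V-Cache cores.

    Intersects the assumed core set with the cores actually available to
    the process, so the sketch degrades gracefully on smaller machines.
    Returns the resulting affinity set.
    """
    available = os.sched_getaffinity(pid)
    target = ASSUMED_VCACHE_CORES & available
    if target:
        os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    print(pin_to_vcache_ccd())
```

In practice, AMD and Microsoft address this with driver and scheduler support rather than per-application pinning, so a sketch like this mainly shows why the asymmetry matters: a cache-hungry thread and a clock-hungry thread want different chiplets.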
The other PC-related announcements include a high-power Ryzen 7045HX notebook processor designed for larger laptops, aimed at gamers and creative professionals. This processor uses the chiplet design from the desktop part to deliver as many as 16 cores.
There was also a low-key announcement of additional AMD Ryzen 7000 series desktop processors coming soon. Built on the Zen 4 architecture and featuring a 65W TDP, this new lineup of Ryzen 7000 series processors is optimized for both efficiency and performance. Some cost-optimized AM5 motherboards are coming before spring as well.
AI in the Data Center
There were three announcements on how AMD is working to advance AI for data center applications: the AMD Vitis Medical Imaging libraries, the Alveo V70 AI accelerator, and the Instinct MI300 high-performance computing accelerator.
AMD introduced its Vitis Medical Imaging libraries to bring premium medical imaging products to market faster by reducing development times. These software libraries run on AMD Versal SoC devices with AI Engines to deliver high-quality, low-latency imaging for medical applications.
AMD also showcased its XDNA adaptive AI architecture in a discrete PCIe add-in card, the Alveo V70. Based on AMD XDNA with the AI Engine architecture, the Alveo V70 extends pervasive AI from edge to cloud.
Xilinx FPGAs are currently used in the Perseverance Mars rover, so AMD hosted an interview with former NASA astronaut Dr. Cady Coleman. Dr. Coleman was promoting NASA’s agenda for human space travel to the Moon and eventually Mars with the Artemis program. But the broader message was about promoting STEM education and inspiring more people, including women, to pursue a career in science.
Dr. Su ended her keynote with a preview of the company’s next monster GPU for high-performance computing – the MI300 – which consists of 13 chiplets and uses both 2.5D and 3D packaging techniques. It has a total of 146 billion transistors. The MI300 offers not just GPU cores; it also includes three Zen 4 CPU chiplets in the package. The goal of packaging the CPU cores in the GPU cluster is to run tasks closer to the GPU and eliminate transaction latency with the main host CPU. The CPU/GPU cluster is supported by 128GB of HBM (High Bandwidth Memory) local memory in the package. This design is similar to Nvidia’s Grace Hopper HPC processor, which combines Grace’s Arm CPUs in the same package with the Hopper GPUs. The MI300 should ship in 2H23.
The numerous introductions during Dr. Su’s keynote showed that AMD continues to deliver leadership products in PCs and high-performance computing. Despite the recent technology market downturn, AMD continues to invest in new, cutting-edge products and is accelerating its AI roadmap.