Intel Sapphire Rapids roadmap: production to start in 2022

Intel announced via a blog post this morning that Sapphire Rapids will enter production in the first quarter of 2022, with volume production beginning in the second quarter of 2022. That revised schedule replaces the previous Sapphire Rapids roadmap, which called for production to start at the end of 2021 and volume production in the first half of 2022.
This still means the chips will primarily face AMD’s EPYC Milan processors, but they will also contend with the 5nm Zen 4 EPYC Genoa chips that arrive later in 2022. Intel also revealed new details about the Advanced Matrix Extensions (AMX) and Data Streaming Accelerator (DSA) technologies that debut in Sapphire Rapids.
Intel’s release cadence for its data center products can be a bit tricky: the company typically begins shipping to its largest customers (hyperscalers such as Facebook and Amazon) soon after the chips enter production. General availability usually follows about six months later, marking the traditional official launch, when the chips and OEM systems become broadly available to the public.
| | Production / lead customers | Volume ramp / general availability |
| Original roadmap | Q4 2021 | First half of 2022 |
| Revised roadmap | Q1 2022 | Q2 2022 |
As a result, Intel’s new schedule represents a delay in initial production and final silicon availability for its largest customers (they already have samples), but the ramp, that is, the time between first shipments and general availability, is compressed. By shortening that ramp from six months to three, Intel keeps the official launch within the first half of 2022.
Lisa Spelman, corporate vice president and general manager of Intel’s Xeon and Memory Group, pointed to Sapphire Rapids’ breadth of new technology as the driver of the timeline adjustment. She said: “Given the enhanced breadth of Sapphire Rapids, we are incorporating additional validation time ahead of the production release, which will streamline the deployment process for our customers and partners.”
Sapphire Rapids represents Intel’s first chips built on its newest 10nm Enhanced SuperFin process, while its Ice Lake predecessors are manufactured on the (now older) 10nm+ node. Given that Intel’s transition to 10nm was notoriously difficult, public perception may jump first to potential problems with the process technology. That is possible, but unlikely: Sapphire Rapids silicon has already been sampled widely (and leaked), so pre-production/validation chips are in the hands of lead customers. Moreover, yield issues usually take far longer than three months to resolve, so a process technology problem seems improbable.
Spelman’s blog post cites “new enhancements” that require further validation, and it also discusses the Advanced Matrix Extensions (AMX) and Data Streaming Accelerator (DSA) technologies (more on those below), but it does not directly tie them to the pushed-out production schedule.
Sapphire Rapids brings a long list of new technologies, chief among them DDR5 and PCIe 5.0 connectivity upgrades over the previous generation. Sapphire Rapids will be Intel’s first server chip to support DDR5, requiring Intel and its hardware/software partners to carry out extensive platform-level validation. In addition, the PCIe 5.0 endpoint ecosystem is still in its infancy, which means it may be hard to fully validate the new, faster interface against real end-user devices.
Intel also has Sapphire Rapids models with HBM memory that will arrive a few months after the main launch. Those chips will be available to the general market, but given that they were already on a later schedule and likely aren’t part of the first production run, it doesn’t make sense that they would be behind the delay.
The cadence of Intel’s Sapphire Rapids launch has always been a bit unusual. Today, 10nm Ice Lake processors serve single- and dual-socket servers, while 14nm Cooper Lake processors cover four- and eight-socket servers. Sapphire Rapids, by contrast, is expected to reunify Intel’s data center product stack on its Eagle Stream platform, covering everything from one to eight sockets.
Intel’s Ice Lake was famously late, and its launch falls relatively close to Sapphire Rapids’. That isn’t ideal: it gives OEMs less time to recoup their investment in the platform, and it gives customers a shorter window to deploy Ice Lake servers before newer, faster models arrive. That could lead some customers to skip the Ice Lake/Cooper Lake generation entirely.
Intel sees it differently: DDR5 is still in its early days, and it’s expensive. From a total cost of ownership (TCO) perspective, the added cost of the new modules can be worth it, but some data center operators are more sensitive to capital expenditure and may balk at the higher upfront cost of DDR5. In addition, the ecosystem of PCIe 5.0 devices that would plug into Sapphire Rapids is extremely limited and will take time to build out. That, too, could lead customers to postpone a large-scale migration to Sapphire Rapids.
Intel’s follow-up, Granite Rapids, isn’t due until 2023 (probably late 2023). Intel therefore expects Ice Lake and Sapphire Rapids to coexist in the market, with the Ice Lake lineup serving more general-purpose applications and data centers turning to Sapphire Rapids servers for higher-performance workloads.
Intel Advanced Matrix Extensions (AMX) and Data Streaming Accelerator (DSA)
Intel’s blog post also covers its Advanced Matrix Extensions (AMX), which debut with Sapphire Rapids. These new ISA extensions add instructions that operate on new two-dimensional registers (called tiles) to accelerate the matrix multiplication operations at the heart of deep learning workloads. Spelman said AMX delivers twice the performance of current Ice Lake chips on training and inference workloads. Those results come from early silicon without advanced software tuning, so we can expect further gains in the future.
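To give a sense of how AMX is exposed to software, here is a minimal sketch of an INT8 tile multiply using the AMX intrinsics from immintrin.h. It assumes Sapphire Rapids-class hardware, a Linux 5.16+ kernel (which gates the tile state behind an arch_prctl permission request), and a recent GCC or Clang compiling with -mamx-tile -mamx-int8; the tile shapes and test data are purely illustrative.

```c
// Minimal AMX INT8 sketch: configure tiles, load data, multiply, store.
// Build (illustrative): gcc -O2 -mamx-tile -mamx-int8 amx_demo.c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA  18

// 64-byte tile configuration blob consumed by _tile_loadconfig().
struct tile_config {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   // bytes per row for each tile register
    uint8_t  rows[16];    // rows for each tile register
};

int main(void) {
    // Ask the kernel for permission to use the AMX tile state (Linux 5.16+).
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        perror("AMX not available");
        return 1;
    }

    // Configure three tiles: tmm0 = 16x16 int32 accumulator,
    // tmm1/tmm2 = 16 rows x 64 bytes of int8 inputs.
    struct tile_config cfg = { .palette_id = 1 };
    cfg.rows[0] = 16; cfg.colsb[0] = 16 * sizeof(int32_t);
    cfg.rows[1] = 16; cfg.colsb[1] = 64;
    cfg.rows[2] = 16; cfg.colsb[2] = 64;
    _tile_loadconfig(&cfg);

    static int8_t  a[16][64], b[16][64];
    static int32_t c[16][16];
    memset(a, 1, sizeof(a));   // simple test data: all ones
    memset(b, 2, sizeof(b));   // and all twos

    _tile_zero(0);                          // clear the accumulator tile
    _tile_loadd(1, a, 64);                  // load A (stride = 64 bytes/row)
    _tile_loadd(2, b, 64);                  // load B
    _tile_dpbssd(0, 1, 2);                  // C += A * B (int8 dot products -> int32)
    _tile_stored(0, c, 16 * sizeof(int32_t));
    _tile_release();                        // return tile state to init

    // Each output element is a dot product of 64 int8 pairs: 64 * 1 * 2 = 128.
    printf("c[0][0] = %d\n", c[0][0]);
    return 0;
}
```

Each _tile_dpbssd instruction accumulates groups of four int8 products per output element, so this single tile operation performs 16 x 16 x 64 multiply-adds while the results stay resident in the tile register file.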
Sapphire Rapids also marks the first appearance of Intel’s Data Streaming Accelerator (DSA), a built-in engine that optimizes streaming data movement and transformation operations by offloading them from the CPU cores. Think of it as similar to the technology we see in DPUs and Intel’s own IPU, but built into the processor. We’re told DSA naturally isn’t as capable as a standalone solution, but it significantly reduces the compute overhead associated with data movement and can be used on its own or alongside an IPU/DPU.
Wrap-up
Intel’s revised timetable doesn’t represent a huge change in its competitive position against its x86 rival: AMD has said we’ll see a mix of Milan and Genoa in 2022. That means Sapphire Rapids will still face AMD’s Zen 3 EPYC Milan chips for most of 2022, then square off against the 5nm Zen 4 EPYC Genoa parts toward the end of the year.
Intel has clearly hit a snag somewhere in its production schedule. Still, its new AMX and DSA technologies seem unlikely to be the culprit; Sapphire Rapids also brings challenging new platform features, such as PCIe 5.0 and DDR5, which look like a more probable source of validation delays.
Ongoing shortages of key materials have also strained the global chip supply. Intel weathered the start of the chip shortage better than many of its peers, an advantage of being an IDM, but it now faces the same substrate and packaging capacity and material constraints as many of its competitors. That said, Intel has a history of prioritizing its higher-margin Xeon chips over its consumer products, so the company could simply divert resources away from lower-end chips, making it hard to say whether, or how much, this plays a role.
The company also recently reorganized its data center business and parted ways with Navin Shenoy, the head of its DCG operations. Intel veteran Sandra Rivera now leads the new Datacenter and AI group, which is responsible for Xeon CPUs, FPGAs, and AI products, so it’s natural to assume that one of her first priorities is realigning the company’s Xeon roadmap onto more stable footing.
We’ll learn more about the chip soon: Spelman’s blog post says the company will share more technical details at Hot Chips 2021 in August and at the Intel Innovation event in October, and perhaps even earlier.