Forward-looking: The AI world craves data, and that data needs to move much faster onto next-gen storage hardware. The PCI Special Interest Group is working on the PCIe 6.0 and 7.0 standards to ease the data throughput pain, though several issues still need to be ironed out before widespread deployment.
Last year, Micron teased its first PCIe 6 SSD, promising impressive bandwidth figures. More recently, the US memory manufacturer partnered with switch maker Astera Labs to show the new drives in action. The two companies revealed what a proper PCIe 6 architecture can offer during a DesignCon 2025 showcase. It was the first public demonstration of end-to-end interoperability between a PCIe 6.x switch and a PCIe 6.x SSD.
The DesignCon demo featured a Micron data center SSD connected to a Scorpio P-Series Fabric Switch developed by Astera. Together, the two new technologies surpassed a sequential read speed of 27 GB/s. In 2024, Micron claimed its PCIe 6 drive would offer up to 26 GB/s sequential reads, so the demo exceeded that claim.
Astera notes that the Scorpio P-Series Fabric Switch is the first designed specifically for PCIe 6 devices. The custom-built switch supports 64 PCIe 6.x lanes in a four-port architecture and handles GPU, CPU, SSD, and NIC data flows equally well. The PCIe 6 and PCIe 7 specifications aren't quite ready for prime time, as companies are still looking for ways to avoid speed throttling caused by the excessive heat the new bus technology generates.
Achieving more than 27 GB/s required a creative approach to connection management. Astera linked its switch to an unspecified CPU, an Nvidia H100 GPU, and two Micron PCIe 6.x E3.S SSDs. The demo used Nvidia's Magnum IO GPUDirect Storage technology to establish a direct data path between the GPU and the SSDs.
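For readers curious what such a direct GPU-to-storage path looks like from the software side, below is a minimal, untested sketch in C using Nvidia's cuFile API, the programming interface that underpins GPUDirect Storage. The file path and transfer size are placeholders, not details from the demo.

```c
/* Minimal, untested sketch of a GPUDirect Storage style read using Nvidia's
 * cuFile API: data moves from an NVMe SSD straight into GPU memory via DMA,
 * without a CPU bounce buffer. The file path and transfer size are
 * hypothetical. Build against libcufile and the CUDA runtime. */
#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const char *path = "/mnt/nvme/sample.bin";  /* placeholder file on a PCIe SSD */
    const size_t size = 1 << 20;                /* 1 MiB transfer for illustration */

    cuFileDriverOpen();                         /* initialize the GPUDirect Storage driver */

    int fd = open(path, O_RDONLY | O_DIRECT);   /* O_DIRECT bypasses the page cache */
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);      /* register the file with cuFile */

    void *gpu_buf = NULL;
    cudaMalloc(&gpu_buf, size);                 /* destination buffer in GPU memory */
    cuFileBufRegister(gpu_buf, size, 0);        /* pin the buffer for DMA */

    /* The read lands directly in GPU memory; the CPU never touches the payload. */
    ssize_t n = cuFileRead(handle, gpu_buf, size, 0 /* file offset */, 0 /* buffer offset */);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(gpu_buf);
    cuFileHandleDeregister(handle);
    cudaFree(gpu_buf);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```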
Astera Labs claims that PCIe 6.x is becoming essential for building better AI and cloud infrastructure. The new standard doubles the bandwidth of PCIe 5.0 devices, delivering up to 256 GB/s of bidirectional throughput per x16 link.
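As a rough sanity check on those figures, the short C program below does the raw link-rate arithmetic for x16 connections, using the published per-lane rates of 32 GT/s for PCIe 5.0 and 64 GT/s for PCIe 6.0 and ignoring encoding and protocol overhead.

```c
/* Back-of-the-envelope PCIe bandwidth arithmetic (raw link rate only;
 * encoding and protocol overhead are ignored, so real throughput is a bit lower). */
#include <stdio.h>

/* One-way bandwidth in GB/s: each transfer carries one bit per lane,
 * so divide the GT/s rate by 8 to get bytes and multiply by the lane count. */
static double one_way_gb_per_s(double gt_per_lane, int lanes) {
    return gt_per_lane / 8.0 * lanes;
}

int main(void) {
    const double pcie5 = 32.0;  /* PCIe 5.0: 32 GT/s per lane */
    const double pcie6 = 64.0;  /* PCIe 6.0: 64 GT/s per lane (PAM4 signaling) */

    printf("PCIe 5.0 x16: %.0f GB/s per direction, %.0f GB/s bidirectional\n",
           one_way_gb_per_s(pcie5, 16), 2 * one_way_gb_per_s(pcie5, 16));
    printf("PCIe 6.0 x16: %.0f GB/s per direction, %.0f GB/s bidirectional\n",
           one_way_gb_per_s(pcie6, 16), 2 * one_way_gb_per_s(pcie6, 16));
    /* Prints 64/128 for Gen5 and 128/256 for Gen6, matching the 256 GB/s
     * bidirectional figure cited above. */
    return 0;
}
```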
The low-latency, high-speed architecture should feed enough data to satisfy AI workloads. The demo seemingly proves that PCIe 6.x is now ready for data center deployment. However, buyers of traditional PC products shouldn't expect PCIe 6-compliant motherboards anytime soon.