Samsung, the world’s largest maker of advanced memory technology, has developed the industry’s first High Bandwidth Memory (HBM) integrated with artificial intelligence (AI) processing power. The new memory chip is named HBM-PIM, where PIM stands for ‘processing-in-memory’.
Most computing systems today are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of intricate data processing tasks. This sequential approach requires data to move back and forth constantly between the two units, creating a bottleneck that slows the system down, especially as data volumes grow.
HBM-PIM places a DRAM-optimized AI engine inside each memory bank (a storage subunit), bringing processing power directly to where the data is stored. This enables parallel processing across banks and minimizes data movement.
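The difference can be illustrated with a toy sketch. The code below is purely a conceptual model, not Samsung’s implementation: the bank layout, workload, and “transfer” accounting are invented for the example. It counts how many values must cross the memory bus when a central processor does all the work versus when each bank reduces its own data locally.

```python
# Toy model: von Neumann-style processing vs. processing-in-memory (PIM).
# Illustrative only -- the bank count, data, and transfer accounting are
# hypothetical and do not model real HBM-PIM hardware.

def von_neumann_sum(banks):
    """Central processor sums everything: every word crosses the bus."""
    transfers = 0
    total = 0
    for bank in banks:
        for word in bank:
            transfers += 1      # fetch one word from memory to the CPU
            total += word
    transfers += 1              # write the final result back to memory
    return total, transfers

def pim_sum(banks):
    """Each bank reduces its own data locally; only per-bank results move."""
    partials = [sum(bank) for bank in banks]  # computed inside each bank, in parallel
    transfers = len(banks)                    # one partial result per bank crosses the bus
    return sum(partials), transfers

banks = [[1, 2, 3, 4]] * 8   # 8 hypothetical banks of 4 words each
vn_total, vn_moves = von_neumann_sum(banks)
pim_total, pim_moves = pim_sum(banks)
assert vn_total == pim_total      # same answer either way
print(vn_moves, pim_moves)        # 33 bus transfers vs. 8
```

The point of the sketch is only that the arithmetic result is identical, while the in-memory version moves one value per bank instead of every word, which is the data-movement saving PIM architectures aim for.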
When applied to Samsung’s existing HBM2 Aquabolt solution, the new architecture can deliver twice the system performance while reducing energy consumption by more than 70%. Because HBM-PIM requires no hardware or software changes, it can also be integrated into existing systems quickly.
“Our groundbreaking HBM-PIM is the industry’s first programmable PIM solution tailored for diverse AI-driven workloads such as HPC, training, and inference. We plan to build upon this breakthrough by further collaborating with AI solution providers for even more advanced PIM-powered applications,” said the SVP of Memory Product Planning at Samsung Electronics.
Samsung’s paper on the HBM-PIM has been selected for presentation at the renowned International Solid-State Circuits Conference (ISSCC), held virtually through Feb. 22. HBM-PIM is now being tested inside AI accelerators by leading AI solution partners, with all validations expected to be completed within the first half of this year.